One of the main reasons for integrating machine translation (MT) into localization workflows is that it saves money. And time. This time, let’s focus on the money, specifically on cost savings.
Common sense dictates that the discount should depend not only on the language pair, the content domain, the text type, and the quality of the MT output, but also on the actual effort the linguist puts into the process. If light post-editing is all that is required (the linguist tweaks the raw MT aiming for so-called “good enough” quality), then larger discounts of up to 35–40 percent off the regular word rate make sense. But if a fully polished target is expected, with no trace of MT (i.e., full post-editing), then processing the MT output can sometimes take even more effort than translating from scratch. So how should this effort be measured?
The ideal situation would be to have the actual time tracked per segment, but most translation management systems (TMS) don’t offer this. Furthermore, the edit-based measuring approach “creates a conflict of interest (COI) for the post-editor to do more edits, while the goal is to avoid changing what is good enough, and it reaches up to 80% with Custom NMT models”, Konstantin confirms.
However, if such measuring happens in-house, on either the client or the vendor side, the COI may not arise. For instance, a linguist may post-edit three versions of the same text, using short samples from three different MT engines, performing a human evaluation of the MT output in the process. Afterwards, the decision on the best engine to implement for further post-editing projects is made together with the client.
In such a scenario, this whole “MT story” is a different type of service for the end customer. Filipp Vein from LocalTrans explains it further: “Internal MT output evaluation is based on both automatic scores such as BLEU and human evaluation.” The editor has no major conflict of interest here. Rather, they are more interested in picking the best MT engine, since they will have to work with it once the light post-editing project kicks off. Thus, the post-editor naturally picks the engine that produces better results faster and saves the linguist effort.
But the idea of saving effort and thus increasing productivity is sometimes misinterpreted. For example, some language service providers (LSPs) announce per-hour rates in their MTPE offering, with the number of hours calculated by simply dividing the project size by the average translator productivity…
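To see what that pricing method actually implies, here is a minimal sketch of the arithmetic. All numbers (word count, assumed productivity, hourly rate) are hypothetical, chosen only for illustration:

```python
# Hypothetical per-hour MTPE quote as described above: billed hours are
# derived by dividing the project word count by an assumed average
# post-editing productivity, regardless of the effort actually spent.

project_words = 10_000
assumed_productivity = 800   # post-edited words per hour (an assumption)
hourly_rate = 30.0           # currency units per hour (an assumption)

billed_hours = project_words / assumed_productivity
cost = billed_hours * hourly_rate

print(billed_hours)  # 12.5
print(cost)          # 375.0

# Note that this collapses into a fixed per-word rate in disguise:
# 30.0 / 800 = 0.0375 per word, no matter how hard a given text is.
```

In other words, the “hourly” rate here carries no information about real effort; it is just a per-word rate repackaged.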
A natural alternative to the hourly rate is a per-word rate based on the source text. A TAUS survey indicated that this is the most common approach.
In multi-language vendor companies (MLVs), the actual PE step is typically done in a CAT tool with an MT engine connected. Where a linguist would previously see an empty No match segment alongside the TM matches, they now see MT output that they need to post-edit. Here’s an example of a fuzzy grid with MT applied in place of No match:
| Words | Fuzzy “Discount” | MT “Discount” | Payable count |
| --- | --- | --- | --- |
| Saving on MT (in words that are not paid) | | | -660 |
In this example, only the new words (0–49 percent matches) are MT-discounted. This is quite often the case when LSPs send the PE work to linguists or outsource it to other LSPs.
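The grid above can be sketched as a small calculation. The fuzzy bands, rate fractions, and word counts below are invented for illustration (they are not the actual figures behind the example grid), but the counts are chosen so that the MT saving comes out to the 660 words shown above:

```python
# Hypothetical fuzzy grid: each band is paid a fraction of the full word
# rate, and an MT discount is applied only to the "No match" band
# (0-49% matches), as described in the text.

FUZZY_RATE_FRACTION = {        # fraction of the full rate that is paid
    "101%/Repetitions": 0.25,
    "85-99%": 0.60,
    "50-84%": 1.00,
    "0-49% (No match)": 1.00,
}
MT_DISCOUNT = 0.30             # 30% off, No match band only (assumed)

def payable_words(counts: dict) -> float:
    """Weighted payable word count with the MT discount applied."""
    total = 0.0
    for band, words in counts.items():
        fraction = FUZZY_RATE_FRACTION[band]
        if band == "0-49% (No match)":
            fraction *= 1 - MT_DISCOUNT
        total += words * fraction
    return total

counts = {
    "101%/Repetitions": 500,
    "85-99%": 800,
    "50-84%": 400,
    "0-49% (No match)": 2200,
}

without_mt = sum(w * FUZZY_RATE_FRACTION[b] for b, w in counts.items())
with_mt = payable_words(counts)
saving = without_mt - with_mt

print(saving)  # 660.0 words that are not paid thanks to MT
```

The saving is simply the No match word count times the MT discount (2200 × 0.30 = 660 in this sketch); the other bands are unaffected.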