One of the main reasons for integrating machine translation (MT) into localization workflows is that it saves money. And time. This time, let’s focus on money, and in particular on cost savings.
Common sense dictates that the MT discount should depend not only on the language pair, the content domain, the text type, or the quality of the MT output, but also on the actual effort put into the process. If light post-editing is all that is required (the linguist tweaks the raw MT aiming for so-called “good enough” quality), then larger discounts of up to 35–40 percent off the regular per-word rate make sense. But if a fully polished target is expected, with no trace of MT (i.e., full post-editing), then it sometimes takes even more effort to process the MT output than to translate from scratch. So, how should this effort be measured?
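As a back-of-the-envelope illustration, the effect of such discounts on the per-word rate is simple arithmetic. Here is a minimal Python sketch; the base rate and discount figures are hypothetical examples, not quoted industry rates:

```python
# Hypothetical illustration of MTPE discounts on a per-word rate.
# All figures below are assumptions for the sake of the example.

BASE_RATE = 0.10  # assumed regular per-word rate, in USD

def discounted_rate(base_rate: float, discount_pct: float) -> float:
    """Per-word rate after an MTPE discount, e.g. 35 means 35% off."""
    return base_rate * (1 - discount_pct / 100)

# Light post-editing with a steep (assumed) 35% discount:
light_pe = discounted_rate(BASE_RATE, 35)
# Full post-editing, where a much smaller (assumed) 10% discount
# may be fairer, given the effort involved:
full_pe = discounted_rate(BASE_RATE, 10)

print(f"Light PE rate: ${light_pe:.3f}/word")
print(f"Full PE rate:  ${full_pe:.3f}/word")
```

The point of the sketch is only that the discount percentage, not the MT itself, decides the saving; whether that percentage matches the actual effort is the open question above.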
The ideal approach would be to track the actual time spent per segment, but most TMSs don’t provide that. Furthermore, the edit-based measuring approach “creates a conflict of interest (COI) for the post-editor to do more edits, while the goal is to avoid changing what is good enough, and it reaches up to 80% with Custom NMT models”, Konstantin confirms.
However, if such measurement happens in-house, on either the client or the vendor side, the COI may not arise. For instance, a linguist may post-edit three versions of the same text, using short samples from three different MT engines, performing a human evaluation of the MT output in the process. Afterwards, the vendor and the client together decide which engine to implement for further post-editing projects.
In such a scenario, this whole “MT story” becomes a different type of service for the end customer. Filipp Vein from LocalTrans explains it further: “Internal MT output evaluation is based on both automatic scores such as BLEU and human evaluation.” The editor has no major conflict of interest here. Rather, they are motivated to pick the best MT engine, since they will be working with its output once the light post-editing project kicks off. Thus, the post-editor naturally favors the engine that produces better results faster and saves the linguist effort.
But the idea of saving effort and thus increasing productivity is sometimes misinterpreted. For example, some language service providers (LSPs) advertise per-hour rates in their MTPE offerings, with the number of hours calculated by simply dividing the project size by the average translator productivity…
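That naive quote can be sketched in a few lines. To be clear, this illustrates the flawed calculation, not a recommended pricing model; the word count, productivity figure, and hourly rate are all assumptions:

```python
# Sketch of the naive per-hour MTPE quote: hours = project size / average
# translator productivity. All figures are illustrative assumptions.

def naive_mtpe_hours(project_words: int, words_per_hour: int) -> float:
    """Billed hours if you simply divide project size by average productivity."""
    return project_words / words_per_hour

def naive_mtpe_cost(project_words: int, words_per_hour: int,
                    hourly_rate: float) -> float:
    """Total quote under the naive per-hour model."""
    return naive_mtpe_hours(project_words, words_per_hour) * hourly_rate

# e.g. a 10,000-word project at an assumed 1,000 post-edited words/hour
# and an assumed $30/hour rate:
hours = naive_mtpe_hours(10_000, 1_000)
cost = naive_mtpe_cost(10_000, 1_000, 30.0)
print(f"{hours:.1f} hours, ${cost:.2f}")
```

The problem, of course, is that a flat average productivity figure hides exactly the segment-level variance in post-editing effort that the preceding discussion is about.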
At multi-language vendors (MLVs), the actual PE step is typically done in a CAT tool with an MT engine connected. Instead of No match, the MT output appears: in addition to the usual TM matches, wherever a linguist would previously have had an empty segment, they now see MT output to post-edit. As a result, here’s an example of a fuzzy grid with MT instead of No match:
| Words | Fuzzy “Discount” | MT “Discount” | Payable count |
|-------|------------------|---------------|---------------|
| Saving on MT (in words that are not paid) | | | -660 |
In this example, only the new words (0–49 percent matches) are MT-discounted. This is quite often the case when LSPs send PE work to linguists or outsource it to other LSPs.
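The payable-count arithmetic behind such a grid can be sketched as follows. The band weights and word counts here are hypothetical (they do not reproduce the figures in the table above); real grids vary by LSP and agreement:

```python
# Sketch of a fuzzy grid where MT replaces "No match" and only new words
# (0-49% match) receive an MT discount. Weights and counts are assumptions.

FUZZY_WEIGHTS = {                 # fraction of the full rate payable per band
    "101%/repetitions": 0.10,
    "100%": 0.20,
    "85-99%": 0.50,
    "50-84%": 1.00,
    "new (0-49%) + MT": 0.60,     # MT discount applied instead of full rate
}

def payable_words(counts: dict) -> float:
    """Weighted (payable) word count, given raw word counts per band."""
    return sum(counts[band] * FUZZY_WEIGHTS[band] for band in counts)

counts = {                        # raw word counts for an example project
    "101%/repetitions": 500,
    "100%": 800,
    "85-99%": 1200,
    "50-84%": 1500,
    "new (0-49%) + MT": 2000,
}

with_mt = payable_words(counts)
# Without MT, new words would be paid at the full rate (weight 1.0):
without_mt = with_mt + counts["new (0-49%) + MT"] * (1.0 - FUZZY_WEIGHTS["new (0-49%) + MT"])
print(f"Payable with MT discount: {with_mt:.0f} words")
print(f"Saving on MT (words not paid): {with_mt - without_mt:.0f}")
```

The only change MT makes to this grid is the weight on the new-words band, which is exactly why the resulting "saving" depends entirely on the discount an LSP negotiates for that band.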