In December 2020, Nimdzi was given an opportunity to test a brand-new product: Spotlight. Developed by Intento to support machine translation (MT) curation, it enables quick analysis of MT training results. The product is intended mainly for those who train custom MT models and thus regularly face the task of evaluating MT quality.
The usual evaluation methods involve random sampling and costly human review (which runs the risk of producing different results for the same samples), and that review oftentimes happens after the trained model is already in production (alas!). There is also usually no easy way to tell whether a model can be improved further, or to find examples of improved and degraded segments of the text. All of this can make the MT trainers' and evaluators' job onerous and daunting, not to mention the fact that the evaluation sometimes occurs after it is actually needed, with the end users of the resulting MT output wondering what the evaluators do in the shadows. Intento's Spotlight is designed to shed some light on this subject and dispel the gloom.
Spotlight is a cloud solution available on demand from the Intento Console. The user interface (UI) is lean, and the wizard that guides you through creating an evaluation is pretty straightforward.
We played with this new product using COVID-related corpora by TAUS, taken from Intento's research on the best MT engines for this domain. The evaluation compared the stock Google Cloud Advanced Translation API against a custom model built on the same API, translating from English into Russian.
Spotlight applies a "less is more" principle to dataset size: it uses the first 2,000 segments from the evaluation files, which is considered the optimal size for a sufficiently accurate evaluation.
Evaluations are currently scored with hLEPOR; BERT score support is coming soon, and two more metrics, TER and BLEU, are also on Intento's roadmap.
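For readers unfamiliar with these reference-based metrics, the snippet below is a minimal sketch of how two of them, BLEU and TER, can be computed with the open-source sacrebleu library. It is purely illustrative (the example sentences are invented) and says nothing about how Spotlight computes hLEPOR internally.

```python
# Minimal sketch: scoring MT output against human references with the
# open-source sacrebleu library. Purely illustrative; not Spotlight's
# implementation, and the sentences are invented.
from sacrebleu.metrics import BLEU, TER

hypotheses = [
    "The vaccine was approved for emergency use.",
    "Patients must isolate for ten days.",
]
# One reference stream: references[0][i] matches hypotheses[i].
references = [[
    "The vaccine was authorized for emergency use.",
    "Patients have to isolate for ten days.",
]]

print(f"BLEU: {BLEU().corpus_score(hypotheses, references).score:.2f}")
print(f"TER:  {TER().corpus_score(hypotheses, references).score:.2f}")
```

Note that higher BLEU is better while TER is an edit rate where lower is better, which is one reason a single dashboard score (hLEPOR, in Spotlight's case) is convenient for quick comparisons.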
In our small experiment, Spotlight showed a higher overall hLEPOR score of 0.61 for the custom Google Cloud Advanced Translation API, compared to 0.58 for the stock engine.
After getting a quick overview of the results, a reviewer can proceed to a detailed analysis of individual segments, e.g., the degraded ones appearing below the line, or check the improved ones.
In the process of such a review, a reviewer is able to correct segments and annotate the issues found. This "light-weight" review approach helps get faster evaluation results by catching and addressing only the issues that actually need fixing.
Depending on the results of the Spotlight evaluation, users may want to retrain the custom MT engine or flag the particular issues to post-editors. The reviewed data (already corrected and annotated) can also be used to retrain the MT model.
An overview of the segment-level hLEPOR scores helps you understand how the current MT customization is performing and saves time, since you can run a focused review instead of a full-scope evaluation.
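To illustrate why segment-level scores enable such a focused review, here is a small sketch that compares per-segment scores from a stock and a custom model and surfaces only the degraded segments for the reviewer's attention. The sentences and scores are invented placeholders; Spotlight performs this kind of triage in its UI.

```python
# Sketch of a focused review: rank segments by score delta between a
# stock and a custom model and surface only the degraded ones.
# Sentences and scores are invented placeholders.
segments = [
    # (source, stock_score, custom_score)
    ("Wash your hands frequently.", 0.62, 0.71),
    ("Symptoms may appear 2-14 days after exposure.", 0.58, 0.44),
    ("Avoid close contact with sick people.", 0.55, 0.57),
]

# A negative delta means the custom model degraded this segment.
deltas = [(src, custom - stock) for src, stock, custom in segments]
degraded = sorted((d for d in deltas if d[1] < 0), key=lambda d: d[1])

for src, delta in degraded:
    print(f"{delta:+.2f}  {src}")  # review these segments first
```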
According to the development roadmap presented at Spotlight's launch in November 2020 (the launch page offers a virtual demo of Spotlight and a slide deck from that event), Spotlight is just one of the tools in Intento's MT Studio product. This new toolkit for end-to-end MT curation will include options for data cleaning, training, and evaluation of multiple MT models, which may be even more interesting to a broader audience.
Source: Intento
Being a software company, Intento leaves the task of trying the new service and actually training the engines to language service providers (LSPs). However, they do use Spotlight internally at Intento, saving their analytics team hours of precious time. Yes, that is correct: even with such agile automation, the human stays in the loop to curate the MT training, evaluate the engines, and fine-tune the process where needed.
The year is 2023. Six years after the big neural MT push of 2017, it seems fair to say that machine translation (MT) has finally found its way into the localization industry. Most MT providers produce reasonably acceptable baseline quality, and MT solutions have never been more accessible. As a result, MT is becoming a reality in many organizations. What's more, MT technology has reached a certain level of maturity in terms of customization and training.
Developing your own approach to using generative AI models such as ChatGPT — one that is both practical AND ethically sound — is perhaps the best way of proving naysayers wrong and ensuring that you get the most out of this promising piece of technology. Perhaps surprisingly, the first key to success with generative AI models is to learn how to talk to them.
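To make "learning how to talk to them" concrete, here is a minimal sketch contrasting a vague prompt with a specific, context-rich one, using the openai Python package. The model name and prompt wording are illustrative assumptions, not recommendations from this article.

```python
# Sketch: the same request as a vague prompt and as a specific,
# context-rich prompt. Model name and wording are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

vague = "Translate into Russian: Patients must isolate for ten days."
specific = (
    "Translate the following English sentence into Russian for a "
    "public-health brochure. Keep the register formal and use the "
    "terminology of official COVID-19 guidance.\n\n"
    "Patients must isolate for ten days."
)

for prompt in (vague, specific):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)
```

The second prompt typically yields a translation closer to what a reviewer would accept, because the model is told the audience, register, and terminology to use rather than left to guess.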