Neural machine translation evaluation innovations
Is your NMT engine better than Google Translate?
With the proliferation of MT engines in 2018, choosing the best one has become a challenge. To address the pain of evaluating and selecting MT, companies are launching new services and technology. In this article we review ONEs, a human MT evaluation service from One Hour Translation, and Inten.to, an automated MT evaluation and marketplace service. Both are available via API.
So far, independent MT comparisons have been hard to come by. Between 2016 and 2018, new entrants to NMT posted numerous press releases stating they had approached human quality (Google 2016, Microsoft 2018) or beaten competitors (Lilt 2017, since taken down; DeepL 2017). Most of these comparisons relied on a variation of the BLEU metric, which compares machine translation output against a pre-loaded human reference translation and can be manipulated.
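To make that weakness concrete, here is a minimal, self-contained sketch of sentence-level BLEU (n-gram precision with a brevity penalty). This is a toy Python version, not the exact variant used in any of the comparisons above; the point is that the score is bound to a known reference, so a system tuned toward that reference scores far higher than a different but perfectly adequate translation.

```python
# Toy sentence-level BLEU: n-gram precision plus a brevity penalty.
# Real evaluations use tooling such as sacrebleu; this sketch only
# illustrates why the metric is reference-bound and easy to game.
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    cand, ref = candidate.split(), reference.split()
    log_precisions = []
    for n in range(1, max_n + 1):
        cand_ngrams, ref_ngrams = ngrams(cand, n), ngrams(ref, n)
        overlap = sum((cand_ngrams & ref_ngrams).values())
        total = max(sum(cand_ngrams.values()), 1)
        # Smooth zero counts so short sentences still get a score
        log_precisions.append(math.log(max(overlap, 0.1) / total))
    # Penalize candidates shorter than the reference, never reward longer ones
    brevity = min(1.0, math.exp(1 - len(ref) / max(len(cand), 1)))
    return brevity * math.exp(sum(log_precisions) / max_n)

reference = "the cat sat on the mat"
print(bleu("the cat sat on the mat", reference))      # 1.0: copies the reference
print(bleu("a cat was sitting on a mat", reference))  # low, though adequate
```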
At the same time, the need for independent evaluation is rising:
- There are many new options to choose from
- Interest in machine translation is at an all-time high
- Engines are easy to access
- Training custom engines is easier than ever

Choice has become more difficult. Keep reading to learn how new offerings facilitate it.
Human MT evaluation over API (One Hour Translation)
One Hour Translation launched a human machine translation evaluation service, ONEs (OHT NMT Evaluation Score). It opened with an infographic comparing Google, Neural Systran, and DeepL across four language combinations.
OHT's approach is to have 20 human evaluators per language combination, all with experience in the subject area. Each scores the translations sentence by sentence from 1 to 100, with instructions on what each degree means. Tests are blind: translators do not know which engine they are evaluating and thus carry no bias. Two statistical tests verify the human inputs, the first to ensure the results are statistically significant, and the second to calculate the confidence score and margin of error (sketched below). In the reporting section, OHT slices the results by winning sentences per engine, evaluations for each sentence, and score distribution.
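OHT has not published the exact tests it runs, so the following is only a hypothetical Python sketch of that second step: a mean score with a 95% confidence interval and margin of error over one engine's evaluator scores. The score values are invented for the example.

```python
# Hypothetical sketch of a margin-of-error calculation over per-engine
# evaluator scores on a 1-100 scale. Not OHT's actual methodology.
from statistics import mean, stdev

def margin_of_error(scores, z=1.96):  # z = 1.96 for ~95% confidence
    return z * stdev(scores) / (len(scores) ** 0.5)

# Invented scores from 20 blind evaluators for a single engine
scores = [72, 68, 75, 80, 66, 71, 77, 74, 69, 73,
          70, 76, 78, 65, 72, 74, 71, 79, 68, 75]
m, moe = mean(scores), margin_of_error(scores)
print(f"mean score {m:.1f}, 95% CI {m - moe:.1f} to {m + moe:.1f}")
```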
Evaluations take about two days to compile and cost in the range of USD 2,000–4,000, depending on the number of sentences, languages, and engines compared.
This human evaluation can work for any technology, including RbMT, SMT, and NMT, as well as public and on-premise MT.
Automated MT evaluation + marketplace (Inten.to)
Inten.to has launched an automated evaluation service with a marketplace. The company first appeared in mid-2017 and has since raised close to USD 1 million in funding.
Inten.to compares engines by quality and price on a quarterly basis, automatically selects the most suitable one for the task at hand, and then provides that engine via API. Inten.to monetizes by reselling MT with a markup.
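Inten.to has not published its selection logic, so the Python sketch below is purely hypothetical; it only illustrates the marketplace idea of routing a request to the cheapest engine whose latest quality score for a language pair clears a threshold. All engine names, scores, and prices are invented.

```python
# Hypothetical engine router: cheapest engine above a quality bar.
# Not Inten.to's actual API or selection algorithm.
from dataclasses import dataclass

@dataclass
class Engine:
    name: str
    quality: float         # latest quarterly score for this language pair
    price_per_char: float  # USD

def pick_engine(engines, min_quality=0.7):
    eligible = [e for e in engines if e.quality >= min_quality]
    if not eligible:
        raise ValueError("no engine meets the quality bar")
    return min(eligible, key=lambda e: e.price_per_char)

catalog = [
    Engine("engine_a", quality=0.82, price_per_char=2e-5),
    Engine("engine_b", quality=0.74, price_per_char=1e-5),
    Engine("engine_c", quality=0.65, price_per_char=5e-6),
]
print(pick_engine(catalog).name)  # engine_b: cheapest above the bar
```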
For quality evaluations, Inten.to uses LEPOR scores, an automatic metric that, like BLEU, compares MT output with reference human translations.
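LEPOR (Han et al., 2012) combines a length penalty, a word-position-difference penalty, and a harmonic mean of precision and recall. The Python sketch below is a simplified word-level rendering of that idea, not Inten.to's implementation, and it omits the n-gram and weighting refinements of the full metric.

```python
# Simplified word-level LEPOR: length penalty x position-difference
# penalty x harmonic mean of precision and recall.
import math

def lepor(candidate, reference, alpha=1.0, beta=1.0):
    cand, ref = candidate.split(), reference.split()
    c, r = len(cand), len(ref)
    # Length penalty: punish output shorter or longer than the reference
    lp = 1.0 if c == r else math.exp(1 - max(c, r) / min(c, r))
    # Align each candidate word to the nearest unmatched reference occurrence
    used, diffs, matches = set(), [], 0
    for i, word in enumerate(cand):
        positions = [j for j, w in enumerate(ref) if w == word and j not in used]
        if positions:
            j = min(positions, key=lambda jj: abs((i + 1) / c - (jj + 1) / r))
            used.add(j)
            diffs.append(abs((i + 1) / c - (j + 1) / r))
            matches += 1
    if matches == 0:
        return 0.0
    # Position-difference penalty: matched words should keep their order
    npos = math.exp(-sum(diffs) / c)
    precision, recall = matches / c, matches / r
    harmonic = (alpha + beta) / (alpha / recall + beta / precision)
    return lp * npos * harmonic

print(lepor("the cat sat on the mat", "the cat sat on the mat"))  # 1.0
print(lepor("mat the on sat cat the", "the cat sat on the mat"))  # lower: order penalized
```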
So far, Inten.to has been integrated into Smartcat and Fluency TMS, where users can select its aggregated MT just like a regular engine.
These two new offerings usher in a new market niche of MT selection.
Plenty of opportunities remain. For instance, there is not yet:
- Any service to spot-check whether an instant translation of a document via MT + TM is good enough for business purposes
- An online marketplace for small MT engines built by LSPs
- Any large, modern community where MT training specialists can be hired and trained, not even on LinkedIn
While rapidly progressing toward end users, MT training and comparison remain expert-driven and somewhat academic in flavor. In the next three years, however, all that could change.