The language technology landscape continues to evolve by leaps and bounds. Yet, translation quality standards (as well as client quality expectations) remain just as high as they were before the rise of modern machine translation (MT) technology. When it comes to quality checks, automatic quality assurance (QA) tools have been a godsend for linguists.
Automatic QA tools are like your run-of-the-mill spell checker, but much, much better. Using high-precision, high-performance QA tools not only helps improve the quality of a text but can also speed up turnaround times on localization projects, which in turn leads to cost savings. At Nimdzi, we already mapped out most such tools in 2019. Among the most historically popular ones have been Xbench (by ApSIC) and Verifika (by Palex).
A list of QA checks in Xbench. Source: Xbench
Yet, even in 2021, a number of language service providers (LSPs) still rely on Excel reports generated by QA tools. Switching back and forth between working environments to implement all the necessary changes can be quite time-consuming. A more up-to-date approach is to take advantage of solutions that offer automatic, live updates of quality-checked segments, whether directly in the QA environment or in the cloud.
One example of a cloud-based QA solution that can be used either as a stand-alone tool or integrated via API connectors is lexiQA. It runs across a wealth of operating systems and browsers, from Safari on Mac to Brave and Tor on Linux. As an enterprise-level product, lexiQA doesn't offer a plan for freelancers: the developers believe that freelancers should not pay for translation software. Instead, freelancers can access lexiQA while working within their TMS of choice or in their client's platform.
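To make the API-connector idea concrete, here is a minimal sketch of how a TMS might submit segment pairs to a QA service and tally the issues it returns. The payload and response shapes below are hypothetical; lexiQA and other real QA APIs define their own schemas, so consult the vendor documentation for the actual contract.

```python
import json

def build_qa_request(segments, source_locale, target_locale):
    """Build a JSON payload submitting segment pairs for QA checking.

    Hypothetical schema for illustration only, not a real vendor API.
    """
    return json.dumps({
        "sourceLocale": source_locale,
        "targetLocale": target_locale,
        "segments": [
            {"id": i, "source": src, "target": tgt}
            for i, (src, tgt) in enumerate(segments)
        ],
    })

def summarize_issues(response):
    """Count reported issues per segment from a (hypothetical) QA response."""
    counts = {}
    for issue in response.get("issues", []):
        counts[issue["segmentId"]] = counts.get(issue["segmentId"], 0) + 1
    return counts

payload = build_qa_request([("Hello", "Hallo")], "en-US", "de-DE")
sample_response = {"issues": [{"segmentId": 0, "type": "punctuation"}]}
print(summarize_issues(sample_response))  # {0: 1}
```

The point of such an integration is that issue counts come back per segment, so the TMS can surface them live in the editor rather than in an exported report.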
Among lexiQA’s most interesting attributes we found the following:
Thai spell check performance comparison. Source: lexiQA
Now, to be fair, Thai may not be the most common target language for most organizations (although it is in fact one of the top Asian languages in terms of volume of translated content). Yet something that those in charge of a typical localization program should really pay closer attention to when selecting a QA tool is the number of false positives and false negatives that have to be dealt with, especially in heavily inflected languages.
Morphology control remains a tricky subject for most QA solutions on the market: the average QA report still contains far too many morphology-related issues. This is why QA tools such as Rigora (by Logrus Global) and Phoenix (by ITI) continue to be developed by LSPs that deal with heavily inflected target languages on a daily basis.
Comparison of false issues provided by several QA tools. Source: lexiQA
All checks in lexiQA are locale-specific by default, which means their design is based on the grammar of each locale rather than on generic pattern-matching rules.
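The difference between generic pattern matching and locale-aware checks can be illustrated with a toy number check. The sketch below (my own simplified example, not lexiQA's actual implementation) shows how a generic check flags a false positive when the target locale legitimately uses a decimal comma, while a locale-aware variant normalizes separators first:

```python
import re

def extract_numbers(text):
    """Find number tokens, allowing either '.' or ',' as decimal separator."""
    return re.findall(r"\d+(?:[.,]\d+)?", text)

def generic_number_check(source, target):
    """Generic pattern matching: flags any source number not copied verbatim."""
    tgt = extract_numbers(target)
    return [n for n in extract_numbers(source) if n not in tgt]

def locale_number_check(source, target, target_locale):
    """Locale-aware: normalize decimal separators before comparing."""
    decimal_comma_locales = {"de-DE", "fr-FR", "ru-RU"}  # illustrative subset
    tgt = extract_numbers(target)
    if target_locale in decimal_comma_locales:
        tgt = [n.replace(",", ".") for n in tgt]
    return [n for n in extract_numbers(source) if n not in tgt]

# "1.5" correctly becomes "1,5" in German, yet the generic check complains:
print(generic_number_check("Add 1.5 kg", "1,5 kg hinzufügen"))           # ['1.5']  (false positive)
print(locale_number_check("Add 1.5 kg", "1,5 kg hinzufügen", "de-DE"))   # []       (no issue)
```

Multiply this single false positive across thousands of segments and dozens of check types, and the cost of non-locale-aware QA rules becomes obvious.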
Introducing a QA tool into the localization workflow adds another layer of complexity to the QA game for both LSPs and translation buyers, directly influencing the time and quality of the work.
Fit-for-purpose, cloud-based QA tools help avoid the quirks of using different translation environments and operating systems (e.g., Mac users often suffer from insufficient support in QA checking tools) and help deliver quicker results in a custom workflow.
Moving forward, analytics collected by QA tools can be used to capture business intelligence data. And speaking of data, proper QA checkers help improve raw data for MT engine training and clean up linguistic assets such as translation memories.
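As a minimal sketch of how QA-style checks can clean training data, the hypothetical filter below drops TM segment pairs that fail a few basic sanity checks (empty target, untranslated target, implausible length ratio) before the remainder is used as MT training material. Real pipelines apply far richer check sets; the thresholds and checks here are illustrative assumptions:

```python
def qa_filter_tm(pairs, max_ratio=3.0):
    """Keep only TM segment pairs that pass basic QA checks.

    Drops pairs with an empty target, a target identical to the source
    (likely untranslated), or an implausible source/target length ratio.
    """
    clean = []
    for src, tgt in pairs:
        if not tgt.strip():
            continue                      # empty target
        if src.strip() == tgt.strip():
            continue                      # likely untranslated
        ratio = max(len(src), len(tgt)) / max(1, min(len(src), len(tgt)))
        if ratio > max_ratio:
            continue                      # suspicious length mismatch
        clean.append((src, tgt))
    return clean

tm = [
    ("Save file", "Datei speichern"),     # good pair
    ("Cancel", ""),                       # empty target
    ("OK", "OK " * 20),                   # length mismatch
]
print(qa_filter_tm(tm))                   # [('Save file', 'Datei speichern')]
```

Even this crude pass removes the noisiest entries; the same logic scales to full TM exports before they feed an MT engine.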
As reported in the 2021 edition of the Nimdzi 100, interpreting has arguably been the sector within the language industry that was the most heavily affected by the COVID-19 pandemic — both positively and negatively.
2020 was a big year for language technology. One lesser-known application for AI, dubbed the “digital shield,” is also set to become a more prominent part of the fight against misleading and manipulative content.
Evaluating and migrating between translation management systems (TMS) is a lot of work and there are always reasons not to do it. It might be the fear of moving away from a familiar TMS, even if it isn’t fit for purpose, the impact on other teams and external stakeholders, or the prospect of the time, technical work, and costs involved. The number of TMS solutions on the market can also make the decision far from simple and straightforward.