So you have jumped on the automation bandwagon and now want to run automatic quality assurance (QA) on translations.
The general approach to running automatic QA is to use:
QA tools normally work with bilingual (source and target) files. They help you find:
Because these QA tools can generate false positives, it is highly recommended to create dedicated configurations that automatically reduce the noise. In Verifika, for example, that would be a quality profile per project or language.
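To make this concrete, here is a minimal sketch of one typical bilingual check, number consistency, with a small per-project profile that filters out known noise. The `PROFILE` dictionary, the function names and the sample segments are illustrative assumptions, not part of Verifika or any other QA tool.

```python
import re

# Hypothetical per-project profile: patterns to ignore so the check
# produces less noise (mirrors the "quality profile" idea above).
PROFILE = {
    "ignore_patterns": [r"\bversion \d+(\.\d+)*\b"],  # e.g. product version strings
}

def extract_numbers(text: str, profile: dict) -> list[str]:
    """Return all numbers in a segment, minus anything the profile says to ignore."""
    for pattern in profile.get("ignore_patterns", []):
        text = re.sub(pattern, "", text, flags=re.IGNORECASE)
    return re.findall(r"\d+(?:[.,]\d+)?", text)

def check_numbers(segments: list[tuple[str, str]], profile: dict) -> list[dict]:
    """Flag segments whose source and target contain different sets of numbers."""
    issues = []
    for i, (source, target) in enumerate(segments, start=1):
        src_nums = sorted(extract_numbers(source, profile))
        tgt_nums = sorted(extract_numbers(target, profile))
        if src_nums != tgt_nums:
            issues.append({"segment": i, "source": source, "target": target,
                           "expected": src_nums, "found": tgt_nums})
    return issues

if __name__ == "__main__":
    bilingual = [
        ("Order 3 licenses before 31 May.", "Bestellen Sie 3 Lizenzen vor dem 31. Mai."),
        ("The file is 25 MB.", "Die Datei ist 52 MB groß."),  # should be flagged
    ]
    for issue in check_numbers(bilingual, PROFILE):
        print(issue)
```

The profile is what keeps a check like this usable in practice: without it, every version string or date format quirk would show up as a false positive.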
Although some tools still deliver QA reports as Excel sheets, a better approach is to use solutions that let you update the flagged segments directly from the report. Otherwise, a lot of time is lost switching between working environments and copying the changes back into the working files.
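For comparison, here is a hedged sketch of what that manual round trip can look like when the report is an Excel sheet: the corrections still have to be carried back into the bilingual file (an XLIFF 1.2 file in this example) in a separate step. The file name, unit IDs and helper function are made up for illustration; tools that let you edit segments right from the report make this step unnecessary.

```python
import xml.etree.ElementTree as ET

XLIFF_NS = {"x": "urn:oasis:names:tc:xliff:document:1.2"}

def apply_corrections(xliff_path: str, corrections: dict[str, str]) -> None:
    """Write corrected target texts (keyed by trans-unit id) back into an XLIFF 1.2 file."""
    ET.register_namespace("", XLIFF_NS["x"])
    tree = ET.parse(xliff_path)
    for unit in tree.getroot().iterfind(".//x:trans-unit", XLIFF_NS):
        unit_id = unit.get("id")
        if unit_id in corrections:
            target = unit.find("x:target", XLIFF_NS)
            if target is not None:
                target.text = corrections[unit_id]
    tree.write(xliff_path, encoding="utf-8", xml_declaration=True)

# Corrections copied by hand from the QA report (ids and texts are made up).
apply_corrections("project_de-DE.xlf", {"42": "Die Datei ist 25 MB groß."})
```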
The year is 2023. Six years after the big neural MT push of 2017, it seems fair to say that machine translation (MT) has finally found its way into the localization industry. Most MT providers deliver reasonably acceptable baseline quality, and MT solutions have never been more accessible. As a result, MT is becoming a reality in many organizations. What’s more, MT technology has reached a certain level of maturity in terms of customization and training.
Developing your own approach to using generative AI models such as ChatGPT — one that is both practical AND ethically sound — is perhaps the best way of proving naysayers wrong and ensuring that you get the most out of this promising piece of technology. Perhaps surprisingly, the first key to success with generative AI models is to learn how to talk to them.
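As a first illustration of what “talking to” a generative model can mean in a localization context, below is a minimal sketch of a prompted translation review. It assumes the OpenAI Python client (v1.x) with an API key in the environment; the model name, the prompt wording and the sample segment are assumptions rather than recommendations.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Illustrative source/target pair; the German target deliberately reverses the meaning.
source = "Click Save to keep your changes."
target = "Klicken Sie auf Speichern, um Ihre Änderungen zu verwerfen."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; substitute whatever you have access to
    messages=[
        {"role": "system",
         "content": "You are a translation reviewer. Report mistranslations, "
                    "omissions and terminology issues; answer in English."},
        {"role": "user",
         "content": f"Source (en): {source}\nTarget (de): {target}"},
    ],
    temperature=0,
)
print(response.choices[0].message.content)
```

How you phrase the system instruction, how much context you give about terminology and audience, and how you constrain the output format are exactly the “learning how to talk to them” part.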