As we discussed in the June 2020 edition of the Nimdzi Language Tech Atlas, several kinds of tools help minimize human error in localization: automated Quality Assurance tools (QA checkers) as well as proven solutions for in-context, in-country review (such as InContext Translation and QA by Lingoport, Rigi.io, or visualReview by translate5).
To further help QA teams, companies like Microsoft have developed tools such as MS Policheck, which produces reports on localized content for human evaluators to go through. These reports flag potential issues with “offensive” or “contentious” terms.
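To illustrate the general idea (this is a minimal sketch, not how MS Policheck itself is implemented), the check below scans localized strings against an illustrative list of flagged terms and produces a report for a human evaluator to review. The term list, string IDs, and suggestions are hypothetical examples.

```python
import re

# Hypothetical list of contentious terms and suggested alternatives.
FLAGGED_TERMS = {
    "blacklist": "consider 'blocklist'",
    "master": "consider 'primary'",
}

def scan_strings(strings):
    """Return a report of potential issues; a human evaluator makes the final call."""
    report = []
    for string_id, text in strings.items():
        for term, suggestion in FLAGGED_TERMS.items():
            # Whole-word, case-insensitive match so "remaster" is not flagged.
            if re.search(rf"\b{re.escape(term)}\b", text, flags=re.IGNORECASE):
                report.append({
                    "string_id": string_id,
                    "term": term,
                    "suggestion": suggestion,
                    "context": text,
                })
    return report

if __name__ == "__main__":
    # Illustrative localized strings keyed by resource ID.
    localized = {
        "settings.title": "Add domains to the blacklist",
        "welcome.body": "Welcome to the app!",
    }
    for issue in scan_strings(localized):
        print(f"{issue['string_id']}: '{issue['term']}' flagged ({issue['suggestion']})")
```

The point of such a report is not to auto-correct anything: it simply narrows down what a human evaluator needs to look at.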
And yet, marketing and localization teams across the globe continue to call out review as an ongoing issue. Here’s a common situation described in this podcast on maximizing the impact of localized content:
As it happens, even when automation is in place to help ensure linguistic quality, you can still end up with frustrated customers and offended users. This is where a manual approach to quality matters.
Quite a few types of testing require dedicated human involvement (functionality testing, linguistic QA, regression testing, etc.), with culturalization testing being one of the most important. It takes human judgment to catch anything that could be considered inappropriate, offensive, or unintentionally laughable in target locales.
Localization and testing companies like Alpha CRC also take a manual approach to localization auditing: all checks are done by auditors directly on the platforms any end user would use. Another important thing a machine can't yet check during an audit is impression, which, as Alpha CRC puts it, is basically an assessment of the overall content in context. Suitability for the target audience, tone, style, and fluency are other aspects a human tester keeps an eye on.
Linguistic quality audit. Source: Alpha CRC
Why is such an audit important? Ultimately, the instructions, lists of potentially offensive terms, corporate glossaries, and style guides used during testing are the work of one person (or one group), which means content quality rests on that single perspective. Testers and auditors are there to provide a second opinion, and anything questionable results in a discussion. It's better to have multiple opinions heard and questions raised before release than to have end users raise them once the product is live.