As we discussed in the June 2020 edition of the Nimdzi Language Tech Atlas, several kinds of tools help minimize human error in localization. These include automated Quality Assurance tools (QA checkers) as well as proven solutions for in-context, in-country review (such as InContext Translation and QA by Lingoport, Rigi.io, or visualReview by translate5).
To further help QA teams, companies like Microsoft have developed tools such as MS Policheck, which produces reports on localized content for human evaluators to go through. These reports flag potential issues with “offensive” or “contentious” terms.
And yet, marketing and localization teams across the globe continue to call out review as an ongoing issue. Here’s a common situation described in this podcast on maximizing the impact of localized content:
As it happens, even when automation is in place to help ensure linguistic quality, a company can still end up with frustrated customers and offended users. This is where a manual approach to quality matters.
There are quite a few types of testing that involve human input and dedicated involvement (functionality testing, linguistic QA, regression testing, etc.), with culturalization testing being one of the most important. It requires human effort to check for anything that could be considered inappropriate, offensive, or unintentionally laughable in target locales.
Localization and testing companies like Alpha CRC also take a manual approach to localization auditing: all checks are done by auditors directly on the platforms any end user would use. Over the course of an audit, another important thing a machine can’t yet help check is impression. As Alpha CRC puts it, impression is essentially checking the overall content in context. Suitability for the target audience, tone, style, and fluency are other aspects a human tester keeps an eye on.
Linguistic quality audit. Source: Alpha CRC
Why is such an audit important? Ultimately, the instructions, lists of potentially offensive terms, corporate glossaries, and style guides used during testing are the work of one person (or one group). The problem, then, is that content quality rests on this single perspective. Testers and auditors are there to provide a second opinion, and anything questionable will result in a discussion. It is important to have multiple opinions voiced and questions raised before release, so that these issues are not brought up by end users once the product is live.
Pleo is a European fintech company specializing in expense and spend management solutions. It empowers employees to make work-related purchases while ensuring that their companies maintain control over all spending. Using breakthrough technology and commercial cards, Pleo eliminates the need for expense reports, reduces administrative complexity, and simplifies bookkeeping. The company serves clients in 16 locales across Europe and operates in 11 languages, thanks to its localization team.
The ongoing coronavirus outbreak has been affecting the way businesses and individuals work. What does it mean for the localization industry?