The Human Factor in Linguistic Quality Evaluation

When Automation Is Not Enough: Culturalization and Impression

Nimdzi Finger Food is the bite-sized and free to sample insight you need to fuel your decision-making today.

If you want to learn more from our experts about language quality auditing and why human input is necessary, contact us today to become a Nimdzi Partner.

As we discussed in the June 2020 edition of the Nimdzi Language Tech Atlas, several kinds of tools help minimize human error in localization. These include automated Quality Assurance tools (QA checkers) as well as proven solutions for in-context, in-country review (such as InContext Translation and QA by Lingoport, Rigi.io, or visualReview by translate5).
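To make the idea of an automated QA checker concrete, here is a minimal sketch of one common check these tools perform: verifying that placeholders and numbers in a source string survive translation intact. This is an illustrative example only, not the implementation of any tool named above; the function name and regex are assumptions for the sketch.

```python
import re

def qa_check_placeholders(source: str, target: str) -> list[str]:
    """Flag common mechanical errors a QA checker would catch:
    mismatched placeholders/numbers, or an empty translation."""
    issues = []
    # Match printf-style placeholders (%d), brace placeholders ({name}), and digits
    pattern = r"%\w|\{\w*\}|\d+"
    src_tokens = sorted(re.findall(pattern, source))
    tgt_tokens = sorted(re.findall(pattern, target))
    if src_tokens != tgt_tokens:
        issues.append(f"Placeholder mismatch: {src_tokens} vs {tgt_tokens}")
    if not target.strip():
        issues.append("Empty translation")
    return issues

# A matching pair passes; a dropped placeholder is flagged
print(qa_check_placeholders("Save {name}", "Guardar {name}"))      # no issues
print(qa_check_placeholders("You have %d items", "Tienes cosas"))  # flagged
```

Checks like this catch mechanical defects reliably, which is exactly why the remaining problems, discussed below, are the ones that need human judgment.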

To further help QA teams, companies like Microsoft have developed tools such as PoliCheck, which produces reports on localized content for human evaluators to go through. These reports flag potential issues with "offensive" or "contentious" terms.
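The report-generation step can be sketched as a simple term scan. To be clear, this is a hypothetical illustration of the general approach (scan content against a term list, emit hits with positions for human review), not PoliCheck's actual implementation; the function and term list are invented for the example.

```python
def flag_terms(text: str, contentious_terms: set[str]) -> list[tuple[str, int]]:
    """Return (term, character offset) pairs for every occurrence of a
    listed term, so a human evaluator can review each hit in context."""
    hits = []
    lowered = text.lower()
    for term in contentious_terms:
        start = 0
        # Find every occurrence, not just the first
        while (idx := lowered.find(term.lower(), start)) != -1:
            hits.append((term, idx))
            start = idx + 1
    return sorted(hits, key=lambda h: h[1])

report = flag_terms("Push to the master branch", {"master"})
print(report)  # [('master', 12)]
```

Note that the tool only produces the report; deciding whether a flagged term is actually a problem in its context remains a human call, which is the article's point.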

And yet, marketing and localization teams across the globe continue to call out review as an ongoing issue. Here’s a common situation described in this podcast on maximizing the impact of localized content:

“...I was in a meeting where my day job was actually providing LQA services, and they were saying, “Wow, you’re rating this other company really high, and yet our in-country stakeholders have so much feedback and they’re not satisfied.”

As it happens, even when automation is in place to help ensure linguistic quality, one can still get frustrated customers and offended users. Here’s where a manual approach to quality matters.

Quite a few types of testing require dedicated human involvement (functionality testing, linguistic QA, regression testing, etc.), with culturalization testing being one of the most important. It takes human effort to check for anything that could be considered inappropriate, offensive, or unintentionally laughable in target locales.

Linguistic quality audit

Localization and testing companies like Alpha CRC also take a manual approach to localization auditing: all checks are done by auditors directly on the platforms any end user would use. Another important thing a machine can't yet check over the course of such an audit is impression. As Alpha CRC puts it, impression is essentially an evaluation of the overall content in context. Suitability for the target audience, tone, style, and fluency are further aspects a human tester keeps an eye on.

Linguistic quality audit. Source: Alpha CRC

Why is such an audit important? Ultimately, the instructions, lists of potentially offensive terms, corporate glossaries, and style guides used during testing are the work of one person (or one group), which means content quality rests on a single perspective. Testers and auditors are there to provide a second opinion, and anything questionable results in a discussion. It's important to have multiple opinions heard and questions raised before release, so that they are not raised by end users once the product is live.

All these manual testing activities around culturalization and impression help create content that resonates with local audiences.

