So you jumped on the automation bandwagon and now want to run automatic quality assurance (QA) on your translations.
The general approach to running automatic QA is to use:
QA tools normally work with bilingual (source and target) files. They help you find:
To avoid the false positives these QA tools may generate, it’s highly recommended to create dedicated configurations that automatically reduce the noise. In Verifika, for example, this would be a quality profile per project and language.
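To make the idea concrete, here is a minimal sketch of one typical bilingual check — flagging segments where a number in the source is missing from the target — with a per-project ignore set playing the role of a noise-reducing quality profile. The function name and data shapes are illustrative assumptions, not the API of any particular QA tool.

```python
import re

def find_number_mismatches(segments, ignore=frozenset()):
    """Hypothetical QA check over (source, target) segment pairs.

    Flags segments where a number in the source does not appear in the
    target. The `ignore` set stands in for a per-project quality profile:
    numbers listed there are never reported, suppressing known false
    positives (e.g. locale-specific number formatting).
    """
    issues = []
    for i, (source, target) in enumerate(segments):
        src_nums = set(re.findall(r"\d+(?:[.,]\d+)?", source))
        tgt_nums = set(re.findall(r"\d+(?:[.,]\d+)?", target))
        missing = src_nums - tgt_nums - ignore
        if missing:
            issues.append((i, sorted(missing)))
    return issues

segments = [
    ("Order 5 units by 2024.", "Pida 5 unidades antes de 2024."),  # consistent
    ("The fee is 10 USD.", "La tarifa es de diez USD."),           # "10" spelled out
]
print(find_number_mismatches(segments))  # [(1, ['10'])]
```

Adding `ignore={"10"}` for this project would silence the second finding — exactly the kind of per-project tuning a quality profile encodes.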
Though some tools still deliver QA reports as Excel sheets, the better option is a solution that lets you update the flagged segments directly from the report. Otherwise, you lose a lot of time switching between working environments to implement the needed changes in the working files.