Throughout history, the game changer for business has always been technology. Manufacturers replaced artisans, factories took over from manufacturers, automation and telemetry made most of the repetitive manual labor obsolete. From personal computers to the Internet and mobile commerce, in the past 30 years, the world has seen the emergence of the digital economy which is quickly overtaking the economy of commodities.
It is clear that the next big shift comes with Artificial Intelligence. Applied AI has the potential to revolutionize everything: health, finance, property, transport and travel, manufacturing, marketing and sales, agriculture, energy, business services, decision making, and even politics. It will split history into pre-AI and post-AI ages. Autonomous robots, computers conversing with humans, self-driving cars, and automated translation in hundreds of languages: these products of human imagination are now turning into reality.
|  |  |  |  |
|---|---|---|---|
| Authoring | Dozens of scientists | Thousands of engineers | Millions of individuals |
| Development tools | Code | Workbench | Step-by-step wizard |
| Language | Bits and digits | Controlled single-language | Natural multi-language |
And because AI is still in its infancy, localization and language services professionals need to be prepared to work in this environment. The door is wide open.
Since the rise of deep learning in the 2000s, AI has become more accessible, and barriers to entry have fallen. No longer a field reserved for scientists, coders, and deep-tech developers, AI and machine learning are now within reach of ordinary mortals.
This is a very recent development. In 2015, Amazon launched Amazon Machine Learning and Microsoft launched Azure ML; in 2018, Google introduced AutoML. These online tools from the tech giants make it possible to train custom AI models. Standardized APIs make it fairly simple to connect AI to any piece of software or application. Sandbox accounts for experimentation are typically free, and getting started is as easy as signing up. Newer releases add drag-and-drop interfaces for building AI, so that, in theory, even children could do it.
Machine learning and AI training courses have proliferated: they are available from universities, online learning platforms, and the likes of Facebook, Google, and Microsoft. A significant share of the programs is free. There are machine learning conferences and hackathons, social media groups, help websites, and local communities, so help is readily available. As a result, the number of machine learning engineers is expected to grow within 3-5 years from an estimated 300,000 in 2017 to millions. This is one of the ways the tech giants are tackling the talent shortage.
For those who do not wish to depend on the tech giants and prefer a do-it-yourself approach, neural network code has gone open source. The best toolkits are available as free downloads on GitHub. Users still need to understand programming and to know the difference between convolutional neural networks and generative adversarial networks, but that is not the same as developing machine intelligence from scratch. It is more like assembling Lego building blocks.
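To see how small those building blocks really are, here is a minimal sketch of the basic unit every neural toolkit assembles at scale: a single artificial neuron trained with the classic perceptron learning rule. This is an illustration written from scratch, not code from any of the open-source toolkits mentioned above, and the learning rate and epoch count are arbitrary choices.

```python
# Minimal perceptron: a single artificial neuron learning logical AND.

def step(z):
    """Threshold activation: fire (1) if the weighted sum is positive."""
    return 1 if z > 0 else 0

def train_perceptron(samples, lr=0.1, epochs=20):
    """Learn two weights and a bias from (x1, x2, label) samples."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for x1, x2, y in samples:
            # Error-driven update: nudge weights toward the correct label.
            err = y - step(w1 * x1 + w2 * x2 + b)
            w1 += lr * err * x1
            w2 += lr * err * x2
            b += lr * err
    return w1, w2, b

AND_DATA = [(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 1)]
w1, w2, b = train_perceptron(AND_DATA)
predictions = [step(w1 * x1 + w2 * x2 + b) for x1, x2, _ in AND_DATA]
# predictions == [0, 0, 0, 1]: the neuron has learned AND
```

Modern convolutional and adversarial networks stack millions of such units, but the trainable-weights-plus-activation pattern is the same Lego brick throughout.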
This increasing accessibility means that AI can now be a valuable resource for anybody, not just Ivy-Leaguers and Google Geniuses sitting high in their Ivory Cubicles. For those who work in the language services industry, this is exciting news.
Most AI developed today is English-centric.
Specialists who develop AI typically operate from a monolingual perspective. IT engineers and data scientists use English for technical communication, and they carry the language into user communication. Marketing, support, and product specialists who experiment with AI focus on the application in their field. Localization is, as usual, an afterthought.
In reality, customers and their data speak many languages, and there is tremendous opportunity in leveraging local content at an early stage in AI development.
Don’t wait! Find the AI initiatives in your company and work with their leaders from an early stage to add a global perspective. Share the credit.
Those of us with backgrounds in localization often fall into the trap of equating AI with Neural Machine Translation (NMT). Since the beginning, machine translation (MT) has been a classic AI application, and the most visible and widely discussed one.
Like its predecessors, NMT is today the driver for nearly every language service, and its role may grow further still. It is having a profound effect on the language services industry, but challenges remain.
Although NMT has largely outperformed and replaced statistical machine translation (SMT), it is still quite pricey and demanding in its technical requirements and operational complexity, and thus out of range for most customers. Even more than its predecessors, NMT is not child’s play: it is data-hungry, and the underlying infrastructure and expertise it requires are taxing.
For this reason, to date, one leading actor dominates the NMT scene with two supporting actors and a few side actors struggling for an appearance on the proscenium.
In fact, aside from superior performance on public benchmarks, rapid adoption in deployments, and steady improvements, there have also been reports of poor performance, for example from systems built under low-resource conditions, confirming that NMT quality drops out of domain. The learning curve is steep with respect to the amount and, most importantly, the quality of training data. Handling data correctly requires more than basic skills, starting with evaluation, which is a huge task in itself. All this paints an even bleaker picture for NLP applications and NMT in long-tail languages, where no demand is sustainable without the proper skills and good, substantial data.
Additionally, NMT systems remain hard to interpret. The best-known drawback of artificial neural networks is their black-box nature: you never know how and, most importantly, why they come up with a certain output. This makes any improvement extremely complex and haphazard, if not arbitrary.
It sometimes seems that everybody in language services is talking about neural machine translation, leading us to believe that it is widely adopted, but this is not necessarily the case. Over 77% of enterprises are not using neural machine translation widely, though many are interested in piloting new programs.
The above data were taken from an industry-wide poll of enterprise-side localization managers to assess how NMT is used (or not used) in their respective programs.
Executives perceive AI as a business driver and a strategic differentiator, and as a key investment in the company’s future. A Statista survey found that 84% of businesses adopt AI because they believe it gives them a competitive advantage. In this environment, many executives are eager to invest in any initiative that promises to harness the power of artificial intelligence.
This presents a new opportunity for managers in those companies. By proactively embracing and channeling this enthusiasm for AI, they can gain the attention of executive stakeholders and create more opportunities to advance their individual, team, and professional goals within their organizations.
To put it another way – AI is presenting opportunities for strategic differentiation not only to organizations, but to the individuals working within those organizations. Professionals interested in taking advantage of these opportunities will fully embrace this new reality.
Natural Language Processing (NLP) is a subfield of artificial intelligence that covers interactions between computers and human languages, in particular how computers process, parse and understand natural language. Natural language processing comprises speech recognition, natural language understanding, natural language generation, and machine translation.
Combined with AI technologies such as information retrieval, machine learning, and sentiment analysis, NLP lays the foundation to create breakthrough opportunities in such areas as customer interaction, information monitoring, data mining, and expert systems. NLP allows professionals to find information and bring the collective human knowledge in a subject matter area to bear on individual cases.
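Two of the NLP building blocks just mentioned, parsing text and sentiment analysis, can be illustrated in a few lines. The word lists below are hypothetical stand-ins for the large lexicons and learned models real systems use; this is a toy sketch, not a production approach.

```python
# Toy NLP: tokenization plus lexicon-based sentiment scoring.
import re

# Hypothetical mini-lexicons; real systems use thousands of entries
# or a trained classifier instead.
POSITIVE = {"great", "excellent", "helpful", "fast"}
NEGATIVE = {"slow", "broken", "useless", "poor"}

def tokenize(text):
    """Lowercase the text and split it into word tokens."""
    return re.findall(r"[a-z']+", text.lower())

def sentiment(text):
    """Score = positive lexicon hits minus negative lexicon hits."""
    tokens = tokenize(text)
    return sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)

score = sentiment("Great product, but the support was slow and useless.")
# score == -1: one positive hit, two negative hits
```

Even this crude scorer hints at the monitoring use case: run it over customer reviews in bulk and negative spikes surface automatically, which is the kind of information-monitoring foundation the paragraph above describes.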
Let tech giants and specialist companies deal with fundamental challenges such as making AI easier to use, faster and cheaper to customize and deploy, improving its accuracy, and training enough machine learning engineers to meet demand. For most businesses, there is one key pragmatic question to answer: how to apply AI in their subject matter area. AI’s big promise is to create powerful efficiency gains and new possibilities. How do you achieve that?
Machine learning and natural language processing can be applied to every industry and every field. Here are some examples of AI startups and AI implementation projects in established companies.
Medical AI makes extensive use of image recognition and annotation for diagnostics. On the horizon are prediction expert systems that match patient symptoms and measurements from medical devices with treatments. Getting patient data from paper and hospital records into structured databases to improve drug efficacy is a prominent NLP challenge.
A prolific industry for AI, finance makes heavy use of prediction systems. More recent developments include news and social media listening to understand the impact on ticker valuations. Chatbots make forays into personal finance and banking to promote self-service and make bank customer support systems more efficient.
The judicial and legal fields make heavy use of language AI and machine translation. New legislation across multiple jurisdictions can be tracked and summarized in a single database. Prosecutors scan hard drives for digital evidence in multiple languages in a process called eDiscovery. Analytics and expert systems aim to predict court outcomes based on the case materials and judge personality.
The media is all about language, speech, and video. AI-powered monitoring systems scan the news and interpret it. Captioning systems render the spoken word into text in real time for further analysis and translation. Image recognition systems identify actors and characters on the screen to match content and advertising.
Energy distribution and efficiency are the major challenges in the energy sector being addressed by AI. With more personal power plants integrated into the grid, and intermittent sources such as wind and solar, AI can make energy distribution smarter and help develop marketplaces for electric energy.
Virtually every sector can benefit from AI and NLP advances: customer interaction; news, review, and social media listening across multiple languages; converting piecemeal, scattered company information into structured data; information retrieval; and voice UI for a smoother customer experience.
Not all ideas survive; natural selection is at work here. In fact, more often than not, artificial intelligence projects fail. They look good on paper, but when it comes to actual adoption, these experimental systems fall short of delivering meaningful results and are discarded.
According to Forrester surveys, AI adoption all but ground to a halt last year. In 2017, 51% of enterprises surveyed used some form of AI; in 2018 the figure rose by only two percentage points, to 53%. Forrester now says 75% of early AI projects underwhelm. Gartner concurs with a similar prediction: 85% of AI projects won’t deliver for their sponsors.
The startup scene is littered with dead AI companies that failed to connect with the market, delivered low-quality products, or built functions that were useless in the real world. Some of the most spectacular failures in the AI world belong to the giants of the industry.
Here are just a few examples of very public initiatives that failed. Fortunately, these companies have plenty of resources to invest, and these failures are helping to pave the way for future success.
On the business side, there aren’t enough leaders to apply machine intelligence, identify use cases, set and manage expectations and reliably execute AI projects. Some of the problems include:
Human processes and predictions have developed over decades, while AI has had only a few years to learn; yet executives expect it to outperform humans from early on.
Where humans are allowed to fail, AI isn’t. AI systems see widespread adoption only after their quality exceeds a near-human accuracy threshold. Machine translation compared to human translation is a classic example: the average edit rate for human translation is 11%, yet machine translation is expected to be 100% accurate.
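An 11% edit rate means that reviewers change roughly one word in nine of a human translation. The metric behind that figure can be sketched as a simplified, TER-style word-level edit rate: the minimum number of word insertions, deletions, and substitutions needed to turn a translation into the reviewed reference, divided by the reference length. The sentences below are illustrative examples, not industry data.

```python
# Word-level edit rate via the classic Levenshtein dynamic program.

def edit_distance(a, b):
    """Minimum insertions/deletions/substitutions turning list a into b."""
    prev = list(range(len(b) + 1))
    for i, wa in enumerate(a, 1):
        curr = [i]
        for j, wb in enumerate(b, 1):
            cost = 0 if wa == wb else 1
            curr.append(min(prev[j] + 1,        # delete
                            curr[j - 1] + 1,    # insert
                            prev[j - 1] + cost  # substitute/match
                            ))
        prev = curr
    return prev[-1]

def edit_rate(hypothesis, reference):
    """Edits per reference word, e.g. 0.11 for an 11% edit rate."""
    hyp, ref = hypothesis.split(), reference.split()
    return edit_distance(hyp, ref) / len(ref)

rate = edit_rate("the cat sat on mat", "the cat sat on the mat")
# rate == 1/6: one inserted word over a six-word reference
```

By this yardstick, holding MT to a 0% edit rate while accepting 11% from humans is exactly the double standard the paragraph describes.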
AI projects often start by collecting all the available data and then looking for a way to use it. The problem is that only a fraction of that data has value.
All of the previous factors come down to data. An AI model’s accuracy, output quality, and ability to outperform humans and meet business objectives depend on the size and cleanliness of the dataset. At project launch there is typically plenty of general data from public sources, from parsing websites, and from company record-keeping, but there is no guarantee that the corpus is free of patches of information that can contaminate the dataset. Automated tests detect some of it, but humans are still needed to scrutinize results and to feed machines curated training data that tunes them and improves performance.
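The automated tests mentioned above can be as simple as mechanical filters run over a parallel corpus before training. Here is a sketch of three common checks: empty segments, implausible length ratios (a sign of misalignment), and exact duplicates. The threshold and the sample segments are illustrative, not a production pipeline.

```python
# Simple automated cleaning for (source, target) parallel corpus pairs.

def clean_corpus(pairs, max_ratio=3.0):
    """Keep only plausible segment pairs; drop the obvious contamination."""
    seen = set()
    kept = []
    for src, tgt in pairs:
        src, tgt = src.strip(), tgt.strip()
        if not src or not tgt:
            continue  # empty segment: nothing to learn from
        ratio = max(len(src), len(tgt)) / min(len(src), len(tgt))
        if ratio > max_ratio:
            continue  # length mismatch suggests misalignment
        if (src, tgt) in seen:
            continue  # exact duplicate skews the training distribution
        seen.add((src, tgt))
        kept.append((src, tgt))
    return kept

sample = [
    ("Hello world", "Hallo Welt"),
    ("Hello world", "Hallo Welt"),  # duplicate
    ("Yes", "Dies ist ein sehr langer, falsch ausgerichteter Satz."),
    ("", "Leer"),                   # empty source
]
cleaned = clean_corpus(sample)
# cleaned == [("Hello world", "Hallo Welt")]
```

Filters like these catch the easy cases; the subtler contamination, such as fluent but wrong translations, is precisely what still requires human scrutiny.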
On the production side, AI in localization today is manpower: teaching engines by tagging, labeling, evaluating, collecting, and integrating data. These tasks require program and vendor management competencies that software engineers and mid-level managers in marketing, support, and product departments often lack. Success often depends on the ability to run teams of hundreds of people efficiently, on time and on budget.
The language services industry is all about providing, well, language services. Services, as a rule, are incredibly hard to patent or trademark. You cannot patent the act of translating any more than you could copyright a verb. This doesn’t mean that translation companies haven’t tried to gain a competitive advantage by building and protecting their own intellectual property (IP). Usually this comes in the form of patenting either a technology or a certain workflow process. Most of the time, though, the technologies and the workflow processes are so interconnected that it is difficult to distinguish one from the other.