Article by Gabriel Karandysovsky.
The future is a science fiction novel. And we’re writing the story right now, or so it feels, in real time.
Dan Simmons’ Hyperion (an excellent read whether or not you’re into science fiction) contains a quote from a character — a poet and self-professed master of words — that aptly reflects the current debate in the language services industry. We reproduce it here, albeit with the language slightly redacted (you’ll understand why):
The poet laments the state of the known universe, where literature (and poetry in particular) doesn’t quite have a place anymore — where literature has become a product that is preprogrammed and preprocessed to cater to users’ tastes, with everything and anything being accessible at a moment’s notice through the book author’s futuristic version of the world wide web. Poets are a dying breed, and the character remains the last specimen.
The parallels that can be drawn with what we’re seeing unfolding in front of our eyes today are obvious. You could reasonably copy/paste this quote to many other areas of business today and it wouldn’t feel out of place. It is a manifestation of the collective anxieties we experience whenever talk veers to AI and how it’s out to get our jobs.
Two thoughts immediately come to mind when reading this passage of the book:
The first is that we, the collective language industry, have reached a point somewhere between the word processor (aka machine translation) and the thought processor (could ChatGPT or its offspring become that?) stages. Eerily similar to our reality, isn’t it?
Secondly, Simmons implies that technology is a loop where the emergence of one piece of technology leads to the disappearance (or transformation) of some other technology elsewhere. Let’s push any romanticized opinions we may have about this aside — it’s important to recall that this is how it has always been throughout history. A quick Google search will yield dozens of examples of occupations that no longer exist because we have evolved our ways of thinking, living, and working, effectively rendering them obsolete. The transformation is constant.
And so came the idea of this article: On the one hand, it is an attempt to capture where our collective psyche is at this point, some five or six months after OpenAI publicly launched ChatGPT, and on the other, to take stock of all the work that still lies ahead of us as we continue to report on, test, and implement this latest piece of technology in our lives.
Attempting to distill the lively debate surrounding AI and ChatGPT in one article would be nigh on impossible. There are so many facets to this discourse, from mapping use cases (many are experimenting with large language models, or LLMs) to security concerns (as with any transformative technology, the legislative landscape is bound to catch up and is even starting to add roadblocks) to whether it’ll make translation and localization irrelevant (spoiler: no, it will not).
Rather, let’s focus on finding our bearings: right now, in spring 2023, getting familiar with the AI debate is becoming increasingly essential.
There is a handy tool that allows us to take a snapshot of where we are in relation to this (or really any) technology — Rogers’ theory of the diffusion of innovations, which lets us visualize where we sit along the adoption curve. Right now, it does feel as if we are collectively trying to bridge the chasm between the innovators and early adopters on one side and the pragmatists on the other.
The diffusion of innovations theory, developed by E.M. Rogers in 1962, describes how a product, behavior, or novel idea spreads across a population.
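As a quick aside for the data-minded: the familiar adopter-category percentages fall straight out of a bell curve of adoption times cut at one and two standard deviations from the mean (Rogers rounds them to 2.5 / 13.5 / 34 / 34 / 16 percent). A minimal Python sketch, using only the standard library, to derive them:

```python
import math

def norm_cdf(x: float) -> float:
    """Standard normal CDF, computed via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Rogers cuts the adoption-time bell curve at -2, -1, 0, and +1
# standard deviations to define the five adopter categories.
cutoffs = [-2.0, -1.0, 0.0, 1.0]
labels = ["innovators", "early adopters", "early majority",
          "late majority", "laggards"]

edges = [0.0] + [norm_cdf(c) for c in cutoffs] + [1.0]
for label, lo, hi in zip(labels, edges, edges[1:]):
    print(f"{label:>14}: {100 * (hi - lo):4.1f}%")
# prints roughly 2.3, 13.6, 34.1, 34.1, and 15.9 percent
```

The exact normal-curve areas (2.3 percent of buyers are innovators, and so on) are what Rogers rounded into the canonical figures quoted in most adoption-curve charts.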
In truth, we’re still just at the beginning of the AI journey, barely scratching the surface of what generative AI applications such as ChatGPT can do for us. Gartner’s AI hype cycle still places us in the innovation trigger/breakthrough phase and projects mass user adoption of AI later in this decade. The enthusiasts may have been talking ChatGPT up, but let’s remember that AI is already everywhere — not to mention there are different types of AI, too. AI already powers many of the tools and applications we use, from curated song playlists on Spotify to recommendations on Netflix to customized training programs for Freeletics users. So why does it feel different this time around?
Now, one of the questions this brings about is: Should we care?
There has been enough (virtual) ink spilled on the matter, so most of us should be able to take a stance on ChatGPT by now. Depending on where on the above spectrum of sentiment you find yourself, the key action to take in the immediate future is to continue asking important questions regarding ChatGPT and how it might be used (or AI at large for that matter).
Let’s catalog a few frequently asked questions (or frequently voiced concerns) and provide some initial answers and/or food for thought.
For now, it may be (which is valid grounds for caring and for making the debate about AI more inclusive). There is no stopping the AI train, however. In the same way no one abandoned cars a century ago to return to horse-drawn carriages, the momentum is firmly on the side of AI. Investment continues to flow in, big tech companies are jockeying for position at the front of the AI race, and the public sector is slowly (desperately?) catching up as well. The implication for practitioners and end users is that there is much training and upskilling to do to catch up with a new reality where AI is more prominent in the workplace. New AI applications will keep popping up. We all ought to be developing our AI literacy.
This is difficult to quantify, but there is a good (and recent) example language professionals can refer to when trying to frame the ChatGPT conversation in monetary terms — MT. Conventional machine translation technology has been proven to save millions for companies that have implemented it in their workflows and, in a way, it serves as the benchmark. It took a while before MT/NMT reached the mass adoption stage, but it has had a transformational effect on those who stuck with it and kept iterating on the quality of the output. The relative cost of implementation has been far outweighed by the cost savings that MT generates (not to mention freeing human hands for other value-add tasks).
The underlying infrastructure and computing power (and the associated cost) to power LLMs may not be for everyone. But global companies that routinely generate millions of dollars in profit won’t likely balk at the expense of implementing generative AI — especially since it has so many more uses than “simple” MT.
Nimdzi’s qualitative research shows that practically all companies (and all of their departments, from HR to sales to production) are investigating where and how generative AI can be implemented. Understandably, many are keeping internal developments under wraps as AI is also seen as a key competitive advantage. There is also the fact that many companies are still undergoing digital transformation, and centralizing AI efforts is taking time. Once AI clicks into place — and buyers figure out how it works alongside and within well-established translation and localization workflows — expect that the client-service provider relationship may need reengineering. LSPs will need to reinvent themselves once again. And they are so good at that.
And yet, ChatGPT and the like won’t be for everyone. Decision-makers on the enterprise side are humans too (we are still far away from the AI-tinted dystopia) and sit somewhere on the adopter-to-laggard spectrum. Some translation and localization managers we’ve recently interviewed shared that they’ve been dismissive of the debate because their company’s leadership decided to skip the AI train altogether: they pride themselves on a human-centric approach to their products. Several studies Nimdzi has done on MT over the last couple of years have shown that adoption of the technology currently sits at around 60 percent — and MT has been around for quite a while. It may take some time before generative AI embedded in translation workflows reaches similar numbers (but that is speculation at this point, of course).
Not that anyone is openly talking about doing so. The focus right now seems to lie in figuring out if and how AI can be leveraged in existing workflows. However, at some point, it is fair to expect questions of transparency, data privacy, and ethics to arise from those who ultimately pay the bill: the end users. AI has had its coming-out moment, where it’s no longer in the shadows but is powering frontend interfaces that users engage with directly. In late 2021, a full year before the ChatGPT buzz began, Nimdzi conducted a limited-scope sentiment study across six countries (US, UK, France, Japan, China, and India) in which we quizzed end users about a number of AI-related topics. We’re sharing two data points with you here.
[Two charts showing end-user responses to the two AI-related questions, by country. Source: Nimdzi Insights]
As the results of these two questions show, attitudes and sentiments toward AI will differ across geographies. It’s very likely that this sentiment has evolved since the time our study was conducted (and especially in light of all the buzz surrounding ChatGPT).
Ultimately, the question of whether to use AI is not too different from the “Should I translate or should I not translate?” debate the language industry has been trying to find an answer to for decades. Sensibilities and expectations regarding AI will also differ from country to country. As users have access to more information on AI and are getting more educated about it, sentiment will continue to evolve. Expectations will rise. Global companies will do well to factor everything in when making product-related decisions.
Inevitably, the ChatGPT buzz will wind down (or will be replaced by the next big thing). Let’s face it: LLMs are here to stay, and we’ll likely see bigger and better products than ChatGPT come out of them as companies unveil what they’ve been working on (examples of how GPT-4 can be used via APIs have already been announced). Now that all the industry luminaries have spoken, however, the rest of us pragmatists need to roll up our sleeves — it’s on us to fashion LLMs into products that are viable and tick the right boxes in terms of process integration, user experience, transparency, and ethics.
Science fiction novels have a tendency to skip the part where the worker bees get to work building the future and simply fast-track us to the dystopia. Plot is king, after all. But now it’s on us to come after the inventors and make sure that AI is used in the right ways.