
The reality of digitisation and the pace with which it is infiltrating and impacting on our everyday lives is undeniable – and, seemingly, unstoppable. From apps to social media, interactive advertisements to smart fridges, virtually every aspect of our modern existence is touched by technology in some way. The impact of technological innovation is keenly felt in the translation community, with fears among some that automated or machine translation (MT) will soon replace human translators altogether. As a newcomer to the field, I should perhaps find all this talk of a machine takeover a little alarming. However, I’m determined to take a more optimistic stance – and the speakers at the talk entitled “Translators in the digital era – what kind of jobs will we have ten years from now?” at this year’s Language Show at London’s Olympia exhibition centre certainly seemed to share this attitude.

There are a number of technologies that have shaped the job of the translator in the past, and which are set to continue making an impact in the years to come. As part of his segment of the talk, translator and interpreter Michael Wells asserted that “statistical machine-based translation is dead; neural machine translation has taken over”. These two types of translation are the most recent incarnations of the overarching domain of MT. Until around the 1980s, rule-based systems, which relied on extensive sets of hand-crafted linguistic rules and exceptions, formed the basis of most MT. These were superseded in the 1990s by statistical methods, which drew on real examples of language use from giant corpora (large collections of existing source texts paired with their translations) to produce translations of individual words and short phrases. The problem with this kind of translation was that the broader context around the individual units being translated was often lost. More recently, developments in neural machine translation – tools built on artificial neural networks, loosely modelled on how the brain works – have meant that whole sentences can be translated in one go, speeding up the translation process and ensuring that the whole (linguistic) context of each sentence is taken into consideration.
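To make that context problem concrete, here is a deliberately crude sketch in Python (purely illustrative – the phrase table and the German example are my own invention, and no real statistical system was ever this simple). A system that translates short phrases in isolation has no way of letting the rest of the sentence settle the meaning of an ambiguous word like Gericht, which can be either a “court” or a “dish”:

```python
# Toy "phrase table": short source phrases mapped to target phrases,
# with no knowledge of the wider sentence they appear in.
phrase_table = {
    "das gericht": "the court",   # the more frequent rendering in our imagined corpus
    "ist lecker": "is delicious",
}

def phrase_translate(sentence: str) -> str:
    """Translate by greedily matching known two-word phrases, left to right."""
    words = sentence.lower().split()
    output = []
    i = 0
    while i < len(words):
        pair = " ".join(words[i:i + 2])
        if pair in phrase_table:
            output.append(phrase_table[pair])
            i += 2
        else:
            output.append(words[i])  # pass unknown words through untranslated
            i += 1
    return " ".join(output)

# "Das Gericht ist lecker" should come out as "The dish is delicious",
# but the phrase-level system cannot see the food-related context:
print(phrase_translate("Das Gericht ist lecker"))
# -> the court is delicious
```

A neural system, by contrast, encodes the whole sentence before producing any output, which is why a word like lecker (“delicious”) can tip the balance towards “dish”.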

Rise of the machines?

Two other crucial developments that have been – and continue to be – game-changers in the translation industry are the Internet and the advancement of CAT (computer-assisted translation) tools. Some of the speakers alluded wistfully (and others with a clear sense of good riddance!) to the days when it was still necessary to call up your local fishmonger for advice on the correct translation of a very specific fish bone, or when freelancers had to haul dog-eared paper dictionaries around to assist them in their research. Now, translators have a wealth of knowledge and information spanning virtually every sector and industry available to them quite literally at the click of a button. CAT tools and the translation memory functions they offer mean that repetitions and similar phrasings across documents are matched up with previous translations before the task has even begun, and termbase glossaries help ensure that customers’ specific terminology is applied accurately and consistently across different texts, no matter who’s doing the translating or where the text will ultimately appear.
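For the curious, here is a toy sketch in Python of the translation memory principle (the segment pairs, the 75% threshold and the character-based matching are all invented for illustration – commercial CAT tools use far more sophisticated matching). Each new source segment is compared against previously translated ones, and a sufficiently close match is offered to the translator together with its stored translation:

```python
from difflib import SequenceMatcher

# Hypothetical translation memory: previously translated source/target pairs.
translation_memory = [
    ("Press the red button to stop the machine.",
     "Drücken Sie den roten Knopf, um die Maschine anzuhalten."),
    ("Wear protective gloves at all times.",
     "Tragen Sie jederzeit Schutzhandschuhe."),
]

def tm_lookup(segment: str, threshold: float = 0.75):
    """Return (similarity, stored source, stored target) for the best fuzzy match."""
    best = None
    for source, target in translation_memory:
        score = SequenceMatcher(None, segment.lower(), source.lower()).ratio()
        if score >= threshold and (best is None or score > best[0]):
            best = (score, source, target)
    return best

# A new segment that differs in only one word still finds its near-twin:
match = tm_lookup("Press the green button to stop the machine.")
if match:
    score, source, target = match
    print(f"{score:.0%} match: {source!r} -> {target!r}")
    # prints something like: 93% match: 'Press the red button ...' -> 'Drücken Sie ...'
```

A termbase works on the same look-it-up principle, but at the level of individual approved terms rather than whole segments, which is how a customer’s preferred terminology stays consistent no matter who is translating.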

All these developments demonstrate the huge impact technology has already had on translation as an industry. However, as the speakers all attested, there are some crucial facts that ought to be borne in mind before translators relegate themselves to the linguistic sidelines. Crucially, while MT systems – particularly those harnessing neural networks for their language processing – are no doubt advancing, the fact of the matter remains, as Sarah Bawa Mason, Chair of the Institute of Translation and Interpreting, pointed out: machines are very different from brains. While a machine might be able to work out the linguistic links between the words and phrases in a sentence, detecting the subtle intricacies of tone, register, formality, sarcasm, humour, puns, wordplay and witticisms is a totally different kettle of fish (ein ganz anderer Kessel Fisch in German, in case you were wondering – according to Google Translate, of course). Even on the grammar and punctuation side of things, MT systems still make mistakes. I once had an MT tool suggest (very unhelpfully) that instead of using three armed forces as the correct translation of the Dutch drie strijdkrachten, I should hyphenate to create three-armed forces – a much more terrifying, and wholly incorrect, prospect. Progress is certainly being made, but human input is still very much needed.

Perhaps somewhat paradoxically, it is also important to remember that the very input that goes into MT systems is generated by humans themselves. Without corpora of source and target texts, and the comparable renderings found on multilingual sites like Wikipedia, MT models would have no data on which to train their algorithms and base their output. Clearly, as the above examples demonstrate, the input we have been feeding these systems is not always perfect, which is why translators are still very much needed to ensure that the right messages are being put across and expressed in the appropriate way.

Or a case of working hand in hand?

It’s also important to remember that it’s not just the tools for translation that are changing, but also the subject matter of the translations themselves. With each new technological development and innovation comes the need for writing about it – often in as many language combinations as possible. As technical and commercial translators, we often get to write about state-of-the-art appliances, tools and devices for which, in many cases, very little reference material yet exists. It is very unlikely that an MT tool would be able to cope with the kinds of brand-new words and concepts that human translators have to research meticulously and discuss in detail with their customers. The human touch comes into play here, too. Translators spend a lot of time and effort analysing the context not just of a word or an individual sentence, but of all the words and sentences that make up the document as a whole. They take into account the purpose of the text (persuade? inform? advertise? dissuade?), the specific audience (specialist or non-specialist? native or non-native speakers? adults, young adults, teens, children?), the subject matter (technical? commercial? medical? financial?), the language variant (British, American, Australian or Canadian English? Swiss, Austrian or German German?), any character limits, cultural discrepancies, differences in formality, customer punctuation preferences and all manner of other highly specific contextual cues, engaging with the customer throughout the process to produce a coherent text specifically tailored and tuned to that customer’s needs. I’m not sure any MT system comes anywhere close to this level of sociolinguistic sophistication – or, indeed, human interaction.

Now, don’t get me wrong – I’m by no means opposed to developments in translation technology. I think translation memories are one of the most useful tools in the modern-day translator’s arsenal, and that, provided we make the effort to adapt and embrace change rather than shying away from it, the future of the translator really will be a story of evolution rather than extinction. Crucially, given the sheer volume of text out there that needs to be translated at speed, I do believe that automated translation is an excellent solution for quick, “good enough” renderings in different languages. The question is: is “good enough” really good enough to replace the linguistic, cultural, textual and personal sensitivity of the human translator? Personally, I don’t think so, and I won’t be giving up my day job any time soon.