Some cogent aphorisms are often misattributed, perhaps because they look too pithy to be true. Predictably, many of these are ascribed to Albert Einstein, and two seem perfectly apt for the translation industry. One goes, “Insanity is doing the same thing over and over again and expecting different results,” and the other, “The significant problems we face cannot be solved at the same level of thinking we were at when we created them.”
By contrast, the closing sentence of Isaac Asimov’s essay Who Needs Money?, from his 1981 book Change! Seventy-One Glimpses of the Future, is genuinely original: “Part of the inhumanity of the computer is that once it is competently programmed and working smoothly—it is completely honest.” This too fits the translation industry perfectly, loaded as it is with bullshit.
Some insanity must indeed exist in the repetition compulsion that keeps reproposing the same models, models that have proved largely inefficient and ineffective for centuries; when it comes to technology, this insanity overflows into a desperate Neo-Luddism seasoned with plenty of wishful thinking and sheer incompetence.
On the other hand, unless we credit the translation community with the power to corrupt even the best minds of other industries, this insanity must be contagious, if a brilliant mind like Lilt’s CEO Spence Green could be puzzled by “the complexity of the translation supply chain, and the relationship dynamics in that supply chain.”
It is indeed hard to imagine how the logistical and business-model problems could be greater than the original technology ones, but apparently so they were.
It is possibly easier to imagine that translation orthodoxy is very hard to bend. And yet it is something that anyone who wants to enter the translation industry, let alone become part of the translation community, must quickly learn to deal with.
Not surprisingly, even the fiercest, most mulish opponents of translation technology, especially machine translation, have been coming to terms with both, even openly refusing to have it banned while admitting to using it frequently.
Gutta cavat lapidem (the drop hollows the stone), even though they stubbornly insist on hoping people “understand that machine translation is not really a translation, but a tool, which may look like a translation to some” and that “human translators can look forward to a booming business in the exciting field of human translation for a few more centuries.” Good luck with that.
It is understandable, then, that it may still be necessary to clarify that post-editing is a workflow while machine translation is a technology, and that the two terms should therefore not be used interchangeably. It is also misleading to say that “what translators have done with fuzzy matches for decades is precisely post-editing, the difference being only the provenance of the matches.” In fact, fuzzy matches over 85% are inherently correct and coherent and generally require only minor changes, while machine-translated segments, even those coming from state-of-the-art NMT engines, may contain errors and inaccuracies.
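To make the distinction concrete, a fuzzy match score is essentially a string-similarity ratio between a new source segment and a segment already stored in a translation memory. A minimal sketch using Python’s standard difflib (a generic similarity ratio, not the proprietary algorithm of any specific CAT tool; the example segments are invented):

```python
from difflib import SequenceMatcher

def fuzzy_score(new_segment: str, tm_segment: str) -> float:
    """Return a 0-100 similarity score between two source segments."""
    return round(SequenceMatcher(None, new_segment, tm_segment).ratio() * 100, 1)

# Hypothetical translation-memory entry and incoming segment.
tm_source = "Click Save to store your changes."
new_source = "Click Save to store all your changes."

score = fuzzy_score(new_source, tm_source)
# A score above the usual 85% threshold means the stored human translation
# needs only minor edits; below it, the match is of little help, and MT
# output (with its different error profile) may be offered instead.
print(score)
```

The key point the example illustrates: a high fuzzy score guarantees the proposed target text was once a correct human translation of something very similar, whereas no score attached to MT output carries that guarantee.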
The whole translation community has stubbornly kept alive century-old processes and models that have long been proved outdated, inconvenient, and uneconomic.
This resistance has been fueled by the belief that translation, whatever its kind, has peculiar artistic features of its own that supposedly make it impervious to any automatic processing. This belief has in turn been fed by the resistance itself, establishing a vicious circle that is still unbroken and, in fact, armored.
The history of technology, especially artificial intelligence, has always been characterized by an ‘expectation gap’, and this helps explain why the gap for machine translation is still wider than in many other areas.
Humans have consistently aimed to gauge technology on its humanity (the anthropomorphization of computers), and this has reinforced the translation community’s resistance to adopting machine translation. Also, the lack of an objective, simple, and easily accessible way of benchmarking HT and MT capabilities against reality has led to a series of ‘boom and bust’ cycles.
Given these premises, although post-editing is at least as old as machine translation, and in spite of the many bombastic declarations following any new technology, engine, or release, it is here to stay, because insiders have always known perfectly well that it would be necessary to dynamically assess output and make corrections before any downstream processing.
Nonetheless, as Milica Panic duly noted in her report on the 13th TAUS QE Summit, quoting DCU’s Sharon O’Brien, post-editing does not belong to translation studies: it is seen as part of revision, and therefore (sic) separate from translation. Panic also reports O’Brien pointing out that TM and MT are talked about separately, while in fact they have merged.
Creativity is still supposed to be at the core of all study programs, along with a renewed effort to boost specialization in target markets, cultures, and advertising, feeding the ambition of training new generations to be content creators.
This is exactly the kind of utopianism at the base of the EMT, which merely certifies the crystallization of an academic pond whose only purpose, in its 12 years of existence, has been to perpetuate the models and processes described above, ensuring the survival of dinosaurs that should already be extinct, without any help from a giant meteorite.
On the other hand, what is the favorite social activity of the translation community? Reassurance. Members of the community spend much of their time reassuring one another, patting each other on the back, telling each other how well they are doing, saying that everything is great and can only get better. And when something goes awry, it is always someone else’s fault. Everybody lies.
What is necessary, then, in translation education today to be marketable? Translation competence is now a three-legged table resting on data, tools, and knowledge: it is less and less a question of language knowledge and more one of knowing how to use it, and the right tools to exploit it. The three legs must be of the same length, and grow at the same pace, for the table not to wobble. Technology proficiency, not just IT literacy, is pivotal. When everything you need is available through the Internet, you must know how the Internet really works in order to harness its power. In this respect, the practical intelligence needed to handle the increasing amount of data and information will be ever more necessary. And it is typically human.
Translation will be more and more an engineering matter, but machines will remain dependent on humans to build their “knowledge” from training data for the foreseeable future. Like many so-called “creative jobs,” translation will be at serious risk of being replaced by machines in the next few years, and “more lucrative markets” in the creative and language-related area—if any—will be harder and harder to spot.
The translation community may not know it, but it desperately needs some positive deviance. Positive deviants are people with uncommon behaviors and methods that enable them to find better solutions to common problems than the majority, while having access to the same resources and facing similar or worse challenges.
Applying positive deviance is now much easier than it was in the original nutrition research of the 1970s, thanks to the abundance and power of data. For example, it can be applied to recognize and address quality issues, to predict translation quality outcomes, or to find the best-suited person for a task.
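As an illustration of what such a data-driven search for positive deviants might look like, here is a minimal, hypothetical sketch: given per-translator quality scores (all names and figures invented, e.g. from LQA reviews), it flags as positive deviants those who score more than one standard deviation above the group mean:

```python
from statistics import mean, stdev

# Hypothetical per-translator quality scores (0-100); invented data.
scores = {
    "translator_a": 78, "translator_b": 81, "translator_c": 95,
    "translator_d": 74, "translator_e": 79, "translator_f": 93,
}

def positive_deviants(quality: dict[str, float]) -> list[str]:
    """Flag outliers scoring more than one standard deviation above the mean."""
    mu = mean(quality.values())
    sigma = stdev(quality.values())
    return sorted(name for name, s in quality.items() if s > mu + sigma)

# The point of the method is then to study how these outliers work,
# and spread their uncommon practices to the rest of the group.
print(positive_deviants(scores))
```

The one-standard-deviation cutoff is an arbitrary illustration; the positive-deviance premise is only that the outliers face the same constraints and resources as everyone else, so whatever they do differently is transferable.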
The problem with data lies in its analysis and subsequent interpretation. Both require a different kind of skill from those developed under the teaching of old-fashioned teachers in old-fashioned courses.
Is post-editing really the new frontier in translation education? Or is it rather something that most academics have yet to grasp?