In recent years, the blogosphere has lost much of its original appeal, mainly because its connected community has largely moved to social media, which now conveys most content. Indeed, social media help much content emerge that would otherwise remain buried. Social media—as we all know—also convey content that would be better ignored, but even crap has its raison d’être: That’s content marketing, baby, content marketing, and there’s nothing you can do about it, nothing.
Content, skills and knowledge
Indeed, this post offers a plea to run some basic psychometrics on the small groups of people one follows on social media. Don’t be fooled by the Facebook/Cambridge Analytica scandal: it’s not rocket science. Even likes can tell you a lot and help you understand what your contacts are paying attention to and why, especially if they are not just virtual acquaintances.
Your contacts’ social media activity can provide you with many more confirmations than you might expect. The fundamentals of content marketing say that the content produced should be of absolute value, but this is hardly true: marketing is supposed to exert its effects anyway, and one does not always have something definitive to say.
What would you think, for example, of an acquaintance of yours recommending a post by someone who admits s/he is an absolute beginner with machine translation, has no technical knowledge of it, and yet thinks s/he can provide his/her customers with solid advice anyway? And what would you think of the same acquaintance defining him/herself as an industry professional while admitting his/her revulsion for MT and declaring his/her cast-iron belief that any professional can spare his/her customers a “poor figure”? Well, these people are telling a lot about themselves with just a post and a like.
The power of data
Seth Stephens-Davidowitz’s Everybody Lies is a terrific book for how simply it shows the power of data. Just like Stephens-Davidowitz in his book, Google’s Mackenzie Nicholson wrong-footed many attendees at the recent Smartling Global Ready EMEA by asking a few classic questions with seemingly obvious and yet invariably incorrect answers. For example, when it comes to clichés, no one would have bet that Italians pay far more attention to price than Germans, Scots, or Israelis, as Google’s data unequivocally shows.
It came as no surprise, then, that analytics generally indicate that in-house reviews mostly prove overly expensive and largely pointless, as Kevin Cohn later showed on the same occasion. Simply put, despite great expectations, almost no actual improvement is recorded. Indeed, most edits are usually irrelevant and simply a matter of personal taste. Incidentally, Kevin Cohn is a data scientist who only speaks English and admittedly knows almost nothing about translation. Anyway, as the wise man says, data ipsa loquuntur.
Hypes you (don’t) expect
Of the many expectations that have been generating hype over the last few years, those about data are not inflated, and people are, maybe slowly but steadily, getting accustomed to reckoning with data-driven predictions. As algorithms grow in number and potential, confidence in their applications will also grow.
In fact, hype is aimed at people outside verticals, so Microsoft’s recent hype about NMT achieving human parity, for example, was not meant for the translation industry.
So why all the fuss?
As a matter of fact, the difference between human and machine translation is becoming thinner and thinner, at least looking at quality scores and statistical incidence. Also, the concept of parity may be quite hard for a layman to grasp. This, if anything, makes the desolation of posts like the one mentioned above even more evident. Indeed, it is pretty unlikely for the general media to get the news right in cases like the Microsoft one: however complete and clear the article might have been, it was misleading even in its title, which is usually the only catchphrase the media pick up.
In Microsoft’s much-vexed and yet, don’t forget it, scientific article, parity is defined mostly as a functional feature, i.e. as a measure of the ability to communicate across language barriers. Parity is measured against professional human translations, while clearly keeping in mind that “computers achieving human quality level is generally considered unattainable and triggers negative reactions from the research community and end users alike” and that “this is understandable, as previous similar announcements have turned out to be overly optimistic.”
As a matter of fact, it is made equally clear that the quality of NMT output in the case examined exceeds that of crowd-sourced non-professional translations, which should come as no surprise to those translation pundits who have read the article.
On the other hand, a recent study from the University of Maryland found that “users reacted more strongly to fluency errors than adequacy errors.” Since the main criterion in recruiting participants was their English language ability, the study indirectly confirms that “adequacy” implies a vertical kind of knowledge, the same that could prevent hypes from arising and spreading.
The unpleasant side of this story is that, once again, many so-called translation professionals still can’t see that MT is just a stress-relieving technology, conceived and developed to enhance translation and make it easier, faster, and possibly better.
That’s why (N)MT is no inflated hype, and it has actually been on the plateau of productivity for years now.
Overcoming language barriers is an ageless aspiration of humankind that does not generate any fears, unlike the much-fabled singularity. Except, possibly, amongst language professionals, despite the continuous, recurrent, self-reassurance (wishful thinking?) that machines will never replace men, at least in this creative and thus undeniably human task.
In the end, the NMT hype falls within mainstream tech news, which is sprayed like toxic gas to win a market war fought on fronts far more profitable than NLP, namely corporate business platforms. Indeed, the NMT arena is dominated by a leading actor, with a supporting actor and many smaller side actors struggling for an appearance on the proscenium. Predictably, a translation industry “star”, which is just a “dwarf” in the global business universe, recently opted to buy rather than build its own NMT engine, citing the scarcity of data scientists—and of money, of course—as the main reason for the decision.
Actually, not only has NMT emerged as the most promising approach, it has also been showing superior performance on public benchmarks, rapid adoption in deployments, and steady improvement. Undeniably, there have also been reports of poor performance, such as with systems built under low-resource conditions, confirming that NMT systems have lower quality out of domain. This implies that the learning curve may be quite steep with respect to the amount and, most importantly, the quality of training data. Also, NMT systems are still barely interpretable, meaning that any improvement is extremely complex and random, when not arbitrary.
Anyway, to be unmistakably clear, MT is definitely “at parity” with human translation, especially when the latter is below expectations, i.e. sadly average, low-grade work. And Arle Lommel is right when he writes that an article titled New Study Shows That MT Isn’t Terrible would not generate much attention. At the same time, though, when he writes that “the only translators who need to worry about machine translation are those who translate like machines,” he can’t possibly even imagine that this is exactly what most human translators have been doing, maybe forcedly, for decades.
Therefore, the NMT hype is such only for people in the translation industry who, on the other hand, are much more receptive to stuff that insiders in other industries would label as crap.
After all, NMT is just another algorithm and, with the world going increasingly digital, (inter)connected, and thus information-intensive, resorting to algorithms is inevitable because it is necessary.
Data as fuel
The fuel of algorithms is data. Unfortunately, despite their long practice in producing language and translation data, translation professionals and businesses have seemingly learnt very little about data and are still very late in adopting data-driven applications. Indeed, data can be an asset if you know what to do with it, how to take advantage of it, and how to profit from it.
In this respect, besides showing a total ignorance of what “big data” is, the inconsiderate use of the nonsensical phrase “translation big data” has seriously damaged any chance of effectually trading language and translation data. This is just one of the impacts of fads and hypes, especially when ignorantly borrowed from and spread through equally ignorant (social) media.
As Andrew Joscelyne finally wrote in his latest post for the TAUS blog, “Language data […] has never been ‘big’ in the Big Data sense.”
In fact, with the translation industry processing less than 1% of translation requests, language data can’t be exactly big, while translation businesses lack the necessary knowledge, tools, and capability to effectively exploit and benefit from translation (project) data. Exceptions exist, of course, but one can count them on the fingers of one hand, and they are all technology providers.
Data and quality
Unfortunately, the translation industry is affected by a syndrome that blames technology for replacing services, products, and habits with impoverished and/or simplified alternatives of lower quality. Luddites, anyone?
Indeed, only human laziness should be blamed for unsatisfactory quality. And this is consistent with the perennial, grueling and inconclusive debate on quality, the magical mystery word that instantly explains everything and forbids further questioning.
A solid example is the anxiety over confidentiality with online MT, which is not quite the issue it is made out to be. Confidentiality is definitely a minor issue for an industry whose players still extensively use email, when not unsecured FTP connections and servers, to exchange files. Confidentiality is definitely not a major issue when it is mostly delegated to NDAs without any enforcement mechanism, especially when non-disclosure agreements are perceived as offensive for revealing a lack of trust and questioning professionalism. Confidentiality is not an issue when, even in spite of bombastic certifications, the violation of confidentiality obligations is around the corner: customer data kept unsecured, no contingency or security plan in place, or the same data re-used for other projects, knowingly or not. Also, in most cases, IPR rather than confidentiality is the real issue.
Anyway, when such issues arise, it is never technology that is to blame but human laziness, sloppiness, helplessness, and ineptness.
Are all these traits also affecting data? Of course they are. It is no accident that translation businesses believe they are so different from other service businesses that no real innovation has ever come from them. Even when they choose to build their own platforms, these are so peculiar that they could never be made available to the whole community, even if their makers would, and they wouldn’t. After all, this is also a reason for the proliferation of unnecessary standards. Narcissism is the boulder blocking the road to change and innovation.
The same dysfunctional approach affects data. For example, should one believe the meager results of the perennial, grueling, and inconclusive debate on quality, quality could only be measured downstream, and only by counting and weighing errors, in a typical red-pen syndrome. On the contrary, a predictive quality score can be computed from past project data, which is extremely interesting for buyers.
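As a minimal sketch of the idea—assuming, purely for illustration, that past projects are scored on a 0–100 scale—a predictive score could be as simple as an exponentially weighted average of a vendor’s historical scores, so that recent projects count more than old ones:

```python
def predict_quality(past_scores, decay=0.7):
    """Predict the next project's quality score from past ones.

    past_scores: chronological list of 0-100 scores (hypothetical scale).
    decay: weight multiplier per step back in time (illustrative value).
    Returns None when there is no history to learn from.
    """
    if not past_scores:
        return None
    weight, total, norm = 1.0, 0.0, 0.0
    for score in reversed(past_scores):  # most recent project first
        total += weight * score
        norm += weight
        weight *= decay  # older projects weigh progressively less
    return total / norm

# Usage: a vendor trending upward gets a prediction near the latest scores.
print(predict_quality([80, 90, 100]))
```

A real predictor would of course use richer features (domain, language pair, deadline pressure), but even this toy version captures the point: the score is computed before the project starts, not after.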
More ML applications
Now, imagine a predictive quality score combined with a post-factum score deduced from content profiling and initial requirements (checklists), classic QA, and linguistic evaluation based on correlation and dependence, precision and recall and edit distance.
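One building block of such a post-factum score is edit distance between the delivered translation and its reviewed version. A hand-rolled sketch, with the 0–1 normalization being an assumption of this example:

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance, computed with dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def post_factum_score(delivered: str, reviewed: str) -> float:
    """1.0 = the reviewer changed nothing; 0.0 = fully rewritten."""
    if not delivered and not reviewed:
        return 1.0
    return 1 - edit_distance(delivered, reviewed) / max(len(delivered), len(reviewed))
```

In practice this would be computed per segment and aggregated alongside the precision/recall figures from QA checks mentioned above.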
Only one weak point would be left, i.e. how to recruit, vet, compensate, and retain vendors so as to always have the best fit.
During his presentation on KPIs at the recent interpretation and translation congress in Breda, XTRF’s Andrzej Nedoma recalled how project managers tend to always use the same resources, who are not necessarily the most suitable.
With vendor managers continuously vetting and monitoring vendors and constantly updating the vendor database, project managers would have a reliable repository to pick from. And with project managers, in turn, updating the vendor database with performance data, this data could be combined with assessments and ratings from customers and peers to feed an algorithm providing the best fit for each new project—in short, ultimately starting a virtuous circle and maximizing customer satisfaction.
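Such an algorithm could start as nothing more than a weighted ranking over the vendor database. The field names, weights, and 0–1 normalization below are purely illustrative:

```python
def best_fit(vendors, w_performance=0.6, w_rating=0.4):
    """Rank vendors by blending measured performance (from project data)
    with customer/peer ratings, both assumed normalized to 0-1.
    Weights are hypothetical and would be tuned per client or domain."""
    blended = lambda v: w_performance * v["performance"] + w_rating * v["rating"]
    return sorted(vendors, key=blended, reverse=True)

# Usage: strong ratings can outrank slightly better raw performance.
candidates = [
    {"name": "A", "performance": 0.90, "rating": 0.70},
    {"name": "B", "performance": 0.80, "rating": 0.95},
]
ranking = best_fit(candidates)
```

The virtuous circle comes from the feedback loop: every completed project writes new performance data back into the database, which re-ranks the candidates for the next one.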
To be unambiguously clear once again, this is by no means an endorsement of translation marketplaces. On the contrary, the inherent vice of translation marketplaces is their ultra-exploitation of information asymmetry: they provide no mechanism for factual vetting and evaluation, and thus, ultimately, for disintermediation. However, any platform that users from all parties could join to be vetted and evaluated—with their performance fairly measured—will eventually prevail.
If the idea of translation marketplaces has not worked out so far, it is not because of a supposedly unique nature of translation; on the contrary, this is one of the conditions that make the translation industry an ideal candidate for disruption. In fact, with suitable data and the right algorithms, machine learning—including deep learning—can provide many high-value solutions.
Where’s the weakness in data then? In humans.