AI is not about replacing the human with a robot.
It is about taking the robot out of the human.
Digitization has gained momentum and is growing in both scale and speed, with most efforts aiming at making manual tasks less repetitive and boring, as well as more accurate and faster.
Economists studying the relationship between technological change, productivity and employment almost unanimously agree that AI systems are going to transform the economy. However, they have also been insisting that, to increase productivity, investments in AI must go together with investments in IT infrastructure, skills and business processes. Some of these investments involve ‘intangibles’ such as data, information and knowledge.
Indeed, digitization or, more properly, digital transformation should be instrumental in improving organizations. Yet there are still people who think of artificial intelligence (AI) as a kind of magic that will turn their business into a fully automated plant in a flash, letting them switch the lights off and just hang out, waiting for a text message whenever a dull problem occurs in the workflow.
Brynjolfsson’s paper focuses on how the need for the above intangibles is delaying the impact of AI on productivity growth and labor demand. The other paper shows how AI systems are still a long way from reading the news, re-planning supply chains in response to anticipated events or trade disputes, and adapting production tasks to new sources of parts and materials.
In fact, many companies are still in an initial adoption stage, figuring out which data to collect, how to collect and organize it, and how to derive greater insight into current operations. This requires integrating multiple data sources and skills, while ensuring continuous improvement of the production system.
According to a 2018 survey by New Vantage Partners, 97% of firms are investing in big data and AI, and the primary goal for most is to deploy advanced analytics capabilities for business decision-making.
This is also why many people think machines are going to replace them overnight.
AI or ML?
Most people simply don’t know what AI really is and how it works today. In fact, in a typical synecdoche, AI is mostly confused with machine learning (ML), which is used to recognize and learn patterns in data and is highly focused on performing very specific tasks. This is why AI cannot take over the world.
It is true that self-employed people working in very popular—but lowest paid—occupations have the greatest risk of losing their job to AI, as AI applications increasingly replace routine work. However, this does not mean that every profession is at risk. Those involving teamwork, negotiation and extensive decision-making are not, at least until AI’s imitation game comes that far. On the other hand, self-employed workers, as well as SMEs, just don’t have access to the same resources as larger companies, which makes it difficult for them to exploit AI and neutralize risks.
So far, ML is used mostly for analytics, gaining data-driven insights for decision-making, making jobs easier, faster and more productive, and outcomes more reliable.
However, ML models are a major challenge when looking to create value through AI. Also, the more complex the models, the harder it gets to achieve and, most importantly, to deploy the right mix of performance, portability and scalability. For example, ML models need to be updated more frequently than standard software applications.
Returning to the above intangibles: if Python is to ML what SQL is to relational databases, the constant is data, which needs to be as unbiased and trusted as the systems themselves. Today, every company has multiple databases for nearly every business task, and SQL and RDBMS 101 are a must for virtually every job applicant. Tomorrow, the same is going to happen with Python or R and TensorFlow or MATLAB. In fact, Gartner forecasts that, by 2020, the focus within machine learning will shift from algorithms to high-value data.
Thirty years ago, mastering Lotus 1-2-3 was a real plus; five years later it was HTML, and so on… The translation industry has been dealing with practically a single technology—incidentally based on an even older one—for twenty-five years. How about machine translation, then? MT has been around for 70 years, first as RbMT for 50 years, then, for a decade or so, as SMT, and now as NMT. Artificial neural networks are as old as MT, and ML is 60 years old.
However, along with specialized model-building skills, human help is necessary in ML to label data and make it readable to the algorithms, so that they can find the patterns associating the various points in data sets.
What are the constants in the translation industry? Translation, of course, what else? Terminology is another constant in translation work, maybe the only one.
Translation and terminology work provide a lot of language data, which is certainly pivotal in ML; but project data is also crucial, since it is what lets ML find patterns in performance.
Do you know of any translation industry player, at any level, storing, structuring and analyzing project data on a regular basis and deriving insights to improve their processes and business? Translation industry players are obsessed with language data, although they seldom curate it.
Also, they are the typical victims of The Tyranny of Metrics. Even though their performance is hardly impaired by measurability bias and deleterious numerical evaluations, they certainly keep using bizarre performance metrics as the basis of reward (rarely) or sanction.
ML for Measuring
Indeed, some things can be measured, some are worth measuring, and some measures may not be relevant.
The costs of measuring may be greater than the benefits and draw effort away from the really important things or provide distorted knowledge, which seems solid but is actually deceptive.
If measures must be the criteria to reward and sanction, e.g. the basis of pay-for-performance or ratings, then these measures should be based on long historical series, and the underlying metrics should not be prone to being “gamed”, i.e. deliberately misused.
This is exactly where ML can come into play with project data. Customers could use their historical project data to determine, for example, how many edit/review rounds are necessary, or whether any are necessary at all. Vendors could use their historical project data to find the best pick for a job, provided, of course, that their vendor base is constantly updated and each profile thoroughly curated.
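As a minimal sketch of the first idea, consider mining historical project records to estimate how many review rounds a new job of a given domain is likely to need. All field names and figures here are invented for illustration; a real system would draw on the customer’s own project database.

```python
# Hypothetical sketch: deriving a review-round estimate from past projects.
# Each record lists the errors found in each successive review round;
# the number of rounds run is simply the length of that list.
past_projects = [
    {"domain": "legal",     "errors_found_per_round": [12, 3, 0]},
    {"domain": "legal",     "errors_found_per_round": [5, 0]},
    {"domain": "marketing", "errors_found_per_round": [2, 0]},
    {"domain": "marketing", "errors_found_per_round": [1, 0]},
]

def rounds_needed(history, domain):
    """Average number of review rounds historically run for a domain."""
    rounds = [len(p["errors_found_per_round"])
              for p in history if p["domain"] == domain]
    return sum(rounds) / len(rounds) if rounds else None

print(rounds_needed(past_projects, "legal"))      # legal jobs needed more rounds
print(rounds_needed(past_projects, "marketing"))
```

Even this trivial average already answers the question “how many rounds, if any?”; an actual ML model would refine it with more features, such as translator, text type and deadline pressure.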
ML and Terminology
Terminology work is getting more and more important and is no longer “just” translation-oriented. Indeed, the scope of terminology work is significantly wider, ranging from authoring to knowledge management, from education and training to marketing. Today, with an organization’s systems all basically integrated, terminology is the main information vehicle, whether it is used to name, label and detail products, prompt an action, help users, support and drive a maintenance task, or persuade in selling and purchasing.
However, since, optimistically, less than half of what buyers spend goes to language-oriented tasks, it comes as no surprise that terminology work is still so neglected.
The hardest part of terminology work, as well as the most painstaking and boring, although possibly the most fruitful, is term mining. It is also costly, especially when done on bulky texts, and requires specific skills and knowledge. This is why the practical value of terminology work is largely underestimated and seldom matches its technical value, until something goes wrong, obviously.
In the past, term mining software followed a linguistic or a statistical approach, sometimes a combination of the two. A linguistic approach is based on rules and dictionaries to find all possible combinations corresponding to certain structures of speech. This can be done one language at a time. A statistical approach, on the contrary, is language-independent and based on frequency. Foreign terms, synonyms, variants, abbreviations, ellipses and improper use of language are all issues when using a linguistic approach. Similarly, frequency may not necessarily be a sign of importance, and actions are necessary to reduce noise when using a statistical approach.
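The statistical approach described above can be sketched in a few lines: rank candidate n-grams by raw frequency, then filter out those that begin or end with a stopword to reduce noise. The sample text and stopword list are illustrative only; real extractors use much larger stopword lists and frequency thresholds.

```python
# Minimal sketch of frequency-based (statistical) term extraction.
from collections import Counter
import re

STOPWORDS = {"the", "of", "a", "to", "and", "is"}

def candidate_terms(text, max_n=2):
    """Return candidate terms (1- to max_n-grams) ranked by frequency."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter()
    for n in range(1, max_n + 1):
        for i in range(len(words) - n + 1):
            gram = words[i:i + n]
            # Noise reduction: drop n-grams starting or ending with a stopword.
            if gram[0] in STOPWORDS or gram[-1] in STOPWORDS:
                continue
            counts[" ".join(gram)] += 1
    return counts.most_common()

text = ("The translation memory stores segments. A translation memory "
        "matches new segments against stored segments.")
print(candidate_terms(text)[:3])
```

Note how the bigram “translation memory” surfaces purely from repetition, with no grammar involved: this is exactly why the statistical approach is language-independent, and also why frequency alone cannot tell an important term from a merely common word.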
Today, the linguistic approach is universally adopted, although it does not make things easier and still requires specific skills and knowledge. Also, term mining still heavily depends on the quality of the text to mine and, most importantly, on the time available for separating the wheat from the chaff, i.e. cycling through the list of candidate terms, isolating the relevant ones and discarding the irrelevant ones. Manual mining, on the other hand, can take hours, if not days, depending on the size of the text to mine.
All this keeps term mining out of scope for many businesses, despite the reasonably priced software products available, but it also makes term mining the perfect job for ML.
For example, relations between terms and concepts can be very helpful in detecting the relevant terms in a candidate term list, and ML algorithms could find these relations, in the form of patterns, by combining statistical analysis with conceptual relations. In other words, ML could help find terms that are used together in the same context, as in tag clouds. In many cases, algorithms could also draft tentative definitions from context and, where available, from other corpora and terminology resources.
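The co-occurrence patterns just mentioned can be illustrated with a toy example: count how often pairs of known terms appear in the same sentence, the way a tag cloud groups related words. The sentences and term list below are invented; a real system would work on a full corpus and a candidate term list from the mining step.

```python
# Hedged sketch: sentence-level co-occurrence of terms as a proxy for
# conceptual relations between them.
from collections import Counter
from itertools import combinations

sentences = [
    "the engine drives the gearbox",
    "the gearbox transfers torque to the axle",
    "the engine produces torque",
]
terms = {"engine", "gearbox", "torque", "axle"}

cooc = Counter()
for s in sentences:
    # Terms present in this sentence, sorted so each pair has one canonical key.
    present = sorted(terms & set(s.split()))
    for pair in combinations(present, 2):
        cooc[pair] += 1

print(cooc.most_common())
```

Pairs that keep turning up together (here, for instance, “gearbox” with “torque”) hint at a conceptual relation, which is precisely the kind of signal that can promote a candidate term to a relevant one, or seed a tentative definition drawn from its contexts.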