Peras imposuit Iuppiter nobis duas:
propriis repletam vitiis post tergum dedit,
alienis ante pectus suspendit gravem.
(Phaedrus: "Jupiter has burdened us with two bags: the one, filled with our own vices, he placed behind our back; the other, heavy with the faults of others, he hung before our chest.")
If documentation is a necessary evil, a task that no one is willing to do but that everyone is sooner or later, to some degree, involved in, translation is a task most often viewed as unimportant, unjustifiably costly, and downright trivial.
It is not surprising, then, that translation is noticed only when it is rough or missing. On the other hand, a translator’s favorite sport is censuring errors and mistakes in someone else’s work, always without a trace of the indulgence translators claim for themselves.
The ancillary discipline in this favorite sport is rates: complaining that those applied by LSP’s or colleagues are too low or too high, while pricing policies are seldom discussed seriously.
For a serious approach to pricing models and policies, a few questions should be asked first:
- What do translation buyers care about?
- Where do current pricing models come from?
- Do TEP and the current translation pricing models still make sense?
- What is the impact of technology on translation pricing?
- Are freeconomics and/or crowdsourcing the new frontier?
- Is negotiating (on) quality realistic?
- What is the best way to determine how much a translation should cost?
Buyers (and users) care about benefits. Translators, and many LSP’s, care about quality, but quality is a feature, not a benefit. In fact, quality is the unique selling proposition of the entire translation industry, maybe because translators and LSP’s have never been trained to see the difference. Moreover, translators and LSP’s still reason about quality under the same old vague concept that most of them have been accustomed to since their university days, however far it may be from their current practice.
For a pricing model to be effective and competitive, it is important to understand what clients actually need, rather than guessing at what they might need, in order to relieve their pains or provide them with gains.
Not surprisingly, then, translation buyers and vendors disagree on how price influences quality: buyers think price has little to do with quality, while suppliers claim that everyone agrees on the equations high prices = high quality and lower prices = lower expectations. Hence, it should also come as no surprise that, according to a recent Common Sense Advisory survey presented by Don DePalma at the GALA 2013 conference in Miami, for 82% of buyers the best suppliers charge mid-range prices.
A sign hanging in Albert Einstein’s office at Princeton showed a famous quote of the scientist: “Not everything that counts can be counted, and not everything that can be counted counts.”
The current pricing methods in the translation industry all come from the publishing industry tradition. In publishing, space is crucial: the space a piece of text will take up when published costs money and can be a key to success. In fact, length is a basic requirement in advertising, and in many educational environments, length is used as a path to a disciplined approach.
Measuring has always been hard: it requires a reference model defining the distance between two objects. With the advent of computers, measuring became much simpler, even in typography, where imposition determines the cost of press time and materials and the printed sheet must be filled as fully as possible.
To a computer, counting words is quite simple: a word is any sequence of characters with space around it. Therefore, paying on a per-word basis looks quite simple too, and can give the client a precise price quote ahead of time.
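This naive definition is a one-liner for a computer. The sketch below is purely illustrative (real CAT tools apply far more elaborate tokenization rules for numbers, hyphens, and punctuation), but it shows how the count immediately yields a per-word quote:

```python
def count_words(text: str) -> int:
    """Naive word count: any run of characters delimited by whitespace."""
    return len(text.split())

def per_word_quote(text: str, rate_per_word: float) -> float:
    """A precise price quote, known before the job starts."""
    return round(count_words(text) * rate_per_word, 2)

sample = "Not everything that counts can be counted."
print(count_words(sample))           # 7
print(per_word_quote(sample, 0.10))  # 0.7
```

The hyphenation and punctuation edge cases that this one-liner ignores are exactly where word counts from different tools start to diverge.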
Many things have changed since the dawn of publishing, and yet, even as publishing, and translation, became affordable for a vast public, the publishing system caught on, with its associated models, including TEP (Translation, Editing, Proofreading).
On the other hand, TEP offers a control mechanism that proves more effective for middlemen than for customers. There are no middlemen who can comfortably guarantee the accuracy and fluency of a translation in more than two or three language pairs. Therefore, TEP has long been the only route to sustainable quality assurance.
The translation industry is still based on a typical information asymmetry, relying on the average buyer’s inability to assess the translation effort and the value of the product/service through examination before a sale is made: sellers usually know more about their products than buyers do, and are thus able to assess the value of the product/service accurately prior to sale.
Furthermore, happy users usually express no appreciation for translation quality, considering it a given. The typical translation buyer can hardly see why a translation should be expensive, even though C-level awareness of translation, and of its relevance to the business, is reportedly growing.
Therefore, at least for LSP’s, information asymmetry can hardly be exploited any longer, as it is increasingly narrow: more and more knowledgeable translation buyers want to pay less and get more, negotiation is getting harder and harder, and vendors have to be smarter, more responsive, and more flexible than ever.
Nevertheless, a lack of commitment from vendors can often still be noticed, along with a growing tendency to “educate customers.”
Educating customers is not a cost-effective advertising strategy for small businesses like most LSP’s, especially towards first-time buyers.
When someone is first thinking about buying a product or service, this potential customer probably doesn’t know very much about it. Information asymmetry could then prove effective, initially.
On the other hand, assuming that everyone needs translation, and that all one needs to do is tell customers about it, is the wrong approach.
Translation industry players, especially translators, tend to talk about themselves, the quality of their services, the many resources they use, etc. They fail to communicate the results their products/services will bring to their clients.
The Dunning-Kruger effect could be a reason why some translation industry players are still tempted to approach their customers this way, possibly out of an illusory sense of superiority, the very superiority that constitutes the information asymmetry.
They offer their products/services (a solution) to people who do not recognize that they have a demand (a problem), assuming a sort of functional illiteracy and, again, an interest in features rather than benefits, thus failing to understand and quantify their clients’ needs and showing a lack of interest in their customers’ business.
As a standard process for the industry, TEP is of little interest to a first-time or unknowing customer, while illustrating this or any other standard industry process to prospective customers could lead them to buy from other vendors.
Translators and LSP’s shouldn’t try to educate their clients: they had better empower them, especially those clients who understand the issue behind their demand for translation and will be searching for solutions.
The typical questions a manager in charge of getting the software, documentation, training materials etc. translated might ask are:
- How much will it cost?
- How much time will it take?
- How can I reduce that cost?
- Will the translated matter be in the right format?
- How do I know that the translation is correct?
TEP and pricing models provide no relevant answer to any of the above questions, and TEP is certainly not the ultimate answer.
In a recent post, Nancy Locke wonders whether translators should be worried about the advance of machine translation. Over the last three decades, translation technology has made huge leaps forward. A huge boost came from the Internet, much more than from translation memories.
The Internet has made it possible to access a market-driven global economy and a massive amount of reference material, including the most varied terminology sources and translated texts in different languages.
Machine translation did not begin with Google Translate: it dates back almost 60 years, born of military needs. The best-known event in machine translation history is the publication of the (in)famous ALPAC report in late 1966, which stated that machine translation was not useful and that there was no immediate or predictable prospect of useful machine translation at the time. The report brought research funding to an end in the United States for some twenty years, and sent the general public and the rest of the scientific community the clear message that machine translation was hopeless.
The conclusions of the ALPAC report were mostly due to the large number of translators available at the time and the relatively small volume of texts to be translated, which made machine translation uneconomical.
ALPAC’s final recommendations were that research should be supported on:
1. practical methods for evaluation of translations;
2. means for speeding up the human translation process;
3. evaluation of quality and cost of various sources of translations;
4. investigation of the utilization of translations, to guard against production of translations that are never read;
5. study of delays in the over-all translation process, and means for eliminating them, both in journals and in individual items;
6. evaluation of the relative speed and cost of various sorts of machine-aided translation;
7. adaptation of existing mechanized editing and production processes in translation;
8. the over-all translation process;
9. production of adequate reference works for the translator, including the adaptation of glossaries that now exist primarily for automatic dictionary look-up in machine translation.
Most of these recommendations remained largely unfulfilled, especially points 1, 3, 5, and 8; points 6 and 9 have been partially fulfilled thanks to, or by, the Internet.
In fact, translation technology, and specifically machine translation, has always met with opposition from the translation industry, and especially from translators, who fear it could either reduce job opportunities or harm their reputation.
The undeniably smaller effort that translating now requires, thanks to the wide availability of new tools and of easily and freely accessible information, together with the ability to leverage previously translated material and with market globalization, has contributed to the slump in translation prices. The industry’s typical longstanding weaknesses have contributed too, to the same extent, if not more.
And yet, even today, despite Google Translate’s success, the “failure” of machine translation is still repeated by many as an indisputable fact, and translation industry insiders are still debating over nothing, as in a recent post by the self-styled International Association of Professional Translators and Interpreters (IAPTI).
Translation technology, namely machine translation, has little to do with the plunge of translation prices, in any case less than the spreading of computers and the Internet. This same spreading is also the origin of the steadily growing amount of content and the consequent surge in the demand for translations, bringing translation beyond human scale.
The recent developments in machine translation are changing the perception of translation buyers, even traditional ones. More and more people are now willing to acknowledge the importance of translation, even though the intrinsically free nature of the Internet (and specifically of free machine translation engines) has affected the understanding of professional translation as a highly qualified service.
TAUS’s recent Translation Technology Landscape Report — a primary source also for the report on the status and potential of the European language technology markets (LT2013) — predicts that machine translation will still have a substantial market impact.
Machine translation is here to stay: burying one’s head in the sand and claiming that machines can never replace humans and that machine translation is not going to last is self-defeating. As Nancy Locke reminds us, history shows that humans have always retooled and retrained to bravely survive and thrive in new (not always improved) contexts.
However, retraining and retooling are not enough; they are only a first step towards meeting the imminent challenges. The next step consists in choosing new traveling companions who actively participate in the evolution and fulfill the new roles being created: source text editors, post-editors, data miners, terminology experts, etc. LSP’s and translators should start re-thinking their jobs. And attitudes. And mentality. And approach. And vision. They should be humbler and more intellectually honest.
And yet, according to Terry Lawlor (and to many others):
- Machine translation will never be good enough to remove the need for human translation;
- Machine translation is often good enough to add value to existing human translation processes;
- Machine translation is often good enough by itself to deliver useful translations.
Terry Lawlor, then, suggests adopting a “just enough” approach consisting in doing just enough translation, i.e.:
- Just enough human translation where you need the guaranteed high quality;
- Just enough post-edited machine translation where you need consistency from your translation process;
- Just enough pure machine translation where consistency and quality are luxuries over utility.
Terry Lawlor suggests building flexible and dynamic translation processes that will always route content through the right translation path, but he should know that outsiders to the translation industry are already building flexible and dynamic translation processes. They call it crowdsourcing.
Are freeconomics and/or crowdsourcing the new frontier?
Think of open source: it has taken over almost everywhere, even though most of the general population has no idea what it is. Most electronic devices that are taken for granted today wouldn’t exist without open source software, or they would perform significantly worse, because community developers wouldn’t be able to contribute improvements. This ability to improve is what made open source different.
In a 2003 interview with Wired, Linus Torvalds, the father of Linux, the symbol of open source, said he made it free and open (to collaboration) because he didn’t want to deal with the parts of the development he was not interested in (“the crap work”). Similarly, the father of the Web, Tim Berners-Lee, said that the Web became so popular because it was free, and that it would never have taken off had he kept control of it. In The Wealth of Networks, Yochai Benkler suggests that free knowledge helps ideas grow. According to Wired editor Kevin Kelly, “the power of sharing, cooperation, collaboration, openness, free pricing and transparency has proven to be more practical than we capitalists thought possible”.
The idea of selling one item at a low price, or giving it away for free, to boost the sales of complementary products and services is nothing new. It is called freebie marketing, or more recently freemium, a term made (in)famous by Chris Anderson’s Free: The Future of a Radical Price, but it dates back to the late 19th century.
LSP’s and translators should devise new service models to sell translation at a very low price and derive their revenues from complementary products/services, maybe the ancillary ones they have always provided in bundle. They could start thinking of what machine translation can do for them, e.g. by integrating it into their business processes.
Not surprisingly, many providers are at least trying to unbundle the TEP monolith, quoting only the T and leaving E, P, and other ancillary tasks (e.g. terminology) as extras, to offer more at apparently lower prices.
Also, e-commerce is becoming the norm in global sales. The translation industry is predominantly B2B oriented. LSP’s and translators should consider e-commerce also as a business opportunity to expand to the consumer market. For the translation industry, e-commerce can be what Just In Time has been for the manufacturing industry. What else is trading language data if not managing inventory?
Finally, for customers willing to crowdsource their translation jobs, LSP’s — and, why not, freelancing professional translators — could exploit the associated opportunities by offering the collaboration platform, their vendor management and project management services.
In his post for GALA’s newsletter of Q1 2013, Wayne Bourland, Director of Translation at Dell, recalled that the promises of machine translation are well known: cheap translations, a lingua franca, and the ability to keep up with the content explosion. If translators and LSP’s have not caught up with it, it is not because customers are too demanding: most customers would ostensibly be willing to accept deviations from the quality standards they are accustomed to in exchange for a dramatic reduction in cost.
In a post dating back Web-eons ago (late 2007!), John Yunker pronounced the end of translation as we know it. Yet machine translation, the “star attraction,” is still widely mocked by most translators, who still focus on the art of translation, even as more and more academic institutions rebrand their translation departments in the name of a science of translation. Most translators think of themselves as artists, creators, or at least writers, even though they all know that the quality of the majority’s work is the first clue that they are neither writers nor creators.
Not surprisingly, quality is the life vest of the translation industry: all players rely on this magical mystery word that instantly explains everything and forbids further questioning, just as passengers on a plane feel confident knowing there is a life vest under their seat, even though, in the case of an accident, they could hardly use it on the strength of the pre-flight instructions alone.
In Innovation and Entrepreneurship, management guru Peter Drucker wrote: “Quality in a product or service is not what the supplier puts in. It is what the customer gets out and is willing to pay for. A product is not quality because it is hard to make and costs a lot of money, as manufacturers typically believe. This is incompetence. Customers pay only for what is of use to them and gives them value. Nothing else constitutes quality.”
In this view, quality as a feature is irrelevant since customers only care about the benefits they receive, and yet quality is still the unique selling proposition of the entire translation industry making any differentiation apart from price almost invisible. This attitude is helping make translation a commodity.
In addition, as correctly stated in a recent TAUS post, in contrast to other industries, the translation industry is struggling to find unambiguous definitions and ways to measure its deliverables, and while common measurable definitions are ever more necessary the typical approach to quality remains “I know it when I see it.”
In reality, quality should definitely be dealt with in a negotiation, even if only in terms of assurance criteria that allow the customer to verify that the vendor meets requirements. These criteria, as well as the requirements, should be easy for customers to grasp, while all the metrics, from MQM to DQF, are still being discussed.
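As a sketch of what such easy-to-grasp assurance criteria could look like, the snippet below computes an MQM/DQF-style score as weighted error points per 1,000 words. The error categories, weights, and pass threshold are illustrative assumptions only, not values taken from any published metric:

```python
# Illustrative severity weights and threshold (assumptions, not values
# from MQM or DQF, whose metrics are still under discussion).
SEVERITY_WEIGHTS = {"minor": 1, "major": 5, "critical": 10}

def quality_score(errors: dict, word_count: int) -> float:
    """Weighted error points per 1,000 words (lower is better)."""
    penalty = sum(SEVERITY_WEIGHTS[sev] * n for sev, n in errors.items())
    return penalty * 1000 / word_count

def meets_requirements(errors: dict, word_count: int,
                       threshold: float = 10.0) -> bool:
    """A pass/fail check a customer can apply to a reviewed sample."""
    return quality_score(errors, word_count) <= threshold

# A 2,000-word sample with 4 minor errors and 1 major error:
print(quality_score({"minor": 4, "major": 1}, 2000))       # 4.5
print(meets_requirements({"minor": 4, "major": 1}, 2000))  # True
```

The point is not the specific numbers but the mechanism: once weights and a threshold are agreed on in the contract, the customer can verify delivery without having to judge translation quality as such.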
Customers are interested in four basic elements, the ones that make quality a given.
Most importantly, since the scope of translation services varies widely, their quality varies as well, and buyers should be put in a position to compare vendors and find the right one for their products. Unfortunately, the current approach to quality still shows a significant flaw: there are no objective metrics that allow customers to verify a vendor’s references, including quality certifications (GB/T 19363.1, EN 15038:2006, ASTM F2575-06, CAN/CGSB-131.10, the would-be ISO 17100, the ISO 9000 series, or whatever).
In the effort to educate their customers, the most typical mistake of LSP’s is to talk too much about themselves. Customers are not interested in TEP, TM’s, and rates (whether per word, per line, or per page); they are only interested in the total cost of a translation, in when it will be delivered so that it can be profitably used, and in the benefit it will produce, i.e. how much value it will add to the customer’s business.
Therefore, there is no use in trying to explain how a translation price is built up by referring to quality (generally high, very high, or superior, when not supreme, unrivaled, and unchallenged) and to revision by a second linguist, i.e. in educating the customer.
In this respect, productivity is crucial, and this is where machine translation comes into play. Not only are content types and volumes steadily increasing, but the consequent surge in the demand for translations is taking translation beyond human scale, while good translators are scarce and will probably be even scarcer in the future. This is partly due to the mistaken sales approach of LSP’s, which compete only on price to scoop up more and more jobs and cope with increasing overhead. This approach is at the origin of the application of Gresham’s law to the translation industry: bad translators are driving out the good.
The dilemma of how much to charge for a product or service is a typical starting point for discussions about pricing, even though a vendor should focus on the value customers place in that product or service.
In setting the price for a product or service, production cost, marketplace, competition, market conditions, promotion, and product quality must all be considered.
Unfortunately, due to a longstanding habit of subservience to customers, LSP’s and translators are hardly in a position to dictate their prices, and the price of a translation is most often demand-based or competitor-indexed, and fixed a priori. Rarely is it value-based.
Any pricing policy should be relevant to the cost of business, and the first step consists in thoroughly assessing this cost. This is another reason why good translators leave the industry: compensation is not enough to cover the cost of business and provide for a decent livelihood.
Many translators claim that they should be paid an hourly rate like liberal professionals (i.e. lawyers, doctors, dentists, architects, engineers, veterinarians, accountants, etc.), but this will most probably make no difference as long as productivity keeps being measured in words per day and customers are shown no way to control how they are charged.
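The gap between the two pricing logics is easy to make concrete. With purely hypothetical figures (the productivity and per-word rate below are assumptions for illustration, not industry data), a per-word rate already implies an hourly rate:

```python
# Hypothetical figures, for illustration only.
words_per_day = 2500   # assumed daily productivity
hours_per_day = 8
rate_per_word = 0.08   # assumed per-word rate

daily_income = words_per_day * rate_per_word         # 200.0
implied_hourly_rate = daily_income / hours_per_day   # 25.0
print(implied_hourly_rate)
```

As long as the words-per-day figure stays in the equation, switching the unit of account from words to hours changes the invoice format, not the economics.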
Selling in bulk rather than on a per-unit basis is necessary to shift to time rates, but this requires a deep change in attitude of the whole industry, starting from LSP-freelancer relationships.
The future is in disintermediation and collaboration.
Disintermediation may serve to streamline processes and reduce costs, not to overturn long-established bad practice. To streamline processes, however, good initial conditions are necessary. In fact, disintermediation leads to the elimination of intermediaries in the supply chain, resulting in a drastic reduction of operating costs.
The Internet fostered disintermediation by allowing direct connection between customers and suppliers. ATMs and online banking are a perfect example, with the resulting reduction in operating costs, typically infrastructure and workforce.
However, banking disintermediation was possible and, above all, effective thanks to process maturity and the ample room for efficiency gains that were still possible through the use of similarly mature technologies.
Disintermediation may create room for even greater success in the production of intangible assets, which the end user can order directly from the producer in real time and pay for directly, thus assessing costs and benefits at their actual value.