Tuesday, October 27, 2020

Anonymization Regulations and Data Privacy with MT

This is a guest post from Pangeanic that focuses on specific data privacy issues and highlights some of the concerns that any enterprise must address when using MT technology at scale across large volumes of customer data.

I recently wrote about the robust cloud data security that Microsoft MT offers in contrast to all the other major Public MT services. Data privacy and security continue to grow into a touchstone issue for enterprise MT vendors and legislation like GDPR makes it an increasingly critical issue for any internet service that gathers customer data.

Data anonymization is a type of information sanitization whose intent is privacy protection. It is the process of removing personally identifiable information from data sets so that the people whom the data describe remain anonymous.

Data anonymization has been defined as a "process by which personal data is irreversibly altered in such a way that a data subject can no longer be identified directly or indirectly, either by the data controller alone or in collaboration with any other party." [1] Anonymization may enable the transfer of information across a boundary, such as between two departments within an agency or between two agencies, while reducing the risk of unintended disclosure; in certain environments it does so in a manner that still permits evaluation and analytics after anonymization.

This is clumsy to describe, and even harder to do, but is likely to be a key requirement when dealing with customer data that spans the globe. Thus, I thought it was worth a closer look.

*** ===== ***

Anonymization Regulations, Privacy Acts and Confidentiality Agreements 

How do they differ and what do they protect us from?


One of the possible definitions of privacy is the right that all people have to control information about themselves, and particularly who can access personal information, under what conditions and with what guarantees. In many cases, privacy is a concept that is intertwined with security. However, security is a much broader concept that encompasses different mechanisms. 

Security provides us with tools to help protect privacy. One of the most widely used security techniques to protect information is data encryption. Encryption allows us to protect our information from unauthorized access. So, if by encrypting I am protecting my data and access to it, isn't that enough?  

Encryption is not enough for Anonymization because…

in many cases the information in the metadata is unprotected. For example, the content of an email can be encrypted, which gives us a false sense of protection. When we send the message, there is a destination address. If the email is addressed, for example, to a political party, that fact alone reveals sensitive information despite the content of the message being protected.

On the other hand, there are many scenarios in which we cannot encrypt the information. For example, if we want to outsource the processing of a database or release it for third parties to carry out analyses or studies for statistical purposes. In these types of scenarios we often encounter the problem that the database contains a large amount of personal or sensitive information, and even if we remove personal identifiers (e.g., name or passport number), it may not be sufficient to protect the privacy of individuals. 
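As a small illustration (with hypothetical toy data), even after the name column is dropped, combinations of remaining quasi-identifiers such as ZIP code, birth date, and gender can still single out individuals:

```python
from collections import Counter

# Toy records with the direct identifier (name) already removed.
records = [
    {"zip": "46001", "birth": "1980-03-14", "gender": "F", "diagnosis": "flu"},
    {"zip": "46001", "birth": "1980-03-14", "gender": "F", "diagnosis": "asthma"},
    {"zip": "46002", "birth": "1975-07-02", "gender": "M", "diagnosis": "flu"},
]

QUASI_IDENTIFIERS = ("zip", "birth", "gender")

def re_identifiable(rows, keys=QUASI_IDENTIFIERS):
    """Return the rows whose quasi-identifier combination is unique in the
    data set; anyone who knows those attributes can link the row to a person."""
    counts = Counter(tuple(r[k] for k in keys) for r in rows)
    return [r for r in rows if counts[tuple(r[k] for k in keys)] == 1]

exposed = re_identifiable(records)  # the third record is unique, hence exposed
```

This is why techniques such as generalization or suppression of quasi-identifiers are applied on top of simply removing names and ID numbers.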

Anonymization: protecting our privacy

Anonymization (also known as “data masking”) is a set of techniques that protect the privacy of documents or information by modifying the data: anonymization with gaps (deletion), anonymization with placeholders (substitution), or pseudonymization.
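Sketched in Python, with a toy regex standing in for a real named-entity recognizer and a hypothetical name, the three approaches look like this:

```python
import hashlib
import re

# Toy pattern standing in for a real NER model; "Alice Smith" is hypothetical.
NAME_RE = re.compile(r"\bAlice Smith\b")

def anonymize_with_gaps(text):
    """Deletion: remove the entity outright."""
    return NAME_RE.sub("", text)

def anonymize_with_placeholders(text):
    """Substitution: replace the entity with a generic placeholder."""
    return NAME_RE.sub("[PERSON]", text)

def pseudonymize(text, secret="demo-key"):
    """Pseudonymization: replace the entity with a consistent pseudonym
    derived from a secret, so the same person maps to the same token."""
    def repl(match):
        digest = hashlib.sha256((secret + match.group(0)).encode()).hexdigest()[:8]
        return f"PERSON_{digest}"
    return NAME_RE.sub(repl, text)

sentence = "Alice Smith filed the claim on Monday."
```

Note that pseudonymization keeps referential consistency (the same person always maps to the same token), which is what allows analytics on the masked data.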

In general, anonymization aims to alter the data in such a way that, even if it is subsequently processed by a third party, the identity or sensitive attributes of the persons whose data is being processed cannot be revealed.

Privacy management is regulated along similar lines across legal jurisdictions around the world. In Europe, the governing law is the GDPR (General Data Protection Regulation), which was approved in 2016 and came into force in 2018. In the US, the California Consumer Privacy Act (CCPA) was signed into law in June 2018 and applies to businesses that

  • have annual gross revenues in excess of $25 million;
  • buy, receive, or sell the personal information of 50,000 or more consumers or households; or
  • earn more than half of their annual revenue from selling consumers' personal information.

It is expected that most other states will soon follow the spirit of California’s CCPA. This will affect the way organizations collect, hold, release, buy, and sell personal data.

In Japan, the reformed privacy law, known as the Act on the Protection of Personal Information (APPI), came into full force on May 30, 2017. The main difference from the European GDPR lies in how personally identifiable information is defined: the GDPR states broadly that “personal data means any information relating to an identified or identifiable natural person,” whereas the APPI itemizes the specific categories of information that qualify.

In general, all privacy laws want to provide citizens with the right to:  

  1. Know what personal data is being collected about them.
  2. Know whether their personal data is sold or disclosed and to whom.
  3. Say no to the sale of personal data.
  4. Access their personal data.
  5. Request a business to delete any personal information about a consumer collected from that consumer.[9]
  6. Not be discriminated against for exercising their privacy rights.

The new regulations seek to govern the processing of our personal data. Each of them establishes that data must be subject to adequate safeguards and that the collection and retention of personal data be minimized.


What is PangeaMT doing about Anonymization?

PangeaMT is Pangeanic’s R&D arm. We lead the MAPA Project, the first multilingual anonymization effort making deep use of bilingual encoders for transformers in order to identify actors and personal identifiers, such as names and surnames, addresses, and job titles and functions, following a deep taxonomy.

Together with our partners (Centre National de la Recherche Scientifique in Paris, Vicomtech, etc.) we are developing the first truly multilingual anonymization software. The project will release a fully customizable, open-source solution that can be adopted by Public Administrations to start their journey in de-identification and anonymization. Corporations will also be able to benefit from MAPA, as the commercial version will be released on 01.01.2021.


Wednesday, October 21, 2020

The Evolving Translator-Computer Interface

This is a guest post by 
Nico Herbig from the German Research Center for Artificial Intelligence (DFKI).

For as long as I have been involved with the translation industry, I have wondered why the prevailing translator-computer interface was so arcane and primitive. It seems that the basic user interface used for managing translation memory was borrowed from DOS spreadsheets and eventually evolved into Windows spreadsheets. Apart from problems related to inaccurate matching, the basic interaction model has also been quite limited. Data enters the translation environment through some form of file or text import and is then processed in a columnar word-processing style. I think to a great extent these limitations were due to the insistence on maintaining a desktop computing model for the translation task. While this does allow some power users to become productive keystroke experts, it also presents a demanding learning curve to new translators.

Cloud-based translation environments can offer much more versatile and powerful interaction modes, and I saw evidence of this at the recent AMTA 2020 conference (a great conference, by the way, that deserves much better social media coverage than it has received). Nico Herbig from the German Research Center for Artificial Intelligence (DFKI) presented a multi-modal translator environment that I felt shows great promise in updating the translator-machine interaction experience for the modern era.
Of course, it includes the ability to interact with the content via speech, handwriting, touch, eye-tracking, and seamless interaction with supportive tools like dictionaries, concordance databases, and MT among other possibilities. Nico's presentation focuses on the interface needs of the PEMT task, but the environment could be reconfigured for scenarios where MT is not involved and only used if it adds value to the translation task. I recommend that interested readers take a quick look through the video presentation to get a better sense of this.

*** ======== ***

MMPE: A Multi-Modal Interface for Post-Editing Machine Translation

As machine translation has been making substantial improvements in recent years, more and more professional translators are integrating this technology into their translation workflows. The process of using a pre-translated text as a basis and improving it to create the final translation is called post-editing (PE). While PE can save time and reduce errors, it also affects the design of translation interfaces: the task changes from mainly generating text to correcting errors within otherwise helpful translation proposals, thereby requiring significantly less keyboard input, which in turn offers potential for interaction modalities other than mouse and keyboard. To explore which PE tasks might be well supported by which interaction modalities, we conducted a so-called elicitation study, where participants can freely propose interactions without focusing on technical limitations. The results showed that professional translators envision PE interfaces relying on touch, pen, and speech input combined with mouse and keyboard as particularly useful. We thus developed and evaluated MMPE, a CAT environment combining these input possibilities. 

Hardware and Software

MMPE was developed using web technologies and works within a browser. For handwriting support, one should ideally use a touch screen with a digital pen, where larger displays and the option to tilt the screen or lay it on the desk facilitate ergonomic handwriting. Nevertheless, any tablet device also works. To improve automatic speech recognition accuracy, we recommend using an external microphone, e.g., a headset. Mouse and keyboard are naturally supported as well. For exploring our newly developed eye-tracking features (see below), an eye tracker needs to be attached. Depending on the features to explore, a subset of this hardware is sufficient; there is no need to have the full setup. Since our focus is on exploring new interaction modalities, MMPE’s contribution lies in the front-end, while the back-end is rather minimal, supporting only the storing and loading of files and forwarding the microphone stream to speech recognition services. Naturally, we plan on extending this functionality in the future, e.g., adding project and user management functionality and integrating machine translation (instead of loading it from file), translation memory, quality estimation, and other tools directly into the prototype.

Interface Layout

As a layout, we implemented a horizontal source-target layout and tried to avoid overloading the interface. On the far right, support tools are offered, e.g., a bilingual concordancer (Linguee). The top of the interface shows a toolbar where users can save, load, and navigate between projects, and enable or disable spell checking, whitespace visualization, speech recognition, and eye-tracking. The current segment is enlarged, thereby offering space for handwritten input and allowing users to view the context while still seeing the current segment in a comfortable manner. The view for the current segment is further divided into the source segment (left) and tabbed editing planes for the target (right): one for handwriting and drawing gestures, and one for touch deletion and reordering, as well as standard mouse and keyboard input. By clicking on the tabs at the top, the user can quickly switch between the two modes. As the prototype focuses on PE, the target views initially show the MT proposal to be edited. Undo and redo functionality and segment confirmation are also implemented through hotkeys, buttons, or speech commands. Currently, we are adding further customization possibilities, e.g., to adapt the font size or to switch between displaying source and target side by side or one above the other.


Handwriting in the handwriting tab is recognized using the MyScript Interactive Ink SDK, which worked well in our study. The input field further offers drawing gestures like strike-through or scribble for deletions, breaking a word into two (draw a line from top to bottom), and joining words (draw a line from bottom to top). If there is a lack of space to handwrite the intended text, the user can create such space by breaking the line (draw a long line from top to bottom). The editor further shows the recognized input immediately at the top of the drawing view. Apart from using the pen, the user can handwrite with a finger or the mouse, all of which were used in our study, even though the pen was clearly preferred. Our participants highly valued deletion by striking or scribbling through the text, as this nicely resembles standard copy-editing. However, handwriting for replacements and insertions was considered to work well only for short modifications; for more extended changes, participants argued that one should instead fall back to typing or speech commands.

Touch Reorder

Reordering using (pen or finger) touch is supported with a simple drag and drop procedure: Users have two options: (1) They can drag and drop single words by starting a drag directly on top of a word, or (2) they can double-tap to start a selection process, define which part of the sentence should be selected (e.g., multiple words or a part of a word), and then move it. 

We visualize the picked-up word(s) below the touch position and show the calculated current drop position through a small arrow element. Spaces between words and punctuation marks are automatically fixed, i.e., double spaces at the pickup position are removed, and missing spaces at the drop position are inserted. In our study, touch reordering was highlighted as particularly useful or even “perfect” and received the highest subjective scores and lowest time required for reordering. 
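The automatic spacing fix-up can be sketched as a token-level move (a simplified illustration, not MMPE's actual implementation):

```python
import re

def reorder_words(words, src, dst):
    """Move the token at index `src` to index `dst`, rejoin with single
    spaces, and re-attach punctuation to the preceding word."""
    tokens = words[:]
    token = tokens.pop(src)       # no double space remains at the pickup spot
    tokens.insert(dst, token)     # a single space is inserted at the drop spot
    text = " ".join(tokens)
    # Tighten the space that would otherwise precede punctuation marks.
    return re.sub(r"\s+([,.;:!?])", r"\1", text)
```

For example, `reorder_words(["quickly", "he", "ran", "."], 0, 2)` yields "he ran quickly." with the pickup gap closed and the final period re-attached.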



To minimize lag during speech recognition, we use a streaming approach, sending the recorded audio to IBM Watson servers to receive a transcription, which is then interpreted in a command-based fashion. The transcription itself is shown at the top of the default editing tab next to a microphone symbol. As commands, post-editors can “insert,” “delete,” “replace,” and “reorder” words or sub-phrases. If the position is ambiguous, anchors can be added (e.g., “after”/“before”/“between”), or the occurrence of the token (“first”/“second”/“last”) can be specified. A full example is “replace A between B and C by D,” where A, B, C, and D can be words or sub-phrases. Again, spaces between words and punctuation marks are automatically fixed. In our study, speech input received good ratings for insertions and replacements but worse ratings for reorderings and deletions. According to the participants, speech would become especially compelling for longer insertions and would be preferable when commands remain simple. For invalid commands, we display why they are invalid below the transcription (e.g., “Cannot delete the comma after nevertheless, as nevertheless does not exist”). Furthermore, the interface temporarily highlights insertions and replacements in green, deletions in red (the space at the position), and combinations of green and red for reorderings. The color fades away after the command.
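A toy parser for the one command shape quoted above, “replace A between B and C by D,” might look like the sketch below. The real MMPE grammar is richer; the regular expressions here are only an assumed illustration.

```python
import re

# Hypothetical grammar for a single command shape; MMPE supports many more.
REPLACE_RE = re.compile(
    r"^replace (?P<target>.+) between (?P<left>.+) and (?P<right>.+) by (?P<new>.+)$"
)

def apply_replace(segment, command):
    """Parse the command and apply it to `segment`. Returns (new_text, error);
    exactly one of the two is None."""
    m = REPLACE_RE.match(command)
    if not m:
        return None, "Unrecognized command"
    target, left, right, new = m.group("target", "left", "right", "new")
    # Anchor the occurrence of `target` that lies between `left` and `right`.
    pattern = re.compile(
        rf"(?<=\b{re.escape(left)}\s){re.escape(target)}(?=\s{re.escape(right)}\b)"
    )
    if not pattern.search(segment):
        return None, f"Cannot replace {target!r}: not found between {left!r} and {right!r}"
    return pattern.sub(new, segment, count=1), None
```

The lookbehind/lookahead anchors are what let the command disambiguate between multiple occurrences of the same token, mirroring the error feedback described above.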

Multi-Modal Combinations of Pen/Touch/Mouse&Keyboard with Speech

Multi-modal combinations are also supported: Target word(s)/position(s) must first be specified by performing a text selection using the pen, finger touch, or the mouse/keyboard. 

Afterwards, the user can use a voice command like “delete” (see the figure below), “insert A,” “move after/before A/between A and B,” or “replace with A” without needing to specify the position/word, thereby making the commands less complex. In our study, multi-modal interaction received good ratings for insertions and replacements, but worse ratings for reorderings and deletions. 

Eye Tracking

While not yet tested in a study, we are currently exploring other approaches to enhance PE through multi-modal interaction, e.g., through the integration of an eye tracker. The idea is simply to fixate on the word to be replaced/deleted/reordered, or on the gap used for insertion, and state the simplified speech command (e.g., “replace with A”/“delete”), instead of having to manually place the cursor through touch/pen/mouse/keyboard. To provide feedback, we show the user's fixations in the interface and highlight text changes, as discussed above. Apart from possibly speeding up multi-modal interaction, this approach would also solve the issue, reported by several participants in our study, that one has to “do two things at once,” while keeping the advantage of simple commands in comparison to the speech-only approach.


MMPE supports extensive logging functionality, where we log all text manipulations at a higher level to simplify text editing analysis. Specifically, we log whether the manipulation was an insertion, deletion, replacement, or reordering, together with the manipulated tokens, their positions, and the whole segment text. Furthermore, all log entries contain the modality of the interaction, e.g., speech or pen, thereby allowing the analysis of which modality was used for which editing operation.
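As an assumed illustration of what such a log entry might contain (the field names below are hypothetical, not MMPE's actual schema), the fields described above map naturally onto a small record type:

```python
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone

@dataclass
class EditLogEntry:
    operation: str    # "insert" | "delete" | "replace" | "reorder"
    tokens: list      # the manipulated token(s)
    positions: list   # token indices within the segment
    segment: str      # the whole segment text after the edit
    modality: str     # e.g. "speech", "pen", "touch", "keyboard"
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

entry = EditLogEntry(
    operation="delete",
    tokens=["nevertheless"],
    positions=[3],
    segment="We agree with the proposal.",
    modality="pen",
)
record = asdict(entry)  # ready to serialize as JSON for later analysis
```

Grouping entries by `modality` and `operation` then directly yields the kind of per-modality editing analysis the text describes.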


Our study with professional translators showed a high level of interest and enthusiasm about using these new modalities. For deletions and reorderings, pen and touch both received high subjective ratings, with the pen being even better than the mouse & keyboard. Participants especially highlighted that pen and touch deletion or reordering “nicely resemble a standard correction task.” For insertions and replacements, speech and multi-modal interaction of select & speech were seen as suitable interaction modes; however, mouse & keyboard were still favored and faster. Here, participants preferred the speech-only approach when commands are simple but stated that the multi-modal approach becomes relevant when the sentences' ambiguities make speech-only commands too complex. However, since the study participants stated that mouse and keyboard only work well due to years of experience and muscle memory, we are optimistic that these new modalities can yield real benefit within future CAT tools.


Due to continuously improving MT systems, PE is becoming more and more relevant in modern-day translation. The interfaces used by translators still heavily focus on translation from scratch, and in particular on mouse and keyboard input modalities. Since PE requires less production of text but instead requires more error corrections, we implemented and evaluated the MMPE CAT environment that explores the use of speech commands, handwriting input, touch reordering, and multi-modal combinations for PE of MT. 

As a next step, we want to run a study that specifically explores the newly developed combination of eye and speech input for PE. Beyond that, we plan longer-term studies exploring how modality usage changes over time and whether translators continuously switch modalities or stick to specific ones for specific tasks.

Instead of replacing the human translator with artificial intelligence (AI), MMPE investigates approaches to better support human-AI collaboration in the translation domain by providing a multi-modal interface for correcting machine translation output. We are currently working on proper code documentation and plan to release the prototype as open source in the coming months. MMPE was developed in tight collaboration between the German Research Center for Artificial Intelligence (DFKI) and Saarland University and is funded in part by the German Research Foundation (DFG).


Nico Herbig -

German Research Center for Artificial Intelligence (DFKI)

Further information:


Paper and additional information:

Multi-Modal Approaches for Post-Editing Machine Translation
Nico Herbig, Santanu Pal, Josef van Genabith, Antonio Krüger. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. ACM 2019.
ACM Digital Library - Paper access

(Presenting an elicitation study that guided the design of MMPE)

MMPE: A Multi-Modal Interface using Handwriting, Touch Reordering, and Speech Commands for Post-Editing Machine Translation
Nico Herbig, Santanu Pal, Tim Düwel, Kalliopi Meladaki, Mahsa Monshizadeh, Vladislav Hnatovskiy, Antonio Krüger, Josef van Genabith. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations. ACL 2020.
ACL Anthology - Paper access

(Demo paper presenting the original prototype in detail)

MMPE: A Multi-Modal Interface for Post-Editing Machine Translation
Nico Herbig, Tim Düwel, Santanu Pal, Kalliopi Meladaki, Mahsa Monshizadeh, Antonio Krüger, Josef van Genabith. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. ACL 2020.
ACL Anthology - Paper access - Video

(Briefly presenting MMPE prototype and focusing on its evaluation)

Improving the Multi-Modal Post-Editing (MMPE) CAT Environment based on Professional Translators’ Feedback
Nico Herbig, Santanu Pal, Tim Düwel, Raksha Shenoy, Antonio Krüger, Josef van Genabith. Proceedings of the 1st Workshop on Post-Editing in Modern-Day Translation at AMTA 2020.
Paper access - Video of presentation

(Recent improvements and extensions to the prototype)

Thursday, September 24, 2020

NiuTrans: An Emerging Enterprise MT Provider from China

This post highlights a Chinese MT vendor that I suspect is not well known in the US or Europe currently, but that I expect will become better known over the coming years. While the US giants (FAAMG) still dominate the MT landscape around the world today, I think it is increasingly possible that other players from around the world, especially from China, may become much more recognized in the future.

One indicator that has historically been reliable in forecasting emerging economic power is the volume of patent filings in a country. This was true for Japan and Germany, where voluminous patent activity preceded the economic rise of these countries, and recently this predictor has also aligned with the rise of South Korea and China as economic powerhouses. However, the sheer volume of filings is not necessarily a lead indicator of true innovation, and some experts say that the volume of patents filed and granted abroad is a better indicator of innovation and patent quality. But today we see emerging giants from Asia in consumer electronics, automobiles, eCommerce, and internet services, and nobody questions the innovation momentum building in Asia today.

Artificial Intelligence (AI) is heralded by many as a key driver of wealth creation for the next 50 years. Building momentum with AI requires a combination of access to large volumes of "good" data, computing resources, and deep expertise in machine learning, NLP, and other closely related technologies. Today, the US and China look poised to be the dominant players in the wider application of AI and machine learning-based technologies, with a few others close behind. Here too, deep knowledge and clout are indicated by the volume of influential papers published and referenced by the global community. A recent analysis by the Allen Institute for Artificial Intelligence in Seattle, Washington found that China has steadily increased its share of authorship of the top 10% most-cited papers. The researchers found that America's share of the most-cited 10 percent of papers declined from a high of 47 percent in 1982 to a low of 29 percent in 2018, while China's share has been "rising steeply," reaching a high of 26.5 percent last year. Though the US still has significant advantages in the relative supply of expert manpower and its dominance in the manufacture of AI semiconductor chips, this too is slowly changing, even though most experts expect the US to maintain leadership for other reasons.

Credit: Allen Institute for Artificial Intelligence

These trends also impact the translation industry, and they change the relative benefit and economic value of different languages. The global market is slowly changing from a FIGS-centric view of the world to one where both the most important source languages (ZH, KO, HI) and target languages are changing. The fastest-growing economies today are in Africa and Asia and are not likely to be well served by a FIGS-centric view, though it appears that English will remain a critical world language for knowledge sharing for at least another 25 years. These changes create an opportunity for agile and skillful Asian technology entrepreneurs like NiuTrans, who are much more tuned in to this rapidly evolving world. I have noted that some of the most capable new MT initiatives I have seen in the last few years were based in China. India has lagged far behind with MT, even though the need there is much greater, because of the myth that English matters more, and possibly because of the lack of governmental support and sponsorship of NLP research.

The Chinese MT Market: A Quick Overview

I recently sat down with Chungliang Zhang from NiuTrans, an emerging enterprise MT vendor in China, to discuss the Chinese MT market and his company’s own MT offerings. He pointed out that China is the second-largest global economy today, and it is now increasingly commonplace for both Chinese individuals and enterprises to have active global interactions. The economic momentum naturally drives the demand for automated translation services.

Some examples, he pointed out:

In 2019, China’s outbound tourist traffic totaled 155 million people, up 3.3% from the previous year. This massive volume of traveler traffic results in a concomitant demand for language translation. Chungliang pointed out that this travel momentum significantly drives the need for voice translation devices in the consumer market, like those produced by Sogou, iFlyTek, and others, which have been very much in demand in the last few years.

There is also growing interest among Chinese enterprises, both state-owned and privately owned, in building and expanding their business presence in global markets. For example, Alibaba, China’s largest eCommerce company, is listed on the NYSE and has established an international B2B portal where 20 million enterprises gather and work to “Buy Global, Sell Global.” Currently, the Alibaba MT team builds the largest eCommerce MT systems globally, often reaching volumes of 1.79 billion translation calls per day, which is a larger transaction volume than either Google or Amazon.

“All in all, as we can see it, there is a clear trend that MT is increasingly being used in more and more industries, such as language service industries, intellectual property services, pharmaceutical industries, and information analysis services.”

While it is clear that consumers and individuals worldwide are regularly using MT, the primary enterprise users of MT in China are government agencies and internet-based businesses like eCommerce. This need for translation is now expanding to more enterprises who seek to increase their international business presence and realize that MT can enable and accelerate these initiatives.

The Chinese MT technology leaders in terms of volume and regular user base are the internet services giants (such as Baidu, Tencent, Alibaba, Sogou, Netease) or the AI tech giants (such as iFlyTek). Google Translate and Microsoft Bing Translator are also popular in China since they are free, but they don’t have a large share of the total use if the focus is strictly on MT technology.

When asked to comment on the characteristics and changes in the Chinese MT market, Chungliang said:

“In our understanding, Sogou and iFlytek's primary business focus is the B2C market, and thus both of them develop consumer hardware like personal voice translators. Sogou was recently (July 29, 2020) purchased by Tencent (a major social media player), so we don’t know what will happen next. iFlytek is famous for its Speech-To-Speech technology capabilities. Thus it is natural for them to develop MT, to get the two technologies integrated and grab a larger share of the market.

As for the other important MT players in China, Alibaba MT mainly serves its own globally focused eCommerce business, and Tencent Translate focuses on meeting the translation needs of its users in social networking scenarios. Like Google Translate, Baidu Translate is a portal to attract individual users who might need translation during a search; it also serves to expand Baidu’s influence as a whole. Netease Youdao, meanwhile, focuses on the education industry, and the Youdao team integrates the Youdao online dictionary, direct MT, and human translation.

What are the main languages that people translate? As far as we know, the most translated language is English, Japanese is second, followed by Arabic, Korean, Thai, Russian, German, and Spanish. Of course, this is all direct to and from Chinese.”

NiuTrans Focus: The Enterprise

The NiuTrans team learned very early in their operational history, during their startup phase, that their business survival was linked to providing MT services for the enterprise rather than for individual users and consumers. The market for individuals is dominated by offerings like Google Translate and Baidu Translate that offer virtually free services. In contrast, NiuTrans is focused on meeting enterprise demands for MT, which often means deploying on-premise MT engines and developing custom engines. These enterprises tend to be concentrated around Intellectual Property and patent services, pharmaceuticals, vehicle manufacturing, IT, education, and AI companies. For example, NiuTrans builds customized patent-domain MT engines for the China Patent Information Center (CNPAT), a branch of the China National Intellectual Property Administration and a large-scale patent information service based in Beijing.

CNPAT holds the largest collection of multilingual parallel patent data and serves ongoing, substantial demand for patent-related MT in various use scenarios such as patent application filing and examination, patent-related transactions, and patent-based lawsuits. Given the scale of the client’s needs, NiuTrans sends an R&D team on-site to work with CNPAT’s technical team on data processing and data cleaning. This data is then used in the NiuTrans.NMT training module to develop patent-domain NMT engines on CNPAT’s on-premise servers. The on-site team also develops custom MT APIs on demand to fit into CNPAT’s current workflow and customer servicing needs.

Besides powering and enabling the specialized translation needs of services like CNPAT, NiuTrans also provides back-end MT services for industrial leaders, including iFlyTek (also an early investor in NiuTrans), (the No. 2 eCommerce business in China), Tencent (the largest social networking company in China), Xiaomi (a leader of smart devices OEMs in China), and Kingsoft (a leader of office software in China).

NiuTrans has an online cloud API that also attracts 100,000+ small and medium enterprises interested in expanding their international operations and business presence. The pricing for these smaller users is based on the volume of characters they translate and is much lower than Google Translate and Baidu Translate prices.

NiuTrans’ Online Cloud User Locations

You can visit the NiuTrans Translate portal at

NiuTrans writes and maintains its own NMT code-base for NiuTrans.NMT rather than using open-source options, and claims quality comparable to, if not better than, its competitors. Their comparative performance at the WMT19 evaluations suggests that they actually do better than most of their competitors. They are not dependent on TensorFlow, PyTorch, or OpenNMT to build their systems. Today, NiuTrans is a key MT technology provider, especially for enterprises in China.

NiuTrans.NMT is a lightweight and efficient Transformer-based neural machine translation system. Its main features are:

  • Few dependencies. It is implemented with pure C++, and all dependencies are optional.
  • Fast decoding. It supports various decoding acceleration strategies, such as batch pruning and dynamic batch size.
  • Advanced NMT models, such as Deep Transformer.
  • Flexible running modes. The system can be run on various systems and devices (Linux vs. Windows, CPUs vs. GPUs, FP32 vs. FP16, etc.).
  • Framework agnostic. It supports various models trained with other tools, e.g., Fairseq models.
  • The code is simple and friendly to beginners.
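The "fast decoding" bullet deserves a brief illustration. Dynamic batch sizing, one of the acceleration strategies listed above, packs sentences into batches under a fixed padded-token budget rather than using a fixed sentence count, so the hardware is never fed mostly padding. A minimal sketch of the idea in Python (a generic illustration only, not NiuTrans's C++ code):

```python
def dynamic_batches(sentences, max_tokens=4096):
    """Pack tokenized sentences into batches under a padded-token budget.

    The cost of a batch is len(batch) * longest_sentence, because every
    sentence is padded to the longest one in its batch. Sorting by length
    first keeps similar-length sentences together, which reduces padding
    waste. A single over-long sentence still gets its own batch.
    """
    ordered = sorted(sentences, key=len)
    batches, current, longest = [], [], 0
    for sent in ordered:
        longest = max(longest, len(sent))
        if current and (len(current) + 1) * longest > max_tokens:
            batches.append(current)  # flush: adding sent would exceed the budget
            current, longest = [], len(sent)
        current.append(sent)
    if current:
        batches.append(current)
    return batches
```

Because cost is measured in padded tokens rather than sentence count, batches of short sentences grow large while batches of long sentences stay small, keeping per-batch memory roughly constant during decoding.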

When I probed into why NiuTrans had chosen to develop their own NMT technology rather than use the widely accepted open-source solutions, I was provided with a history of the company and its evolution through various approaches to developing MT technology.

The NiuTrans team originated in the NLP Lab at Northeastern University, China (NEUNLP Lab), a machine translation research leader in the Chinese academic world going as far back as 1980. Like many teams elsewhere in the world, they initially studied rule-based MT, from 1980 to 2005. In 2006, Professor Jingbo Zhu (the current Chairman of NiuTrans) returned from a year-long visit to ISI-USC and decided to switch to statistical MT research, working together with Tong Xiao, who was a fresh graduate student at the time and is now the CEO of NiuTrans. They made rapid strides in SMT research, releasing the first version of the NiuTrans.SMT open source in 2011. At that time, Chinese academia primarily used Moses to conduct MT-related research and develop MT engines. The development of the NiuTrans.SMT open source proved that Chinese engineers could do as well as, or even better than, Moses, and also helped to showcase the strength and competence of the NiuTrans team. Thus, in 2012, confident in their MT technology and armed with a dream to connect the world with MT, the NiuTrans team formed an MT company, converting 30+ years of MT research into MT software for industrial use.

Given their origins in academia, they kept a close watch on MT research and breakthroughs worldwide and noticed in 2014 that there was a growing base of research being done with neural network-based deep learning models. Therefore, the NiuTrans team started studying deep learning technologies in 2015 and released its first version of NiuTrans.NMT in December 2016, just three months after Google announced the release of its first NMT engines.

NiuTrans prefers to avoid open-source MT platforms like TensorFlow, PyTorch, or OpenNMT, having developed deep competence in MT technology over 40 years of engagement. The leadership believes there are specific advantages to building the whole technology stack for MT and intends to continue with this basic development strategy. As an example, Chunliang pointed me to the release of NiuTensor, their own deep learning tool, and the NiuTrans.NMT open source. They are confident that they can keep pace with continuous improvements in open source with support from the NEUNLP Lab, which has eight permanent staff and 40+ Ph.D./MS students focusing on MT issues of relevance and interest for their overall mission. This group also allows NiuTrans to stay abreast of the research being done worldwide.

NiuTrans understands that a critical requirement for an enterprise user is to adapt and customize the MT system to enterprise-specific terminology or usage. Thus, it provides both a user terminology module to introduce user terminology into the MT system and a user translation-memory module to introduce the user's sentence pairs to tune the MT system. A more sophisticated solution is incremental training: NiuTrans incorporates user data to modify the model parameters so that the MT model is better adjusted to the features of the user's data.
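One common way to implement such a terminology module (a generic sketch, not NiuTrans's actual implementation) is to swap glossary terms for placeholder tokens before decoding and restore the required target-side terms afterwards:

```python
def apply_terminology(source, translate, glossary):
    """Force glossary terms through an MT step using placeholder tokens.

    `translate` is any black-box MT function; `glossary` maps source terms
    to required target terms. Matching here is exact and case-sensitive;
    a production system would match at the token level after segmentation.
    """
    slots = {}
    for i, (src_term, tgt_term) in enumerate(glossary.items()):
        token = f"TERM{i}"
        if src_term in source:
            source = source.replace(src_term, token)
            slots[token] = tgt_term  # remember what to restore on the target side
    output = translate(source)       # placeholders pass through the MT system
    for token, tgt_term in slots.items():
        output = output.replace(token, tgt_term)
    return output
```

NMT systems can also enforce terminology with constrained decoding inside the beam search; the placeholder approach is simply the easiest to retrofit onto a black-box engine.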

NiuTrans also gathers post-editing feedback on critical language pairs like ZH <> EN and ZH <> JP on an ongoing basis, then analyzes error patterns to drive continuing engine performance improvements.

Quality Improvement, Data Security, and Deployment

NiuTrans evaluates MT system performance using BLEU together with a human evaluation technique that ranks systems relative to one another; they prefer not to use the widely used 5-point scale that assigns an absolute value to a translation. Thus, if they were comparing NiuTrans, Google, and DeepL, they would combine BLEU with humans ranking the three systems' output on the same blind test set.
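For readers unfamiliar with the metric, BLEU is the geometric mean of clipped n-gram precisions multiplied by a brevity penalty. The core computation can be sketched as follows (a simplified, unsmoothed single-pair version for illustration; production evaluation typically uses a library such as sacreBLEU over a full test set):

```python
import math
from collections import Counter

def bleu(hypothesis, reference, max_n=4):
    """Simplified single-pair BLEU: geometric mean of clipped n-gram
    precisions times a brevity penalty. Real implementations add
    smoothing and aggregate counts over the whole test set."""
    hyp, ref = hypothesis.split(), reference.split()
    log_precisions = []
    for n in range(1, max_n + 1):
        hyp_ngrams = Counter(tuple(hyp[i:i + n]) for i in range(len(hyp) - n + 1))
        ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
        # Clip each hypothesis n-gram count by its count in the reference.
        overlap = sum(min(c, ref_ngrams[g]) for g, c in hyp_ngrams.items())
        if overlap == 0:
            return 0.0  # without smoothing, one empty n-gram level zeroes the score
        log_precisions.append(math.log(overlap / sum(hyp_ngrams.values())))
    # Penalize hypotheses shorter than the reference; never reward longer ones.
    brevity = min(1.0, math.exp(1 - len(ref) / len(hyp)))
    return brevity * math.exp(sum(log_precisions) / max_n)
```

Because BLEU only measures n-gram overlap with a reference, it says little about which of two adequate translations reads better, which is exactly why NiuTrans pairs it with relative human ranking.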

NiuTrans also has an ongoing program to improve its MT engines continually. They do this in three different ways:

  1. Firstly, the company has a strong research team that is continually experimenting with and evaluating new research; the impact of this research is continuously tested to determine whether it can be incorporated into the existing model framework. Significant technical innovations of this kind are added to the models two or three times a year.
  2. Secondly, customer feedback, ongoing error analysis, or specialized human evaluation feedback also trigger regular updates to the most important MT systems (e.g. ZH<>EN) at least once a month.
  3. Thirdly, engines will be updated as new data is discovered, gathered, or provided by new clients. High-quality training data is always sought after and considered valuable to drive ongoing MT system improvements.

NiuTrans has performed well in comparative evaluations of their MT systems against other academic and large online MT solutions. Here is a summary of the results from WMT19. They report that their performance in WMT20 is also excellent, but final results have not yet been published.

NiuTrans training data comes mainly from two sources: data crawling and data purchase from reliable vendors.

NiuTrans uses crawlers to collect parallel texts from websites that do not prohibit or prevent this, e.g., the websites of some Chinese government agencies, which often provide data in several languages. They also buy parallel sentences (TMs) and dictionaries from specific data-provider companies, which may require signing an agreement specifying that the provider retains the intellectual property rights to the data.

NiuTrans gets the bulk of its revenue from data-security-conscious customers who deploy their MT systems on-premise. However, NiuTrans is also working on an Open Cloud offering, allowing customers to access an online API and avoid installing the infrastructure needed for on-premise systems. The Open Cloud is a more cost-effective option for smaller SME companies, and NiuTrans has seen rapid adoption of this new deployment in specific market segments.

International customers, especially the larger ones, much prefer to deploy their NiuTrans MT systems on-premise. For those international customers who cannot afford on-premise systems, the NiuTrans Open Cloud solution is an option. This system is deployed on the Alibaba Cloud, which is governed by Chinese internet security laws requiring that user data be kept for six months before deletion. The company plans to build another cloud service on the Amazon Cloud for international customers with data security concerns. This new capability will allow users to encrypt their data locally and transfer it securely to the Amazon Cloud. NiuTrans will then decrypt the source data on their servers, translate it, and finally delete all the user data and the corresponding translation results once the source data has been translated.

NiuTrans currently has 100+ employees, directed by Dr. Jingbo Zhu and Dr. Tong Xiao, two leading MT scientists in China. Shenyang is home to the company's headquarters and R&D team. Technical support and services are currently available in Beijing, Shanghai, Hangzhou, Chengdu, and Shenzhen, and the company is now exploring entering the Japanese market with the assistance of partners in Tokyo and Osaka. While NiuTrans is not a well-known name in the US/EU translation industry today, I suspect they will become an increasingly well-known provider of enterprise MT technology in the future.

Friday, September 4, 2020

The Premium Translation Market: Come On In. The Water’s Perfect.

This is the full text of a response by Kevin Hendzel to a guest post by Luigi Muzii that challenged various attributes and characteristics of the "premium" market that I described in a post focused on this market segment.  

The notion of premium evokes strong opinions from both translators and LSPs, and one can see the range of views and opinions that are highlighted in this post, as well as the other two linked above. Like much of the phenomena in the translation industry and the definition of the translation market itself, the views on the premium market are fragmented. 

Fragmentation means to see partially, to not see the whole. Insight is only possible when one sees the whole. 

We see that professional market research firms completely overlook this market (mostly because it is much harder to research and pin down) and thus perpetuate the view that the market does not exist, but we also see huge differences in their own analyses of what exactly is contained in the "translation market." The differences are so large that they raise credibility questions about the validity of any or all of the estimates currently available.

For the record, I stand by my initial "opinion" on the premium market as I cannot really say that it is more than an opinion. I cannot provide any more data than I already have.

In fact, this discussion on the translation market and what it really is brings to mind a story I was told as a child, about blind men who encounter an elephant for the first time. 

The parable of the Blind Men and an Elephant originated in the ancient Indian subcontinent, from where it has been widely diffused. It is a story of a group of blind men who have never come across an elephant before and who learn and conceptualize what the elephant is like by touching it. Each blind man feels a different part of the elephant's body, but only one part, such as the side or the tusk. They then describe the elephant based on their limited experience and their descriptions of the elephant are different from each other. In some versions, they come to suspect that the other person is dishonest and they come to blows. The moral of the parable is that humans have a tendency to claim absolute truth based on their limited, subjective experience as they ignore other people's limited, subjective experiences which may be equally true. (Source: Wikipedia)

And so these men of Indostan

Disputed loud and long,

Each in his own opinion

Exceeding stiff and strong,

Though each was partly in the right

And all were in the wrong!


As many readers of this blog certainly know, Mr. Muzii and I have been consistently at odds over the existence of the premium market. I’ve lived inside it for decades, so I’m reporting my own personal and extended experience within this market as well as the exceptionally hard work of my colleagues in various segments of the market all over the world. I also had to become an expert on the markets writ large when I was the ATA National Media spokesman (2001-2012) in order to avoid misleading the media, researchers, and my own colleagues.

So I work in the very market that Kirti expertly describes in his original post.

Happily, as such a practitioner, and despite (by extension) being called a “fool” four times, told these views “reflect ignorance,” and demonstrate “a profound lack of respect for bulk market translators,” I still welcome the opportunity to respond to Mr. Muzii from the viewpoint of an individual who has actually lived in this market for most of his professional life.

Mr. Muzii’s repeated denials over the years, and especially those proffered above, are heavily based on repeated speculation on a whole range of market activity with no factual basis whatsoever, combined with the total absence of experience in the premium market. It’s an argument from the absence of data, not the presence of it. This stalemate persists because Mr. Muzii refuses to allow any first-hand or published descriptions of the premium market to ever be allowed to be treated as data.

This reminds me of Lord Kelvin’s resounding and confident claim in 1895: “I can state flatly that heavier-than-air flying machines are impossible.”

I’m pleased that this also gives me the opportunity to go back in time to my Precambrian college days and quote a professor who insisted that the proponents of Marxism had a lot in common with the star of the TV mystery crime show “Columbo.” The star of “Columbo” always knew from the beginning who was guilty. It was fun to watch him trap the offender under a mountain of actual facts. Real evidence.

My professor’s point was that Marxist ideology already had the conclusion in hand, too. The only difference – and this is crucial – is that the Marxists only cared about the conclusion. They were converts. They knew what the conclusion would be. (The premium market does not exist!). No matter what events actually occurred in the world, those facts would be wrestled and twisted and crammed into that Marxist suit. It was the conclusion that was important. Any set of assumptions or facts or evidence could readily be twisted and jammed into that poor distorted suit.

This approach is unhelpful because it makes it difficult to reasonably consider other views, ideas, and concepts – to say nothing about [any] personal experience -- in an increasingly complex world.

Let’s consider this claim by Mr. Muzii:

“Now, anyone who has worked in translation for a while knows that, for the job to be sustainable, a translation should not outperform the original (especially if this is already good), and that a professional translator producing superior texts from crap is not the reincarnation of Cagliostro, but a fool.”

I’ve been in translation for longer than Mr. Muzii, so I guess that means I qualify as “anyone.” It turns out, though, that premium-market clients are far more concerned about whether a translation works as a form of communication. The overriding objective is to successfully convey a message that is crucial to the client and can involve significant market risk. These client/translator collaborative efforts are deeply rooted in comparing multiple different translations and tweaking the options. It’s why premium-market translators are consistently and thoroughly engaged with their clients. This is in fact discussed in detail in the original post on premium markets. I can’t imagine how Mr. Muzii could have missed David Jemielity’s video image and quote in 14-point type on how communication is the primary objective.

Mr. Muzii’s claim would also be news to the late Gabriel García Márquez, who called Gregory Rabassa's translation of “One Hundred Years of Solitude” better than the Spanish original.

Maybe the winner of the 1982 Nobel Prize in Literature was just another of Mr. Muzii’s “fools.”

I also found this comment by Mr. Muzii interesting:

“Finally, bringing on the example of US defense contracts to support the existence of the premium translation market is pointless. In this case, translation jobs go to some professional services contracting company, which is part of some large conglomerate, as in the case of GLS and other regular military contractors.”

I found this argument to fail on so many points that it’s a real head-scratcher to figure out how to begin to address it. First, Mr. Muzii selects a single isolated example and then turns around and projects it out to represent an enormously complicated multibillion-dollar market. Second, most of that spend bypasses companies because top-notch translators and interpreters are directly recruited and hired by government agencies at USD six-figure salaries, which only begins to cover the demand. At the ATA conference in Phoenix, Arizona, back in 2003, I worked personally to put a significant number of skilled translators in touch with these agencies, which again represented only a tiny trickle of recruiting activity. Third, large companies do provide a significant percentage of talent, but they pay translators exceedingly well because of the complexity of the missions. Fourth, Kirti himself – who has had a peek into this market – observed in the original post that this certainly extends beyond “U.S. defense contractors.” While many U.S. intelligence and defense agencies are major consumers of MT, their funding of premium-market translation experts is 50x greater.

I thought it would be most helpful to end on a topic where Mr. Muzii and I are in solid agreement.

“The specialization required to become a skillful and well-paid translator with a lasting position at the high end of the translation market involves substantial commitment, time, and investment. Skills do not grow on trees or accumulate overnight; building a network of relations is toilsome; learning to exploit it may require some major changes in character, and no one can guarantee stable high prices and job satisfaction.”

This is all certainly true. What I have not yet figured out is how we will train future generations of premium-market translators as our working world is increasingly interwoven with AI and ML, both of which are improving daily. Much of the bulk market has already been gouged out by these technologies, with varying degrees of success. My guess is that translation programs will need to recruit from graduate programs in engineering, physics, and law, etc., or future generations of premium-market translators will simply train themselves.

So this is a topic about which I am – without a doubt – woefully ignorant.

Kevin Hendzel is an Award-Winning Translator, Linguist, Author, National Media Consultant, and Translation Industry Expert

Monday, August 31, 2020

Lead and Gold: Challenging the Premium Translation Market Claims

This is a guest post that is a detailed response by Luigi Muzii and is a clear rebuttal of my previous post on the Premium Translation market. While Luigi agrees that the translation market is made up of many smaller segments, he does not see enough evidence of a clearly discernible premium market. He admits that there are premium customers who are willing to pay higher prices, but he questions the long-term viability of a market that ALWAYS pays higher prices for translation when reasonable lower-cost alternatives are available.

Since Luigi took the trouble to document his criticisms in a complete post, I felt it deserved to be an independent post that furthers the dialogue on the subject by presenting another opinion and, hopefully, attracting further conversation from those who see the issues most clearly.

We live today in a world where respectful dialogue is woefully inadequate, especially in national politics. While some of the comments might be seen as scathing, or overly negative, I find that his statement of his views (which I do not necessarily agree with) meets my standards for respectful professional discussion. Disagreements need to be forceful at times, and for this blog, I only expect that they do not become petty and personally disrespectful.

I also gathered that a primary motivation for his comments was his concern that premium-market discussions would discourage both young translators who are just starting out and old-timers who might second-guess choices they made years ago.

There are, unfortunately, clear value associations in the very contrast created by the words "bulk" versus "premium." The value-creation aspects are more opaque in this dichotomy: I have seen large MT projects create more value (in monetary terms) than the most expert translators can around a single project, and there is a place for the whole spectrum of translation production possibilities that exist in the world today.

Thus, I maintain that for those with demonstrated competence and true subject matter expertise, a premium market does exist. This means that it is not just higher priced work, but also that the client to translator engagement is much more active, collaborative, and consultative. 


Do you know the way to El Dorado?
I’ve been away so long. I may go wrong and lose my way
Do you know the way to El Dorado?
I’m going back to find some peace of mind in El Dorado.

Do you know the way to El Dorado?

The fabled El Dorado of translation, the “premium market”, is to my mind much like a losing stream. Unfortunately for the many believers who still exist, there is no El Dorado. It is a legend, nothing more than a popular topic in blogs and at conferences. There is no proof of its existence because none of its fierce advocates have ever produced any, so there is no map or instructions to get there and no one is able or willing to provide any.

A few years ago, the news circulated frantically, without any confirmation from the said client, that Le Manoir de la Régate, a gastronomic restaurant in Nantes, had paid a notorious advocate of the mythical “premium market” € 800,00 for the translation of a “postcard”.

Lately, another example has been circulating, supposedly to put an end to any controversy about the existence of the so-called “premium market”.

Now, anyone who has worked in translation for a while knows that, for the job to be sustainable, a translation should not outperform the original (especially if this is already good), and that a professional translator producing superior texts from crap is not the reincarnation of Cagliostro, but a fool.

It may, of course, happen that the translator’s writing skills are such, that they easily outperform the original from the start, but in that case, the client is a fool, willing to pay more for a derivative work than for authoring. Provided, of course, that the translator is not so foolish or bashful as to accept to work for a pittance while being aware of their skills, the poor quality of the source text, and the intended use of the translation.

No top defense counsel is going to represent a pickpocket in court, just as no pickpocket is likely to have the means to hire a top defense counsel. Maybe because no defense counsel becomes the best in court overnight.

Similarly, a wicked lawyer may counsel a pickpocket and bill a fortune, just as a translator may charge an outrageous fee for translating a postcard. But, if this is true, this is rather a matter of professional ethics. Incidentally, counseling may not save the pickpocket, and the postcard may remain an isolated marketing attempt (the restaurant’s website above, for example, is still in French only).

Anyway, a customer looking for a translation to outperform the original has much more serious problems than finding a top (premium) translator. On the other hand, it is highly unlikely that the author of a legislative text, a patent application, an economic or financial report, or legal advice, would accept any comments, remarks, or writing directions from a translator, however capable. Unless, of course, the translator is also an equally capable lawyer, engineer, scientist, or economist.

Today, all published content is indeed global, and users can easily have it machine translated if they do not master the language(s) in which it needs to be available. On the other hand, in an ideal world, content would be designed and authored with translation in mind, the software would be perfectly internationalized and multimedia ready for multilingual subtitling. Only in an ideal world.

In the real world, translation is often unappreciated, most often seen as a necessary evil, and, as such, left for the end of the content production cycle, to be done cost-efficiently. So, a client who pays more than four times the market average for the translation of a standard text is a fool, all the more so if that translation is not worth the price differential.


The commoditization that has been affecting translation for some years now has the same effects that all commodities endure.

For example, deforestation and climate change are progressively and significantly reducing coffee crops and worsening the conditions of extreme poverty in which farmers are already living, although the coffee trade generates revenues of over US$ 100 billion per year, leading many farmers to leave. The increasing sales of fine varieties like kopi luwak will be of no help.

Just like the demand for coffee, the demand for translation is widespread and increasing; the global marketplace is crowded with price-sensitive buyers, and there is little point now in bringing up the issue of information asymmetry and signaling, which has been regularly dismissed for years as irrelevant.

As with coffee and kopi luwak, there is no translation “premium market”: there may be a few “premium customers” that can, at most, and with much goodwill (from my side), represent a segment. And you should be accurate with your lexicon, especially if you are a linguist working for the banking and financial industry.

Another example could be a bespoke, hand-sewn three-piece men's suit from a luxury tailor shop in Savile Row, which does not necessarily make any of the tailors in the shop, and maybe not even the owner, a wealthy guy. Likewise, Brioni can provide a very demanding customer with a sartorial ready-to-wear suit while Marinella can still sell its famous handmade custom ties at any of its stores around the world.

In short, the bulk-premium dichotomy is not only simplistic, Manichaean, and capricious, it is mala fide and reflects ignorance and a profound lack of respect for all those who make a more than respectful living working with “bulk-market” customers.

Finally, bringing on the example of US defense contracts to support the existence of the premium translation market is pointless. In this case, translation jobs go to some professional services contracting company, which is part of some large conglomerate, as in the case of GLS and other regular military contractors.

If “premium” simply means that the word rate is higher, possibly a few isolated, individual translators and, most probably, some sub-contracting LSPs may earn better money than average, but this definitely does not make US defense contracts any kind of  “premium market”.

In contrast, it is possible that other major institutional customers, e.g. the EU, push hard on translation prices when procuring for translations to compensate for the untenable stipends of their in-house translators, thus further contributing to commoditizing translation.

For all these reasons, should a “premium market” exist it would most probably be “fiercely guarded and (often) shrouded in secrecy to prevent additional competition”, and this would make it even harder to find and access it.

Venture Capital and Private Equity

The advocates of the so-called “premium market” have been using the interest that some private equity firms and, to a much lesser extent, a few venture-capital funds have recently shown in the translation industry to restate their arguments.

Others maintain that the Big Four accounting firms regularly approach translation boutique firms to explore potential opportunities.

Leaving aside for a moment the numerous and repeated criticisms made over the years of those Big Four firms, which figure prominently in corporate-collusion allegations, and the suspicion of money laundering behind some PE transactions, some mid-to-high-gross-margin LSPs have recently caught the attention of PE firms because higher gross margins usually mean higher cash-flow margins for investors.

However, VC and PE firms are typically interested in short-term growth, possibly via M&A, but top growth rates are made up of, among other things, the pace of hiring, the complexity of services delivered, and the capital intensity of expansion, with respect to the market size, maturity, and competition, all things that are very hard to find in SME LSPs.

Indeed, the translation industry’s CAGR is often claimed to be steadily above world GDP growth, following the explosion of content volume and expanded global trade. LSPs in the gaming and life-sciences niches might in fact grow even faster, but PE firms usually expect a three-times cash-on-cash return or more over a typical five-year investment horizon, with an expected IRR of at least 20-25 percent. Objectively, these results are hard to achieve by investing in SME LSPs.


As a matter of fact, most of the people hailing the fabled “premium market” live and work outside it, and most probably do not know the path to it or the key to access it.

This, however, does not keep them from ranting against those whom they believe, or perhaps merely assume, to be the culprits for the decline of translation and the translation profession.

Indeed, PE firms targeting their investments in the translation industry cut freelancers out of the equation, and this gets things back to square one with the so-called “premium market”. However, the recent RWS takeover of SDL shows that domain expertise builds value, and this can be found not only in high-profile professionals. In this respect, back in 1993, AITI (the Italian Association of the Translation Industry) invited RWS to an international conference on translation quality assessment to report on being the first translation company certified to BS 5750, the predecessor of ISO 9002. The conference proceedings are available for download.

The fundamentalist fever against the ‘corruption’ of translation also affects academia. Actually, it started there and has been spreading from there ever since. Specifically, translation students are not taught to deal with the intricacies of the real market because most teachers have never translated a line in their lives. Preserving the status quo of the old curriculum, with its associated models, is a reason for the continuing survival of these attitudes.

So, a recent paper on the sustainability of the current models in the translation industry comes as no surprise, even if it comes from an otherwise seemingly innovative institution like DCU.

Sustainability is not a new topic and has often been associated with quality. Unfortunately, a major flaw in Joss Moorkens’s paper can be found right away. In actual fact, the working situation described in his paper is relatively recent and has not been “live for decades now”.

Moorkens’s paper presents the typical traits of confirmation bias, the same that can be found in most arguments from the advocates of the so-called “premium market”. Confirmation biases contribute to overconfidence in personal beliefs and can maintain or strengthen beliefs in the face of contrary evidence.

In fact, the paper does not substantiate or provide any actual evidence of the alleged widespread application of Taylorism to translation work, which Moorkens also apparently confuses with Fordism. It may happen. However, the application of documented information to processes and workflows is the basis for quality management standards and even applies to the many (somewhat poor) translation quality assurance standards. Even the existing translation quality assessment models, which come mostly from academics, are based, more or less knowingly, on “scientific management”. Finally, the standardization of production has allowed consumers to buy cars, and soldiers to be safer when using ammunition. See my further comments in the freely available A Contrarian’s View on Translation Standards.

Of course, the remuneration of translators is a critical issue, but M&A and PE firms have little to do with it. It is true, though, that remuneration has been decreasing for the last three decades due to the many technological innovations introduced into the translation industry almost entirely from the outside. This is another interesting topic that academics seemingly prefer to ignore in their studies, possibly because it falls outside their field of study and does not help safeguard the status quo.

On the other hand, the issue of wage reduction has been at the heart of business associations’ propaganda for years, because it is easy, costs nothing, and spares entrepreneurs from having to open their wallets to modernize production structures and processes.

Surprise: things are changing even in the temples of laissez-faire. For example, in The Economics of Belonging, the Financial Times’s European economics commentator Martin Sandbu argues that compressing labor costs reduces productivity, that higher labor costs push companies to move to more advanced production models and make more investments, and that, as long as people are paid little, companies settle for low value-added production. Likewise, in The Limits of the Market, Paul De Grauwe argues that employers who like to keep labor costs low will only succeed if they work to curb technological progress.

Wait a minute! Doesn’t this have something to do with Gresham’s Law? Maybe the advocates of the fabled “premium market” are behind on their labor economics. And not just that.

However, if RWS’s takeover of SDL is good news for the translation industry, as many seem to think, there will be less and less space for a “premium market”. The specialization required to become a skillful and well-paid translator with a lasting position at the high end of the translation market involves substantial commitment, time, and investment. Skills do not grow on trees or accumulate overnight; building a network of relations is toilsome; learning to exploit it may require some major changes in character; and no one can guarantee stable high prices and job satisfaction. Time is crucial, unless your parents or your spouse can indefinitely pay for your continuing education while you are stuck in the “bulk market,” and possibly find you a permanent position in some financial or military institution.

Do advocates of the “premium market” ever tell this to their pupils willing to hear their fairy tales and feed their wishful thinking?

To be honest, in his paper, at least Joss Moorkens admits that “the hollowing out of the middle section of the market may make it more difficult to climb to the high end”.



Luigi Muzii has been in the "translation business" since 1982 and has been a business consultant since 2002, in the translation and localization industry through his firm. He focuses on helping customers choose and implement best-suited technologies and redesign their business processes for the greatest effectiveness of translation and localization-related work.

This link provides access to his other blog posts.


Post Script Addendum

I dug into the €800,00 postcard translation example given above and found some facts that I think are worth sharing to give it accurate context.

The example comes from Chris Durban, who uses it as a teaching aid (shown above) in a classroom setting to explore different aspects of value-added translation work. The translation was done for an FDI client of hers whose primary focus was texts intended to attract foreign investment into a region of France to increase employment. This client was sending a team to Davos; the postcard was part of a press kit highlighting the quality-of-life characteristics of the region and was intended to persuade investors and expatriates to consider the region for new business initiatives. It was to be used at the WEF conference, where it would be compared to other premium marketing and communication messaging.

"My point with this exercise is generally to introduce (and raise awareness of) value pricing, expertise, context (purpose of the text, client's communication goal), and time factors."  

It required the contribution of a specialist cookbook translator working together with Chris under tight deadlines to make the text work with the quality-of-life theme they were trying to promote in the press kit. The point of the example is to show how value is added when one looks beyond the words that need to be translated and focuses on the broader intent of the communication, which typically requires more elaborate and knowledgeable integration procedures.