Wednesday, October 21, 2020

The Evolving Translator-Computer Interface


This is a guest post by 
Nico Herbig from the German Research Center for Artificial Intelligence (DFKI).

For as long as I have been involved with the translation industry, I have wondered why the prevailing translator-computer interface was so arcane and primitive. It seems that the basic user interface used for managing translation memory was borrowed from DOS spreadsheets and eventually evolved into Windows spreadsheets. Apart from problems related to inaccurate matching, the basic interaction model has also been quite limited: data enters the translation environment through some form of file or text import and is then processed in a columnar word-processing style. I think these limitations were largely due to the insistence on maintaining a desktop computing model for the translation task. While this does allow some power users to become productive keystroke experts, it also presents a demanding learning curve to new translators.




Cloud-based translation environments can offer much more versatile and powerful interaction modes, and I saw evidence of this at the recent AMTA 2020 conference (a great conference, by the way, that deserves much better social media coverage than it has received). Nico Herbig from the German Research Center for Artificial Intelligence (DFKI) presented a multi-modal translation environment that I felt shows great promise in updating the translator-machine interaction experience for the modern era.
 
Of course, it includes the ability to interact with the content via speech, handwriting, touch, eye-tracking, and seamless interaction with supportive tools like dictionaries, concordance databases, and MT, among other possibilities. Nico's presentation focuses on the interface needs of the PEMT task, but the environment could be reconfigured for scenarios where MT is not involved, or where MT is used only when it adds value to the translation task. I recommend that interested readers take a quick look through the video presentation to get a better sense of this.

*** ======== ***


MMPE: A Multi-Modal Interface for Post-Editing Machine Translation

As machine translation has been making substantial improvements in recent years, more and more professional translators are integrating this technology into their translation workflows. The process of using a pre-translated text as a basis and improving it to create the final translation is called post-editing (PE). While PE can save time and reduce errors, it also affects the design of translation interfaces: the task changes from mainly generating text to correcting errors within otherwise helpful translation proposals, thereby requiring significantly less keyboard input, which in turn offers potential for interaction modalities other than mouse and keyboard. To explore which PE tasks might be well supported by which interaction modalities, we conducted a so-called elicitation study, where participants can freely propose interactions without focusing on technical limitations. The results showed that professional translators envision PE interfaces relying on touch, pen, and speech input combined with mouse and keyboard as particularly useful. We thus developed and evaluated MMPE, a CAT environment combining these input possibilities. 

Hardware and Software

MMPE was developed using web technologies and runs in a browser. For handwriting support, one should ideally use a touch screen with a digital pen, where a larger display and the option to tilt the screen or lay it flat on the desk facilitate ergonomic handwriting. Nevertheless, any tablet device also works. To improve automatic speech recognition accuracy, we recommend using an external microphone, e.g., a headset. Mouse and keyboard are naturally supported as well. For exploring our newly developed eye-tracking features (see below), an eye tracker needs to be attached. Depending on the features to explore, a subset of this hardware is sufficient; there is no need for the full setup. Since our focus is on exploring new interaction modalities, MMPE's contribution lies in the front-end. The back-end, in contrast, is rather minimal, supporting only the storing and loading of files and the forwarding of the microphone stream to speech recognition services. Naturally, we plan to extend this functionality in the future, e.g., adding project and user management functionality, and integrating Machine Translation (instead of loading it from file), Translation Memory, Quality Estimation, and other tools directly into the prototype.


Interface Layout

For the layout, we implemented a horizontal source-target view and tried to avoid overloading the interface. On the far right, support tools are offered, e.g., a bilingual concordancer (Linguee). The top of the interface shows a toolbar where users can save, load, and navigate between projects, and enable or disable spell checking, whitespace visualization, speech recognition, and eye tracking. The current segment is enlarged, thereby offering space for handwritten input and allowing users to view the context while still seeing the current segment comfortably. The view for the current segment is further divided into the source segment (left) and tabbed editing panes for the target (right): one for handwriting and drawing gestures, and one for touch deletion and reordering as well as standard mouse and keyboard input. By clicking the tabs at the top, the user can quickly switch between the two modes. As the prototype focuses on PE, the target views initially show the MT proposal to be edited. Undo and redo functionality and segment confirmation are also implemented, through hotkeys, buttons, or speech commands. Currently, we are adding further customization possibilities, e.g., to adapt the font size or to switch between displaying source and target side by side or one above the other.


Handwriting

Handwriting in the handwriting tab is recognized using the MyScript Interactive Ink SDK, which worked well in our study. The input field further offers drawing gestures such as strike-through or scribble for deletions, breaking a word into two (draw a line from top to bottom), and joining words (draw a line from bottom to top). If there is insufficient space to handwrite the intended text, the user can create such space by breaking the line (draw a long line from top to bottom). The editor further shows the recognized input immediately at the top of the drawing view. Apart from the pen, the user can use a finger or the mouse for handwriting; all three were used in our study, even though the pen was clearly preferred. Our participants highly valued deletion by striking or scribbling through the text, as this nicely resembles standard copy-editing. However, handwriting for replacements and insertions was considered to work well only for short modifications. For more extended changes, participants argued that one should instead fall back to typing or speech commands.

Touch Reorder

Reordering using (pen or finger) touch is supported with a simple drag-and-drop procedure. Users have two options: (1) they can drag and drop single words by starting a drag directly on top of a word, or (2) they can double-tap to start a selection process, define which part of the sentence should be selected (e.g., multiple words or part of a word), and then move it.

We visualize the picked-up word(s) below the touch position and show the calculated current drop position through a small arrow element. Spaces between words and punctuation marks are automatically fixed, i.e., double spaces at the pickup position are removed, and missing spaces at the drop position are inserted. In our study, touch reordering was highlighted as particularly useful or even “perfect” and received the highest subjective scores and lowest time required for reordering. 
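The automatic space fixup described above can be sketched at the token level. The following is an illustrative Python sketch, not MMPE's actual implementation (which operates on the live editor text): it moves a picked-up span to a drop position and then detokenizes with simple spacing rules, so no double or missing spaces remain.

```python
def reorder_tokens(tokens, start, end, drop):
    """Move tokens[start:end] to index `drop` (given relative to the
    original token list), then detokenize with simple spacing rules:
    single spaces between words, no space before punctuation."""
    span = tokens[start:end]
    rest = tokens[:start] + tokens[end:]
    # If the drop position lies after the picked-up span, shift it left
    # to account for the removed tokens.
    if drop >= end:
        drop -= len(span)
    rest[drop:drop] = span
    out = ""
    for tok in rest:
        if out and tok not in {",", ".", ";", ":", "!", "?"}:
            out += " "
        out += tok
    return out
```

For example, moving "quick" to the end of "the quick brown fox" yields "the brown fox quick" with clean single spacing at both the pickup and drop positions.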

 

Speech

To minimize lag during speech recognition, we use a streaming approach, sending the recorded audio to IBM Watson servers to receive a transcription, which is then interpreted in a command-based fashion. The transcription itself is shown at the top of the default editing tab next to a microphone symbol. As commands, post-editors can “insert,” “delete,” “replace,” and “reorder” words or sub-phrases. To resolve ambiguous positions, anchors can be specified, e.g., “after”/“before”/“between,” or the occurrence of the token (“first”/“second”/“last”) can be given. A full example is “replace A between B and C by D,” where A, B, C, and D can be words or sub-phrases. Again, spaces between words and punctuation marks are automatically fixed. In our study, speech input received good ratings for insertions and replacements but worse ratings for reorderings and deletions. According to the participants, speech becomes especially compelling for longer insertions and is preferable when commands remain simple. For invalid commands, we display why they are invalid below the transcription (e.g., “Cannot delete the comma after nevertheless, as nevertheless does not exist”). Furthermore, the interface temporarily highlights insertions and replacements in green, deletions in red (the space at the position), and combinations of green and red for reorderings. The color fades away after the command.
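The command-based interpretation of the transcription can be illustrated with a small pattern matcher. This is a hypothetical Python sketch built only from the example commands quoted above; MMPE's actual parser, and in particular its handling of occurrence anchors like “first”/“second”/“last,” is certainly richer.

```python
import re

# A minimal command grammar, sketched from the examples in the text.
PATTERNS = [
    ("replace", re.compile(r"^replace (?P<target>.+?)"
                           r"(?: between (?P<left>.+?) and (?P<right>.+?))?"
                           r" by (?P<new>.+)$")),
    ("delete",  re.compile(r"^delete (?P<target>.+?)"
                           r"(?: (?:after|before) (?P<anchor>.+))?$")),
    ("insert",  re.compile(r"^insert (?P<new>.+?)"
                           r" (?P<rel>after|before) (?P<anchor>.+)$")),
]

def parse_command(utterance):
    """Return (operation, filled slots) for a recognized command,
    or None if the utterance matches no pattern."""
    text = utterance.strip().lower()
    for op, pat in PATTERNS:
        m = pat.match(text)
        if m:
            return op, {k: v for k, v in m.groupdict().items() if v}
    return None
```

A parse failure (None) would be the point at which an interface reports why a command is invalid, as MMPE does below the transcription.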

Multi-Modal Combinations of Pen/Touch/Mouse & Keyboard with Speech

Multi-modal combinations are also supported: Target word(s)/position(s) must first be specified by performing a text selection using the pen, finger touch, or the mouse/keyboard. 

Afterwards, the user can use a voice command like “delete” (see the figure below), “insert A,” “move after/before A/between A and B,” or “replace with A” without needing to specify the position/word, thereby making the commands less complex. In our study, multi-modal interaction received good ratings for insertions and replacements, but worse ratings for reorderings and deletions. 

Eye Tracking

While not yet tested in a study, we are currently exploring other approaches to enhance PE through multi-modal interaction, e.g., through the integration of an eye tracker. The idea is to simply fixate on the word to be replaced/deleted/reordered, or on the gap used for insertion, and state the simplified speech command (e.g., “replace with A”/“delete”), instead of having to manually place the cursor through touch/pen/mouse/keyboard. To provide feedback, we show the user's fixations in the interface and highlight text changes, as discussed above. Apart from possibly speeding up multi-modal interaction, this approach would also address the issue reported by several participants in our study that one has to “do two things at once,” while keeping the advantage of simple commands in comparison to the speech-only approach.

Logging

MMPE supports extensive logging functionality, where we log all text manipulations at a higher level to simplify the analysis of text editing. Specifically, we log whether the manipulation was an insertion, deletion, replacement, or reordering, together with the manipulated tokens, their positions, and the whole segment text. Furthermore, all log entries contain the modality of the interaction, e.g., speech or pen, thereby allowing analysis of which modality was used for which editing operation.
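As a sketch of what such a higher-level log entry might contain, the helper below serializes one edit event. The field names are illustrative assumptions, not MMPE's actual schema.

```python
import json
import datetime

def log_edit(operation, tokens, positions, segment, modality):
    """Build one higher-level log entry of the kind described above:
    the edit operation, the manipulated tokens and their positions,
    the whole segment text, and the input modality used."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "operation": operation,   # insertion | deletion | replacement | reordering
        "tokens": tokens,         # the manipulated token(s)
        "positions": positions,   # token indices within the segment
        "segment": segment,       # full segment text after the edit
        "modality": modality,     # e.g. "speech", "pen", "touch", "keyboard"
    }
    return json.dumps(entry)
```

Logging at this level, rather than as raw keystrokes, is what makes per-modality analysis of editing operations straightforward.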


Evaluation

Our study with professional translators showed a high level of interest and enthusiasm about using these new modalities. For deletions and reorderings, pen and touch both received high subjective ratings, with the pen rated even higher than mouse & keyboard. Participants especially highlighted that pen and touch deletion or reordering “nicely resemble a standard correction task.” For insertions and replacements, speech and the multi-modal combination of select & speech were seen as suitable interaction modes; however, mouse & keyboard were still favored and faster. Here, participants preferred the speech-only approach when commands are simple but stated that the multi-modal approach becomes relevant when a sentence's ambiguities make speech-only commands too complex. However, since the study participants stated that mouse and keyboard work well only due to years of experience and muscle memory, we are optimistic that these new modalities can yield real benefits within future CAT tools.

Conclusion

Due to continuously improving MT systems, PE is becoming more and more relevant in modern-day translation. The interfaces used by translators, however, still heavily focus on translation from scratch, and in particular on mouse and keyboard input. Since PE requires less text production and more error correction, we implemented and evaluated MMPE, a CAT environment that explores the use of speech commands, handwriting input, touch reordering, and multi-modal combinations for the PE of MT.

In the next steps, we want to run a study that specifically explores the newly developed combination of eye and speech input for PE. Beyond that, we plan longer-term studies exploring how modality usage changes over time, and whether translators continuously switch modalities or stick to specific ones for specific tasks.

Instead of replacing the human translator with artificial intelligence (AI), MMPE investigates approaches to better support human-AI collaboration in the translation domain by providing a multi-modal interface for correcting machine translation output. We are currently working on proper code documentation and plan to release the prototype as open source within the next few months. MMPE was developed in close collaboration between the German Research Center for Artificial Intelligence (DFKI) and Saarland University and is funded in part by the German Research Foundation (DFG).


Contact

Nico Herbig - nico.herbig@dfki.de

German Research Center for Artificial Intelligence (DFKI)

Further information:

Website: https://mmpe.dfki.de/

Paper and additional information:

Multi-Modal Approaches for Post-Editing Machine Translation
Nico Herbig, Santanu Pal, Josef van Genabith, Antonio Krüger. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. ACM 2019.
ACM Digital Library - Paper access

(Presenting an elicitation study that guided the design of MMPE)

MMPE: A Multi-Modal Interface using Handwriting, Touch Reordering, and Speech Commands for Post-Editing Machine Translation
Nico Herbig, Santanu Pal, Tim Düwel, Kalliopi Meladaki, Mahsa Monshizadeh, Vladislav Hnatovskiy, Antonio Krüger, Josef van Genabith. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations. ACL 2020.
ACL Anthology - Paper access

(Demo paper presenting the original prototype in detail)

MMPE: A Multi-Modal Interface for Post-Editing Machine Translation
Nico Herbig, Tim Düwel, Santanu Pal, Kalliopi Meladaki, Mahsa Monshizadeh, Antonio Krüger, Josef van Genabith. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. ACL 2020.
ACL Anthology - Paper access - Video

(Briefly presenting MMPE prototype and focusing on its evaluation)

Improving the Multi-Modal Post-Editing (MMPE) CAT Environment based on Professional Translators’ Feedback
Nico Herbig, Santanu Pal, Tim Düwel, Raksha Shenoy, Antonio Krüger, Josef van Genabith. Proceedings of the 1st Workshop on Post-Editing in Modern-Day Translation at AMTA 2020. ACL 2020.
Paper access - Video of presentation

(Recent improvements and extensions to the prototype)

Thursday, September 24, 2020

NiuTrans: An Emerging Enterprise MT Provider from China

This post highlights a Chinese MT vendor who I suspect is not well known in the US or Europe currently, but who I expect will become better known over the coming years. While the US giants (FAAMG) still dominate the MT landscape around the world today, I think it is increasingly possible that other players from around the world, especially from China, may become much more recognized in the future.

One indicator that has historically been reliable for forecasting emerging economic power is the volume of patent filings in a country. This was true for Japan and Germany, where voluminous patent activity preceded each country's economic rise, and more recently this predictor has also aligned with the rise of South Korea and China as economic powerhouses. However, the sheer volume of filings is not necessarily a lead indicator of true innovation, and some experts say that the volume of patents filed and granted abroad is a better indicator of innovation and patent quality. But today we see emerging giants from Asia in consumer electronics, automobiles, eCommerce, and internet services, and nobody questions the building innovation momentum in Asia.


Artificial Intelligence (AI) is heralded by many as a key driver of wealth creation for the next 50 years. Building momentum with AI requires a combination of access to large volumes of "good" data, computing resources, and deep expertise in machine learning, NLP, and other closely related technologies. Today, the US and China look poised to be the dominant players in the wider application of AI and machine learning-based technologies, with a few others close behind. Here, too, deep knowledge and clout are indicated by the volume of influential papers published and referenced by the global community. A recent analysis by the Allen Institute for Artificial Intelligence in Seattle, Washington, found that China has steadily increased its share of authorship of the top 10% most-cited papers. The researchers found that America's share of the most-cited 10 percent of papers declined from a high of 47 percent in 1982 to a low of 29 percent in 2018; China's share, meanwhile, has been "rising steeply," reaching a high of 26.5 percent last year. Though the US still has significant advantages in the relative supply of expert manpower and its dominance in the manufacture of AI semiconductor chips, this too is slowly changing, even though most experts expect the US to maintain leadership for other reasons.

Credit: Allen Institute for Artificial Intelligence

These trends also impact the translation industry, and they change the relative benefit and economic value of different languages. The global market is slowly shifting from a FIGS-centric view of the world to one where both the most important source languages (ZH, KO, HI) and target languages are changing. The fastest-growing economies today are in Africa and Asia and are not likely to be well served by a FIGS-centric view, though it appears that English will remain a critical world language for knowledge sharing for at least another 25 years. These changes create an opportunity for agile and skillful Asian technology entrepreneurs like NiuTrans, who are much more tuned in to this rapidly evolving world. I have noted that some of the most capable new MT initiatives I have seen in the last few years were based in China. India has lagged far behind with MT, even though the need there is much greater, because of the myth that English matters more, and possibly because of a lack of governmental support and sponsorship of NLP research.


The Chinese MT Market: A Quick Overview

I recently sat down with Chungliang Zhang from NiuTrans, an emerging enterprise MT vendor in China, to discuss the Chinese MT market and his company’s own MT offerings. He pointed out that China is the second-largest global economy today, and it is now increasingly commonplace for both Chinese individuals and enterprises to have active global interactions. The economic momentum naturally drives the demand for automated translation services.

Some examples, he pointed out:

In 2019, China’s outbound tourist traffic totaled 155M people, up 3.3% from the previous year. This massive volume of traveler traffic results in a concomitant demand for language translation. Chungliang pointed out that this travel momentum significantly drives the need for voice translation devices in the consumer market like those produced by Sougou, iFlyTek, and others, which have been very much in demand in the last few years.

There is also a growing interest by Chinese enterprises, both state-owned or privately owned, to build and expand their business presence in global markets. For example, Alibaba, China’s largest eCommerce company, is listed on the NYSE and has established an international B2B portal (Alibaba.com) where 20 million enterprises gather and work to “Buy Global, Sell Global.” Currently, the Alibaba MT team builds the largest eCommerce MT systems globally, often reaching volumes of 1.79 billion translation calls per day, which is a larger transaction volume than either Google or Amazon.

“All in all, as we can see it, there is a clear trend that MT is increasingly being used in more and more industries, such as language service industries, intellectual property services, pharmaceutical industries, and information analysis services.”

While it is clear that consumers and individuals worldwide are regularly using MT, the primary enterprise users of MT in China are government agencies and internet-based businesses like eCommerce. This need for translation is now expanding to more enterprises who seek to increase their international business presence and realize that MT can enable and accelerate these initiatives.

The Chinese MT technology leaders in terms of volume and regular user base are the internet services giants (such as Baidu, Tencent, Alibaba, Sogou, Netease) or the AI tech giants (such as iFlyTek). Google Translate and Microsoft Bing Translator are also popular in China since they are free, but they don’t have a large share of the total use if the focus is strictly on MT technology.

When asked to comment on the characteristics and changes in the Chinese MT market, Chungliang said:

“In our understanding, Sogou and iFlytek's primary business focus is the B2C market, and thus both of them develop consumer hardware like personal voice translators. Sogou was recently (July 29, 2020) purchased by Tencent (a major social media player), so we don’t know what will happen next. iFlytek is famous for its Speech-To-Speech technology capabilities. Thus it is natural for them to develop MT, to get the two technologies integrated and grab a larger share of the market.

As for the other important MT players in China, Alibaba MT mainly serves its own globally focused eCommerce business, and Tencent Translate focuses on meeting the translation needs of its users in social networking scenarios. Like Google Translate, Baidu Translate is a portal to attract individual users who might need translation during a search; it also serves to expand Baidu's influence as a whole. Netease Youdao, meanwhile, focuses on the education industry, and the Youdao team integrates the Youdao online dictionary, direct MT, and human translation.

What are the main languages that people/customers translate? As far as we know, the most translated language is English, Japanese is second, followed by Arabic, Korean, Thai, Russian, German, and Spanish.” Of course, this is all directly to and from Chinese.


NiuTrans Focus: The Enterprise

The NiuTrans team learned very early in its startup phase that its business survival was linked to providing MT services for the enterprise rather than for individual users and consumers. The market for individuals is dominated by offerings like Google Translate and Baidu Translate that provide virtually free services. In contrast, NiuTrans is focused on meeting enterprise demands for MT, which often means deploying on-premise MT engines and developing custom engines. These enterprises tend to be concentrated around Intellectual Property and Patent services, Pharmaceuticals, Vehicle Manufacturing, IT, Education, and AI companies. For example, NiuTrans builds customized patent-domain MT engines for the China Patent Information Center (CNPAT, a branch of the China National Intellectual Property Administration and a large-scale patent information service based in Beijing).

CNPAT has the largest collection of multilingual parallel patent data and serves ongoing and substantial demand for patent-related MT in various use scenarios such as patent application filing and examination, patent-related transactions, and patent-based lawsuits. Given the scale of the client's needs, NiuTrans sends an R&D team on-site to work with CNPAT's technical team on data processing and data cleaning. This data is then used in the NiuTrans.NMT training module to develop patent-domain NMT engines on CNPAT's on-premise servers. The on-site team also develops custom MT APIs on demand to fit into CNPAT's current workflow and customer servicing needs.


Besides powering and enabling the specialized translation needs of services like CNPAT, NiuTrans also provides back-end MT services for industrial leaders, including iFlyTek (also an early investor in NiuTrans), JD.com (the No. 2 eCommerce business in China), Tencent (the largest social networking company in China), Xiaomi (a leader of smart devices OEMs in China), and Kingsoft (a leader of office software in China).

NiuTrans has an online cloud API that also attracts 100,000+ small and medium enterprises interested in expanding their international operations and business presence. The pricing for these smaller users is based on the volume of characters they translate and is much lower than Google Translate and Baidu Translate prices.

NiuTrans’ Online Cloud User Locations

You can visit the NiuTrans Translate portal at https://niutrans.com

NiuTrans writes and maintains its own NMT code base for NiuTrans.NMT rather than using open-source options, and claims quality comparable to, if not better than, its competitors. Their comparative performance at the WMT19 evaluations suggests that they actually do better than most of their competitors. They are not dependent on TensorFlow, PyTorch, or OpenNMT to build their systems. Today, NiuTrans is a key MT technology provider, especially for enterprises in China.

NiuTrans.NMT is a lightweight and efficient Transformer-based neural machine translation system. Its main features are:

  • Few dependencies. It is implemented with pure C++, and all dependencies are optional.
  • Fast decoding. It supports various decoding acceleration strategies, such as batch pruning and dynamic batch size.
  • Advanced NMT models, such as Deep Transformer.
  • Flexible running modes. The system can be run on various systems and devices (Linux vs. Windows, CPUs vs. GPUs, FP32 vs. FP16, etc.).
  • Framework agnostic. It supports various models trained with other tools, e.g., Fairseq models.
  • The code is simple and friendly to beginners.

When I probed into why NiuTrans had chosen to develop their own NMT technology rather than use the widely accepted open-source solutions, I was provided with a history of the company and its evolution through various approaches to developing MT technology.

The NiuTrans team originated in the NLP Lab at Northeastern University, China (NEUNLP Lab), a machine translation research leader in the Chinese academic world going as far back as 1980. Like many elsewhere in the world, the team initially studied rule-based MT, from 1980 to 2005. In 2006, Professor Jingbo Zhu (the current Chairman of NiuTrans) returned from a year-long visit to ISI-USC and decided to switch to statistical MT research, working together with Tong Xiao, who was a fresh graduate student at the time and is now the CEO of NiuTrans. They made rapid strides in SMT research, releasing the first open-source version of NiuTrans.SMT in 2011. At that time, Chinese academia primarily used Moses to conduct MT-related research and develop MT engines. The development of the NiuTrans.SMT open source proved that Chinese engineers could do as well as, or even better than, Moses, and also helped to showcase the strength and competence of the NiuTrans team. Thus, in 2012, confident in their MT technology and armed with a dream to connect the world with MT, the NiuTrans team decided to form an MT company, converting 30+ years of MT research into MT software for industrial use.

Given their origins in academia, they kept a close watch on MT research and breakthroughs worldwide and noticed in 2014 that there was a growing base of research being done with neural network-based deep learning models. Therefore, the NiuTrans team started studying deep learning technologies in 2015 and released its first version of NiuTrans.NMT in December 2016, just three months after Google announced the release of its first NMT engines.

NiuTrans prefers to avoid open-source MT platforms like TensorFlow, PyTorch, or OpenNMT, as the company has developed deep competence in MT technology over 40 years of engagement. The leadership believes there are specific advantages to building the whole technology stack for MT and intends to continue with this basic development strategy. As an example, Chunliang pointed me to the release of NiuTensor, their own deep learning tool (https://github.com/NiuTrans/NiuTensor), and the NiuTrans.NMT open source (https://github.com/NiuTrans/NiuTrans.NMT). They are confident that they can keep pace with continuous improvements in open source with support from the NEUNLP Lab, which has eight permanent staff and 40+ Ph.D./MS students focusing on MT issues relevant to their overall mission. This group also allows NiuTrans to stay abreast of research being done worldwide.

NiuTrans understands that a critical requirement for an enterprise user is to adapt and customize the MT system to enterprise-specific terminology or use. Thus, it provides both a user terminology module to introduce user terminology into the MT system and a user translation memory module to introduce the users’ sentence pairs to tune the MT system. Another more sophisticated solution is incremental training. They incorporate user data to modify the NiuTrans model parameters to get the MT model better adjusted to user data features.

NiuTrans also gathers post-editing feedback on critical language pairs like ZH <> EN and ZH <> JP on an ongoing basis, then analyzes error patterns to drive continuing engine performance improvements.


Quality Improvement, Data Security, and Deployment

NiuTrans evaluates MT system performance using BLEU and a human evaluation technique that ranks systems relative to one another; they prefer not to use the widely used 5-point scale that assigns an absolute value to a translation. Thus, if they were comparing NiuTrans, Google, and DeepL, they would combine BLEU with human rankings of the three systems on the same blind test set.
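As an illustration of the ranking side of such an evaluation, the snippet below (a generic sketch, not NiuTrans' actual tooling) averages the per-segment ranks that human judges assign when comparing systems on the same blind test set; a lower average rank means a better system.

```python
from collections import defaultdict

def aggregate_rankings(rankings):
    """Average the per-segment ranks (1 = best) that human judges
    assigned to each MT system over a blind test set."""
    totals = defaultdict(float)
    counts = defaultdict(int)
    for segment_ranking in rankings:   # e.g. {"SysA": 1, "SysB": 2}
        for system, rank in segment_ranking.items():
            totals[system] += rank
            counts[system] += 1
    return {system: totals[system] / counts[system] for system in totals}
```

A relative ranking like this sidesteps the calibration problems of absolute 5-point adequacy/fluency scores, which is presumably why NiuTrans prefers it.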

NiuTrans also has an ongoing program to improve its MT engines continually. They do this in three different ways:

  1. Firstly, as the company has a strong research team that is continually experimenting and evaluating new research, the impact of this research is continuously tested to determine if it can be incorporated into the existing model framework. This kind of significant technical innovation is added into the model two or three times a year.
  2. Secondly, customer feedback, ongoing error analysis, or specialized human evaluation feedback also trigger regular updates to the most important MT systems (e.g. ZH<>EN) at least once a month.
  3. Thirdly, engines will be updated as new data is discovered, gathered, or provided by new clients. High-quality training data is always sought after and considered valuable to drive ongoing MT system improvements.

NiuTrans has performed well in comparative evaluations of their MT systems against other academic and large online MT solutions. Here is a summary of the results from WMT19. They report that their performance in WMT20 is also excellent, but final results have not yet been published.

NiuTrans training data comes mainly from two sources: data crawling and data purchase from reliable vendors.

NiuTrans uses crawlers to collect parallel texts from websites that do not prohibit or prevent this, e.g., the websites of some Chinese government agencies, which often publish content in several languages. They also buy parallel sentences (TM) and dictionaries from data providers, which may require signing an agreement specifying that the provider retains the intellectual property rights to the data.

NiuTrans gets the bulk of its revenue from data-security-conscious customers who deploy their MT systems on-premise. However, NiuTrans is also building out an Open Cloud offering (https://niutrans.com) that lets customers access an online API and avoid installing the infrastructure needed for an on-premise system. The Open Cloud is a more cost-effective option for smaller companies, and NiuTrans has seen rapid adoption of this deployment model in specific market segments.

International customers, especially the larger ones, much prefer to deploy their NiuTrans MT systems on-premise. For those international customers who cannot afford on-premise systems, the NiuTrans Open Cloud is an option. That system is deployed on the Alibaba Cloud, which is governed by Chinese internet security laws requiring that user data be kept for six months before deletion. The company therefore plans to build another cloud service on the Amazon Cloud for international customers with data security concerns. This new capability will allow users to encrypt their data locally and transfer it securely to the Amazon Cloud, where NiuTrans will decrypt the source data on its servers, translate it, and finally delete all the user data and the corresponding translations once processing is complete.
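The planned workflow (client-side encryption, upload, server-side decryption and translation, then deletion) can be sketched as below. This is not NiuTrans code; the toy XOR keystream is NOT a real cipher and is used only to keep the example dependency-free. A production system would use authenticated encryption such as AES-GCM, e.g. via the `cryptography` package.

```python
# Illustrative client-encrypt / server-decrypt flow. The SHA-256-based XOR
# keystream below is for demonstration only and is not secure.

import hashlib
import secrets

def keystream(key: bytes, length: int) -> bytes:
    """Derive a deterministic pseudo-random keystream from the key (toy scheme)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    """XOR the plaintext with the keystream; decryption is the same operation."""
    return bytes(a ^ b for a, b in zip(plaintext, keystream(key, len(plaintext))))

decrypt = encrypt  # XOR is symmetric

# Client side: encrypt the source document locally before upload.
key = secrets.token_bytes(32)                 # shared with the service out of band
document = "待翻译的源文本".encode("utf-8")     # source text to be translated
ciphertext = encrypt(key, document)

# Server side: decrypt, translate, then delete all user data and results.
recovered = decrypt(key, ciphertext)
assert recovered == document
```

The key design point NiuTrans describes is that plaintext exists only transiently on their servers: nothing readable is ever at rest in the cloud, which is what addresses the data-retention concern.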


NiuTrans currently has 100+ employees, directed by Dr. Jingbo Zhu and Dr. Tong Xiao, two leading MT scientists in China. The company's headquarters and R&D team are in Shenyang, and technical support and services are currently available in Beijing, Shanghai, Hangzhou, Chengdu, and Shenzhen. The company is now exploring entry into the Japanese market with the assistance of partners in Tokyo and Osaka. While NiuTrans is not a well-known name in the US/EU translation industry today, I suspect that it will become an increasingly familiar provider of enterprise MT technology in the future.


Friday, September 4, 2020

The Premium Translation Market: Come On In. The Water’s Perfect.

This is the full text of a response by Kevin Hendzel to a guest post by Luigi Muzii that challenged various attributes and characteristics of the "premium" market that I described in a post focused on this market segment.  

The notion of premium evokes strong opinions from both translators and LSPs, and one can see the range of views and opinions that are highlighted in this post, as well as the other two linked above. Like much of the phenomena in the translation industry and the definition of the translation market itself, the views on the premium market are fragmented. 

Fragmentation means to see partially, to not see the whole. Insight is only possible when one sees the whole. 


We see that the professional market research firms completely overlook this market (mostly because it is much harder to research and pin down) and thus perpetuate the view that it does not exist. We also see huge differences among their analyses of what exactly is contained in the "translation market." The differences are so large that they raise credibility questions about the validity of any or all of the estimates currently available.

For the record, I stand by my initial "opinion" on the premium market as I cannot really say that it is more than an opinion. I cannot provide any more data than I already have.

In fact, this discussion on the translation market and what it really is brings to mind a story I was told as a child, about blind men who encounter an elephant for the first time. 

The parable of the Blind Men and an Elephant originated in the ancient Indian subcontinent, from where it has been widely diffused. It is a story of a group of blind men who have never come across an elephant before and who learn and conceptualize what the elephant is like by touching it. Each blind man feels a different part of the elephant's body, but only one part, such as the side or the tusk. They then describe the elephant based on their limited experience and their descriptions of the elephant are different from each other. In some versions, they come to suspect that the other person is dishonest and they come to blows. The moral of the parable is that humans have a tendency to claim absolute truth based on their limited, subjective experience as they ignore other people's limited, subjective experiences which may be equally true. (Source: Wikipedia)

And so these men of Indostan

Disputed loud and long,

Each in his own opinion

Exceeding stiff and strong,

Though each was partly in the right

And all were in the wrong!




=======




As many readers of this blog certainly know, Mr. Muzii and I have been consistently at odds over the existence of the premium market. I’ve lived inside it for decades, so I’m reporting my own personal and extended experience within this market as well as the exceptionally hard work of my colleagues in various segments of the market all over the world. I also had to become an expert on the markets writ large when I was the ATA National Media spokesman (2001-2012) in order to avoid misleading the media, researchers, and my own colleagues.

So I work in the very market that Kirti expertly describes in his original post.

Happily, as such a practitioner, and despite (by extension) being called a “fool” four times, being told these views “reflect ignorance,” and that they demonstrate “a profound lack of respect for bulk market translators,” I still welcome the opportunity to respond to Mr. Muzii from the viewpoint of an individual who has actually lived in this market for most of his professional life.

Mr. Muzii’s repeated denials over the years, and especially those proffered above, are heavily based on repeated speculation about a whole range of market activity with no factual basis whatsoever, combined with a total absence of experience in the premium market. It’s an argument from the absence of data, not the presence of it. This stalemate persists because Mr. Muzii refuses to allow any first-hand or published descriptions of the premium market to be treated as data.

This reminds me of Lord Kelvin’s resounding and confident claim in 1895: “I can state flatly that heavier-than-air flying machines are impossible.”

I’m pleased that this also gives me the opportunity to go back in time to my Precambrian college days and quote a professor who insisted that the proponents of Marxism had a lot in common with the star of the TV mystery crime show “Columbo.” The star of “Columbo” always knew from the beginning who was guilty. It was fun to watch him trap the offender under a mountain of actual facts. Real evidence.

My professor’s point was that Marxist ideology already had the conclusion in hand, too. The only difference – and this is crucial – is that the Marxists only cared about the conclusion. They were converts. They knew what the conclusion would be. (The premium market does not exist!). No matter what events actually occurred in the world, those facts would be wrestled and twisted and crammed into that Marxist suit. It was the conclusion that was important. Any set of assumptions or facts or evidence could readily be twisted and jammed into that poor distorted suit.

This approach is unhelpful because it makes it difficult to reasonably consider other views, ideas, and concepts – to say nothing about [any] personal experience -- in an increasingly complex world.

Let’s consider this claim by Mr. Muzii:

“Now, anyone who has worked in translation for a while knows that, for the job to be sustainable, a translation should not outperform the original (especially if this is already good), and that a professional translator producing superior texts from crap is not the reincarnation of Cagliostro, but a fool.”

I’ve been in translation for longer than Mr. Muzii, so I guess that means I qualify as “anyone.” It turns out, though, that premium-market clients are far more concerned about whether a translation works as a form of communication. The overriding objective is to successfully convey a message that is crucial to the client and can involve significant market risk. These client/translator collaborative efforts are deeply rooted in comparing multiple different translations and tweaking the options. It’s why premium-market translators are consistently and thoroughly engaged with their clients. This is in fact discussed in detail in the original post on premium markets. I can’t imagine how Mr. Muzii could have missed David Jemielity’s video image and quote in 14-point type on how communication is the primary objective.

Mr. Muzii’s claim would also be news to the late Gabriel García Márquez, who called Gregory Rabassa's translation of “One Hundred Years of Solitude” better than the Spanish original.

Maybe the winner of the 1982 Nobel Prize in Literature was just another of Mr. Muzii’s “fools.”

I also found this comment by Mr. Muzii interesting:

“Finally, bringing on the example of US defense contracts to support the existence of the premium translation market is pointless. In this case, translation jobs go to some professional services contracting company, which is part of some large conglomerate, as in the case of GLS and other regular military contractors.”

I found this argument to fail on so many points that it’s a real head-scratcher to figure out how to begin to address it. First, Mr. Muzii selects a single isolated example and then turns around and projects it out to represent an enormously complicated multibillion-dollar market. Second, most of that spend bypasses companies because top-notch translators and interpreters are directly recruited and hired by government agencies at USD six-figure salaries, which only begins to cover the demand. At the ATA conference in Phoenix, Arizona, back in 2003, I worked personally to put a significant number of skilled translators in touch with these agencies, which again represented only a tiny trickle of recruiting activity. Third, large companies do provide a significant percentage of talent, but they pay translators exceedingly well because of the complexity of the missions. Fourth, Kirti himself – who has had a peek into this market – observed in the original post that this certainly extends beyond “U.S. defense contractors.” While many U.S. intelligence and defense agencies are major consumers of MT, their funding of premium-market translation experts is 50x greater.

I thought it would be most helpful to end on a topic where Mr. Muzii and I are in solid agreement.

“The specialization required to become a skillful and well-paid translator with a lasting position at the high end of the translation market involves substantial commitment, time, and investment. Skills do not grow on trees or accumulate overnight; building a network of relations is toilsome; learning to exploit it may require some major changes in character, and no one can guarantee stable high prices and job satisfaction.”

This is all certainly true. What I have not yet figured out is how we will train future generations of premium-market translators as our working world is increasingly interwoven with AI and ML, both of which are improving daily. Much of the bulk market has already been gouged out by these technologies, with varying degrees of success. My guess is that translation programs will need to recruit from graduate programs in engineering, physics, law, and other fields, or future generations of premium-market translators will simply train themselves.

So this is a topic about which I am – without a doubt – woefully ignorant.



Kevin Hendzel is an Award-Winning Translator, Linguist, Author, National Media Consultant, and Translation Industry Expert