Pages

Thursday, April 20, 2017

LSP Perspective: MT Post-Editing Means a Drastic Reduction in Translation Cost

This is a short guest post by @translationguy, also known as Ken Clark.

These initial preamble comments in italics are mine. 

Today, many LSPs and Enterprises are working with MT, and there is enough evidence that MT works even when you don't really know what you are doing. Unfortunately, many agencies still try to do it themselves with Moses, and most of these DIY experiments either fail completely or produce systems that are not as good as the public systems from Microsoft and Google, which defeats the whole point of doing it. MT as a technology only provides business leverage if you have a superior MT system and have aligned your processes to take advantage of it.

Ken differentiates between light and full post-editing in his view of post-editing, and I would like to add another dimension to this discussion. In my experience, full post-editing is done on smaller (in MT terms) projects, or when the information being translated is critical to get right. Thus, in a knowledge base project context, content related to security, privacy, and legal terms may be sent for full post-editing, while other content may only get a lighter post-edit. Also, on very large MT projects, like those the team at eBay handles, where hundreds of millions of words are involved, it is not possible to do a full post-edit on all the data, so a light post-edit is done, or perhaps nothing beyond the very specific linguistic work on high-frequency n-grams and important patterns that Silvio Picinini describes in this post. Unfortunately, it is hard for translators and clients to agree on when "light" post-editing is done, so it can be a headache to manage, as editors often cannot tell when to stop.

Thus, as agencies get involved with "real MT" projects, they will do corpus profiling work and focus their attention on critical patterns, as Juan Rowda has described in this post.

To me, real competence with MT in an agency or enterprise is demonstrated when there is some expertise with as many of the following core functions as possible:

  • Understanding the Data - Corpus Analysis
  • Focusing Linguistic Work on High-Frequency Patterns
  • Working with Expert MT Systems Developers in a pro-active way
  • Understanding MT Output Quality
  • Driving MT quality higher with specific linguistic feedback
  • Managing Post-Editing Processes and Compensation
 
TAUS provides an excellent overview of the larger perspective in this post on best practices in MT.


 
As the MT technology evolves, I think we will see that strategies that made great sense with phrase-based SMT may not always make sense with the new Neural MT technology. I am talking to SYSTRAN about the realities of the NMT paradigm and hope to produce a post on this soon.

 

-------------------------


Machine translation has improved by leaps and bounds. What was once considered machine-produced gibberish is increasingly giving human translators a run for their money, particularly for predictable texts like weather reports.

While machine translation (MT) is also more economical than human translation, it's not a true alternative yet. In most cases, machine translation can't be used as is. And that's where the expertise of machine translation post-editors comes in. Machine translation post-editors are the human editors who work to improve the output of machine translation. They combine the MT output with their linguistic expertise to provide a better reading experience for human audiences.

Besides the cost savings, it is estimated that machine translation plus post-editing is 40% more efficient than human translation alone. But what exactly do machine translation post-editors do, and how do they do it?

Types of Machine Translation Post-editing

Machine translation post-editing comes in two flavors: light post-editing and full post-editing.

Light post-editing suggests a lighter touch, asking the human editors only to ensure that the MT output is accurate in meaning and understandable to the reading audience. However, this means that style is not taken into account, grammar and syntax may be awkward, and the text may sound as if it were produced by a computer. It's the most economical option, but for reasons of quality, light post-editing is typically only used when a translation is needed urgently and/or for an organization's internal purposes.

Full post-editing, on the other hand, calls for a higher level of involvement by the post-editor. (This makes it more expensive than light post-editing, but still less expensive than full human translation.) In addition to making sure that the MT output is accurate in meaning and understandable to the reading audience, full post-editing addresses the text’s grammar, syntax, and punctuation, ensuring they are correct and appropriate. The result is similar in quality to a human translation, although it may not yet match the style of a native-speaking translator. Full post-editing is typically used when a machine-translated text is intended to be published, or widely disseminated inside or outside an organization.

MT Post-editing Strategies

How do they do it? Let’s examine some of the things that post-editors watch out for.

Light post-editors use the machine translation output as much as possible. However, they take special care that information has not been inadvertently added in or left out. They also edit anything they have identified as offensive or culturally unacceptable.

In addition to the above, full post-editors correct any grammatical and syntactical errors. They pay particular attention to terminology, making sure that the terms have been translated in the appropriate way (or left untranslated per the client’s wishes). They also ensure that the spelling and punctuation, as well as formatting, are correct.



Read more at http://www.responsivetranslation.com/blog/machine-translation-postediting/#r4ZiiLOHouYJ8E2O.99

Tuesday, April 11, 2017

The Problem with BLEU and Neural Machine Translation

Neural Machine Translation received a great deal of public attention and publicity in 2016. While experimentation with Neural Machine Translation (NMT) has been going on for several years, 2016 proved to be the year that NMT broke through, and its merit became widely understood outside of the academic and research community, where its promise had already been recognized for some years.

The reasons for the sometimes excessive exuberance around NMT are largely based on BLEU (not BLUE) score improvements on test systems, which are sometimes validated by human quality assessments. However, it has been understood by some that BLEU, which is still the most widely used measure of quality improvement, can be misleading when it is used to compare certain kinds of MT systems.



The basis for the NMT optimism is related both to the very slow progress in recent years in improving phrase-based SMT quality, and to the striking BLEU score improvements coming from neural net based machine learning approaches. Much has been written about the flaws of BLEU, but it remains the most easily implementable measurement metric, and really the only one for which long-term longitudinal data are available. While we all love to bash BLEU, there is clear evidence of a strong correlation between BLEU scores and human judgments of the same MT output. The research community and the translation industry have not been able to come up with a better metric that can be widely implemented to enable ongoing test and evaluation of MT output, so it remains the primary metric. The alternatives are too cumbersome, expensive, or impractical to use as widely and as frequently as BLEU is used.


However, there is also evidence that BLEU tends to score SMT systems more favorably than RBMT and NMT systems, both of which may produce translations that are very accurate and fluent from a human perspective, but which differ greatly from the reference translations used in calculating the BLEU score. To a great extent, the BLEU score is based on very simplistic "text string matches". Very roughly, the larger the cluster of words that you can match exactly, the higher the BLEU score.


To illustrate this, let's take a very simple example. Say a reference translation is: "The guests walked into the living room and seated themselves on the couch." and an NMT system produces: "The visitors entered the lounge and sat down on the sofa." This would result in a very low BLEU score for the NMT segment, even though many human evaluators might say it is quite an acceptable and accurate translation, and as valid as the reference sentence.
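To make the string-matching point concrete, here is a minimal sketch of a sentence-level BLEU calculation on exactly this pair. The choice of NLTK is just an illustrative assumption; any standard BLEU implementation behaves the same way.

```python
# Minimal sketch: sentence-level BLEU on the example above, using NLTK.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = "the guests walked into the living room and seated themselves on the couch .".split()
hypothesis = "the visitors entered the lounge and sat down on the sofa .".split()

# Smoothing avoids a hard zero when there are no 3-gram or 4-gram matches at all.
smooth = SmoothingFunction().method1
score = sentence_bleu([reference], hypothesis, smoothing_function=smooth)

print(round(score, 3))  # a very low score, despite the translation being acceptable to a human
```

The only shared material is a handful of function words ("the", "and", "on") and the single bigram "on the", so the score collapses even though the meaning is fully preserved.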

If you want a quick refresher on BLEU you can check this out:

The Need for Automated Translation Quality Measurement in SMT: BLEU


Some of the optimism around NMT is related to its ability to produce a large number of sentences that look very natural, fluent, and astonishingly human. Thus, much of the early work with NMT output shows that human evaluators consider it clearly better, even though BLEU scores may show only a 5% to 15% improvement (which is also significant). The improvements are most noticeable in the fluency and word order of the machine translation output. NMT is also working much more effectively in what were considered difficult languages for SMT and Rule-Based MT, e.g. Japanese and Korean.

And here are some examples provided by SYSTRAN from their investigations, where the NMT system seems to make linguistically informed decisions and changes the sentence structure away from the source to produce a better translation. But again, these would not necessarily score much better in terms of BLEU, even though humans might rate them as significant improvements in MT output quality and naturalness.



But we have seen that, in spite of this, there are still many cases where NMT BLEU scores significantly outpace those of phrase-based SMT systems. These are described in the following posts on this blog:

A Deep Dive into SYSTRAN's Neural Machine Translation (NMT) Technology

 

An Examination of the Strengths and Weaknesses of Neural Machine Translation

 

Real and Honest Quality Evaluation Data on Neural Machine Translation 

 

And this is true to some extent even in the exaggerated, over-the-top claims made by Google, who stated that Google NMT was “Nearly Indistinguishable From Human Translation” and that “GNMT reduces translation errors by more than 55%-85% on several major language pairs", as described below.

The Google Neural Machine Translation Marketing Deception

 

The KantanMT NMT vs PB-SMT Evaluation Results


I had an interesting conversation with Tony O'Dowd at KantanMT about his experience with his own initial NMT experiments. While Kantan does plan to publish their results in full detail in the near future, here are some highlights Tony provided from their experiments, which certainly raise some fundamental questions. (Emphasis below is mine.)

  1. Scope of Test - We built identical systems for SMT and NMT in the following language combinations - en-es, en-de, en-zh-cn, en-ja, en-it. Identical training data sets and test reference materials were used throughout the development phase of these engines. This ensured that our subsequent testing would be of identical engines, only differing in the approach to build the models. The engines were trained with an average of 5 million parallel segments ranging from 44 - 110 million words of training data.
  2. BLEU Scores - In all cases, the BLEU scores of NMT output were lower than SMT.
  3. Human Evaluation:  We deployed a minimum of 3 evaluators for each language group and used KantanLQR to run the evaluation. We used the A/B Testing feature of KantanLQR. Sample A was from SMT, Sample B was from NMT. We randomized the presentation of the translations to ensure evaluators did not know what was NMT and SMT - this was done to remove any bias for one approach or the other. We sampled 200 translations for each language set.
  4. In all cases NMT scored higher in our A/B Testing than SMT. On average NMT was chosen twice as often as SMT in our controlled A/B testing.
  5. For low scoring BLEU NMT segments, we found a high correlation to these segments being the preferred translation by our [human] evaluators - this pretty much proves that BLEU is not a useful and meaningful score for use with NMT systems.
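The blind A/B setup Tony describes in point 3 is straightforward to reproduce. Here is a rough, hypothetical sketch of how the randomized presentation might be wired up; the data layout and helper names are my own illustration, not KantanLQR's actual implementation.

```python
# Hypothetical sketch of a blind A/B preference test between SMT and NMT output.
# Data layout and helper names are illustrative only, not KantanLQR's API.
import random

def build_ab_tasks(sources, smt_outputs, nmt_outputs, seed=42):
    """Pair the two engines' outputs and shuffle which appears as A or B,
    so evaluators cannot tell which system produced which translation."""
    rng = random.Random(seed)
    tasks = []
    for src, smt, nmt in zip(sources, smt_outputs, nmt_outputs):
        candidates = [("SMT", smt), ("NMT", nmt)]
        rng.shuffle(candidates)  # hide engine identity behind positions A and B
        tasks.append({"source": src, "A": candidates[0], "B": candidates[1]})
    return tasks

def tally_preferences(tasks, choices):
    """choices[i] is 'A' or 'B' for each task; map each choice back to the hidden engine."""
    counts = {"SMT": 0, "NMT": 0}
    for task, choice in zip(tasks, choices):
        engine, _translation = task[choice]
        counts[engine] += 1
    return counts
```

Because evaluators only ever see positions A and B, any systematic preference for one engine cannot be explained by brand bias, which is what makes the 2:1 preference for NMT in Kantan's results so striking.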


Clearly, the Kantan results show that BLEU is of limited value when the human and automated metric results are so completely different, even diametrically opposed. The whole point of BLEU is that it should provide a quick and simple way to estimate what a human might think of sample machine translated output. So going forward, it looks like we are going to need better metrics that map more closely to human assessments. BLEU is not a linguistically informed measure, and therein lies the problem. This is easy to say but not so easy to do. A recent study pointed out the following key findings:

  • Translations produced by NMT are considerably different than those produced by phrase-based systems. In addition, there is higher inter-system variability in NMT, i.e. outputs by pairs of NMT systems are more different between them than outputs by pairs of phrase-based systems.
  • NMT outputs are more fluent. We corroborate the results of the manual evaluation of fluency at WMT16, which was conducted only for language directions into English, and we show evidence that this finding is true also for directions out of English.
  • NMT systems do more reordering than pure phrase-based ones but less than hierarchical systems. However, NMT re-orderings are better than those of both types of phrase-based systems.
  • NMT performs better in terms of inflection and reordering. We confirm that the findings of Bentivogli et al. (2016) apply to a wide range of language directions. Differences regarding lexical errors are negligible. A summary of these findings can be seen in the next figure, which shows the reduction of error percentages by NMT over PBMT. The percentages shown are the averages over the 9 language directions covered.

 Reduction of errors by NMT averaged over the 9 language directions covered


Given that there are currently no real practical alternatives to BLEU, there is perhaps an opportunity for an organization like TAUS to develop an easy-to-apply variant of their overall DQF framework that focuses on these key elemental differences and can be applied quickly and easily. NMT systems will gain in popularity, and better measures will be sought. The need for an automated metric will also not go away, as developers need some kind of measure to guide system tuning during the development phase. Perhaps there is research underway that I am not aware of that might address this; I have seen that SYSTRAN uses several alternatives, but everybody still comes back to BLEU.

Comparative BLEU score-based MT system evaluations are particularly problematic, as I pointed out in my critique of the Lilt Labs evaluation, which I maintain is deeply flawed and will lead to erroneous conclusions if you take the reported results at face value. Common Sense Advisory also wrote recently about how BLEU scores can be manipulated to make outlandish claims by those with vested interests, and also points out that BLEU scores naturally improve as you add multiple references.

"However, CSA Research and leading MT experts have pointed out for over a decade that these metrics are artificial and irrelevant for production environments. One of the biggest reasons is that the scores are relative to particular references. Changes that improve performance against one human translation might degrade it with respect to another. "
Common Sense Advisory, April, 2017
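CSA's point about multiple references is easy to demonstrate. Sticking with the earlier example and NLTK's BLEU (again, an illustrative choice of tooling, and the second reference below is invented for the sketch), adding another reference lifts the score even though the hypothesis itself has not changed:

```python
# Sketch: the same hypothesis scored against one reference, then two.
# Clipped n-gram matches can only stay the same or increase as references are added,
# so the score typically goes up without the translation getting any better.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

hypothesis = "the visitors entered the lounge and sat down on the sofa .".split()
ref_1 = "the guests walked into the living room and seated themselves on the couch .".split()
ref_2 = "the visitors came into the lounge and sat on the sofa .".split()  # a second human translation

smooth = SmoothingFunction().method1
print(sentence_bleu([ref_1], hypothesis, smoothing_function=smooth))         # low
print(sentence_bleu([ref_1, ref_2], hypothesis, smoothing_function=smooth))  # noticeably higher
```

This is exactly why comparative claims based on BLEU need to state how many references were used, and against which test set.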


There is really a need for two kinds of measures: one for general developer research that can be used every day, like BLEU today, and one for business translation production use that indicates quality from that different perspective. So as we head into the next phase of MT, driven by machine learning and neural networks, it would be good for us all to think of ways to better measure what we are doing. Hopefully some readers, or some in the research community, might have ideas on new approaches; this is an issue worth keeping an eye on. And if you come up with a better way to do this, who knows, they might even name it after you. I noticed that Renato Beninatto has been talking about NMT recently, and who knows, he could come up with something. I know we would all love to talk about our Renato scores instead of those old BLEU scores!