Wednesday, October 19, 2016

SYSTRAN Releases Their Pure Neural MT Technology

SYSTRAN announced earlier this week that they are doing a “first release” of their Pure Neural™ MT technology for 30 language pairs. Given how good the Korean samples that I saw were, I am curious why Korean is not one of the languages that they chose to release.

"Let’s be clear, this innovative technology will not replace human translators. Nor does it produce translation which is almost indistinguishable from human translation"  ...  SYSTRAN BLOG

The language pairs being released initially are 18 in and out of English (EN<>AR, PT-BR, NL, DE, FR, IT, RU, ZH, ES) and 12 in and out of French (FR<>AR, PT-BR, DE, IT, ES, NL). They claim these systems are the culmination of over 50,000 hours of GPU training, but they are very careful to say that they are still experimenting with and tuning these systems, and that they will adjust them as they find ways to make them better.

They have also enrolled ten major customers in a beta program to validate the technology at the customer level, and I think this is where the rubber will meet the road and we will find out how it really works in practice.

The boys at Google (who should still be repeatedly watching that Pulp Fiction clip) should take note of their very pointed statement about this advance in the technology:

Let’s be clear, this innovative technology will not replace human translators. Nor does it produce translation which is almost indistinguishable from human translation – but we are convinced that the results we have seen so far mark the start of a new era in translation technologies, and that it will definitely contribute to facilitating communication between people.
Seriously, Mike (Schuster), that’s all that people expect: a statement that comes somewhat close to the reality of what is actually true.

They have made a good effort at explaining how NMT works and why they are excited, a point they repeat throughout their marketing materials. (I have noticed that many who work with neural-net-based algorithms are still somewhat mystified by how they work.) They plan to explain NMT concepts in a series of forthcoming articles, which some of us will find quite useful, and they also provide some output examples that are interesting for understanding how the different MT methodologies approach language translation.
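For readers waiting on those explanatory articles, the core idea in most NMT systems of this era is an encoder-decoder network with attention: the source sentence is encoded into a sequence of vectors, and at each output step the decoder attends to a weighted mix of them. The toy numpy sketch below is only an intuition aid; the sizes, random "learned" weights, and token ids are all illustrative assumptions, not SYSTRAN's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def encode(src_ids, emb, W):
    """Minimal recurrent encoder: one hidden state per source token."""
    h = np.zeros(W.shape[0])
    states = []
    for t in src_ids:
        h = np.tanh(W @ h + emb[t])   # fold each token embedding into the state
        states.append(h)
    return np.stack(states)           # shape: (src_len, hidden)

def attend(query, enc_states):
    """Dot-product attention: weight source states by relevance to the query."""
    scores = enc_states @ query
    weights = softmax(scores)         # attention weights sum to 1
    context = weights @ enc_states    # weighted average of encoder states
    return context, weights

hidden, vocab = 8, 20
emb = rng.normal(size=(vocab, hidden))      # toy "learned" embeddings
W = rng.normal(size=(hidden, hidden)) * 0.1

src = [3, 7, 1, 12]                         # a toy source sentence as token ids
enc_states = encode(src, emb, W)
query = rng.normal(size=hidden)             # stands in for a decoder hidden state
context, weights = attend(query, enc_states)
print(weights.round(3), context.shape)
```

The decoder would repeat the `attend` step once per output word, which is why NMT output tends to read more fluently than phrase-by-phrase SMT: every word is produced with a view of the whole source sentence.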

CSA Briefing Overview

In a recent briefing with Common Sense Advisory, they shared some interesting information about the company in general:
  • The acquisition by the Korean company CSLi Co. has invigorated the technology development initiatives.
  • They have several large account wins including Continental, HP Europe, PwC and Xerox Litigation Services. These kinds of accounts are quite capable of translating millions of words a day as a normal part of their international operational needs.
  • Revenues are up over 20% over 2015, and they have established a significant presence in the eDiscovery area, which now accounts for 25% of overall revenue.
  • NMT technology improvements will be assessed by an independent third party (CrossLang) with long-term experience in MT evaluation, who are not likely to say misleading things like "55% to 85% improvements in quality," as the boys at Google have.
  • SYSTRAN is contributing to an open-source project on NMT with Harvard University and will share detailed information about their technology there. 

Detailed Technical Overview

They have also supplied a more detailed technical paper, which I have yet to review carefully, but what struck me immediately on initial perusal was that the data volumes they are building their systems with are minuscule compared to what Google and Microsoft have available. However, the ZH > EN results did not seem substantially different from the amazing-NOT GNMT system. Some initially interesting observations are highlighted below, but you should go to the paper for the details:

Domain adaptation is a key feature for our customers — it generally encompasses terminology, domain and style adaptation, but can also be seen as an extension of translation memory for human post-editing workflows. SYSTRAN engines integrate multiple techniques for domain adaptation, training full new in-domain engines, automatically post-editing an existing translation model using translation memories, extracting and re-using terminology. With Neural Machine Translation, a new notion of “specialization” comes close to the concept of incremental translation as developed for statistical machine translation (Ortiz-Martínez et al., 2010)

What is encouraging is that adaptation or “specialization” is possible with very small volumes of data and can be run in a few seconds, which suggests it could serve as an Adaptive MT equivalent.

Our preliminary results show that incremental adaptation is effective for even limited amounts of in-domain data (nearly 50k additional words). Constrained to use the original “generic” vocabulary, adaptation of the models can be run in a few seconds, showing clear quality improvements on in-domain test sets.
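The mechanics behind that quote are essentially warm-start fine-tuning: keep the generic model's shapes and vocabulary fixed and run a handful of quick gradient steps on the small in-domain sample. The sketch below uses a toy linear model as a stand-in assumption (a real NMT engine would fine-tune its network weights, and `specialize` is my own illustrative name, not SYSTRAN's API), but it shows why a few seconds of updates on ~50k words can already move the in-domain numbers.

```python
import numpy as np

rng = np.random.default_rng(1)

def loss(W, X, Y):
    """Mean squared error of the model X @ W against targets Y."""
    return float(((X @ W - Y) ** 2).mean())

def specialize(W_generic, X_dom, Y_dom, steps=50, lr=0.1):
    """A few quick SGD steps on limited in-domain data, starting from
    the generic weights (shapes/vocabulary stay fixed)."""
    W = W_generic.copy()
    n = len(X_dom)
    for _ in range(steps):
        grad = 2 * X_dom.T @ (X_dom @ W - Y_dom) / n   # MSE gradient
        W -= lr * grad
    return W

X_dom = rng.normal(size=(50, 4))                # small in-domain sample
W_true = rng.normal(size=(4, 3))                # the in-domain mapping
Y_dom = X_dom @ W_true
W_generic = W_true + rng.normal(size=(4, 3))    # generic model, off-domain

before = loss(W_generic, X_dom, Y_dom)
after = loss(specialize(W_generic, X_dom, Y_dom), X_dom, Y_dom)
print(before > after)   # prints True: adaptation improves in-domain fit
```

The appeal versus retraining a full in-domain engine is exactly this asymmetry: the expensive generic training happens once, and the cheap specialization pass can be repeated per customer or per project.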

Of course, the huge processing requirements of NMT remain a significant challenge, and perhaps they will have to follow Google and Microsoft, who both have new hardware approaches to this issue: Google's TPU (Tensor Processing Unit) and the programmable FPGAs that Microsoft recently announced to deal with this new class of AI-based machine learning applications.

For those who are interested, I ran a paragraph from my favorite Chinese news site and compared the Google “nearly indistinguishable from human translation” GNMT output with the SYSTRAN PNMT output. I really see no big differences in quality from my rigorous test, and clearly we can safely conclude that we are still quite far from human-range MT quality at this point in time.

The Google GNMT Sample


The SYSTRAN Pure NMT Sample

Where do we go from here?

I think the actual customer experience is what will determine the rate of adoption and uptake. Microsoft and a few others are well along the way with NMT too. I think SYSTRAN will provide valuable insights in December from the first beta users who actually try to use it in a commercial application. There is enough evidence now to suggest that if you want to be a long-term player in MT, you had better have actual, hands-on experience with NMT, and not just post about how cool NMT is while using SEO words like machine learning and AI on your website.

The competent third-party evaluation SYSTRAN has planned is a critical proof point that will hopefully provide valuable insight into what works and what needs to be improved at the MT output level. It will also give us more meaningful comparative data than the garbage that Google has been feeding us. We should note that while the BLEU score jumps are not huge, human evaluations show that NMT output is often preferred by many who look at the output.

The ability of serious users to adapt and specialize the NMT engines for their specific in-domain needs is, I think, a really big deal. If this works as well as I am being told, it will quickly push PBSMT-based Adaptive MT (my current favorite) to the sidelines, but it is still too early to say this with anything but Google MT Boys certainty.

But after a five-year lull in the MT development world, with seemingly little to no progress, we finally have some excitement in the world of machine translation, and NMT is still quite nascent. It will only get better and smarter.

1 comment:

  1. While implementing SMT is no big deal, it is never going to be the same with NMT. Unfortunately, "machine learning" and "AI" are the new black, for translation industry players too. And this is another reason why the industry has not been taken seriously so far, and it is not going to be taken seriously anytime soon.