Friday, May 30, 2014

Monolithic MT or 50 Shades of Grey?

In the many discussions of machine translation in the professional translation world, we see a great deal of conflation and confusion because most people assume that all MT is equivalent and that any MT under discussion is largely identical in all respects. Here is a slightly modified description of conflation from Wikipedia:
Conflation occurs when the identities of two or more implementations, concepts, or products sharing some characteristics seem to be a single identity; the differences appear to become lost. In logic, it is the practice of treating two distinct MT variants as if they were one, which produces errors or misunderstandings, as a fusion of distinct subjects tends to obscure the analysis of relationships that are emphasized by contrasts.
However, there are many reasons to question this “all MT is the same” assumption. There are in fact many variants of MT, and it is useful to have some general understanding of the core characteristics of each variant so that a meaningful and more productive dialogue can be had about how the technology can be used. This is particularly true in discussions with translators, where the prevailing assumption is that all the variants are essentially the same, as the comments on the last post about improving the dialogue with translators show clearly. Misunderstandings are common when people use the same words to mean very different things.

Some may view my characterizations as opinionated and biased, and perhaps they are, but I do feel that in general they are fair and reasonable, and most people who have been examining the possibilities of this technology for a while will likely agree with some if not all of them.

The broadest characterization that can be made about MT concerns the methodology used to develop the systems: Rule-based MT (RbMT), Statistical MT (SMT), or some kind of hybrid, as today users of both methodologies claim a hybrid approach. If you know what you are doing, both can work for you, but for the most part the world has definitely moved away from RbMT and towards statistically based approaches, and the greatest amount of commercial and research activity revolves around evolving SMT technology. I have written previously about this, yet we continue to see misleading information on the subject, even from alleged experts. For practitioners, the underlying technology has a definite impact on the kind and degree of control you have over the MT output during system development, so one should care which technology is used. Skills and expertise that are valuable in SMT may not be as useful with RbMT and vice versa, and both are complex enough that real expertise only comes from continuing focus, deep exposure and long-term experience.
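To make the methodological contrast concrete, here is a deliberately toy Python sketch of the core SMT idea: the system picks the candidate translation that maximizes a combination of translation-model and language-model scores learned from data, whereas an RbMT system would apply hand-written transfer rules instead. All the phrase probabilities below are invented purely for illustration; a real system estimates them from millions of segment pairs.

```python
import math

# Toy translation model: P(source phrase | target phrase).
# In real SMT these probabilities are learned from parallel data;
# the numbers here are invented for illustration.
tm = {("maison", "house"): 0.8, ("maison", "home"): 0.2}

# Toy language model: P(target phrase), estimated from monolingual text.
lm = {"house": 0.6, "home": 0.4}

def score(source, target):
    """Log-linear score combining the two feature models."""
    return math.log(tm[(source, target)]) + math.log(lm[target])

def decode(source, candidates):
    """Pick the highest-scoring candidate translation."""
    return max(candidates, key=lambda t: score(source, t))

print(decode("maison", ["house", "home"]))  # "house": 0.8*0.6 > 0.2*0.4
```

The point of the sketch is that the output is entirely determined by the statistics in the models, which is why data quality and data curation skills matter so much more in SMT than in RbMT.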

The next level of MT categorization that I think is useful is the following:
  • Free Online MT (Google, Bing Translate etc.)
  • Open Source MT Toolkits (Moses & Apertium)
  • Expert Proprietary MT Systems
The toughest challenge in machine translation is the one that online MT providers like Google and Bing Translate attempt to address: translating anything that anybody wants to translate, instantly, across thousands of language pairs. Historically, Systran and some other RbMT systems also addressed this challenge on a smaller scale, but the SMT-based solutions have easily surpassed the output quality of these older RbMT systems in a few short years. The quality of these MT systems varies by language, with the best output produced in Romance languages (FR, IT, ES, PT) and the worst quality in languages like Korean, Turkish and Hungarian, and of course most African, Indic and lesser-resourced Asian languages. Thus the Spanish experience with “MT” is significantly different from the Korean or Hindi one. This is the most visible and most widely used translation technology across the globe, and it is also what most translators mean when they complain about “poor MT quality”. For a professional translator, there are very limited customization and tuning capabilities, though even the generic system output can be very useful to translators working with Romance languages, if only to save typing time, and Microsoft does allow some level of customization depending on user data availability. This generic MT is the most widely used “MT” today, and in fact is where most of the translation done on the planet takes place, with users numbering in the hundreds of millions per month. It is worth noting that in most discussions of MT in the professional translation world, it is these generic online capabilities that people are referring to.

Open Source MT Toolkits (Moses & Apertium)

I will confine the bulk of my comments to Moses, mostly because I know little about Apertium beyond it being an open source RbMT tool. Moses is an open source SMT toolkit that allows anybody with a little translation memory data to experiment and develop a personal MT system. Such a system can only be as good as the data and the expertise of the people using the tools, and I think it is quite fair to say that the bulk of Moses systems produce worse output than the major online generic MT systems. This does not mean that Moses users and developers cannot develop superior domain-focused systems, but the data, skills and ancillary tools needed to do so are not easily acquired and are, I believe, definitely missing in any instant DIY MT scenario. There is a growing suite of instant Moses-based MT solutions that make it easy to produce an engine of some kind, but they do not necessarily make it easy to produce MT systems that meet professional use standards. For successful professional use, the output quality and standards requirements are generally higher than what is acceptable to the average user of Google or Bing Translate.

While many know how to upload data into a web portal to build an MT engine of some sort, very few know what to do when the system underperforms (as many initially do), as this requires diagnostic, corpus analysis and problem identification skills to get to the source of the trouble, and then knowledge of what to fix and how to fix it, since not everything can be fixed. It is, after all, machine translation, more akin to a data transformation than a real human translation process. Unfortunately, many translators have been subjected to “fixing” the output of these low quality MT systems, hence the outcry within the translator community about the horrors of “MT”. Most professional translation agencies that attempt to use these instant MT toolkits underestimate the complexity and skills needed to produce good quality systems, and thus we have a situation today where much of the “MT” experience is either generic online MT or low quality do-it-yourself (DIY) implementations. DIY only makes sense if you really do know what you are doing and why; otherwise it is just a gamble, or a rough reference on what is possible with “MT”, with no skill required beyond getting data into an uploadable format.
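As one concrete example of the corpus analysis work mentioned above: a basic step that instant DIY setups tend to hide is filtering noisy segment pairs out of the training data, dropping pairs that are empty, overly long, or badly length-mismatched (roughly what the Moses clean-corpus-n.perl script does before training). A minimal Python sketch, with purely illustrative thresholds:

```python
def clean_corpus(pairs, max_len=80, max_ratio=3.0):
    """Filter parallel segment pairs before SMT training.

    Drops pairs where either side is empty or too long, or where the
    token-length ratio suggests a misaligned pair in the training data.
    """
    kept = []
    for src, tgt in pairs:
        s, t = src.split(), tgt.split()
        if not s or not t:
            continue  # empty segment: alignment noise
        if len(s) > max_len or len(t) > max_len:
            continue  # very long segments hurt word alignment
        if max(len(s), len(t)) / min(len(s), len(t)) > max_ratio:
            continue  # lengths too different: likely a bad pair
        kept.append((src, tgt))
    return kept

pairs = [
    ("the house is red", "la maison est rouge"),   # kept
    ("click here", ""),                            # dropped: empty target
    ("ok", "veuillez lire toutes les instructions avant de continuer"),
]                                                  # dropped: length ratio
print(clean_corpus(pairs))
```

This is only one of many preparation steps, but it illustrates why two people with the same TM data can end up with very different engines.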

Expert Proprietary MT Systems
Given the complexity, the suite of support tools, and the very deep skill requirements involved in getting MT output to quality levels that provide real business leverage in professional situations, I think it is safe to say that this kind of “MT” is the exception rather than the rule. Here is a link to a detailed overview of how an expert MT development process differs from a typical DIY scenario. I have seen a few expert MT development scenarios from the inside, and here are some characteristics of the Asia Online MT development environment:
  • The ability to actively steer and enhance the quality of translation output produced by the MT system to critical business requirements and needs.
  • The degree of control over final translation output using the core engine together with linguist-managed pre-processing and post-processing rules in highly efficient translation production pipelines.
  • Improved terminological consistency with many tools and controls and feedback mechanisms to ensure this.
  • Guidance from experts who have built thousands of MT systems and who have learned and overcome the hundreds of different errors that developers can make that undermine output quality.
  • Improved predictability and consistency in the MT output, thus much more control over the kinds of errors and corrective strategies employed in professional use settings.
  • The ability to continuously improve the output produced by an MT system with small amounts of strategic corrective feedback.
  • Automatic identification and resolution of many fundamental problems that plague any MT development effort.
  • The ability to produce useful MT systems even in scarce data situations by leveraging proprietary data resources and strategically manufacturing the optimal kind of data to improve the post-editing experience.
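One of the controls listed above, terminological consistency, can be illustrated with a simple check: given a client glossary, flag MT segments where a source term appears but its required target term does not. This is a hypothetical minimal sketch of the idea, not Asia Online's actual tooling, and the glossary entries are invented:

```python
def check_terminology(glossary, src, tgt):
    """Return glossary violations: pairs where the source term occurs
    in the source segment but the required target term is missing
    from the MT output. Matching is naive (case-insensitive substring)."""
    src_l, tgt_l = src.lower(), tgt.lower()
    return [(s, t) for s, t in glossary.items()
            if s in src_l and t not in tgt_l]

glossary = {"power supply": "bloc d'alimentation"}  # invented client glossary

issues = check_terminology(glossary,
                           "Replace the power supply before use.",
                           "Remplacez l'alimentation avant utilisation.")
print(issues)  # the required target term is missing from the MT output
```

A production system would need lemmatization and proper term matching, but even this naive check shows how terminology feedback can be automated rather than left to post-editor vigilance.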
So while we observe many discussions about “MT” on the social and professional web, they most often refer to the translator experience with generic MT, as this is the easiest MT to access. In translator forums and blogs the reference can also often be a failed DIY attempt. The best expert MT systems are only used in very specific, client-constrained situations and thus rarely get any visibility, except in some kind of raw form like support knowledge base content, where the production goal is always understandability over linguistic excellence. The very best, highly domain-focused MT systems, used by post-editors who work through projects at 10,000+ words/day, are usually client-specific, for private use only, and rarely seen by anybody outside these large production projects.

It is important to understand that if any competitor (e.g. another LSP) can reproduce your MT capabilities by simply throwing some TM data into an instant MT solution, then the business leverage and value of that MT solution is very limited. Having the best MT system in a domain can mean a long-term production cost and quality advantage, and this can provide meaningful competitive advantage, business leverage and definite barriers to competition.

For the use of "MT" in a professional context, the critical element for success is demonstrated, repeatable skill and a real understanding of how the technology works. The technology can only be as good as the skill, competence and expertise of the developers building these systems. In the right hands many of the MT variants can work, but the technology is complex and sophisticated enough that uninformed use and ignorant development strategies (e.g. upload and pray) can only lead to problems and a very negative experience for those who come down the line to clean up the mess. Usually the cleaners are translators or post-editors, and they need to learn to insist on working with competent developers who can assimilate and respond to their feedback before they engage in PEMT projects. I hope that in future they will exercise this power more frequently.

So the next time you read about “MT”, think about what is actually being referred to. Perhaps I should start saying Language Studio MT or Google MT or Bing MT or Expert Moses or Instant Moses or Dumb Moses rather than just "MT".

Addendum: added on June 20

This is a post that I just saw which, I think, provides a similar perspective on the MT variants from a vendor-independent point of view. Perhaps we are now getting to a point where more people realize that competence with MT requires more than dumping data into the DIY hopper and expecting it to produce useful results.

Machine translation: separating fact from fiction

Wednesday, May 14, 2014

Improving the MT Technology to Translator Dialogue

While we see that MT technology adoption continues to grow, hopefully because of clearly demonstrated benefits and measured production efficiencies, we still see that the dialogue between the technology developers / business sponsors and translators/post-editors is often strained, and communications can often be dysfunctional and sometimes even hostile.

While there is a growing volume of material on “how to use” the technology, much of it of questionable quality, there is still very little discussion about managing the human factors around successful use of the technology. The growth of instant, do-it-yourself (DIY) tools only unleashes more low quality MT output into the world, and translators are often expected to edit (fix) very low quality MT output for a pittance. Getting good quality MT output requires real skill, expertise and preferably some considerable experience. The actual translator experience with “good MT” is not going to be so different from working with TM (though MT errors are quite different from TM errors) and is likely to be very different from the negative experiences described in translator blogs.

The history of MT has indeed been filled with eMpTy promises beyond the real possibilities of the technology, and more recently we see lots of sub-par DIY systems, built by mostly incompetent practitioners, that cause pain/fatigue/stress/frustration/anger to translators who engage with them or are somehow roped in to clean up the mess. In my eyes, however, this fact does not lead to the conclusion that the outlook for MT is bleak and hopeless.

Rather, it suggests that MT must be approached with care and expertise, not just in terms of basic system development mechanics but also in terms of managing human expectations and ensuring that risks and rewards are shared amongst the key stakeholders, and that transparency and equity should be guiding principles for MT projects in general.

I don't expect that MT will replace human translators, but I do expect that for a lot of business translation involving largely repetitive content with a short shelf life, it will continue to make sense. Most of the corporate members of TAUS (who also pay for a lot of human translation work) are driven to deploy MT because they are indeed faced with more volume and with content that is very valuable for a few months but of little value after that. This basic business urgency requires that they explore other approaches to getting material translated, and they have often done so independently of their key translation agencies, who were very slow to catch on to this need. Many translators do not seem to realize that much of the content MT focuses on is material that would simply NOT get translated if MT were not available, and that MT can sometimes create new human translation opportunities. It is not always a zero-sum game. Also, while some MT advocates can be over-zealous at times, I think very few are actually bent on deception and fraud, as is sometimes claimed.

MT does bring about change in traditional work practices and can sometimes have an adverse economic impact on translators (especially when misused or incompetently used). In some ways MT technology is getting better, and in some “easy” language combinations even DIY initiatives can produce some minimal production advantage. But really steering an MT system so that working with it is an experience professional translators want to repeat takes more skill than dumping data into an instant Moses system. Though the risk of running into incompetent MT practitioners is still high, we are seeing many more successful collaborations that show the potential and promise of this technology when it is properly used.

Much of the anger and even rage from the translator side is “passionately” stated in this blog post by Kevin Lossner. I will paraphrase some of his key objections, and other points I have heard in the broader translator community, at the risk of getting it wrong. The issues seem to be:
  • Messages from industry gurus, and from CSA & TAUS in particular, about how the business of translation is changing and their vision of the impact of automation on translators,
  • Messages from MT vendors (me included) about the value, urgency and benefits of using MT,
  • The possible negative impact of MT on the cognitive and professional skills of translators, or just the general nature of post-editing work,
  • The link between professional work effort and compensation,
  • The degree of translator involvement in the development of MT systems,
  • Lack of education and training related to MT,
  • General professional respect,
  • The overall commoditization impact on translation work.
It is clear to most of us who have had successful MT implementations that post-editing is not suitable for everybody. There are translators out there who have developed very keen expertise in some domains and can translate at speeds and quality levels that would be hard for most MT systems to match. But there are also many translators who will benefit from a well developed MT system in the same way that they may benefit from the use of translation memory and other CAT tools. When properly done, working with MT output is not so different from working with TM. The nature of the errors is different, but MT can also respond and improve as corrective feedback is processed.

We have already reached a point in time where more “rough” translation is done by MT in a day than ALL humans produce in a year. The free online MT engines are used about 250-500 million times a month, and while it may still be true that MT has not penetrated the professional translation world in a substantial way yet, MT is now commonly used by many French and Spanish translators working in and out of English, and probably in many other language pairs too. There are still some who question the reality of the increasing volumes of information that companies must now translate to ensure global visibility for their products and services, but many companies now understand that making more and more product-related content multilingual is a key to international market success.

The translator concerns listed above do, however, need attention, and should be addressed in some way by all those who wish to maximize the potential for successful MT initiatives. John Hagel has an interesting and somewhat bleak essay on The Dark Side of Technology, where he describes the combined impact of all the new digital technologies, which includes:
  • A world of mounting performance pressure,
  • An accelerating pace of change,
  • Increasing uncertainty,
  • Digital technologies are coming together into global technology infrastructures that straddle the globe and reach an ever expanding portion of the population. In economic terms, these infrastructures systematically and substantially reduce barriers to entry and barriers to movement on a global scale.
This is perhaps what is being felt both by individual translators and by translation agencies and thus we often see reactive behavior at both these levels. We see many adopt the zero sum game view of the world, and there is increasing short-sightedness and often a breakdown of trust.

While I do not have a definitive prescription for success in dealing with the human factors involved in an MT project, I think it is possible to outline some factors that I have observed from partners like Advanced Language Translation and that I consider best practices.

It is important to understand that the better the MT system and its output are, the better the ROI and the translator/editor work experience. MT systems that can respond to the needs of the professionals using them for real work are very different from ones where users have no real control over what happens beyond putting some data in. So if I were to list some recommendations on how to approach these basic communication and trust issues, they would include the following:
  • Build the best MT system you can, which means it should never be done in a hurry and preferably developed by experts who can tune it and adjust it as needed in response to translator feedback.
  • Manage expectations of all key stakeholders, especially with regard to the evolutionary nature of MT system development. It is not as easy as 1-2-3 and requires expertise and patience.
  • Get MT systems up to an acceptable average quality level with the involvement of senior trusted translators before unleashing the system to a larger group of translators/editors.
  • Involve Project Managers and senior translators in MT system development with experts so that you can build organizational intelligence and skills on specific data cleaning, data preparation and system assessment.
  • Involve key translators in the rate setting process to establish fair and reasonable compensation rates that are trusted.
  • Don’t involve translators who are fundamentally opposed to MT technology. There are translators who do not benefit from MT because of very special and unique skill sets.
  • Provide specific examples of corrections for a variety of different types of output errors for post-editors to model.
  • Ensure that the nature of the task is understood and compensation issues are clear BEFORE setting production deadlines.
  • Focus on fixing high frequency error patterns with a small test team and test data set before general release.
  • Feed error corrections back into the system, ask for general feedback from editors on an ongoing basis, and incorporate as much of this as possible. Monitor ongoing progress to ensure that the MT system remains consistent over the project and over time.
  • Retune and retrain the MT engine quickly and as frequently as possible.
  • Develop deeper system tuning skills over time as key team members begin to understand how the system responds to various kinds of feedback and corrective adjustments.
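Several of the recommendations above revolve around finding and fixing high-frequency error patterns in post-edited output. One rough, hypothetical way to surface candidates is to count which MT tokens are most often replaced at the same position in lightly edited segments; a real pipeline would use proper word alignment, but a sketch conveys the idea:

```python
from collections import Counter

def frequent_substitutions(mt_pe_pairs, top=3):
    """Count position-aligned token substitutions between MT output and
    its post-edited version, for segments of equal token length only.
    A crude proxy for error-pattern mining, not a production method."""
    subs = Counter()
    for mt, pe in mt_pe_pairs:
        m, p = mt.split(), pe.split()
        if len(m) != len(p):
            continue  # needs real alignment, not positionwise comparison
        subs.update((a, b) for a, b in zip(m, p) if a != b)
    return subs.most_common(top)

pairs = [
    ("click the knob to start", "click the button to start"),
    ("press the knob twice", "press the button twice"),
]
print(frequent_substitutions(pairs))  # ('knob', 'button') appears twice
```

Feeding the most frequent substitutions back to the engine developers, or into pre/post-processing rules, is exactly the kind of small strategic corrective feedback the list above describes.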
What more can be done to make post-editing MT work better understood, and thus hopefully less threatening or demeaning? I see PEMT as a natural evolution of the business translation process. It is simply a new approach that enables new information to be translated, a new way to do repetitive tasks, but it can also be a means to build and develop strategic advantage. A guest post on the TAUS site has made a plea for translator education (not training), but I think it unlikely that the recommendations given there will solve the problems I have listed above.

The most successful translators and LSPs all seem to be able to build “high trust professional networks”, and I suspect that this will be the way forward, i.e. collaboration between enterprises, MT developers, LSPs and translators who trust each other. Quite simple, actually, but not so common in the professional translation industry.

I feel compelled to re-use a quote I have used before because I think it fits very well in this current context.
“Disruption is not something we set out to do. It is something that happens because of what we do,” stresses Brian Solis. Disruption changes human behavior (think: iPhone) and it takes a mixture of both ‘design-thinking and system-thinking’ to get there. So as an innovator, where do you begin if you don’t start with attempting disruption? To boil down Solis’ message into a word: ‘empathy.’ That’s right, empathy. Empathy drives the core of your vision as an innovator, or so it should, says Solis.
Solis says that there are only two ways to change human behavior: by manipulating people, or by inspiring them. If you choose the former, good luck on your journey, but if you would prefer to attempt the latter, then you should start with empathy: the why of your product or company. That is how you will capture attention, and hold onto it, especially in the technological, socially-driven world of today.
The excerpt above is from this post on The future of innovation is disruption (emphasis mine).
“The end of business as usual takes more than vision and innovation to survive digital Darwinism however. It requires a tectonic shift from product or industry focus to that of long-term consumer (customer) experiences. Businesses that don’t are forever caught in a perpetual cycle of competing for price and performance. It is in fact one of the reasons that Apple can command a handsome premium. The company delivers experiences that contribute to an overall lifestyle and ultimately style and self-expression. Think about the business model it takes to do so however. You can’t invent or invest in new experiences if your business is fixated on roadmaps and defending aging business models (SDL & LIOX?).”
This excerpt is from a fascinating article on the collapse of the Japanese consumer electronics industry and especially Sony, Panasonic and Sharp.

The way forward in developing win-win scenarios and excellence in these challenging times is collaboration between trusted partners. Collaboration curves hold the potential to mobilize larger and more diverse groups of participants to innovate and create new value. In trusted relationships and networks, critical knowledge flows happen more easily. Benefits and risks are shared more willingly, and together participants are driven by a desire to learn and reach new levels of performance. In this context, zero-sum relationships that focus on dividing a fixed pie of rewards evolve into positive-sum relationships where participants are driven by the opportunity to expand the overall pie. When there is a real prospect of expanding rewards, we are much more likely to trust others than when everyone is focused on how to get a bigger share of a fixed pie. I think it is also likely that agencies that regard translators as valued partners in a demonstrable way at an organizational level will lead the innovation and evolution of how business translation gets done. Hagel also says that a new narrative based on opportunity is needed.
Like any great narrative, it must be crafted.  “Craft” is an evocative term because it suggests that narratives are not just created on paper, but built through the actions that we begin to take as we start to see the opportunity ahead. Narratives emerge through action and interaction as we collectively begin to sense an opportunity and learn through action what it will take to achieve that opportunity.
No single person can create this collaboration, trust and opportunity narrative alone, and I look forward to seeing those who do help carve a path for all to learn from. Revolutions often happen from many small acts set into motion, like balls rolling together in the same direction, gradually building momentum; and some revolutions happen slowly, after some initial sputtering and misfiring.