Saturday, December 29, 2012

Annual Review–Most Popular Posts of 2012

“Blogs are about sharing with authenticity. A good blog can help you really connect deeply with your audience in a meaningful way because the content is not only relevant but insightful and personal. I think most enterprises miss that point. When you do it right, your customers will walk away not only having learned something new but will also feel much more connected to your brand.” (David Armano, EVP, Global Innovation & Integration at Edelman Digital)
It seems like it was just a moment ago that I summarized the most interesting blog posts of 2011, but here we are again and the world has not ended. I was not as active writing in 2012 as I was in 2011, as I felt that I had said much of what I had to say, and there is only so much one can write about machine translation without being repetitive. The topic has had more coverage across the industry and is perhaps slightly better understood now than it was last year. I am limiting the list to the top 6 since I had fewer new posts this year. Since Google has killed the PostRank service, I am now reduced to providing only the most-popular list of blog posts. PostRank used to give us much better insight into the broader influence of any web content and helped identify seminal and influential, rather than simply popular, content. I resolve to be more active in the new year if I have ideas for new material, and I am always open to suggestions. There are still many misconceptions about MT, and I think it would be useful to cover these in more detail; perhaps I will delve into that in 2013.

Here is the list of most popular posts in order of popularity:

  1. Exploring Issues Related to Post-Editing MT Compensation This article continues to get attention today even though it was written early in the year, and it still shows up in the top 3 nearly every week. The post has links to several interesting comments on post-editing, and I think this is one of the reasons why it continues to be popular: it gathers different opinions and viewpoints in a useful and unbiased way. The popularity of this post suggests that this is an important issue to resolve in a fair and equitable way to enable broader MT adoption. All parties involved need to work together to establish trusted and equitable compensation for this process. I hope that others will step forward to share opinions and approaches that might further the dialogue. It would be especially useful for translators to step forward and suggest ways to do this more efficiently and accurately. For example, this post by Jason Hall shows that simply equating MT output quality to TM matches may not make sense, and that leveraging MT is entirely different from leveraging TM.

  2. The Moses Madness and Dead Flowers This post was written very late in 2011, and thus its popularity was not reflected in the 2011 list. But it is another post that has continued to see regular traffic as more people wade through the Moses technology and realize that “free” and “DIY” is still really a pipe dream with MT. Being able to whip up some sort of an MT system by throwing data into a computer has become very easy, but the technology is still very complex and hairy, and requires at least "some" fundamental knowledge for any real success. I remain very skeptical about any instant MT approaches, and I think we will continue to see a market where you get what you pay for. I would avoid any LSP whose strategy is based around instant MT solutions.

  3. Emerging Language Industry & Language Technology Trends This was a post that seemed to strike a chord and it very rapidly rose to being one of the most popular posts of the year. Thanks to all those who shared their opinions to provide broader context. In case you missed it you may also wish to take a look at Translation Guy’s humorous take on the post. You may also find the Asia Online Trends and Translation Industry predictions interesting and you can access the webinar and slides through the link provided.

  4. A Short Guide to Measuring and Comparing Machine Translation Engines This post provided specific and constructive advice on using BLEU scores correctly to assess your MT systems in a fair and accurate way. I regularly see BLEU scores being used to mislead gullible users, and there were even some presentations at the AMTA 2012 conference that claimed systems scoring .90 (or 90 on a 100-point scale), which to my mind is only possible if you cheat. In short, BLEU measures the quality of MT system output against one or more human reference translations of the same material. It needs to be done carefully if you want meaningful and accurate results. It is possible to calculate BLEU scores on two human translations of the same material, and even there I have never seen a score higher than .7 (or 70), since humans do things quite differently. There is a great discussion of the many issues with BLEU in this article, and I recommend it so that you can understand the increasing number of discussions where it is referenced today.

  5. The Relationship Between Productivity and Effective Use of Translation Technology MT should only be used when it actually provides measurable productivity advantages. Higher-quality MT systems generally provide a much higher return on investment (ROI), and this post explores this issue in some detail. MT is a means to build long-term production advantage, but only when you do it well, and if you are going to invest in this technology my advice is to do it as well as possible. Most of the shortcuts will lead to dead ends, and remember that with MT you are competing with smart people at Microsoft and Google who are doing the best they can for a general internet user population. Most translators will likely prefer to use these "free" engines over crappy LSP-produced Moses and RbMT engines.

  6. Understanding Post-Editing  This is one of several posts on the subject of post-editing. This is a subject that is worth exploring more as there are also many misconceptions about the nature of the process and it would be useful for more voices to air both good and bad post-editing experiences so others can learn. Jost Zetsche has written about this in some detail in his newsletter but the scope and understanding of the role of language experts is still evolving and it is a worthwhile discussion to continue. I have not seen anything really useful coming out of conferences so I suspect the best stuff on the subject will happen in blogs and LinkedIn discussion forums.
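To make the BLEU discussion in item 4 concrete, here is a minimal, self-contained sketch of how a BLEU-style score is computed: modified n-gram precision combined with a brevity penalty. This is a simplified illustration, not a replacement for standard tools such as sacreBLEU; the add-one smoothing used here is just one of several common variants, and real evaluations also depend heavily on consistent tokenization.

```python
from collections import Counter
import math

def ngrams(tokens, n):
    """All contiguous n-grams of a token list, as tuples."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(hypothesis, references, max_n=4):
    """Simplified BLEU: geometric mean of modified n-gram precisions
    (n = 1..max_n) multiplied by a brevity penalty."""
    precisions = []
    for n in range(1, max_n + 1):
        hyp_counts = Counter(ngrams(hypothesis, n))
        # Clip each hypothesis n-gram count by its maximum count
        # in any single reference ("modified" precision).
        max_ref = Counter()
        for ref in references:
            for gram, count in Counter(ngrams(ref, n)).items():
                max_ref[gram] = max(max_ref[gram], count)
        clipped = sum(min(c, max_ref[g]) for g, c in hyp_counts.items())
        total = max(sum(hyp_counts.values()), 1)
        # Add-one smoothing so one empty n-gram order does not
        # zero out the whole score on short sentences.
        precisions.append((clipped + 1) / (total + 1))
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    # Brevity penalty: penalize hypotheses shorter than the
    # closest-length reference.
    ref_len = min((len(r) for r in references),
                  key=lambda l: (abs(l - len(hypothesis)), l))
    bp = 1.0 if len(hypothesis) >= ref_len else math.exp(1 - ref_len / len(hypothesis))
    return bp * geo_mean

hyp = "the cat sat on a mat".split()
ref = "the cat sat on the mat".split()
print(round(bleu(hyp, [ref]), 3))  # a score around 0.64
```

A single substituted word ("a" for "the") already pulls the score well below 1.0, which illustrates why two independent human translations of the same text rarely score above .7 against each other, and why claims of .90 system scores deserve scrutiny.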
I once again invite any interested guest authors who might wish to use this blog as a way to share an idea or an opinion on the translation industry. (There is a good blend of buyers, LSPs and translators who watch this blog.) I do not seek only those who agree with me, and in fact I hope that some who disagree will also step forward. I have always thought that it is useful to hear many different opinions to better understand a subject. So please don’t hesitate to send me contributions that you think might be interesting to the audience that has been following this blog. I thank you for your support, and I hope that the content here will continue to earn your interest and comments, extending the discussion beyond my own thoughts on key translation automation issues.

It is also interesting to note that some older posts continue to strike a chord with readers and remain visible because their themes are longer-lived, and perhaps because they ring true. The original post on standards, the analysis of why Google changed the use model of their MT systems, and some of the posts that discuss the reaction to automation or industry disintermediation continue to generate interest and show up high in the list in Google Analytics.

I found a very interesting blog post that I think is worth a read, as it points to the changes that widespread information availability and ease of access create in traditional commerce among socially engaged human beings. There is also a link to the research data from Mary Meeker on the changing online world that is worth at least a quick look. I think we are heading back to a world where it is more important to understand how people connect than to assume that technology and data will solve every problem known to man. I have always preferred the emphasis on Why? rather than How?

Commerce in 2013 is about integrating the whole experience around the customer -- social, local, and mobile, bricks and clicks, in real life, in real time, and over time.  
Finally, I want to share a beautiful piece of music by Mercedes Bahleda that I discovered through Pandora. The video is also very evocative and sublime, with scenes of inter-species communication and a languorous swim dance. Those of you who find the sight of a female human breast offensive (there are unfortunately many in America who actually do) may wish to avoid actually looking at the video. I suggest you turn the volume up and play this on good speakers for maximum effect.

Happy New Year – I wish you health, happiness and joy

I don’t tell the murky world
To turn pure.
I purify myself
And check my reflection
In the water of the valley brook.

Zen Master Ryokan

“If the light’s not in you, you’re in the dark.”
Marty Rubin

Wednesday, December 19, 2012

Understanding the Translation Buyer

In my last post I talked about various viewpoints on the emerging trends in the industry, and I noticed that the post very quickly established itself amongst the most popular of the year. In fact, it is second only to a March posting on post-editing compensation, a subject which continues to draw ongoing attention. In many ways this post is an expansion of some of the points that were raised in the last one.

When one considers the general focus of translation work done by the professional translation industry, I think we see that a large part of the work is related to translating content that facilitates and enables international commerce. The world of localization, to a great extent, focuses on the content that is closely related to the final packaging of products sold in international markets. This focus is the software and documentation localization mindset at the heart of the largest translation agencies' work in the business translation industry. So much so that one company chose to name themselves SDL, though most of the other agencies in the industry have exactly the same focus.

Much of the content (which is just a word to summarize a particular collection of words) that gets translated is mandatory and necessary to be able to participate in target international markets. So most globally focused enterprises translate what they absolutely must to legally participate in key target markets; some do more, but for the most part only what is absolutely necessary gets done, because translation is slow and expensive. An example of doing the least amount possible: Microsoft Office products have a Thai user interface, but if you hit the F1 online help button you will only see help in English! But everybody understands that some amount of translation MUST be done to be viable in international markets; e.g., Honda could not sell cars in Europe without creating some amount of final-customer (aka end-user) and distribution-channel material about their products in “key” languages.

By my observation, this traditional content is 1) marketing content (brochures and high-level product-descriptive web content, legal liability, some advertising) and 2) product-packaging-related material, since most customers and some governments require that imported products contain user documentation and other basic service information in the package that international customers buy, preferably in their language. The SDL mindset is a result of the increasing importance of software products and services in the world over the last 20 years. In fact, we see few translation agencies (LSPs) older than 20 years in this industry. This has resulted in a world where “translation projects” are often outsourced to agencies (LSPs), as it does not make economic sense for companies to build internal translation-focused teams unless they have ongoing and continuous needs to translate material. And as we know, it is still quite messy to coordinate translators across many languages to release products at the same time across the globe. While there is change afoot across many dimensions, most of this traditional localization will continue, though I suspect that much of the paper documentation will get thinner and much more content will move to the web. Today global enterprises have to seriously consider translating new and continuously flowing text and video related to their product offerings that is accessed via tablets, smartphones and PCs. It has become important to translate “external” content that customers peruse and use to make purchase decisions, and also to provide a much richer set of information to enable self-service with products after the purchase.

We live in an age where marketing and corporate-speak is increasingly challenged, undermined and sometimes even seen as disingenuous and false. (Raise your hand if you trust and respect corporate press releases about their amazing “ground-breaking” products.) Today we see customer voices rise above the din of corporate messaging and take control of branding and corporate reputations with their own “authentic” discussions of actual customer experiences, while marketing departments look on haplessly. We are also seeing a shift away from corporate-controlled, top-down marketing messages to more open, uncontrolled, customer-initiated and customer-driven conversations, and some have been saying that the old view of corporate websites cannot succeed anymore. In 2012 global enterprises need to do different things to be successful in building a satisfied and loyal customer base. Corporate marketing messages have gained the same patina as political party propaganda, and most customers look elsewhere to determine the real truth about anything they might consider buying. While a few companies are learning the new rules of engagement, many still continue the old way and risk irrelevance. There is growing awareness of these forces of change, as we can see from the many discussions related to these trends and the popularity of discussions on disintermediation and change in the translation industry. GALA recently alerted their members to the need to cooperate, collaborate and develop meaningful standards, and stated the following:
To respond to these challenges, LSPs, tools providers, content developers, and all players in the language industry need to be smarter than we were in bygone days. We need to cooperate and collaborate, not only because now we truly can, but also because it is the new way of the world. Those in our midst who don’t collaborate with others will soon find themselves losing out on opportunities and falling behind.
Collaboration means more than having a Facebook page, a profile on LinkedIn, some files on Google Drive, and tweeting. The new collaborative paradigm means participants are distributed, peers are connected, work is interactive, and ideas are shared. Innovation goes up, down, and across the supply chain. But real cooperation also requires a certain level of trust. Intuitively humans only collaborate to the extent they trust others. As the ice of the P.C. era melts away, we may see trust building mainly through discussions in social networks and networking at conferences right now, but this is only the beginning of the Social Collaboration era. Over time, more ways will appear to establish trust and form collaborative networks.
The changing dynamics at the broadest level are eloquently described by John Hagel as The Big Shift.  He describes various core assumptions and historic conditions that are being undermined today and I like his advice on how to deal with change. The following is good advice for an industry with 25,000 companies.
If we approach interactions with the zero sum mindset – that there is a fixed quantity of resources that must be distributed and your gain will inevitably be my loss – we virtually ensure that we will end up with short-term transactions and undermine any efforts to build longer-term relationships.  In contrast, if we adopt a positive sum mindset – that through our collaboration we can generate a growing pool of resources – we are likely to be much more successful in building long-term trust based relationships. In turn, this means we will be more effective in participating in the knowledge flows that have the potential to generate the most economic value, thereby creating a virtuous cycle that builds upon itself and generates powerful network effects.

So, if we see that the way the customer gathers information and assesses purchase decisions is changing, we should also understand that the content that will have the most value in helping to build customer relationships and thus international market success is also changing.  Aligning your business processes and strategies with this new reality is likely to be a wise thing to do.  We can see today that while there will still be an ongoing need for the traditional SDL-type of content, there is also high-value content being created in much less controlled ways that could significantly benefit international business initiatives. The graphic below illustrates this. There is great value in identifying content that customers are creating about user experience and product feedback and translating that in addition to traditional localization content. Better yet, global enterprises could encourage this in sponsored forums. We see today that more informal corporate content (e.g. blogs and product discussion forums) and also “external” content created by customers in user forums can be invaluable in helping to build market momentum. A lot of this new high-value content is much more unstructured and fleeting but can still influence customer purchase behavior, so it should be taken seriously and considered worthy of translation through new production approaches like community crowdsourcing or automated translation with carefully tuned MT systems that easily outperform free MT solutions. 
For many global enterprises even internal communications about new products and services are increasingly becoming multilingual so the role of translation can be significantly greater than the limited scope defined by the SDL mindset.  Thus “internal” emails and product design discussions that are embedded in Microsoft Office documents also become very valuable to producing products that are truly localized for different markets, especially if these discussions are global and multilingual. There is probably a role for language translation specialists who can solve these new kinds of problems for global enterprises. Many corporations are attempting to solve these problems on their own since the translation industry is for the most part still only focused on the historical SDL-type solutions. How many LSPs do you know who are involved in translation projects related to customer conversations in social media?
However, the decision to translate knowledge bases, customer discussion forums or high-value customer-created content is probably happening in executive suites rather than in the localization department. Thus it makes sense to learn to understand and speak to the needs and views at this level. The conversation is likely to be quite different from the TM value and word-rate discussions that happen with localization departments (which I know are also important). I expect that new translation production models to build success in international markets will involve MT (and other translation automation) and crowdsourcing, as well as traditional project management. It is very likely that old production models like TEP (Translate-Edit-Proof) will become less important, or just one of several approaches to translation challenges, as new collaboration and translation production models gain momentum. I think that the most successful approaches to solving these "new" translation problems will involve a close and constructive collaboration between traditional localization professionals, linguists, MT developers, end-customers and probably others in global enterprise organizations who have never worked in "localization" but are more directly concerned about the quality of the relationship with the final customer across the world. At the end of the day, our value as an industry is determined by how useful our input is to the process of building international markets, and the requirements for success are changing as we speak.
I will take a stab at describing what qualities might be most appealing to the target buyer who may not even know that the word localization is related to translation. The vendors that would have the most attractive profile with an executive suite buyer (VP Sales, VP Marketing, VP Customer Support, VP Customer Experience, COO, CMO, CFO etc..) would probably have the following characteristics:
  • Be an expert on solving language translation related business problems rather than be just a language service provider (LSP) who manages translation projects of defined bags of words
  • Ability to identify, recruit and retain a superior human translator workforce
  • Ability to understand and participate in the larger customer satisfaction and customer loyalty building dialogue that matters at the C-level and explain how translation contributes to this beneficially
  • Ability to interface with and process and translate critical content in a highly automated workflow as seamlessly as possible
  • Ability to adjust production to business needs i.e. combine and mix TEP, MT, PEMT and Community-based production as required to meet customer needs
  • Ability to articulate and adjust time, quality and cost parameters as necessary to meet different customer requirements rather than force all projects through the same mill
  • Ability to have a productive and objective discussion on deliverable translation quality across different production methods
  • A commitment to open standards to facilitate data transfer and exchange on a long-term basis so that efforts transfer and scale across information delivery mechanisms (web, tablet, smartphone, documentation)
  • Demonstrated competence and an understanding of developing superior automated translation technology (i.e. beyond building dictionaries and operating Moses in rudimentary way). Preferably better than is possible with free MT on the web or your basic DIY effort.
  • Ability to manage and handle small (single sentence) projects as well as large bulk projects with equal ease and efficiency
  • Ability to respond rapidly to changing customer requirements

It is said that the Winter Solstice of 2012 is a very special time (so special that the planetary alignment we see now apparently happens only once in 26,000 years: the Earth, the Sun and the center of the galaxy are on the same line at the moment of this Winter Solstice), and depending on your viewpoint it is either a time for great new beginnings or a time of final reckoning. Hopefully for most of us this is a time of wonderful and energizing new beginnings and evolution. I wish you all a wonderful holiday season and a happy new year.

Friday, December 7, 2012

Emerging Language Industry & Language Technology Trends

As the year comes to a close, it is sometimes useful to review and look ahead on where things may be going, and even though many of these type of ruminations can be self-indulgent and self-serving, I have decided to throw in my two cents anyway. These are personal opinions on other opinions, and like much of what I do in this blog, this is also a collection of information that I consider most worthwhile to share on this subject of trends.

The translation industry remains highly fragmented, with relatively inefficient production and business models. In 2012 we still have over 25,000 language service providers (agencies) of varying quality and professionalism doing the work of business translation across the globe. Efforts to define the final product or service produced by these firms have been unsuccessful, despite valiant efforts from industry associations. However, many have been talking about change and disintermediation, and many of us are aware that something is afoot. My intent here is to collect and organize different opinions rather than only promote my own, and hopefully I succeed in creating broader clarity on these emerging trends and possibly starting some discussion.

A trigger for this post was a conversation with Bob Donaldson who presented on this theme at Translation Forum Russia. I have also added some material gathered at other conferences I attended this year that extends these initial opinions. Bob has simplified my task by gathering and sharing the opinions on key trends of several different viewpoints as summarized below. (I have kept the text exactly as presented in his slides at TFR but you could get clarifications and detail beyond this slide verbiage by directly contacting him). 

Multi-Language LSP Vendor (MLV) Perspective by Renato Beninatto
  • Rise of Micro translations (interesting response to this point by Luigi Muzii)
  • Outsourcing to translator teams
  • Demand for “long tail” languages

CAT Tools Training Perspective by Angelika Zerfass
  • Terminology Management gaining traction (finally)
  • New content types (twitter) don’t fit old processes
  • File management becoming more complex

Translator Perspective by Jost Zetsche
  • Deep integration of MT into translation workflows
  • Limited lifespan of LSP as (mere) middleman

End Buyer Perspective by Anonymous
  • Demand for continuous translation with very little context (Micro translation)
  • Declining Quality Expectations
  • MT will fill the gaps created by the first two at an ever-increasing price

Single & Regional Language Vendor (SLV/RLV) Perspectives in aggregate
  • Greater usage of MT
  • Multi-faceted approach to quality
  • “Price compression” will drive small/inefficient players out of business
  • “Disintermediation” will show up in various forms
  • Greater demand for self-service portals

Bob Donaldson Top 4 Trends Summary
  • Transition from “Project Orientation” to “Content Stream” orientation
  • Increasing integration of MT at all levels
  • Increasing emphasis on velocity rather than price or quality
  • Increasing reliance on global SLV partners rather than freelancers
All of this results in changing business models and the need for innovation.

As I consider all these views, my own sense is that the following trends are increasingly understood to be clear and continue to gain momentum:
  • Business translation is shifting focus from intermittent project work of relatively static content to continuously flowing streams of information that might enhance international business. The old “software and documentation localization” (SDL?) view of the world is becoming a smaller part of the core translation challenges that global enterprises face to be successful in international markets.  There is also a growing awareness that translation should be able to flow from document/video to PC/web to mobile/tablet easily, quickly and efficiently.
  • An expanded view of critical and translation-worthy content that includes more informal corporate content as well as customer generated content and social media conversations about products. Social media has dramatically changed the traditional top-down views of marketing, and this impacts the decisions on what is important to translate as enterprises realize that purchase decisions are being made in social online conversations and information sharing.
  • The importance of automation and collaboration increases. This is more than just MT, it includes greater integration of content flows from the information creation process all the way to information consumption. Successful use of comprehensive automation and collaborative processes will help create meaningful differentiation and competitive advantage amongst LSPs and help identify superior players.
  • The increasing importance of cloud based services and infrastructure to facilitate collaboration and standardization of translation-related informational flows. This will also mean that desktop tools (TM, MT) will become less important over time as usage shifts to the cloud.

On the MT front I expect the following trends, much of this is already in place and also gaining momentum:
  • Increasing awareness amongst translation professionals that domain focused MT produces the best results in terms of production efficiency and productivity gains. We will hear of many more successes of these kinds of focused systems.
  • Increasing understanding of post-editing based translation production and processes. While there will be some or many “premium” translators who refuse to work on PEMT projects, more and more translators and LSPs will learn to work effectively with MT.
  • Continued momentum in the understanding of MT system quality which will result in better PEMT experiences and trusted, fair and equitable compensation practices. This is essential for broader long-term adoption.
  • A shift away from free and instant MT solutions to expert collaboration and expert-built MT systems. (Some will say this is self-serving, and to some extent it is.) It has become increasingly easy to get some sort of MT system into place by throwing some data into a hopper, but very few of these systems provide long-term productivity gains and strategic advantage out of the box. MT in 2012 is still very complex, and getting some kind of basic system together quickly should not be equated with building long-term production efficiency. Experience and knowledge about MT system development matter, and the best systems, i.e. those with the highest productivity and best overall ROI, will still come from experts. As Malcolm Gladwell says, “Practice isn't the thing you do once you're good. It's the thing you do that makes you good.” Experts are people who have built hundreds or thousands of MT systems. Many who experiment with Moses and other instant MT solutions will learn that deep expertise is required to move the system quality beyond the initial engine capabilities, and that long-term business advantage only comes from continuously improving MT systems. In 2013 MT system development is still an evolutionary process and a skill-based technology, not the instant iPhone-like gadget that some want it to be. There is a difference between using MT well and just blindly using MT because it is in vogue. If you don’t know what you are doing and what you will do after your initial system is in place, being able to do it quickly initially is not going to add much to your business leverage.
  • Better understanding of what MT can and cannot do, and more pro-active use of MT to build long-term competitive advantage rather than just be a means to react to cost pressure or client demands. This means that some LSPs will build MT systems BEFORE they actually have a customer to ensure that they have an advantage in particular domains that they feel have strategic promise and potential.

I have discussed the importance of automation (process integration, which includes MT and much more than traditional project management) and collaboration (which also means that you respect your workers and customers) as important elements of new business models that can effectively respond to and take advantage of these trends. I would like to add agility as a critical third element. What is agility, or agile? I think it is becoming ever more critical for success in the future.
: Characterized by quickness, lightness, and ease of movement; nimble
: Mentally quick or alert
: Marked by a ready ability to move with quick, easy grace
: Having a quick, resourceful, and adaptable character
So are there any examples of where all these elements come together? Not really, and definitely not at large LSPs like Lionbridge, SDL et al.  Largeness (over $50M for the translation industry) generally tends to undermine agility and often collaboration (in the sense I use the word) too. I think there are smaller companies where all these elements are more visible and look like they have the potential and promise to bloom. A nice and succinct description of “agile” is presented by Jack Welde, CEO of Smartling, in the first 8 minutes of this video.
An 8 minute overview of Agile Business Translation
I suspect that many new business translation customers will opt for this type of lean, quick and more cost-effective approach over the traditional LSP sales and TEP process hype, where the customer is often treated like an idiot that needs to be slapped into shape. Lingotek is another company with an approach that has many key elements in place and I think is well positioned to challenge the old model. In both cases outsiders are creating tools to change a cumbersome old business model and facilitate rapid collaborative production. DotSUB and Amara are two that are focusing on facilitating translation of the huge volumes of video content that are increasingly useful to help sell products and services, and are increasingly recognized as more important than a lot of traditional localization content. In all cases these new approaches can steer easily to professional, MT or community-based production, or any combination of the above, at significantly lower prices with “quality” intact. Try to have this discussion about flexibility, speed and various production modes with a large traditional LSP and you will likely find that it may be possible at a significantly higher price, and I suspect the conversation will also be labored and difficult.

All of this points to an examination of changing business models and innovation, and the most interesting discussions I have seen on this subject for this industry are at The Big Wave. Listing all these trends has some value, but it is also useful to understand how they mix together and what implications they might have. I can’t say I have the answers but I think these are good things to ponder. For example, here are some selections from this post:
TEP is the unique answer of most translation vendors, an old-fashioned and somewhat obsolete answer too, but it is the only model they know.
At a closer look, none of the big players in the industry, however, has produced substantial product, technological, or process innovations.
When customers don’t get any new value from traditional vendors to meet new or implicit needs, they abandon these vendors and do something different themselves to better accomplish their goals.
This is why innovation in translation has always come from outsiders.
There are also interesting posts On Information Asymmetry and The Disintermediation Myth: Bogy or Opportunity? which are provocative and worth reading even though some might feel they are slightly opaque.

The future is likely to see multiple production models co-exist, e.g. TEP, PEMT, Customized MT, Free MT, Crowdsourcing or Community Collaboration, as well as increasing examples of social translation, as these will all be necessary to solve the different types of translation challenges we face. I have often thought that it is too complicated to buy translation from traditional LSPs, and I hope that we as an industry make this much clearer and simpler for the customer who has never heard the word localization used in relation to translation. There are a lot more of these customers out there than there are customers in localization departments. People usually find it easier to buy when they know exactly what they will get for a given price. People like predictable outcomes (really hard to do with MT) and would like to be able to easily compare alternatives.

I would welcome any readers who would be interested to share their own perceptions and views on these trends as a guest post. I assure you that I will print it without modification (hopefully no personal attacks or rants).

Friday, November 9, 2012

Understanding Post-Editing

MT continues to build momentum because the need for large global enterprises to make more information available faster continues relentlessly. There are still some who question the “content tsunami”, but we are now getting data points that define this in very specific terms for industry players who are still doubtful. For example, last week at AMTA 2012, a senior Dell localization professional gave us a specific data point: Dell has increased its volume of business and product-related translation from 30 million words to 60 million words in two years. This was done without any increase in the translation budget. This situation is mirrored across the information technology industry, and many with information-intensive products now realize that translating large amounts of product/service-related information enhances global business success. Given the speed and volume of information creation, it is often necessary and perhaps even imperative to use technology like MT. 

While much of the discussion about MT tends to get bogged down in linguistic quality issues, we should all remember that ultimately, the whole point of business translation and the whole localization industry is to facilitate cross-language trade and commerce. We now have many examples where the final customers of Dell, Microsoft, and Apple say that machine-translated content is more than acceptable, even though this same content would fail a linguistic review in a typical Translate-Edit-Proof (TEP) process. We see terms like linguistic usability and readability being applied to translated content which often falls short of the TEP quality that many of us have grown accustomed to or expect. Customer expectations change, and free online MT has made MT more acceptable, also because we understand that the content being translated is created by writers who are not really writers, for readers who do not have scholarly expectations of this content. There is content that requires TEP rigor, there is some that can be raw MT, and there is much in between with various shades of grey. This is not acquiescence to crappy quality across the board; rather, it is understanding that for a lot of business translation, MT or PEMT does produce quality that helps to accomplish the business goals of getting information to customers in a cost-effective and timely manner.

Thus, we see the growing use of MT in business translation contexts, but there is still a lot of misinformation and it is useful to share more information about successful practices so that the use and adoption of this technology is more informed and the discussions can become more dispassionate and pragmatic. 

The ProZ Virtual Conference that I participated in recently included some sessions exploring what post-editing MT is about, which I thought might be interesting to highlight in this post.

The integrated audio/video session is available on the ProZ site by clicking on the link to the left and playing the presentation back on the “low-bandwidth” image towards the bottom of the page. (Unfortunately, the live session had many video resolution issues but the recording is fine.) I have also included the Slideshare below for those who just want to see the basic content of the presentation. Hopefully, this presentation does provide a more realistic perspective on what is and is not possible with MT.

A second session included a panel discussion on post-editing with speakers from various translation agencies talking about their direct experiences with post-editing MT (PEMT).
PEMT is an important issue to understand as there are very strongly felt opinions on this (many based on actual bad experiences), but the signal-to-noise ratio is still very poor. Many translators feel that the work is demeaning and are not interested in doing it, and practitioners should understand this. However, much of the negative feedback is based on early practices where the MT quality was very bad and translator/editors were paid unfairly for the effort involved. Some recent feedback from TAUS even suggests that many translators are considering leaving the profession because they do not enjoy this type of work. Better MT and fair compensation practices can address some of this dissatisfaction. While early experiences often focus only on the most mechanical aspects of PEMT, I think there is an opportunity for the professional translation industry to get more engaged in solving different kinds of multilingual problems, e.g. support chat and customer forum discussions, where translation could greatly enhance global customer satisfaction and increase dialogue and engagement.
I think that we as an industry could further our prospects and greatly reduce the emotional content in the debate and discussion by getting better definitions of quality across the spectrum of content that is worth translating to facilitate commerce. Competent TEP and raw online free MT are the two opposite ends of the quality spectrum, and it would be useful to get better definitions of the useful quality levels for the variety of grey shades in between, preferably in terms that are meaningful to the consumers of that content rather than in terms of linguistic errors.
In the PEMT context, it would be useful for both translation agencies and translator/editors to better understand the specific MT output quality involved, so that compensation structures can be set more rationally and equitably. This quality assessment, I believe, is an opportunity for translators to develop measures that link the quality of specific MT output to their compensation on a project-by-project basis. My previous post suggests one such approach, but I am sure there are many other ways that translators can rapidly assess the scope and difficulty of a PEMT task and help agencies and buyers understand equitable compensation structures based on trusted measurements of the scope of work.
There is also a growing discussion on what an ideal PEMT environment looks like, and Jost Zetsche provided some clues in the 213th edition of his newsletter. But basically, we need tools that provide a different kind of context, since MT errors are not quite the same as TM fuzzy match errors. Perhaps some of the frustration that translators have stems from expecting to see the same type of errors as they see in low fuzzy matches. I would suggest the following for an ideal PEMT environment:
  • Rapid Error Detection (Grammar and Spelling Checkers)
  • Rapid Error Correction (e.g. Move word order, global correct and replace)
  • Dictionary and Terminology DB links
  • Error Pattern Identification so that hundreds of strings can be corrected by correcting a pattern
  • Quality measurement utilities to assess specific and unique MT output
  • Productivity measurement tools
  • Context as well as individual segment handling
  • Tight integrations with TM
  • Linguistic data manufacturing capabilities to create corrective data
  • Regex and Word Macro-like capabilities
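
The error-pattern idea in the list above can be sketched in a few lines. This is an illustrative sketch only: the correction rules and sample segments are invented examples, not drawn from any real PEMT tool; in practice such patterns would be mined from post-editor corrections.

```python
import re

# Illustrative correction rules: each maps a recurring MT error
# pattern to a fix. Fixing one pattern corrects every segment
# that exhibits it, which is the point of pattern-based editing.
CORRECTION_RULES = [
    (re.compile(r"\binformations are\b"), "information is"),  # false plural
    (re.compile(r"\bdatas\b"), "data"),
    (re.compile(r"\s+([,.;:])"), r"\1"),  # stray space before punctuation
]

def apply_rules(segments):
    """Apply every rule to every segment and return the result."""
    fixed = []
    for seg in segments:
        for pattern, repl in CORRECTION_RULES:
            seg = pattern.sub(repl, seg)
        fixed.append(seg)
    return fixed

segments = [
    "The informations are stored locally .",
    "The datas are synchronized daily.",
]
fixed = apply_rules(segments)
```

A real environment would of course layer this kind of global correct-and-replace onto TM integration, terminology lookups, and productivity measurement, but the core mechanic is this simple.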

Wednesday, October 24, 2012

Effective Determination of PEMT Compensation

An issue that continues to be a source of great confusion and dissatisfaction in the translation industry is related to the determination of the appropriate compensation rate for post-editing work. Much of the dissatisfaction with MT is related to this being done badly or unfairly.

It is important that the translation industry develop a means to properly determine this compensation issue in a way that is acceptable to all stakeholders. Thus, developing a scheme that is considered fair and reasonable by the post-editor, the LSP and the final enterprise customer is valuable to all in the industry. It is my feeling that economic systems that provide equitable benefits to all stakeholders are the ones most likely to succeed in the long term. Achieving consensus on this issue would enable the professional translation industry to reach higher levels of productivity and also increase the scope and reach of business translation as enterprises start translating new kinds of content with higher-quality, mature, domain-focused MT engines.

While it took many years for TM compensation rates to reach general consensus within the industry, there is some consensus today on how TM fuzzy match rates relate to the compensation rate, even though some are still dissatisfied with the methodology and with the commoditization that TM and fuzzy-match-based compensation schemes impose on the art of translation. Basically, today it is understood that 100% matches are compensated at a lower rate than fuzzy matches, and that the higher the fuzzy match level, the greater the leverage the segment provides in the new translation task. Today the fuzzy match ratings provided by the different tools in the market are roughly equivalent and, for the most part, trusted. There are some (or many) who complain about how this approach commoditizes translation work, but for the most part translators work with an approach that says they should get paid less for projects that contain a lot of the exact same phrases, i.e. 100% matches in the TM that is provided for new projects. 
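
As a rough sketch of how such a TM scheme translates into pay, the bands and multipliers below are illustrative assumptions only; actual bands and rates vary by agency, tool, and negotiation.

```python
def tm_rate_multiplier(fuzzy_match_pct):
    """Map a TM fuzzy match score (0-100) to a fraction of the
    full word rate. Bands and multipliers are invented for
    illustration; real schemes differ by agency."""
    if fuzzy_match_pct >= 100:
        return 0.25   # exact match: review-only effort
    if fuzzy_match_pct >= 95:
        return 0.40
    if fuzzy_match_pct >= 85:
        return 0.60
    if fuzzy_match_pct >= 75:
        return 0.80
    return 1.00       # low or no match: full rate

full_rate = 0.20  # assumed USD per word, for illustration
quote = {m: round(full_rate * tm_rate_multiplier(m), 3)
         for m in (100, 96, 90, 50)}
```

The whole point of the post-editing compensation debate is that MT output has no equivalent of this trusted, tool-reported match score, which is why productivity measurement has to fill that role.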

However, in the world of MT, it is quite different. Many are just beginning to understand that all MT systems are not equal and that all MT output does not necessarily equate to what is available on Google and Bing. Some systems are better and many are worse (especially the instant Moses kind) and to apply the same rates to any and all MT editing work is not an intelligent approach. Thus, the quality assessment of the MT output to be edited is a critical task that should precede any large project involving post-editing MT output. The first wave of many MT projects just applied an arbitrarily lower (e.g. 60%) word rate to any and all MT post-editing work with no regard to the actual quality of the MT output. This has led many to protest the nature of the work and the compensation. Many still fail to understand that MT should only be used if it does indeed improve productivity and this is a key measure of value and thus should drive compensation calculations.

The first fact to understand and have in hand before you implement any kind of MT is the production rate BEFORE MT is implemented. It is important to know what your translation production throughput is before you use any MT. The better you understand this, the higher the probability that you will be able to measure the impact of MT on your production process.  This was pointed out very clearly in this post. (I should state that many of my comments here apply to PEMT use in localization TEP type projects only).

Many now understand that the key to production efficiency with MT is to customize it for a specific domain and tune it for a specific purpose. This results in higher quality MT output, but only if done with skill and expertise. We are now seeing some practitioners attempting quality assessments prior to undertaking post-editing projects, but there is a lot of confusion since the quality metrics being used are not well understood. In general, any metric used, whether automated or based on human assessment, requires extended use and experience before it can produce useful input to rate-setting practices. BLEU is possibly the metric that is most misunderstood and has the least value in helping to establish the correct rates for PEMT work, mostly because it is usually misused. There is one MT vendor making outlandish claims of getting a BLEU of .9 (90) or better. (This is clearly a bullshit alert!) This is somewhat ridiculous since it is quite typical for two competent human translations of the same source material to score no higher than .7 when compared to each other, unless they use exactly the same phrasing and vocabulary. The value of BLEU in establishing PEMT rates is limited unless the practitioner has long-term experience and a deep understanding of the many flaws of BLEU. 
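
To make the BLEU point concrete, here is a minimal self-contained sketch of modified n-gram precision, the core of BLEU (brevity penalty and smoothing omitted; the brevity penalty is 1 here since the candidate is not shorter than the reference). The two sentences are invented for illustration, but they show how two perfectly acceptable renderings of the same source can score far below the claimed .9.

```python
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def modified_precision(candidate, reference, n):
    """Clipped n-gram precision, as computed inside BLEU."""
    cand = Counter(ngrams(candidate, n))
    ref = Counter(ngrams(reference, n))
    clipped = sum(min(count, ref[gram]) for gram, count in cand.items())
    total = sum(cand.values())
    return clipped / total if total else 0.0

# Two acceptable human translations of the same (hypothetical) source:
ref = "the committee approved the proposal yesterday afternoon".split()
cand = "yesterday afternoon the committee agreed to the proposal".split()

p1 = modified_precision(cand, ref, 1)  # unigram precision: 0.75
p2 = modified_precision(cand, ref, 2)  # bigram precision: 3/7
bleu2 = (p1 * p2) ** 0.5               # geometric mean, well below 0.9
```

Different but equally valid word choices ("approved" vs. "agreed to") and reordering are penalized exactly as if they were errors, which is why a BLEU near .9 against an independent human reference is implausible.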

Another popular approach is to use human-based quality assessment metrics like SAE J2450 or Edit Distance. They work best for those companies that have used them over a long period and understand how the metric measurements relate to past historical project experience. These are better and more reliable than most automated metrics, but they are much more expensive to deploy and their link to setting correct compensation levels is not clear. There is much room for misinterpretation and, like BLEU, they too can be dangerous in the hands of those with little experience or expertise in their extended use. It is important that whatever metric is used should be trusted and easily understood by editors, to build efficient and effective production systems.
While all these measurements of quality provide valuable information, I think the only metric that should matter is productivity. It is useful to use MT only if the translation production process is more efficient and more productive with MT than without it. This means that the same work is done faster and at a lower cost. This can be stated very simply in terms of average productivity as follows (I chose a number that divides easily by 8, to stay with round numbers):

Translator productivity before MT: 2,400 words/day, or 300 words/hour 

Any MT system that cannot produce translated output and related productivity that beats this throughput is of negative value to your production efficiency, and you should stay with your old pre-MT process or find a better MT system. MT systems must beat this level of productivity to be economically useful to the production goals. (BTW most Moses and instant MT attempts do not meet this requirement.)

Thus it is important to measure the productivity impact of the specific MT system that you are dealing with and measure the productivity implications of the very specific MT output your editors will be dealing with. To ensure that post-editors feel that compensation rates have been fairly set it is wise to use trusted and competent translators in the rate-setting process. It would also be good to be able to do this reliably with a sample or have a reconciliation process after the whole job is done to ensure that the rate was fair. The simplest way to do this could be as follows:
1. Identify a “trusted” translator and have this person do 2 hours of PEMT work that is directly related to the material that will be post-edited.
2. Measure the productivity carefully both before and after the use of MT.
3. Establish the PEMT rates based on this productivity rate, and err on the side of overpaying editors initially to ensure that they are motivated.
Good MT systems will produce output that is very much like high fuzzy match TM. The better the system, the higher the average level of fuzzy match. This still means that you will get occasional low matches, so make sure you understand what average means in the statistical sampling sense. Thus, if a system produces output that the trusted translator can edit at a rate of 750 words an hour, we can see that this is 2.5X the productivity rate without MT. Based on this data point, there is justification to reduce the rate paid to 40% of the regular rate, but since this is a small sample it would be wiser to adjust this upwards to a level that will accommodate more variance in the MT output. Thus perhaps the optimal approach would be to set the PEMT rate at 50% of the regular rate in this specific case, based on this trusted measurement. It may also be advisable to offer incentives for the highest productivity to ensure that editors focus only on necessary modification and avoid excessive correction. Other editors should be informed that the rates were set based on actual measured work throughput. And at least in the early days, it would be wise to measure productivity as often and as much as possible on larger data sets. In time, editors will learn to trust these measurements and will remain motivated to work on ongoing projects, assuming the initial measurements are accurate and fair.
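
The rate-setting arithmetic described above can be captured in a few lines. The numbers mirror the example in the text (300 words/hour baseline, 750 words/hour post-editing MT); the 10% upward padding is a judgment call to absorb variance in the MT output, not a formula from any standard.

```python
def pemt_rate_fraction(baseline_wph, pemt_wph, padding=0.10):
    """Suggest a PEMT rate as a fraction of the full word rate,
    from measured throughput before and after MT. The padding
    shifts the split in the editor's favor to cover variance
    across a larger job. Returns (rate_fraction, productivity_gain)."""
    productivity_gain = pemt_wph / baseline_wph        # e.g. 2.5x
    raw_fraction = baseline_wph / pemt_wph             # e.g. 0.40
    return min(1.0, raw_fraction + padding), productivity_gain

# The worked example from the text:
fraction, gain = pemt_rate_fraction(baseline_wph=300, pemt_wph=750)
# gain is 2.5x; the raw 40% rate is padded up to 50% of the full rate
```

Note that if the MT system delivers no speedup at all, the function correctly returns the full rate, which is another way of saying such a system adds no production value.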

It is of course possible to do a larger sample or test where more translators are used and a longer test period is measured, e.g. 3 translators working for 8 hours. However, based on experiential evidence across multiple customers at Asia Online, we have seen that a 2-hour test with a trusted translator provides a very accurate estimate of the productivity and can help establish a rate that is considered fair and reasonable for the work involved. I am sure there are other opinions on this and it would be interesting to hear them, but I would opt for an approach where trusted partners and actual direct production data are the key drivers in setting rates, over metrics that may or may not be properly implemented. 

I continue to see more and more examples of good MT systems that produce output that clearly leverages production efficiency, and I hope that we will see more examples of good compensation practices in which translators and editors find that they actually make more money, as I pointed out in my last post, than they would in typical TEP scenarios using just TM. 

Whatever you may think of this approach, the issue of post-editing MT needs to be linked to an accurate assessment of the quality of the MT output and the resultant productivity benefit. It is in everybody’s interest to do this accurately, fairly, and in a way that builds trust and helps drive translation into new kinds of areas. This quality assessment and productivity measurement process may be an area that translators can take a lead in and help to establish useful procedures and measurement methodology that the industry could adopt. 

I have written previously on post-editing compensation and there are several links to other research material and opinions on this issue in that posting.  I would recommend it to anybody interested in the subject.

Friday, July 13, 2012

The Relationship Between Productivity and Effective Use of Translation Technology

As machine translation continues to gain momentum, we are seeing many more instances of LSPs and some enterprise users exploring the potential use of the technology in core production work. MT today is still unfortunately quite complex and there are few universally accurate truisms or rules of thumb that replace the need for at least some minimal amount of expertise and understanding. Expertise and knowledge are key requirements for those who wish to use MT successfully in a translation production context. However, there are still many misconceptions about the effective use of the technology.

Some of the most common misconceptions include:

All MT systems are about the same. Not really, some MT systems that have undergone expert-managed customization and domain-focused training can produce dramatically better results than generic systems. This also means that you are not likely to get a very good understanding of the capabilities of an MT technology without doing a real pilot project that involves customization. Yet I often see people trying to make judgments about which MT system to use based on running a few paragraphs through a generic engine.

All MT applications are the same. Some MT applications that are focused on localization (documentation, core website content) need much higher quality to be useful, than other applications like making customer support forum content multilingual where good gisting quality is adequate. Translator productivity applications are the most difficult to do successfully and one where naïve users (e.g. your average LSP with Moses) are likely to fail.

Post-editors should be paid the same lower rate for all MT post-editing work. CSA stated in 2010 that this magic rate is 61% of the full rate. However, setting a fixed rate without understanding the reality of the MT output quality can often be unfair to editors and cause resentment that can undermine any attempt to build production leverage. Compensation needs to be linked to productivity and the effort expended to “fix MT,” and the most successful users are respectful and careful to do this well to ensure a stable and motivated workforce.
MT is responsible for falling translation rates

This is a digression, but I wanted to highlight some interesting analysis and opinion by Luigi Muzii on why this is NOT true, in this article and also in a post called “Changes Ahead” that was characterized as follows by Rob Vandenberg.

I will address the first three issues in this post and provide some more context to clarify these misconceptions.
MT systems can vary and produce very different type and quality of output depending on all of the following factors:
  • Methodology used (RbMT, SMT, Hybrid which can also mean many different things)
  • The skill and knowledge of the practitioners working with the technology and building the systems. MT is still quite complex and needs skills that take time to develop and refine, to get output quality that surpasses the quality produced by public MT engines from Google and Bing.
  • Increasingly the quality and the volume of the “training data” are important determinants of the quality of the system as SMT approaches increasingly lead the way.
  • The language pair: It is much easier to get “good” systems with FIGS than with CJK relative to English. Languages like Hungarian, Finnish, and Turkish are just tough in general (relative to English).
  • The ability of the system to respond to small amounts of strategic corrective feedback. This is critical to building real business leverage. While some systems may improve slightly when many millions of new words are added to train them, very few can respond favorably to small volumes of additional data. MT system development is evolutionary and one should enter into development with this mindset.

MT can be useful in many different scenarios, but it should be understood that the expected usable quality for different uses is very different. We live in a world today where MT translates billions of words each day for internet users who are trying to understand content of interest on the web or communicate with others across the world. There are also many corporate and business applications where the sheer volume and volatility of the information could not justify anything but MT, e.g. technical knowledge base content, customer forum discussions, or hotel reviews, where “good enough” is good enough. Much of this information has little or no value over time, e.g. configuration guidance on DOS 5.0/Windows XP or a 3-year-old hotel review, but could have great value and enhance global customer satisfaction for a brief window in time, even in an imperfect linguistic-quality form. MT use for traditional LSP applications is the most demanding of all MT applications and requires the deepest knowledge, expertise, and skill. MT in this context can only add value if the output produced is of sufficient quality that it actually enhances the productivity of translators and makes the business translation process more cost-efficient. It is not a replacement for human translation and thus needs to be at a quality level where humans acknowledge its utility and actually want to use it.

Much of the early dissatisfaction with MT in the professional translation world is a result of asking translators to edit poor quality output for much lower rates, set in a relatively arbitrary fashion that did not accurately reflect the level of effort involved. The task of post-editing MT to publication-quality levels needs an understanding of the average level of effort required, and very few in the professional translation world have figured this out.

Omnilingua is an example of how to do it right, with a very clear and trusted quality measurement profile of the MT output which then also helps to define productivity and fair compensation for editors. This task of accurate measurement of MT output quality and then a determination of the correct compensation structure is key to successful MT deployment and is quite possible in high-trust scenarios but much harder to implement when trust is less prevalent.

In the following largely hypothetical example (which is based on a generalization of actual experiences) I have summarized the possibilities to show how MT system output quality and productivity are related. I have also taken the additional step of showing how lower word rates can often make sense with “good” MT systems, and hopefully demonstrate that it is in the interests of both LSPs and translator/post-editors to figure out the key quality/productivity metrics accurately. Once the productivity is clearly established lower rates make sense because the throughput is trusted. Both parties need to be willing to make adjustments when the numbers don’t properly balance out.
In this hypothetical comparison, we will assume that there are 3 MT systems all focused on the same production task. These systems are of differing quality and their related productivity impact is characterized below. The objective in every case is to produce final output that cannot be distinguished from a pure human TEP production effort:
  1. Good Instant MT/Moses System – A large majority of these systems do not produce output better than the free generic engines on the internet. I am assuming that perhaps 5% to 10% of these systems can reach a state where they can outperform Google. TAUS has highlighted several case studies where this is documented and where it is clear this is difficult. Typically, productivity for a very successful effort will be around 3,000 words per day or slightly higher.
  2. Average Expert System – A product of a reasonable amount of data and expertise and experience that enables productivity over 5,000 words/day to as much as 7,000 words per day for editors who work on correcting the MT output.
  3. Excellent Expert System – This is possible with data-rich systems developed by experts that have gone through several iterations of improvement and corrective feedback. I have seen systems that enable 9,000 words/day to as much as 12,000 words/day throughput. Some exceptional systems are even higher!
In the following table, these 3 systems are profiled to compare the overall time and cost implications for a 500,000-word project. This clearly shows (fabricated though it is) that higher quality MT systems will provide the best overall production benefits. This also implies that it is worth investing in developing this better quality up front, rather than opting for a low initial cost option that provides less benefit.
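
The arithmetic behind such a comparison can be sketched as follows. The three daily throughputs are taken from the system profiles above; the baseline throughput (2,400 words/day) and the $0.20 full word rate are assumptions for illustration only, as is the choice to scale the editing rate by the productivity ratio.

```python
PROJECT_WORDS = 500_000
BASELINE_WPD = 2_400   # words/day without MT (assumed)
FULL_RATE = 0.20       # assumed USD per word at the full TEP rate

# Representative daily throughputs for the three system profiles
systems = {
    "Good Instant MT/Moses": 3_000,
    "Average Expert System": 6_000,
    "Excellent Expert System": 10_000,
}

results = {}
for name, wpd in systems.items():
    days = PROJECT_WORDS / wpd  # single-editor working days
    # Editing cost: full rate scaled by baseline/throughput, so a
    # faster system means a lower, productivity-justified word rate.
    cost = PROJECT_WORDS * FULL_RATE * (BASELINE_WPD / wpd)
    results[name] = (round(days), round(cost))

for name, (days, cost) in results.items():
    print(f"{name}: {days} days, ${cost:,}")
```

Even with these invented numbers, the shape of the conclusion holds: the better system finishes the 500,000-word project in a fraction of the time and at a fraction of the cost, which is the argument for investing in quality up front.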


Thursday, June 14, 2012

Thoughts on an MT technology presentation at ALC New Orleans, May 2012

This is a guest post by Huiping Iler, whom I had the pleasure of meeting in New Orleans last month, where she made a very interesting presentation on how to increase the intrinsic value of an LSP firm. She runs a language services firm that is one of the growing fold of LSPs who have direct experience with post-editing MT output, and she sees an increasing role for MT in the future of her business. I should add that while her feedback on my presentation here is quite flattering, there were also others who commented through the regular feedback process that my slides were too dense and information-filled, and one who even felt that my presentation was a “thinly disguised sales pitch”. (I assure you Sir, it was not.) It is difficult to find a balance that makes sense to everybody and all feedback is valuable. The pictures below come from the wonderful photographic eye of Rina Ne’eman, taken during her visit to New Orleans.

It was a real delight listening to Kirti Vashee from Asia Online presenting on the ROI of Machine Translation – Scoping and Measuring MT. It took place at the most recent annual Association of Language Companies conference in New Orleans, held May 16-20, 2012.

Kirti pointed out that:

  • Much of today’s business content is dynamic and continuously flowing.
  • The need for real-time international-language content cannot be met by human translators alone due to cost and time constraints.
  • Machine translation (MT), especially statistical machine translation, is gaining traction among enterprises that have large amounts of data to translate.
  • IT companies and travel review sites are examples of early adopters of statistical MT.
  • Compared to general or free MT tools such as Google, an enterprise MT tool and service like Asia Online is highly customizable and adaptable to unique customer needs.
  • It gives clients much more control over terminology, non-translatable terms, vocabulary choice and writing style. As a result, it produces much higher accuracy and translation quality, especially in highly specialized and focused domains.
This echoes the feedback I heard from one of wintranslation’s enterprise clients, which has been using statistical MT for the last few years. Our translation team has been tasked with post-editing and with providing corrective feedback to the client’s MT engineering team for continuous improvement.

According to translators who have mastered the art of editing machine translation, post-editing raw output requires a different skill set than the traditional editing of human translations.

For starters, text selected for MT often tends to be “low visibility.” Kirti gave an example: on a travel review site, the four- and five-star hotel reviews are human translated, while the lower-star hotel reviews are machine translated with some or no human post-editing.

Other low-visibility text examples include car service manuals that not everybody reads, or web-based support content. High-visibility (and typically low-volume) text, such as marketing communications, rarely if ever gets selected for machine translation.

When translating low-visibility text, particularly in technical communication, it is more important for the text to be technically accurate than stylish. It is a case where a translation that sounds awkward but is technically correct IS acceptable, as long as translation efficiency is maximized without hurting accuracy.
But translators new to post-editing may be tempted to edit the text not only for accuracy but also for flow and style. This leads them to spend more time than necessary on the text, and they are also more likely to complain about the quality of the MT output. After all, style and flow are not the strengths of MT; speed and consistency are. It is important to agree with the human post-editors on what is good enough (i.e. technical accuracy only, not style). Improved productivity and lower cost are very important to clients using MT. The best post-editors understand this and can deliver a high number of edited words per hour that meet quality standards.

One of wintranslation’s MT post-editors commented, “When I have to review a translation, either done by a human or by a machine, I do not try to make it sound as if I wrote it. I mostly correct errors, terminology inconsistencies, awkward style, problems with conveying the intended meaning and issues that really bother me. If we are able to have that mindset, then it will be less cumbersome to review machine-translated text. If we have the tendency to rewrite the translation, then the editing will be time-consuming and cumbersome.” This sums up the ideal attitude a post-editor should have.

Consistency is one of machine translation’s core strengths. When set up properly, non-translatable text, like numbers, acronyms and product names, is reliably consistent throughout the translation. This is an area where MT can outperform human translators.
For example,
Source: Migration information for JKJ 5.x
MT Target: Información sobre migraciones para JKJ 5.x
When a post-editor reviews this text, she/he knows for sure that “JKJ 5.x” is correct and doesn’t have to worry about it being translated as “JKJ 6.x” or “JKJ 5.s.” This is not always the case when reviewing human translations, because the editor always has to double-check the product name, version, etc.

The absence of spelling errors in machine-translated text is a distinct advantage that saves time. But it is good practice to spellcheck the translation before delivery, because post-editors may have introduced typos while inputting corrections.

When a post-editor finds an error pattern, communicating it to the client will help train the engine and improve results in the future. For example, in one MT text, the term “wireless” was always translated into Spanish as “productos inalámbricos,” which in most cases is wrong. The post-editor quickly identifies and fixes the error. Because this error happens often enough to be a pattern, it is submitted to the client for a dictionary update. This and other types of pattern-based corrective work can greatly enhance the overall production efficiency of post-editing work.
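The idea of spotting recurring errors rather than fixing each one in isolation can be sketched very simply. The snippet below is a hypothetical illustration, not any tool wintranslation actually uses; the segments and the error pattern are invented around the “wireless” example above:

```python
from collections import Counter

# Hypothetical MT output segments; in practice these would come
# from the post-editing environment.
mt_segments = [
    "Compre productos inalámbricos aquí",
    "Los productos inalámbricos requieren configuración",
    "Active la red inalámbrica",
]

# Known problematic renderings worth watching for (illustrative only).
watch_list = ["productos inalámbricos"]

# Count how often each watched phrase appears across the batch.
counts = Counter()
for segment in mt_segments:
    for phrase in watch_list:
        if phrase in segment:
            counts[phrase] += 1

# A phrase that recurs is a pattern worth reporting to the client
# for a dictionary update, instead of correcting it one segment
# at a time.
recurring = {phrase: n for phrase, n in counts.items() if n >= 2}
print(recurring)
```

Once a phrase crosses the recurrence threshold, the fix moves upstream into the engine's dictionary, and every future segment benefits rather than just the one in front of the editor.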

When words are not in the right order in the translated text, it is best if the post-editor just drags them to the right place; that way he/she doesn’t have to retype them and delete them from the wrong place. This saves time.

Source: Where to buy ABC Anti-Theft Service related products.
MT Target: Dónde comprar ABC contra robo servicio productos relacionados.
In this case, “productos relacionados” needs to be moved toward the beginning of the sentence, so the post-editor just highlights the two words and drags them to their right place. She also needs to move the word “servicio” and make a few quick fixes.
Final Target: Dónde comprar productos relacionados con el servicio ABC contra robo.
When the upfront linguistic setup work has been inadequate, or there is a lack of ongoing communication between the post-editing team and the MT engineers, it produces a lot of frustration for MT editors and creates unnecessary delays.

For example, a Brazilian Portuguese translator noticed the MT software was often using European Portuguese vocabulary even though the text was intended for the Brazilian market (there are significant spelling differences between Brazilian Portuguese and European Portuguese).

For instance, acção (should be ação), gestores de projectos (should be gerenciadores de projetos), etc.

She asked why the machine translation software “was not told” about that. This ability to provide feedback to the MT system is a key ingredient in getting better results and raising editor productivity and satisfaction. The best MT results come from close collaboration between the engineering team and the linguistic post-editing team.

The translator mentioned above also found inconsistencies in the translation of key terms such as product names. “Green Power Management” was translated as Energia verde Management, Verdes gerenciamento de energia, and Verdes poder Management. Some editing of the translation memory to reduce such inconsistency would speed up the post-editing process a lot.

In terms of productivity gains, results vary from language to language. In Spanish and Portuguese, for example, where MT has made more inroads, one can expect as much as a 50% increase in the number of words translated per hour, assuming the MT engine has been properly set up and trained. But gains are harder to come by in Asian languages.
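To make the 50% figure concrete, here is a quick back-of-the-envelope calculation. The 400 words/hour baseline is my own assumption for a translator working from scratch, not a number from the presentation:

```python
# Hypothetical baseline throughput for translation from scratch,
# in words per hour.
baseline_wph = 400

# A well-trained MT engine with post-editing can raise throughput
# by up to 50% in languages like Spanish or Portuguese.
mt_gain = 0.50
post_edit_wph = baseline_wph * (1 + mt_gain)

# Over an 8-hour day, the difference compounds into a meaningful
# increase in daily output.
extra_words_per_day = (post_edit_wph - baseline_wph) * 8

print(post_edit_wph)         # 600.0 words/hour
print(extra_words_per_day)   # 1600.0 extra words/day
```

At these assumed rates, the same translator delivers 1,600 additional words every working day, which is where the cost and turnaround advantages clients care about come from.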

There is a real and imminent opportunity for translation companies to offer real-time translation services for select types of content that are out of reach for human translation due to time and cost. The linguistic training of statistical translation engines and the development of post-MT editors are key pieces in realizing that opportunity.
On a side note, I cannot help but notice that Mr. Vashee’s passion for and sharing of MT expertise is contagious. He is one of the finest craftsmen in the sales and marketing field of technology and translation: an empathetic communicator, he is always able to see things through his clients’ eyes; when in the company of translation company owners, he presents possibilities to use a tool like Asia Online to generate new revenue and create differentiation (ask which translation company owner doesn’t like to hear that); he satisfies the data-driven analytical types with numbers and return on investment measured in quality metrics and dollars; he has an amazing ability to stay insightful and relevant in a conversation while sticking to his value proposition; he is an outstanding marketer and an entrepreneur’s dream pitch man.

About Huiping Iler:
Huiping Iler is the president of wintranslation™, a Canadian-based translation company specializing in information technology and financial services. wintranslation has been coordinating post-editing of machine-translated text for the last several years.