Monday, December 1, 2014

Machine Translation Humor Update

It has been some time since I first wrote a blog post about MT humor, primarily because I have not been able to find anything worth mentioning until now; apart from some really lame examples of MT mistranslations, I have not seen much to laugh heartily at. It seems a group of people on the web have discovered the humorous possibilities of MT by translating song lyrics, which might be difficult even for good human translators. (It really seems strange to be saying “human translator”.)

I should point out that in all these recent cases one does have to work at degrading the translation quality by running the same text through a whole sequence of preferably not closely related languages.

It has often surprised me that some in the MT industry use “back translation” as a way to check MT quality, since from my vantage point it is an exercise that can only prove the obvious. MT back translation should, by definition, result in deterioration, since MT will almost always be something less than a perfect translation. This point seems to escape many who advocate this method of evaluation, so let me clarify with some mathematics, as math is one of the few conceptual frameworks available to us where proof is absolute, or pretty damned certain at least.

If one has a perfect MT system then the Source and Target segments should be very close if not exactly the same. So mathematically we could state this as:

Source (1) x Target (1) = 1

since in this case we know our MT system is perfect ;-)

But in real life, where humans play on the internet and DIY MT systems are used to determine what MT can produce, the results are much less likely to equal 1, the perfect score shown in the example above.

So let's say you and I do a somewhat serious evaluation of the output of various MT systems (each language direction should be considered a separate system) by running 5,000 sentences through various MT conversions and scoring each MT translation (conversion) as a percentage “correct” in terms of linguistic accuracy and precision, and we find that the following table is true for our samples.

Language Combination      Percentage Correct
English to Spanish        0.80 (80%)
Spanish to English        0.85 (85%)
English to German         0.70 (70%)
German to English         0.75 (75%)

So if we took 1,000 new sentences and translated them with MT, we should expect the percentages shown above to be “correct” (whatever that means). But if we now chain the results, making the output of one system the input of the other, we will find that the results are different and get continually smaller, e.g.

EN > ES > EN = .8 x .85 = 0.68 or 68% correct

EN > DE > EN = .7 x .75 = 0.525 or 52.5% correct

So with MT we should expect every back test to produce a lower, degraded result, since we are multiplying the effects of two different systems. Computers don’t really speak the language, so one cannot assume that they have equal knowledge going each way, and if you feed a bad source from system A into system B you should expect a bad target, as computers, like some people, are very literal.

So now if we take our example and run it through multiple iterations we should see a very definite degradation of the output as we can see below.

EN > ES > EN(from MT) > DE > EN = .8 x .85 x .7 x .75 = 0.357 or 35.7%
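The compounding effect above is easy to sketch in a few lines of code. The accuracy figures here are the hypothetical ones from the table, not measurements of any real system:

```python
# Hypothetical per-direction accuracy scores from the table above.
accuracy = {
    ("en", "es"): 0.80, ("es", "en"): 0.85,
    ("en", "de"): 0.70, ("de", "en"): 0.75,
}

def chained_accuracy(path):
    """Multiply per-direction scores along a chain of MT conversions."""
    score = 1.0
    for src, tgt in zip(path, path[1:]):
        score *= accuracy[(src, tgt)]
    return score

print(chained_accuracy(["en", "es", "en"]))              # approx. 0.68
print(chained_accuracy(["en", "es", "en", "de", "en"]))  # approx. 0.357
```

Add a few more hops and the score falls off a cliff, which is exactly what the lyric-mangling videos below rely on.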

So if you are trying to make MT look silly you have to run it through multiple iterations to get silly results. It would help further if you chose language combinations like English to Japanese to Hindi to Arabic, as this would cause more rapid degradation of the original English source. Try it and share your results in the comments.

So here we have a very nicely done example. You should realize it takes great skill for the lead vocalist to mouth the MT words as if they were real lyrics while still maintaining melodic and rhythmic integrity, so be generous in your appreciation of their efforts.

This video shows very effectively how quickly using multiple languages can degrade the original source, as you can see when they go to 64 languages. Somehow words get lost and turn really strange.

And here is one from a vlogger who really enjoys the effect of multiple rounds of MT on a song's lyrics. She is a good singer and is able to maintain the basic melody without breaking into a smile, so I found it quite enjoyable, and I would not be surprised if some believed these were indeed the lyrics of the song. She has a whole collection of recordings, with what I consider high production values for this kind of stuff.

And she produces wonderful results on this Disney classic, "When you paint the colors of your air can", which used to be a favorite of my daughter's. I actually think the song from The Little Mermaid is much funnier, and it was done by running it through only four iterations in Google Translate, but since I could not embed it here directly I have given the link.

Here is another person who has decided that 14 iterations is enough to make this, or any pop song, generally funny. I'm not sure how funny this really is since I don't know the original song.

So it appears that we are going to see a whole class of songs re-interpreted by Google Translate, and it is possible to get millions of views as MKR has, and probably even make a living doing this. So here you see one more job created by MT.

So anyway, if somebody suggests doing a back test with MT, you should know the cards are clearly stacked against the MT monster and the results are pretty close to meaningless. A human assessment of a targeted sample set of sentences is a much better way to understand your MT engine.

Hope you all had a good Thanksgiving vacation and are not feeling compelled to shop too fervently now. 

In this time of strife and distrust in Ferguson it is good to see spontaneous goodwill and instant musical camaraderie between these amateur musicians. 


My previous posts on MT humor for those who care are:
Machine and Human Translation Based Humor

Translation Humor & Mocking Machine Translation

Thursday, September 11, 2014

The Translation Market – Is it Really Understood?

I saw some interesting comments on a blog post by Kevin Lossner that I thought would be good to share with the community that reads this blog, as they raised some cogent points. The comments basically describe a larger, more complex translation market than many of us might believe exists based on the market research available. I do not claim to have real insight into this larger translation market, but I am definitely aware that the largest translation initiatives in the world are generally overlooked by traditional market research, e.g. the many branches of the US government (DoD, NSA, CIA, FBI, DIA, State and even Commerce), the EU, and I expect many of the clandestine “intelligence” operations around the world, especially amongst the G20 governments.

I would also bet that the really big, almost nation-like, Fortune 100 corporates also have captive and hidden translation operations, buried and invisible within PR, Marketing and Investor Relations somewhere, to translate the stuff that really matters or is really secret. (I would not be surprised if the people in these departments did not even know whether a localization team exists elsewhere in the corporation.) If it really matters, why would you ask Lionbridge or SDL (or any other large LSP) to translate it? That is definitely something to ponder. Surely it would be more likely to go to internal subject matter experts, or to trusted and elite boutique services that actually understand the subject matter of the material and can protect the information with the same zeal and protective assurances as those who create it. Imagine you are an oil company called ABCP and want to make sure that you look less culpable for a major accident caused by management insistence on moving ahead with a risky drilling project. I think the odds are high that the translators chosen to translate critical memos and communications and "put the right spin on it" before it is shown to regulators are going to be different from the ones that work for Lionbridge, since it might save a few billion in damages that will have to be paid.

I also generally expect that specialists, i.e. translators with demonstrated subject domain expertise, will have a much brighter future than those who will translate anything that is within arm's reach. Specialization means building subject matter expertise, which I think will matter more and more, and I for one would stay away from LSPs who do not specialize or have long-term demonstrated competence in a few select domains.

I find this discussion interesting also because I think that repetitive, low-value, short shelf-life, bulk (high volume) content is eventually going the way of PEMT or even raw MT, but there is a huge world of high value content that is unlikely ever to head that way until we reach the Star Trek Universal Translator levels of quality, which are not expected to be available till the 24th century. I actually think that IPO and many SEC filing documents (10K, Registration documents) and user manuals of any kind including nuclear machinery and medical equipment are fair game for competent and very specialized PEMT initiatives, but I would not use MT for anything that requires linguistic finesse or reading between the lines e.g. wedding vows, great literature, letters to the board/stockholders or poetry. Even in those areas where you have high volume and lots of repetitive and highly similar content, MT can work well only when real expertise is applied, and there is a real and active collaboration with translators and linguists who all want to produce an engine that will reduce future efforts.
These are some excerpted and unedited (by me) comments made by Kevin Hendzel at the blog post referenced above, written in a more visceral style than the more careful and detailed elaboration on his own blog. I don’t agree with everything Kevin says about MT, but I think his views are generally based on deeper observations than “MT is crap,” and I can appreciate that we have different views on this issue. (Excerpts printed here with his and Kevin Lossner’s permission.)
From my own viewpoint, it does seem that the localization industry/bulk translation market has long suffered from a “we’re the only game in town” problem. There’s an amusing story about SeaWorld (an aquatic theme park in the US) that goes a long way toward illustrating this exact echo-chamber problem that the localization industry and pure bulk-market providers seem to be perpetually trapped in. Occasionally you’ll see protesters outside SeaWorld holding up signs that declare: “It’s not SeaWorld, it’s PoolWorld.” The corporate entity SeaWorld telling tourists that these tiny, familiar pools constitute “the sea” does not make them the sea. The sea is immensely, incalculably larger and more complex.
The same is true of the translation market. Referring to the tiny pool you are familiar with (low-end bulk localization and translation) as “the sea” (the whole rest of the market) tends to distort one’s sense of the enormity of the sea, the complexity of sea life, not to mention how damaging it can be to trap sea life in unfamiliar and hostile surroundings. There may also be value in dispensing with a couple of misconceptions.
Myth #1: There are two market segments (premium and bulk) that are easily delineated and the premium market is dramatically smaller than the bulk market.
Reality: There’s a very long continuum that encompasses all market segments, with raw bulk free MT at one end and $25,000 tag line translations of 3 words at the other.
It’s far more accurate to characterize the continuum in terms of gradual and consistent gradations of shade rather than in terms of clear differentiating boundary lines. The “premium vs. bulk” dichotomy is a form of shorthand only. That also applies to price and quality, since the correlation between the two is not always linear. The premium sector includes commercial segments that are fiercely guarded and (often) shrouded in secrecy to prevent additional competition. Many of these are boutique translator-owned companies that deliberately fly under the radar of “research” companies like Nonsense Advisory (itself shamelessly in bed with the large companies it purports to “cover,” and stubbornly resistant to acknowledging its own 50-kilometer-wide blind spots) to avoid alerting other companies to their profitable businesses. There is an astonishing amount of money in these premium sectors. Pure translation alone in the high-end expert pharmaceutical, medical device and IP litigation as well as the premium legal, financial and marketing sectors across all languages and in all countries dwarfs the entire global IT localization industry by about two to three orders of magnitude. There are some years where one single IP pharmaceutical litigation case in Japanese-English alone will run into the $10 - $20 million range – about 10 times the “savings” that TAUS preaches are available to localization companies and their end clients that embrace their “translation as a utility” model in localization. That’s one single translation project in one single language pair. And the net profit margins are considerably higher.
Myth 2: Price is the key differentiator between the premium and bulk market.
Reality: While it’s true that the premium market tends to operate at higher prices, the market really operates on a completely different value proposition than does the bulk market. That proposition is that the cost of failure is dramatically higher than the cost of performance.
So in the premium market, the cost of translation errors – liability, regulatory failure, loss of life, damaging publicity or significant loss of prestige – far outweighs the cost of “getting it right.” Paying whatever cost premium for translation that is necessary to PREVENT the cost of failure is viewed as a wise investment.
In the bulk market, those two are reversed. The cost of failure is low, so there is no corresponding push to invest in getting it right. This can be tested by comparison to the dynamics of other industries, too. The cost of failure for a Walmart product is very low – the consumer almost expects the damn thing to break. It’s the same with cheap online localization and “just good enough to understand it” bulk translation. But a fractured fuel pump on a Boeing aircraft in flight has an enormous cost of failure, so several layers of review, ongoing maintenance and testing as well as regulatory enforcement are built around it in an effort to ensure that does not happen, a process which drives up fuel pump manufacturing costs dramatically.  When the failure of an IPO or the collapse of a deal due to a translation-related regulatory failure or when nuclear weapons are improperly dismantled or lost to unknown people – yeah, that’s a very, very high cost of failure. Wallets open up to pay a premium for translation in these cases. Of course, translators who want to play in this market must be Boeing quality, though, not Walmart. (If any serious person considers this view “elitist,” I will contemplate the validity of that charge when that person agrees to fly on Walmart-manufactured jet aircraft that fly without regulatory approval or oversight.) :)
Myth 3: The largest translation company in the world is Lionbridge, crowned once again by Nonsense Advisory.
Reality: It isn’t. It may be the largest localization company that openly shares public financial data in an easy-to-read format and hence is trivially “researched,” but it omits huge operations that just don’t advertise their existence in quite the same way. For example, there are Global Linguist Solutions and L-3 Inc. just in the US alone. Never heard of either, right? GLS won the original US Army contract to support Iraq ops worth about $4.64 billion over five years after L-3 had the original one pre-Iraq. Perhaps more to the point in terms of current size, the U.S. Army recently awarded a huge US Army contract referred to as DLITE valued at $9.7 billion to 5 companies including those two. Those are JUST the U.S. Army contracts. The open, unclassified ones. This does not include all the other U.S. federal open spending on language services for all the other agencies that these same companies along with DynCorp and McNeil and Booz Allen and a dozen others that have never been to an ATA or any other translation conference compete for and win. It also omits all U.S. classified and confidential contracts. It omits all other governments’ outsourced classified and unclassified language spending. It’s like omitting the Indian Ocean and half the Pacific from your "research."
It’s a vast, complex, cloudy and immensely varied translation sea out there.
I know that those who have dealings with the US government around translation technology at least have an inkling that this is true. It is sort of like the discussions on the Deep Web which contains much of the highest value information available in the world that is not indexed or accessible by the search engines that we all use. This is the part that is private, gated and contains the really important high value content that can only be seen by people who are properly authenticated and authorized. I can’t say for certain that the proportions in the graphic below are true for the translation market but based on what I directly know about the data volumes processed in the clandestine communities it certainly would not be impossible.
[Image: Deep Web iceberg graphic]
Anyway, I thought this subject was interesting and worth more exposure. Also, it was easy to do since Kevin Hendzel wrote the bulk of this post. :-)

P.S. I thought it was worth adding this postscript here, since Luigi Muzii has also commented extensively on this subject on his blog, so I add his Twitter comment to the main body of this post.

From @ilbarbaro
My comments to @kvashee latest debated post can be found in, and

Friday, July 25, 2014

Understanding The Drivers of Success with the Business Use of Machine Translation

We have reached a phase where there is a relatively high level of acceptance of the idea that machine translation can deliver value in professional translation settings. But as we all know, the idea and the reality can often be far apart. It would be more accurate to say this acceptance of the idea that MT can be valuable is limited to a select few among large enterprise users and LSPs (the TAUS community), and has yet to reach the broad translator community, who continue to point out fundamental deficiencies in the technology or share negative experiences with MT. So while we see growth in the number of attempts to use MT, as it has become mechanically easier to do, there is also more evidence that many MT initiatives fail to achieve sustainable efficiencies in terms of real translation production value.

In a typical TEP (Translate-Edit-Proof) business translation scenario, a "good" MT system will provide three things to be considered successful:

1) Faster completion of all future translation projects in the same domain
2) Lower cost/word than doing it without the MT system
3) Better consistency on terminology especially for higher volume projects where many translators need to be involved

All of this should happen with a final translation delivered to the customer that is indistinguishable in terms of quality from a traditional approach where MT is not used at all.

It is useful to take a look at what factors underlie success and failure in the business use setting, and thus I present my (somewhat biased) opinions on this as a long-time observer of this technology (largely from a vendor perspective). I think that to a great extent we can already conclude that MT is very useful to the casual internet user, and we see that millions use it on a regular basis to get the gist of multilingual content they run into while traveling across websites and social platforms. (e.g. I use it regularly in Facebook.)

What are the primary causes of failure with MT deployments in business translation settings?

Incompetence with the technology: The most common reason I see for failed deployments is the key users' lack of understanding of how the technology works. Do-it-yourself (DIY) tools that promise that all you need to do is upload some data and press play are plentiful, and often promise instant success. But the upload-and-pray approach does not often provide any real satisfaction or business advantage. Unfortunately, the state of the technology is such that some expertise and knowledge are required. The translators and post-editors who have to work with the output of these lazy Moses efforts are expected to clean up and somehow fix this incompetence, usually at lower wage rates. And thus resentment grows, and many are speaking up frequently in blogs and professional forums about bad MT experiences. Those who have positive MT experiences rarely speak up in these forums, since the work is not so different from regular TM-based translation work, and MT is often regarded as just another background tool that helps to get a project done faster and more consistently. MT output that does not provide cost and turnaround advantages for translation work cannot be considered useful for any professional use. Thus, a minimum requirement for using MT in professional settings is that it should enhance the production process.


Lowering cost is the ONLY motivation: The most naïve agencies simply assume that using MT, however incompetently, is a way to reduce the cost of getting a translation project done, or more accurately, a way to justify paying translators less. Thus the post-editors are often in a situation where they have to clean up low quality MT output for very low wages. Given that we live in a world where the customers who pay for professional translation are asking for more efficient translation production, i.e. faster and cheaper, agencies are being forced to explore how to do this, but this exploration needs to happen from a larger vision of the business. As Brian Solis points out, using technology without collaboration and vision is unlikely to succeed (emphasis mine).
"That's the irony about digital transformation, it doesn't work when in of itself technology is the solution. Technology has to be an enabler and that enabler needs to be aligned with a bigger mission. We already found that companies that lead digital transformation from a more human center actually bring people together in the organization faster and with greater results," Solis says. “When technology is heralded above all else, there becomes an even greater disconnect between employees (translators)  and the challenges that their business is trying to solve.”
What many LSPs fail to understand is that their customers are asking for ongoing efficiencies and new production models to handle the new kinds of translation challenges they face in their businesses. They are not just asking for a lower rate for a single project. Agencies focused on the bigger picture are asking questions like how MT can enable them to achieve new things, and what's different about their customers' needs today versus yesterday. With the right MT strategy in place, technology becomes an enabler, not the answer, and lets agencies build strong long-term relationships with customers who could not get the same price/performance with another agency that does not understand how to leverage technology for these new translation challenges. Agencies must evolve and reimagine their internal process, structure and culture to match this evolution in customer behavior among their own employees and translators.

No engagement with key stakeholders: Many if not all of the bad MT experiences I hear about have one thing in common: very poor communication between the MT engine developers (LSP), the customer, and the translators and editors. MT is as much about new collaboration models as it is about effective engine development, and collaboration cannot happen without open and transparent communication, especially during the initial learning phase when there is a great deal of uncertainty for all concerned. If this communication process is in place in the early projects, it enables everybody to rise together in efficiency, and gets easier, more streamlined, and more accurately predictable with each successive MT project. The communication issue is quite fundamental, and I have tried to address and explore it in a previous post.

What are the key drivers of successful deployments of MT?


Expert MT Engine Development: The building of MT engines has gotten progressively easier in terms of raw mechanics, but developing MT engines that provide long-term production advantage and a real competitive advantage remains difficult, and requires deep expertise and experience. If, as an LSP, you instantly create an MT engine that any of your competitors could duplicate with little trouble, you have achieved very little. The odds of a developer who has built thousands of engines producing a competitive engine are much higher than those of someone who uploads some data and hopes for the best. Skillful MT engine development is an iterative process in which problems are identified and resolved in very structured development cycles, so that the engine can improve continuously with small amounts of corrective feedback. Knowing which levers to pull and adjust to solve different kinds of problems is critical to developing competition-beating systems. Really good systems that are refined over time will continue to provide price/performance advantages that competitors will find difficult to match over the long term.


Engaged Project Managers and Key Translators: The most valuable feedback to enhance MT system output will come from engaged PMs and translators who see broad error patterns and can help develop corrective strategies for these errors. Executives should always strive to ensure that these key people are empowered, and encourage them to provide feedback in the engine development process. For most PMs today, MT is a new, unknown and unpredictable element in the translation production process. Thus, in initial projects, executives should allow PMs great leeway to develop the critical skills necessary to understand and steer both the translators and the MT engine developers. These new skills are key to success and can help build formidable barriers to competition. While very large amounts of high quality data can sometimes produce excellent MT systems, a scenario where you have a good project manager steering the MT developers and coordinating with translators, to ensure that key elements of an upcoming project are well understood, will almost always produce favorable results, other things being equal, especially in challenging situations like very sparse data or tough language combinations.

Communication and collaboration are key to both short and long-term success. The worst MT experiences often tend to be with those LSPs (often the largest ones) where communication is stilted, disjointed and focused on CYA scenarios rather than getting the job done right. Successful outcomes are highly likely when you combine informed executive sponsorship and expert MT engine development with empowered PMs who communicate openly and frequently with key translators, to ensure that the job characteristics are well understood and that outcomes have a high win-win potential. Even really good MT output can fail when the human factors are not in sync. Remember that some translators really don’t want to do this kind of work, and forcing them to do it is in nobody’s interest.

Fair & Reasonable Compensation for Post-Editors: I have noted that a blog post I wrote on this issue almost 30 months ago still continues to be amongst the most popular posts I have written. This is an important issue that needs to be properly addressed with a basic guiding principle: pay should be related to the specific difficulty of the work and the quality of the output. So low quality output should pay higher per-word rates than very high quality output. This means that you have to understand how good or bad the output is, in as specific and accurate terms as possible, since people’s livelihoods are at stake. This can be gauged in terms of average expected throughput, i.e. words per hour or words per day. You may have to experiment at first, and be prepared to overpay rather than underpay. Make sure that translators are involved in the rate setting process, and that the process is clearly communicated so that it is trusted rather than resisted. Translators should also ask for samples to determine whether a job is worthwhile or not. The worst scenario is where an arbitrary low rate is set without regard for the output quality; in these scenarios, incompetent MT practitioners always tend to go too low on the rates, resulting in discontent all around.
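The throughput-based rate logic above can be sketched in a few lines. The function name, target hourly pay, and throughput figures are all mine, for illustration only, not any industry standard:

```python
def fair_per_word_rate(target_hourly_pay, expected_words_per_hour):
    """Back out a per-word post-editing rate from measured throughput,
    so that harder (lower-quality) MT output pays more per word while
    the editor's effective hourly earnings stay roughly constant."""
    return target_hourly_pay / expected_words_per_hour

# Hypothetical throughputs measured on sample output:
print(fair_per_word_rate(40.0, 800))  # good MT output  -> 0.05 per word
print(fair_per_word_rate(40.0, 400))  # poor MT output  -> 0.10 per word
```

The point is simply that the per-word rate should be derived from measured throughput on actual samples, rather than set arbitrarily in advance.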


Real Collaboration & Trust Between Stakeholders: This may be the most critical requirement of all, as I have seen really excellent MT systems fail when it was missing. Translation is a business that requires lots of interaction between humans with different goals, and if these goals are really out of sync with each other it is not possible to achieve success from multiple perspectives. Thus we often see translators feeling they are being exploited, or agencies feeling they are being squeezed to offer lower rates, because an enterprise customer has whipped together a second-rate MT system with lots of noisy data for them to “post-edit”. When the technology is used (actually misused) in this way, it can only result in a state of disequilibrium that will try to correct itself, or make a lot of noise trying to find balance. This, I think, is the reason why so many translators protest MT and post-editing work. There are simply too many cases of bad MT systems combined with low rates, and thus I have tried to point out how a translator can assess whether a post-editing job is worth doing, from an economic perspective at least.

Perhaps what we are witnessing at this stage of the technology adoption cycle is akin to growing pains: like the clumsy first steps of a baby, or the shyster attempts of some agencies to exploit translators, as some translators have characterized it. Both cases are true, I feel. And so I repeat what I said before about building trusted networks, as this seems to be an essential element for success.

The most successful translators and LSPs all seem to be able to build “high trust professional networks”, and I suspect that this will be the way forward, i.e. collaboration between enterprises, MT developers, LSPs and translators who trust each other. Actually quite simple, but not so common in the professional translation industry.

There seems to be no way to discuss the use of MT in professional settings without raising the ire of at least a few translators, as you can see from some of the comments below. So I thought it might be worth trying to lighten the general mood of these discussions with music. I chose this song carefully, as some might even say the lyrics are quite possibly the result of machine translation, or not so different from what MT produces. As far as I know, they are just one example of the poetic mind of Bob Dylan. If you can explain the lyrics shown below, you are a better interpreter and translator than I am. Musically this is what I would call a great performance and a good vibe. So here you have a rendition of Dylan's My Back Pages on the Empty Pages blog.

Crimson flames tied through my ears
Rollin’ high and mighty traps
Pounced with fire on flaming roads
Using ideas as my maps
“We’ll meet on edges, soon,” said I
Proud ’neath heated brow
Ah, but I was so much older then
I’m younger than that now

Half-wracked prejudice leaped forth
“Rip down all hate,” I screamed
Lies that life is black and white
Spoke from my skull. I dreamed
Romantic facts of musketeers
Foundationed deep, somehow
Ah, but I was so much older then
I’m younger than that now

Girls’ faces formed the forward path
From phony jealousy
To memorizing politics
Of ancient history
Flung down by corpse evangelists
Unthought of, though, somehow
Ah, but I was so much older then
I’m younger than that now

A self-ordained professor’s tongue
Too serious to fool
Spouted out that liberty
Is just equality in school
“Equality,” I spoke the word
As if a wedding vow
Ah, but I was so much older then
I’m younger than that now

In a soldier’s stance, I aimed my hand
At the mongrel dogs who teach
Fearing not that I’d become my enemy
In the instant that I preach
My pathway led by confusion boats
Mutiny from stern to bow
Ah, but I was so much older then
I’m younger than that now

Yes, my guard stood hard when abstract threats
Too noble to neglect
Deceived me into thinking
I had something to protect
Good and bad, I define these terms
Quite clear, no doubt, somehow
Ah, but I was so much older then
I’m younger than that now

Friday, June 20, 2014

The Expanding Translation Market Driven by Expert Based MT

There has been much talk amongst some translators about how MT is a technology that will take away work and ultimately replace them, and thus some translators dig in their heels and resist MT at every step. This antagonistic view rests on a zero-sum assumption: if a computer can perform a translation that they used to do, it inevitably means less work for them in the future. In some cases this may be true; however, the presumption is worth a closer look.

While stories of MT mishaps and mistranslations abound (we all know how easy it is to make MT look bad), it is becoming increasingly apparent to many in the professional translation business that it is important to learn how to use and extend the capabilities of this technology, because it enables new kinds of translation and linguistic engineering projects that would simply be impossible without viable and effective implementations of expert MT technology. Generally, MT is not a wholesale replacement for humans and in my opinion never will be. When properly implemented, it is a productivity enhancer and a way to expand the scope of multilingual information access for global populations that can benefit from this access.

MT is in fact as much a technology for creating new kinds of translation work as it is a tool for getting traditional translation work done faster and more cost-effectively. While MT is unlikely to replace human beings in any application where translation quality and semantic finesse really matter, a growing number of cases show that MT is suitable for enabling many new kinds of business information translation initiatives that may in fact generate whole new kinds of translation-related work for some, if not all, translators. MT is already creating new translation work opportunities in all the following scenarios:

  • With high-volume content that would simply not get translated via traditional human translation for economic and timeliness reasons, so the choice is either use MT or do nothing. MT lowers total costs enough to make content viable to translate that would otherwise never have been translated. This in turn has created new work for human translation professionals in editing the most critical content and helping to raise the average quality of expert MT output.
  • With content whose information value clearly does not justify typical human translation costs.
  • High-value content in social networks that changes every hour and every day, has great value for a brief moment, but limited value a few weeks after the fact.
  • Knowledge content that facilitates and enhances the global spread of critical knowledge.
  • Content created to enhance and accelerate information access for global customers who prefer a self-service model, as in technical support knowledge bases that have new content streaming in daily.
  • Content that does not need to be perfect, just approximately understandable for exploratory or gist purposes.
One point worth clarifying upfront is that much of the interest in MT by global enterprises is driven by their need to face the barrage of product- and service-related comments, discussions and opinions that flow through social media and influence how customers view their products. This social media banter is very influential in driving purchase decisions, often much more so than corporate marketing communications, which are seen as self-serving and self-promoting. Also, as products grow in complexity it becomes important to share more information about power features and extended capabilities. The sheer growth in the volume of information is increasingly clear to most, though there are actually translators out there who think the content tsunami is a myth. EMC and IDC have well-documented studies that show the continuing content explosion.

Global enterprises who wish to engage in commerce with global populations have discovered that the control of marketing has shifted away from corporate marketing departments to consumers who share intimate details of real customer experiences. User-generated content (UGC) such as product experience comments in social media, e.g. blogs, Facebook, YouTube, Twitter and community forums, has become much more important to final business outcomes. This UGC is now influencing customer behavior all over the world and is often referred to as word-of-mouth marketing (WOMM). Consumer reviews are often more trusted than corporate marketing-speak and even “expert” reviews. We have all experienced Amazon, travel sites, C-Net and other user rating sites which document actual consumer experiences. This is also happening at the B2B level. It is useful to both global consumers and global enterprises to make this content multilingual. Given the speed at which this information is produced, MT has to be part of the translation solution for digesting this information and converting it to multilingual form, to influence and assist global customers in a time frame where it is useful. For those of us who understand the translation challenges of this material, it is clear that involving humans in the expert MT development process, providing linguistic and translation guidance, will produce better MT output quality. The business value is significant, so I expect that linguists who add value to this conversion process will be valued and sought after.

While some translators see MT as a big bad wolf looming menacingly about, they fail to see that the world has changed for everybody, especially corporate marketers, PR professionals, and any enterprise sales function facing customers who freely share the details of personal customer experiences. An individual blogger brought Dell to its knees with a blog post titled Dell Hell. Some say it triggered a huge stock price drop. A viral video about careless baggage handling of musical instruments resulted in a PR nightmare for United Airlines and perhaps even a negative impact on their stock price. This user experience content really matters to a global enterprise, and they need strategies to deal with it as it spreads across the globe and influences purchase behavior. As the infographic below (bigger version available by clicking on this link) shows, every time a consumer posts an experience on the web it is seen by 150 people, small improvements in brand advocacy result in huge revenue increases, and 74% of consumers now rely on social networks to guide their purchasing decisions. This means that non-corporate content becomes much more important to understand and translate, since these experiences are being shared in multiple languages.

This graph details how negative experiences multiply in negative impact, as consumers tend to be much more invested in sharing bad experiences than in sharing positive ones. Thus it is very important that global enterprises monitor social media carefully. This is yet another example of what content really matters and how social media drives purchasing behavior.

So if all this is going on, it also means that what used to be the primary focus for the professional translation industry needs to change, from the static content of yesteryear to the more dynamic and much higher-volume user-generated content of today. The discussions in social media are often where product opinions, brand credibility and product reputations are formed, and this is also where customer loyalty or disloyalty can take shape, as the customer support experience shows. This is what we call high-value content. MT is a critical technology, a necessary foundational element for the professional translation world to play a useful role in solving these new translation challenges. However, it is important to also understand that this challenge cannot be solved by any old variant of MT, especially the upload-and-pray approaches of most DIY (Do It Yourself) MT. This is challenging even for experts, and failure is par for the course.

Where MT creates new translation work opportunities


Some specific examples of the expanding translation pie that MT enables and drives:

The knowledge base use-case scenario has been well established as something that improves customer satisfaction and empowerment for many global enterprises with high-demand technical support information. To develop and improve the quality of the MT translations in knowledge bases, very special linguistic work and translations need to be done. And while we see many examples of translators commenting on the poor quality of the translations, we also see millions of real customers providing feedback to the global enterprise suggesting that they find these “really bad” translations quite useful for their purposes, and prefer them to trying to read a tech note in a less familiar language. Thus, while MT is imperfect, we have evidence that many (millions) find it useful. Generic users on the internet are information consumers who have to deal with a language barrier. They are often the customers that global enterprises wish to communicate with. Their growing acceptance of MT suggests that MT has utility in general as a way to communicate with global customers, even though it is clear that a machine’s attempt at translation is rarely if ever as good as a human translation.

We are now also seeing that sentiment analysis of social media content is increasingly being considered a high-value exercise by marketing groups seeking to understand global markets. To translate international social media content it is useful to understand core terminology, get critical language translations in place, and steer expert MT accordingly. This is a new kind of linguistic and translation-related work, which involves understanding the behavior of language in specific domains and discussion forums and then building predictive translation models for them. This new linguistic engineering work is an opportunity for progressive translators. New skills are needed here: an understanding of a corpus at the linguistic-profile level, the ability to identify MT error patterns, and the development of corrective strategies in collaboration with experts. The objective is to understand the customer voice by language and develop appropriate marketing response strategies.
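To make the idea concrete, here is a minimal sketch of the simplest form of sentiment analysis, a lexicon-based scorer. The tiny lexicon and sample comments are invented for illustration; real systems use large, domain-tuned lexicons or trained classifiers, which is exactly where the linguistic engineering work described above comes in.

```python
# Minimal lexicon-based sentiment scoring sketch. The lexicon and the
# sample comments are invented for illustration only.

SENTIMENT_LEXICON = {
    "great": 1, "love": 1, "useful": 1, "fast": 1,
    "broken": -1, "terrible": -1, "slow": -1, "refund": -1,
}

def sentiment_score(comment: str) -> int:
    """Sum lexicon weights over the words in a comment; 0 means unknown words."""
    words = comment.lower().replace(",", " ").replace(".", " ").split()
    return sum(SENTIMENT_LEXICON.get(w, 0) for w in words)

comments = [
    "Love the new firmware, great battery life",
    "Screen arrived broken, terrible support, want a refund",
]

for c in comments:
    label = "positive" if sentiment_score(c) > 0 else "negative"
    print(f"{label}: {c}")
```

In a multilingual setting, MT would first convert the comments into a pivot language (or the lexicon would be rebuilt per language), which is why terminology and MT steering matter so much for this use case.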

We also see the growth of sharing internal product development information across languages within large global enterprises. Rather than use a public MT engine that can compromise and expose secret product plans, it has become important to develop internal corporate engines that help employees share documents and presentations in a secure environment and at least get a high-quality gist. This effort too benefits from skilled linguistic engineering work: corpus analysis, terminology development, and strategic glossary and TM data manufacturing.
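As a rough illustration of what the terminology development step involves, here is a sketch of the most basic technique: counting recurring two-word phrases in an in-domain corpus and surfacing the frequent ones as glossary candidates. The sample corpus is invented; real terminology mining adds stop-word filtering, statistical association measures and, crucially, human review.

```python
# Frequency-based terminology candidate extraction sketch.
# Counts recurring bigrams in a small in-domain corpus and lists
# those seen at least `min_count` times as glossary candidates.

from collections import Counter

def bigram_candidates(corpus: list[str], min_count: int = 2) -> list[tuple[str, int]]:
    counts = Counter()
    for sentence in corpus:
        tokens = sentence.lower().split()
        for a, b in zip(tokens, tokens[1:]):
            counts[f"{a} {b}"] += 1
    return [(term, n) for term, n in counts.most_common() if n >= min_count]

# Invented sample of in-domain support text
corpus = [
    "restart the print spooler service",
    "if the print spooler stops restart the service",
    "the print spooler logs errors",
]

for term, n in bigram_candidates(corpus):
    print(f"{n}x  {term}")
```

A candidate list like this would then be reviewed by a linguist and fed into the glossary and TM data used to steer the engine.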

Every large translation project that is ONLY done because of the cost/time characteristics that expert-managed MT lends to it will generate two kinds of translation opportunities that would not exist were it not for the basic fact that MT made this content viable and visible in a multilingual context:

  1. Post-editing of the highest value material in a multimillion word corpus
  2. Translation of content that simply would NOT have been considered for translation had MT not made it economically viable and feasible.

So the next time you hear somebody bashing “MT”, ask yourself a few questions:

  1. What kind of MT variant are they talking about, as there are many shades of grey? Amateur DIY experiences producing shoddy MT output abound, and translators should learn to identify these quickly and avoid them. Dealing with experts provides a very different experience and allows for ongoing feedback and improvement. MT is a tool that is only as good as the skill and competence of its users, and it is not suitable for many kinds of high-value translation work.
  2. Are you dealing with a client/customer who has a larger vision for expanding the scope of translation? There is likely a bright future with anybody who has a focus on these new massive data volume social media projects.
  3. Are you playing a role in getting information that really matters to customers and marketers translated? While user documentation is still important, it is clear the relative value of this kind of content continues to fall as an element of building great customer experiences. The higher the value of the information you translate to your customer, the higher your value to the client.
But I expect there will still be many translators who see no scenario in which they interact with MT in any way, expert-based or not, and that is OK, as it is a very different work experience that may not suit everybody. The very best translators can still put machines to shame with their speed and accuracy. But I hope that we will see more MT naysayers base their opinions on professionally focused expert MT initiatives, rather than on the well-publicized generic MT and lazy DIY MT initiatives that are much easier to find.

"You never change things by fighting the existing reality. To change something, build a new model that makes the existing model obsolete." - Buckminster Fuller

Friday, May 30, 2014

Monolithic MT or 50 Shades of Grey?

In the many discussions by different parties in the professional translation world involving machine translation, we see a great deal of conflation and confusion, because most people assume that all MT is equivalent and that any MT under discussion is largely identical in all aspects. Here is a slightly modified description of conflation from Wikipedia.
Conflation occurs when the identities of two or more implementations, concepts, or products, sharing some characteristics of one another, seem to be a single identity — the differences appear to become lost.[1] In logic, it is the practice of treating two distinct MT variants as if they were one, which produces errors or misunderstandings as a fusion of distinct subjects tends to obscure analysis of relationships which are emphasized by contrasts.
However, there are many reasons to question this “all MT is the same” assumption, as there are in fact many variants of MT, and it is useful to have some general understanding of the core characteristics of each of these variants so that a meaningful and more productive dialogue can be had when discussing how the technology can be used. This is particularly true in discussions with translators, where the general understanding is that all the variants are essentially the same. This can be seen clearly in the comments to the last post about improving the dialogue with translators. Misunderstandings are common when people use the same words to mean very different things.

There may be some who view my characterizations as opinionated and biased, and perhaps they are, but I do feel that in general these characterizations are fair and reasonable and most who have been examining the possibilities of this technology for a while, will likely agree with some if not all of my characterizations.

The broadest characterization that can be made about MT concerns the methodology used in developing the MT systems, i.e. Rule-based MT (RbMT) and Statistical MT (SMT), or some kind of hybrid, as today users of both methodologies claim to have a hybrid approach. If you know what you are doing, both can work for you, but for the most part the world has definitely moved away from RbMT and towards statistically based approaches, and the greatest amount of commercial and research activity is around evolving SMT technology. I have written previously about this, but we still often see misleading information on the subject, even from alleged experts. For practitioners, the technology used has a definite impact on the kind and degree of control you have over the MT output during the system development process, so one should care what technology is used. What are considered valuable skills and expertise in SMT may not be as useful with RbMT and vice versa, and both are complex enough that real expertise only comes from continuing focus, deep exposure and long-term experience.

The next level of MT categorization that I think is useful is the following:
  • Free Online MT (Google, Bing Translate etc.)
  • Open Source MT Toolkits (Moses & Apertium)
  • Expert Proprietary MT Systems
The toughest challenge in machine translation is the one that online MT providers like Google and Bing Translate attempt to address: they want to translate anything that anybody wants to translate, instantly, across thousands of language pairs. Historically, Systran and some other RbMT systems also addressed this challenge on a smaller scale, but SMT-based solutions have easily surpassed the output quality of these older RbMT systems in a few short years. The quality of these MT systems varies by language, with the best output produced in Romance languages (FR, IT, ES, PT) and the worst in languages like Korean, Turkish and Hungarian, and of course most African, Indic and less widely supported Asian languages. Thus the Spanish experience with “MT” is significantly different from the Korean or Hindi one. This is the most visible and most widely used translation technology across the globe, and it is also what most translators mean and reference when they complain about “poor MT quality”. For a professional translator, there are very limited customization and tuning capabilities, but even the generic system output can be very useful to translators working with Romance languages, and save typing time if nothing else. Microsoft does allow some level of customization depending on user data availability. This type of generic MT is the most widely used “MT” today, and in fact accounts for most of the translation done on the planet; users number in the hundreds of millions per month. We should note that in the many discussions about MT in the professional translation world, most people are referring to these generic online MT capabilities when they make a reference to “MT”.

Open Source MT Toolkits (Moses & Apertium)

I will confine the bulk of my comments to Moses, mostly because I know little about Apertium other than it being an open source RbMT tool. Moses is an open source SMT toolkit that allows anybody with a little bit of translation memory data to experiment and develop a personal MT system. Such a system can only be as good as the data and the expertise of the people using the system and tools, and I think it is quite fair to say that the bulk of Moses systems produce worse output quality than the major online generic MT systems. This does not mean that Moses users/developers cannot develop superior domain-focused systems, but the data, skills and ancillary tools needed to do so are not easily acquired, and I believe they are definitely missing in any instant DIY MT scenario. There is a growing suite of instant Moses-based MT solutions that make it easy to produce an engine of some kind, but do not necessarily make it easy to produce MT systems that meet professional use standards. For successful professional use, the output quality and standards requirements are generally higher than what is acceptable to the average user of Google or Bing Translate.

While many know how to upload data into a web portal to build an MT engine of some sort, very few know what to do if the system underperforms (as many initially do), as it requires diagnostic, corpus analysis and identification skills to get to the source of the problem, and then knowledge of what to fix and how to fix it, as not everything can be fixed. It is, after all, machine translation, and more akin to a data transformation than a real human translation process. Unfortunately, many translators have been subjected to “fixing” the output from these low-quality MT systems, hence the outcry within the translator community about the horrors of “MT”. Most professional translation agencies that attempt to use these instant MT toolkits underestimate the complexity and skills needed to produce good quality systems, and thus we have a situation today where much of the “MT” experience is either generic online MT or low-quality do-it-yourself (DIY) implementations. DIY only makes sense if you really do know what you are doing and why, otherwise it is just a gamble, or a rough reference on what is possible with “MT”, requiring no skill beyond getting data into an uploadable format.
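To give one concrete example of the kind of diagnostic skill involved, here is a minimal sketch of an out-of-vocabulary (OOV) check: what share of the words in the material to be translated never appear in the training data? A high OOV rate is one common reason an uploaded-TM engine underperforms, because the training corpus simply does not cover the target domain. The sample sentences are invented for illustration.

```python
# Out-of-vocabulary (OOV) rate: a basic diagnostic for an
# underperforming SMT engine. Compares the words in incoming
# material against the vocabulary of the training corpus.

def oov_rate(training_sentences: list[str], test_sentences: list[str]) -> float:
    vocab = {w for s in training_sentences for w in s.lower().split()}
    test_words = [w for s in test_sentences for w in s.lower().split()]
    unseen = [w for w in test_words if w not in vocab]
    return len(unseen) / len(test_words)

# Invented sample data: a legal-domain TM used to translate IT content
training = ["the contract is signed", "the invoice is paid"]
incoming = ["the firmware update failed", "the invoice is paid"]

print(f"OOV rate: {oov_rate(training, incoming):.0%}")
```

Diagnostics like this are simple in principle, but knowing which ones to run, how to interpret them, and what corrective data to manufacture in response is precisely the expertise that instant DIY scenarios lack.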

Expert Proprietary MT Systems
Given the complexity, the suite of support tools, and the very deep skill requirements of getting MT output to quality levels that provide real business leverage in professional situations, I think it is safe to say that this kind of “MT” is the exception rather than the rule. Here is a link to a detailed overview of how an expert MT development process differs from a typical DIY scenario. I have seen a few expert MT development scenarios from the inside, and here are some characteristics of the Asia Online MT development environment:
  • The ability to actively steer and enhance the quality of translation output produced by the MT system to critical business requirements and needs.
  • The degree of control over final translation output using the core engine together with linguist-managed pre-processing and post-processing rules in highly efficient translation production pipelines.
  • Improved terminological consistency with many tools and controls and feedback mechanisms to ensure this.
  • Guidance from experts who have built thousands of MT systems and who have learned and overcome the hundreds of different errors that developers can make that undermine output quality.
  • Improved predictability and consistency in the MT output, thus much more control over the kinds of errors and corrective strategies employed in professional use settings.
  • The ability to continuously improve the output produced by an MT system with small amounts of strategic corrective feedback.
  • Automatic identification and resolution of many fundamental problems that plague any MT development effort.
  • The ability to produce useful MT systems even in scarce data situations by leveraging proprietary data resources and strategically manufacturing the optimal kind of data to improve the post-editing experience.
So while we observe many discussions about “MT” on the social and professional web, they most often refer to the translator experience with generic MT, as this is the easiest MT to access. In translator forums and blogs the reference can also often be a failed DIY attempt. The best expert MT systems are only used in very specific, client-constrained situations and thus rarely get any visibility, except in some kind of raw form like support knowledge base content, where the production goal is always understandability over linguistic excellence. The very best MT systems, very domain-focused and used by post-editors going through projects at 10,000+ words/day, are usually client-specific and for private use only, and are rarely seen by anybody outside these large production projects.

It is important to understand that if any (LSP) competitor can reproduce your MT capabilities by simply throwing some TM data into an instant MT solution, then the business leverage and value of that MT solution is very limited. Having the best MT system in a domain can mean a long-term production cost and quality advantage, and this can provide meaningful competitive advantage, business leverage and definite barriers to competition.

In the context of the use of "MT" in a professional setting, the critical element for success is demonstrated and repeatable skill and a real understanding of how the technology works. The technology can only be as good as the skill, competence and expertise of the developers building these systems. In the right hands many of the MT variants can work, but the technology is complex and sophisticated enough that uninformed use and ignorant development strategies (e.g. upload and pray) can only lead to problems and a very negative experience for those who come down the line to clean up the mess. Usually the cleaners are translators or post-editors, and they need to learn to insist that they are working with competent developers who can assimilate and respond to their feedback before they engage in PEMT projects. I hope that in future they will exercise this power more frequently.

So the next time you read about “MT”, think about what is actually being referred to; maybe I should start saying Language Studio MT or Google MT or Bing MT or Expert Moses or Instant Moses or Dumb Moses rather than just "MT".

Addendum: added on June 20

This is a post that I just saw, and I think it provides a similar perspective on the MT variants from a vendor-independent point of view. Perhaps we are now getting to a point where more people realize that competence with MT requires more than dumping data into the DIY hopper and expecting it to produce useful results.

Machine translation: separating fact from fiction

Wednesday, May 14, 2014

Improving the MT Technology to Translator Dialogue

While we see that MT technology adoption continues to grow, hopefully because of clearly demonstrated benefits and measured production efficiencies, we still see that the dialogue between the technology developers / business sponsors and translators/post-editors is often strained, and communications can often be dysfunctional and sometimes even hostile.

While there is a growing volume of “how-to-use” material on the technology, much of it of questionable quality, there is still very little discussion about managing the human factors around successful use of the technology. The growth of instant, do-it-yourself (DIY) tools only unleashes more low-quality MT output into the world, and translators are often expected to edit (fix) very low-quality MT output for a pittance. Getting good quality MT output requires real skill, expertise and preferably considerable experience. The actual translator experience with “good MT” is not going to be so different from working with TM (though MT errors are quite different from TM errors), and is likely to be very different from the negative experiences described in translator blogs.

The history of MT has indeed been filled with eMpTy promises beyond the real possibilities of the technology, and more recently we see lots of sub-par DIY systems, built by mostly incompetent practitioners, that cause pain/fatigue/stress/frustration/anger to translators who engage with them or are somehow roped in to clean up the mess. In my eyes, however, this fact does not lead to the conclusion that the outlook for MT is bleak and hopeless.

Rather, it suggests that MT must be approached with care and expertise, not just in terms of basic system development mechanics but also in terms of managing human expectations and ensuring that risks and rewards are shared amongst the key stakeholders, and that transparency and equity should be guiding principles for MT projects in general.

I don't expect that MT will replace human translators, but I do expect that for a lot of business translation with largely repetitive content and a short shelf life, it will continue to make sense. Most of the corporate members of TAUS (who also pay for a lot of human translation work) are driven to deploy MT because they are indeed faced with more volume and content that is very valuable for a few months but has little value after that. The basic business urgency requires that they explore other approaches to getting material translated. They have often done this independently of their key translation agencies, who were very slow to catch on to this need. Many translators do not seem to realize that much of the content that MT focuses on is material that would simply NOT get translated if MT were not available, and it can sometimes create new human translation opportunities. It is not always a zero-sum game. Also, while some MT advocates can be over-zealous at times, I think very few are actually bent on deception and fraud, as is sometimes claimed.

MT does bring about change in traditional work practices and can sometimes have an adverse economic impact on translators (especially when misused or incompetently used). In some ways MT technology is getting better, and in some “easy” language combinations even DIY initiatives can produce some kind of minimal production advantage. But really steering an MT system so that working with it is an experience professional translators want to repeatedly engage in takes more skill than dumping data into an instant Moses system. Though the risk of running into incompetent MT practitioners is still high, we are seeing many more successful collaborations that show the potential and promise of this technology when it is properly used.

Much of the anger and even rage from the translator side is “passionately” stated in this blog post by Kevin Lossner. I will paraphrase some of his key objections, and other points I have heard in the broader translator community, at the risk of getting it wrong. The issues seem to be:
  • Messages from industry gurus, and from CSA & TAUS in particular, about how the business of translation is changing and their vision of the impact of automation on translators,
  • Messages from MT vendors (me included) about the value, urgency and benefits of using MT,
  • The possible negative impact of MT on the cognitive and professional skills of translators, or just the general nature of post-editing work,
  • The link between professional work effort and compensation,
  • The degree of involvement in the development of MT systems,
  • Lack of education and training related to MT,
  • General professional respect,
  • The overall commoditization impact on translation work.
It is clear to most of us who have had successful MT implementations that post-editing is not suitable for everybody. There are translators out there who have developed very keen expertise in some domains and can translate at speeds and quality levels that would be hard for most MT systems to match. But there are also many translators who will benefit from a well-developed MT system in the same way that they may benefit from the use of translation memory and other CAT tools. When properly done, working with MT output is not so different from working with TM. The nature of the errors is different, but MT can also respond and improve as corrective feedback is processed.

We have already reached a point in time where more “rough” translation is done by MT in a day than ALL humans produce in a year. The free online MT engines are used about 250-500 million times a month, and while it may still be true that MT has not penetrated the professional translation world in a substantial way yet, MT is now commonly used by many French and Spanish translators going in and out of English, and probably in many other language pairs too. There are still some who question the claims about the increasing volumes of information that companies must now translate to ensure global visibility for their products and services, but many companies now understand that making more and more product-related content multilingual is a key to international market success.

The translator concerns listed above, however, do need attention, and should be addressed in some way by all those who wish to maximize the potential for successful MT initiatives. John Hagel has an interesting and somewhat bleak essay on The Dark Side of Technology, where he describes the combined impact of all the new digital technologies, which includes:
  • A world of mounting performance pressure,
  • An accelerating pace of change,
  • Increasing uncertainty,
  • Digital technologies coming together into infrastructures that straddle the globe and reach an ever-expanding portion of the population. In economic terms, these infrastructures systematically and substantially reduce barriers to entry and barriers to movement on a global scale.
This is perhaps what is being felt both by individual translators and by translation agencies and thus we often see reactive behavior at both these levels. We see many adopt the zero sum game view of the world, and there is increasing short-sightedness and often a breakdown of trust.

While I do not have a definitive prescription for success in dealing with the human factors involved in an MT project, I think it is possible to outline some factors that I have observed from partners like Advanced Language Translation and that I consider best practices.

It is important to understand that the better the MT system and its output are, the better the ROI and the translator/editor work experience. MT systems that can respond to the needs of professionals using them for real work are very different from ones where the users have no real control over what happens beyond putting some data in. So if I were to list some recommendations on how to approach these basic communication and trust issues, I think they would include the following:
  • Build the best MT system you can, which means it should never be done in a hurry and preferably developed by experts who can tune it and adjust it as needed in response to translator feedback.
  • Manage expectations of all key stakeholders, especially with regard to the evolutionary nature of MT system development. It is not as easy as 1-2-3 and requires expertise and patience.
  • Get MT systems up to an acceptable average quality level with the involvement of senior trusted translators before unleashing the system to a larger group of translators/editors.
  • Involve Project Managers and senior translators in MT system development with experts so that you can build organizational intelligence and skills on specific data cleaning, data preparation and system assessment.
  • Involve key translators in the rate setting process to establish fair and reasonable compensation rates that are trusted.
  • Don’t involve translators who are fundamentally opposed to MT technology. There are translators who do not benefit from MT because of very special and unique skill sets.
  • Provide specific examples of corrections for a variety of different types of output errors for post-editors to model.
  • Ensure that the nature of the task is understood and compensation issues are clear BEFORE setting production deadlines.
  • Focus on fixing high frequency error patterns with a small test team and test data set before general release.
  • Feed back error corrections and ask for general feedback from editors on an ongoing basis, and incorporate as much of this into the system as possible. Monitor ongoing progress to ensure that the MT system remains consistent over the project and over time.
  • Retune and retrain the MT engine quickly and as frequently as possible.
  • Develop deeper system tuning skills over time as key team members begin to understand how the system responds to various kinds of feedback and corrective adjustments.
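The feedback steps in the list above (collect post-edits, find high-frequency error patterns, feed them back for retraining) can be sketched in code. This is only a minimal illustration, not part of any MT toolkit: the function names and sample segments are hypothetical, and a real system would work on much larger corpora with proper tokenization. The idea is simply to diff MT output against its post-edited version and count recurring corrections, so the most frequent error patterns can be prioritized for the next retuning cycle:

```python
from collections import Counter
from difflib import SequenceMatcher

def extract_corrections(mt_output, post_edit):
    """Return (mt_phrase, corrected_phrase) pairs where the editor changed the MT output."""
    mt_tokens, pe_tokens = mt_output.split(), post_edit.split()
    corrections = []
    for tag, i1, i2, j1, j2 in SequenceMatcher(None, mt_tokens, pe_tokens).get_opcodes():
        if tag != "equal":  # 'replace', 'delete', or 'insert' means the editor intervened
            corrections.append((" ".join(mt_tokens[i1:i2]), " ".join(pe_tokens[j1:j2])))
    return corrections

def high_frequency_errors(segment_pairs, top_n=5):
    """Count repeated corrections across segments to surface high-frequency error patterns."""
    counts = Counter()
    for mt_output, post_edit in segment_pairs:
        counts.update(extract_corrections(mt_output, post_edit))
    return counts.most_common(top_n)

# Hypothetical sample: the same agreement error recurs in two segments.
pairs = [
    ("the engine translate the text", "the engine translates the text"),
    ("the system translate the file", "the system translates the file"),
    ("this output was fine", "this output was fine"),
]
print(high_frequency_errors(pairs))
# The recurring correction ("translate" -> "translates") surfaces with a count of 2.
```

A report like this gives the "small test team" mentioned above something concrete to review with the MT experts before each retraining cycle, rather than anecdotal impressions of system quality.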
What more can be done to make post-editing MT work better understood, and thus hopefully a less threatening or demeaning technology? I see PEMT as a natural evolution of the business translation process. It is simply a new approach that enables new information to be translated, or a new way to do repetitive tasks, but it can also be a means to build and develop strategic advantage. A guest post on the TAUS site has made a plea for translator education (not training), but I think it unlikely that the recommendations given there will solve the problems I have listed above.

The most successful translators and LSPs all seem to be able to build “high trust professional networks”, and I suspect that this will be the way forward, i.e. collaboration between Enterprises, MT developers, LSPs and translators who trust each other. Actually quite simple, but not so common in the professional translation industry.

I feel compelled to re-use a quote I have used before because I think it fits very well in this current context.
“Disruption is not something we set out to do. It is something that happens because of what we do,” stresses Brian Solis. Disruption changes human behavior (think: iPhone) and it’s a mixture of both ‘design-thinking and system-thinking’ to get there. So as an innovator, where do you begin if you don’t start with attempting disruption? To boil down Solis’ message into a word: ‘empathy.’ That’s right, empathy. Empathy drives the core of your vision as an innovator, or so it should, says Solis.
Solis says that there are only two ways to change human behavior, by manipulating people, or by inspiring them. If you choose the former, good luck on your journey, but if you would prefer to attempt the latter with your innovative attempts, then you should start with empathy: the why of your product or company. That is how you will capture attention, and hold onto it, especially in the technologically, socially-driven world today.”
The excerpt above is from this post on The future of innovation is disruption (emphasis mine).
“The end of business as usual takes more than vision and innovation to survive digital Darwinism however. It requires a tectonic shift from product or industry focus to that of long-term consumer (customer) experiences. Businesses that don’t are forever caught in a perpetual cycle of competing for price and performance. It is in fact one of the reasons that Apple can command a handsome premium. The company delivers experiences that contribute to an overall lifestyle and ultimately style and self-expression. Think about the business model it takes to do so however. You can’t invent or invest in new experiences if your business is fixated on roadmaps and defending aging business models (SDL & LIOX?).”
This excerpt is from a fascinating article on the collapse of the Japanese consumer electronics industry and especially Sony, Panasonic and Sharp.

The way forward in developing win-win scenarios and excellence in these challenging times is collaboration between trusted partners. Collaboration curves hold the potential to mobilize larger and more diverse groups of participants to innovate and create new value. In trusted relationships and networks, critical knowledge flows happen more easily. Benefits and risks are shared more willingly, and together participants are driven by a desire to learn and reach new levels of performance. In this context, zero-sum relationships that focus on dividing a fixed pie of rewards evolve into positive-sum relationships where participants are driven by the opportunity to expand the overall pie. When there is a real prospect of expanding rewards, we are much more likely to trust others than when everyone is focused on how to get a bigger share of a fixed pie. I think it is also likely that agencies that regard translators as valued partners in a demonstrable way at an organizational level will lead the innovation and evolution of how business translation gets done. Hagel also says that a new narrative based on opportunity is needed.
Like any great narrative, it must be crafted.  “Craft” is an evocative term because it suggests that narratives are not just created on paper, but built through the actions that we begin to take as we start to see the opportunity ahead. Narratives emerge through action and interaction as we collectively begin to sense an opportunity and learn through action what it will take to achieve that opportunity.
No single person can be responsible for or create this collaboration, trust, and opportunity narrative alone, and I look forward to seeing those who do help carve a path for all to learn from. Revolutions often happen from many small acts (balls) that are set into motion, rolling together in the same direction and gradually building momentum; some revolutions happen slowly, after some initial sputtering and misfiring.