Friday, June 20, 2014

The Expanding Translation Market Driven by Expert Based MT

There has been much talk amongst some translators about how MT is a technology that will take away work and ultimately replace them, and thus some translators dig in their heels and resist MT at every step. This antagonistic view rests on a zero-sum assumption: if a computer can perform a translation that they used to do, it must inevitably mean less work for them in the future. In some cases this may be true; however, the presumption is worth a closer look.

While stories of MT mishaps and mistranslations abound (we all know how easy it is to make MT look bad), it is becoming increasingly apparent to many in the professional translation business that it is important to learn how to use and extend the capabilities of this technology successfully, because the technology also enables new kinds of translation and linguistic engineering projects that would simply be impossible without viable and effective implementations of expert MT technology. Generally, MT is not a wholesale replacement for humans, and in my opinion it never will be. When properly implemented, it is a productivity enhancer and a way to expand the scope of multilingual information access for global populations that can benefit from it.

MT is in fact as much a tool for creating new kinds of translation work as it is a tool for getting traditional translation work done faster and more cost-effectively. While MT is unlikely to replace human beings in any application where translation quality and semantic finesse really matter, a growing number of cases show that MT is suitable for enabling many new kinds of business information translation initiatives that may in fact generate whole new kinds of translation-related work for some, if not all, translators. MT is already creating new kinds of translation work opportunities in all of the following scenarios:

  • With high-volume content that would simply not get translated via traditional human translation for economic and timeliness reasons, so the choice is either use MT or do nothing. MT lowers total costs enough to make viable the translation of content that would otherwise never have been translated. This in turn has created new work for human translation professionals in editing the most critical content and helping to raise the average quality of expert MT output.
  • With content whose information value clearly cannot justify typical human translation costs.
  • High value content in social networks that is changing every hour and every day and has great value for a brief moment, but has limited value a few weeks after the fact.
  • Knowledge content that facilitates and enhances the global spread of critical knowledge.
  • Content created to enhance and accelerate information access for global customers who prefer a self-service model, as in technical support knowledge bases with new content streaming in daily.
  • Content that does not need to be perfect but just approximately understandable for exploratory or gist purposes.
One point worth clarifying upfront is that much of the interest in MT by global enterprises is driven by their need to face the barrage of product/service related comments, discussions and opinions that flow in social media and influence how customers view their products. This social media banter is very influential in driving purchase decisions, often much more than corporate marketing communications which are seen as self-serving and self-promoting. Also, as products grow in complexity it becomes important to share more information about power features and extended capabilities. The issue of growth in the sheer volume of information is increasingly clear to most but there are actually translators out there who think the content tsunami is a myth. EMC and IDC have well documented studies that show the continuing content explosion. 

Global enterprises that wish to engage in commerce with global populations have discovered that control of marketing has shifted away from corporate marketing departments to consumers who share intimate details of real customer experiences. User-generated content (UGC) such as product experience comments in social media (e.g., blogs, Facebook, YouTube, Twitter, and community forums) has become much more important to final business outcomes. This UGC now influences customer behavior all over the world and is often referred to as word-of-mouth marketing (WOMM). Consumer reviews are often more trusted than corporate marketing-speak and even “expert” reviews. We have all experienced Amazon, travel sites, C-Net and other user rating sites that document actual consumer experiences. This is also happening at the B2B level. It is useful to both global consumers and global enterprises to make this content multilingual. Given the speed at which this information is produced, MT has to be part of the translation solution for digesting this information and converting it to multilingual form, so that it can influence and assist global customers in a time frame in which it is still useful. For those of us who understand the translation challenges of this material, it is clear that involving humans in the expert MT development process, providing linguistic and translation guidance, will produce better MT output quality. The business value is significant, so I expect that linguists who add value to this conversion process will be valued and sought after.

While some translators see MT as a big bad wolf that looms menacingly around, they fail to see that the world has changed for everybody, especially corporate marketers, PR professionals, and any enterprise sales function facing customers who freely share details of personal customer experiences. An individual blogger brought Dell to its knees with a blog post titled Dell Hell; some say it triggered a huge stock price drop. A viral video about careless baggage handling of musical instruments resulted in a PR nightmare for United Airlines, and perhaps even a negative impact on its stock price. This user experience content really matters to a global enterprise, which needs strategies to deal with it as it spreads across the globe and influences purchase behavior. As the infographic below (bigger version available by clicking on this link) shows, every time a consumer posts an experience on the web it is seen by 150 people, which means small improvements in brand advocacy can result in large revenue increases, and 74% of consumers now rely on social networks to guide their purchasing decisions. This means that non-corporate content becomes much more important to understand and translate, since these experiences are being shared in multiple languages.

This graph details how negative experiences multiply in negative impact, as consumers tend to be much more invested in sharing bad experiences than in sharing positive ones. Thus it is very important that global enterprises monitor social media carefully. This is yet another example of what content really matters and of how social media drives purchasing behavior.

So if all this is going on, it also means that the primary focus of the professional translation industry needs to change from the static content of yesteryear to the more dynamic and much higher-volume user-generated content of today. The discussions in social media are often where product opinions, brand credibility and product reputations are formed, and this is also where customer loyalty or disloyalty can form, as the customer support experience shows. This is what we call high-value content. MT is a critical foundational technology if the professional translation world is to play a useful role in solving these new translation challenges. However, it is important to also understand that this challenge cannot be solved by any old variant of MT, especially the upload-and-pray approaches of most DIY (Do It Yourself) MT. This is challenging even for experts, and failure is par for the course.


Where MT creates new translation work opportunities


Some specific examples of the expanding translation pie that MT enables and drives:

The knowledge base use-case scenario is well established as something that improves customer satisfaction and empowerment for many global enterprises with high-demand technical support information. To develop and improve the quality of the MT translations in knowledge bases, very specialized linguistic work and translations need to be done. And while we see many examples of translators commenting on the poor quality of the translations, we also see millions of real customers providing feedback to the global enterprise suggesting that they find these “really bad” translations quite useful for their purposes, and that they prefer them to trying to read a tech note in a language they know less well. Thus, while MT is imperfect, we have evidence that many millions find it useful. Generic users on the internet are information consumers who have to deal with a language barrier. They are often the very customers that global enterprises wish to communicate with. Their growing acceptance of MT suggests that MT has general utility as a way to communicate with global customers, even though it is clear that a machine’s attempt at translation is rarely, if ever, as good as a human translation.

We are now also seeing that sentiment analysis of social media content is increasingly considered a high-value exercise by marketing groups seeking to understand global markets. To translate international social media content, it is useful to understand core terminology, get critical language translations in place, and steer expert MT. These are new kinds of linguistic and translation-related work, which involve understanding the behavior of language in specific domains and discussion forums and then building predictive translation models for them. This new linguistic engineering work is an opportunity for progressive translators. New skills are needed here: an understanding of a corpus at the linguistic-profile level, the ability to identify MT error patterns, and the ability to develop corrective strategies by working together with experts. The objective is to understand the customer voice by language and develop appropriate marketing response strategies.
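As a toy illustration of what corpus analysis at the linguistic-profile level might begin with (the function and sample data here are hypothetical, not any vendor's tooling), a linguist could start by profiling domain terminology frequencies:

```python
import re
from collections import Counter

def term_profile(corpus_lines, top_n=5):
    """Count lowercase word frequencies across a corpus sample.

    A real domain profile would add n-grams, casing analysis and
    stopword handling; this only sketches the first step.
    """
    counts = Counter()
    for line in corpus_lines:
        counts.update(re.findall(r"[a-z']+", line.lower()))
    return counts.most_common(top_n)

# Hypothetical snippet of support-forum content:
sample = [
    "Reset the router before you reset the modem.",
    "The router firmware update failed again.",
]
print(term_profile(sample, top_n=3))
# → [('the', 3), ('reset', 2), ('router', 2)]
```

Even this trivial count surfaces candidate domain terms ("router", "reset") that would need consistent handling in an engine tuned for this forum.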

We also see growth in the sharing of internal product development information across languages within large global enterprises. Rather than use a public MT engine, which can compromise and expose secret product plans, it has become important to develop internal corporate engines that help employees share documents and presentations in a secure environment and at least get a high-quality gist. This effort too benefits from skilled linguistic engineering work: corpus analysis, terminology development, and strategic glossary and TM data manufacturing.

Every large translation project that is ONLY done because of the cost/time characteristics that expert-managed MT lends to it will generate two kinds of translation opportunities that would not exist were it not for the basic fact that MT made this content viable and visible in a multilingual context:

  1. Post-editing of the highest-value material in a multimillion-word corpus
  2. Translation of content that simply would NOT have been considered for translation had MT not made it economically viable and feasible.

So the next time you hear somebody bashing on “MT” ask yourself a few questions:

  1. What kind of MT variant are they talking about as there are many shades of grey? Amateur DIY experiences producing shoddy MT output abound, and translators should learn to identify these quickly and avoid them. Dealing with experts provides a very different experience and allows for ongoing feedback and improvement. MT is a tool that is only as good as the skill and competence of the users and is not suitable for many kinds of high value translation work.
  2. Are you dealing with a client/customer who has a larger vision for expanding the scope of translation? There is likely a bright future with anybody who has a focus on these new massive data volume social media projects.
  3. Are you playing a role in getting information that really matters to customers and marketers translated? While user documentation is still important, it is clear the relative value of this kind of content continues to fall as an element of building great customer experiences. The higher the value of the information you translate to your customer, the higher your value to the client.
But I expect that there will still be many translators who see no scenario in which they interact with MT in any way, expert-based or not, and that is OK, as it is a very different work experience that may not suit everybody. The very best translators can still put machines to shame with their speed and accuracy. But I hope that we will see more MT naysayers base their opinions about MT on professionally focused expert MT initiatives, rather than the well-publicized generic MT and lazy DIY MT initiatives that are much easier to find.

"You never change things by fighting the existing reality. To change something, build a new model that makes the existing model obsolete." - Buckminister Fuller

Friday, May 30, 2014

Monolithic MT or 50 Shades of Grey?

In the many discussions by different parties in the professional translation world involving machine translation, we see a great deal of conflation and confusion because most people assume that all MT is equivalent and that any MT under discussion is largely identical in all aspects. Here is a slightly modified description of conflation from Wikipedia.
Conflation occurs when the identities of two or more implementations, concepts, or products, sharing some characteristics of one another, seem to be a single identity — the differences appear to become lost. In logic, it is the practice of treating two distinct MT variants as if they were one, which produces errors or misunderstandings as a fusion of distinct subjects tends to obscure analysis of relationships which are emphasized by contrasts.
However, there are many reasons to question this “all MT is the same” assumption: there are in fact many variants of MT, and it is useful to have some general understanding of the core characteristics of each so that a meaningful and more productive dialogue can be had when discussing how the technology can be used. This is particularly true in discussions with translators, where the general understanding is that all the variants are essentially the same. This can be seen clearly in the comments on the last post about improving the dialogue with translators. Misunderstandings are common when people use the same words to mean very different things.

There may be some who view my characterizations as opinionated and biased, and perhaps they are, but I feel that in general these characterizations are fair and reasonable, and most who have been examining the possibilities of this technology for a while will likely agree with some, if not all, of them.

The broadest characterization that can be made about MT concerns the methodology used in developing the MT systems: Rule-based MT (RbMT) and Statistical MT (SMT), or some kind of hybrid, as users of both of these methodologies today claim a hybrid approach. If you know what you are doing, both can work for you, but for the most part the world has definitely moved away from RbMT and toward statistically based approaches, and the greatest amount of commercial and research activity is around evolving SMT technology. I have written previously about this, but we continue to see misleading information about it, even from alleged experts. For practitioners, the technology used has a definite impact on the kind and degree of control available over the MT output during the system development process, so one should care what technology is used. What are considered valuable skills and expertise in SMT may not be as useful with RbMT and vice versa, and both are complex enough that real expertise only comes from continuing focus, deep exposure and long-term experience.

The next level of MT categorization that I think is useful is the following:
  • Free Online MT (Google, Bing Translate etc..)
  • Open Source MT Toolkits (Moses & Apertium)
  • Expert Proprietary MT Systems
The toughest challenge in machine translation is the one that online MT providers like Google and Bing Translate attempt to address: translating anything that anybody wants to translate, instantly, across thousands of language pairs. Historically, Systran and some other RbMT systems also addressed this challenge on a smaller scale, but the SMT-based solutions have easily surpassed the output quality of these older RbMT systems in a few short years. The quality of these MT systems varies by language, with the best output produced in Romance languages (FR, IT, ES, PT) and the worst in languages like Korean, Turkish and Hungarian, and of course most African, Indic and smaller Asian languages. Thus the Spanish experience with “MT” is significantly different from the Korean or Hindi one. This is the most visible and most widely used translation technology across the globe, and it is also what most translators mean and reference when they complain about “poor MT quality”. For a professional translator, there are very limited customization and tuning capabilities, but even the generic system output can be very useful to translators working with Romance languages and save typing time if nothing else. Microsoft does allow some level of customization depending on user data availability. This type of generic MT is the most widely used “MT” today, and in fact accounts for most of the translation done on the planet; users number in the hundreds of millions per month. We should note that in the many discussions about MT in the professional translation world, most people are referring to these generic online MT capabilities when they make a reference to “MT”.

Open Source MT Toolkits (Moses & Apertium)

I will confine the bulk of my comments to Moses, mostly because I know little about Apertium other than that it is an open source RbMT tool. Moses is an open source SMT toolkit that allows anybody with a little bit of translation memory data to experiment and develop a personal MT system. Such a system can only be as good as the data and the expertise of the people using the system and tools, and I think it is quite fair to say that the bulk of Moses systems produce worse output quality than the major online generic MT systems. This does not mean that Moses users/developers cannot develop superior domain-focused systems, but the data, skills and ancillary tools needed to do so are not easily acquired and are, I believe, definitely missing in any instant DIY MT scenario. There is a growing suite of instant Moses-based MT solutions that make it easy to produce an engine of some kind, but they do not necessarily make it easy to produce MT systems that meet professional use standards. For successful professional use, the system output quality and standards requirements are generally higher than what is acceptable to the average user of Google or Bing Translate.
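The point that a system can only be as good as its data can be made concrete. As a minimal, hypothetical sketch (the function and thresholds are illustrative, not part of Moses or any vendor's tooling) of the kind of corpus hygiene an experienced builder applies before training:

```python
def clean_parallel_corpus(pairs, max_ratio=3.0, max_len=80):
    """Filter a parallel corpus of (source, target) segment pairs.

    Drops empty segments, overlong segments, pairs whose length
    ratio suggests misalignment, and exact duplicates -- a small
    subset of the hygiene a real SMT training pipeline applies.
    """
    seen = set()
    kept = []
    for src, tgt in pairs:
        s, t = src.split(), tgt.split()
        if not s or not t:
            continue                       # empty side
        if len(s) > max_len or len(t) > max_len:
            continue                       # overlong segment
        if max(len(s), len(t)) / min(len(s), len(t)) > max_ratio:
            continue                       # likely misalignment
        if (src, tgt) in seen:
            continue                       # exact duplicate
        seen.add((src, tgt))
        kept.append((src, tgt))
    return kept

pairs = [
    ("the server is down", "el servidor no funciona"),
    ("the server is down", "el servidor no funciona"),   # duplicate
    ("ok", "de acuerdo gracias por su mensaje de hoy"),  # ratio too skewed
    ("", "vacio"),                                       # empty source
]
print(clean_parallel_corpus(pairs))  # keeps only the first pair
```

The instant DIY portals typically hide or skip such steps, which is one reason two engines trained on "the same TM" can behave very differently.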

While many know how to upload data into a web portal to build an MT engine of some sort, very few know what to do if the system underperforms (as many initially do), since it requires diagnostic, corpus analysis and identification skills to get to the source of the problem, and then knowledge of what to fix and how to fix it, as not everything can be fixed. It is, after all, machine translation: more akin to a data transformation than a real human translation process. Unfortunately, many translators have been subjected to “fixing” the output from these low-quality MT systems, and thus the outcry within the translator community about the horrors of “MT”. Most professional translation agencies that attempt to use these instant MT toolkits underestimate the complexity and skills needed to produce good quality systems, and thus we have a situation today where much of the “MT” experience is either generic online MT or low-quality do-it-yourself (DIY) implementations. DIY only makes sense if you really do know what you are doing and why you are doing it; otherwise it is just a gamble, or a rough reference on what is possible with “MT”, with no skill required beyond getting data into an uploadable format.



Expert Proprietary MT Systems
 
Given the complexity, suite of support tools and very deep skill requirements of getting MT output to quality levels that provide real business leverage in professional situations I think it is safe to say that this kind of “MT” is the exception rather than the rule. Here is a link to a detailed overview of how an expert MT development process would differ from a typical DIY scenario. I have seen a few expert MT development scenarios from the inside and here are some characteristics of the Asia Online MT development environment:
  • The ability to actively steer and enhance the quality of translation output produced by the MT system to meet critical business requirements and needs.
  • The degree of control over final translation output using the core engine together with linguist-managed pre-processing and post-processing rules in highly efficient translation production pipelines.
  • Improved terminological consistency with many tools and controls and feedback mechanisms to ensure this.
  • Guidance from experts who have built thousands of MT systems and who have learned and overcome the hundreds of different errors that developers can make that undermine output quality.
  • Improved predictability and consistency in the MT output, thus much more control over the kinds of errors and corrective strategies employed in professional use settings.
  • The ability to continuously improve the output produced by an MT system with small amounts of strategic corrective feedback.
  • Automatic identification and resolution of many fundamental problems that plague any MT development effort.
  • The ability to produce useful MT systems even in scarce data situations by leveraging proprietary data resources and strategically manufacturing the optimal kind of data to improve the post-editing experience.
So while we observe many discussions about “MT” on the social and professional web, they most often refer to the translator experience with generic MT, as this is the easiest MT to access. In translator forums and blogs the reference can also often be a failed DIY attempt. The best expert MT systems are only used in very specific, client-constrained situations and thus rarely get any visibility, except in some kind of raw form like support knowledge base content, where the production goal is always understandability over linguistic excellence. The very best MT systems, which are very domain-focused and used by post-editors going through projects at 10,000+ words/day, are usually client-specific and for private use only, and are rarely seen by anybody outside these large production projects.

It is important to understand that if any (LSP) competitor can reproduce your MT capabilities by simply throwing some TM data into an instant MT solution, then the business leverage and value of that MT solution are very limited. Having the best MT system in a domain can mean a long-term production cost and quality advantage, and this can provide meaningful competitive advantage, business leverage and definite barriers to competition.

In the context of the use of "MT" in a professional setting, the critical element for success is demonstrated, repeatable skill and a real understanding of how the technology works. The technology can only be as good as the skill, competence and expertise of the developers building these systems. In the right hands many of the MT variants can work, but the technology is complex and sophisticated enough that uninformed use and ignorant development strategies (e.g. upload and pray) can only lead to problems and a very negative experience for those who come down the line to clean up the mess. Usually the cleaners are translators or post-editors, and before they engage in PEMT projects they need to insist that they are working with competent developers who can assimilate and respond to their feedback. I hope that in future they will exercise this power more frequently.

So the next time you read about “MT”, think about what is actually being referred to. Maybe I should start saying Language Studio MT or Google MT or Bing MT or Expert Moses or Instant Moses or Dumb Moses rather than just "MT".

Addendum: added on June 20

This is a post that I just saw, and I think it provides a similar perspective on the MT variants from a vendor-independent point of view. Perhaps we are now getting to a point where more people realize that competence with MT requires more than dumping data into the DIY hopper and expecting it to produce useful results.

Machine translation: separating fact from fiction

Wednesday, May 14, 2014

Improving the MT Technology to Translator Dialogue

While we see that MT technology adoption continues to grow, hopefully because of clearly demonstrated benefits and measured production efficiencies, we still see that the dialogue between the technology developers / business sponsors and translators/post-editors is often strained, and communications can often be dysfunctional and sometimes even hostile.

While there is a growing volume of material on how to use the technology (much of it of questionable quality), there is still very little discussion about managing the human factors around successful use of the technology. The growth of instant, do-it-yourself (DIY) tools only unleashes more low-quality MT output into the world, and translators are often expected to edit (fix) very low-quality MT output for a pittance. Getting good quality MT output requires real skill, expertise and preferably considerable experience. The actual translator experience with “good MT” is not going to be so different from working with TM (though MT errors are quite different from TM errors) and is likely going to be very different from the negative experiences described in translator blogs.

The history of MT has indeed been filled with eMpTy promises beyond the real possibilities of the technology, and more recently we see lots of sub-par DIY systems, built by mostly incompetent practitioners, that cause pain/fatigue/stress/frustration/anger to translators who engage, or are somehow roped in, to clean up the mess. This fact does not, however, lead me to conclude that the outlook for MT is bleak and hopeless.

Rather, it suggests that MT must be approached with care and expertise, not just in terms of basic system development mechanics but also in terms of managing human expectations and ensuring that risks and rewards are shared amongst the key stakeholders, and that transparency and equity should be guiding principles for MT projects in general.

I don't expect that MT will replace human translators, but I do expect that for a lot of business translation with largely repetitive content and a short shelf life, it will continue to make sense. Most of the corporate members of TAUS (who also pay for a lot of human translation work) are driven to deploy MT because they are indeed faced with more volume and with content that is very valuable for a few months but of little value after that. Basic business urgency requires that they explore other approaches to getting material translated. They have often done this independently of their key translation agencies, who were very slow to catch on to this need. Many translators do not seem to realize that much of the content that MT focuses on is material that would simply NOT get translated if MT were not available, and it can sometimes create new human translation opportunities. It is not always a zero-sum game. Also, while some MT advocates can be over-zealous at times, I think very few are actually bent on deception and fraud, as is sometimes claimed.

MT does bring about change in traditional work practices and can sometimes have an adverse economic impact on translators (especially when misused or incompetently used). In some ways MT technology is getting better, and in some “easy” language combinations even DIY initiatives can produce some kind of minimal production advantage. But really steering an MT system so that working with it is an experience professional translators want to engage in repeatedly takes more skill than dumping data into an instant Moses system. Though the risk of running into incompetent MT practitioners is still high, we are seeing many more successful collaborations that show the potential and promise of this technology when it is properly used.

Much of the anger and even rage from the translator side is “passionately” stated in this blog post by Kevin Lossner. I will paraphrase some of his key objections, and other points I have heard in the broader translator community, at the risk of getting it wrong. The issues seem to be:
  • Messages from industry gurus, and from CSA & TAUS in particular, about how the business of translation is changing and their vision of the impact of automation on translators,
  • Messages from MT vendors (me included) about the value, urgency and benefits of using MT,
  • The possible negative impact of MT on the cognitive and professional skills of translators, or just the general nature of post-editing work,
  • The link between professional work effort and compensation,
  • The degree of translator involvement in the development of MT systems,
  • Lack of education and training related to MT,
  • General professional respect,
  • The overall commoditization impact on translation work.
It is clear to most of us who have had successful MT implementations that post-editing is not suitable for everybody. There are translators out there who have developed very keen expertise in some domains and can translate at speeds and quality levels that would be hard for most MT systems to match. But there are also many translators who will benefit from a well-developed MT system in the same way that they benefit from translation memory and other CAT tools. When properly done, working with MT output is not so different from working with TM. The nature of the errors is different, but MT can also respond and improve as corrective feedback is processed.

We have already reached a point where more “rough” translation is done by MT in a day than ALL humans do in a year. The free online MT engines are used about 250-500 million times a month, and while it may still be true that MT has not penetrated the professional translation world in a substantial way yet, MT is now commonly used by many French and Spanish translators going in and out of English, and probably in many other language pairs too. There are still some who question the reality of the increasing volumes of information that companies must now translate to ensure global visibility for their products and services, but many companies now understand that making more and more product-related content multilingual is a key to international market success.
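The scale claim is easy to sanity-check with a back-of-envelope calculation. All figures below are illustrative assumptions for the sake of the arithmetic, not measured data:

```python
# What would the "MT does in a day what humans do in a year" claim imply?
translators = 300_000                    # assumed professional translators worldwide
words_per_translator_year = 2_500 * 250  # ~2,500 words/day over ~250 working days
human_words_per_year = translators * words_per_translator_year

monthly_mt_users = 300_000_000           # mid-range of the 250-500M monthly figure
# For MT to match a year of human output every day, each monthly user
# would need to average this many machine-translated words per day:
words_per_user_per_day = human_words_per_year / monthly_mt_users

print(f"Human output/year: {human_words_per_year:,} words")
print(f"Implied MT words per user per day: {words_per_user_per_day:,.0f}")
# → Human output/year: 187,500,000,000 words
# → Implied MT words per user per day: 625
```

With these assumed figures, the claim holds if each monthly user averages only a few hundred machine-translated words a day, roughly one web page, which seems entirely plausible for gist use.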

The translator concerns listed above do need attention, however, and should be addressed in some way by all those who wish to maximize the potential for successful MT initiatives. John Hagel has an interesting and somewhat bleak essay, The Dark Side of Technology, in which he describes the combined impact of all the new digital technologies, including:
  • A world of mounting performance pressure,
  • An accelerating pace of change,
  • Increasing uncertainty,
  • Digital technologies are coming together into global technology infrastructures that straddle the globe and reach an ever-expanding portion of the population. In economic terms, these infrastructures systematically and substantially reduce barriers to entry and barriers to movement on a global scale.
This is perhaps what is being felt both by individual translators and by translation agencies, and thus we often see reactive behavior at both levels. Many adopt the zero-sum view of the world, and there is increasing short-sightedness and often a breakdown of trust.

While I do not have a definitive prescription for success in dealing with the human factors involved in an MT project, I think it is possible to outline some factors I have observed from partners like Advanced Language Translation that I consider best practices.

It is important to understand that the better the MT system and its output are, the better the ROI and the translator/editor work experience. MT systems that can respond to the needs of the professionals using them for real work are very different from ones where users have no real control over what happens beyond putting some data in. So if I were to list some recommendations on how to approach these basic communication and trust issues, they would include the following:
  • Build the best MT system you can, which means it should never be done in a hurry and preferably developed by experts who can tune it and adjust it as needed in response to translator feedback.
  • Manage expectations of all key stakeholders, especially with regard to the evolutionary nature of MT system development. It is not as easy as 1-2-3 and requires expertise and patience.
  • Get MT systems up to an acceptable average quality level with the involvement of senior trusted translators before unleashing the system to a larger group of translators/editors.
  • Involve Project Managers and senior translators in MT system development with experts so that you can build organizational intelligence and skills on specific data cleaning, data preparation and system assessment.
  • Involve key translators in the rate setting process to establish fair and reasonable compensation rates that are trusted.
  • Don’t involve translators who are fundamentally opposed to MT technology; there are also translators whose very special and unique skill sets mean they do not benefit from MT.
  • Provide specific examples of corrections for a variety of different types of output errors for post-editors to model.
  • Ensure that the nature of the task is understood and compensation issues are clear BEFORE setting production deadlines.
  • Focus on fixing high frequency error patterns with a small test team and test data set before general release.
  • Feed error corrections back into the system, ask editors for general feedback on an ongoing basis, and incorporate as much of this as possible. Monitor ongoing progress to ensure that the MT system remains consistent over the project and over time.
  • Retune and retrain the MT engine quickly and as frequently as possible.
  • Develop deeper system tuning skills over time as key team members begin to understand how the system responds to various kinds of feedback and corrective adjustments.
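The feedback steps above — collecting post-editor corrections and fixing high-frequency error patterns before general release — can be sketched in code. The following is a minimal illustration only, not part of any particular MT toolkit: it diffs raw MT output against the post-edited version to extract correction pairs, then counts recurring patterns so the most frequent errors can be addressed first. The function names and the toy segments are my own assumptions.

```python
from collections import Counter
from difflib import SequenceMatcher

def extract_corrections(mt_output, post_edit):
    """Return (mt_fragment, corrected_fragment) pairs wherever the
    post-editor changed the raw MT output, using a simple word diff."""
    mt_words, pe_words = mt_output.split(), post_edit.split()
    matcher = SequenceMatcher(None, mt_words, pe_words)
    corrections = []
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op != "equal":  # replace, delete, or insert
            corrections.append((" ".join(mt_words[i1:i2]),
                                " ".join(pe_words[j1:j2])))
    return corrections

def high_frequency_errors(segments, min_count=2):
    """Count recurring MT-fragment -> correction patterns across a batch
    of post-edited segments, most frequent first, so the engine can be
    retuned against the patterns that matter most."""
    counts = Counter()
    for mt, pe in segments:
        for pair in extract_corrections(mt, pe):
            counts[pair] += 1
    return [(pair, n) for pair, n in counts.most_common() if n >= min_count]

# Toy batch of (raw MT, post-edited) segment pairs
batch = [
    ("the engine of search", "the search engine"),
    ("a engine of search", "a search engine"),
    ("click the button save", "click the save button"),
]
patterns = high_frequency_errors(batch)
```

In a real deployment the recurring patterns would feed the retuning and retraining steps listed above; the point of the sketch is only that corrective feedback is structured data that can be counted and prioritized, not just read.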
What more can be done to make post-editing MT work better understood, and thus hopefully a less threatening or demeaning technology? I see PEMT as a natural evolution of the business translation process. It is simply a new approach that enables new information to be translated, or a new way to do repetitive tasks, but it can also be a means to build and develop strategic advantage. A guest post on the TAUS site has made a plea for translator education (not training), but I think it unlikely that the recommendations given there will solve the problems I have listed above.

The most successful translators and LSPs all seem able to build “high trust professional networks”, and I suspect that this will be the way forward, i.e. collaboration between enterprises, MT developers, LSPs and translators who trust each other. Actually quite simple, but not so common in the professional translation industry.

I feel compelled to re-use a quote I have used before because I think it fits very well in this current context.
“Disruption is not something we set out to do. It is something that happens because of what we do,” stresses Brian Solis. Disruption changes human behavior (think: iPhone), and it takes a mixture of both ‘design-thinking and system-thinking’ to get there. So as an innovator, where do you begin if you don’t start by attempting disruption? To boil Solis’ message down to a single word: ‘empathy.’ That’s right, empathy. Empathy drives the core of your vision as an innovator, or so it should, says Solis.
Solis says that there are only two ways to change human behavior, by manipulating people, or by inspiring them. If you choose the former, good luck on your journey, but if you would prefer to attempt the latter with your innovative attempts, then you should start with empathy: the why of your product or company. That is how you will capture attention, and hold onto it, especially in the technologically, socially-driven world today.”
The excerpt above is from this post on The future of innovation is disruption (emphasis mine).
“The end of business as usual takes more than vision and innovation to survive digital Darwinism however. It requires a tectonic shift from product or industry focus to that of long-term consumer (customer) experiences. Businesses that don’t are forever caught in a perpetual cycle of competing for price and performance. It is in fact one of the reasons that Apple can command a handsome premium. The company delivers experiences that contribute to an overall lifestyle and ultimately style and self-expression. Think about the business model it takes to do so however. You can’t invent or invest in new experiences if your business is fixated on roadmaps and defending aging business models (SDL & LIOX?).”
This excerpt is from a fascinating article on the collapse of the Japanese consumer electronics industry and especially Sony, Panasonic and Sharp.



The way forward in developing win-win scenarios and excellence in these challenging times is collaboration between trusted partners. Collaboration curves hold the potential to mobilize larger and more diverse groups of participants to innovate and create new value. In trusted relationships and networks, critical knowledge flows happen more easily. Benefits and risks are shared more willingly, and together participants are driven by a desire to learn and reach new levels of performance. In this context, zero-sum relationships that focus on dividing a fixed pie of rewards evolve into positive-sum relationships where participants are driven by the opportunity to expand the overall pie. When there is a real prospect of expanding rewards, we are much more likely to trust others than when everyone is focused on getting a bigger share of a fixed pie. I think it is also likely that agencies that regard translators as valued partners, in a demonstrable way at an organizational level, will lead the innovation and evolution of how business translation gets done. Hagel also says that a new narrative based on opportunity is needed.
Like any great narrative, it must be crafted.  “Craft” is an evocative term because it suggests that narratives are not just created on paper, but built through the actions that we begin to take as we start to see the opportunity ahead. Narratives emerge through action and interaction as we collectively begin to sense an opportunity and learn through action what it will take to achieve that opportunity.
No single person can be responsible for or create this collaboration, trust and opportunity narrative, and I look forward to seeing those who do help carve a path for all to learn from. Some revolutions happen through many small acts set into motion, rolling together in the same direction and gradually building momentum; others happen slowly, after some initial sputtering and misfiring.