I have written previously about how this data deluge is affecting the professional translation industry and why change is necessary not only in processes and tool technology but also in the whole view of the professional translation business. I have pointed out:
-- How enterprises are also facing huge growth in data volume from both customers and internal processes
-- The changing nature of the information required to build loyalty in the global customer base. The continuing evolution from static documentation to dynamic, user- and community-created content, now considered the most valuable content in customer support, is one example of how dramatic this shift is. The word-of-mouth impact on products and companies in social networks is already powerful and will become even more so in the future.
-- There is also now evidence that crowdsourcing is, and will continue to be, a force in getting things done in the translation world, not so much because it is cheaper, but more often because it increases customer engagement and allows global companies to address long-tail issues in a cost-effective way. It does not work everywhere, but it is a model worth understanding and using. We are also likely to see more groups of highly motivated amateurs take on large translation projects, as the Yeeyan, Global Voices and Meedan initiatives already show.
However, in 2010 in the professional translation world (and elsewhere), people have gotten used to the tools and processes that got us here. Looking back ten years, I would say we are not doing things very differently from the way we did them in 2000. We use essentially the same software and processes we did back then, though things have sped up a bit and perhaps TMS systems are taken more seriously of late. It is very much a TEP (translate-edit-proof) world, optimized for the global business and localization reality of 2000. Professional translation services firms try to build value around project management capabilities and ill-defined notions of “quality”. It is ironic that an industry “leader” is named SDL, which originally stood for Software and Documentation Localization. Do they really do much more than that today?
So how would one contrast what happens in the data economy with the old way? As the Economist points out:
Google applies this principle of recursively learning from the data to many of its services, including the humble spell-check, for which it used a pioneering method that produced perhaps the world’s best spell-checker in almost every language. Microsoft says it spent several million dollars over 20 years to develop a robust spell-checker for its word-processing program. But Google got its raw material free: its program is based on all the misspellings that users type into a search window and then “correct” by clicking on the right result. With almost 3 billion queries a day, those results soon mount up.
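To make that principle concrete, here is a minimal sketch of how a spelling-correction table might be mined from query logs. This illustrates the idea the Economist describes, not Google’s actual method; the toy log, the suggest() helper and the simple majority-vote rule are all assumptions made for the sketch.

```python
from collections import Counter, defaultdict

# Hypothetical query log: (what the user typed, the result they clicked).
# In practice this signal would come from billions of real queries; this
# toy sample exists purely for illustration.
query_log = [
    ("recieve", "receive"),
    ("recieve", "receive"),
    ("receve", "receive"),
    ("translaton", "translation"),
    ("translaton", "translation"),
    ("translaton", "translater"),  # a noisy click
]

# Tally every observed correction for each misspelling.
corrections = defaultdict(Counter)
for typed, clicked in query_log:
    corrections[typed][clicked] += 1

def suggest(word: str) -> str:
    """Return the most frequently clicked correction, or the word itself."""
    if word in corrections:
        return corrections[word].most_common(1)[0][0]
    return word

print(suggest("recieve"))     # receive
print(suggest("translaton"))  # translation (majority vote drowns out noise)
print(suggest("quality"))     # quality (no evidence in the log, so unchanged)
```

The point is the economics: the data arrives as a free by-product of normal usage, and the system improves simply by observing more of it.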
So as we head into this data economy, what will professional translation companies need to do to thrive? It is clear that this is a time for innovation and real, fundamental change, in both process and focus. A recent LISA report on crowdsourcing includes a statement from a senior Dell executive in its executive summary that captures what global enterprise buyers are looking for.

Dell: “What we want eventually from our services provider is a combination of localization, machine translation and crowdsourcing services.”
So what will a next-generation LSP that thrives in the data-centered economy look like? Here are my thoughts:
-- Competence in managing crowds, ensuring translation quality, and overseeing the overall contribution and validation process
-- Competence with machine translation, especially SMT, a data-driven approach that will soon overshadow rule-based (RbMT) approaches
-- Competence with recruiting, assessing and managing both professional and amateur crowds to engage in translation projects when and where needed
-- Competence with building and managing motivated and engaged online communities and social networks
-- Competence in building and managing large linguistic data repositories that can be brought to bear for different client needs
-- Large linguistic corpus analysis, cleaning and preparation skills (a minimal sketch of this kind of work follows this list)
-- Skills in developing systems infrastructure that enables large groups of professional and amateur translators to collaborate on large, data-rich, very high-volume (10M+ words) translation projects
-- Systems and process development skills for social networks and crowd initiatives, together with large data-set management skills, including handling audio and video translation cost-effectively and efficiently
-- The ability to make rapid and accurate quality assessments on work product in a regular, consistent and definable way
-- The ability to provide a satisfying and mutually beneficial experience for freelancers and amateurs who engage with the firm, including free, useful translation productivity tools
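Since corpus skills come up repeatedly in the list above, here is a minimal sketch of the kind of cleaning pass that typically precedes SMT training: deduplicating parallel segment pairs and dropping pairs whose length ratio suggests misalignment. The function name and thresholds are illustrative assumptions, not a prescribed pipeline.

```python
# Illustrative parallel-corpus cleaning pass of the kind an SMT-savvy LSP
# would run before training: deduplicate segment pairs and drop pairs whose
# length ratio suggests misalignment. Thresholds are assumptions for the
# sketch, not recommendations.
def clean_parallel_corpus(pairs, max_len=100, max_ratio=3.0):
    """pairs: iterable of (source_segment, target_segment) strings."""
    seen = set()
    for src, tgt in pairs:
        src, tgt = src.strip(), tgt.strip()
        if not src or not tgt:
            continue  # empty side: drop
        if (src, tgt) in seen:
            continue  # exact duplicate: drop
        seen.add((src, tgt))
        src_len, tgt_len = len(src.split()), len(tgt.split())
        if src_len > max_len or tgt_len > max_len:
            continue  # overly long segment: drop
        if max(src_len, tgt_len) / max(1, min(src_len, tgt_len)) > max_ratio:
            continue  # implausible length ratio: drop
        yield src, tgt

sample = [
    ("The cat sat on the mat.", "Le chat est assis sur le tapis."),
    ("The cat sat on the mat.", "Le chat est assis sur le tapis."),  # duplicate
    ("Hello", "Bonjour et bienvenue à tous nos clients dans le monde entier"),  # bad ratio
]
print(list(clean_parallel_corpus(sample)))  # only the first pair survives
```

Real pipelines typically layer language identification, encoding repair and alignment scoring on top of filters like these, but the basic discipline is the same: the value of a linguistic data repository depends on how clean it is.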
I thought this little promotional video for the book Different, by Youngme Moon of the Harvard Business School, had a very compelling message that is especially pertinent to the professional translation industry today.
Innovation is something we should all be thinking about, and I would bet that collaboration will help further this exploration. I look forward to sharing this journey with you.