
Monday, January 3, 2011

Most Popular Blog Posts of 2010

Award-winning picture, part of the Wikimedia Commons Pictures of the Year collection.

As I watched people summarize the year in many ways, I decided to look into the traffic stats on the eMpTy Pages blog to see which posts were the most popular (in terms of traffic, anyway). Here is the list, in order of traffic popularity.

The top three entries were written in July, so it appears the thoughts and the news were really flowing at the time.

1. The most popular entry was the summary of a conversation that Renato & Bob had at the IMTT Vendor Management conference in Las Vegas, along with my additional comments on the issue. This discussion is ongoing and far from over; I expect we will see more unfold on this subject in the coming year.

2. My thoughts and analysis of the SDL acquisition of Language Weaver, which was clearly between a rock and a hard place in 2010 after doubling manpower/sales investments and overall expenses while seeing hardly a budge in revenue or in the translation quality of their systems. The facts speak for themselves, in spite of careful PR efforts to create impressions of growth and momentum. (Yup, Mark, some of us do realize there was hardly any revenue growth over the last three years.)

3. This next one seemed to resonate with the MT enthusiasts in particular. It drew a lot of comments as well.

4. I am surprised that this entry was as popular as it was, but it is clear that TAUS is becoming more relevant and is a great place to find information (though not always the best and most accurate info, IMO) on MT deployments throughout the corporate world. The understanding that clean data does matter is growing, and that can only help the quality of future MT systems developed with TAUS data. I hope that TAUS will lead the charge in helping us all understand what the driving factors behind really good MT systems are. My sense is that everything we have seen so far is just about getting familiar with the technology and was mostly driven by localization ROI rather than real raw MT quality tuning efforts. The best is yet to come.

5. This was, I think, the best entry of the year, even though I did not offer any solutions. I think it was the clearest articulation of the problem and explanation of why standards matter and why we need better solutions. I hope that this discussion continues and grows in 2011. I saw that it was also a major issue and area of concern at the AGIS10 conference in Delhi. I am much more optimistic that the best thinking and solutions on standards will come from the non-profit translation world, since the corporate localization industry has barely delivered a real TM standard after 10 years of trying. There were also many interesting comments and feedback on this entry, and I hope we will see more discussion emerge from it. PostRank also tells me that this article continues to get steady traffic over time and might be influencing others.

6. This was a summary of an interview with Rob Vandenberg, CEO of Lingotek, about community collaboration tools for translation, a category that is likely to become much more important in the future.

7. This was a summary of key messages from ATA leadership to the AMTA community. Hopefully this dialogue grows, even though there are some strident voices on both sides. Google recently admitted that they have reached the limits of what is possible in terms of driving MT system improvements by just feeding more data to the engines. This, I think, will lead to an increasing awareness that getting linguistic experts involved and improving information quality (yes, clean data rears its ugly head again) is necessary for continued progress. Another one with interesting comments.
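Google's point about diminishing returns is easy to see with back-of-the-envelope arithmetic. The sketch below uses the roughly 0.5% quality gain per doubling of training data that Andreas Zollmann is quoted with in the comments; the corpus sizes are hypothetical placeholders, not Google's actual numbers.

```python
import math

GAIN_PER_DOUBLING = 0.5  # quality gain in percentage points per doubling

def remaining_headroom(current_words, available_words,
                       gain=GAIN_PER_DOUBLING):
    """Total further quality gain (in percentage points) if the corpus
    keeps doubling until all available data is used up."""
    doublings = math.floor(math.log2(available_words / current_words))
    return doublings * gain

# A 100M-word corpus with only ~10B words of usable bitext left in the
# world allows just 6 more doublings: about 3 points of headroom.
print(remaining_headroom(100e6, 10e9))  # 3.0
```

The logarithm is the whole story: each doubling costs twice as much data as the last for the same fixed gain, so a finite supply of translated text caps the achievable improvement quickly.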

8. This focused on my view of where the highest-value translation work will be in the future. I do not believe that transcreation is the best definition of high-value/high-skill work in translation. Value has to be determined by what customers find most useful at critical stages (pre- and post-sales) of their relationship with a company, brand or product.

9. This is a summary of key messages from my keynote presentation at ELIA Dublin, which was possibly my favorite conference of the year. This article covers some details about the content explosion, how it may be impacting global customer interactions, and how this relates to the world of professional translation.

10. And more on standards in the localization industry, a subject that will be key to making real progress, raising productivity, and responding to the ever-growing volume of content.

I wish you all a Happy, Healthy and Prosperous New Year. I think 2011 will be memorable in many ways, and translation will continue to grow in strategic importance for both global businesses and for countries that are rising economically and entering the knowledge economy.

We are seeing a lot of forecasts, as is typical at the beginning of the year, but most of these are very technology-focused. I found a really interesting forecast that casts a much wider net and is the most interesting one I have found on the world at large. They have a pretty good track record for 2010 as well. Good news for my new friends in Ukraine: they see the region rising in 2011. This is worth taking a look at.




3 comments:

  1. Hi Kirti, thanks for the summary. The link to Google admitting to having reached limits seems not to be working; can you post the correct URL? Christian.

  2. Christian

    Thanks for catching the bad link. I have fixed it and supplied two links now. Here is the direct quote:

    Andreas Zollmann, who has been researching in the field for many years and working at Google Translate for the last year, suggests, along with Blunsom, that the idea that more and more data can be introduced to make the system better and better is probably a false premise. "Each doubling of the amount of translated data input led to about a 0.5% improvement in the quality of the output," he suggests, but the doublings are not infinite. "We are now at this limit where there isn't that much more data in the world that we can use," he admits. "So now it is much more important again to add on different approaches and rules-based models."

    This of course is only true for high data density languages (FIGS, CJK, Portuguese) - many of the Google systems will continue to improve as they climb in data volume.

    Ultan's article is also a good summary of the issues at:
    http://blogs.oracle.com/translation/2011/01/where_next_for_google_translate_and_what_of_information_quality.html

    To improve in the future, I think they will need clean data, information quality (IQ), and more skilled human feedback on linguistic issues to add linguistic structural knowledge, rather than just bolting the old RbMT stuff onto the SMT foundation.

  3. Kirti, thank you for the link, this is highly intriguing. My gut feeling says that your suspicion is spot on. Best wishes for 2011! Christian
