
Friday, March 28, 2014

Impressions from the Localization Summit at the Games Developer Conference




I recently attended the GDC conference and was impressed to see how advanced the tools and thinking on translation and localization are within the game developer community. This is a new frontier for the professional translation industry, and it is remarkable how quickly game developers are learning about localization issues and building tools and methodologies to rapidly make their products viable in international markets. Development tools that are truly both multi-platform and multilingual are in active use throughout the community. Their comfort with multimedia (video, voice) data and multi-platform development (mobile, desktop and TV), and their willingness to consider new technology, was quite refreshing, and we think it is quite possible that this community will drive translation and localization technology forward much more rapidly than the traditional software and documentation community, where things change much more slowly. While the use of MT is still very limited, the exploration of its viability is much more pragmatic and informed than the tortuous trail we have seen in the traditional documentation translation business. This is perhaps because things move much more rapidly in the games business: most products have short, intense life cycles, so it behooves developers to capitalize quickly on international opportunities while their games are hot. We predict that MT will be used more frequently there in future as developers become better informed about its limitations and possibilities.


==============================================================
This is a guest post by Lauren Scanlan that provides a quick summary of some of the localization-related presentations at GDC. Lauren Scanlan is currently a Freelance Localization Professional and manga proofreader looking to dive into the wide, fun world of game localization. She's based in Denver, where she spends her time baking, reading, and wrangling the odd dinosaur. You can find her online at @lsscanlan, laurenscanlan.com, or connect with her on LinkedIn at www.linkedin.com/in/lsscanlan.




On March 18th I attended the Localization Summit at the Game Developers Conference (GDC) 2014, which, in truth, was the whole reason I came to GDC in the first place. I've been working in the localization industry for three years, and as an avid fan of video games, I thought that combining the two might be a good fit for me – and I wanted to see what the state of localization is within the gaming world.

The Summit, organized by Fabio Minazzi (Localization Summit Advisory Board Member and Account Manager at game localization service provider Binari Sonori), had a variety of talks – from emerging markets (Localizing Games for Spanish Speaking America, Emerging Communities: A Snapshot of the Brazilian Indie Game Development Scene), to culturalization (Journey to the West: A Chinese Game Localization Primer), to how localization can be improved, either through communication with an LSP (Indie Games Localization: Is It Worth It?) or by improving tools and processes within your own localization department (The Future of Localization Testing, LAMS: Building a Localization Tool for Everyone – both talks given by Sony Computer Entertainment Europe senior employees).

There was also a series of five mini-talks called Localization Microtalks: Globetrotting in the Fast Lane, which covered indie game localization, an open-source localization framework, localized advertising, mo-cap dubbing technology and processes, and using app description localization as a tool to “test the waters” for localization. All of these talks should be available as part of the GDC Vault, and are well worth a watch for those with Vault access!

The two final talks of the day were uncomfortable for some of the people in the room – Crowdsourcing the Localization of Gone Home and What is the Place of Machine Translation in Today's Gaming Industry? Both looked at methods that do not necessarily rely only on trained linguists to complete translations, and that have the potential to make localization accessible to those without the budget (more so in the crowdsourcing talk than the machine translation panel), even if the result is not 100% perfect or within the total control of the creators.

It seems to me that, while things seem to be going well in game localization, there are areas for improvement. The first concentration seemed to be on how to get indie games, and especially indie mobile games, out to international markets, including emerging markets, which may not yet have a distribution framework that works well for the consumer. The issues facing most indie games are cost management for the studio and accessibility for the gamers. Belén Agulló García, Language Production Manager at Pink Noise, and Jonas Waever, Creative Director at Logic Artists (Indie Games Localization), mentioned that, as localization is a part of marketing, it made more sense to fund localization from the marketing budget. I thought this was a fantastic idea, since (as all the indie studios that participated in the Summit would attest) they received sizable returns on their localization investment, which could then be reinvested in marketing.

Martina Santoro from Okam Game Studio and Alejandro Gonzales, CEO and Studio Director at Brainz (Localizing Games for Spanish Speaking Latin America), mentioned that some mobile games, especially freemium games, are hard to bring into the Latin American market because of the limited payment methods. Prepaid cards and vouchers seem to be popular, but this is certainly an issue that needs to be addressed as new markets open up. For entirely free-to-play games, or games on platforms where payment is already handled, this may not be an issue – and may make the market more enticing. Accessibility is going to be a growing issue for both localization service providers and distribution platforms, and I think a collaborative effort to grow in these new markets and find solutions would be welcome.

The next area for improvement seems to be in translation and culturalization. For example, contrary to what I've done in general localization, it seems that since the App Store only lists two kinds of Spanish (Mexican Spanish and Spanish), it's better to localize an app into neutral Spanish to make it easily understandable across the LATAM region. This illustrates well that distribution platforms play a large role in how effective (or not!) localization can be – though I was also pleasantly surprised to learn that the App Store and Google Play will spotlight and promote localized games in the relevant market, which has meant huge success for those games overall.

Shaun Newcomer, Vice President of Reality Squared Games (Journey to the West), gave a great talk on some of the culturalization issues of bringing Chinese games to the US market. For example, games in China usually have a short life cycle and high monetization, and the storylines can be of middling quality, since the games are not expected to last very long. US/Western consumers, however, dislike heavy monetization in the interest of fairness, and look for a higher-quality storyline, leading Newcomer to amend both of these things in-house before releasing games to the US market. I had never before considered aspects like monetization as part of the localization process, but it makes sense to find out which markets lean toward monetization and which shy away from it.

Crid Yu, Vice President and Managing Director of North America for InMobi (Localization Microtalks), mentioned that it is also important to look at alternative marketing methods for each market, citing as an example the use of subway advertisements for mobile games in Seoul, South Korea. I thought this was a great idea – having been to Seoul myself and seen the number of uber-connected Seoulites with their phones out on the subway, always looking for the next new thing.

Finally, the last issue is how large a part machine translation should play in game localization. I've been working in Linguistic QA for three years now, and when I first started, I was trained to use SDL tools like Trados and Studio as part and parcel of the localization process. Since we usually leverage translation memory only on material that needs to stay the same year after year (employee satisfaction surveys) or on technical specs (medical devices) that generally don't change, it makes localization far more cost-effective for our clients. Games, however, are full of mostly creative, non-repetitive text, which makes this method dicey: quality can suffer, and the text can sound stale if it is always repeated. One of the best uses that came up within the panel (What is the Place of Machine Translation in Today's Gaming Industry?) was live chat – it would be great to use an automated system to instantly replace the most common words in a sentence, or the most common sentences, so that players on European servers (for example) could easily get the main points across to each other without interrupting gameplay. Another good application, I think, would be to keep a TM of UI terms that won't often (if ever) change, such as “Play,” “Continue,” “Game Over,” etc. Also, depending on the intended quality/lifecycle/churn of the game, it may be better for some companies to use machine translation and go with an imperfect translation if it gets the game shipped faster... though, as a Linguistic QA-er, I am certainly not advocating lower-quality games!
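The live-chat idea above can be sketched very simply: rather than running full MT, a game could map a small set of very common messages to stored translations. Everything in this sketch (phrases, language codes, function names) is invented for illustration and is not any shipping game's actual system.

```python
# Minimal sketch: canned-phrase "translation" for in-game chat.
# Common messages are looked up in a prebuilt table; anything else
# is passed through unchanged (a real system might call an MT
# service as the fallback instead).
CHAT_PHRASES = {
    ("need healing", "de"): "Brauche Heilung",
    ("need healing", "fr"): "Besoin de soins",
    ("good game", "de"): "Gutes Spiel",
}

def translate_chat(message, target_lang):
    # Normalize casing/whitespace so "Need Healing" still matches.
    key = (message.strip().lower(), target_lang)
    return CHAT_PHRASES.get(key, message)
```

Because the table is tiny and lookup is instant, this approach never interrupts gameplay, at the cost of covering only the phrases someone thought to include.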

I also thought Christopher Burgess, Senior Programmer at SCEE (LAMS), did a wonderful job talking us through the struggles of building the perfect project management/localization software, showing that even customizable tools can be a challenge to bend to the results you want. Even when using machine translation, users need to be well trained and should always think critically about what they are using the MT software for.

Overall, I learned quite a bit through the Localization Summit, and I think it's a wonderful addition to the GDC programming. It was great to meet so many people who were just starting out in game development, or just about to localize, in the audience! I'm glad that non-localization people are taking the time to educate themselves about localization and their options, and I only hope that others join them next year. I'll certainly be there!

Wednesday, December 18, 2013

Annual Review–Most Popular Posts of 2013

 

“Disruption is not something we set out to do. It is something that happens because of what we do,” stresses Brian Solis. Disruption changes human behavior (think: iPhone), and it takes a mixture of both ‘design-thinking and system-thinking’ to get there. So as an innovator, where do you begin if you don’t start by attempting disruption? To boil down Solis’ message into a word: ‘empathy.’

That’s right, empathy. Empathy drives the core of your vision as an innovator – or so it should, says Solis.

Solis says that there are only two ways to change human behavior: by manipulating people, or by inspiring them. If you choose the former, good luck on your journey; but if you would prefer to attempt the latter with your innovations, then you should start with empathy: the why of your product or company. That is how you will capture attention, and hold onto it, especially in the technologically and socially driven world of today.

The excerpt above is from this post on The future of innovation is disruption (emphasis mine).

“The end of business as usual takes more than vision and innovation to survive digital Darwinism however. It requires a tectonic shift from product or industry focus to that of long-term consumer experiences. Businesses that don’t are forever caught in a perpetual cycle of competing for price and performance. It is in fact one of the reasons that Apple can command a handsome premium. The company delivers experiences that contribute to an overall lifestyle and ultimately style and self-expression. Think about the business model it takes to do so however. You can’t invent or invest in new experiences if your business is fixated on roadmaps and defending aging business models (SDL & LIOX?).”

This excerpt is from a fascinating article on the collapse of the Japanese consumer electronics industry and especially Sony, Panasonic and Sharp.

These quotes, I think, are particularly prescient for the professional translation business, which is changing quietly and dramatically as we speak. Technology, new production models, changing buyer requirements and open collaboration models are changing the business in both very subtle and obvious ways in an increasingly global world. While many feel discomfort, very few feel that they understand what is going on, or have a clear sense of what they could do to deal with these changes. (It is more than: “I have to use MT.”) This blog aims to focus on these broader issues, even though much of what I cover focuses on the translation technology impacts of these changes. These broader supply and demand forces are the primary drivers behind many of these technology changes and their adoption, and it makes sense to try and unravel this through discussion and closer examination.

Thus if we look at the big themes of the year (not just in this blog, but also at conferences and in broader internet discussions), we see how the industry is evolving. Here are the most popular posts on this blog in 2013 (in order of popularity) around these key themes:

  • Post-editing compensation and practice
  • Clarifying new waves of misinformation on MT technology and practice put forth by alleged experts
  • Different views on the changes in the professional translation business
  1. Exploring Issues Related to Post-Editing MT Compensation: This article continues to get attention today even though it was written early in 2012, and it still shows up regularly in the top 3 posts virtually every week. The post has links to several interesting pieces on post-editing, which is possibly one reason it continues to have long-term value: it gathers different opinions and viewpoints in a useful and unbiased way. Its popularity suggests that this is an important issue to resolve in a fair and equitable way to enable broader MT adoption. All parties involved need to work together to establish trusted and equitable compensation, as this could be a key driver of, or obstacle to, broader MT deployment. It would be useful for translators especially to step forward and suggest ways to do this more efficiently and accurately. For example, this post by Jason Hall shows that simply equating MT output quality to TM matches may not make sense, and that leveraging MT is entirely different from leveraging TM. Most observers, however, still miss the fact that MT output is the result of engineering efforts and can be managed to a great extent.
  2. Emerging Language Industry & Language Technology Trends: Much of what I said about the overall trends in the industry in this post still holds true for the coming year. I was surprised at how little progress was made in understanding MT, and how the glib talk and overpromising just never seem to stop. MT is difficult to do well but increasingly easy to do badly, so I suspect we will see many unhappy translators who will be expected to clean up after incompetent MT practitioners for a pittance.
  3. Translation Pricing & PEMT Process Management: This post documents ELIA Munich conference sessions describing the huge variance in pricing for translation services, which I found quite shocking and which was probably quite unsettling for many buyers of translation services. A discussion here also helped me realize how tenuous many agency-to-customer relationships are and how easily they can be displaced by competitors. Finally, there was some good discussion of PEMT from actual practice. I think I saw clear signs that the days of the generic translation services agency are coming to an end and that specialists will rule in future. I predict that trust will be the most effective differentiator in the professional translation business, and that it is earned by demonstrated competence and real expertise in specific subject areas.
  4. Dispelling MT Misconceptions: This was my response to an article in Multilingual magazine that I felt was filled with half-truths and gross generalizations – more a product of ignorance than malice, I believed. There is some spirited discussion in the comments as well, which I never censor unless they are clearly spam or personally malicious. These comments help get opposing views aired too.
  5. Understanding ROI with Machine Translation Technology: This post focuses on the issues that matter most for maximizing return on investment with MT. In a nutshell: 1) focus on a domain; 2) ensure the MT quality is the highest possible, as “good” MT produces the highest productivity and the least translator backlash and discontent, and is harder to duplicate by instant DIY means; 3) use these good MT systems a lot, across multiple customers, which also means you start developing real expertise in selected domains.
  6. Translator Strategies For Dealing With PEMT: This was an attempt I made to provide some basic guidelines to help translators identify PEMT projects that are worth considering versus ones that are not. There is a very interesting discussion in the comments as well, one that is worth a close look for anybody who wants a better sense of the human factors involved in automation and MT deployment. This is an area that deserves much more attention from MT vendors and all practitioners who wish to create win-win scenarios.
  7. Understanding MT Customization: This was a post whose intent was to clearly differentiate an expert-managed system from a typical DIY system. It evoked strong reactions from several MT vendors and ended up as a kind of MT vendor brawl in the comments section (definitely not my original intention). It is still useful, however, if you want to see how different MT vendors approach the market. My bias/position is clear: most people who try DIY will produce sub-optimal results, and most DIYers don't know how to do it themselves. MT is difficult even for experts, and if you cannot produce systems that are better than the public systems, why bother?
Solis says that innovating for the next ten years will be part problem-solving, part design-thinking. But there are four aspects you should apply when you set out to create something; in order, they are:
    • Empathy (the why)
    • Context (the connected world in which you are building something)
    • Creativity (in your approach to problem-solving)
    • Logic (the rationality to test what you have created)

I think it is quite possible that the business of translation is moving toward new business models – a kind of platform-based approach. Solis and others believe that creating such a platform is the way disruption will come. The closest example I have seen in this business is Smartling, but others like Gengo and Cloudwords also point to how this might evolve. They all change how customers buy and how the work gets done.

In considering one of the finest examples of how change can be brought about by inspiration rather than manipulation, I think we must look at Nelson Mandela. I spent my childhood in Rhodesia under a government where institutionalized racism was the law of the land. As I grew older (12-13), I always felt that the future was going to be bloody and violent. How could it not be? I saw what looked to me like innocent African men being beaten to the ground by policemen, and learnt to cope with tear gas in my bedroom as a basic childhood survival skill (cover your eyes with wet towels). I was thus astonished and amazed that one man could have so much influence when power shifted. Nelson Mandela, though far from a perfect, saintly man, is a shining example of how a man can face adversity and oppression and still be graceful, joyful and civilized. In a speech in India he said: “I could never reach the standard of morality, simplicity and love for the poor set by the Mahatma. While Gandhi was a human without weaknesses, I am a man of many weaknesses.”

Mandela dancing and at peace with the world

For those who are not familiar with the man here is a wonderful tribute on the Brain Pickings site that includes his inaugural address in full.

“The greatest glory in living lies not in never falling, but in rising every time we fall.”

An Indian mystic’s viewpoint on the man

I wish you all Happy Holidays, Merry Xmas and a joyful, healthy and prosperous New Year.

Monday, December 2, 2013

IOLAR– A PEMT Case Study - Moses Revisited

IOLAR, a Slovenian LSP, was interested in building a custom machine translation engine to translate technical engineering content from German to Slovenian. This language combination is difficult: it pairs a relatively complex source language (for MT) with a very difficult target language (for MT) that, like other Slavic languages, has a large number of inflected forms.

While translator productivity was important, the primary objectives were to ensure a high level of writing-style consistency and terminological accuracy. As there was no specific and directly related translation memory available to train the system, several hundred thousand segments were gathered from several somewhat related sources, in a much broader domain than technical engineering. This data was combined to form a single corpus that was used to train the engine. 

Earlier Attempts with Moses
Based on the widespread publicity around Moses and the increasing number of publicized Moses Case Studies, IOLAR decided to try and use Moses to accomplish their machine translation objectives. Part of the decision to deploy Moses in-house was based around concerns over data privacy. Sharing data with a Do-It-Yourself (DIY) Moses provider was a concern as many of these DIY providers are also translation agencies or are closely related to an LSP who may compete for the same business. 

IOLAR invested six months in an attempt to build a DIY Moses system of usable quality for this rather difficult language pair – German to Slovenian. A computational linguistics expert was hired and spent three months building IOLAR's own custom engines using DIY Moses technology. At the end of the six-month period, IOLAR's Moses system was still producing unpredictable and unusable results. There were many problems with word order, terminology consistency, unknown words and incorrect inflected forms. Attempts to understand and address these problems were unsuccessful.

IOLAR compared the output from their Moses engine with Google Translate output and found that Google produced much better translation quality than their own system. However, neither IOLAR's Moses engine nor Google Translate provided the quality, and related productivity gains, that would create any advantage for the business. Many segments needed to be completely retranslated when post-editing was attempted. "Since our initial internal efforts did not progress with the desired speed, we turned to Asia Online to deal with the growing urgency being communicated by our clients," said Simon Bratina, IOLAR's Executive Technical Director.

Asia Online Custom Engine Training Plan
Asia Online addressed IOLAR's data security concerns with a contract that provides comprehensive protection of the data and ensures that IOLAR maintains all the appropriate rights to the data and that Asia Online can only use the data for the purposes of customizing IOLAR's engines. 

IOLAR provided the same translation memories that were used in their custom Moses engine for analysis and inclusion into the Asia Online custom engine, and worked with Language Studio™ Linguists to create a Customization Training Plan that addressed their specific goals. The plan identified issues and gaps in the training data and created a roadmap to address them. 

Language Studio™ Linguists are specialist linguists who have had comprehensive training in the creation of commercially viable, high-quality custom engines. Commercially viable means that the output actually helps professional translation work get done more efficiently. These linguists, whose skills differ considerably from those of an NLP or computational linguistics academic, focus on fine-tuning MT engine data and algorithms to minimize post-editing effort. Language Studio™ Linguists use human cognition to determine which tools and automated processes should be applied to refine and create MT-related data to achieve the optimal results for a client – something an automated process is not capable of today. A unique plan is developed for each custom engine, with a broad suite of data analysis and data manipulation tools used in conjunction with language- and domain-specific approaches to ensure optimal data preparation when building a custom engine. This differs considerably from the DIY model, where data is simply uploaded and immediately processed, sometimes purely by algorithms, without human analysis.

Four key issues were identified whose resolution was deemed likely to greatly increase translation quality, and steps were added to the plan to address them:

Issue: IOLAR’s translation memories were from multiple sources and included inconsistent terminology. This would result in inconsistent terminology in the translation output.
Solution: In addition to the standard data cleaning that is part of the Clean Data SMT model, Language Studio™ tools were used to normalize terminology so that, when translating, terminology choices were limited to those preferred by IOLAR.

Issue: The domains that the translation memories originated from, while related, were not a match to the desired target domain of technical engineering. This resulted in many technical terms being unknown and significantly lowering the quality of translations. In Statistical Machine Translation (SMT) an unknown term can have a very negative impact on translation fluency and overall translation quality. 
Solution: Language Studio™ Advanced Data Manufacturing tools were used to perform a gap analysis, which identified several thousand unknown technical terms. Language Studio™ Advanced Data Manufacturing resolved the unknown terminology, which was then validated by IOLAR's linguists specializing in the domain.

Issue: The writing style of the translation memories was varied and not relevant to the target domain of technical engineering. Even if an understandable translation could be produced, it would be in the wrong context and style, and therefore needed a large amount of editing in order to deliver publication quality.
Solution: Language Studio™ Advanced Data Manufacturing tools were used to manufacture appropriate grammatical structures and contextual data in the correct writing style. This was driven by a deep analysis of the client’s translation memories, and automated manufacturing of data that would adapt the writing style to the client's requirements. 

Issue: As Slovenian is a heavily inflected language, one of the very common issues was that the correct term was being translated, but in the incorrect inflected form. In many cases, the correct inflected form was not in the translation memories provided by IOLAR.
Solution: Language Studio™ Advanced Data Manufacturing tools were used to manufacture appropriate inflected forms in the correct context. This data ensured that the correct inflected forms were available in the training data, reducing the number of incorrect inflections in the output.
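The terminology-normalization step described in the first issue above can be pictured as a pre-training pass that rewrites each segment so variant terms collapse to a single preferred form. The variant/preferred pairs below are invented for illustration; this is the general idea only, not the actual Language Studio™ tooling.

```python
import re

# Hypothetical variant -> preferred mappings; a real project would
# derive these from the client's approved termbase.
PREFERRED_TERMS = {
    "flash-drive": "flash drive",
    "e-mail": "email",
}

def normalize_segment(text, mapping=PREFERRED_TERMS):
    """Collapse known terminology variants to the preferred form
    so the engine is trained on consistent terms."""
    for variant, preferred in mapping.items():
        text = re.sub(r"\b" + re.escape(variant) + r"\b", preferred, text)
    return text

# Applied to every segment of the training corpus before training:
corpus = ["Insert the flash-drive.", "Send an e-mail to support."]
normalized = [normalize_segment(s) for s in corpus]
```

Running such a pass over the whole corpus is what limits the trained engine's terminology choices to the client's preferred terms.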

Initial Results
In contrast to IOLAR's Moses engine, the custom engine created with Language Studio™ was built quickly and without the need for specialized computational linguistics or NLP skills on IOLAR's side. This freed IOLAR's translators to work on more important tasks such as terminology refinement and validation. The resulting Version 1.0 engine was considerably better than IOLAR's previous internal efforts with Moses and was also of higher quality than Google's output. While there was still plenty of room for improvement, this initial engine was usable for starting the pilot project.

Language Studio™ uses "Blind Test Sets" to measure initial quality using BLEU and other automated quality assessment metrics. Productivity metrics are used to validate the automated metrics. The initial Language Studio™ custom engine was 32 BLEU points better than Google Translate and 34 BLEU points better than Microsoft Translator. While BLEU is a useful indicator of quality, human productivity when post editing is a much better metric to indicate success, quality and value. 
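As a rough illustration of what those scores measure: BLEU is the geometric mean of clipped n-gram precisions between the MT output and a human reference, scaled by a brevity penalty. The sketch below is a simplified single-reference version; production tools (sacreBLEU and the like) add proper tokenization and multi-reference support, and this is not Asia Online's implementation.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def corpus_bleu(hypotheses, references, max_n=4):
    """Corpus-level BLEU (0-100): geometric mean of clipped n-gram
    precisions for n = 1..max_n, times a brevity penalty."""
    match = [0] * max_n      # clipped n-gram matches per order
    total = [0] * max_n      # candidate n-grams per order
    hyp_len = ref_len = 0
    for hyp, ref in zip(hypotheses, references):
        h, r = hyp.split(), ref.split()
        hyp_len += len(h)
        ref_len += len(r)
        for n in range(1, max_n + 1):
            h_grams, r_grams = ngrams(h, n), ngrams(r, n)
            # Clip each candidate n-gram count by its count in the reference.
            match[n - 1] += sum(min(c, r_grams[g]) for g, c in h_grams.items())
            total[n - 1] += max(len(h) - n + 1, 0)
    if min(match) == 0:
        return 0.0
    log_prec = sum(math.log(m / t) for m, t in zip(match, total)) / max_n
    bp = 1.0 if hyp_len > ref_len else math.exp(1 - ref_len / max(hyp_len, 1))
    return 100 * bp * math.exp(log_prec)
```

A blind test set simply keeps the reference translations out of the training data, so the score reflects how the engine handles unseen text rather than memorized segments. As the article notes, though, post-editing productivity is the better yardstick of real value.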

Quality Improvement Plan
Many of the error-causing issues in a custom engine are not visible until the engine has been trained and its output can be inspected. This is particularly true of more complex language combinations such as German to Slovenian. The first version of a Language Studio™ custom engine is called a Diagnostic Engine for this reason. Much like the Customization Training Plan, the Quality Improvement Plan is based on a deep understanding of the specific issues identified when Language Studio™ Linguists reviewed the output and data. Drawing on their extensive experience in customizing thousands of translation engines, Language Studio™ Linguists created a plan specific to IOLAR's custom engine that delivered the most rapid improvement with the least effort.

In addition to the Quality Improvement Plan, Language Studio™ Linguists guided IOLAR through the post-editing process and showed them how initial post edited data could be fed back into the engine and used to quickly improve translation quality. Some training was also provided to IOLAR’s team on how best to leverage runtime customization features in Language Studio™ such as Runtime Glossaries and Post Translation Adjustments which further improved quality and corrected some capitalization and formatting issues.
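Conceptually, a runtime glossary plus post-translation adjustment works like a rule-driven cleanup pass applied to the raw MT output at translation time. The sketch below shows that general idea with invented rules; it is not Language Studio™'s actual mechanism.

```python
import re

# Hypothetical client rules, normally drawn from an approved termbase.
GLOSSARY = [("drive unit", "Drive Unit")]   # enforce client-approved casing
ADJUSTMENTS = [(r"\bok\b", "OK")]           # recurring capitalization fixes

def post_adjust(mt_output):
    """Apply glossary terms and adjustment rules to raw MT output."""
    for source_term, preferred in GLOSSARY:
        mt_output = re.sub(re.escape(source_term), preferred,
                           mt_output, flags=re.IGNORECASE)
    for pattern, replacement in ADJUSTMENTS:
        mt_output = re.sub(pattern, replacement,
                           mt_output, flags=re.IGNORECASE)
    return mt_output
```

The appeal of a runtime pass like this is that it fixes systematic errors immediately, without waiting for the engine to be retrained on the corrected data.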
As the initial custom engine, even at its diagnostic release stage, was good enough for a production usage test, the Quality Improvement Plan was able to incorporate valuable post-editing feedback early. While IOLAR was processing and post-editing, Language Studio™ Linguists identified several improvement paths and manufactured additional data to improve grammar structures, word order and terminology consistency.
On receipt of the post-edited data, Language Studio™ Linguists analyzed the edits, and additional data was again manufactured to reinforce them. These changes created an immediate 4 BLEU point increase, which was validated by a noticeable increase in post-editing productivity.

Conclusions
The IOLAR experience is an example of how a DIY approach may not work for professional, production-scale machine translation. During their “learning by doing” approach to DIY machine translation, IOLAR spent a lot of time trying to understand why their initial efforts were producing such unpredictable results, and found that for their language combination the free online MT engines were easily outperforming their own Moses efforts.

The IOLAR example highlights an inherent issue with DIY machine translation, whether Moses-based or from a commercial service – it assumes that the user knows how to do it themselves. This case study demonstrates clearly that high-quality machine translation requires considerably more effort, knowledge and skill than simply loading data into a system for training. Achieving a quality level that was usable for efficient post-editing was clearly not the simple task that some at TAUS and other third-party DIY proponents had conveyed.
From a business perspective it was clear that outsourcing to an expert was a better strategy than a DIY struggle, and I would say that our investment in Asia Online’s Language Studio™ technology was one of the best technology investments that we have made. ... Some of the very technical segments were the same quality as human translation.

– Simon Bratina, Executive Technical Director, IOLAR
   
While some DIY Moses efforts are successful, few DIY Moses users know how to identify, let alone address, the cause of problems when they occur, even if they have some knowledge or training in the core technological concepts. Moving beyond the initial problems in a DIY Moses custom engine is a significant challenge, even with expert NLP specialists or computational linguists on staff. Addressing these challenges requires skill in understanding data, not just algorithms and tools, in order to adapt, refine and create data that resolves issues, either preemptively or as a remedy.

Without a deep understanding of the causes of problematic machine translation output and corrective strategies to remedy them, the only improvement path available to most DIY Moses users is to upload post-edited machine translations or additional translation memories. Because there is little or no understanding of the impact the new data will have, the original issues often remain unresolved and, in many cases, new problems are introduced.

Language Studio™ Linguists provided IOLAR with a deep understanding of the issues and efficient solutions that resolved the critical problems affecting machine translation output quality. This ability to understand data and error patterns was gained through the creation of thousands of custom engines. Language Studio™ Linguists played a major part in taking this project from an unsuccessful beginning on DIY Moses to a considerable success in Language Studio™.

The overall conclusions and results drawn from IOLAR's collaboration with Asia Online:
  • Working with an expert results in a much improved and significantly more efficient overall process.
  • For technology as strategic as good MT can be, it is safer for an LSP to work with a partner that is not itself a competing LSP.
  • The long-term expertise and tools and capabilities like data manufacturing that Asia Online brought to bear on the process made it possible to reach high quality levels in just a few iterations.
  • IOLAR noticed that there was a clear improvement in the machine translation output quality after the first iteration (incremental training) and they were surprised to see that "some segments were the same quality as human translation."
  • The data manufacturing and refinement tasks performed by Language Studio™ Linguists, and further refined by IOLAR's staff, greatly reduced the number of unknown words and incorrectly inflected forms, and delivered consistent terminology across translations.
  • IOLAR achieved their core objectives of ensuring a consistent writing style and broad terminological accuracy that the clients had stated were of critical importance.
  • IOLAR accomplished an improvement in the overall production efficiency.
  • IOLAR realizes that while Moses may work for some simple cases where there is plentiful data in the target domain and language pair, deep expertise is required to produce successful systems outside of this atypical scenario. Even in these simple cases, it is now understood that refining data along paths recommended by specialists yields an even better result.
  • IOLAR now saves where it matters – by building the competences for efficient post-editing when machine translation is used. While Moses is technically “free”, there are significant costs in staffing, hardware and other resources, and considerable risk in deploying a Moses system, even when hiring experts with computational linguistics experience.
  • Even if IOLAR’s Moses system had delivered quality better than Google’s, the savings compared to their investment in Language Studio™ would have been marginal. It has become clear to IOLAR that the Total Cost of Ownership (TCO) advantages of a Language Studio™ system far exceed what was possible with DIY Moses solutions.
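The reduction in unknown words noted in the points above can be quantified with a simple out-of-vocabulary (OOV) check. The sketch below is a generic illustration, not an Asia Online tool: it measures what fraction of test tokens never appear in the training data, a crude proxy for words an engine cannot translate. The corpora shown are invented for the example.

```python
def oov_rate(training_corpus, test_segments):
    """Fraction of test tokens never seen in the training data: a crude
    proxy for the 'unknown words' an MT engine will leave untranslated."""
    vocab = {tok for line in training_corpus for tok in line.split()}
    test_tokens = [tok for seg in test_segments for tok in seg.split()]
    unknown = sum(1 for tok in test_tokens if tok not in vocab)
    return unknown / max(len(test_tokens), 1)

# Comparing the rate before and after adding manufactured terminology
# data shows the kind of effect data manufacturing aims for.
base_corpus = ["the valve controls the flow"]
enriched_corpus = base_corpus + ["pressure valve housing assembly"]
test = ["the pressure valve housing"]
print(oov_rate(base_corpus, test), oov_rate(enriched_corpus, test))  # → 0.5 0.0
```

Real MT quality depends on far more than vocabulary coverage, but a falling OOV rate is one of the simplest signals that manufactured data is reaching the gaps it was designed to fill.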
A significant success factor of the collaboration between Asia Online and IOLAR is that IOLAR now better understands the specifics of machine pre-translated text for difficult language combinations. IOLAR has developed growing expertise in the requirements of post-editing and will be able to approach customers that need such translations with confidence.