Another guest post by Luigi, who covers a variety of subjects here: AI, big data, NMT hype, and more. Luigi regularly attempts to clear up the conflation that seems rampant in the translation industry, and he makes my life easier by producing what I consider interesting content that keeps this blog relevant and current.
AI is a much-misunderstood term, and thus I think it is worth a closer look to further reduce the conflation that surrounds it. The graphic below, from a presentation I made on "Linguistic AI" on behalf of SDL, describes what I think a real AI should do. However, the reality is still quite far from the broad promise made by the use of the word intelligence, and most of what we see today consists of narrowly focused ML deployments that do indeed seem to perform some kind of cognitive function around carefully selected data.
There is also a lot of confusion about what machine learning (ML) is and how it relates to AI, so I think the graphic below is also useful for keeping the ongoing discussions clear, especially since we hear some people talking about deep NMT versus basic NMT. Seriously, how deep are we talking? To the best of my knowledge, most NMT today is based on deep learning, as shown below.
Luigi also touches upon the hype around NMT, specifically Microsoft's claim of reaching human parity with its Chinese NMT engine. While not untrue under a very narrow and very specific definition of parity, the claim overstates the actual achievement in the broader sense that we regular humans might understand. However, seeing through this overstatement requires actual intelligence; artificial intelligence is not enough.
It is hyperbole that you can quickly disprove by taking any random Chinese news web page and running it through the engine. You will be disappointed by the complete absence of the alleged human parity in this exercise, and you will probably begin to ask pesky questions about which humans we are talking about. It is also a bit like equating a card trick with a miracle. Anyway, this kind of claim is a common marker in the MT world, which is often filled with empty promises. To be fair, it is a much less deceptive and blatant overstatement than the Google announcements of a year or so ago.
It has been my observation that most, if not all, of the do-it-yourself experimentation with SMT produced sub-optimal results. To be explicit, this means that you would have been better off using a public MT portal or working with an expert. NMT now has ten or more open-source toolkits, so my questions to the DIYers are: Which one are you going to use? Why? How do you know the others are not better? The cost and complexity of engaging with NMT go way beyond loading low-quality data into an open-source (or any other) toolkit. The rate of change in the science and the algorithmic evolution is unprecedented. It is my opinion that NMT is not a game for the underfunded and the naive, but I am sure many in the translation industry will expend time and resources to find this out.
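To make the data side of that cost concrete, here is a minimal, purely illustrative sketch of the kind of corpus filtering that has to happen before any toolkit even sees the training data. The file names and thresholds are assumptions, and real pipelines add deduplication, language identification, tokenization, domain selection, and much more.

```python
# Illustrative sketch only: minimal cleaning of a parallel corpus before NMT training.
# File names and thresholds are hypothetical; real pipelines also deduplicate,
# run language identification, tokenize, and select in-domain material.

def clean_parallel_corpus(src_path, tgt_path, max_len=100, max_ratio=3.0):
    """Yield (source, target) pairs that pass basic sanity checks."""
    with open(src_path, encoding="utf-8") as src, open(tgt_path, encoding="utf-8") as tgt:
        for s, t in zip(src, tgt):
            s, t = s.strip(), t.strip()
            if not s or not t:
                continue  # drop empty segments
            ls, lt = len(s.split()), len(t.split())
            if ls > max_len or lt > max_len:
                continue  # drop overly long segments
            if max(ls, lt) / min(ls, lt) > max_ratio:
                continue  # drop likely misaligned pairs (length ratio)
            yield s, t

if __name__ == "__main__":
    pairs = list(clean_parallel_corpus("train.en", "train.zh"))
    print(f"{len(pairs)} segment pairs survived basic filtering")
```

And this is only the data preparation; model configuration, training, and evaluation each bring their own decisions and pitfalls.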
The notion of data in this era of ML and neural nets is interesting, and I recommend that you scroll down the thread and its often silly comment trail triggered by this tweet from a partner at the VC firm Andreessen Horowitz, who, it seemed, wanted to make the point that ML applications need very different and specific data to produce useful outcomes, not just generic "data":
The phrase ‘Data is the new oil’ only makes sense if you understand almost nothing about machine learning — Benedict Evans (@BenedictEvans), May 14, 2018
Some of my favorite responses include the examples below, which sound surprisingly like some discussions on translation quality that I have witnessed.
@maxsklar : I heard they both go through pipelines
@DanielMiessler : Data would be more like the dinosaurs, plants, and sunshine. The oil would be the insights and predictions.
@BatMongoose : Maybe data is more like sand - annoyingly ubiquitous but useless until you figure out how to turn it into something (silicon wafers)
@kohlschuetter : Big Data is the new snake oil. #fixeditforyou
@asemota : Data is the new "Oxygen"
============
In recent years, the blogosphere has lost much of its original appeal, mainly because its connected community has largely moved to social media, which today end up conveying most content. Indeed, social media help much content emerge that would otherwise remain buried. Social media—as we all know—also convey content that would be better ignored, but even crap has its raison d’être: that’s content marketing, baby, content marketing, and there’s nothing you can do about it, nothing.
Content, skills, and knowledge
Indeed, this content makes a case for running some basic psychometrics on the small groups of people one follows on social media. Don’t be fooled by the Facebook/Cambridge Analytica scandal: it’s not rocket science. Even likes can tell you a lot and help you understand what your contacts are paying attention to and why, especially if they are not just virtual acquaintances.

The social media activity of your contacts can even provide you with many more confirmations than expected. The fundamentals of content marketing say that the content produced should be of absolute value, but this is hardly true, because marketing is supposed to exert its effects anyway and one does not always have something definitive to say.
What would you think, for example, of an acquaintance of yours recommending a post by someone who admits to being an absolute beginner with machine translation, with no technical knowledge of it, and yet thinks they can provide their customers with solid advice anyway? And what would you think of the same acquaintance defining themselves as an industry professional while admitting their revulsion for MT and declaring their cast-iron belief that any professional is capable of sparing their customers a “poor figure”? Well, these people are telling us a lot about themselves with just a post and a like.
The power of data
Seth Stephens-Davidowitz’s Everybody Lies is a terrific book for how simply it shows the power of data. Just like Stephens-Davidowitz in his book, Google’s Mackenzie Nicholson wrong-footed many attendees at the recent Smartling Global Ready EMEA by asking a few classic questions with seemingly obvious and yet invariably incorrect answers. For example, when it comes to clichés, no one would have bet that Italians pay far more attention to price than Germans, Scots, or Israelis, as Google’s data unequivocally show.
It came as no surprise, then, that analytics generally indicate that in-house reviews mostly prove overly expensive and largely pointless, as Kevin Cohn later showed on the same occasion. Simply put, despite great expectations, almost no actual improvement is recorded. Indeed, most edits are usually irrelevant and simply a matter of personal taste. Incidentally, Kevin Cohn is a data scientist who speaks only English and admittedly knows almost nothing about translation. Anyway, as the wise man says, data ipsa loquuntur: the data speak for themselves.
Hypes you (don’t) expect
Of the many expectations that have been generating hype over the last few years, the ones about data are not inflated, and people are, perhaps slowly but steadily, getting accustomed to reckoning with data-driven predictions. As algorithms grow in number and potential, confidence in their applications will also grow.
In fact, hype is aimed at and addresses people outside a given vertical, so Microsoft’s recent hype about NMT achieving human parity, for example, was not meant for the translation industry.
So why all the fuss?
As a matter of fact, the difference between human and machine translation is becoming thinner and thinner, at least judging by quality scores and statistical incidence. Also, the concept of parity may be quite hard for a layman to grasp. This, if anything, makes the dreariness of posts like the one mentioned above even more evident. Indeed, it is pretty unlikely that the general media will get the news right in cases like the Microsoft one: however complete and clear the article might have been, its title was misleading, and the title is usually the only catchphrase the media pick up.
In Microsoft’s much-debated, and yet, don’t forget it, scientific article, parity is defined mostly as a functional feature, i.e. as a measure of the ability to communicate across language barriers. Parity is measured against professional human translations, while keeping clearly in mind that “computers achieving human quality level is generally considered unattainable and triggers negative reactions from the research community and end users alike” and that “this is understandable, as previous similar announcements have turned out to be overly optimistic.”
As a matter of fact, the article makes it equally clear that the quality of NMT output in the case examined exceeds that of crowd-sourced non-professional translations, which should come as no surprise to those translation pundits who have read it.
On the other hand, a recent study from the University of Maryland found that “users reacted more strongly to fluency errors than adequacy errors.” Since the main criterion for recruiting participants was their English language ability, the study indirectly confirms that “adequacy” implies a vertical kind of knowledge, the very kind that could prevent hype from arising and spreading.
The unpleasant side of this story is that, once again, many so-called translation professionals still can’t see that MT is just a stress-relieving technology, conceived and developed to enhance translation and make it easier, faster, and possibly better.
That’s why (N)MT is no inflated hype, and it has actually been on the plateau of productivity for years now.
Overcoming language barriers is an ageless aspiration of humankind that does not generate any fear, unlike the much-fabled singularity. Except, possibly, among language professionals, despite their continuous, recurrent self-reassurance (wishful thinking?) that machines will never replace humans, at least in this creative and thus undeniably human task.
In the end, the NMT hype falls within mainstream tech news, which is sprayed like a toxic gas to win a market war being fought on fronts far more profitable than NLP, namely corporate business platforms. Indeed, the NMT arena is dominated by a leading actor, with a supporting actor and many smaller side actors struggling for an appearance on the proscenium. Predictably, a translation industry “star”, which is just a “dwarf” in the global business universe, recently opted for buying rather than making its own NMT engine, citing the scarcity of data scientists—and of money, of course—as the main reason for the decision.
Actually, not only has NMT emerged as the most promising approach, it has also been showing superior performance on public benchmarks, rapid adoption in deployments, and steady improvement. Undeniably, there have also been reports of poor performance, such as with systems built under low-resource conditions, confirming that NMT systems produce lower quality with out-of-domain data. This implies that the learning curve may be quite steep with respect to the amount and, most importantly, the quality of training data. Also, NMT systems are still barely interpretable, meaning that any improvement is extremely complex and often random, when not arbitrary.
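As an aside, the public benchmarks mentioned here usually come down to automatic corpus-level scores such as BLEU on a held-out test set. A minimal sketch, assuming the sacrebleu package and using invented placeholder sentences, looks like this; the number it prints says nothing by itself about parity or out-of-domain behavior.

```python
# Minimal sketch of a benchmark-style evaluation, assuming the sacrebleu package
# (pip install sacrebleu). The hypotheses and references are invented placeholders.
import sacrebleu

hypotheses = ["The cat sat on the mat.", "The economy grew by three percent."]
references = ["The cat sat on the mat.", "Economic growth reached three percent last year."]

# corpus_bleu expects a list of reference streams, each aligned with the hypotheses.
bleu = sacrebleu.corpus_bleu(hypotheses, [references])
print(f"BLEU = {bleu.score:.1f}")  # one corpus-level number, not a verdict on "parity"
```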
Anyway, to be unmistakably clear, MT is definitely “at parity” with human translation, especially when the latter falls below expectations, i.e. when it is sadly average and low-grade. And Arle Lommel is right in writing that an article titled New Study Shows That MT Isn’t Terrible would not generate much attention. At the same time, though, when he writes that “the only translators who need to worry about machine translation are those who translate like machines”, he can’t possibly imagine that this is exactly what most human translators have been doing, perhaps out of necessity, for decades.
Therefore, the NMT hype is hype only for people in the translation industry, who, on the other hand, are much more open to stuff that insiders in other industries would label as crap.
After all, NMT is just another algorithm, and with the world going increasingly digital, (inter)connected, and thus information-intensive, resorting to algorithms is inevitable because it is necessary.
Data as fuel
The fuel of algorithms is data. Unfortunately, despite the long practice of producing language and translation data, translation professionals and businesses have seemingly learned very little about data and are still very late in adopting data-driven applications. Indeed, data can be an asset if you know what to do with it, how to take advantage of it, how to profit from it.
In this respect, besides showing a total ignorance of what “big data” is, the careless use of the nonsensical phrase “translation big data” has been seriously damaging any chance of effectively trading language and translation data. This is just one of the impacts of fads and hype, especially when they are ignorantly borrowed from and spread through equally ignorant (social) media.
As Andrew Joscelyne finally wrote in his latest post for the TAUS blog, «Language data […] has never been “big” in the Big Data Sense.»
By the way, what happened with “translation big data” is about to happen with AI too, because ML—or even DL—is not AI, but too many people don’t care to dig deeper and see the difference.
In fact, with the translation industry processing less than 1% of translation requests, language data can’t exactly be big, while translation businesses don’t have the necessary knowledge, tools, and capabilities to effectively exploit and benefit from translation (project) data. There are exceptions, of course, but one can count them on the fingers of one hand, and they are all technology providers.
Data and quality
Unfortunately, the translation industry is affected by a syndrome of blaming technology for replacing services, products, and habits with impoverished and/or simplified lower-quality substitutes. Luddites, anyone?
Indeed, only human laziness should be blamed for unsatisfactory quality. And this is consistent with the perennial, grueling and inconclusive debate on quality, the magical mystery word that instantly explains everything and forbids further questioning.
A telling example is the anxiety about confidentiality with online MT, which is not really the issue. Confidentiality is a minor issue for an industry whose players are still extensively using email, if not unsecured FTP connections and servers, for exchanging files. Confidentiality is not a major issue when it is mostly delegated to NDAs without any enforcement mechanism, especially when non-disclosure agreements are perceived as offensive for revealing a lack of trust and questioning professionalism. Confidentiality is not an issue when, bombastic certifications notwithstanding, the violation of confidentiality obligations is always around the corner, through keeping customer data unsecured, having no contingency or security plan in place, or re-using the same data for other projects, knowingly or not. Also, in most cases, IPR rather than confidentiality is the real issue.
Anyway, when such issues arise, it is never technology that is to blame, but human laziness, sloppiness, helplessness, and ineptitude.
Are all these traits also affecting data? Of course they are. It is no coincidence that translation businesses believe they are so different from other service businesses, to the point that no real innovation has ever come from them. Even when they choose to build their own platforms, these are so peculiar that they could never be made available to the whole community, even if their makers wanted to, and they don’t. After all, this is also a reason for the proliferation of unnecessary standards. Narcissism is the boulder blocking the road to change and innovation.
The same dysfunctional approach affects data. For example, if one believed the meager results of the perennial, grueling, and inconclusive debate on quality, quality could only be measured downstream, and only by counting and weighting errors, in a typical red-pen syndrome. On the contrary, a predictive quality score can be computed from past project data, something that is extremely interesting for buyers.
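For what it’s worth, a predictive quality score is, technically, just a supervised regression problem over past project records. The sketch below uses scikit-learn with entirely hypothetical feature names and figures, simply to show the shape of the idea.

```python
# Minimal sketch of a predictive quality score, assuming scikit-learn.
# Features and figures are hypothetical; a real model would draw on far richer
# project metadata (domain, language pair, vendor history, deadlines, and so on).
from sklearn.ensemble import RandomForestRegressor

# Each row: [word_count, fuzzy_match_share, vendor_avg_rating, days_to_deadline]
past_projects = [
    [12000, 0.45, 4.2, 10],
    [3000, 0.10, 3.1, 2],
    [25000, 0.70, 4.8, 21],
    [8000, 0.30, 3.9, 5],
]
observed_quality = [0.92, 0.64, 0.97, 0.81]  # e.g. post-delivery review scores

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(past_projects, observed_quality)

new_project = [[15000, 0.50, 4.0, 7]]
print(f"Predicted quality score: {model.predict(new_project)[0]:.2f}")
```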
More ML applications
Now, imagine a predictive quality score combined with a post-factum score derived from content profiling and initial requirements (checklists), classic QA, and linguistic evaluation based on correlation and dependence, precision and recall, and edit distance.
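The edit-distance leg of such a post-factum score is easy to make concrete. The self-contained sketch below (the segment pair is invented) computes a normalized word-level edit distance of the kind used to measure how much a reviewer or post-editor actually changed.

```python
# Self-contained sketch: normalized word-level edit distance between a draft
# translation and its revised version. The example segments are invented.

def edit_distance(a, b):
    """Levenshtein distance over token lists, via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        curr = [i]
        for j, y in enumerate(b, 1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (x != y)))  # substitution (or match)
        prev = curr
    return prev[-1]

draft = "the contract shall be governed by italian law".split()
revised = "this contract is governed by italian law".split()

dist = edit_distance(draft, revised)
edit_rate = dist / max(len(draft), len(revised))  # 0 = untouched, 1 = fully rewritten
print(f"Edits: {dist}, normalized edit rate: {edit_rate:.2f}")
```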
Only one weak point would be left, i.e. how to recruit, vet, compensate, and retain vendors so as to always have the best fit.
During his presentation on KPIs at the recent interpretation and translation congress in Breda, XTRF’s Andrzej Nedoma recalled how project managers always tend to use the same resources, who are not necessarily always the most suitable.
With vendor managers continuously vetting and monitoring vendors and constantly updating the vendor database, project managers could have a reliable repository to make their picks from. And with project managers, in turn, updating the vendor database with performance data, this could be combined with assessments and ratings from customers and peers to feed an algorithm that would suggest the best fit for any new project and, in short, ultimately start a virtuous circle and maximize customer satisfaction.
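As a sketch of what such a best-fit algorithm could look like at its simplest, the snippet below ranks vendors by a weighted score. Every record, field, and weight is invented for illustration, and a real system would learn the weights from outcome data rather than fix them by hand.

```python
# Minimal sketch of a "best fit" vendor ranking. All records, fields, and weights
# are invented for illustration only.

vendors = [
    {"name": "V001", "on_time_rate": 0.98, "avg_quality": 0.91, "peer_rating": 4.5, "domain_match": 0.8},
    {"name": "V002", "on_time_rate": 0.90, "avg_quality": 0.95, "peer_rating": 4.8, "domain_match": 0.6},
    {"name": "V003", "on_time_rate": 0.99, "avg_quality": 0.85, "peer_rating": 4.1, "domain_match": 0.9},
]

WEIGHTS = {"on_time_rate": 0.25, "avg_quality": 0.35, "peer_rating": 0.15, "domain_match": 0.25}

def fit_score(vendor):
    """Weighted sum of normalized indicators (peer_rating is on a 1-5 scale)."""
    return (WEIGHTS["on_time_rate"] * vendor["on_time_rate"]
            + WEIGHTS["avg_quality"] * vendor["avg_quality"]
            + WEIGHTS["peer_rating"] * vendor["peer_rating"] / 5.0
            + WEIGHTS["domain_match"] * vendor["domain_match"])

for v in sorted(vendors, key=fit_score, reverse=True):
    print(f"{v['name']}: fit score {fit_score(v):.3f}")
```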
To be unambiguously clear once again, this is by no means an endorsement of translation marketplaces. On the contrary, the inherent vice of translation marketplaces is their exploitation of information asymmetry: they provide no mechanism for factual vetting and evaluation, and thus, ultimately, none for disintermediation. However, any platform that users from all parties can join to be vetted and evaluated—and have their performance fairly measured—will eventually prevail.
If the idea of translation marketplaces has not worked out so far, it is not because of some supposedly unique nature of translation; on the contrary, that is one of the conditions that make the translation industry an ideal candidate for disruption. In fact, with suitable data and the right algorithms, machine learning—including deep learning—can provide many high-value solutions.
Where’s the weakness in data, then? In the humans who misunderstand and misuse it.
=======================
Luigi Muzii has been in the "translation business" since 1982 and has been a business consultant since 2002 in the translation and localization industry through his firm. He focuses on helping customers choose and implement best-suited technologies and redesign their business processes for the greatest effectiveness of translation and localization-related work.
This link provides access to his other blog posts.