
Sunday, October 14, 2018

Looking at Blockchain in the Translation Industry

I recently attended the TAUS Annual Conference, where "current language technology and collaboration" was the focus. And indeed it has historically been the best place to talk about technology in the "language industry". It was very clear that, in addition to MT, edit distance and error classification of MT system output are also REALLY important to this community. Blockchain alternatives were presented as the most revolutionary new technology at this event, since the buzz on NMT has subsided a bit, but I have always wondered if it is really possible to be truly revolutionary if your primary focal point is "localization".

I say all this not out of any disregard. Chris Wendt (Microsoft) and I shared thoughts on our motivations for staying with MT after all these years, and shared some angst about when (if ever) we as a community would start talking about "real" MT applications. I am quite sure that we were both glad to hear LinkedIn and Dell very clearly state that the value of MT content to the customer, and better engagement of global populations (enabled by MT), were much more important than any kind of quality score, and that, as a rule, more content faster will always beat better translations delivered way too late in these days of digital transformation.

As long as I have been in "the industry" there has always been a discussion about taking localization to the next level. To make it more respectable. To be seen with more regard, and more often, in the "mainstream press". I am not sure this is really possible if the industry focus does not change and move beyond cost efficiency and translation quality. This is mostly because localization happens after the fact, i.e. after the marketing and product development people have decided everything that is really worth deciding to drive a market revolution and/or make a digital transformation happen.

My sense is that it is increasingly all about content, and content is more than the words that we translate. It is about relevance and value and transformation. Content is where localization, marketing, and product development can meet. Content is where customers meet the enterprise. Content is the magic key that possibly lets localization people into the C-suite. And digital experience is where finance and customer service and support also join the party. From my vantage point, the only company in "the industry" that fundamentally understands this is the "new SDL". I am quite possibly biased, since they deposit money into my bank account on a regular basis, but I like to think that I can make this statement on purely objective grounds as well. It is much more important to understand what you should translate and why you are doing it than to simply translate efficiently, with fewer errors, on MT systems that produce very low edit distances. Indeed, it is probably most important to understand what is needed to get the original content right in the first place, as that is the fundamental driver of digital transformation. Understand relevance and value. Revolutions tend to be more about what matters, and why it matters, than about how we should do what we must do. Being content-focused enables you to get much closer to the source, to the what and the why.

However, in this age of fake news even content is under fire. We are surrounded by "fake news" and fake videos and fake pictures. How do we tell what is true and what is not? What about blockchain? The idea of an immutable ledger stored in the cloud, tracing the origin of all content to its source, definitely sounds appealing. Users could compare versions of videos or images to check for modifications, and watermarks would serve as a badge of quality.  But here, too, the question is whether this can be applied to text-based content, where the intent to deceive leaves fewer technical traces.

There are now some who wish to bring specialized blockchain implementations to localization (translation), with verified translators, translation memory, and payment mechanisms, to raise the level of trust and fairness. I am hoping to publish a series of posts on this subject that show various perspectives on this issue and technology. I cannot say I really understand the blockchain potential here at this point, and this post and the others that follow are part of my effort to learn and share.

Gabor Ugray has written a post on blockchain that sees far-reaching consequences for compensation, compliance, workflows, and tools in the industry, and takes a much more optimistic viewpoint than Luigi presents in the bulk of this post; I also recommend it to readers.

Luigi Muzii is a commentator who is often considered acerbic and "negative" by many in the industry. But I generally like to listen to what he has to say, since he tends to cut through the bullshit and get to core issues. He is not enthusiastic about the impact of blockchain on the translation business. This guest post describes why. His summary conclusion:

Blockchain is no change; it may possibly be an improvement, but it will keep us doing things the way they have been done so far, in a [slightly] different shape.

If we consider the history of MT in the localization industry, his conclusions do indeed make sense and seem very reasonable. In this industry, MT is about error classification, edit distances, quality measurement, comparing MT system scores, and custom engines. It is almost never about understanding global customers, listening more closely around the globe, better global communication and collaboration on a daily basis, or rapidly scaling and ramping up international business. Outsiders have, for the most part, led those kinds of truly transformational MT-driven initiatives. We are often defined by the kinds of measures we use to define our success. Consider what you have accomplished by getting a low edit distance score across 30 MT systems vs., say, increasing Russian traffic by 800% and Russian online transactions by 25% by translating 50 million product listings into Russian. Let's also say that this increases sales by $150 million. We can also safely bet that the edit distance on these billions of words is quite terrible and very high. (Yes, I understand that this is a very lop-sided contrast.)

So here is a toast to lower edit distance scores on all your MT systems, and to error classification systems with at least 56 dimensions. 😏

And thank you to Robert Etches for educating the TAUS community on what a taus actually looks like, as shown below. As somebody who seriously plays a closely related instrument, I appreciate people knowing this. Robert also won some points in my eyes for stating the seminal and enduring influence that a book about the massacre at Wounded Knee had on his sense of injustice as a young man.





-----------------------------------------------------------



During a panel discussion at the first Hackers Conference in 1984, the influential publisher, editor, and writer Stewart Brand was recorded telling Steve Wozniak, “On the one hand information wants to be expensive, because it’s so valuable. The right information in the right place just changes your life. On the other hand, information wants to be free, because the cost of getting it out is getting lower and lower all the time.”

Over the years, Brand's nuanced articulation of a modern conundrum has been amputated to "Information wants to be free," thus distorting the original concept. Indeed, in this way, Brand's originally value-neutral statement can be used ambiguously to argue the benefits of propertied information, of free and open information, or of both.

The truth is that people should be able to access information freely, and that information should be independent, transparent, and honest. Unfortunately, the pages of the mainstream press, especially the business pages, are the least "free" of all, filled as they are with the successes of companies that may well be filling expensive advertising spaces in the same media, nearly without a critical or at least skeptical comment. Maybe, later, it turns out that those very same companies have been in crisis for some time. The same shameful habit can be found in the trade media, especially the translation industry media, which most often docilely host promotional articles with nothing to indicate their peculiar nature. This might not be a problem of "freedom of information", but it certainly is one of "quality of information".

Recently, Joseph E. Stiglitz, a recipient of the Nobel Memorial Prize in Economic Sciences in 2001, warned against the risk of a short-sighted outlook. True, John Maynard Keynes said, "the long run is a misleading guide to current affairs. In the long run we are all dead," but any "sugar high" is going to vanish when the same unsolved problems fire back. And a major one is exactly the lack of transparency.

In God Bless You, Mr. Rosewater, Kurt Vonnegut recalls the free enterprise system as having “the sink-or-swim justice of Caesar Augustus built into it,” and hence the need for “a nation of swimmers, with the sinkers quietly disposing of themselves.”

The "translation industry" is competing aggressively on prices and volumes, boasting growth in revenues and volumes, but not in profits and compensation, and touting even greater expectations for the next year; this is not really a smart prospect. In fact, the larger translation companies may be collecting revenues in the short term in many ways, but a short-sighted outlook is the fastest lane to the grave of the whole industry. On the other hand, despite the paeans sung to the alleged smartness of the so-called champions of this industry, greed seems to have been driving them more than any entrepreneurial or innovative spirit. Simply put, can anyone name the Elon Musk of the translation industry? And, to be totally clear, leveraging public funding to build a platform and exploit cheap labor may be cunning, but it is not entrepreneurship.

Unfortunately, these "champions" have managed to entrench a business trend that makes the industry and its products and services irrelevant, so not even learning how to swim with sharks may be enough. The whole industry keeps chanting the same old litany, with the same people telling the same old stories in the same old places to the same people who are still surprisingly eager to listen.
On the other hand, there is also the same old vox clamantis in deserto, while the landscape keeps changing.

Blockchain again


Take blockchain, for example. Blockchain is still largely considered an immature technology. The market is less than embryonic, with no clear recipe for implementation and only a few unstructured experimental solutions. Despite many publications, no clear and undisputed strategic evaluation of blockchain has emerged yet, and many companies are reconsidering their investments. However, the hype has infected the translation industry, a breath of wind that has traveled across the seas and through industry events and media.

As many experts, analysts, and observers have noticed, blockchain is not as efficient as traditional databases: it is much hungrier for energy, processing power, and even storage. Also, a blockchain is only as good as the information that is put into it. In other words, the data in a blockchain is not checked in any way.

The translation industry and the translation profession positively need to modernize and solve perennial problems, but blockchain can hardly be seen as the eventual technology solution or used as a banner in this respect. People who do so are cynically using blockchain purely for its reputational value, to draw some attention, prove how innovative they are, and eventually attract investments. They may even start some kind of implementation, though it is more likely a proof of concept from which, as they possibly know, they will never benefit.

As a matter of fact, if and when some practical applications prove possible, the most those very people are likely to gain from their significant investments will be a highly volatile lead, a nonpaying competitive advantage.

At the moment, there seem to be six different categories of business applications addressing two major needs. It remains to be seen where industry players might find their applications. Storing reference data, with a view on ownership? Smart contracts? Maybe this could be a major application, but it is not among those being worked out at the moment. What about payment? It is the most interesting application, but it is not being worked out either.

Blockchain might be used to verify the identity of the person(s) with write privileges, but as long as no control exists over the information being written, and the information itself remains unchecked, this feature will prove useless, as it would always be possible to write fraudulent data to the blockchain. Therefore, a mechanism should be devised to stamp the digital signature of the legitimate owner of each digital item as a unique code that stays with it all the way through the supply chain.
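As an aside, here is a minimal sketch of what such a stamping mechanism could look like, assuming the third-party Python cryptography package (the function names are illustrative, not an existing product): hash the digital item, and have the owner sign the hash so the stamp can travel with the item through the supply chain.

```python
# Illustrative sketch only: hash a digital item and sign the hash with the
# owner's private key, so the stamp can travel with the item downstream.
# Requires the third-party 'cryptography' package.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

def stamp(content: bytes, owner_key: ed25519.Ed25519PrivateKey) -> bytes:
    digest = hashlib.sha256(content).digest()  # unique code for this item
    return owner_key.sign(digest)              # owner's signature over it

def verify(content: bytes, signature: bytes,
           owner_pub: ed25519.Ed25519PublicKey) -> bool:
    digest = hashlib.sha256(content).digest()
    try:
        owner_pub.verify(signature, digest)    # raises if item or stamp changed
        return True
    except InvalidSignature:
        return False

owner = ed25519.Ed25519PrivateKey.generate()
item = "Translated segment, version 1".encode()
signature = stamp(item, owner)
assert verify(item, signature, owner.public_key())
assert not verify(item + b" (tampered)", signature, owner.public_key())
```

Note that, as the paragraph above points out, this only proves the item has not changed since signing; it does nothing to establish that the signer was the legitimate owner in the first place.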

Intellectual property (IP) is another issue here. As long as no mechanism is available to unmistakably identify the owner of the content, investments in blockchain for content transactions would be highly precarious. And an open blockchain to store this information and help integrate different organizations and systems still seems a long way off, given the absence of standards and of any ongoing attempt to identify one.

Blockchain could support a validated register of qualified practitioners with proven experience in a specific field, whose credentials would be validated to allow customers to quickly find a qualified workforce. However, one of the many things blockchain can never provide is "trust" in transactions.

Will there ever be a sticker on an item that is valid and complete enough? What about the corporate and personal integrity of the people behind the processes in place? After all, Bitcoin's success is mostly due to anonymity and its use by criminals.

Blockchain is not going to be disruptive to the translation industry. Once mature it will allow certain things to be done better and more efficiently, but it will not do to the translation industry what digital photography did to Kodak.

More with less

 

What does transparent, honest, and independent information have to do with blockchain? And with the translation industry? As for blockchain, the trade press, even more than the business or mainstream press, should help debunk hype rather than fuel the frenzy on which it thrives by helping, more or less explicitly, the club of the usual suspects. Also, given what they produce, those industry sources that should be taken as authoritative end up looking generally unreliable, and this creates a general climate of distrust. As a result, the mainstream media do not turn to these sources to gather information to process for their audiences. Eventually, the translation industry is devalued even more than it already is.

It comes as no surprise that the "do-more-with-less" mantra has been unquestioningly borrowed by the translation industry in the past few years, together with many other baloney terms such as agile, lights-out, augmented, etc., when the many marketing geniuses crowding the industry could have invented something better than a flabby 'transcreation.' Of course, they should have done their homework first, and this would have spared them the embarrassment of using wrong quotes and attributing them to the wrong people.

But no. Most industry players still believe the same little fairy tale they have been told for years: that the more companies expand globally, the more they need to pay attention to local language expectations in the new markets they are trying to enter, the more they need to pay close attention to the linguistic, cultural, and even socio-economic nuances of these markets, and that this makes translation a major part of a company's global growth strategy.

 

Beyond futility

 

The greatest damage to the industry as a whole has been the explosion of veritable 'religious wars' pervading the industry, with the abandonment of any objective, accurate, and unemotional approach to problems.

This prevents the understanding of some otherwise simple and obvious phenomena. Globalization, for example, has been underway for almost three decades now, while the growth of international trade started soon after the end of WWII. So why is the industry still waiting for global companies "to pay close attention to the linguistic, cultural, and even socio-economic nuances" of international markets?

Also, content growth is exponential, while revenue growth in the translation industry is linear, with a slope below 10°. Why, then, are revenues still that important to measure? How does translation demand correlate with content growth? What is the correlation between 99.99% of content being machine translated and the supposedly growing revenues of the industry's major players?

Wait! Making effective translation a major part of a company's global growth strategy is "a daunting task that is near impossible without technological leverage and momentum." Now everything's clear: this is what happens when you try to bullshit the bullshitter. And they buy it.

This is a fairy tale, and it is where the gig economy comes into play. In fact, it is being peddled in industry events and media by the very same CEOs who are not used to doing their homework and would misquote Henry Ford. Maybe they don't even know that he did not invent the assembly line, or interchangeable parts, or the automobile. Anyhow…

These very same people have no credible answer when asked why the translation industry has been familiar with the gig economy from its inception, and yet still talks about innovation. And does everyone laugh? Not really. As Marion McGovern, author of the otherwise forgettable Thriving in the Gig Economy, points out, "The advent of the digital platform world has altered the talent supply chain." And rather than pushing out traditional staffing agencies, "digital platforms are becoming sourcing engines for them. Big companies may use both the staffing firm and then for urgent or unforeseen projects, turn to the platforms for options."

Beyond blockchain and baloney, the translation industry is stuck with obsolete models. Companies in other industries have been measuring performance for years as a way of adding value to their organizations; LSPs, no matter their size, are still stuck at error-catching quality management.

 

Content flood

 

Given the inability of industry players to keep up with customer expectations for technological and process innovation, pulling translation into the pipeline has had the effect of further commoditizing it. And commoditization will continue, with expectations going up and costs going further down.

The case of Amazon's Chinese-branch employees leaking data for bribes is emblematic of the risks underpaid workforces might be emboldened to take. And with production business translation being more and more automated by ever-better neural machine translation engines, things might get a little dicey even in the translation industry.

Many of the larger translation buyers have been developing, managing, and delivering multilingual content in different formats and on different devices for years now. The next step will be fitting everything together into a single workflow. Emerging technology will finally fully enable Sir Tim Berners-Lee's dream of a semantic Web. In fact, for more than a decade, a significant fraction of Web domains has been generated from structured databases. Today, many Web domains contain Semantic Web (e.g. schema.org) markup.

New models are being devised for content enrichment or augmentation (i.e. integrating different content aspects and simple resources with semantics and knowledge). This mainly consists of metadata processing to exploit information and allow end users to navigate via semantic annotations. Content enrichment is a very expensive task, typically performed only for valuable content. Today, it can be done automatically by combining the analytics of multiple data sources. In document enrichment pipelines, each document is enriched, analyzed, and/or linked with additional data to improve navigation and filtering for further analyses.
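As a rough, hypothetical sketch of such a pipeline (the steps and field names are invented for illustration, not taken from any particular product), each document passes through enrichment steps that attach metadata for later navigation and filtering:

```python
# Hypothetical document enrichment pipeline: each step attaches metadata
# that downstream systems can use for navigation and filtering.
from typing import Callable

Document = dict           # {"text": str, "metadata": dict}
Enricher = Callable[[Document], Document]

def detect_language(doc: Document) -> Document:
    doc["metadata"]["language"] = "en"         # stand-in for a real detector
    return doc

def link_entities(doc: Document) -> Document:
    # stand-in for a real entity linker combining multiple data sources
    doc["metadata"]["entities"] = [w for w in doc["text"].split() if w.istitle()]
    return doc

def enrich(doc: Document, steps: list[Enricher]) -> Document:
    for step in steps:
        doc = step(doc)
    return doc

doc = {"text": "Acme launches its new Widget firmware", "metadata": {}}
print(enrich(doc, [detect_language, link_entities])["metadata"])
# {'language': 'en', 'entities': ['Acme', 'Widget']}
```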

Also, content generators are already available today that take existing content and rewrite and shuffle it around to create new content, while many companies are working on Natural Language Generation, an AI sub-discipline that converts data into text, used in customer service to generate reports and market summaries. It is also being investigated for creating content for websites or blogs from a variety of sources, including answers to questions on social media and forums.

With text analytics to understand the structure of sentences, as well as their meaning and intention, and NLP to process unstructured data, full-blown computer-only-generated content will soon be a reality.

When these technologies are fully implemented, the Semantic Web will lead to a further upsurge in content production.

 

Translation redefined

 

This will inevitably lead to redefining the nature of translation and the role of linguists to leverage the value in enriched (intelligent) content. It is time for applied linguists (e.g. translators) to rethink their role in the language industry. Tim Berners-Lee's idea was a bit futuristic (if not actually visionary) when he launched it two decades ago.

The coming future will see 'applied linguists' mostly employed as post-processors. Machine translation will do all the jobs, even creative tasks, and those who are still called translators today will have to confirm or, at worst, polish automatic output for cultural appropriateness. Some will re-engineer and re-organize content, more or less as digital indexers and curators, and others will clean and polish data to feed machines. Creativity will no longer exist by definition; it will depend on each one's ability to exploit his or her knowledge and skills.

So, it is really a good time for a change. Blockchain is no change; it may possibly be an improvement, but it will keep us doing things the way they have been done so far, in a different shape. You might try to reclaim the things you believe have been taken away from you, but this will never happen. You can stick to obsolete models and expect to keep the same old stances, advance the same old claims, and work in the same old way you have been used to, but this won't take you anywhere. You can only try to get new things, and now is the time to find a way to get them.

So, is the translation industry attracting more investments than in its burgeoning heyday? And where is the money being made? In the meantime, you had better row and learn to swim.

=======================


Luigi Muzii has been in the "translation business" or "the industry" since 1982, and since 2002 has been a business consultant in the translation and localization industry through his firm. He focuses on helping customers choose and implement the best-suited technologies and redesign their business processes for the greatest effectiveness of translation and localization-related work.

This link provides access to his other blog posts.

Thursday, August 30, 2018

Chatbot Pitfalls and Solid Advice on Getting it Right

According to Gartner research, it is estimated that 60.5 million Americans use a virtual assistant at least once a month. If this trend continues, chatbots will likely power 85 percent of all customer service interactions by the year 2020. 

It is said that chatbots are the new FAQ. Every second, 40,000 search queries are made online worldwide. That adds up to 3.5 billion per day, or 1.2 trillion each year. That's a lot of people looking for a lot of answers. If chatbots are built right, they definitely enhance the customer experience and make it easier for a customer to find the content they need. Technical support is the most common type of chatbot content. Chatbots provide a bridge between technical documentation teams and customer support call centers. Salesforce's Chief Digital Evangelist predicts that "The line-of-business that is most likely to embrace AI first will be the customer service – typically the most process-oriented and technology savvy organization within most companies." AI here refers to NLP- and NLU-enabled content structures.

From my vantage point at SDL, it has become increasingly clear that content really matters. The modern digital customer journey is marked and defined by interactions with different content. Conversational interaction is needed everywhere, not just with Alexa and other voice-based virtual assistants. Imagine searching for technical support content and finding 20 large PDFs that may contain the answer to your question. Wouldn't it be useful to be able to zero in on the right content within these PDFs through a series of clarifying questions? The real question, then, is not where the content for chatbots comes from, but rather how we prepare, organize, and structure that content so the chatbots can use it. This superior delivery of the right content in response to many customer questions can only happen if you have a proper content organization model. At SDL we call this GCOM, and it involves content architecture, organization, and process to make it happen.

Source: 5 ways chatbots are revolutionizing knowledge management


However, it is very easy to build chatbots that are frustrating, useless, and inefficient. We have all experienced websites where the only question the IVA (interactive virtual assistant, or chatbot) is able to ask is "Hey dude, do you wanna buy my s*&t?" Chatbots are purely a reflection of the capability, fastidiousness, and patience of the person who created them, and of how many user needs and inputs they were able to anticipate. When done badly, which is very often the case at this stage of evolution, chatbots will kill customer service.

This is not so different from DIY machine translation 10 years ago. Many really bad systems were built and forced upon poor unsuspecting translators. Like MT, this too will take competence and skill and it is wise to work with experts.

Digit’s Ethan Bloch sums up the general consensus: “I’m not even sure if we can say ‘chatbots are dead,’ because I don’t even know if they were ever alive.”  

Our guest post by Ultan O'Broin looks at how to do the right things the right way to make chatbots that are indeed useful in providing the right content to the right people, and provides insight from the design and UX perspective in particular. While the content organization issues are discussed often, this is worth a review and is well summarized in this article. Ultan has several other posts on Medium on the subject of chatbots that are worth a closer look.

What excites me the most is that once you have the IVA working properly and actually useful, it then makes sense to make it multilingual with optimized machine translation. But first things first.


===== 


Ultan O’Broin looks at how the Jobs To Be Done framework and simple build approaches for the minimum viable product can counter chatbot product management failure and fatigue.

Guerrilla testing of Snapchat Spectacles at Hook Head in Wexford, Ireland. We can try out products in the wild, but are we as focused on the job we’re hiring our product or service to do?
 
One of the compelling hopes for chatbots was their sheer power as agents to increase digital participation in a natural way, partly by reducing app fatigue. Ironically, I now believe we have reached the stage of chatbot fatigue.

Just like everyone was “doing” a smartwatch solution two years ago, suddenly chatbots are the tech “game changer” du jour.

And botification is being done just as badly as the smartwatch overkill.

Chatbot Game Chancers

There’s bot rot out there. The problem is often there is no design thinking about whether a new or existing job is worth botifying or not.

I have seen some criminal chatbots out there. From a barista training chatbot aimed at millennials that delivered a 50-page PDF manual in response to the utterance "how can I be a barista?" to that fitness chatbot telling me that my nearest Parkrun was 8,000 kilometres away, the assumption being that anyone asking was within a 300-kilometre radius of an Irish location. I was in San Francisco at the time, where there is a local run.

None of the arrivistes responsible for such rotbots have asked: “Why is this person hiring this chatbot to do this job?”

The problem is the wrong chatbots are being built, wrong. Fundamentally, begin by asking, “Why would anybody hire my chatbot to do this job?”

This article is about designing the right chatbot, right. But the profound principles to do so can be applied to any app or service.

The “User” and the Damage Done

First, the term “user” must go from the design conversation! User? WTF!

I hate that term "user"; only the usability community and the illegal drug business use it. Let's reclaim "user" as a role in real life with a real job (unpaid or otherwise) to do. So, our "user" becomes a sales rep, a technician, a concert-goer, a mother, a parent, a patient, and so on.

But how do you find such people, especially if you're a small startup or innovator? The answer is to think smart, get out on the street, watch real people, ask them about what they're doing, and encourage them to articulate reasons they would use a chatbot, app, or service instead.

 

Job Profile Persecution

Forget about the job profile approach to identify your “users”.

Those unwieldy tl;dr profile documents of job titles, descriptions, qualifications, experience, org chart position, motivations, IT expertise, and so on. These tomes owe more to recruitment agencies and HR departments than to design thinking and product management.

Profiles list many tasks that the role performs, not just the focussed critical 80/20 reason why your conversational UI might really be a game changer and around which a killer solution can be built.
A job profile is not a real person. In the user experience business, it’s a statement for the prosecution of the mediocrity of design.

 

Don’t Take It Persona-ally, But…

Personas are next to go onto that “user” bonfire of the inanities.

Personas are stereotypical people, dumbed down versions of “a day in the life”, complete with stock photographs, typical responsibilities and tasks, personal characteristics, motivations, and tools.

Again, a persona is not a real digital adopter, but a distant idealisation, performing a non-prioritized list of tasks. These persona peeps may use many types of technology too; that is not good for disruptive innovation, which depends on understanding why they might switch to or integrate with your innovation.
There’s a reason the word “stereotype” usually comes with a negative connotation.

 

Get Close and Personal. Swipe Right for The Right Design

Instead of “user” personas and profiles, move closer to the customer, and discover what they do.
How? Great chatbot design begins with a contextual conversation.

Take a walk in their shoes. Try a little fun UX ethnography, guerrilla research, and the “Wizard of Oz” technique to identify that killer question or job that might inspire someone to continually turn to a conversational UI to actually do something.

Research and design a chatbot that resonates emotionally and culturally, as well as functionally for your region. Check out Hofstede’s country comparison dimensions to help you shape that experience.
For example, if we were building a chatbot for fitness enthusiasts we might hit the local gym session schedule, post to a Facebook fitness group or fitness app discussion section, or engage on Twitter or Instagram with an appropriate hashtag.

Use a deep-listening, “tell me more about that”, approach.

Original “The Wizard of Oz” movie poster. From Wikipedia; image in the public domain.

Amazon, in the run-up to the French launch of its Echo, for example, introduced Alexa to employees in its French fulfillment centres who interacted with the emergent voice assistant to help the voice chatbot learn French, cultural nuances and behaviours, and how to respond.

The launch of the French version of the Amazon Echo was preceded by interaction with real people, so the assistant could learn the language, cultural norms, and how people actually behaved!
Fundamentally, focus on the person's behaviour, and not on what they say or think they want. Watch their actions. Think of this as a Lean product management approach: a way to quickly design and build a solution to determine its value. As Eric Ries says:
“The minimum viable product (MVP) is that version of a new product which allows a team to collect the maximum amount of validated learning about customers with the least effort.”
I encourage you to read Eric’s book, “The Lean Startup” and apply the thinking to chatbot product management and design.

What Features to Design?

Why do drivers of cars buy milkshakes at drive-in takeaways? There could be many reasons. By way of Harvard professor Clay Christensen's (and others') Jobs To Be Done framework, it was found that people don't buy and use things because they fit a persona or user profile.

Instead, people hire a service or product to do a job for them. In our milkshake example, the drink was hired because the drivers were bored and in danger of nodding off on a long journey.

So, use the Jobs To Be Done (JTBD) approach for designing the right thing, right!

Again, read (or listen to the podcast) about Clay’s theory of disruptive innovation and Google JTBD for more.

Think of it this way, as mentioned on the startup Intercom’s blog: JTBD allows you to focus on the essential product feature, and to generate a user story about why it’s needed. The story plot line structure for a product story, using the JTBD method, would be something like:
“When X happens, I want to Y, so I can Z.”
An example from the sales rep job world might be:
“When new sales collateral comes online, I want an SMS on my phone, so I can take it on the road with me.”
Designing using JTBD enables a template approach, listing the job to be done, the role, needs, and expected job outcome. We can add outcome performance goals in there too, as a way of testing and proving ROI.
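As a simple sketch, those template fields could be captured in a small data structure like the one below (the field names are my own illustration, not a formal JTBD schema), filled in here with the sales rep story from above:

```python
# Illustrative JTBD template as a data structure; field names are informal.
from dataclasses import dataclass, field

@dataclass
class JobToBeDone:
    role: str                     # who is hiring the product or service
    trigger: str                  # "When X happens..."
    motivation: str               # "...I want to Y..."
    outcome: str                  # "...so I can Z."
    performance_goals: list[str] = field(default_factory=list)  # for proving ROI

job = JobToBeDone(
    role="mobile sales rep",
    trigger="new sales collateral comes online",
    motivation="get an SMS on my phone",
    outcome="take it on the road with me",
    performance_goals=["collateral available on the phone within minutes"],
)
print(f"When {job.trigger}, I want to {job.motivation}, so I can {job.outcome}.")
```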

Here’s a one-page enterprise JTBD example you can turn into a template and use yourself: 

A JTBD template completed for a mobile sales rep. You can use this template approach to frame the job to be done and as a way to shape a story about the job itself.
In our example, a sales rep on the road might hire a chatbot to enter new sales prospect details rapidly without using a special device, easily capture meeting locations and images of the opportunity, and share that data in SaaS for later work. She might want to do all this in 5 seconds. So, a voice-driven chatbot using out-of-the-box smartphone features such as the camera and GPS seems like a good opportunity for a chatbot solution.

Using the JTBD approach also helps shape and communicate the job story. A story untold is not a story. Here’s your storytelling formula: SUCCESS — Simplicity, Unexpectedness, Concreteness, Credibility, Emotion, Story.

Phrase your job story well; using bullet points is fine. As John Dewey put it:
“A problem well put is half solved.”
You can tell your JTBD story to customers, product managers, developers, investors, designers, or any stakeholder (although C-level management will probably just want an executive picture of the return on investment for their budget spend).


Competition is Fierce, from Unexpected Combatants

Remember: There is competition with other tools for hiring your product or service.

In the case of our sales rep, she will still carry a notebook and pen (it helps recall), sticky/Post-it notes, an Apple iPad, a Windows laptop, and maybe a Samsung smartphone.

This hire competition is at the job level; it's not about the category the tool fits into. As Intercom points out, Microsoft Skype, for example, addresses the same purpose as an airline seat for a business trip: communicating with colleagues.

Chatbots compete against other tools and methodologies on the job hire level in many arenas — communicating, scheduling engagements, ordering, onboarding of employees, managing things to do, marketing, educating, entertaining, finding simple solutions and fixes — and must do so without special training, equipment, and so on.

But fundamentally, the JTBD approach really means the end of design as many people have come to think they know it. Design now becomes about the job the chatbot is being hired to perform by a human.

tl;dr? OK, Summary

So, to design and build the right product right, remember these six simple points.

 

Build the Right Thing

1. Use the JTBD framework: Why is this chatbot (or other product or service) being hired to do the job by this person? Watch their actual job behaviour, ask, and listen.

 

Build It Right


3. Have a clear primary job to be done. What is the 80/20 effort of the job? Avoid corner cases, nice-to-haves, and “what ifs?”.

Consider Microsoft Excel; a super-popular desktop spreadsheet and now service-offering, packed with features. But how much of that functionality do you use and how often? Probably 20% of the features or less to do 80% of the jobs you need Microsoft Excel for. There are other use case examples from the enterprise to get you thinking along those lines for chatbots.

4. Sketch and wireframe your solution first. Balsamiq, for example, offers great digital solutions, and there are now conversational UI specific options. But all you really need to start is a pen, paper, and ideas. You do not need to be an artist.

Using wireframes means you can also apply familiar UX design patterns, making for productive development. Document any open questions, use an Agile backlog and try collaborative, integrated cloud tools like Slack, or whatever suits your context of work, to agree your sketch with stakeholders.

Balsamiq wireframing stencils. There are awesome tools for designers, developers, and product managers to sketch and agree on that chatbot JTBD.
5. Use an accelerator design kit or platform with suitable backend AI and NLP capability to innovate fast. Leverage usability heuristics, use real text and voice scripts in your designs (in the language of the hirer), and iterate with the key stakeholders to agree on a chatbot solution before you code anything. This agreement eliminates surprises later!

Smartly.ai, a platform for creating conversational UI chatbots for SaaS, across different devices.

6. Don't dribblise your design. With chatbots and conversational interactions, stay true to the idea that no UI is the best UI of all. The UX design toolkit is now API calls, AI, NLP, ML, and integrations with services and regular device features (GPS, SMS, camera, and the microphone, for example). Design platforms and kits often provide UI widgets if and when needed (for maps, attachments, avatars, and so on).
No UI is the best UI to create that killer new user experience.

 

Job Satisfaction? Test It!

Finally, do remember to test your innovation.

UI heuristics and baked-in platform usability can do some heavy lifting to prove your idea's value, but for JTBD the most reliable tests of having solved a design problem are those that focus on real tasks by real people doing real jobs.

Plan to test your innovation before going live and after, and iterate if needed. Keep focused on the JTBD. If that is wrong, then no amount of fancy visual UI design is going to fix it and improve switching or adoption from another app. As Seymour Chwast puts it:
“If you dig a hole and it’s in the wrong place, digging it deeper isn’t going to help.”
Remember, people are hiring your chatbot to do a job. Hiring. In the age of the cloud, it’s easy to switch chatbots, apps, and services because of a subscription model.

There's competition for that app, service, and chatbot job hire. So, be competitive!

=======




Ultan O’Broin (@ultan) is a digital transformation and user experience consultant working with startups and the STEAM community, specialising in conversational UI and PaaS4SaaS. With over 20 years of product development and pre-sales design thinking outreach with major IT companies in the U.S. and Europe, he also speaks and blogs about UX and technology.

Watch out for ConverCon 2018 in Dublin, Ireland in September 2018 (previously reviewed), an event with chatbots at its heart. I might see you there to chat in person about the JTBD of your chatbot!


All images and screen captures in this article are by Ultan O’Broin. Copyright or trademark ownership vested in any screen capture is acknowledged, where applicable.


Thursday, August 16, 2018

Post-Editing and NMT: Embracing a New Age of Translation Technology

This is a guest post by Rodrigo Fuentes Corradi and Andrea Stevens from the SDL MT project management team.

SDL is unique amongst LSPs in having both deep MT development expertise and a large pool of in-house professional translators who can communicate much more easily with the MT development and management team on engine specifics, and provide the kind of feedback that leads to best practices in MTPE projects and delivers overall superior quality. MT in many localization contexts only works if it delivers output that is useful to translators rather than frustrating them. This quality is best achieved by an active and productive (constructive) dialogue between translators, project managers, and developers, which is the modus operandi at SDL.

While most MT developers have a strong preference for automated quality metrics like BLEU and LEPOR, the SDL MT project management teams discovered many years ago that competent human assessments are much more reliable for determining whether an MT system is viable for an MTPE project, and they have developed very reliable methods to make this determination efficiently and effectively. NMT presents special challenges in MTPE use scenarios, as the automated metrics are generally even less reliable than they were with SMT. NMT fluency can sometimes veil accuracy issues, and thus special training and modified strategies are needed, as this post describes. Also, we often see that the quality of the MT output can be significantly better than a BLEU score might suggest, as this metric is often lower for NMT systems for various reasons.

We are beginning to see other research suggesting that NMT does often provide productivity benefits over SMT, but that it requires specially updated training and at least some understanding of the new challenges presented to established production processes by the compelling but sometimes inaccurate fluency of NMT systems.

 =====

Post-editing means different things to different people. To the corporate world, it means making localization budgets go further by translating more for less. To freelance translators, it may well mean an infringement of their craft and livelihood. We know that freelance careers are the result of several years of study, hard work and perseverance to create a client portfolio based on a reputation for quality, consistency and on-time delivery.

Language Service Providers like SDL are often stuck in the middle, working to satisfy the complex demands of our clients while nurturing the linguistic and creative talent in our supply chain. The truth is that in today’s localization market, Machine Translation Post-Editing (MTPE) is very much a reality and an answer to the unstoppable content explosion that we are experiencing.

How do we find a balance and how can we, as an LSP committed to machine translation (MT) and post-editing, do the impossible and manage both client expectations and supply chain needs?
At the heart of the conundrum lies content. What content needs to be developed and translated, what is its purpose, who does it serve, and how can our clients bring it to their customers in the most appropriate and cost-efficient way possible? Content and how well it communicates to a local audience is what defines a company in today’s fast-moving markets. As a business changes and evolves, so does content, creating the need for almost constant innovation and reinvention. This undoubtedly poses a challenge for translators who need to commit to life-long learning to understand new concepts, trends, and challenges and produce materials in the target language that are fully adapted to local markets and audiences.

Content keeps growing exponentially and our challenge is to understand the vast amounts of diverse content that our customers create and how to best deal with it. All content has value and answers a specific requirement, but for example, there is a difference between content created for a technical knowledge base, content for an advertising feature or even regulatory content. For content that has a short shelf-life, a straight MT solution without human intervention may be perfectly acceptable whereas, at the other end of the spectrum, more creative content may require specialized translators or transcreation. In between, there is a wide range of content that could be best served with a hybrid human and machine approach.

As a translation work-giver and machine translation provider, it’s up to us at SDL to navigate the challenges posed by the ongoing content explosion in partnership with our clients and our supply chain. We need to be realistic about the role that MTPE plays in today’s translation marketplace and acknowledge the advantages for all involved. At the same time, challenges and constraints cannot be swept under the rug, but need to be openly addressed and discussed for full transparency.


“While NMT is inspired by the way the human brain works, it does not learn languages quite like humans do – humans learn to speak to communicate with each other in a wide social context.”



One of the challenges is to make post-editing sustainable. Post-editing is established as a standard solution for a wide variety of content types and language pairs. The application of MT is constantly pushed further by the commercial need to respond to the overall content growth while dealing with limited or unchanged localization budgets. As a result, the MTPE footprint continues to expand beyond initially successful languages and domains into new territories, such as regulatory life sciences content or even marketing.

To do this successfully, we rely on technology advances and improvements, some of which have only become possible over the last one to two years. Customized solutions and real-time adaptive machine translation are among the tools that improve the post-editing experience for translators, but the biggest step forward is surely the arrival of Neural Machine Translation.

Neural Machine Translation

Neural Machine Translation (NMT) has rightly been described as a revolution rather than an evolution. With its core developed entirely from scratch, NMT offers amazing opportunities for innovation. Its powerful architecture paradigm captures not only text and syntax information but actual meaning and semantics, leading to the improvements in translation quality that we are seeing across the board. This is something all MT providers, including SDL, agree on after extensive testing across language pairs and combinations.



How is NMT achieving this? In short, NMT uses artificial neural networks, which mimic the human brain with its interconnected neurons that help us understand the world around us: what we see, touch, smell, taste, or hear. An NMT system learns from observing correlations between the source and the target text and modifies itself to increase the likelihood of producing a correct translation. While NMT is inspired by the way the human brain works, it does not learn languages quite like humans do – humans learn to speak in order to communicate with each other in a wide social context. NMT systems are still trained on bilingual data sets but promise a noticeable uplift in translation quality through a more efficient framework for learning translation rules. NMT systems use an input layer where the text for translation is fed into the system, a hidden layer (or multiple hidden layers) where processing takes place, and an output layer where we obtain the translation.
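As a toy illustration only (random weights stand in for what a real system learns from bilingual data, and real NMT works on whole sentences in context, not single tokens), the input-hidden-output flow described above might be sketched like this:

```python
# Toy sketch of the input -> hidden -> output flow; NOT a working NMT system.
# A real system learns these weights from bilingual data and models context.
import numpy as np

rng = np.random.default_rng(0)
src_vocab, tgt_vocab, hidden_size = 1000, 1200, 64

W_in = rng.normal(size=(src_vocab, hidden_size))   # input layer weights
W_out = rng.normal(size=(hidden_size, tgt_vocab))  # output layer weights

def translate_token(src_token_id: int) -> int:
    x = W_in[src_token_id]         # input layer: source token becomes a vector
    h = np.tanh(x)                 # hidden layer: nonlinear processing
    scores = h @ W_out             # output layer: a score per target word
    return int(np.argmax(scores))  # pick the most likely target token

print(translate_token(42))         # some target token id (meaningless here)
```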


“When meaning and semantics are represented through math, words with similar meanings tend to cluster together.”

 


The hidden layer contains a vast network of neural nodes where the input is encoded into a vector of numbers to give a predictive output. Essentially, we are applying math to the problem of language and translation. When meaning and semantics are represented through math, words with similar meanings tend to cluster together. This is how we know that the NMT system starts to learn the semantics of words. When words have several meanings, they appear in different clusters; for example, ‘bank’ can appear in the geography or the finance cluster. It is then even possible to apply further math to the vectors, as shown in the graphic below: if you take the vector ‘king,’ subtract the vector ‘man’ and add the vector ‘woman,’ the result is a vector that is exactly or very close to the vector ‘queen.’
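A minimal sketch of that arithmetic, using made-up 3-dimensional vectors (real embeddings have hundreds of dimensions, learned from data):

```python
# Illustrative word-vector arithmetic: king - man + woman lands near queen.
# The vectors are invented for this example; real ones are learned.
import numpy as np

vec = {
    "king":  np.array([0.8, 0.9, 0.1]),
    "man":   np.array([0.7, 0.1, 0.1]),
    "woman": np.array([0.6, 0.1, 0.9]),
    "queen": np.array([0.7, 0.9, 0.9]),
}

result = vec["king"] - vec["man"] + vec["woman"]

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

closest = max(vec, key=lambda word: cosine(result, vec[word]))
print(closest)  # "queen" -- the closest vector to king - man + woman
```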


For NMT, we use deep neural networks, which are better for long-range context and dependencies. This is particularly important when it comes to languages for which the benefits of statistical machine translation were limited. Good examples are language pairs with long-distance word movement, such as Japanese, where the clause structure is very different from English, or languages with long-distance dependencies, such as German and Dutch.

One of the main advantages of NMT is its very fluent translation output. However, it is important to understand that this fluency can sometimes mask the fact that the content of the automated translation is not correct.

This is just one of the reasons why post-editing is still so important, even when working with NMT output. It is essential that translators understand the behaviors and patterns of NMT to be able to take full advantage of this promising technology.

To prepare translators for working with NMT, training and active engagement with the supply chain are essential. This is part of a much larger training effort that includes our soon-to-be-updated Post-Editing Certification Program. Training will not only help our vendors prepare for working with new technologies but also ensure that everyone is ready for the technological expansion of MTPE.


Working Together


Being a good work-giver is key to jointly facing new challenges with our supply chain. Increasingly, this means guiding projects with an expert understanding of tools, domains, processes, and the intended audience. The combination of these factors will bring sustained success and quality continuity.

New technologies such as NMT can prove disruptive; transparency and training will be key to reassure and prepare our vendors. One key challenge will be to align the improved NMT technology with post-editing experience and a strategy for on-boarding our supply chain.

The intent is to utilize NMT technology across a wide range of domains and content types, and it is important to collect valid data points to proactively assess the chances of success before reaching out to freelancers. Furthermore, when approaching freelancers, it has always been helpful to share these findings and provide guidelines for MT behaviors that can help their post-editing decisions.

In summary, so much of the success is centered on good communication, which is dependent on openness, sharing materials and providing channels to discuss issues and answer questions. In this respect, we see a responsibility to drive these initiatives with a structured communication plan that includes webinars and open days held in our language offices.

 We are witnessing the ongoing growth of Artificial Intelligence and Machine Learning in many aspects of our daily lives, from self-driving cars to medical diagnosis and intervention. Technology is a huge part of how we live and what we do, and this particularly holds true for translators. Post-editing works at the intersection of humans and machines, and machine translation is one of the most advanced tools in the translators’ toolbox to future-proof the profession for a new generation of translators. MTPE is, of course, a choice that everyone needs to make for themselves, but with new technologies such as Adaptive or Neural MT working for translators and the growing reach of MT into new domains, this is not a choice to be taken lightly. Technology developments are not reducing the role of translators, but rather, are changing and enhancing it, opening a host of new opportunities. This is a journey we need to embark on together for continued learning, support, and feedback.

Rodrigo Fuentes Corradi

MT Business Consultant, SDL

Andrea Stevens


 MT Translation Manager, SDL

 

  The authors have produced a white paper on "Best Practices for Enterprise Scale Post-Editing" that can be accessed at this link.