Pages

Wednesday, May 9, 2012

Omnilingua: A Profile of Effective Translation Quality & Productivity Measurement

One of the major challenges that enterprises face in using increased automation in business translation is understanding the productivity and quality impact of any new automation strategy. As the discussion of quality, and even productivity, in the industry is often vague and ill-defined, it is useful to show an example of a company that understands with great precision what the impact is before and after the use of new translation production technology.

The key questions that one needs to understand are:
· What is my current productivity (time taken, words produced) to achieve a defined quality level?
· What impact does new automation, e.g. an MT system, have on my existing productivity and final delivered quality?


Kevin Nelson, a Managing Director at Omnilingua Worldwide, recently presented part of the Asia Online webinar "Meaningful Metrics for Machine Translation". Omnilingua is a company that prides itself on building teams that deliver definable quality, and is recognized throughout the translation industry as a company that pays particular attention to process efficiency and accurate measurement. 

Thus, when they embarked on their use of MT 5 years ago, they took great care to measure and establish that MT was in fact enhancing production efficiency before it was put into production. In the same way, before making any changes to their current MT deployment, they wanted to make sure that any new initiative they embarked on was in fact a clear and measurable improvement over previous practice. As Kevin Nelson said: “The understanding of positive change is only possible when you understand the current system in terms of efficiency.”
During the webinar, Kevin discussed how and why Omnilingua performs detailed measurement. To demonstrate the benefits of measurement to Omnilingua, Kevin presented a case study that measures and compares an Asia Online Language Studio™ custom MT engine with a competitor's MT engine, and also studies their impact on human translators. 

Omnilingua first deployed MT 5 years ago with Language Weaver. Recently, Omnilingua reached a point where they had to decide whether to retrain and upgrade their aging legacy MT engine or to invest in a new MT engine with Asia Online. 
 
Omnilingua engaged Asia Online at the end of 2011 to build a custom MT engine in the technical automotive domain, translating from English into Spanish using similar data to the legacy competitor's MT system. As this was Omnilingua's first Language Studio™ custom MT engine, Omnilingua wanted to verify that it was a clear and measurable improvement over the competitor's legacy MT technology before making any changes in their production environment. 

Omnilingua has long-term experience in conducting valid “double-blind” studies that produce statistically relevant results that measure machine quality, human quality and effort. The same careful measurement process was embarked upon to determine if their new MT initiative with Asia Online was an improvement. 
The understanding of positive change is only possible when you understand the current system in terms of efficiency.
...
Any conclusion about consistent, meaningful, positive change in a process must be based on objective measurements otherwise conjecture and subjectivity can steer efforts in the wrong direction. 

– Kevin Nelson, Omnilingua Worldwide
At the heart of Omnilingua’s process and quality control procedures is the long-term, continuous use of the SAE J2450 quality assessment and measurement process. Long-term use of a metric like this provides well-understood quality benchmarks for projects, individual customers, and also for MT quality, benchmarks that are more trusted than automated metrics like BLEU, TER and METEOR, which are available with the free Language Studio™ Pro measurement tools from Asia Online. 

While there is effort and expense involved in implementing SAE J2450 as actively as Omnilingua does, the advantages provided by the measurements allow for a deep understanding of translation quality and the associated effort. Long-term use of such a metric also dramatically improves the conversation regarding translation quality between all the participants in a translation project, as it is very specific, impersonal, and clear about what quality means.

Kevin listed the following benefits of the SAE J2450 measurement standard:
  • Built as a human assessment system:
    • Provides 7 defined and actionable error classifications.
    • 2 severity levels to identify serious and minor errors.
  • Provides a measurement score between 0 and 1:
    • A lower score indicates fewer errors.
    • The objective is to achieve a score as close to 0 (no errors/issues) as possible.
  • Provides scores at multiple levels:
    • Composite scores across an entire set of data.
    • Scores for logical units such as sentences and paragraphs. 
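As a rough illustration of how such a score is computed: a J2450-style score is the sum of weighted error points divided by the source word count. The category names and (serious, minor) weights below follow the commonly published J2450 values, but this sketch is illustrative only and is no substitute for the standard itself:

```python
# Illustrative SAE J2450-style scoring: weighted error points divided
# by source word count, so 0.0 means no errors at all.
# Weights are the commonly published (serious, minor) J2450 values;
# consult the SAE J2450 standard for authoritative definitions.
J2450_WEIGHTS = {
    "wrong_term":      (5, 2),
    "syntactic_error": (4, 2),
    "omission":        (4, 2),
    "word_structure":  (4, 2),
    "misspelling":     (3, 1),
    "punctuation":     (2, 1),
    "miscellaneous":   (3, 1),
}

def j2450_score(errors, word_count):
    """errors: list of (category, severity) pairs, severity 'serious' or 'minor'."""
    points = 0
    for category, severity in errors:
        serious, minor = J2450_WEIGHTS[category]
        points += serious if severity == "serious" else minor
    return points / word_count

# A 200-word sample with one serious wrong term and two minor misspellings:
sample = [("wrong_term", "serious"), ("misspelling", "minor"), ("misspelling", "minor")]
print(round(j2450_score(sample, 200), 4))  # 7 points / 200 words = 0.035
```

Because the score is normalized per word, composite scores across a whole data set and scores for individual sentences or paragraphs can be compared directly, which is what makes the metric usable at multiple levels.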

In order to determine if MT has been successful, production efficiencies and improvements must be measurable. This not only shows improvement in MT over time, but ensures that the MT-based process is more efficient than the previous human-only process while delivering comparable translation quality. A recent survey by MemSource indicated that over 80% of MT users have no reliable way of measuring MT quality. Omnilingua uses multiple metrics to precisely define the degree of effort required to post-edit MT to client-deliverable quality. This quantification of the post-edited MT (PEMT) effort includes raw SAE J2450 scores for MT vs. the equivalent historical human quality SAE J2450 scores, in addition to time study measurements and Omnilingua’s own proprietary effort metric, OmniMT EffortScore™, which is based on 5 years of measuring PEMT effort at a segment level. These different metrics are combined and triangulated to deliver very reliable and trusted measurements of the effort needed for each PEMT project. 

Through the above 3 metrics, Omnilingua is able to verify that the gains in their production process are measurably greater than the cost of deploying MT. Omnilingua also makes efforts to “share cost savings and benefits across the value chain with clients and translators”. Through this approach, Omnilingua has been able to keep the same team of post-editors working with them continuously for 5 years. This is possibly the greatest benefit of understanding what you are doing and what impact it has.

Omnilingua used the SAE J2450 standard to measure the improvement of the new Language Studio™ custom engine over the competitor’s legacy MT engine. SAE J2450 measurements were made on both the raw MT and the final output after post-editing the MT from both custom engines.
SAE J2450 Error Count Comparison:
Asia Online Language Studio™ Vs. Competitor
After reviewing the detailed measurement data Omnilingua made the following conclusions:
  • There were far fewer errors produced by the Language Studio™ custom MT engine than the competitor’s legacy MT engine.
    • Notably, there were fewer wrong meanings, structural errors and wrong terms from the Language Studio™ custom MT engine; these were “typical SMT problems” in the competitor’s legacy MT engine.
  • 52% of the raw MT output from the Language Studio™ custom MT engine had no errors at all, compared to 26.8% for the competitor’s legacy MT engine.
    • The Language Studio™ custom MT engine measured was the very first iteration of the engine, with no improvements or corrective feedback applied.
    • Many of the errors from the Language Studio™ custom MT engine were minor spelling errors relating to capitalization. A majority of the "spelling errors" were traced back to a legacy portion of the client-supplied translation memory historically used for case-insensitive leverage.
    • Omnilingua found the errors easy to correct with tools provided by Asia Online.
  • The final translation quality after post-editing was better with the new Language Studio™ custom MT engine than the competitor’s legacy MT engine and also better than a human only translation approach.
    • Terminology was more consistent with a combined Language Studio™ custom MT engine plus human post editing approach.
  • When surveyed, post editors perceived that both MT engines were about the same quality and effort to edit. However, human perceptions can often overlook what objective measurements capture.
    • The measured results show that the Language Studio™ custom MT engine was considerably better in terms of translator productivity and produced a final product that had fewer errors because of the higher quality raw MT output provided to the post-editors.
    • The following table summarizes the key results for both the raw MT and the final post-edited MT:
   Asia Online Language Studio™ Vs. Competitor

   Metric                                    Factor
   Total Raw MT SAE J2450 Errors             2 x fewer
   Raw MT SAE J2450 Score                    2 x better
   Total Post-Edited MT SAE J2450 Errors     5.3 x fewer
   Post-Edited MT SAE J2450 Score            4.8 x better
   Post-Editing Rate                         32% faster

Omnilingua has already seen translation quality from the first version of their Language Studio™ custom MT engine improve beyond the above levels by providing basic feedback using the tools provided by Asia Online. As Omnilingua continues to measure quality periodically, the metrics above are expected to show further improvement.

We found that 52% of the raw original output from Asia Online had no errors at all – which is great for an initial engine

– Kevin Nelson, Omnilingua Worldwide
The entire presentation video and slides can be viewed in the Asia Online webinars library starting at about 31 minutes.

Thursday, March 22, 2012

Exploring Issues Related to Post-Editing MT Compensation

As the practice of post-editing MT continues to gain momentum, and perhaps even some acceptance as a legitimate practice, there continue to be questions raised about how to do this in a way that is equitable and beneficial to all the stakeholders.  There was an interesting discussion in LinkedIn on this subject where it is possible to see the perspectives of tools developers, LSPs and clients, and even some translators in their own words. Some of the things that stand out from this discussion are the general lack of trust between constituents in the translation production chain, the inability to share operational risk between stakeholders, and the difficulty in defining critical elements in the process, e.g. MT/final translation quality and accurate productivity implications.


The issue of equitable compensation for post-editors is an important one, and it is essential to understand the issues related to post-editing, which many translators find to be a source of great pain and inequity.  MT can often fail or backfire if the human factors underlying the work are not properly considered and addressed. 

From my vantage point, it is clear that those who understand these various issues and take steps to address them are most likely to find the greatest success with MT deployments. These practitioners will perhaps pave the way for others in the industry and “show you how to do it right” as Frank Zappa says. Many of the problems with PEMT are related to ignorance about critical elements, “lazy” strategies, and lack of clarity on what really matters, or just simply using MT where it does not make sense. These factors result in the many examples of poor PEMT implementations that antagonize translators. 

Some of the key elements that need to be understood or implemented to maximize the probability of successful PEMT outcomes include:
  • Customize your MT engine for your domain requirements; generally, MT engines make the most sense if you do repeat/ongoing work in the same domain and language. And be wary of any MT vendor or LSP who assures you that “for a nominal service charge you could reach nirvana tonight.” If you do this properly there are no instant solutions and few shortcuts.
  • Use objective, mostly transparent and repeatable measurements of efficiency and quality that are trusted by key stakeholders, e.g. SAE J2450.
  • A good understanding of the cost structure and efficiency of the pre-MT translation production process (human TEP = Translate, Edit, Proof). If you don’t understand where you are, how will you know what direction is forward? It makes little sense to deploy MT if you cannot improve upon the old process in some meaningful way, i.e. timeliness, cost, or quality.  Tradeoffs will need to be made, as it is not possible to improve all three of these elements at once.
  • An understanding of the “average translation quality” of the MT engine output. This can be determined at the outset by sample-based tests, and the results are useful input for establishing fair rates for the full project. It should be understood that MT engines that produce higher quality will require less effort to get to a level where the final delivered translation is equivalent to that produced by a standard TEP process. Really good engines can produce an average output segment that looks like an 85% fuzzy match or better from translation memory. This kind of system will also produce a large number of 100% matches for new segments, which still need to be verified, and editors need to be compensated appropriately for this validation. Learn how to interpret and link measures like J2450, BLEU, TER, and Edit Distance to create your own unique measurements so that you can quickly understand what you are dealing with. Badly done, these metrics are a black hole of misunderstanding and wrong conclusions; remember that automated metrics are only as good as the users' understanding of them. Human assessments are ALWAYS needed and are always used in successful PEMT case studies.  If a survey of 90 interested users in March 2012 is to be believed, “over 80% of MT users have no reliable way of measuring MT quality”. If this is indeed true, it surely explains why translators are so outraged and why so much PEMT yields less than satisfactory results.
  • An understanding of the target translation quality level. Interestingly, the easiest case to define is one where a client requires the same quality as they get from a standard TEP process. MT in this case is a draft version of the translation step of the TEP process and will still require the EP, or edit and proof, steps. Expect your EP costs to rise as your T costs fall. It is much harder to define the quality level when MT output will only be “slightly” edited for “understandability” in very high-volume knowledge-based projects. “Slightly” or “lightly” is very hard to define and even harder for a translator to interpret. Studies (see the Sharon O’Brien links below) have shown that translator productivity is lower with this type of task than with one where the target is TEP quality, so it is important to provide many examples of what is required. In these high-volume cases, it may be more useful to follow the 80/20 rule and focus 80% of the post-editing effort on the 20% of content that is most important. Often this is best done through corpus analysis to define the human focus, and then compensating editors for the corrections they make, or at a fair hourly rate, i.e. at a rate they would earn on average for TEP work.
  • An understanding of the effort required to raise the MT to the target level. Once you understand your average MT output quality and have a clear target, it is possible to make an estimate of the post-editing effort. This should be the key determinant of post-editor compensation. If you wish to build a long-term relationship with post-editors, it would be wise to compensate them fairly. Thus, if a system raises translation production efficiency consistently, I would recommend that you compensate editors at a rate that ensures their net income is higher than it would be in a TEP process. (So easy for me to say.) The proper rate can only be learned through experience, so there are few useful generalizations to be made. The quality of your measurement systems really matters here and can help you get to the “right” (win-win) rate faster. Also, it would probably be better to err on the side of overpaying rather than underpaying, as shown in these completely hypothetical examples:
    • Average TEP rate 15 Cents, Average Daily Translation Output 2,500 words  =  $375  per day 
    • MT Engine 1: Average Post Edit Translation Output 7,000 words, Average Rate 7.5 cents = $525 per day
    • MT Engine 2: Average Post Edit Translation Output 5,000 words, Average Rate 10 cents = $500 per day
    • MT Engine 3: Average Post Edit Translation Output 4,000 words, Average Rate 12 cents = $480 per day
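A quick sketch of the arithmetic behind these hypothetical examples, including the break-even per-word rate at which a post-editor merely matches their TEP income. All figures are the illustrative ones above, not recommended rates:

```python
# Hedged sketch: daily post-editor income under the hypothetical rates
# and throughputs above, plus the break-even per-word rate.
# All numbers are illustrative examples, not recommendations.

def daily_income(words_per_day, rate_cents):
    """Daily earnings in dollars for a given per-word rate in cents."""
    return words_per_day * rate_cents / 100.0

tep_daily = daily_income(2500, 15)  # baseline TEP scenario: $375/day

engines = {
    "MT Engine 1": (7000, 7.5),
    "MT Engine 2": (5000, 10),
    "MT Engine 3": (4000, 12),
}
for name, (words, rate) in engines.items():
    print(name, daily_income(words, rate))  # 525.0, 500.0, 480.0

def break_even_rate(tep_daily, pemt_words_per_day):
    """Per-word rate (cents) below which a post-editor earns less than TEP."""
    return tep_daily / pemt_words_per_day * 100.0

print(round(break_even_rate(tep_daily, 7000), 2))  # 5.36 cents/word
```

Note how all three hypothetical engines pay above the break-even rate, which is the point of the over-paying recommendation: the faster the MT makes the editor, the more headroom there is to share the gain.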
Omnilingua is an example of a company that has long-term experience (5+ years) with PEMT and has developed sophisticated processes and methodology to understand this gap and the human effort involved with rare precision. They have been committed users of SAE J2450 for many years, and thus understand quality and productivity with distinctive precision. You can see a video presentation of the Omnilingua approach in their own words, starting at 31:00. It is my opinion that very few LSPs can make this PEMT effort assessment with any precision. This is where superior LSPs will excel, and this competence should become a clear differentiator in future. (Ask your LSP the question: “How much effort is needed to make the MT output indistinguishable from standard TEP?” and watch them fidget around a bit, as FZ says.)
 Remember also that most often, starting with a good professionally developed engine will produce better ROI than starting with quick-and-dirty DIY options that require much more post-MT labor to raise the output to target levels.
  • Expect and plan for a learning curve and develop post-editor training materials. MT requires an investment in engine development and process learning, as well as in measurement systems. However, once this new process is understood it is possible to have success, even with tough languages like Hungarian, as Hunnect has shown with their training program. Not all translators are interested in post-editing, and it is important to determine this early and then provide guidance to those who are interested and best suited to this kind of translation work.
  • While accurate quality measurements are important it is also critical to understand productivity impacts in as much detail as possible over time. Best practice suggests that it is important to monitor the use of MT through the various learning stages to best understand the financial and productivity impact. This may not be the same for every language as MT does not work equally well in all languages. Some MT systems will continuously improve and some will not. LSPs will need to decide where they should invest: MT technology, measurement systems and processes, PEMT training and new workflow, and/or solving new translation problems like customer support chat. It is unlikely that many will be able to do it all and the overall complexity and time taken to achieve mastery of all these new initiatives should not be underestimated.
  • Involve some translators in the MT engine steering process to identify major error patterns. This has been shown to produce much more useful systems and higher productivity when you go into production. These translators can also help establish meaningful and trusted links between raw MT quality and reasonable translator productivity expectations.
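One of the simplest productivity proxies among the automated measures mentioned above (TER, Edit Distance) is the edit distance between raw MT output and the final post-edited segment. Here is a minimal sketch of a word-level Levenshtein distance normalized by post-edited length; the normalization convention is one assumption among several in common use:

```python
def levenshtein(a, b):
    """Minimum number of insertions, deletions and substitutions
    turning sequence a into b (classic dynamic programming)."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        curr = [i]
        for j, y in enumerate(b, 1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (x != y)))  # substitution
        prev = curr
    return prev[-1]

def edit_effort(raw_mt, post_edited):
    """Word-level edits per post-edited word: 0.0 means the MT segment
    was usable as-is; higher values mean more correction effort."""
    mt, pe = raw_mt.split(), post_edited.split()
    return levenshtein(mt, pe) / max(len(pe), 1)

print(edit_effort("the engine translate the text",
                  "the engine translates the text"))  # 0.2 (1 edit / 5 words)
```

A metric like this only becomes trustworthy when it is calibrated against human assessments over many segments, which is exactly the triangulation the Omnilingua example describes.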
Success will still require collaboration and trust (which rarely exist) between corporate customers, tool vendors, LSPs, and translators. The stakeholders will also need to understand that the nature of MT requires a higher tolerance for “outcome uncertainty” than most are accustomed to. Though it is increasingly clear that domain-focused systems in Romance languages are more likely to succeed with MT, it is often not clear a priori how good an MT engine will be, and investments need to be made to reach a point where this can be understood. The stakeholders all need to understand this and work together, making concessions and contributions, to make this happen in a mutually beneficial way. This is of course easier said than done, as somebody usually has to put some money down to begin this process. The reward is long-term production efficiency, so hopefully buyers are willing to fund this rather than go the fast-and-dirty MT route as some have been doing.

I hope we have all reached a point where we understand that arbitrarily setting lower pay rates for MT-related cleanup work is unwise and that the lowest initial cost of building MT engines is rarely the best TCO (total cost of ownership) with MT technology. MT in 2012 is still very complex and requires real financial and intellectual investment to build real competency. 

I suspect that the most compelling evidence of the value and possibilities of PEMT will come from LSPs who have teams of in-house editors/translators on fixed salaries, who are thus less concerned about the word vs. hourly compensation issues. For these companies, it will only be necessary to prove, first, that MT is producing high enough quality to raise productivity, and then to ensure that everybody is working as efficiently as possible (i.e. not "over-correcting"). I would bet that these initiatives will outperform any in-house corporate MT initiative in quality and efficiency.

I have seen that there are LSPs out there that know how to build the ecosystem to make this a win-win scenario for all stakeholders, so I know it is possible, even though it is not very common in 2012. In these win-win examples, the customer and the LSP understand the risks, and post-editors are paid more when the engine is not great and less when it is. Quality and productivity-related information flows freely in the production chain and is trusted, and often translators are compensated for higher productivity. Thus, I think there are three basic principles to keep in mind in developing fair and equitable compensation practices:
  1. Measure productivity and quality accurately, frequently, and objectively and share critical findings. Ensure that the links between MT quality and productivity expectations are realistic.
  2. Train and select the right people for post-editing work and monitor progress carefully, especially in the early stages.
  3. Link the compensation to the effort required to complete the job, which means you need to have some understanding of this effort. Not all PEMT work is equal; when uncertain about the correct rates, initially err on the side of overpaying rather than underpaying to build a loyal workforce.
The LinkedIn discussion goes into many more details and is worth a look to get a broader and more varied perspective on the post-editor compensation issue. It would be wonderful to hear other perspectives on this in the comments. Practitioners, especially LSPs, should understand that the real benefit of making these investments is long-term cost and productivity advantages that are sustainable and defensible. This, however, requires “hard work” as George Bush said, apart from the time and money investment, and has a learning curve. Finally, I would warn you that we live in a time of Moses Madness, and many yearn for quick fixes that cost nothing. These quick fixes can often backfire, and we should heed the wise words of Frank Zappa in the song Cosmik Debris:
The Mystery Man came over
And he said: "I'm outa-site!"
He said, for a nominal service charge,
I could reach nirvana tonight

If I was ready, willing 'n able
To pay him his regular fee
He would drop all the rest of his pressing affairs
And devote his attention to me
But I said . . .
Look here brother,
Who you jivin' with that Cosmik Debris?

For those interested, here are some other references that may be useful in understanding PEMT issues from other perspectives:

Wednesday, February 29, 2012

Highlights from Recent Coverage on MT Related Subjects

This is a summary of what I think are some interesting recent articles on the web on subjects relating to MT.

The Big Wave, an Italian initiative that focuses on the changes happening in language technology, released details and proceedings papers from their conference held in Rome in the summer of 2011. There are many interesting papers related to MT, controlled language and collaborative translation issues. These papers provide a balance of practitioner, academic and user perspectives on these subjects and are worth a close examination.
Some highlights include:

Linguistic resources and MT trends for the Italian language by Isabella Chiari discusses the implications of various kinds of data and their value for building data-driven MT systems, and provides some specifics for EN <> IT MT systems. The paper is a great overview of the kinds of data that can be used, and also provides insight on what data to use and where to use it, with summary implications. It also makes a great case for the inevitability of corpus-driven approaches in MT (without meaning to) by providing the theoretical rationale for this, and points to the rising momentum of the data-driven approach.

Productivity and quality in MT post-editing – by Ana Guerberof provides specific evidence of the productivity advantage of MT over TM and new segments in a translation workflow.

“In this context, it seems logical to think that if prices, quality and times are already established for TMs according to different level of fuzzy matches then we just need to compare MT segments with TM segments, rather than comparing MT to human translation. “

This study also helps to establish that in reality MT is just a new kind of TM fuzzy match. Even though the test involved only a small number of translators and a small amount of work, it was done with care to ensure the translators saw a mixture of MT, TM and new segments in a way that was “blind”, and the productivity of the translators in processing these different segments was then carefully measured.

 

The results show that, on average, MT segments yielded higher translator productivity than TM or new segments. (We are certain these results would have been more pronounced with an Asia Online customized system.) Interestingly, this study also shows that weaker translators seem to benefit more from MT and TM than the “best” translators. There are some interesting observations in the error analysis, which showed that TM produced the greatest number of final errors.

 

I would hypothesize that a test with more translators in the pool and a bigger set of test data would be useful, as the results would establish the benefits of customized MT much more clearly. It may even be useful to include “bad” or free MT to show how differently translators react to a segment that looks like an 85% match versus one that looks obviously like raw free MT or the instant customization (50% TM match) that some use today.

 

 Why Machine Translation Matters: Trends & Best Practices 

This article summarizes the forces driving the increasing use of MT which can be summarized as:

External Forces in the World at Large:

  • The digital data explosion and its impact on new content that begs to be translated quickly

  • The global thirst for knowledge and information

  • The growing online population that does not speak English or FIGS but represents a major commercial opportunity for global enterprises

Internal Forces affecting Global Enterprises:

  • The growing importance of customer conversations and user generated content which affects purchase decisions and impacts customer loyalty

  • The growing importance of open collaboration in B2C relationships

  • The Rise of Asia and BRICI which requires huge amounts of new content in new languages

These forces together amount to a shift towards more dynamic content, and increase the need to handle streaming flows of information, which simply cannot be done without more automation and MT.

 

MT: the new 'lingua franca' is a fascinating perspective by Nicholas Ostler, a historian of world languages, on how MT is enabling linguistic diversity on the Internet.

“Between 2000 and 2009, Arabic on the internet grew twentyfold, Chinese x20, Portuguese x9, Spanish X7 and French x6, while content in English ‘only’ tripled. Proportionally, then, English is declining in importance relatively quickly. “The main story of growth on the Internet … is of linguistic diversity, not concentration.”
Ostler sees a key role for MT in this new environment. Just as the print revolution changed the ‘ground rules of communication’ in 16th century Europe, he expects that language and translation technology will revolutionize global communications tomorrow, removing the need for a ‘single lingua franca for all who wish to participate directly in the main international conversation.’

Translation errors or nuances in both humans and computers can naturally have an important impact. But there is no point in dismissing MT by judging it by some presumed norm of ‘perfect’ human translation. MT is a revolutionary tool that can help the world communicate better. TAUS will be welcoming Nicholas Ostler as a speaker at the upcoming TAUS European Summit on May 31 – June 1 in Paris.

When Machine Translation Usefulness Is Higher Than Quality:  

This article provides some interesting feedback for those who insist that MT only has value when it approaches human quality, and that since MT rarely reaches human quality it has very limited value. In this study, English news was translated into FIGS by MT, but users were always given access to the English source. The study measured the usefulness of the MT in the context of assessed translation quality, as shown below, and interestingly MT was considered useful even when the quality fell short of excellence. Since this study was performed some time ago, we would assume that the usefulness curve has continued to shift upwards, driven by improving MT quality, whatever some translators may think about the quality.

[Graph: MT usefulness vs. assessed translation quality]

The graph shows that although the machine translation quality was evaluated as being far from perfect, the translation’s usefulness was regarded as higher than its quality. However, this applies only when translation quality is above a certain threshold. Bad or poor quality machine translations are naturally deemed useless.

 

This result confirms what many MT proponents have themselves experienced. Pure MT can be rough – often obscure, frequently humorous – but it can be useful. If one really has little facility in the source language, pure MT translations, however clumsy, can be a boon to understanding and, by extension, to productivity.

 

The graph below illustrates the breakdown of responses to the question, “How would you rate the overall quality of the newsletter translation?” by language group. Note that Germans felt the quality was more lacking, possibly because the MT was poorer in quality or possibly because they had higher expectations. It is actually well known in the MT community that German <> English is more difficult than English <> Romance languages.

[Graph: quality ratings of the newsletter translation by language group]

When we segment answers to the question, “How would you rate the usefulness of the newsletter translation?” by the respondents’ English ability, we see an even stronger vote in favor of MT by the two lower groups. Thus, users with self-assessed poorer English ability found the MT much more useful. In fact, even many who responded as having “Good” English ability found the MT very useful or essential.
[Graph: usefulness ratings of the newsletter translation by English ability]


There have also been some interesting discussions in LinkedIn that cover the dialogue and tension between translators and MT advocates and also expose some of the hyperbole that some MT enthusiasts are prone to. While the discussion does meander between translator emotions about plans to “eliminate” them and less than scrupulous business practices by some MT vendors, it is an interesting thread. In their rush to get on the technology bandwagon some LSPs may overlook the privacy and data security issues that they inadvertently agree to when they use instant Moses and DIY kits.  So caveat emptor.


In Machine Translation in the European Union, Renato provides some summary coverage from a recent conference on the ever-expanding use of MT in the European Union's internal administration.

 

Interview with Translator David Bellos: to say that author and award-winning translator David Bellos knows a thing or two about translation would be an understatement. With over 40 years of experience, he has achieved international recognition for his work as a translator and biographer and has an impressive list of acclaimed publications to his name.

Some interesting excerpts from the interview:

“What I expect is that machines will allow the demand for translation to carry on growing, and for translation to become an ever more integral part of the world we live in.

However, since there are almost 49 million translation directions between all the languages in the world and there is never going to be a 49-million-fold community of translators, machines might well be a useful adjunct to actual translation for many of the underserved directions that exist.