
Life, Translated

The Holy Grail of language translation is to develop a machine-based system that can handle the task transparently and accurately.
Figure: Smartphone applications like Google Translate rely upon machine learning to provide text-to-text and text-to-speech translation services.

Throughout history, communicating between cultures has presented enormous challenges. Kings, prime ministers, presidents, and business executives have long traveled with translators in tow. Tourists have learned to lug along language tapes and phrase books so they can make their thoughts and needs known while visiting a faraway land.

Although numerous gadgets and gimmicks have come and gone over the last several decades, the Holy Grail of translation has always been to develop a machine-based system that can handle the task transparently and accurately. As Kevin Knight, senior research scientist at the Information Sciences Institute at the University of Southern California, notes, “Instantaneous and automated translation would have a profound effect on global communication.”

That future might not be so far away. Machine translation is rapidly moving into the mainstream of society. Web-based services such as Google Translate and Yahoo! Babel Fish make it easy to paste text into a Web browser and almost instantly transform it from one language to another. In addition, a new crop of smartphone apps offers text-to-text and text-to-speech translation features.


Meanwhile, IBM, Systran, and other companies are developing increasingly sophisticated systems for high-end government and business use. And university researchers are turning to machine translation to decipher ancient languages. “The field is advancing rapidly,” says Salim Roukos, senior manager for Multinational Language Technologies at IBM. “There is enormous demand for having machines translate text and speech.”

Breaking Down Barriers

The idea of using machines to translate languages stretches back to the 1940s. At that time, IBM scientists began exploring the idea of using linguistic and statistical decoding methods to automate language translation. However, the computers of that era were not nearly powerful enough to accomplish the task. As a result, machine translation mostly languished until the 1980s.

Over the following decade, as more powerful processors and the Internet took hold, the international research community began building the foundation for machine translation. Early studies “gave researchers a clear vision that machine translation was within our grasp,” Knight explains. As a result, he says, “the race was on to improve the underlying algorithms that drive machine translation.” In fact, computer scientists recognized that effective machine translation was as much a mathematical and statistical challenge as a linguistic task.

Then, in 2007, Google introduced free online translation based entirely on a statistical model. Older online translation services continued to rely on linguistic modeling (some, like Babel Fish, still combine statistical and linguistic methods). Not surprisingly, the use of automated translation services soared, and while Google Translate and others are not perfect, they are now widely used to translate Web pages, tweets, product manuals, and more.

Today, researchers are attacking the translation challenge head-on. Some, like Knight and Roukos, are looking for ways to build systems that translate text more accurately in real time. Others, such as Regina Barzilay, associate professor in the Computer Science and Artificial Intelligence Lab at the Massachusetts Institute of Technology, and Benjamin Snyder, assistant professor of computer science at the University of Wisconsin-Madison, are focusing on deciphering ancient languages for clues into the past, as well as insights into how to make machine translation more effective for modern languages.

Both groups face a similar challenge. “There is a lot more to machine language translation than simply building a word-for-word dictionary,” Knight says. Not only is syntactic transformation different across different sets of languages, but neologisms, idioms, homonyms, and odd expressions make direct translation a daunting task. As a result, researchers focus on breaking language down into meaningful chunks and picking them apart with specialized programs, Knight says.

Improvements to machine translation systems often take place by trial and error. Developers must tweak and modify algorithms—sometimes based on statistical and probabilistic models—to take into account new or previously overlooked phrases or combinations. The goal is for systems to recognize and compare words for context. This process often relies heavily on word alignment, Roukos says. “It’s all about how words correspond with others in a sentence.”
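
To make word alignment concrete, the sketch below runs a few EM iterations of an IBM Model 1-style aligner over a toy English-French corpus, estimating which French words correspond to which English words. The corpus, the iteration count, and the omission of a NULL word are illustrative simplifications, not details drawn from the article or from IBM's production systems.

```python
# A minimal IBM Model 1-style word aligner: estimate translation probabilities
# t(f|e) from a toy parallel corpus with a few EM iterations. The data and the
# iteration count are illustrative only.
from collections import defaultdict

corpus = [
    (["the", "house"], ["la", "maison"]),
    (["the", "fish"], ["le", "poisson"]),
    (["the", "blue", "house"], ["la", "maison", "bleue"]),
]

# Initialize t(f|e) uniformly over co-occurring word pairs.
t = defaultdict(float)
for en, fr in corpus:
    for f in fr:
        for e in en:
            t[(f, e)] = 1.0 / len(en)

for _ in range(20):
    count = defaultdict(float)  # expected counts of (f, e) pairs
    total = defaultdict(float)
    for en, fr in corpus:
        for f in fr:
            norm = sum(t[(f, e)] for e in en)
            for e in en:
                delta = t[(f, e)] / norm
                count[(f, e)] += delta
                total[e] += delta
    for (f, e), c in count.items():  # M-step: renormalize per English word
        t[(f, e)] = c / total[e]

# "maison" should now align most strongly with "house".
print(max((t[("maison", e)], e) for e in ["the", "house", "blue"]))
```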


For instance, in English the word “fish” is somewhat ambiguous. It could either serve as a noun (“I eat fish”) or a verb (“I fish at the stream”). “By contrast, if you look at the French translation for these two words, there are entirely separate meanings,” says Snyder. “In French, ‘poisson’ is the noun version of fish and ‘pêcher’ is the verb version. So, some triangulation has to take place for machine translation to work effectively.”
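
Snyder's point can be made concrete with a toy lookup: even a crude translator must guess the part of speech of “fish” before it can choose between “poisson” and “pêcher.” The mini-lexicon and one-rule tagger below are invented for illustration; real systems learn such distinctions statistically from data.

```python
# Why word-for-word lookup fails: the French target for "fish" depends on its
# part of speech. The mini-lexicon and the one-rule tagger are invented here
# purely for illustration.
LEXICON = {("fish", "NOUN"): "poisson", ("fish", "VERB"): "pêcher"}

def crude_pos(tokens, i):
    """Guess NOUN vs. VERB for tokens[i] from its left neighbor (toy heuristic)."""
    prev = tokens[i - 1].lower() if i > 0 else ""
    return "VERB" if prev in {"i", "we", "you", "they", "to"} else "NOUN"

def translate(sentence):
    tokens = sentence.split()
    # Replace a word only when the (word, guessed POS) pair is in the lexicon.
    return " ".join(
        LEXICON.get((tok.lower(), crude_pos(tokens, i)), tok)
        for i, tok in enumerate(tokens)
    )

print(translate("I eat fish"))            # -> I eat poisson
print(translate("I fish at the stream"))  # -> I pêcher at the stream
```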

With various services now offering online translation for upward of 50 language pairs, the focus is on developing general algorithms that work across commonly used groups of languages. The commercial market has also stratified: casual online users rely on free, less powerful software, while governments and business users turn to more powerful commercial systems. IBM's Roukos says $15 billion is spent on human translation annually, with heavy users including publishers, law firms, and organizations involved in international commerce.

Global travel sites, for example, frequently procure content in English but cater to an array of markets around the world. A company will typically train the system for specific phrases and terms, run the text through a machine translation system, and, in some cases, have a human review the content before posting it online. Tech firms that offer IT support are also turning to machine translation to provide support services in multiple languages. Others, primarily governments, run reams of printed content through translation systems before giving it to analysts for review.

Windows to the Past

Understanding ancient languages is another aim of researchers. However, in some instances, there are no existing translations or a clear understanding of how the language is constructed. In essence, researchers looking for clues into ancient civilizations find themselves working blind. “It is very difficult to decipher an unknown language,” MIT’s Barzilay explains.

This hasn’t stopped her and others from accepting the challenge. In 2010, Barzilay worked with a team of researchers to untangle the ancient Semitic language of Ugaritic. The team built several assumptions into the translation software. First, they decided that Ugaritic would be similar to another language (Hebrew, in this case). Second, they decided that it is possible to map the alphabet of one language to another and find symbols or groups of symbols that correlate between the two languages. Finally, they examined words looking for shared roots.

It was a difficult task, with the computer parsing through the data hundreds and sometimes thousands of times, using probabilistic mapping techniques associated with artificial intelligence. Along the way, the system began to identify commonalities and inconsistencies, including shared cognates (similar to “homme” and “hombre” in French and Spanish, respectively). Researchers continued to analyze language combinations until they saw no further improvements.
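
The flavor of that search can be conveyed with a deliberately tiny sketch: brute-force scoring of candidate letter mappings between an invented “lost” alphabet and a known lexicon, keeping whichever mapping decodes the most words into recognizable cognates. The alphabets and word lists below are made up, and the actual Ugaritic work relied on probabilistic models over far more data, but the underlying idea of letting symbol correspondences compete against each other is the same.

```python
# A deliberately tiny decipherment sketch: search over letter correspondences
# between a made-up "lost" alphabet and a known one, scoring each candidate
# mapping by how many decoded words match known cognates. Everything here is
# invented; the real Ugaritic work used probabilistic models over far more data.
from itertools import permutations

lost_alphabet = ["1", "2", "3", "4"]
known_alphabet = ["a", "b", "l", "m"]

lost_words = ["12", "132", "44"]       # words in the undeciphered script
known_lexicon = {"ab", "alb", "mm"}    # cognates in the related, known language

best_score, best_map = -1, None
for perm in permutations(known_alphabet):
    mapping = dict(zip(lost_alphabet, perm))
    decoded = {"".join(mapping[c] for c in w) for w in lost_words}
    score = len(decoded & known_lexicon)  # how many decodings are real words
    if score > best_score:
        best_score, best_map = score, mapping

print(best_map, "matches", best_score, "of", len(lost_words), "words")
```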

The result? The software mapped 29 of the 30 Ugaritic alphabet letters to Hebrew. Approximately one-third of Ugaritic words have Hebrew cognates, and the researchers correctly identified 60% of these words. Many of the remaining words were off by only a letter or two; a human could easily correct these inconsistencies, as well as spot homonyms. In this case, the team could verify the results because Ugaritic had already been deciphered (though they worked blind in their research).


Investigation into ancient languages is not a trivial pursuit. “The process has a great deal of applicability with current languages and translating them correctly,” Barzilay says. Simply put, when researchers learn the DNA of languages they are able to build better models for translation. “We learn how to make linguistic assumptions and build a better refinement cycle,” she points out.

The Final Word

The field of machine translation continues to advance. IBM’s Roukos tackles its linguistic challenges with a multidisciplinary team of mathematicians, electrical engineers, computer scientists, linguists, computational linguists, and programmers; altogether, the group speaks 20 different languages. “We run hundreds of billions of words through computers and analyze statistical models,” he says.

Today, the best machine translation systems boast accuracy rates above 90% when they are used with textbook speech, according to Roukos. Error rates typically double with colloquialisms and informal speech. However, researchers are building larger databases and working to perfect multilingual natural language processing, Roukos says. “Ultimately, the goal is to build systems that can mine text, extract information, and understand how language is used for different situations and applications.”
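
Such figures are typically computed with automatic metrics that compare machine output against human reference translations. The article does not say which metric Roukos is quoting, but a minimal BLEU-like score, sketched below over invented sentences, shows the general idea: modified n-gram precision combined with a brevity penalty.

```python
# A minimal BLEU-like score: modified unigram/bigram precision against a human
# reference, times a brevity penalty. The sentences are invented, and this is
# only a sketch of how MT accuracy is commonly quantified.
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu_like(hypothesis, reference, max_n=2):
    hyp, ref = hypothesis.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        hyp_counts, ref_counts = ngrams(hyp, n), ngrams(ref, n)
        overlap = sum(min(c, ref_counts[g]) for g, c in hyp_counts.items())
        precisions.append(overlap / max(sum(hyp_counts.values()), 1))
    brevity = min(1.0, math.exp(1 - len(ref) / len(hyp)))
    return brevity * math.exp(
        sum(math.log(p or 1e-9) for p in precisions) / max_n
    )

print(bleu_like("the cat sat on the mat", "the cat is on the mat"))  # ~0.71
```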

There is also a push to add languages that have not yet been cataloged. “In Africa there are thousands of languages,” says Knight, “and almost none of them have been touched by machine translation.” Meanwhile, researchers such as Barzilay and Snyder focus on unlocking the mysteries of hundreds of remaining “lost” languages. Tackling the task manually is next to impossible. “It’s a job that only computers can do,” Barzilay says.

Ultimately, machine translation promises to revolutionize work and life. Google and others are developing image-to-text and image-to-speech translation systems that allow a person to snap a photo of a sign or text and receive a translation. Google Translate on the iPhone already provides speech-to-speech translation across more than two dozen languages.

Meanwhile, the U.S. National Institute of Standards and Technology is testing smartphone-type devices that translate speech instantly between two handsets. The system, dubbed TRANSTAC, has already been used to translate between English and the Afghan language Pashto.

More powerful computers and better algorithms promise to further revolutionize the field. “Within the next decade or two we will see remarkable progress in machine translation,” Knight says. “It will likely become a regular part of our lives.”

Further Reading

Callison-Burch, C., Koehn, P., Monz, C., and Schroeder, J.
Findings of the 2009 workshop on statistical machine translation, Proceedings of the Fourth Workshop on Statistical Machine Translation, Athens, Greece, March 30–31, 2009.

Chiang, D., Knight, K., and Wang, W.
11,001 new features for statistical machine translation, Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL '09), Boulder, CO, May 31–June 5, 2009.

Ge, N.
A direct syntax-driven reordering model for phrase-based machine translation, Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, Los Angeles, CA, June 2–4, 2010.

Koehn, P.
Statistical Machine Translation. Cambridge University Press, Cambridge, England, 2010.

Wilks, Y.
Machine Translation: Its Scope and Limits. Springer, New York, NY, 2009.
