Imagine this: You want to whisper something to a co-worker in Spanish, but you can't speak the language. So you simply mouth the words in English, without uttering a sound, and they immediately pop up in Spanish on your colleague's computer.

The premise may seem far-fetched, but researchers are working toward making it a reality. As Carnegie Mellon doctoral student Stan Jou mouthed words in Mandarin Chinese recently, 11 electrodes on his face and throat sensed what he said by the movement of his facial muscles and promptly translated it into English and Spanish.

The device is among several projects at the International Center for Advanced Communication Technologies designed to tear down language barriers using computers. The center is run jointly by Carnegie Mellon University in Pittsburgh and the University of Karlsruhe in Germany.

Using a different device, the center's director, Alexander Waibel, delivered a lecture last week simultaneously translated from English to German and Spanish. All he had to do was speak into a microphone.

"We're increasingly globalizing," he said. "We have multiple cultural groups that speak different languages. We want everyone working together but to maintain our individuality."

Other researchers are developing ultrasound speakers that deliver a narrow beam of audio, letting one listener hear a translation while everyone else hears the speech in its original language.

Gadgets with limited capabilities could be sold commercially within a year, Waibel said. Complex translators will take longer.

"We have to improve performance," Waibel said. "It's very, very important for a system to tell you when it's wrong. Computers are awful at that."