Some British scientists have programmed a computer to figure out what languages people are speaking by the movement of their lips. http://www.sciencedaily.com/releases/2009/04/090421205226.htm
Kind of useless at this point, but there are some interesting implications here.
Automatic lip reading, also known as automatic speech reading, is a growing branch of speech recognition technology. By monitoring a speaker's lip movements and other visual cues, software can interpret verbal messages even when cross-talk or background noise interferes with listening comprehension.
Even when the audio is good, visual information is an important part of communication, and telephone interpreters are often left in the dark on that front.
While computers can produce useful translations of written content, automated translation of spoken language has a long way to go. The applications I've seen start with voice recognition, transcribing audio to text, which is then machine-translated and finally read aloud by the computer in the target language. That's a lot of variables, and at each step in the process one bad phoneme can trash the whole sentence.
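To make the error-propagation point concrete, here is a toy sketch of that cascaded pipeline. Every function is a hypothetical stand-in (not a real ASR/MT/TTS library); the ASR stage deliberately mishears one word to show how a single early mistake survives every later stage.

```python
# Toy cascade: speech recognition -> machine translation -> speech synthesis.
# All three stages are hypothetical stand-ins, not real library APIs.

def recognize_speech(audio: str) -> str:
    """Stand-in ASR: pretend the input string is audio, and simulate
    one bad phoneme ('ship' misheard as 'sheep')."""
    return audio.replace("ship", "sheep")

def translate(text: str, lexicon: dict) -> str:
    """Stand-in MT: naive word-by-word dictionary lookup (English -> Spanish)."""
    return " ".join(lexicon.get(word, f"<{word}?>") for word in text.split())

def synthesize(text: str) -> str:
    """Stand-in TTS: just tag the text as audio output."""
    return f"[audio] {text}"

lexicon = {"the": "el", "ship": "barco", "sheep": "oveja", "arrives": "llega"}

source_audio = "the ship arrives"
transcript = recognize_speech(source_audio)   # ASR error introduced here
translation = translate(transcript, lexicon)  # error passes through MT
output = synthesize(translation)              # and ends up in the spoken output
print(output)  # -> [audio] el oveja llega ('the sheep arrives', not 'the ship')
```

Because each stage trusts the one before it, the misheard word is never questioned again; the final audio confidently says the wrong thing.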
It’s easy to imagine (but doubtlessly harder to achieve) that visual signals could be used as a kind of reality check to ensure the accuracy of automated translation systems.
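One crude way to picture that reality check: compare the acoustic transcript against a lip-reading hypothesis word by word, and flag positions where they disagree for re-decoding. This is purely illustrative, assuming we already have both transcripts as word lists; real audiovisual fusion works on probabilities inside the recognizer, not on finished text.

```python
# Hypothetical cross-check: flag words where the lip-reading hypothesis
# disagrees with the acoustic hypothesis, so those spots can be re-decoded
# before translation instead of propagating an error downstream.

def cross_check(audio_words: list, visual_words: list) -> list:
    """Return indices where the two modalities disagree."""
    return [i for i, (a, v) in enumerate(zip(audio_words, visual_words))
            if a != v]

acoustic_hypothesis = "the sheep arrives".split()   # noisy audio misheard a word
lipreading_hypothesis = "the ship arrives".split()  # lips say otherwise

suspect = cross_check(acoustic_hypothesis, lipreading_hypothesis)
print(suspect)  # -> [1]: position 1 is in dispute and shouldn't be trusted yet
```

Even this trivial agreement test catches the error that sailed straight through the audio-only pipeline; the hard part, of course, is making lip reading reliable enough to be worth consulting.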