Wednesday, 16 April 2014

Resource record:

Source:

Original link at newscientist.com
MacGregor Campbell

Publication date:

Friday, 10 September 2010

Last updated:

Monday, 13 September 2010

Entered in the observatory:

Monday, 13 September 2010

Language:

English

Filed under:


Avatars learn gestures to match your tone of voice

Avatars in virtual worlds provide a richer way than email or chat to communicate online, but despite better graphics and sound quality, they still can't rival in-person meetings. Now new software may help virtual characters appear more lifelike by imbuing them with realistic body language.

Rather than assign physical gestures based on the literal meaning of a person's spoken words, the program focuses on prosody, the combination of vocal rhythm, intonation and stress. To assemble a library of gestures associated with prosody features, Sergey Levine and Vladlen Koltun at Stanford University, California, used a motion-capture studio to digitise the movements that an actor made as he spoke. They used this library to teach their system to correlate features of the actor's speech with the style of his gestures, such as their size and speed, and whether they were angular or smooth.
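The core idea described above can be sketched in a few lines: pair prosody features with the gesture styles observed in motion capture, then, for a new utterance, pick the style whose prosody profile is closest. This is a minimal illustrative sketch; the feature names, style labels, and nearest-neighbour matching are assumptions for demonstration, not the researchers' actual model.

```python
import math

# Hypothetical gesture library: each entry pairs prosody features
# (pitch variance, energy, speaking rate, all normalised to 0..1)
# with the gesture style observed alongside them in motion capture.
GESTURE_LIBRARY = [
    ({"pitch_var": 0.9, "energy": 0.8, "rate": 0.7}, "large_fast_angular"),
    ({"pitch_var": 0.2, "energy": 0.3, "rate": 0.4}, "small_slow_smooth"),
    ({"pitch_var": 0.5, "energy": 0.5, "rate": 0.5}, "medium_neutral"),
]

def distance(a, b):
    """Euclidean distance between two prosody feature dicts."""
    return math.sqrt(sum((a[k] - b[k]) ** 2 for k in a))

def select_style(features):
    """Return the gesture style whose prosody profile is closest."""
    best = min(GESTURE_LIBRARY, key=lambda entry: distance(features, entry[0]))
    return best[1]

# An excited, fast utterance maps to big, quick, angular gestures.
print(select_style({"pitch_var": 0.8, "energy": 0.9, "rate": 0.8}))
```

In the real system the mapping is learned from the actor's recordings rather than hand-built, but the shape of the problem is the same: prosody features in, gesture style out.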

Because the system learns to associate a gestural style, rather than specific gestures, with specific prosody features, it can use different gesture libraries for different situations – for instance, when a person is speaking while sitting, or holding things in their hands. Such libraries could even be built for non-human forms – an octopus, for example – provided an animator can figure out what an octopus's gestures might be.
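The decoupling described above is what makes library swapping possible: because the model outputs an abstract style label, the same label can index a different set of concrete motions per situation. A small sketch, with hypothetical situation names and gesture clips:

```python
# Illustrative only: the same abstract style label resolves to
# different concrete gestures depending on the situation's library.
LIBRARIES = {
    "standing": {"large_fast_angular": "sweeping arm arc",
                 "small_slow_smooth": "gentle hand turn"},
    "sitting":  {"large_fast_angular": "sharp forearm jab",
                 "small_slow_smooth": "slow open palm"},
}

def pick_gesture(situation, style):
    """Resolve an abstract gesture style to a concrete clip."""
    return LIBRARIES[situation][style]

print(pick_gesture("sitting", "large_fast_angular"))   # sharp forearm jab
print(pick_gesture("standing", "large_fast_angular"))  # sweeping arm arc
```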

Koltun and Levine hope their system will facilitate more effective online communication for the growing legions of telecommuters, reducing miscommunication while saving on travel costs and carbon emissions.

Video: Gesturing avatar