Gesture synthesis adapted to speech emphasis

Adso Fernández-Baena, Raúl Montaño, Marc Antonijoan, Arturo Roversi, David Miralles, Francesc Alías

Research output: Article in indexed journal › Article › peer review

21 Citations (Scopus)

Abstract

Avatars communicate through speech and gestures to appear realistic and to enhance interaction with humans. In this context, several works have analyzed the relationship between speech and gestures, while others have focused on their synthesis, following different approaches. In this work, we address both goals by linking speech to gestures in terms of time and intensity, and then use this knowledge to drive a gesture synthesizer from a manually annotated speech signal. To that effect, we define strength indicators for speech and motion. After validating them through perceptual tests, we obtain an intensity rule from their correlation. Moreover, we derive a synchrony rule to determine temporal correspondences between speech and gestures. These analyses have been conducted on aggressive and neutral performances, whose speech signal and motion have been manually annotated, to cover a broad range of emphatic levels. Next, the intensity and synchrony rules are used to drive a gesture synthesizer called the gesture motion graph (GMG). These rules are validated through perceptual tests in which users rate GMG output animations. Results show that animations using both the intensity and synchrony rules perform better than those using only the synchrony rule (which in turn enhance realism with respect to random animation). Finally, we conclude that the extracted rules allow GMG to properly synthesize gestures adapted to speech emphasis from annotated speech.

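The abstract does not specify the concrete form of the strength indicators or of the intensity and synchrony rules. The sketch below is purely illustrative of how such a pipeline could be wired together: a hypothetical speech strength indicator combining RMS energy and pitch excursion over an annotated emphasis interval, a linear intensity rule mapping it to a target gesture strength, and a fixed synchrony offset aligning the gesture stroke with the stressed syllable. The function names, features, and coefficients are assumptions for illustration, not the rules derived in the paper.

```python
import numpy as np

# Hypothetical strength indicator for speech (NOT the paper's exact definition):
# a crude emphasis score combining RMS energy with the pitch excursion of the
# fundamental frequency (f0) track over a manually annotated interval.
def speech_strength(samples, sr, t_start, t_end, f0_track):
    seg = samples[int(t_start * sr):int(t_end * sr)]
    rms = np.sqrt(np.mean(seg ** 2))                       # loudness proxy
    mask = (f0_track[:, 0] >= t_start) & (f0_track[:, 0] <= t_end)
    f0_seg = f0_track[mask, 1]
    pitch_range = f0_seg.max() - f0_seg.min() if f0_seg.size else 0.0
    return 0.5 * rms + 0.5 * (pitch_range / 100.0)         # arbitrary weighting

# Hypothetical linear intensity rule (coefficients are placeholders):
# map a speech strength score to a target gesture strength.
def intensity_rule(speech_score, a=1.2, b=0.1):
    return a * speech_score + b

# Hypothetical synchrony rule: the gesture stroke apex slightly precedes
# the peak of the stressed syllable.
def synchrony_rule(syllable_peak_time, lead=0.05):
    return syllable_peak_time - lead

# Usage sketch: one annotated emphasis interval drives one gesture request.
if __name__ == "__main__":
    sr = 16000
    audio = np.random.randn(sr)                    # stand-in for a real speech signal
    f0 = np.column_stack([np.linspace(0.0, 1.0, 100),           # time (s)
                          200 + 50 * np.sin(np.linspace(0, np.pi, 100))])  # f0 (Hz)
    score = speech_strength(audio, sr, 0.2, 0.6, f0)
    target = {"strength": intensity_rule(score),
              "stroke_time": synchrony_rule(syllable_peak_time=0.4)}
    print(target)   # a gesture synthesizer (e.g., a motion graph) would consume this
```
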
Original language: English
Pages (from-to): 331-350
Number of pages: 20
Journal: Speech Communication
Volume: 57
DOI
Status: Published - 2014
