Abstract
The affective communication channel plays a key role in multimodal human-computer interaction. In this context, the generation of realistic talking-heads expressing emotions both in appearance and speech is of great interest. The synthetic speech of talking-heads is generally obtained from a text-to-speech (TTS) synthesizer. One of the dominant techniques for achieving high-quality synthetic speech is unit-selection TTS (US-TTS) synthesis. Affective US-TTS systems are driven by affectively annotated speech databases. Since affective speech involves higher acoustic variability than neutral speech, achieving trustworthy speech labeling is a more challenging task. To this end, this paper introduces a methodology for achieving reliable pitch marking on affective speech. The proposed method adjusts the pitch marks to the signal peaks or valleys after applying a three-stage restricted dynamic programming algorithm. The methodology can be applied as a post-processing step for any pitch determination or pitch marking algorithm (with any local criterion for locating pitch marks), or for a combination of them. The experiments show that the proposed methodology significantly improves the results of the state-of-the-art input markers on affective speech.
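As a rough illustration of the final adjustment step described above, the following minimal Python sketch snaps approximate pitch marks to the strongest nearby waveform peak (or valley). It is not the paper's three-stage restricted dynamic programming algorithm; the function name, the search window, and the toy signal are all assumptions introduced for illustration only.

```python
# Hypothetical sketch: move each approximate pitch mark to the local peak
# (or valley) of the waveform within a small search window.  This only
# illustrates the idea of anchoring marks at signal extrema, not the
# paper's actual algorithm.
import numpy as np


def snap_pitch_marks_to_peaks(signal: np.ndarray,
                              marks: np.ndarray,
                              search_radius: int = 40,
                              use_valleys: bool = False) -> np.ndarray:
    """Shift each pitch mark to the strongest peak (or valley, if
    use_valleys is True) within +/- search_radius samples.
    All names and parameter values here are illustrative assumptions."""
    sign = -1.0 if use_valleys else 1.0
    adjusted = []
    for m in marks:
        lo = max(0, int(m) - search_radius)
        hi = min(len(signal), int(m) + search_radius + 1)
        window = sign * signal[lo:hi]
        adjusted.append(lo + int(np.argmax(window)))  # index of local extremum
    return np.asarray(adjusted, dtype=int)


if __name__ == "__main__":
    # Toy example: a synthetic 100 Hz voiced-like tone sampled at 16 kHz,
    # with deliberately off-peak initial marks.
    fs = 16000
    t = np.arange(0, 0.05, 1.0 / fs)
    x = np.sin(2 * np.pi * 100 * t)
    rough_marks = np.array([30, 190, 350])
    print(snap_pitch_marks_to_peaks(x, rough_marks))  # -> [ 40 200 360]
```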
| Original language | English |
|---|---|
| Pages (from-to) | 481-489 |
| Number of pages | 9 |
| Journal | IEEE Transactions on Multimedia |
| Volume | 12 |
| Issue | 6 |
| DOIs | |
| Publication status | Published - Oct 2010 |