Text to visual synthesis with appearance models

Javier Melenchón, Fernando De La Torre, Ignasi Iriondo, Francesc Alías, Elisa Martinez, Luis Vicent

Research output: Contribution to conference › Contribution › Peer-reviewed

5 Citations (Scopus)

Abstract

This paper presents a new method named text to visual synthesis with appearance models (TEVISAM) for generating videorealistic talking heads. In a first step, the system automatically learns a person-specific facial appearance model (PSFAM). The PSFAM allows modeling all facial components (e.g. eyes, mouth, etc.) independently, and it is used to animate the face dynamically from the input text. As reported by other researchers, one of the key aspects in visual synthesis is the coarticulation effect. To address this problem, we introduce a new interpolation method in the high-dimensional appearance space that allows the creation of photorealistic and videorealistic avatars. In this work, preliminary experiments synthesizing virtual avatars from text are reported. Summarizing, this paper introduces three novelties: first, we make use of color PSFAMs to animate virtual avatars; second, we introduce a non-linear high-dimensional interpolation to achieve videorealistic animations; finally, the method allows new expressions to be generated by modeling the different facial elements.
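The abstract describes the pipeline only at a high level. As an illustrative sketch of the general idea (not the authors' implementation), the snippet below builds a linear PCA appearance model from aligned face frames and blends two viseme keyframes in coefficient space with a non-linear ramp, a crude stand-in for the coarticulation-aware interpolation the paper proposes. All names (build_appearance_model, smoothstep, n_components) are assumptions introduced for this sketch.

import numpy as np

def build_appearance_model(training_frames, n_components=20):
    # Stack aligned color frames as row vectors and learn a linear (PCA) basis.
    X = np.stack([f.ravel().astype(np.float64) for f in training_frames])  # (N, H*W*3)
    mean_face = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean_face, full_matrices=False)
    basis = Vt[:n_components]                                              # (k, H*W*3)
    return mean_face, basis

def project(frame, mean_face, basis):
    # Encode an image as appearance coefficients.
    return basis @ (frame.ravel().astype(np.float64) - mean_face)

def reconstruct(coeffs, mean_face, basis, shape):
    # Decode appearance coefficients back to an image.
    return (mean_face + coeffs @ basis).reshape(shape)

def smoothstep(t):
    # Non-linear blend weight that eases in and out; a simple surrogate for
    # the non-linear high-dimensional interpolation mentioned in the abstract.
    return 3.0 * t**2 - 2.0 * t**3

def interpolate_visemes(c_start, c_end, n_frames):
    # Blend two viseme keyframes in appearance-coefficient space.
    ts = np.linspace(0.0, 1.0, n_frames)
    return [(1.0 - smoothstep(t)) * c_start + smoothstep(t) * c_end for t in ts]

Reconstructing each interpolated coefficient vector with reconstruct() yields the intermediate frames of the animation; the actual method in the paper operates on a person-specific appearance model and a more elaborate interpolation than this linear-basis example.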

Original language: English
Pages: 237-240
Number of pages: 4
Publication status: Published - 2003
Event: Proceedings: 2003 International Conference on Image Processing, ICIP-2003 - Barcelona, Spain
Duration: 14 Sept 2003 - 17 Sept 2003

Conference

Conference: Proceedings: 2003 International Conference on Image Processing, ICIP-2003
Country/Territory: Spain
City: Barcelona
Period: 14/09/03 - 17/09/03
