Project details
Description
"Virtual characters are now becoming an increasingly important part of our modern lives.
While film and game productions in the entertainment world have long featured incredibly believable and realistic characters, virtual characters are now also becoming key components in non-entertainment fields, from medical applications to online help to educational techniques. The animation of virtual characters typically involves motion capture data and/or manual manipulation using a digital content creation (DCC) tool. Poses of a character are usually parametrized by character joint angles, or by skin offsets from a standard position. This representation is excellent for processing the data, but valid human motion occupies only a small subspace of it: it is easy to find extreme configurations that create biologically impossible poses.
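To make this parametrization concrete, the minimal Python sketch below represents a pose as a dictionary of joint angles and clamps each angle to a plausible range; the joint names and limits are purely illustrative assumptions, not data or tooling from the project.

```python
import numpy as np

# Hypothetical joint-angle limits in radians; names and ranges are
# illustrative assumptions, not values used by the project.
JOINT_LIMITS = {
    "neck_pitch": (-0.8, 0.8),
    "jaw_open": (0.0, 0.6),
    "elbow_flex": (0.0, 2.6),
}

def clamp_pose(pose):
    """Pull a joint-angle configuration back into the plausible subspace
    by clamping every known joint to its anatomical range."""
    return {
        joint: float(np.clip(angle, *JOINT_LIMITS[joint]))
        for joint, angle in pose.items()
        if joint in JOINT_LIMITS
    }

# An extreme configuration that the raw representation happily allows:
impossible = {"neck_pitch": 3.1, "jaw_open": -0.5, "elbow_flex": 4.0}
print(clamp_pose(impossible))  # angles are pulled back inside their ranges
```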
Recent research [1] has used deep neural networks to generate believable human body motion in real time. We propose extending this work into the facial animation domain. We will generate a large dataset of labelled facial animations using the facial animation capture set in La Salle’s Medialab facility, and then use this dataset to train a deep neural network to generate believable facial animations in real time.
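As a rough illustration of the kind of model that could be trained on such a dataset, the sketch below uses PyTorch to map a control vector (a hypothetical emotion label plus the previous frame's blendshape weights) to the next frame's blendshape weights; the architecture, dimensions and input encoding are assumptions for illustration and do not describe the network of [1] or the one the project will develop.

```python
import torch
import torch.nn as nn

# Illustrative dimensions only: 8 hypothetical emotion labels and
# 52 blendshape weights (a common face-rig size, assumed here).
N_LABELS, N_BLENDSHAPES = 8, 52

# Small feed-forward regressor: control vector -> next-frame blendshapes.
model = nn.Sequential(
    nn.Linear(N_LABELS + N_BLENDSHAPES, 256),
    nn.ReLU(),
    nn.Linear(256, 256),
    nn.ReLU(),
    nn.Linear(256, N_BLENDSHAPES),
    nn.Sigmoid(),  # keep blendshape weights in [0, 1]
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train_step(inputs, targets):
    """One supervised step on (control vector, captured next frame) pairs."""
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    optimizer.step()
    return loss.item()

# Random stand-ins for one mini-batch of captured data.
x = torch.rand(32, N_LABELS + N_BLENDSHAPES)
y = torch.rand(32, N_BLENDSHAPES)
print(train_step(x, y))
```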
The research will extend the state of the art in the recent and emerging field of using machine learning to drive virtual character motion in real time. It has immediate applications in all fields that require real-time interaction with virtual characters. In the mid term, the research could have a major impact on virtual character interaction in virtual and augmented reality (VR and AR). We will use the results of the project to prepare a Horizon 2020 proposal targeting ICT-25: Interactive Technologies (deadline 14 November 2018).
[1] Holden, D., Saito, J., & Komura, T. (2016). A deep learning framework for character motion synthesis and editing. ACM Transactions on Graphics (TOG), 35(4), 138.
Layman's description
"Virtual characters are now becoming an increasingly important part of our modern lives.
While film and game productions in the entertainment world have long featured incredibly believable and realistic characters, virtual characters are now also becoming key components in non-entertainment fields, from medical applications to online help to educational techniques. The animation of virtual characters typically involves motion capture data and/or manual manipulation using a digital content creation (DCC) tool. Poses of a character are usually parametrized by character joint angles, or by skin offsets from a standard position. This representation is excellent for processing the data, but valid human motion occupies only a small subspace of it: it is easy to find extreme configurations that create biologically impossible poses.
Recent research [1] has used deep neural networks to generate believable human body motion in real time. We propose extending this work into the facial animation domain. We will generate a large dataset of labelled facial animations using the facial animation capture set in La Salle’s Medialab facility, and then use this dataset to train a deep neural network to generate believable facial animations in real time.
The research will extend the state of the art in the recent and emerging field of using machine learning to drive virtual character motion in real time. It has immediate applications in all fields that require real-time interaction with virtual characters. In the mid term, the research could have a major impact on virtual character interaction in virtual and augmented reality (VR and AR). We will use the results of the project to prepare a Horizon 2020 proposal targeting ICT-25: Interactive Technologies (deadline 14 November 2018).
[1] Holden, D., Saito, J., & Komura, T. (2016). A deep learning framework for character motion synthesis and editing. ACM Transactions on Graphics (TOG), 35(4), 138.
Status | Finished |
---|---|
Effective start/end date | 1/01/18 → 31/12/18 |