Report

Gesture modeling and animation by imitation

MPS-Authors

Albrecht, Irene
Computer Graphics, MPI for Informatics, Max Planck Society

Neff, Michael Paul
Computer Graphics, MPI for Informatics, Max Planck Society

Seidel, Hans-Peter
Computer Graphics, MPI for Informatics, Max Planck Society

Fulltext (public)
MPI-I-2006-4-008pdf.pdf (Any fulltext), 5 MB

Citation

Albrecht, I., Kipp, M., Neff, M. P., & Seidel, H.-P. (2006). Gesture modeling and animation by imitation (MPI-I-2006-4-008). Saarbrücken: Max-Planck-Institut für Informatik.


Cite as: https://hdl.handle.net/11858/00-001M-0000-0014-6979-2
Abstract
Animated characters that move and gesticulate appropriately with spoken text are useful in a wide range of applications. Unfortunately, they are very difficult to generate, even more so when a unique, individual movement style is required. We present a system that is capable of producing full-body gesture animation for given input text in the style of a particular performer. Our process starts with video of a performer whose gesturing style we wish to animate. A tool-assisted annotation process is first performed on the video, from which a statistical model of the person's particular gesturing style is built. Using this model and tagged input text, our generation algorithm creates a gesture script appropriate for the given text. As opposed to isolated singleton gestures, our gesture script specifies a stream of continuous gestures coordinated with speech. This script is passed to an animation system, which enhances the gesture description with more detail and prepares a refined description of the motion. An animation subengine can then generate either kinematic or physically simulated motion based on this description. The system is capable of creating animation that replicates a particular performance in the video corpus, generating new animation for the spoken text that is consistent with the given performer's style, and creating performances of a given text sample in the style of different performers.
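
The abstract outlines a pipeline: annotate performer video, fit a statistical model of that performer's gesturing style, generate a continuous gesture script from tagged input text, and hand the script to an animation engine. The report's actual data structures and algorithms are not reproduced on this page; the sketch below is only a minimal illustration of how such a style model and script generator could be organized, assuming a simple bigram-plus-tag-affinity model. All names (Gesture, GestureStyleModel, plan_gestures) are hypothetical, not the authors' implementation.

# Hypothetical sketch of the gesture-generation pipeline described in the
# abstract. All names and the modeling choices are illustrative assumptions,
# not the report's actual implementation.
from dataclasses import dataclass
from collections import Counter, defaultdict
import random

@dataclass
class Gesture:
    lexeme: str       # gesture type, e.g. "beat", "point", "cup"
    handedness: str   # "left", "right", or "both"
    start_word: int   # index of the co-occurring word in the utterance
    duration: float   # seconds

class GestureStyleModel:
    """Statistical model of one performer's gesturing style, estimated from
    annotated video: which gesture type tends to follow another, and which
    semantic tags each gesture type tends to accompany."""

    def __init__(self):
        self.transition = defaultdict(Counter)    # prev lexeme -> next lexeme counts
        self.tag_affinity = defaultdict(Counter)  # semantic tag -> lexeme counts

    def fit(self, annotated_utterances):
        # Each utterance is a sequence of (semantic_tag, Gesture) annotations.
        for utterance in annotated_utterances:
            prev = "<start>"
            for tag, gesture in utterance:
                self.transition[prev][gesture.lexeme] += 1
                self.tag_affinity[tag][gesture.lexeme] += 1
                prev = gesture.lexeme

    def sample_lexeme(self, prev, tag, rng):
        # Combine bigram statistics with tag affinity (add-one smoothing).
        candidates = set(self.transition[prev]) | set(self.tag_affinity[tag])
        if not candidates:
            return "beat"  # fallback: a plain rhythmic beat gesture
        scores = {lex: (self.transition[prev][lex] + 1) *
                       (self.tag_affinity[tag][lex] + 1)
                  for lex in candidates}
        lexemes, weights = zip(*scores.items())
        return rng.choices(lexemes, weights=weights)[0]

def plan_gestures(model, tagged_words, rng=None):
    """Turn tagged input text into a continuous gesture script: one gesture
    per tagged word, conditioned on the previously chosen gesture so the
    stream stays coherent rather than a series of isolated singletons."""
    rng = rng or random.Random(0)
    script, prev = [], "<start>"
    for i, (word, tag) in enumerate(tagged_words):
        lex = model.sample_lexeme(prev, tag, rng)
        script.append(Gesture(lex, "right", i, 0.6))
        prev = lex
    return script

# Example: fit on one annotated utterance, then plan gestures for new text.
model = GestureStyleModel()
model.fit([[("deictic", Gesture("point", "right", 0, 0.5)),
            ("emphasis", Gesture("beat", "right", 2, 0.4))]])
print(plan_gestures(model, [("look", "deictic"), ("there", "deictic")]))

In the report's system, the resulting gesture script would then be refined by the animation layer and realized by a kinematic or physically simulated subengine; that stage is beyond the scope of this sketch.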