

Report

A framework for natural animation of digitized models

MPS-Authors
http://pubman.mpdl.mpg.de/cone/persons/resource/persons43977

de Aguiar,  Edilson
Computer Graphics, MPI for Informatics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons45789

Zayer,  Rhaleb
Computer Graphics, MPI for Informatics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons45610

Theobalt,  Christian
Computer Graphics, MPI for Informatics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons44965

Magnor,  Marcus A.
Graphics - Optics - Vision, MPI for Informatics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons45449

Seidel,  Hans-Peter
Computer Graphics, MPI for Informatics, Max Planck Society;

Locator
There are no locators available
Fulltext (public)

MPI-I-2006-4-003.pdf
(Any fulltext), 7 MB

Supplementary Material (public)
There is no public supplementary material available
Citation

de Aguiar, E., Zayer, R., Theobalt, C., Magnor, M. A., & Seidel, H.-P. (2006). A framework for natural animation of digitized models (MPI-I-2006-4-003). Saarbrücken: Max-Planck-Institut für Informatik.


Cite as: http://hdl.handle.net/11858/00-001M-0000-0014-680B-F
Abstract
We present a novel, versatile, fast and simple framework to generate high-quality animations of scanned human characters from input motion data. Our method is purely mesh-based and, in contrast to skeleton-based animation, requires only a minimum of manual interaction. The only manual step that is required to create moving virtual people is the placement of a sparse set of correspondences between triangles of an input mesh and triangles of the mesh to be animated. The proposed algorithm implicitly generates realistic body deformations, and can easily transfer motions between human subjects of different shape and proportions. It handles different types of input data, e.g. other animated meshes and motion capture files, in just the same way. Finally, and most importantly, it creates animations at interactive frame rates. We feature two working prototype systems that demonstrate that our method can generate lifelike character animations from both marker-based and marker-less optical motion capture data.
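
The core idea described in the abstract, namely driving a scanned target mesh from an animated source mesh via a sparse set of triangle correspondences, can be illustrated with a rough sketch. The following is not the report's implementation; it is a minimal least-squares transfer of per-triangle deformation gradients over the corresponded triangles only, and all function and parameter names are hypothetical.

# Illustrative sketch (assumption, not the authors' method): transfer the
# deformation of corresponded source triangles onto a target mesh by solving
# an edge-vector least-squares system for the target vertex positions.
# Inputs: *_rest, src_posed are (N, 3) float arrays; *_tris are (M, 3) int arrays.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def triangle_frame(v0, v1, v2):
    # Local 3x3 frame of a triangle: two edge vectors plus a scaled normal.
    e1, e2 = v1 - v0, v2 - v0
    n = np.cross(e1, e2)
    n = n / (np.sqrt(np.linalg.norm(n)) + 1e-12)
    return np.column_stack([e1, e2, n])

def deformation_gradient(rest, posed, tri):
    # 3x3 transform mapping a triangle from the rest pose to the posed mesh.
    i, j, k = tri
    R = triangle_frame(rest[i], rest[j], rest[k])
    P = triangle_frame(posed[i], posed[j], posed[k])
    return P @ np.linalg.inv(R)

def transfer_pose(src_rest, src_posed, src_tris,
                  tgt_rest, tgt_tris, correspondences, anchor=0):
    # Solve for target vertices whose corresponded triangles reproduce the
    # source triangles' deformation gradients (sparse linear least squares).
    n = len(tgt_rest)
    rows, cols, vals, rhs = [], [], [], []
    eq = 0
    for src_t, tgt_t in correspondences:      # pairs of triangle indices
        F = deformation_gradient(src_rest, src_posed, src_tris[src_t])
        i, j, k = tgt_tris[tgt_t]
        for a, b in ((i, j), (i, k)):         # both rest edges of the target triangle
            edge = F @ (tgt_rest[b] - tgt_rest[a])
            for d in range(3):
                rows += [eq, eq]
                cols += [3 * b + d, 3 * a + d]
                vals += [1.0, -1.0]
                rhs.append(edge[d])
                eq += 1
    for d in range(3):                        # pin one vertex to fix global translation
        rows.append(eq); cols.append(3 * anchor + d); vals.append(1.0)
        rhs.append(tgt_rest[anchor, d]); eq += 1
    A = sp.csr_matrix((vals, (rows, cols)), shape=(eq, 3 * n))
    x = spla.lsqr(A, np.asarray(rhs))[0]
    return x.reshape(n, 3)

In this simplified setup only the corresponded triangles are constrained, so vertices outside them are left at the solver's minimum-norm solution; the report's framework instead propagates the deformation over the whole scanned mesh and runs at interactive rates, which this sketch does not attempt to reproduce.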