
Released

Conference Paper

A Simple Framework for Natural Animation of Digitized Models

MPG Authors
http://pubman.mpdl.mpg.de/cone/persons/resource/persons43977

de Aguiar, Edilson
Computer Graphics, MPI for Informatics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons45789

Zayer, Rhaleb
Computer Graphics, MPI for Informatics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons45610

Theobalt, Christian
Computer Graphics, MPI for Informatics, Max Planck Society;
Programming Logics, MPI for Informatics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons44965

Magnor, Marcus
Graphics - Optics - Vision, MPI for Informatics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons45449

Seidel, Hans-Peter
Computer Graphics, MPI for Informatics, Max Planck Society;

External Resources
There are no external resources available
Full texts (freely accessible)
There are no freely accessible full texts available
Supplementary material (freely accessible)
There are no freely accessible supplementary materials available
Citation

de Aguiar, E., Zayer, R., Theobalt, C., Magnor, M., & Seidel, H.-P. (2007). A Simple Framework for Natural Animation of Digitized Models. In SIBGRAPI'07 - XX Brazilian Symposium on Computer Graphics and Image Processing (pp. 3-10). Piscataway, NJ, USA: IEEE.


Citation link: http://hdl.handle.net/11858/00-001M-0000-000F-1E3B-C
Abstract
We present a versatile, fast and simple framework to generate animations of scanned human characters from input optical motion capture data. Our method is purely mesh-based and requires only a minimum of manual interaction. The only manual step needed to create moving virtual people is the placement of a sparse set of correspondences between the input data and the mesh to be animated. The proposed algorithm implicitly generates realistic body deformations, and can easily transfer motions between human subjects of completely different shape and proportions. We feature a working prototype system that demonstrates that our method can generate convincing lifelike character animations directly from optical motion capture data.
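The central idea in the abstract, driving a scanned mesh from a sparse set of manually placed marker-to-vertex correspondences, can be illustrated with a generic sketch. This is not the authors' algorithm: it is a minimal least-squares deformation assuming a uniform graph Laplacian and soft positional constraints, and the function names (build_uniform_laplacian, deform_to_markers) and the constraint weight are hypothetical.

```python
# Minimal illustrative sketch (not the paper's method): deform a scanned mesh so
# that a sparse set of constrained vertices follows motion-capture marker
# positions, using uniform Laplacian editing posed as sparse least squares.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla


def build_uniform_laplacian(n_vertices, edges):
    """Uniform graph Laplacian L = D - A built from the mesh edge list."""
    i, j = np.asarray(edges, dtype=int).T
    rows = np.concatenate([i, j])
    cols = np.concatenate([j, i])
    adj = sp.coo_matrix((np.ones(len(rows)), (rows, cols)),
                        shape=(n_vertices, n_vertices)).tocsr()
    deg = np.asarray(adj.sum(axis=1)).ravel()
    return sp.diags(deg) - adj


def deform_to_markers(verts, edges, corr, markers, weight=10.0):
    """
    verts   : (n, 3) rest-pose vertex positions of the scanned mesh
    edges   : (m, 2) mesh edge list
    corr    : dict {marker_id: vertex_id}, the sparse manual correspondences
    markers : (k, 3) captured marker positions for one frame
    Returns the (n, 3) deformed vertex positions for that frame.
    """
    n = len(verts)
    L = build_uniform_laplacian(n, edges)
    delta = L @ verts                      # differential coordinates of the rest pose

    marker_ids = sorted(corr)
    vert_ids = [corr[m] for m in marker_ids]
    S = sp.coo_matrix((np.ones(len(vert_ids)),
                       (np.arange(len(vert_ids)), vert_ids)),
                      shape=(len(vert_ids), n)).tocsr()

    # Least squares: preserve the rest-pose differential coordinates while
    # softly pinning the constrained vertices to the marker positions.
    A = sp.vstack([L, weight * S]).tocsr()
    B = np.vstack([delta, weight * markers[marker_ids]])

    solve = spla.factorized((A.T @ A).tocsc())
    return np.column_stack([solve(A.T @ B[:, c]) for c in range(3)])
```

For a full clip one would call deform_to_markers once per captured frame, reusing the Laplacian and the matrix factorization since only the marker positions change; the smoothness term is what spreads the sparse constraints over the rest of the surface in this sketch.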