
Released

Report

A framework for natural animation of digitized models

MPG Authors

de Aguiar, Edilson
Computer Graphics, MPI for Informatics, Max Planck Society

Zayer, Rhaleb
Computer Graphics, MPI for Informatics, Max Planck Society

Theobalt, Christian
Computer Graphics, MPI for Informatics, Max Planck Society

Magnor, Marcus A.
Graphics - Optics - Vision, MPI for Informatics, Max Planck Society

Seidel, Hans-Peter
Computer Graphics, MPI for Informatics, Max Planck Society

External Resources
No external resources have been provided.
Full texts (restricted access)
No full texts are currently released for your IP range.
Full texts (publicly accessible)

MPI-I-2006-4-003.pdf
(any full text), 7MB

Supplementary material (publicly accessible)
No publicly accessible supplementary materials are available.
Citation

de Aguiar, E., Zayer, R., Theobalt, C., Magnor, M. A., & Seidel, H.-P. (2006). A framework for natural animation of digitized models (MPI-I-2006-4-003). Saarbrücken: Max-Planck-Institut für Informatik.


Citation link: https://hdl.handle.net/11858/00-001M-0000-0014-680B-F
Abstract
We present a novel, versatile, fast and simple framework to generate high-quality animations of scanned human characters from input motion data. Our method is purely mesh-based and, in contrast to skeleton-based animation, requires only a minimum of manual interaction. The only manual step required to create moving virtual people is the placement of a sparse set of correspondences between triangles of an input mesh and triangles of the mesh to be animated. The proposed algorithm implicitly generates realistic body deformations, and can easily transfer motions between humans of different shape and proportions. It handles different types of input data, e.g. other animated meshes and motion capture files, in just the same way. Finally, and most importantly, it creates animations at interactive frame rates. We feature two working prototype systems that demonstrate that our method can generate lifelike character animations from both marker-based and marker-less optical motion capture data.
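To make the abstract's core idea more concrete, the sketch below shows one plausible way to realize mesh-based motion transfer through sparse triangle correspondences: per-triangle deformations of an animated source mesh are applied to corresponded triangles of the scanned target mesh, and the target's new vertex positions are recovered with a linear least-squares solve. This is only an illustrative reconstruction under standard deformation-transfer assumptions, not the authors' implementation; all names here (transfer_pose, edge_matrix, ...) and the simplified edge-only constraints are hypothetical.

# Illustrative sketch (NOT the report's code): transfer per-triangle deformations
# of an animated source mesh to a scanned target mesh via sparse triangle
# correspondences, then recover target vertex positions by least squares.

import numpy as np
from scipy.sparse import lil_matrix, csr_matrix
from scipy.sparse.linalg import lsqr


def edge_matrix(verts, tri):
    """3x2 matrix holding the two edge vectors that span one triangle."""
    v0, v1, v2 = verts[list(tri)]
    return np.column_stack([v1 - v0, v2 - v0])


def transfer_pose(src_rest, src_posed, src_tris,
                  tgt_rest, tgt_tris, correspondences, anchor=0):
    """Deform the target mesh so each corresponded target triangle reproduces,
    in a least-squares sense, the deformation of its source triangle.

    correspondences: list of (source_tri_index, target_tri_index) pairs.
    """
    n = len(tgt_rest)
    A = lil_matrix((6 * len(correspondences) + 3, 3 * n))
    b = np.zeros(6 * len(correspondences) + 3)

    for k, (si, ti) in enumerate(correspondences):
        # Rank-2 deformation "gradient" built from the two edges only; full
        # deformation-gradient formulations also include the triangle normal.
        S_rest = edge_matrix(src_rest, src_tris[si])
        S_pose = edge_matrix(src_posed, src_tris[si])
        Q = S_pose @ np.linalg.pinv(S_rest)                 # 3x3 mapping
        desired = Q @ edge_matrix(tgt_rest, tgt_tris[ti])   # desired target edges

        i0, i1, i2 = tgt_tris[ti]
        for e, iv in enumerate((i1, i2)):      # two edge constraints per triangle
            for d in range(3):                 # x, y, z components
                r = 6 * k + 3 * e + d
                A[r, 3 * iv + d] = 1.0
                A[r, 3 * i0 + d] = -1.0
                b[r] = desired[d, e]

    # Pin one vertex to remove the global translation ambiguity.
    for d in range(3):
        r = 6 * len(correspondences) + d
        A[r, 3 * anchor + d] = 1.0
        b[r] = tgt_rest[anchor, d]

    x = lsqr(csr_matrix(A), b)[0]
    return x.reshape(n, 3)

For brevity the sketch constrains only the corresponded target triangles and leaves the remaining vertices to the minimum-norm least-squares solution; the framework described in the abstract presumably extends the deformation to the whole target mesh, which is why only a sparse set of correspondences needs to be placed by hand and why realistic full-body deformations emerge implicitly.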