

Released

Conference Paper

Marker-less Deformable Mesh Tracking for Human Shape and Motion Capture

MPS-Authors
http://pubman.mpdl.mpg.de/cone/persons/resource/persons43977

de Aguiar,  Edilson
Computer Graphics, MPI for Informatics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons45610

Theobalt,  Christian
Computer Graphics, MPI for Informatics, Max Planck Society;
Programming Logics, MPI for Informatics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons45557

Stoll,  Carsten
Computer Graphics, MPI for Informatics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons45449

Seidel,  Hans-Peter
Computer Graphics, MPI for Informatics, Max Planck Society;

Citation

de Aguiar, E., Theobalt, C., Stoll, C., & Seidel, H.-P. (2007). Marker-less Deformable Mesh Tracking for Human Shape and Motion Capture. In 2007 IEEE Conference on Computer Vision and Pattern Recognition (CVPR'07), Vol. 6 (pp. 2502-2509). Piscataway, NJ, USA: IEEE.


Cite as: http://hdl.handle.net/11858/00-001M-0000-000F-1FC6-A
Abstract
We present a novel algorithm to jointly capture the motion and the dynamic shape of humans from multiple video streams without using optical markers. Instead of relying on kinematic skeletons, as traditional motion capture methods do, our approach uses a deformable high-quality mesh of a human as the scene representation. It combines an image-based 3D correspondence estimation algorithm with a fast Laplacian mesh deformation scheme to capture both the motion and the surface deformation of the actor from the input video footage. As opposed to many related methods, our algorithm can track people wearing wide apparel, it can straightforwardly be applied to any type of subject, e.g. animals, and it preserves the connectivity of the mesh over time. We demonstrate the performance of our approach on synthetic and captured real-world video sequences and validate its accuracy by comparison to ground truth.
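The Laplacian mesh deformation scheme mentioned in the abstract can be illustrated with a toy example: encode local shape detail as differential (delta) coordinates, then solve a least-squares system that reproduces those coordinates while satisfying a few positional constraints. The sketch below is not the paper's implementation; the mesh (a 4-vertex chain), the uniform Laplacian, and the constraint weight are all illustrative assumptions.

```python
import numpy as np

# Toy "mesh": a chain of 4 vertices in 2D (illustrative, not from the paper).
verts = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [3.0, 0.0]])
edges = [(0, 1), (1, 2), (2, 3)]
n = len(verts)

# Uniform graph Laplacian L = I - D^{-1} A.
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
deg = A.sum(axis=1)
L = np.eye(n) - A / deg[:, None]

# Differential (delta) coordinates encode each vertex's offset
# from the average of its neighbors, i.e. local surface detail.
delta = L @ verts

# Positional constraints: pin vertex 0, move vertex 3 upward.
constraints = {0: verts[0].copy(), 3: np.array([3.0, 1.0])}

# Solve min ||L x - delta||^2 + w^2 ||x_c - target||^2 in least squares,
# which deforms the mesh while preserving its delta coordinates.
w = 10.0  # constraint weight (assumed value)
rows, rhs = [L], [delta]
for idx, pos in constraints.items():
    e = np.zeros((1, n))
    e[0, idx] = w
    rows.append(e)
    rhs.append(w * pos[None, :])
new_verts, *_ = np.linalg.lstsq(np.vstack(rows), np.vstack(rhs), rcond=None)
print(np.round(new_verts, 2))
```

The soft constraints bend the chain smoothly toward the moved handle while the interior vertices distribute the deformation, which is the basic behavior the paper's per-frame surface deformation step relies on (there driven by image-based 3D correspondences rather than hand-picked handles).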