
Released

Conference Paper

Dense Correspondence Finding for Parametrization-free Animation Reconstruction from Video

MPS-Authors

Ahmed, Naveed — Computer Graphics, MPI for Informatics, Max Planck Society
Theobalt, Christian — Computer Graphics, MPI for Informatics, Max Planck Society
Rössl, Christian — Computer Graphics, MPI for Informatics, Max Planck Society
Seidel, Hans-Peter — Computer Graphics, MPI for Informatics, Max Planck Society

Citation

Ahmed, N., Theobalt, C., Rössl, C., Thrun, S., & Seidel, H.-P. (2008). Dense Correspondence Finding for Parametrization-free Animation Reconstruction from Video. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2008) (pp. 1-8). Los Alamitos, CA: IEEE Computer Society.


Cite as: https://hdl.handle.net/11858/00-001M-0000-000F-1B67-B
Abstract
We present a dense 3D correspondence finding method that enables spatio-temporally coherent reconstruction of surface animations from multi-view video data. Given as input a sequence of shape-from-silhouette volumes of a moving subject that were reconstructed for each time frame individually, our method establishes dense surface correspondences between subsequent shapes independently of the surface discretization. This is achieved in two steps: first, we obtain sparse correspondences from robust optical features between adjacent frames. Second, we generate dense correspondences that serve as a map between the respective surfaces. By applying this procedure successively to all pairs of subsequent time steps, we can align one shape with all others. Thus, the original input can be reconstructed as a sequence of meshes with constant connectivity and small tangential distortion. We demonstrate the performance and accuracy of our method on several synthetic and captured real-world sequences.
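The two-step pipeline in the abstract — sparse feature matches between adjacent frames, densified into a full surface-to-surface map, then composed over the sequence — can be illustrated with a minimal sketch. This is not the authors' implementation; it assumes sparse matches are already given per frame pair and densifies them by simple inverse-distance interpolation of the sparse displacements, which is only a stand-in for the paper's actual densification:

```python
import numpy as np

def dense_map_from_sparse(src_pts, dst_pts, surface_src, k=3):
    """Map every surface point of frame t to frame t+1 using sparse anchors.

    src_pts/dst_pts: (m, 3) sparse feature correspondences between the frames.
    surface_src:     (n, 3) surface sample points of frame t.
    Each surface point is moved by an inverse-distance-weighted blend of
    the displacements of its k nearest sparse anchors (illustrative only).
    """
    disp = dst_pts - src_pts                      # sparse displacements
    k = min(k, len(src_pts))
    out = np.empty_like(surface_src)
    for i, p in enumerate(surface_src):
        d = np.linalg.norm(src_pts - p, axis=1)   # distance to each anchor
        idx = np.argsort(d)[:k]                   # k nearest anchors
        w = 1.0 / (d[idx] + 1e-8)                 # inverse-distance weights
        w /= w.sum()
        out[i] = p + (w[:, None] * disp[idx]).sum(axis=0)
    return out

def propagate(pairwise_sparse, surface0):
    """Compose the pairwise maps so the frame-0 vertices track all later
    frames, yielding meshes with constant connectivity across the sequence."""
    surfaces = [surface0]
    cur = surface0
    for src_pts, dst_pts in pairwise_sparse:
        cur = dense_map_from_sparse(src_pts, dst_pts, cur)
        surfaces.append(cur)
    return surfaces
```

Because each dense map is applied to the already-aligned vertex set, the connectivity of the frame-0 mesh carries through unchanged — the property the abstract relies on for parametrization-free reconstruction.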