
Released

Conference Paper

Space-time continuous geometry meshes from multi-view video sequences

MPS-Authors

Goldluecke,  Bastian
International Max Planck Research School, MPI for Informatics, Max Planck Society;
Graphics - Optics - Vision, MPI for Informatics, Max Planck Society;


Magnor,  Marcus
Graphics - Optics - Vision, MPI for Informatics, Max Planck Society;

Citation

Goldluecke, B., & Magnor, M. (2005). Space-time continuous geometry meshes from multi-view video sequences. In 2005 International Conference on Image Processing (ICIP) (pp. 625-628). New York, USA: IEEE.


Cite as: https://hdl.handle.net/11858/00-001M-0000-000F-288D-0
Abstract
We reconstruct geometry for a time-varying scene given by a number of video sequences. The dynamic geometry is represented as a 3D hypersurface embedded in space-time; intersecting the hypersurface with planes of constant time yields the geometry at a single time instant. In this paper, we model the hypersurface as a collection of triangle meshes, one per time frame. The photo-consistency error is measured by an error functional defined as an integral over the hypersurface. It can be minimized by a PDE-driven surface evolution, which simultaneously enforces space-time continuity. Compared to our previous level-set implementation, triangle meshes yield more accurate results while requiring less memory and computation time. Meshes are also directly compatible with triangle-based rendering algorithms, so no additional post-processing is required.
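The functional described in the abstract combines a photo-consistency term (how well the colors observed for a surface point agree across cameras) with a space-time continuity term linking the per-frame meshes. The following is a minimal, discretized sketch of such an energy, not the paper's actual formulation: the input layouts (`colors` as per-point, per-camera RGB samples; `meshes` as a stack of per-frame vertex arrays with fixed correspondence) and the weighting parameter `lam` are illustrative assumptions.

```python
import numpy as np

def photo_consistency_error(colors):
    """Mean per-point color variance across cameras.

    colors: array of shape (n_points, n_cams, 3) -- RGB samples of each
    surface point in every camera that sees it (hypothetical layout).
    A perfectly photo-consistent surface gives 0.
    """
    mean = colors.mean(axis=1, keepdims=True)   # per-point mean color
    return float(((colors - mean) ** 2).mean())

def temporal_smoothness(meshes):
    """Discrete space-time continuity term: mean squared displacement of
    corresponding vertices between consecutive per-frame meshes.

    meshes: array of shape (n_frames, n_vertices, 3), assuming vertex
    correspondence is fixed across frames.
    """
    diff = np.diff(meshes, axis=0)              # frame-to-frame motion
    return float((diff ** 2).mean())

def energy(colors_per_frame, meshes, lam=0.1):
    """Combined discrete energy: photo-consistency summed over frames
    plus a weighted continuity penalty (lam is an illustrative weight)."""
    photo = sum(photo_consistency_error(c) for c in colors_per_frame)
    return photo + lam * temporal_smoothness(meshes)
```

In the paper this energy is an integral over the space-time hypersurface and is minimized by a PDE-driven evolution; in a discrete sketch like this one, the analogue would be gradient descent on the vertex positions of all per-frame meshes jointly, so that lowering the photo-consistency term in one frame is balanced against displacing vertices relative to neighboring frames.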