Abstract:
We reconstruct the geometry of a time-varying scene captured in multiple video sequences.
The dynamic geometry is represented by a 3D
hypersurface embedded in space-time.
The intersection of the hypersurface with planes of constant time
then yields the geometry at a single time instant.
In this paper, we model the hypersurface with a collection of triangle
meshes, one for each time frame.
Photo-consistency is measured by an error functional defined as
an integral over the hypersurface.
It can be minimized via a PDE-driven surface evolution, which
simultaneously enforces space-time continuity.
Compared to our previous implementation based on level sets, triangle
meshes yield more accurate results, while requiring less memory and
computation time.
Meshes are also directly compatible with triangle-based rendering
algorithms, so no additional post-processing is required before display.