
Released

Journal Article

Combining 3D Flow Fields with Silhouette-based Human Motion Capture for Immersive Video

MPS-Authors
Theobalt, Christian
Computer Graphics, MPI for Informatics, Max Planck Society;
Programming Logics, MPI for Informatics, Max Planck Society;

Carranza, Joel
Computer Graphics, MPI for Informatics, Max Planck Society;
Graphics - Optics - Vision, MPI for Informatics, Max Planck Society;

Magnor, Marcus
Graphics - Optics - Vision, MPI for Informatics, Max Planck Society;

Seidel, Hans-Peter
Computer Graphics, MPI for Informatics, Max Planck Society;

Citation

Theobalt, C., Carranza, J., Magnor, M., & Seidel, H.-P. (2004). Combining 3D Flow Fields with Silhouette-based Human Motion Capture for Immersive Video. Graphical Models, 66(6), 333-351. doi:10.1016/j.gmod.2004.06.009.


Cite as: https://hdl.handle.net/11858/00-001M-0000-000F-29CE-7
Abstract
\begin{abstract}

In recent years, the convergence of Computer Vision and Computer Graphics has
put forth a new field of research that focuses on the reconstruction of
real-world scenes from video streams.
To make immersive \mbox{3D} video a reality, the whole pipeline, from scene
acquisition through \mbox{3D} video reconstruction to real-time rendering,
needs to be investigated.

In this paper, we describe the latest advances in our system for recording,
reconstructing, and rendering free-viewpoint videos of human actors.

We apply a silhouette-based, non-intrusive motion capture algorithm that uses
a 3D human body model to estimate the actor's motion parameters from
multi-view video streams. A renderer plays back the acquired motion sequence
in real time from an arbitrary perspective. Photo-realistic appearance of the
moving actor is obtained by generating time-varying multi-view textures from
video.
This work shows how the motion capture sub-system can be enhanced by
incorporating texture information from the input video streams into the
tracking process. 3D motion fields, reconstructed from optical flow, are used
in combination with silhouette matching to estimate pose parameters. We
demonstrate that high visual quality can be achieved with the proposed
approach and validate the improvements contributed by the motion field step.

\end{abstract}
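As a rough, hypothetical sketch of the general idea behind a motion-field step (not the authors' implementation, which is detailed in the paper): given calibrated cameras, a small 3D displacement of a body point can be recovered from per-view 2D optical flow by linearizing each camera's projection around the point and solving a least-squares system. All names below (`project`, `scene_flow`, etc.) are illustrative.

```python
import numpy as np

def project(P, X):
    """Project a 3D point X (shape (3,)) with a 3x4 camera matrix P
    into normalized 2D image coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

def projection_jacobian(P, X):
    """2x3 Jacobian of the projection u(X) = (x/z, y/z) at point X,
    obtained from the quotient rule on the homogeneous projection."""
    x = P @ np.append(X, 1.0)
    A = P[:2, :3]          # rows producing the homogeneous x, y
    w_row = P[2, :3]       # row producing the homogeneous depth z
    return (A * x[2] - np.outer(x[:2], w_row)) / x[2] ** 2

def scene_flow(Ps, X, flows):
    """Least-squares 3D displacement dX at point X such that, in each
    camera i, the linearized projection change J_i @ dX matches the
    observed 2D optical flow f_i.  Ps: list of 3x4 matrices; flows:
    list of (2,) flow vectors, one per camera."""
    J = np.vstack([projection_jacobian(P, X) for P in Ps])
    f = np.concatenate(flows)
    dX, *_ = np.linalg.lstsq(J, f, rcond=None)
    return dX
```

With two or more cameras whose viewing rays at X are not degenerate, the stacked system is rank 3 and the displacement is fully determined; this mirrors the intuition that multi-view optical flow constrains a 3D motion vector, which can then complement silhouette matching in pose estimation.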