Report

Joint Motion and Reflectance Capture for Creating Relightable 3D Videos

MPS-Authors
http://pubman.mpdl.mpg.de/cone/persons/resource/persons45610

Theobalt, Christian
Computer Graphics, MPI for Informatics, Max Planck Society;
Programming Logics, MPI for Informatics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons43978

Ahmed, Naveed
Computer Graphics, MPI for Informatics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons43977

de Aguiar, Edilson
Computer Graphics, MPI for Informatics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons45805

Ziegler, Gernot
Computer Graphics, MPI for Informatics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons44911

Lensch, Hendrik
Computer Graphics, MPI for Informatics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons44965

Magnor, Marcus
Graphics - Optics - Vision, MPI for Informatics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons45449

Seidel, Hans-Peter
Computer Graphics, MPI for Informatics, Max Planck Society;

Fulltext (public)

MPI-I-2005-4-004.pdf
(Any fulltext), 4MB

Citation

Theobalt, C., Ahmed, N., de Aguiar, E., Ziegler, G., Lensch, H., Magnor, M., et al. (2005). Joint Motion and Reflectance Capture for Creating Relightable 3D Videos (MPI-I-2005-4-004).


Cite as: http://hdl.handle.net/11858/00-001M-0000-000F-2879-B
Abstract
Passive optical motion capture can provide authentically animated, photo-realistically and view-dependently textured models of real people. To import real-world characters into virtual environments, however, their surface reflectance properties must also be known. We describe a video-based modeling approach that captures human motion as well as reflectance characteristics from a handful of synchronized video recordings. The presented method recovers spatially varying reflectance properties of clothes by exploiting the time-varying orientation of each surface point with respect to camera and light direction. The resulting model description enables us to match the animated subject's appearance to different lighting conditions, as well as to interchange surface attributes among different people, e.g. for virtual dressing. Our contribution makes it possible to populate virtual worlds with correctly relit, real-world people.
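
The abstract only sketches the idea of estimating per-point reflectance from the changing orientation of each surface point relative to camera and light. The following Python sketch illustrates that general idea under strong simplifying assumptions that are not taken from the report: a single point light of known direction, per-frame surface normals and view directions supplied by a tracked body model, and a basic Lambertian-plus-Phong reflectance model. The function fit_point_reflectance and all numbers are hypothetical and do not reproduce the authors' actual pipeline.

# Minimal sketch (not the authors' method): fit a per-point reflectance model
# from several frames in which the same surface point is observed under
# different orientations relative to camera and light source.
import numpy as np

def fit_point_reflectance(normals, view_dirs, light_dirs, intensities, shininess=20.0):
    """Least-squares fit of a diffuse albedo kd and specular weight ks for one
    surface point, given per-frame normals, view/light directions and observed
    intensities (one color channel). All model choices here are illustrative."""
    n = np.asarray(normals, dtype=float)
    v = np.asarray(view_dirs, dtype=float)
    l = np.asarray(light_dirs, dtype=float)
    y = np.asarray(intensities, dtype=float)

    # Normalize direction vectors so the dot products below are true cosines.
    n /= np.linalg.norm(n, axis=1, keepdims=True)
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    l /= np.linalg.norm(l, axis=1, keepdims=True)

    ndotl = np.sum(n * l, axis=1)
    r = 2.0 * ndotl[:, None] * n - l                  # light reflected about the normal
    diffuse = np.clip(ndotl, 0.0, None)               # Lambertian term
    specular = np.clip(np.sum(r * v, axis=1), 0.0, None) ** shininess  # Phong lobe

    # Solve A @ [kd, ks] ~= y in the least-squares sense.
    A = np.column_stack([diffuse, specular])
    (kd, ks), *_ = np.linalg.lstsq(A, y, rcond=None)
    return kd, ks

# Usage: observations of one surface point across four frames (made-up numbers).
if __name__ == "__main__":
    normals = [[0.0, 0.0, 1.0], [0.0, 0.2, 0.98], [0.1, 0.0, 0.99], [0.0, -0.2, 0.98]]
    view_dirs = [[0.0, 0.0, 1.0]] * 4
    light_dirs = [[0.3, 0.3, 0.9]] * 4
    intensities = [0.55, 0.62, 0.58, 0.43]
    kd, ks = fit_point_reflectance(normals, view_dirs, light_dirs, intensities)
    print(f"diffuse albedo ~ {kd:.3f}, specular weight ~ {ks:.3f}")

In a setting like the one the abstract describes, such a per-point fit would be repeated for every surface point and color channel, which is what yields spatially varying reflectance; the shininess exponent is fixed here purely to keep the fit linear.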