Synthesizing photorealistic images is an active area of research in computer
graphics. Image-based rendering combined with inverse rendering methods is used
to generate such images from real-world photographs under novel illumination
conditions. Traditionally, very high-quality real-world images of static
objects, obtained under known viewing and lighting conditions, are used in
inverse rendering to measure surface reflectance properties.
This thesis focuses on surface material reconstruction of dynamic objects from
video streams of multi-view recordings. Working with fairly low-resolution
video streams of a dynamic object recorded under known viewing conditions,
together with a geometry model tracked through all time steps, we estimate the
light source configuration that best explains the input and measure the
bidirectional reflectance distribution function (BRDF) of the object. We
construct diffuse and specular maps for the whole
sequence, and a diffuse correction map for each time step. We have applied our
method to sequences of a human actor and are now able to synthesize views of
the actor in arbitrary poses under arbitrary lighting conditions.
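As a sketch of the kind of reflectance model such a pipeline typically fits, one common decomposition is a Lambertian diffuse term plus a Phong-style specular lobe; the specific parametric form and symbols below are an illustrative assumption, not taken from this abstract:

```latex
f_r(\omega_i, \omega_o) \;=\; \frac{k_d}{\pi}
  \;+\; k_s \,\frac{n+2}{2\pi}\,\max\!\bigl(0,\ \cos\theta_r\bigr)^{n}
```

Here $k_d$ and $k_s$ would correspond to per-texel diffuse and specular albedos of the kind stored in the recovered maps, $n$ is a specular exponent, and $\theta_r$ is the angle between the mirror-reflected light direction and the viewing direction $\omega_o$.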