Abstract:
In model-based free-viewpoint video, a detailed representation of the
time-varying geometry of a real-world scene is used to generate renditions of it
from novel viewpoints. In this paper, we present a method for reconstructing
such a dynamic geometry model of a human actor from multi-view video. In a
two-step procedure, first the spatio-temporally consistent shape and poses of a
generic human body model are estimated by means of a silhouette-based
analysis-by-synthesis method. In a second step, subtle details in surface
geometry that are specific to each particular time step are recovered by
enforcing a color-consistency criterion. By this means, we generate a
realistic representation of the time-varying geometry of a moving person that
also reproduces such dynamic surface variations.
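The color-consistency criterion mentioned in the second step can be illustrated with a minimal sketch: a candidate 3D surface point is projected into every camera view, the observed colors are sampled, and their spread is used as a consistency score (low spread means the views agree that the point lies on the surface). This is a generic photo-consistency check, not the paper's exact formulation; the camera matrices, images, and the variance-based score are illustrative assumptions.

```python
import numpy as np

def project(P, X):
    """Project a 3D point X (shape (3,)) with a 3x4 camera matrix P
    into pixel coordinates (u, v)."""
    x = P @ np.append(X, 1.0)  # homogeneous projection
    return x[:2] / x[2]

def photo_consistency(X, cameras, images):
    """Sample the colors that X projects to in each view and return the
    mean per-channel standard deviation. A low score indicates the views
    agree on the surface color at X (photo-consistent); np.inf is returned
    when X is visible in fewer than two views."""
    samples = []
    for P, img in zip(cameras, images):
        u, v = project(P, X)
        h, w = img.shape[:2]
        ui, vi = int(round(u)), int(round(v))
        if 0 <= vi < h and 0 <= ui < w:          # inside this view?
            samples.append(img[vi, ui].astype(float))
    if len(samples) < 2:
        return float("inf")
    return float(np.mean(np.std(np.array(samples), axis=0)))
```

In an actual reconstruction pipeline such a score would be evaluated along a small displacement range around each vertex of the body model, moving the vertex to the most color-consistent position; visibility and occlusion handling, omitted here, are essential in practice.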