
Conference Paper

Learning novel views to a single face image

MPS-Authors

Vetter, T.
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society

Citation

Vetter, T. (1996). Learning novel views to a single face image. In C. von der Malsburg, W. von Seelen, J. Vorbrüggen, & B. Sendhoff (Eds.), Artificial Neural Networks: ICANN 96: 1996 International Conference Bochum, Germany, July 16–19, 1996 (pp. 715–719). Berlin, Germany: Springer.


Cite as: https://hdl.handle.net/11858/00-001M-0000-0013-EB52-6
Abstract
A new technique is described for synthesizing images of faces from new viewpoints when only a single 2D image is available. A novel 2D image of a face can be computed without knowledge about the 3D structure of the head. The technique draws on prior knowledge of faces based on example images of other faces seen in different poses and on a single generic 3D model of a human head. The example images are used to learn a pose-invariant shape and texture description of a new face. The 3D model is used to solve the correspondence problem between images showing faces in different poses. Examples of synthetic "rotations" over 24 degrees based on a training set of 100 faces are shown.
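
The learning step described in the abstract can be illustrated with a simplified linear-class sketch: the new face (in vectorized shape or texture form, after correspondence has been established) is expressed as a linear combination of the example faces at the known pose, and the same coefficients are then applied to the examples at the target pose. This is only an illustrative simplification under assumed array shapes, not the paper's exact formulation; the function and variable names below are hypothetical.

import numpy as np

def synthesize_novel_view(x_input, examples_src, examples_tgt):
    """Predict the appearance of a face in a new pose from a single view.

    x_input      -- vectorized shape or texture of the new face in the source pose, shape (d,)
    examples_src -- n example faces in the source pose, columns of a (d, n) matrix
    examples_tgt -- the same n example faces in the target pose, shape (d, n)
    """
    # Least-squares fit: represent the input as a linear combination of the examples.
    coeffs, *_ = np.linalg.lstsq(examples_src, x_input, rcond=None)
    # Reuse the coefficients on the target-pose examples to synthesize the novel view.
    return examples_tgt @ coeffs

In the paper itself, shape and texture are handled as separate vectorized representations, and the correspondence needed to build them is obtained with the help of a single generic 3D head model; the sketch above only shows the coefficient-transfer idea common to both.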