
Released

Journal Article

Navigating through a virtual city: Using virtual reality technology to study human action and perception.

MPS-Authors
/persons/resource/persons84273

van Veen, HAHC
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

/persons/resource/persons83888

Distler, H
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

/persons/resource/persons83828

Braun, S
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

/persons/resource/persons83839

Bülthoff, HH
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

Citation

van Veen, H., Distler, H., Braun, S., & Bülthoff, H. (1998). Navigating through a virtual city: Using virtual reality technology to study human action and perception. Future Generation Computer Systems, 14(3-4), 231-242. doi:10.1016/S0167-739X(98)00027-2.


Cite as: https://hdl.handle.net/11858/00-001M-0000-0013-E805-A
Abstract
The image of a human face changes with viewpoint. A new technique is described for synthesizing images of faces from novel viewpoints when only a single 2D image is available. A novel 2D image of a face can be computed without explicitly recovering the 3D structure of the head. The technique draws on a single generic 3D model of a human head and on prior knowledge of faces derived from example images of other faces seen in different poses. The example images are used to "learn" a pose-invariant shape and texture description of a new face. The 3D model is used to solve the correspondence problem between images showing faces in different poses. The proposed method is of interest for view-independent face recognition tasks as well as for image-synthesis problems in areas such as teleconferencing and virtualized reality.