Released

Conference Paper

Multi-viewpoint video capture for facial perception research

MPS-Authors

Kleiner, M (/persons/resource/persons84016)
Wallraven, C (/persons/resource/persons84298)
Breidt, M (/persons/resource/persons83829)
Cunningham, DW (/persons/resource/persons83870)
Bülthoff, HH (/persons/resource/persons83839)

All authors: Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society; Max Planck Institute for Biological Cybernetics, Max Planck Society

External Resource
No external resources are shared
Fulltext (public)

pdf3058.pdf (Any fulltext), 538 KB

Supplementary Material (public)
There is no public supplementary material available
Citation

Kleiner, M., Wallraven, C., Breidt, M., Cunningham, D. W., & Bülthoff, H. H. (2004). Multi-viewpoint video capture for facial perception research. In N. Thalmann & D. Thalmann (Eds.), Workshop on Modelling and Motion Capture Techniques for Virtual Environments (CAPTECH 2004) (pp. 55-60).


Cite as: https://hdl.handle.net/11858/00-001M-0000-0013-D749-A
Abstract
To produce realistic-looking avatars, computer graphics has traditionally relied solely on physical realism. Research on the cognitive aspects of face perception, however, can provide insights into how to produce believable and recognizable faces. In this paper, we describe a method for automatically manipulating video recordings of faces. The technique combines a custom-built multi-viewpoint video capture system with head motion tracking and a detailed 3D head shape model. We illustrate how the technique can be employed in studies of dynamic facial expression perception by summarizing the results of two psychophysical studies that provide suggestions for creating recognizable facial expressions.
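As an illustration of how a tracked 3D head shape model and calibrated multi-viewpoint cameras can be combined, the following minimal Python sketch projects head-model vertices into a single camera view. It assumes a standard pinhole camera model and a rigid head pose from tracking; the function and parameter names are illustrative and not taken from the paper.

    # Hypothetical sketch (not the authors' code): given the rigid head pose
    # from motion tracking, each 3D head-model vertex can be projected into a
    # calibrated camera view, yielding per-view image locations of facial
    # regions that could then be manipulated in the recorded video frames.
    import numpy as np

    def project_vertices(vertices, head_pose, camera_matrix, cam_extrinsics):
        """Project 3D head-model vertices into one camera view.

        vertices       : (N, 3) model vertices in head coordinates
        head_pose      : (4, 4) rigid transform from tracking (head -> world)
        camera_matrix  : (3, 3) intrinsic matrix K of the calibrated camera
        cam_extrinsics : (4, 4) rigid transform world -> camera
        returns        : (N, 2) pixel coordinates
        """
        # Homogeneous model vertices, shape (N, 4)
        v_h = np.hstack([vertices, np.ones((len(vertices), 1))])
        # Head coordinates -> world -> camera, keep x, y, z rows
        v_cam = (cam_extrinsics @ head_pose @ v_h.T)[:3]
        # Perspective projection with intrinsics K, then divide by depth
        uv = camera_matrix @ v_cam
        return (uv[:2] / uv[2]).T

    if __name__ == "__main__":
        # Toy example: two vertices, identity pose, simple pinhole camera
        verts = np.array([[0.0, 0.0, 0.5], [0.05, 0.02, 0.5]])
        pose = np.eye(4)
        K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
        extr = np.eye(4)
        print(project_vertices(verts, pose, K, extr))

In a multi-viewpoint setup, the same projection would be repeated with each camera's intrinsics and extrinsics, so that the same facial region of the model can be located consistently in every recorded view.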