
Released

Conference Paper

The interaction between motion and form in expression recognition

MPS-Authors
Cunningham, DW
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society

Wallraven, C
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society

Citation

Cunningham, D., & Wallraven, C. (2009). The interaction between motion and form in expression recognition. In K. Mania, B. Riecke, S. Spencer, B. Bodenheimer, & C. Sullivan (Eds.), APGV '09: Proceedings of the 6th Symposium on Applied Perception in Graphics and Visualization (pp. 41-44). New York, NY, USA: ACM Press.


Cite as: https://hdl.handle.net/11858/00-001M-0000-0013-C31D-2
Abstract
Faces are a powerful and versatile communication channel. Physically, facial expressions contain a considerable amount of information, yet it is clear from stylized representations such as cartoons that not all of this information needs to be present for efficient processing of communicative intent. Here, we use a high-fidelity facial animation system to investigate the importance of two forms of spatial information (connectivity and the number of vertices) for the perception of intensity and the recognition of facial expressions. The simplest form of connectivity is point-light faces. Since these show only the vertices, the motion and configuration of features can be seen but the higher-frequency spatial deformations cannot. In wireframe faces, additional information about spatial configuration and deformation is available. Finally, full-surface faces carry the highest degree of static information. The results of two experiments are presented. In the first, the presence of motion was manipulated; in the second, the size of the images was varied. Overall, dynamic expressions were recognized better than static expressions and were largely impervious to the elimination of shape or connectivity information. Decreasing the size of the image had little effect until a critical size was reached. These results add to a growing body of evidence for the critical importance of dynamic information in the processing of facial expressions: as long as motion information is present, very little spatial information is required.