
Record


Released

Meeting Abstract

Motion and form interact in expression recognition: Insights from computer animated faces

MPG Authors

Cunningham, D.
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;


Wallraven, C.
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

Full texts (restricted access)
No full texts are currently released for your IP range.
Full texts (freely accessible)
No freely accessible full texts are available in PuRe
Supplementary material (freely accessible)
No freely accessible supplementary materials are available
Citation

Cunningham, D., & Wallraven, C. (2009). Motion and form interact in expression recognition: Insights from computer animated faces. Perception, 38(ECVP Abstract Supplement), 163.


Citation link: https://hdl.handle.net/11858/00-001M-0000-0013-C3C9-B
Abstract
Faces are a powerful and versatile communication channel. Yet little is known about which facial changes are important for expression recognition (for a review, see Schwaninger et al., 2006, Progress in Brain Research). Here, we investigate at which spatial and temporal scales expressions are recognized, using five different expressions and three animation styles. In point-light faces, the motion and configuration of facial features can be inferred, but higher-frequency spatial deformations cannot. In wireframe faces, additional information about spatial configuration and deformation is available. Finally, full-surface faces carry the highest degree of static information. In our experiment, we also systematically varied the number of vertices and the presence of motion. Recognition accuracy (6AFC with a 'none-of-the-above' option) and perceived intensity (7-point scale) were measured. Overall, dynamic expressions outperformed static ones (72 vs. 49 in accuracy, 4.50 vs. 3.94 in intensity) and were largely impervious to geometry reduction. Interestingly, in both conditions, wireframe faces suffered the least from geometry reduction. On the one hand, this suggests that more information than the motion of single vertices is necessary for recognition; on the other hand, it shows that geometry reduction affects the full-surface face more than the abstracted versions.
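
The abstract does not include any analysis code or data; purely as an illustrative sketch of the measures it describes, the following Python snippet aggregates hypothetical trial records into per-condition recognition accuracy (proportion correct in the 6AFC task) and mean intensity ratings (7-point scale). Every variable name and data value below is invented for illustration and is not taken from the study.

    from collections import defaultdict

    # Hypothetical trial records: (animation_style, is_dynamic, response_correct, intensity_rating).
    # The styles and measures follow the abstract; the data values are made up.
    trials = [
        ("point-light",  True,  True,  5),
        ("point-light",  False, False, 3),
        ("wireframe",    True,  True,  5),
        ("wireframe",    False, True,  4),
        ("full-surface", True,  True,  6),
        ("full-surface", False, False, 4),
    ]

    def summarize(trials):
        """Per (style, motion) condition: proportion correct and mean intensity."""
        buckets = defaultdict(list)
        for style, dynamic, correct, intensity in trials:
            buckets[(style, dynamic)].append((correct, intensity))
        summary = {}
        for cond, rows in buckets.items():
            n = len(rows)
            accuracy = sum(c for c, _ in rows) / n          # proportion correct
            mean_intensity = sum(i for _, i in rows) / n    # mean of 7-point ratings
            summary[cond] = (accuracy, mean_intensity)
        return summary

    # Note: chance level in a 6AFC task with an added 'none-of-the-above'
    # option is 1/7 if all seven responses are equally likely.
    for (style, dynamic), (acc, inten) in sorted(summarize(trials).items()):
        motion = "dynamic" if dynamic else "static"
        print(f"{style:12s} {motion:8s} accuracy={acc:.2f} intensity={inten:.2f}")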