Released

Poster

High-level after-effects in the recognition of dynamic facial expressions

MPS-Authors
http://pubman.mpdl.mpg.de/cone/persons/resource/persons83871

Curio, C
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons84787

Giese, MA
Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons83829

Breidt, M
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons84016

Kleiner, M
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons83839

Bülthoff, HH
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

Citation

Curio, C., Giese, M., Breidt, M., Kleiner, M., & Bülthoff, H. (2007). High-level after-effects in the recognition of dynamic facial expressions. Poster presented at 7th Annual Meeting of the Vision Sciences Society (VSS 2007), Sarasota, FL, USA.


Cite as: http://hdl.handle.net/11858/00-001M-0000-0013-CD7B-7
Abstract
Strong high-level after-effects have been reported for the recognition of static faces (Webster et al., 1999; Leopold et al., 2001). Presentation of static 'anti-faces' temporarily biases the perception of neutral test faces towards specific identities or facial expressions. Recent experiments have also demonstrated high-level after-effects for point-light walkers, resulting in shifts of perceived gender. Our study presents the first results on after-effects for dynamic facial expressions. In particular, we investigated how such after-effects depend on facial identity and on dynamic vs. static adapting stimuli.

STIMULI: Stimuli were generated using a 3D morphable model for facial expressions based on laser scans. The 3D model is driven by facial motion capture data recorded with a VICON system. We recorded data for two facial expressions (Disgust and Happy) from an amateur actor. To create 'dynamic anti-expressions', the motion data were projected onto a basis of 17 facial action units. These units were parameterized by motion data obtained from specially trained actors who are capable of executing individual action units according to FACS (Ekman, 1978). Anti-expressions were obtained by inverting the vectors in this linear projection space.

METHOD: After determining baseline performance for expression recognition, participants were adapted with dynamic anti-expressions or with static adapting stimuli (extreme keyframes of the same duration), followed by an expression recognition test. Test stimuli were Disgust and Happy with strongly reduced expression strength (corresponding to vectors of reduced length in the linear projection space). Adaptation and test stimuli were derived from faces with the same or different identities.

RESULTS: Adaptation with dynamic anti-expressions resulted in selective after-effects: recognition increased for matching test stimuli (p < 0.05, N = 13). Adaptation effects were significantly reduced for static adapting stimuli and for different identities of the adapting and test faces. This suggests identity-specific neural representations of dynamic facial expressions.
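The anti-expression construction described in the abstract is, at its core, a sign inversion in a linear action-unit space: an expression is projected onto the action-unit basis, the resulting coefficient vector is negated (anti-expression) or shrunk (reduced-strength test stimulus), and the motion is reconstructed. A minimal sketch of that idea follows; it uses a random synthetic basis in place of the real FACS-derived action units, and all dimensions and variable names are assumptions for illustration, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 17 action units spanning a facial-motion space.
n_features = 60       # e.g. stacked 3D marker displacements (assumed)
n_action_units = 17   # number of action units, as in the abstract

# B: basis of action-unit motion vectors (one column per action unit).
B = rng.standard_normal((n_features, n_action_units))

# A recorded expression, here synthesized as a combination of action units.
expression = B @ rng.uniform(0.5, 1.0, size=n_action_units)

# Project the expression onto the action-unit basis (least-squares fit).
coeffs, *_ = np.linalg.lstsq(B, expression, rcond=None)

# Anti-expression: invert the coefficient vector in the projection space.
anti_expression = B @ (-coeffs)

# Reduced-strength test stimulus: a shorter vector in the same space.
test_stimulus = B @ (0.2 * coeffs)
```

Because the synthetic expression lies exactly in the span of the basis, the least-squares projection recovers it, and the anti-expression is its mirror image through the neutral face (the origin of the projection space).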