Reverse Correlation in Temporal FACS Space Reveals Diagnostic Information During Dynamic Emotional Expression Classification

Garrod, O., Yu, H., Breidt, M., Curio, C., & Schyns, P. (2010). Reverse Correlation in Temporal FACS Space Reveals Diagnostic Information During Dynamic Emotional Expression Classification. Poster presented at the 10th Annual Meeting of the Vision Sciences Society (VSS 2010), Naples, FL, USA.


Creators:
Garrod, O., Author
Yu, H., Author
Breidt, M.1, Author
Curio, C.1, Author
Schyns, P., Author
Affiliations:
1 Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_1497797

Content

Free keywords: -
 Abstract: Reverse correlation experiments have previously revealed the locations of facial features crucial for the recognition of different emotional expressions, and have related these features to brain electrophysiological activity [SchynsEtal07]. However, in social perception we expect the generation and encoding of communicative signals to share a common framework in the brain [SeyfarthCheney03], and neither 'Bubbles' [GosselinSchyns03] nor white-noise-based manipulations effectively target the input features underlying facial expression generation: the combined activation of sets of facial muscles over time. [CurioEtal06] propose a motion-retargeting method that controls the appearance of facial expression stimuli via a linear 3D Morphable Model [BlanzVetter99] composed of recorded Action Units (AUs). Each AU represents the surface deformation of the face given the full activation of a particular muscle or muscle group taken from the FACS system [EkmanFriesen79]. The set of weighted linear combinations of AUs is hypothesised to be a generative model of the typical facial movements of this actor. Here we report the outcome of a facial emotion reverse correlation experiment with one such generative AU model, over a space of temporally parameterized AU weights. On each trial, between 1 and 5 AUs are randomly selected, and random timecourses for the selected AUs are generated according to 6 temporal parameters (see supplementary figure). The observer rates the stimulus for each of the 6 'universal emotions' on a continuous confidence scale from 0 to 1, and from these ratings optimal AU timecourses (timecourses whose temporal parameters maximize the expected rating for a given expression) are derived per expression and AU. These are then fed as weights into the AU model to reveal the feature dynamics associated with the expression. This method extends Bubbles and reverse correlation techniques to a relevant input space: one that makes explicit hypotheses about the temporal structure of diagnostic information.
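To make the procedure concrete, the following Python sketch illustrates one plausible reading of the trial generation and reverse-correlation analysis. The AU count, frame count, the specific six-parameter timecourse shape, and the rating-weighted averaging (used here as a simple stand-in for the parameter-space optimisation described above) are all illustrative assumptions, not the authors' implementation.

import numpy as np

N_AUS = 42          # assumed size of the actor's Action Unit basis
N_FRAMES = 60       # assumed stimulus length in frames
EMOTIONS = ["happiness", "surprise", "fear", "disgust", "anger", "sadness"]

rng = np.random.default_rng(0)

def random_timecourse(n_frames, rng):
    """Draw one AU activation timecourse from 6 temporal parameters
    (assumed here: onset, peak latency, offset, peak amplitude,
    acceleration, deceleration)."""
    onset, peak, offset = np.sort(rng.uniform(0, n_frames, 3))
    amplitude = rng.uniform(0.2, 1.0)
    accel, decel = rng.uniform(0.5, 2.0, 2)
    t = np.arange(n_frames, dtype=float)
    rise = np.clip((t - onset) / max(peak - onset, 1e-6), 0, 1) ** accel
    fall = np.clip((offset - t) / max(offset - peak, 1e-6), 0, 1) ** decel
    return amplitude * np.where(t <= peak, rise, fall)

def random_trial(rng):
    """Randomly select 1-5 AUs and assign each a random timecourse; the
    resulting (N_AUS x N_FRAMES) weight matrix would drive the 3D
    morphable AU model to render the stimulus."""
    weights = np.zeros((N_AUS, N_FRAMES))
    chosen = rng.choice(N_AUS, size=rng.integers(1, 6), replace=False)
    for au in chosen:
        weights[au] = random_timecourse(N_FRAMES, rng)
    return weights

# Reverse correlation: weight each trial's AU timecourses by the observer's
# per-emotion rating (0 to 1) and average, giving a rating-weighted estimate
# of the diagnostic timecourse per emotion and AU.
n_trials = 5000
trials = np.stack([random_trial(rng) for _ in range(n_trials)])
ratings = rng.uniform(0, 1, (n_trials, len(EMOTIONS)))  # stand-in for observer data
diagnostic = np.einsum("te,tan->ean", ratings, trials) / ratings.sum(0)[:, None, None]
print(diagnostic.shape)  # (6 emotions, N_AUS, N_FRAMES)

In the actual experiment the ratings array would hold the observer's confidence judgments, and the derived per-expression, per-AU timecourses would be fed back into the AU model to visualise the diagnostic feature dynamics.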

Details

Language(s): -
 Dates: 2010-05
 Publication Status: Issued
 Pages: -
 Publishing info: -
 Table of Contents: -
 Rev. Type: -
 Degree: -

Event

Title: 10th Annual Meeting of the Vision Sciences Society (VSS 2010)
Place of Event: Naples, FL, USA
Start-/End Date: -
