  Perceptual relevance of kinematic components of facial movements extracted by unsupervised learning

Giese, M., Chiovetto, E., & Curio, C. (2012). Perceptual relevance of kinematic components of facial movements extracted by unsupervised learning. Poster presented at 35th European Conference on Visual Perception, Alghero, Italy.



Creators

Giese, M. A.1, Author
Chiovetto, E., Author
Curio, C.2, Author

Affiliations:
1 Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_1497794
2 Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_1497797

Content

Keywords: -
Abstract: The idea that complex facial or body movements are composed of simpler components (usually referred to as 'movement primitives' or 'action units') is common in motor control (Chiovetto, 2011, Journal of Neurophysiology, 105(4), 1429-31) as well as in the study of facial expressions (Ekman and Friesen, 1978). However, such components have rarely been extracted from real facial movement data. Methods: Combining a novel algorithm for anechoic demixing derived from Omlor and Giese (2011, Journal of Machine Learning Research, 12, 1111-1148) with a motion retargeting system for 3D facial animation (Curio et al., 2010, MIT Press, 47-65), we estimated spatially and temporally localized components that capture the major part of the variance of dynamic facial expressions. The estimated components were used to generate stimuli for a psychophysical experiment assessing classification rates and emotional expressiveness ratings for stimuli containing combinations of the extracted components. Results: We investigated how the information carried by the different extracted dynamic facial movement components is integrated in facial expression perception. In addition, we applied different cue fusion models in an attempt to account quantitatively for the obtained experimental results.
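For orientation, the anechoic mixing model underlying the demixing approach of Omlor and Giese (2011) can be sketched as follows (the notation here is illustrative and not taken from the poster itself):

x_i(t) = \sum_{j=1}^{N} \alpha_{ij} \, s_j(t - \tau_{ij})

where x_i(t) denotes the i-th facial movement trajectory, s_j the source components ('movement primitives'), \alpha_{ij} the mixing weights, and \tau_{ij} component- and trajectory-specific time delays. Anechoic demixing estimates the s_j, \alpha_{ij}, and \tau_{ij} jointly from the recorded movement data.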

Details

Language(s):
Date: 2012-09
Publication status: Published
Pages: -
Place, publisher, edition: -
Table of contents: -
Review method: -
Identifiers: URI: http://www.perceptionweb.com/abstract.cgi?id=v120635
BibTeX citekey: GieseCC2012
Degree: -

Event

Title: 35th European Conference on Visual Perception
Place of event: Alghero, Italy
Start/end date: -
