  Laying the foundations for an in-depth investigation of the whole space of facial expressions

Kaulard, K., Wallraven, C., Cunningham, D., & Bülthoff, H. (2010). Laying the foundations for an in-depth investigation of the whole space of facial expressions. Poster presented at 10th Annual Meeting of the Vision Sciences Society (VSS 2010), Naples, FL, USA.

Creators

Creators:
Kaulard, K1, Author
Wallraven, C1, Author
Cunningham, DW1, Author
Bülthoff, HH1, Author
Affiliations:
1Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_1497797              

Content

Keywords: -
Abstract: Facial expressions form one of the most important and powerful communication systems of human social interaction. They express a large range of emotions but also convey more general, communicative signals. To date, research has mostly focused on the static, emotional aspect of facial expression processing, using only a limited set of “generic” or “universal” expression photographs, such as a happy or sad face. However, the fact that facial expressions carry communicative aspects beyond emotion and transport meaning in the temporal domain has so far been largely neglected. In order to enable a deeper understanding of facial expression processing with a focus on both emotional and communicative aspects of facial expressions in a dynamic context, it is essential to first construct a database that contains such material using a well-controlled setup. Here we present the novel MPI facial expression database, which contains 20 native German participants performing 58 expressions based on pre-defined context scenarios, making it the most extensive database of its kind to date. Three experiments were performed to investigate the validity of the scenarios and the recognizability of the expressions. In Experiment 1, 10 participants were asked to freely name the facial expressions that would be elicited given the scenarios. The scenarios were effective: 82% of the answers matched the intended expressions. In Experiment 2, 10 participants had to identify 55 expression videos of 10 actors. We found that 34 expressions could be identified reliably without any context. Finally, in Experiment 3, 20 participants had to group the 55 expression videos of 10 actors based on similarity. Out of the 55 expressions, 45 formed consistent groups, which highlights the impressive variety of conversational expression categories we use. Interestingly, none of the experiments found any advantage for the universal expressions, demonstrating the robustness with which we interpret conversational facial expressions.

Details

Language(s): -
Date: 2010-05
Publication status: Published
Pages: -
Place, publisher, edition: -
Table of contents: -
Review method: -
Identifiers: URI: http://www.journalofvision.org/content/10/7/606
 DOI: 10.1167/10.7.606
 BibTeX Citekey: 6739
Degree: -

Event

Title: 10th Annual Meeting of the Vision Sciences Society (VSS 2010)
Venue: Naples, FL, USA
Start/End date: -
