
Record


Released

Journal Article

The contribution of different facial regions to the recognition of conversational expressions

MPG Authors
http://pubman.mpdl.mpg.de/cone/persons/resource/persons84115

Nusseck, M
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons83870

Cunningham, DW
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons84298

Wallraven, C
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons83839

Bülthoff, HH
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

External Resources
There are no external resources available
Full texts (publicly accessible)
There are no publicly accessible full texts available
Supplementary Material (publicly accessible)
There are no publicly accessible supplementary materials available
Citation

Nusseck, M., Cunningham, D., Wallraven, C., & Bülthoff, H. (2008). The contribution of different facial regions to the recognition of conversational expressions. Journal of Vision, 8(8:1), 1-23. doi:10.1167/8.8.1.


Citation link: http://hdl.handle.net/11858/00-001M-0000-0013-C8E3-3
Abstract
The human face is an important and complex communication channel. Humans can, however, easily read in a face not only identity information, but also facial expressions with high accuracy. Here, we present the results of four psychophysical experiments in which we systematically manipulated certain facial areas in video sequences of nine conversational expressions to investigate recognition performance and its dependency on the motions of different facial parts. These studies allowed us to determine what information is necessary and sufficient to recognize the different facial expressions. Subsequent analyses of the face movements and correlation with recognition performance show that, for some expressions, one individual facial region can represent the whole expression. In other cases, the interaction of more than one facial area is needed to clarify the expression. The full set of results is used to develop a systematic description of the roles of different facial parts in the visual perception of conversational facial expressions.