
Record


Released

Poster

Interaction between vision and speech in face recognition

MPG Authors

Bülthoff, I.
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

Full texts (restricted access)
No full texts are currently released for your IP range.
Full texts (freely accessible)
There are no freely accessible full texts available in PuRe.
Supplementary material (freely accessible)
No freely accessible supplementary materials are available.
Citation

Bülthoff, I., & Newell, F. (2003). Interaction between vision and speech in face recognition. Poster presented at Third Annual Meeting of the Vision Sciences Society (VSS 2003), Sarasota, FL, USA.


Citation link: https://hdl.handle.net/11858/00-001M-0000-0013-DB6D-7
Abstract
Many face studies have shown that in memory tasks, distinctive faces are more easily recognized than typical faces. All of these studies were performed with visual information only. We investigated whether a cross-modal interaction between auditory and visual stimuli exists for face distinctiveness. Our experimental question was: can visually typical faces become perceptually distinctive when they are accompanied by distinctive voice stimuli? In a training session, participants were presented with faces from two sets. In one set, all faces were accompanied by characteristic auditory stimuli during learning (d-faces: different languages, intonations, accents, etc.). In the other set, all faces were accompanied by typical auditory stimuli during learning (s-faces: same words, same language). Face stimuli were counterbalanced across auditory conditions. We measured recognition performance in an old/new recognition task; face recognition alone was tested. Our results show that participants were significantly better (t(12) = 3.89, p < 0.005) at recognizing d-faces than s-faces in the test session. These results show that there is an interaction between different sensory inputs and that the typicality of stimuli in one modality can be modified by concomitantly presented stimuli in other sensory modalities.
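
The reported statistic, t(12) = 3.89, is consistent with a within-subjects comparison across 13 participants, each contributing one recognition score per condition. The following is a minimal sketch of such an analysis, assuming a paired t-test computed with SciPy; the scores and variable names are illustrative placeholders, not the study's data or code.

import numpy as np
from scipy import stats

# Hypothetical per-participant recognition scores (hit rates), NOT the study's data.
# Only the design is taken from the abstract: 13 participants (t has 12 df),
# each measured in both conditions (d-faces vs. s-faces).
rng = np.random.default_rng(0)
n_participants = 13
d_face_scores = rng.uniform(0.60, 0.95, n_participants)                  # faces learned with distinctive voices
s_face_scores = d_face_scores - rng.uniform(0.00, 0.20, n_participants)  # faces learned with the same voice stimuli

# Paired (repeated-measures) t-test: one score per participant per condition.
t_stat, p_value = stats.ttest_rel(d_face_scores, s_face_scores)
print(f"t({n_participants - 1}) = {t_stat:.2f}, p = {p_value:.4f}")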