
Released

Conference Paper

Audiovisual recalibration of vowel categories

MPG Authors

Franken,  Matthias K.
Neurobiology of Language Department, MPI for Psycholinguistics, Max Planck Society;
Donders Institute for Brain, Cognition and Behaviour;

Acheson,  Daniel J.
Neurobiology of Language Department, MPI for Psycholinguistics, Max Planck Society;
Donders Institute for Brain, Cognition and Behaviour;

Hagoort,  Peter
Neurobiology of Language Department, MPI for Psycholinguistics, Max Planck Society;
Donders Institute for Brain, Cognition and Behaviour;

McQueen,  James M.
Donders Institute for Brain, Cognition and Behaviour;
Research Associates, MPI for Psycholinguistics, Max Planck Society;

External Resources
No external resources have been deposited.
Full texts (restricted access)
No full texts are currently released for your IP range.
Full texts (publicly accessible)

Franken_etal_2017.PDF
(Publisher version), 127 KB

Supplementary Material (publicly accessible)
No publicly accessible supplementary materials are available.
Citation

Franken, M. K., Eisner, F., Schoffelen, J.-M., Acheson, D. J., Hagoort, P., & McQueen, J. M. (2017). Audiovisual recalibration of vowel categories. In Proceedings of Interspeech 2017 (pp. 655-658). doi:10.21437/Interspeech.2017-122.


Citation link: https://hdl.handle.net/11858/00-001M-0000-002D-7571-B
Abstract
One of the most daunting tasks of a listener is to map a continuous auditory stream onto known speech sound categories and lexical items. A major issue with this mapping problem is the variability in the acoustic realizations of sound categories, both within and across speakers. Past research has suggested that listeners may use visual information (e.g., lipreading) to calibrate these speech categories to the current speaker. Previous studies have focused on audiovisual recalibration of consonant categories. The present study explores whether vowel categorization, which is known to show less sharply defined category boundaries, also benefits from visual cues. Participants were exposed to videos of a speaker pronouncing one of two vowels, paired with audio that was ambiguous between the two vowels. After exposure, it was found that participants had recalibrated their vowel categories. In addition, individual variability in audiovisual recalibration is discussed. It is suggested that listeners’ category sharpness may be related to the weight they assign to visual information in audiovisual speech perception. Specifically, listeners with less sharp categories assign more weight to visual information during audiovisual speech recognition.