

Released

Poster

Physical and perceptual factors that determine the mode of audio-visual integration in distinct areas of the speech processing system

MPG Authors
Lee, HL
Research Group Cognitive Neuroimaging, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

Tuennerhoff, J
Research Group Cognitive Neuroimaging, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

Werner, S
Research Group Cognitive Neuroimaging, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

Pammi, C
Research Group Cognitive Neuroimaging, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

Noppeney, U
Research Group Cognitive Neuroimaging, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

External Resources
Full texts (restricted access)
No full texts are currently released for your IP range.
Full texts (freely accessible)
No freely accessible full texts are available in PuRe
Supplementary material (freely accessible)
No freely accessible supplementary materials are available
Citation

Lee, H., Tuennerhoff, J., Werner, S., Pammi, C., & Noppeney, U. (2008). Physical and perceptual factors that determine the mode of audio-visual integration in distinct areas of the speech processing system. Poster presented at 9th International Multisensory Research Forum (IMRF 2008), Hamburg, Germany.


Citation link: https://hdl.handle.net/11858/00-001M-0000-0013-C87F-9
Abstract
Speech and non-speech stimuli differ in their (i) physical (spectro-temporal structure) and (ii) perceptual (phonetic/linguistic representation) aspects. To dissociate these two levels in audio-visual integration, this fMRI study employed original spoken sentences and their sinewave analogues, which were either trained and perceived as speech (group 1) or as non-speech (group 2). In both groups, all stimuli were presented in visual, auditory, or audiovisual modalities. AV-integration areas were identified by superadditive and subadditive interactions in a random-effects analysis. While no superadditive interactions were observed, subadditive effects were found in the right superior temporal sulcus for both speech and sinewave stimuli. The left ventral premotor cortex showed increased subadditive interactions for speech relative to its sinewave analogues, irrespective of whether they were perceived as speech or non-speech. More specifically, only the familiar auditory speech signal suppressed the premotor activation elicited by passive lipreading in the visual conditions, suggesting that acoustic rather than perceptual/linguistic features determine AV-integration in the mirror neuron system. In contrast, AV-integration modes differed between sinewave analogues perceived as speech and as non-speech in bilateral anterior STS areas that have previously been implicated in speech comprehension. In conclusion, physical and perceptual factors determine the mode of AV-integration in distinct speech processing areas.