Record


Released

Poster

Audio-visual integration during multisensory object categorization

MPG Authors
http://pubman.mpdl.mpg.de/cone/persons/resource/persons84310

Werner, S
Research Group Cognitive Neuroimaging, Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons84112

Noppeney, U
Research Group Cognitive Neuroimaging, Max Planck Institute for Biological Cybernetics, Max Planck Society;

External Resources
There are no external resources available
Full texts (freely accessible)
There are no freely accessible full texts available
Supplementary Material (freely accessible)
There is no freely accessible supplementary material available
Citation

Werner, S., & Noppeney, U. (2006). Audio-visual integration during multisensory object categorization. Poster presented at 7th International Multisensory Research Forum (IMRF 2006), Dublin, Ireland.


Citation link: http://hdl.handle.net/11858/00-001M-0000-0013-D14F-1
Abstract
Tools or musical instruments are characterized by their form and sound. We investigated audio-visual integration during semantic categorization by presenting pictures and sounds of objects separately or together and manipulating the degree of information content. The 3 × 6 factorial design manipulated (1) auditory information (sound, noise, silence) and (2) visual information (6 levels of image degradation). The visual information was degraded by manipulating the amount of phase scrambling of the image (0, 20, 40, 60, 80, 100%). Subjects categorized stimuli as musical instruments or tools. In terms of accuracy and reaction times (RT), we found significant main effects of (1) visual and (2) auditory information and (3) an interaction between the two factors. The interaction was primarily due to an increased facilitatory effect of sound at the 80% degradation level. Consistently across the first 5 levels of visual degradation, we observed RT improvements for the sound-visual relative to the noise-visual or silence-visual conditions. The corresponding RT distributions significantly violated the so-called race model inequality across the first 5 percentiles of their cumulative distribution functions (even when controlling for low-level audio-visual interactions). These results suggest that redundant structural and semantic information is not independently processed but integrated during semantic categorization.
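
Note: the race model inequality referenced in the abstract is, in its standard formulation (commonly attributed to Miller, 1982), a bound on the cumulative RT distribution of the bimodal condition by the sum of the unimodal ones; a violation at any time t rules out independent parallel processing of the two signals. The exact variant tested in this poster is not detailed in the record, but the standard form reads:

P(\mathrm{RT}_{AV} \le t) \;\le\; P(\mathrm{RT}_{A} \le t) + P(\mathrm{RT}_{V} \le t) \quad \text{for all } t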