
Released

Poster

Audio-visual integration during multisensory object categorization

MPS-Authors

Werner, S
Research Group Cognitive Neuroimaging, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;


Noppeney, U
Research Group Cognitive Neuroimaging, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

Citation

Werner, S., & Noppeney, U. (2006). Audio-visual integration during multisensory object categorization. Poster presented at 7th International Multisensory Research Forum (IMRF 2006), Dublin, Ireland. Retrieved from http://imrf.mcmaster.ca/IMRF/2006/viewabstract.php?id=124.


Cite as: https://hdl.handle.net/11858/00-001M-0000-0013-D14F-1
Abstract
Tools or musical instruments are characterized by both their form and their sound. We investigated audio-visual integration during semantic categorization by presenting pictures and sounds of objects separately or together and manipulating their degree of information content. The 3 × 6 factorial design manipulated (1) auditory information (sound, noise, silence) and (2) visual information (6 levels of image degradation). The visual information was degraded by manipulating the amount of phase scrambling of the image (0, 20, 40, 60, 80, 100%). Subjects categorized stimuli as musical instruments or tools. In terms of both accuracy and reaction times (RT), we found significant main effects of (1) visual information and (2) auditory information, and (3) an interaction between the two factors. The interaction was primarily due to an increased facilitatory effect of sound at the 80% degradation level. Consistently across the first 5 levels of visual degradation, we observed RT improvements for the sound-visual relative to the noise-visual or silence-visual conditions. The corresponding RT distributions significantly violated the so-called race model inequality across the first 5 percentiles of their cumulative distribution functions (even when controlling for low-level audio-visual interactions). These results suggest that redundant structural and semantic information is not processed independently but integrated during semantic categorization.
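The race model inequality referenced above (Miller, 1982) states that, if the two modalities were processed independently, the cumulative RT distribution of the redundant audio-visual condition could never exceed the sum of the unimodal distributions, i.e. F_AV(t) <= F_A(t) + F_V(t) for every t. The sketch below illustrates how such a check at the first percentiles of the distributions could be implemented; it is an illustration under stated assumptions, not the authors' analysis pipeline, and the function name race_model_violation, the percentile grid, and the simulated RT data are hypothetical.

# Minimal sketch of a race model inequality check (Miller, 1982);
# not the authors' actual analysis. All data below are simulated.
import numpy as np

def ecdf(rts, t):
    # Empirical cumulative distribution of reaction times at time points t
    rts = np.sort(np.asarray(rts))
    return np.searchsorted(rts, t, side="right") / rts.size

def race_model_violation(rt_av, rt_a, rt_v, percentiles=(5, 10, 15, 20, 25)):
    # For each chosen percentile of the audio-visual RT distribution,
    # return F_AV(t) - [F_A(t) + F_V(t)]; positive values mean the
    # race model upper bound is violated at that point.
    t = np.percentile(rt_av, percentiles)
    f_av = ecdf(rt_av, t)
    bound = np.minimum(ecdf(rt_a, t) + ecdf(rt_v, t), 1.0)
    return f_av - bound

# Hypothetical usage with simulated RTs (in milliseconds)
rng = np.random.default_rng(0)
rt_a = rng.normal(620, 80, 200)   # auditory-only trials
rt_v = rng.normal(600, 80, 200)   # visual-only trials
rt_av = rng.normal(540, 70, 200)  # audio-visual trials (redundancy gain)
print(race_model_violation(rt_av, rt_a, rt_v))

Evaluating the inequality only at the fast tail of the distribution (the first few percentiles) is the standard choice, since that is where coactivation of the two modalities, rather than statistical facilitation alone, would produce responses faster than the race model allows.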