
Released

Poster

Intraparietal sulcus represents audiovisual space

MPS-Authors

Rohe,  T
Research Group Cognitive Neuroimaging, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;


Noppeney,  U
Research Group Cognitive Neuroimaging, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

Fulltext (public)
There are no public fulltexts stored in PuRe
Supplementary Material (public)
There is no public supplementary material available
Citation

Rohe, T., & Noppeney, U. (2012). Intraparietal sulcus represents audiovisual space. Poster presented at Bernstein Conference 2012, München, Germany. doi:10.3389/conf.fncom.2012.55.00054.


Cite as: https://hdl.handle.net/11858/00-001M-0000-0013-B648-7
Abstract
Previous research has demonstrated that human observers locate audiovisual (AV) signals in space by averaging auditory (A) and visual (V) spatial signals according to their relative sensory reliabilities (i.e., the inverse of the variance) (Ernst & Banks, 2002; Alais & Burr, 2004). This form of AV integration is optimal in that it provides the most reliable percept. Yet, the neural systems mediating the integration of spatial inputs remain unclear. Multisensory integration of spatial signals has previously been related to higher-order association areas such as the intraparietal sulcus (IPS) as well as to early sensory areas such as the planum temporale (Bonath et al., 2007).

In the current fMRI study, we investigated whether and how early visual (V1-V3) and higher association (IPS) areas represent A and V spatial information given their retinotopic organization. One subject was presented with synchronous audiovisual signals at spatially congruent or discrepant locations along the azimuth and at two levels of sensory reliability. Hence, the experimental design factorially manipulated (1) V location, (2) A location, and (3) V reliability. The subject's task was to localize the A signal. Retinotopic maps in visual areas and IPS were measured with standard wedge and ring checkerboard stimuli.

At the behavioral level, the perceived location of the A input was shifted towards the location of the V input depending on the relative A and V reliabilities. At the neural level, the cue locations represented in retinotopic maps were decoded by computing a population vector estimate (Pouget et al., 2000) from the voxels' BOLD responses to the AV cues, given each voxel's preferred visual-field coordinate. In early visual areas (V1-V3), the decoded cue locations were determined by the V spatial signal but were independent of the A spatial signal. In IPS, the decoded cue locations were determined by both the V and the A spatial signals when relative V reliability was low.
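The population-vector readout described above can be sketched as follows. This is an illustrative toy implementation, not the authors' analysis code: each voxel "votes" for its preferred azimuth, weighted by its BOLD response, and the votes are combined as a circular mean (function and variable names are hypothetical).

```python
import numpy as np

def population_vector_estimate(preferred_azimuth_deg, bold_response):
    """Decode a stimulus azimuth from a population of voxel responses.

    Each voxel contributes a unit vector pointing at its preferred
    visual-field azimuth, scaled by its (non-negative) BOLD response;
    the decoded location is the angle of the summed vector.
    """
    theta = np.deg2rad(np.asarray(preferred_azimuth_deg, dtype=float))
    # Discard negative weights so below-baseline responses do not flip votes
    w = np.clip(np.asarray(bold_response, dtype=float), 0.0, None)
    vector = np.sum(w * np.exp(1j * theta))  # response-weighted vector sum
    return np.rad2deg(np.angle(vector))      # decoded azimuth in degrees
```

With responses that peak for right-of-fixation voxels, the decoded azimuth shifts rightward in proportion to the response weighting, which is the property the decoding analysis exploits.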
In conclusion, our results suggest that the brain represents AV spatial location in IPS in qualitative agreement with reliability-weighted multisensory integration.
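The reliability-weighted integration model referenced in the abstract (Ernst & Banks, 2002) can be written out as a short numerical sketch. This is the standard maximum-likelihood cue-combination formula, not code from the poster; names are illustrative.

```python
def reliability_weighted_estimate(s_a, sigma_a, s_v, sigma_v):
    """Fuse auditory and visual location estimates by reliability weighting.

    Each cue's reliability is r = 1 / sigma**2; the fused estimate is the
    reliability-weighted average, and its variance 1 / (r_a + r_v) is never
    larger than that of the more reliable single cue.
    """
    r_a = 1.0 / sigma_a**2  # auditory reliability
    r_v = 1.0 / sigma_v**2  # visual reliability
    s_av = (r_a * s_a + r_v * s_v) / (r_a + r_v)
    sigma_av = (1.0 / (r_a + r_v)) ** 0.5
    return s_av, sigma_av
```

For example, an auditory cue at 10 deg (sigma = 2) combined with a visual cue at 0 deg (sigma = 1) yields a percept pulled strongly towards the visual location, mirroring the behavioral ventriloquist shift reported above.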