
Record


Released

Poster

Intraparietal sulcus represents audiovisual space

MPG Authors
http://pubman.mpdl.mpg.de/cone/persons/resource/persons84450

Rohe, T.
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Research Group Cognitive Neuroimaging, Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons84112

Noppeney, U.
Research Group Cognitive Neuroimaging, Max Planck Institute for Biological Cybernetics, Max Planck Society;

External Resources
There are no External Resources available
Full texts (publicly accessible)
There are no publicly accessible full texts available
Supplementary Material (publicly accessible)
There is no publicly accessible supplementary material available
Citation

Rohe, T., & Noppeney, U. (2012). Intraparietal sulcus represents audiovisual space. Poster presented at Bernstein Conference 2012, München, Germany.


Citation link: http://hdl.handle.net/11858/00-001M-0000-0013-B648-7
Abstract
Previous research has demonstrated that human observers locate audiovisual (AV) signals in space by averaging auditory (A) and visual (V) spatial signals according to their relative sensory reliabilities (i.e., the inverse of their variances) (Ernst & Banks, 2002; Alais & Burr, 2004). This form of AV integration is optimal in that it yields the most reliable percept. Yet, the neural systems mediating the integration of spatial inputs remain unclear. Multisensory integration of spatial signals has previously been attributed to higher-order association areas such as the intraparietal sulcus (IPS) as well as to early sensory areas such as the planum temporale (Bonath et al., 2007). In the current fMRI study, we investigated whether and how early visual (V1-V3) and higher association (IPS) areas represent A and V spatial information, given their retinotopic organization. One subject was presented with synchronous audiovisual signals at spatially congruent or discrepant locations along the azimuth and at two levels of sensory reliability. Hence, the experimental design factorially manipulated (1) V location, (2) A location, and (3) V reliability. The subject's task was to localize the A signal. Retinotopic maps in visual areas and IPS were measured with standard wedge and ring checkerboard stimuli. At the behavioral level, the perceived location of the A input was shifted towards the location of the V input depending on the relative A and V reliabilities. At the neural level, the cue locations represented in the retinotopic maps were decoded by computing a population vector estimate (Pouget et al., 2000) from the voxels' BOLD responses to the AV cues, given each voxel's preferred visual-field coordinate. In early visual areas (V1-V3), the decoded cue locations were determined by the V spatial signal but were independent of the A spatial signal. In IPS, the decoded cue locations were determined by both the V and the A spatial signals when relative V reliability was low.
In conclusion, our results suggest that the brain represents AV spatial location in IPS in qualitative agreement with reliability-weighted multisensory integration.
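The abstract rests on two computations: reliability-weighted averaging of the A and V location estimates (Ernst & Banks, 2002) and population-vector decoding of cue location from voxel responses (Pouget et al., 2000). A minimal Python sketch of both, with illustrative numbers and hypothetical function names that are not taken from the study:

```python
import numpy as np

def fuse_av(mu_a, var_a, mu_v, var_v):
    """Reliability-weighted (maximum-likelihood) fusion of auditory and
    visual location estimates, where reliability = 1 / variance."""
    r_a, r_v = 1.0 / var_a, 1.0 / var_v
    mu_av = (r_a * mu_a + r_v * mu_v) / (r_a + r_v)
    var_av = 1.0 / (r_a + r_v)  # fused variance is lower than either input's
    return mu_av, var_av

def population_vector(preferred_locs, responses):
    """Decode a stimulus location as the response-weighted average of
    each voxel's preferred visual-field coordinate."""
    responses = np.asarray(responses, dtype=float)
    preferred_locs = np.asarray(preferred_locs, dtype=float)
    return float(np.sum(preferred_locs * responses) / np.sum(responses))

# Illustrative values: an unreliable auditory cue at +8 deg azimuth and a
# reliable visual cue at 0 deg. The fused percept is pulled toward vision.
mu, var = fuse_av(mu_a=8.0, var_a=4.0, mu_v=0.0, var_v=1.0)
print(mu, var)  # 1.6 0.8

# Three hypothetical voxels preferring -10, 0, and +10 deg; the middle
# voxel responds most strongly, so the decoded location is 0 deg.
loc = population_vector([-10.0, 0.0, 10.0], [1.0, 2.0, 1.0])
print(loc)  # 0.0
```

Note how the weaker (higher-variance) auditory cue contributes less to the fused estimate, mirroring the behavioral finding that the perceived A location shifts toward the V input as a function of relative reliability.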