
Released

Poster

The ventriloquist effect depends on audiovisual spatial discrepancy and visual reliability

MPG Authors
http://pubman.mpdl.mpg.de/cone/persons/resource/persons84450

Rohe,  T
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Research Group Cognitive Neuroimaging, Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons84112

Noppeney,  U
Research Group Cognitive Neuroimaging, Max Planck Institute for Biological Cybernetics, Max Planck Society;

External resources
There are no external resources available
Full texts (freely accessible)
There are no freely accessible full texts available
Supplementary material (freely accessible)
There is no freely accessible supplementary material available
Citation

Rohe, T., & Noppeney, U. (2011). The ventriloquist effect depends on audiovisual spatial discrepancy and visual reliability. Poster presented at 12th Conference of Junior Neuroscientists of Tübingen (NeNA 2011), Heiligkreuztal, Germany.


Citation link: http://hdl.handle.net/11858/00-001M-0000-0013-B9EA-0
Abstract
Humans integrate auditory and visual spatial cues to locate objects. Generally, location judgments are dominated by vision: observers localize an auditory cue close to a visual cue even when instructed to ignore the latter (the ventriloquist effect). A recent model of multisensory integration proposes that the ventriloquist effect is governed by two principles. First, spatially discrepant cues are integrated only if the observer infers that both cues stem from one object (principle of causal inference). Second, if the inference yields the assumption that both cues originate from one object, the cues are integrated by weighting them according to their relative reliability (principle of Bayes-optimal cue weighting). The bimodal estimate of the object's location therefore has a higher reliability than either unisensory estimate alone. To test this model, 26 subjects were presented with spatial auditory cues (HRTF-convolved white noise) and visual cues (a cloud of dots). The 5x5x5 factorial design manipulated (1) the auditory cue location, (2) the visual cue location, and (3) the reliability of the visual cue via the width of the cloud of dots. Subjects were instructed to locate the auditory cue while ignoring the visual cue and to judge the spatial unity of both cues. In line with the principle of causal inference, the ventriloquist effect was weaker and unity judgments were reduced for larger audiovisual discrepancies. For small spatial discrepancies, the ventriloquist effect was weaker at low levels of visual reliability, implying a Bayes-optimal strategy of cue weighting only when a common cause of both cues was assumed. A probabilistic model incorporating the principles of causal inference and Bayes-optimal cue weighting accurately fitted the behavioral data. Overall, the pattern of results suggests that both principles describe important processes governing multisensory integration.
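The two principles named in the abstract can be illustrated with a minimal sketch in standard Gaussian form. This is not the authors' fitted model: the prior probability of a common cause, the uniform distribution over independent-cause locations, and the `spatial_range` parameter are all illustrative assumptions; only the inverse-variance weighting and the discrepancy-dependent causal inference follow the principles described above.

```python
import math


def gauss(x, mu, var):
    """Gaussian density with mean mu and variance var."""
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)


def fuse(x_a, x_v, var_a, var_v):
    """Bayes-optimal cue weighting: inverse-variance weighted average.

    The less reliable (higher-variance) cue receives the smaller weight.
    """
    w_a = (1 / var_a) / (1 / var_a + 1 / var_v)
    return w_a * x_a + (1 - w_a) * x_v


def p_common(x_a, x_v, var_a, var_v, prior=0.5, spatial_range=60.0):
    """Posterior probability that both cues share one cause.

    Under a common cause the audiovisual discrepancy is Gaussian with
    variance var_a + var_v; under independent causes the cues are
    (simplistically, as an assumption of this sketch) uniform over
    spatial_range.
    """
    like_common = gauss(x_a - x_v, 0.0, var_a + var_v)
    like_independent = 1.0 / spatial_range
    return (prior * like_common
            / (prior * like_common + (1 - prior) * like_independent))


def locate_auditory(x_a, x_v, var_a, var_v):
    """Model-averaged auditory location estimate (ventriloquist effect).

    Fusion is engaged only to the degree that a common cause is inferred,
    so the visual pull vanishes for large audiovisual discrepancies.
    """
    p1 = p_common(x_a, x_v, var_a, var_v)
    return p1 * fuse(x_a, x_v, var_a, var_v) + (1 - p1) * x_a
```

With a small discrepancy and a reliable visual cue, `locate_auditory` is pulled toward the visual location; as the discrepancy grows, `p_common` drops and the estimate reverts to the auditory cue, mirroring the weaker ventriloquist effect reported for large discrepancies.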