Released

Journal Article

Bayesian integration of visual and vestibular signals for heading

MPS-Authors
http://pubman.mpdl.mpg.de/cone/persons/resource/persons83842

Butler,  JS
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons84978

Smith,  ST
Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons84378

Campos,  JL
Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons83839

Bülthoff,  HH
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

Citation

Butler, J., Smith, S., Campos, J., & Bülthoff, H. (2010). Bayesian integration of visual and vestibular signals for heading. Journal of Vision, 10(11), 1-13. doi:10.1167/10.11.23.


Cite as: http://hdl.handle.net/11858/00-001M-0000-0013-BE46-7
Abstract
Self-motion through an environment involves a composite of signals such as visual and vestibular cues. Building upon previous results showing that visual and vestibular signals combine in a statistically optimal fashion, we investigated the relative weights of visual and vestibular cues during self-motion. The experiment comprised three conditions: vestibular alone, visual alone (with four different standard heading values), and visual–vestibular combined. In the combined cue condition, inter-sensory conflicts were introduced (Δ = ±6° or ±10°). Participants performed a two-interval forced-choice task in all conditions and were asked to judge in which of the two intervals they moved more to the right. The cue-conflict condition revealed the relative weights associated with each modality. We found that even when there was a relatively large conflict between the visual and vestibular cues, participants exhibited a statistically optimal reduction in variance. However, the pattern of results in the unimodal conditions did not predict the weights in the combined cue condition. Specifically, visual–vestibular cue combination was not predicted solely by the reliability of each cue; rather, more weight was given to the vestibular cue than its reliability alone would predict.
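
For reference, the "statistically optimal" integration framework the abstract invokes is the standard maximum-likelihood cue-combination model, in which each cue is weighted by its reliability (inverse variance) and the combined estimate has lower variance than either single cue. The sketch below illustrates that model only; the σ values and the conflict size are illustrative assumptions, not data from the paper.

```python
import numpy as np

# Minimal sketch of maximum-likelihood (MLE) cue combination:
# weights are proportional to reliabilities (inverse variances),
# and the combined estimate has reduced variance.
# The sigma values below are illustrative, not the paper's data.

def mle_combination(mu_vis, sigma_vis, mu_vest, sigma_vest):
    """Predicted bimodal heading estimate, its SD, and the cue weights."""
    r_vis, r_vest = 1.0 / sigma_vis**2, 1.0 / sigma_vest**2   # reliabilities
    w_vis = r_vis / (r_vis + r_vest)                          # visual weight
    w_vest = 1.0 - w_vis                                      # vestibular weight
    mu_comb = w_vis * mu_vis + w_vest * mu_vest               # weighted mean
    sigma_comb = np.sqrt(1.0 / (r_vis + r_vest))              # reduced variance
    return mu_comb, sigma_comb, w_vis, w_vest

# Example: a 6 deg cue conflict (visual heading at +3 deg, vestibular at -3 deg)
mu, sigma, w_vis, w_vest = mle_combination(mu_vis=3.0, sigma_vis=2.0,
                                           mu_vest=-3.0, sigma_vest=3.0)
print(f"combined heading = {mu:.2f} deg, sigma = {sigma:.2f} deg")
print(f"predicted weights: visual = {w_vis:.2f}, vestibular = {w_vest:.2f}")
```

Under this model, the paper's finding corresponds to an empirical vestibular weight that exceeds the value predicted from the unimodal variances alone, even though the combined-cue variance reduction remains near-optimal.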