
Released

Poster

Visual Vestibular Interactions for Self Motion Estimation

MPS-Authors
http://pubman.mpdl.mpg.de/cone/persons/resource/persons83842

Butler, JS
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons84978

Smith, S
Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons83808

Beykirch, K
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons83839

Bülthoff, H
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

Citation

Butler, J., Smith, S., Beykirch, K., & Bülthoff, H. (2006). Visual Vestibular Interactions for Self Motion Estimation. Poster presented at 7th International Multisensory Research Forum (IMRF 2006), Dublin, Ireland.


Cite as: http://hdl.handle.net/11858/00-001M-0000-0013-D19B-4
Abstract
Accurate perception of self-motion through cluttered environments involves a coordinated set of sensorimotor processes that encode and compare information from visual, vestibular, proprioceptive, motor-corollary, and cognitive inputs. Our goal was to investigate the interaction between visual and vestibular cues to the direction of linear self-motion (heading direction). In the vestibular experiment, blindfolded participants were given two distinct forward linear translations, on a Stewart platform, with identical acceleration profiles. One motion was a standard heading direction, while the test heading was varied randomly using the method of constant stimuli. Participants judged in which interval they moved further towards the right. In the visual-alone condition, participants were presented with two intervals of radial optic-flow stimuli and judged which of the two intervals represented a pattern of optic flow consistent with more rightward self-motion. In the combined experiment, participants were presented with a translation stimulus that carried both vestibular and visual information. From participants' responses, we computed a psychometric function for each condition, from which we calculated each participant's uncertainty (the standard deviation of the cumulative Gaussian fit). Using the uncertainty values from the vestibular-alone and visual-alone experiments, we will predict the outcome of the combined experiment using a maximum-likelihood method.
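The prediction step described in the abstract — taking the unimodal uncertainties (standard deviations of cumulative-Gaussian fits) and deriving the expected combined-cue uncertainty under a maximum-likelihood model — can be sketched as below. This is a minimal illustration of the standard MLE cue-combination formula, not the authors' code; the function names and the example sigma values are assumptions made for the sketch.

```python
# Sketch of maximum-likelihood (MLE) visual-vestibular cue combination.
# All numeric values below are hypothetical, for illustration only.
import math

def psychometric(x, mu, sigma):
    """Cumulative-Gaussian psychometric function:
    probability of judging the test heading as 'more rightward'."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def predicted_combined_sigma(sigma_vis, sigma_vest):
    """MLE prediction: 1/sigma_c^2 = 1/sigma_vis^2 + 1/sigma_vest^2."""
    return math.sqrt((sigma_vis**2 * sigma_vest**2) /
                     (sigma_vis**2 + sigma_vest**2))

def cue_weights(sigma_vis, sigma_vest):
    """Reliability-based weights: the less uncertain cue gets more weight."""
    w_vis = sigma_vest**2 / (sigma_vis**2 + sigma_vest**2)
    return w_vis, 1.0 - w_vis

# Hypothetical unimodal uncertainties (degrees of heading).
sigma_vis, sigma_vest = 2.0, 4.0
sigma_comb = predicted_combined_sigma(sigma_vis, sigma_vest)
w_vis, w_vest = cue_weights(sigma_vis, sigma_vest)
```

A useful property of this model is that the predicted combined uncertainty is always at most the smaller of the two unimodal uncertainties, which is what makes the combined-cue experiment a test of optimal integration.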