Record


Released

Poster

Disambiguation of optic flow with vestibular signals

MPG Authors
http://pubman.mpdl.mpg.de/cone/persons/resource/persons84928

MacNeilage, P
Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons83842

Butler, JS
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons83839

Bülthoff, HH
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons84889

Banks, M
Max Planck Institute for Biological Cybernetics, Max Planck Society;

External Resources
There are no external resources available
Full texts (publicly accessible)
There are no publicly accessible full texts available
Supplementary material (publicly accessible)
There are no publicly accessible supplementary materials available
Citation

MacNeilage, P., Butler, J., Bülthoff, H., & Banks, M. (2007). Disambiguation of optic flow with vestibular signals. Poster presented at 7th Annual Meeting of the Vision Sciences Society (VSS 2007), Sarasota, FL, USA.


Citation link: http://hdl.handle.net/11858/00-001M-0000-0013-CD75-4
Abstract
Optic flow is generated by observer motion relative to stationary objects, by movement of objects relative to a stationary observer, and by combinations of the two. To determine the relative contributions of object and self motion to the observed optic flow, the nervous system can use vestibular signals. An object's speed relative to the earth is given by the difference between its speed relative to the head and the head's speed relative to the earth; the variance of this difference is the sum of the component variances: σ²_obj = σ²_vis + σ²_vest. In contrast, if observers estimate self-motion from optic flow and vestibular signals, and assume a stationary visual scene, the visual and vestibular estimates may be combined in a weighted average to yield a more precise self-motion estimate: σ²_self = (σ²_vis · σ²_vest) / (σ²_vis + σ²_vest). Thus, depending on whether the subject reports object motion or self-motion, the two-modality variance is predicted to be higher or lower, respectively, than the component variances.

To test these predictions, we measured speed-discrimination thresholds for fore-aft translations and roll rotations. There were two single-modality conditions, Visual and Vestibular, and two multi-modality conditions, Self-motion and Object-motion. In the Visual, Vestibular, and Self-motion conditions, observers indicated whether the movement was faster or slower than a standard. In the Object-motion condition, observers indicated whether the object appeared to move with or against the self-motion. Experiments were conducted on a rotating chair and a translating motion platform, with the stereoscopic projection system mounted on the apparatus. Stimuli were random-dot planes that rotated clockwise or anti-clockwise, or translated forwards or backwards.

In the translation conditions, multi-modal object-motion thresholds were, as predicted, higher than single-modality thresholds, and multi-modal self-motion thresholds were, as predicted, generally lower than single-modality thresholds. Results from the rotation conditions were less clear. Possible causes of the differing results for translations and rotations will be discussed.
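The two variance predictions in the abstract can be checked numerically. The sketch below uses invented single-cue variances (the abstract reports no actual threshold values) and verifies that the subtraction rule always inflates variance while the weighted-average rule always reduces it:

```python
# Hypothetical single-cue variances, chosen only for illustration.
var_vis = 4.0   # visual speed-estimate variance (arbitrary units²)
var_vest = 9.0  # vestibular speed-estimate variance

# Object motion: the estimate is a difference of two independent noisy
# estimates, so the component variances add.
var_obj = var_vis + var_vest

# Self motion: the two cues are combined in a reliability-weighted
# average, so the combined variance is below either component variance.
var_self = (var_vis * var_vest) / (var_vis + var_vest)

print(f"object-motion variance: {var_obj:.3f}")
print(f"self-motion variance:   {var_self:.3f}")

# Predicted ordering tested in the experiment: the bimodal object-motion
# variance exceeds both single-cue variances, and the bimodal self-motion
# variance falls below both.
assert var_obj > max(var_vis, var_vest)
assert var_self < min(var_vis, var_vest)
```

Because the ordering follows from the formulas alone, it holds for any positive pair of single-cue variances, which is why the poster can state the prediction without committing to particular threshold values.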