The contribution of the visual scene to disambiguation of optic flow with vestibular signals

Butler, J., MacNeilage, P., Banks, M., & Bülthoff, H. (2007). The contribution of the visual scene to disambiguation of optic flow with vestibular signals. Poster presented at 10th Tübinger Wahrnehmungskonferenz (TWK 2007), Tübingen, Germany.

Creators

Creators:
Butler, J.¹, Author
MacNeilage, P.², Author
Banks, M.², Author
Bülthoff, H. H.¹, Author
Affiliations:
¹ Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_1497797
² Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_1497794

Content

Free keywords: -
Abstract: Optic flow is generated by observer motion relative to stationary objects, by movement of objects relative to a stationary observer, and by combinations of the two. To determine the relative contributions of object motion and self-motion to the observed optic flow, the nervous system can use vestibular signals. An object's speed relative to the earth is given by the difference between its speed relative to the head and the head's speed relative to the earth; the variance of this difference is the sum of the component variances. In contrast, if observers estimate self-motion from optic flow and vestibular signals, and assume a stationary visual scene, the visual and vestibular estimates may be combined in a weighted average to yield more precise self-motion estimates. So depending on whether the subject reports object motion or self-motion, the two-modality variance is predicted to be higher or lower, respectively, than the component variances. To test these predictions and the influence of the visual scene upon them, we measured speed-discrimination thresholds for fore-aft translations. There were two single-modality conditions, Visual and Vestibular, and two multi-modality conditions, Self-motion and Object-motion. In the Visual, Vestibular, and Self-motion conditions, observers indicated whether the movement was faster or slower than a standard. In the Object-motion condition, observers indicated whether the object appeared to move with or against the self-motion. The experiment was run with two visual scenes: random dots and a ground plane with columns. In both scenes, multi-modal object-motion thresholds were, as predicted, higher than single-modality thresholds, and multi-modal self-motion thresholds were, as predicted, generally lower than single-modality thresholds. For the ground-plane scene, the self-motion thresholds could be consistently predicted by the weighted average of the single modalities, which was not the case for the random-dots scene.
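
A minimal sketch of the two variance predictions stated in the abstract, assuming independent visual and vestibular speed estimates with variances \sigma^2_{vis} and \sigma^2_{vest} (these symbols and the weight w are our labels, not notation from the poster):

% Object motion: the vestibular estimate is subtracted from the visual one,
% so the variances add and discrimination is predicted to worsen.
$$\hat{v}_{object} = \hat{v}_{vis} - \hat{v}_{vest}, \qquad \sigma^2_{object} = \sigma^2_{vis} + \sigma^2_{vest}$$

% Self-motion: the two estimates are combined in a reliability-weighted average,
% so the combined variance falls below either single-modality variance.
$$\hat{v}_{self} = w\,\hat{v}_{vis} + (1 - w)\,\hat{v}_{vest}, \qquad w = \frac{\sigma^2_{vest}}{\sigma^2_{vis} + \sigma^2_{vest}}$$
$$\sigma^2_{self} = \frac{\sigma^2_{vis}\,\sigma^2_{vest}}{\sigma^2_{vis} + \sigma^2_{vest}} \le \min\!\left(\sigma^2_{vis}, \sigma^2_{vest}\right)$$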

Details

Language(s): -
Dates: 2007-07
Publication Status: Issued
Pages: -
Publishing info: -
Table of Contents: -
Rev. Type: -
Degree: -

Event

Title: 10th Tübinger Wahrnehmungskonferenz (TWK 2007)
Place of Event: Tübingen, Germany
Start-/End Date: -
