
Released

Poster

The contribution of the visual scene to disambiguation of optic flow with vestibular signals

MPG Authors

Butler, J
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;


Bülthoff, HH
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

Full texts (freely accessible)

TWK-2007-Butler.pdf
(any full text), 8MB

Supplementary material (freely accessible)
No freely accessible supplementary materials are available
Citation

Butler, J., MacNeilage, P., Banks, M., & Bülthoff, H. (2007). The contribution of the visual scene to disambiguation of optic flow with vestibular signals. Poster presented at 10th Tübinger Wahrnehmungskonferenz (TWK 2007), Tübingen, Germany.


Citation link: https://hdl.handle.net/11858/00-001M-0000-0013-CD1D-E
Abstract
Optic flow is generated by observer motion relative to stationary objects, by movement of objects relative to a stationary observer, and by combinations of the two. To determine the relative contributions of object motion and self-motion to the observed optic flow, the nervous system can use vestibular signals. An object's speed relative to the earth is given by the difference between its speed relative to the head and the head's speed relative to the earth; the variance of that difference is the sum of the component variances. In contrast, if observers estimate self-motion from optic flow and vestibular signals, and assume a stationary visual scene, the visual and vestibular estimates may be combined in a weighted average to yield a more precise self-motion estimate. Thus, depending on whether the subject reports object motion or self-motion, the two-modality variance is predicted to be higher or lower, respectively, than the component variances. To test these predictions and the influence of the visual scene upon them, we measured speed-discrimination thresholds for fore-aft translations. There were two single-modality conditions, Visual and Vestibular, and two multi-modality conditions, Self-motion and Object-motion. In the Visual, Vestibular, and Self-motion conditions, observers indicated whether the movement was faster or slower than a standard. In the Object-motion condition, observers indicated whether the object appeared to move with or against the self-motion. The experiment was run with two visual scenes: random dots, and a ground plane with columns. In both scenes, multi-modal object-motion thresholds were, as predicted, higher than single-modality thresholds, and multi-modal self-motion thresholds were, as predicted, generally lower than single-modality thresholds. For the ground-plane scene, the self-motion thresholds were consistently predicted by the weighted average of the single modalities, which was not the case for the random-dots condition.
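
As a minimal sketch of the variance predictions described above, assuming independent, unbiased Gaussian visual and vestibular speed estimates (the symbols here are illustrative, not taken from the poster):

Object motion is recovered as the difference of the two estimates, so the variances add:

\sigma^2_{\mathrm{object}} = \sigma^2_{\mathrm{vis}} + \sigma^2_{\mathrm{vest}}

Self-motion under a stationary-scene assumption is the reliability-weighted average (the standard maximum-likelihood cue-combination rule):

\hat{S}_{\mathrm{self}} = w_{\mathrm{vis}}\,\hat{S}_{\mathrm{vis}} + w_{\mathrm{vest}}\,\hat{S}_{\mathrm{vest}}, \qquad w_i = \frac{1/\sigma_i^2}{1/\sigma_{\mathrm{vis}}^2 + 1/\sigma_{\mathrm{vest}}^2}

\sigma^2_{\mathrm{self}} = \frac{\sigma^2_{\mathrm{vis}}\,\sigma^2_{\mathrm{vest}}}{\sigma^2_{\mathrm{vis}} + \sigma^2_{\mathrm{vest}}} \le \min\!\left(\sigma^2_{\mathrm{vis}},\, \sigma^2_{\mathrm{vest}}\right)

Under these assumptions the combined object-motion threshold is predicted to exceed either single-modality threshold, while the combined self-motion threshold is predicted to fall at or below both, matching the pattern reported in the abstract.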