
Released

Poster

Weighting or selecting sensory inputs when memorizing body-turns: what is actually being stored?

MPS-Authors

Vidal, M
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

Berger, D
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

Bülthoff, HH
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

Fulltext (public)

pdf3491.pdf
(Any fulltext), 2MB

Supplementary Material (public)
There is no public supplementary material available
Citation

Vidal, M., Berger, D., & Bülthoff, H. (2005). Weighting or selecting sensory inputs when memorizing body-turns: what is actually being stored? Poster presented at 6th International Multisensory Research Forum (IMRF 2005), Rovereto, Italy.


Cite as: https://hdl.handle.net/11858/00-001M-0000-0013-D571-D
Abstract
Many previous studies have focused on how humans integrate inputs from different modalities that bear on the same physical property. Some claim that these inputs are merged into a single amodal percept; others propose that we select the most relevant sensory input.
We designed an experiment to study whether the senses are selected or merged, and to investigate what is actually stored and recalled in a reproduction task. Participants experienced passive whole-body yaw rotations paired with a rotation of the visual scene (a limited-lifetime star field) turning 1.5 times faster. They then actively reproduced the same rotation in the opposite direction, with body, visual, or both cues available.
When the reproduction gain matched the presentation gain, reproduced angles with both cues were smaller than with visual cues alone, larger than with body cues alone, and more precise. This suggests that turns are stored independently in each modality (vision and body), and that the fused estimate lies in between with higher reliability, providing evidence for near-optimal integration. Modifying the reproduction gain changed the body-based reproduced rotation more than the visual one, which indicates a visual dominance when a matching problem is introduced.
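The "near-optimal integration" result refers to the standard maximum-likelihood cue-combination model, in which each unimodal estimate is weighted by its reliability (inverse variance). A minimal sketch of that model follows; it is not the authors' analysis, and the numeric values are hypothetical, chosen only to show that the fused estimate lies between the unimodal estimates and is more reliable than either.

```python
def fuse(est_a, var_a, est_b, var_b):
    """Reliability-weighted (maximum-likelihood) fusion of two
    independent cues. Weights are proportional to inverse variance."""
    w_a = (1 / var_a) / (1 / var_a + 1 / var_b)
    w_b = 1 - w_a
    fused_est = w_a * est_a + w_b * est_b
    # The fused variance is always smaller than either input variance.
    fused_var = 1 / (1 / var_a + 1 / var_b)
    return fused_est, fused_var

# Hypothetical numbers: a visual turn estimate of 90 deg (variance 25)
# and a body estimate of 60 deg (variance 100).
est, var = fuse(90.0, 25.0, 60.0, 100.0)
print(est, var)  # -> 84.0 20.0: between the cues, variance below both
```

Under this model, the more reliable visual cue dominates the fused estimate, which is consistent with the pattern of combined-cue responses falling between the two unimodal responses with improved precision.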