Released

Poster

Optimal integration of spatiotemporal information across vision and touch

MPS-Authors
http://pubman.mpdl.mpg.de/cone/persons/resource/persons83960

Helbig,  HB
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Research Group Multisensory Perception and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons83906

Ernst,  M
Research Group Multisensory Perception and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

Citation

Helbig, H., & Ernst, M. (2007). Optimal integration of spatiotemporal information across vision and touch. Poster presented at 8th International Multisensory Research Forum (IMRF 2007), Sydney, Australia.


Cite as: http://hdl.handle.net/11858/00-001M-0000-0013-CCF9-6
Abstract
The brain integrates spatial (e.g., size, location) as well as temporal (e.g., event perception) information across different sensory modalities (e.g., Ernst & Banks, 2002; Bresciani et al., 2006) in a statistically optimal manner to arrive at the most reliable percept. That is, the variance (just-noticeable difference, JND) of the multisensory perceptual estimate is maximally reduced. Here we asked whether this also holds for spatiotemporal information encoded by different sensory modalities. To study this question, we visually presented observers with a dot moving along a line. In the haptic condition, the observer's finger was passively moved along the line by a robotic device. Observers had to discriminate the lengths of two lines presented in a 2-IFC task either visually alone, haptically alone, or bimodally. To judge the length of a line, spatial information (the position of the moving dot or finger) had to be accumulated over time. Bimodal discrimination performance (JND) was significantly better than performance in the unimodal tasks and did not differ from the prediction of an optimal integration model. This result indicates that observers adopt an optimal integration strategy for spatial information accumulated over time.
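The "optimal integration model" referenced in the abstract is the standard maximum-likelihood cue-combination scheme (as in Ernst & Banks, 2002): each modality's estimate is weighted by its reliability (inverse variance), so the bimodal variance is the reciprocal sum of the unimodal variances and the predicted bimodal JND is always at or below the better unimodal JND. A minimal sketch of that prediction — the JND values below are hypothetical, not the experiment's data:

```python
import math

def optimal_bimodal_jnd(jnd_v: float, jnd_h: float) -> float:
    """Predicted bimodal JND under maximum-likelihood integration.

    Variances combine reciprocally:
        sigma_VH^2 = sigma_V^2 * sigma_H^2 / (sigma_V^2 + sigma_H^2)
    and the JND is proportional to sigma.
    """
    return math.sqrt((jnd_v**2 * jnd_h**2) / (jnd_v**2 + jnd_h**2))

def cue_weights(jnd_v: float, jnd_h: float) -> tuple[float, float]:
    """Reliability-based weights (reliability = 1 / variance)."""
    rel_v, rel_h = 1.0 / jnd_v**2, 1.0 / jnd_h**2
    return rel_v / (rel_v + rel_h), rel_h / (rel_v + rel_h)

# Hypothetical unimodal length-discrimination JNDs (arbitrary units):
jnd_vision, jnd_haptic = 1.0, 2.0

# The predicted bimodal JND is smaller than either unimodal JND,
# and the more reliable modality (vision here) gets the larger weight.
print(optimal_bimodal_jnd(jnd_vision, jnd_haptic))
print(cue_weights(jnd_vision, jnd_haptic))
```

Comparing the measured bimodal JND against this prediction is the test of optimality: matching it means observers weight the visual and haptic length estimates by their reliabilities rather than relying on one modality alone.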