Record

Released

Poster

Optimal integration of spatiotemporal information across vision and touch

MPG Authors
http://pubman.mpdl.mpg.de/cone/persons/resource/persons83960

Helbig,  HB
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Research Group Multisensory Perception and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons83906

Ernst,  M
Research Group Multisensory Perception and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

External Resources
No external resources are available
Full texts (freely accessible)
No freely accessible full texts are available
Supplementary material (freely accessible)
No freely accessible supplementary materials are available
Citation

Helbig, H., & Ernst, M. (2007). Optimal integration of spatiotemporal information across vision and touch. Poster presented at 8th International Multisensory Research Forum (IMRF 2007), Sydney, Australia.


Citation link: http://hdl.handle.net/11858/00-001M-0000-0013-CCF9-6
Abstract
The brain integrates spatial (e.g., size, location) as well as temporal (e.g., event perception) information across different sensory modalities (e.g., Ernst & Banks, 2002; Bresciani et al., 2006) in a statistically optimal manner to come up with the most reliable percept. That is, the variance (just-noticeable difference, JND) of the multisensory perceptual estimate is maximally reduced. Here we asked whether this also holds for spatiotemporal information encoded by different sensory modalities. To study this question, we visually presented observers with a dot moving along a line. In the haptic condition, the observer's finger was passively moved along the line using a robot device. Observers had to discriminate the length of two lines presented in a 2-IFC task either visually alone, haptically alone, or bimodally. To judge the length of a line, spatial information (position of the moving dot or finger) had to be accumulated over time. The bimodal discrimination performance (JND) was significantly improved relative to the performance in the unimodal tasks and did not differ from the predictions of an optimal integration model. This result indicates that observers adopt an optimal integration strategy to integrate spatial information accumulated over time.
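The optimal integration model referenced in the abstract (Ernst & Banks, 2002) predicts the bimodal variance from the unimodal variances: each cue is weighted inversely to its variance, giving sigma_vh^2 = sigma_v^2 * sigma_h^2 / (sigma_v^2 + sigma_h^2), which is always at most the smaller unimodal variance. A minimal sketch of that prediction, with the example JND values chosen purely for illustration:

```python
import math

def optimal_bimodal_jnd(jnd_visual: float, jnd_haptic: float) -> float:
    """Predicted bimodal JND under maximum-likelihood integration.

    Variances (JND^2) combine as a parallel sum:
        sigma_vh^2 = sigma_v^2 * sigma_h^2 / (sigma_v^2 + sigma_h^2)
    """
    var_v = jnd_visual ** 2
    var_h = jnd_haptic ** 2
    return math.sqrt(var_v * var_h / (var_v + var_h))

def cue_weights(jnd_visual: float, jnd_haptic: float) -> tuple[float, float]:
    """Reliability-based weights: a cue's weight is the other cue's
    variance divided by the total variance (weights sum to 1)."""
    var_v = jnd_visual ** 2
    var_h = jnd_haptic ** 2
    total = var_v + var_h
    return var_h / total, var_v / total

# Illustrative values (not from the study): visual JND 3.0, haptic JND 4.0.
predicted = optimal_bimodal_jnd(3.0, 4.0)
w_v, w_h = cue_weights(3.0, 4.0)
print(f"predicted bimodal JND: {predicted:.2f}")  # 2.40, below both unimodal JNDs
print(f"visual weight: {w_v:.2f}, haptic weight: {w_h:.2f}")
```

The reported result is that the empirically measured bimodal JND did not differ from this model's prediction, i.e., observers combined the visually and haptically accumulated length estimates near-optimally.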