
Released

Meeting Abstract

There can be only one! Integrating vision and touch at different egocentric locations

MPG Authors
/persons/resource/persons83960

Helbig,  HB
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

/persons/resource/persons83906

Ernst,  MO
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

Full texts (restricted access)
No full texts are currently released for your IP range.
Full texts (freely accessible)
There are no freely accessible full texts available in PuRe
Supplementary material (freely accessible)
There are no freely accessible supplementary materials available
Citation

Helbig, H., & Ernst, M. (2006). There can be only one! Integrating vision and touch at different egocentric locations. In 7th International Multisensory Research Forum (IMRF 2006).


Citation link: https://hdl.handle.net/11858/00-001M-0000-0013-D1B1-0
Abstract
Ernst and Banks (2002) showed that humans integrate visual and haptic signals in a statistically optimal fashion. Integration seems to break down if there is a spatial discrepancy between the signals (Gepshtein et al., 2005).
Does knowledge that two signals belong to the same object facilitate integration even when they are presented at discrepant locations?
In our experiment, participants had to judge the shape of visual-haptic objects. In one condition, visual and haptic object information was presented at the same location, whereas in the other condition there was a spatial offset between the two information sources; however, subjects knew that the signals belonged together. In both conditions, we introduced a slight conflict between the visually and haptically perceived shape and asked participants to report the felt (seen) shape. If integration breaks down due to the spatial discrepancy, we expect subjects' percept to be less biased by visual (haptic) information.
We found that in both conditions the shape percept fell between the haptically and visually specified shapes and did not differ significantly between conditions. This finding suggests that multimodal signals are combined if observers have reason to assume that they belong to the same event, even when there is a spatial discrepancy.
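The statistically optimal integration the abstract refers to is commonly modeled as inverse-variance (reliability-weighted) averaging of the single-cue estimates, the maximum-likelihood scheme of Ernst and Banks (2002). A minimal sketch of that weighting rule, assuming Gaussian, independent cue noise; the function name and the numbers in the example are illustrative, not taken from the abstract:

```python
def integrate_cues(s_v, var_v, s_h, var_h):
    """Minimum-variance combination of a visual estimate s_v (variance var_v)
    and a haptic estimate s_h (variance var_h), assuming independent
    Gaussian noise on each cue."""
    # Each cue is weighted by its relative reliability (inverse variance).
    w_v = (1.0 / var_v) / (1.0 / var_v + 1.0 / var_h)
    w_h = 1.0 - w_v
    s_combined = w_v * s_v + w_h * s_h
    # The combined variance is lower than either single-cue variance.
    var_combined = (var_v * var_h) / (var_v + var_h)
    return s_combined, var_combined

# Hypothetical example: vision is twice as reliable as touch, so the
# combined shape estimate lies between the two cues, closer to vision.
s, var = integrate_cues(s_v=10.0, var_v=1.0, s_h=13.0, var_h=2.0)
```

Under this model, a percept "in-between the haptically and visually specified shapes", as reported above, is exactly what weighted averaging predicts when both cues retain nonzero weight.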