Record


Released

Talk

There can be only one! Integrating vision and touch at different egocentric locations

MPG Authors
http://pubman.mpdl.mpg.de/cone/persons/resource/persons83960

Helbig, HB
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Research Group Multisensory Perception and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons83906

Ernst, MO
Research Group Multisensory Perception and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

External Resources
There are no external resources available
Full texts (freely accessible)
There are no freely accessible full texts available
Supplementary material (freely accessible)
There are no freely accessible supplementary materials available
Citation

Helbig, H., & Ernst, M. (2006). There can be only one! Integrating vision and touch at different egocentric locations. Talk presented at 7th International Multisensory Research Forum (IMRF 2006). Dublin, Ireland.


Citation link: http://hdl.handle.net/11858/00-001M-0000-0013-D1B1-0
Abstract
Ernst and Banks (2002) showed that humans integrate visual and haptic signals in a statistically optimal fashion. Integration seems to break down if there is a spatial discrepancy between the signals (Gepshtein et al., 2005). Does knowledge that two signals belong to the same object facilitate integration even when they are presented at discrepant locations? In our experiment, participants had to judge the shape of visual-haptic objects. In one condition, visual and haptic object information was presented at the same location, whereas in the other condition there was a spatial offset between the two information sources; however, subjects knew that the signals belonged together. In both conditions, we introduced a slight conflict between the visually and haptically perceived shape and asked participants to report the felt (seen) shape. If integration breaks down due to the spatial discrepancy, we expect subjects’ percept to be less biased by visual (haptic) information. We found that in both conditions the shape percept was in-between the haptically and visually specified shapes and did not differ significantly between conditions. This finding suggests that multimodal signals are combined if observers have reason to assume that they belong to the same event, even when there is a spatial discrepancy.
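The "statistically optimal" integration the abstract refers to is commonly modeled as maximum-likelihood cue combination: each cue is weighted by its reliability (inverse variance), and the combined estimate lies between the single-cue estimates. The sketch below illustrates that model; the numeric values are illustrative assumptions, not data from this study.

```python
def integrate(est_v, var_v, est_h, var_h):
    """Combine a visual and a haptic estimate by weighting each cue
    with its relative reliability (inverse variance), as in the
    maximum-likelihood integration model of Ernst & Banks (2002)."""
    w_v = (1 / var_v) / (1 / var_v + 1 / var_h)  # visual weight
    w_h = 1 - w_v                                # haptic weight
    combined = w_v * est_v + w_h * est_h
    # Variance of the combined estimate is lower than either cue alone
    combined_var = 1 / (1 / var_v + 1 / var_h)
    return combined, combined_var

# Hypothetical slight visual-haptic conflict about object shape (e.g. in mm):
shape, variance = integrate(est_v=55.0, var_v=1.0, est_h=53.0, var_h=4.0)
print(shape)     # ~54.6: percept falls between the cues, closer to the more reliable one
print(variance)  # ~0.8: more reliable than either single cue
```

Under this model, a percept that sits between the haptic and visual shapes in both conditions (as reported in the abstract) is the signature of continued integration despite the spatial offset.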