
Record


Released

Journal Article

Combining Sensory Information: Mandatory Fusion Within, but Not Between, Senses

MPG Authors
http://pubman.mpdl.mpg.de/cone/persons/resource/persons83906

Ernst,  MO
Research Group Multisensory Perception and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons84889

Banks,  MS
Max Planck Institute for Biological Cybernetics, Max Planck Society;

External Resources
No external resources are available
Full texts (publicly accessible)
No publicly accessible full texts are available
Supplementary Material (publicly accessible)
No publicly accessible supplementary materials are available
Citation

Hillis, J., Ernst, M., Banks, M., & Landy, M. (2002). Combining Sensory Information: Mandatory Fusion Within, but Not Between, Senses. Science, 298(5598), 1627-1630. doi:10.1126/science.1075396.


Citation link: http://hdl.handle.net/11858/00-001M-0000-0013-DE44-D
Abstract
Humans use multiple sources of sensory information to estimate environmental properties. For example, the eyes and hands both provide relevant information about an object's shape. The eyes pick up shape information from the object's projected outline, its disparity gradient, texture gradient, shading, and more. The hands supply tactile and haptic shape information (respectively, static and active cues). When multiple cues are available, it would be sensible to combine them in a way that yields a more accurate estimate of the object property in question than any single-cue estimate would. In combining information from multiple sources, however, the nervous system might lose access to single-cue information. Here we report that single-cue information is indeed lost when cues from within the same sensory modality (disparity and texture gradients in vision) are combined, but not when cues from different modalities (vision and haptics) are combined. When one considers the nature of within- and inter-modal information, this difference is perfectly reasonable.
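
The combination rule the abstract alludes to is the standard variance-weighted (maximum-likelihood) model of cue integration, in which each cue's estimate is weighted by its reliability (inverse variance), so the fused estimate has lower variance than either single-cue estimate. A minimal Python sketch of that rule follows; the function name and all numbers are illustrative assumptions, not values from the paper:

import numpy as np

def combine_cues(estimates, variances):
    """Variance-weighted (maximum-likelihood) fusion of independent,
    unbiased cue estimates. Returns the fused estimate and its
    variance, which is lower than any single-cue variance."""
    variances = np.asarray(variances, dtype=float)
    weights = (1.0 / variances) / np.sum(1.0 / variances)
    fused = float(np.dot(weights, estimates))
    fused_var = 1.0 / np.sum(1.0 / variances)
    return fused, fused_var

# Hypothetical example: a disparity cue and a texture cue to surface
# slant (degrees). The numbers are made up for illustration.
s_disp, var_disp = 30.0, 4.0
s_tex, var_tex = 34.0, 9.0
fused, fused_var = combine_cues([s_disp, s_tex], [var_disp, var_tex])
print(fused, fused_var)  # ~31.23, ~2.77 (below min(var_disp, var_tex))

Under this model, mandatory fusion within a modality means observers can report only the fused estimate, while across modalities the single-cue estimates remain separately accessible.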