Abstract:
Humans use multiple sources of sensory information to estimate environmental properties. For example, the eyes and hands both provide relevant information about an object's shape. The eyes pick up shape information from the object's projected outline, its disparity gradient, texture gradient, shading, and more. The hands supply tactile and haptic shape information (static and active cues, respectively). When multiple cues are available, it would be sensible to combine them in a way that yields a more accurate estimate of the object property in question than any single-cue estimate would. By combining information from multiple sources, however, the nervous system might lose access to single-cue information. Here we report that single-cue information is indeed lost when cues from within the same sensory modality (disparity and texture gradients in vision) are combined, but not when cues from different modalities (vision and haptics) are combined. When one considers the nature of within- and inter-modal information, this difference is perfectly reasonable.
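The "sensible combination" the abstract alludes to is commonly modeled as reliability-weighted averaging of independent Gaussian cue estimates, in which each cue is weighted by its inverse variance and the combined estimate is more reliable than either cue alone. The sketch below illustrates that standard model with made-up values; it is not the paper's own analysis or data.

```python
# Reliability-weighted (maximum-likelihood) combination of two
# independent Gaussian cue estimates. Each cue i contributes an
# estimate mu_i with variance var_i; the weights are the normalized
# inverse variances, and the combined variance is smaller than
# either single-cue variance.
def combine_cues(mu1, var1, mu2, var2):
    w1 = (1 / var1) / (1 / var1 + 1 / var2)  # weight on cue 1
    w2 = 1.0 - w1                            # weight on cue 2
    mu = w1 * mu1 + w2 * mu2                 # combined estimate
    var = 1.0 / (1 / var1 + 1 / var2)        # combined variance
    return mu, var

# Hypothetical example: a reliable disparity cue (variance 1.0)
# and a noisier texture cue (variance 4.0).
mu, var = combine_cues(10.0, 1.0, 14.0, 4.0)
print(mu, var)  # -> 10.8 0.8
```

Note that the combined variance (0.8) is below both single-cue variances, which is exactly why combining is advantageous; the cost the abstract describes is that once only the combined estimate is retained, the individual cue values can no longer be recovered from it.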