
Released

Meeting Abstract

Optimal Integration of Multimodal Information: Conditions and Limits

MPS-Authors

Ernst,  MO
Max Planck Institute for Biological Cybernetics, Max Planck Society;
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

External Resource

http://imrf.mcmaster.ca/2004.html
(Table of contents)

Fulltext (public)
There are no public fulltexts stored in PuRe
Supplementary Material (public)
There is no public supplementary material available
Citation

Ernst, M. (2004). Optimal Integration of Multimodal Information: Conditions and Limits. In 5th International Multisensory Research Forum (IMRF 2004).


Cite as: https://hdl.handle.net/11858/00-001M-0000-0013-D933-A
Abstract
The human brain uses multiple sources of sensory information to construct a representation of the environment. For example, when feeling objects, the eyes and hands simultaneously provide relevant information about an object's size or shape. The visual system estimates shape using binocular disparity, perspective projection, and many other signals. The hands supply haptic shape information by means of tactile and proprioceptive signals. Naturally, no sensory apparatus is perfect. That is, there may be a small measurement error or noise in the transmission of the neural signals. In consequence, signals obtained from the same object property but from different modalities, or from different signals within the same modality, do not necessarily agree. In other words, due to measurement uncertainty, two measurements of the same object property will never give rise to exactly the same sensory estimate. Therefore, the question arises of how the brain handles these potentially discrepant signals. Does it prefer one signal over others and discard information from non-preferred signals? Or does the brain process signals in a more sensible way by finding a reasonable compromise among all the available signals? We addressed these questions in a recent series of experiments by exploring how the brain combines signals about an object's size from the visual and haptic modalities. The potential advantage of combining information across signals is noise reduction, i.e., a decrease in variance in the combined estimate. In theory, estimates with the lowest possible variance are achieved by using the Maximum-Likelihood rule described in [1] to combine the signals. This rule states that the combined estimate is a linear weighted average, with weights that are proportional to the inverse variance of each individual estimate. By conducting a discrimination experiment, we recently confirmed that the brain combines signals in the statistically optimal way [1]. When the relative visual reliability decreases, the visual weight decreases as well, and the combined percept is closer to the haptically specified size. However, combining signals may not only be beneficial; it may also come at a cost: loss of access to single-cue information. In a second study we report that single-cue information is indeed partially lost when cues within the same sensory modality (disparity and texture gradients in vision) are combined, but not when different modalities (vision and haptics) are combined [2]. In the case of vision and touch, we found that subjects had simultaneous access to all three representations: the visual, the haptic, and the combined representation. That is, subjects gain the benefit of combining the signals without losing any single-cue information. This principle may account for the robustness observed when manipulating objects in everyday life. At present, we are investigating how top-down influences and learning mechanisms affect this integration behavior.
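
A minimal sketch of the Maximum-Likelihood combination rule referred to above, assuming visual and haptic size estimates $\hat{S}_V$ and $\hat{S}_H$ with variances $\sigma_V^2$ and $\sigma_H^2$ (this notation is chosen here for illustration and does not appear in the abstract):

$$\hat{S}_{VH} = w_V\,\hat{S}_V + w_H\,\hat{S}_H, \qquad w_V = \frac{1/\sigma_V^2}{1/\sigma_V^2 + 1/\sigma_H^2}, \qquad w_H = 1 - w_V$$

$$\sigma_{VH}^2 = \frac{\sigma_V^2\,\sigma_H^2}{\sigma_V^2 + \sigma_H^2} \le \min\!\left(\sigma_V^2,\,\sigma_H^2\right)$$

Under these assumptions, as the relative visual reliability drops (larger $\sigma_V^2$), the visual weight $w_V$ decreases and the combined percept shifts toward the haptically specified size, consistent with the behavior reported in the abstract.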