
Released

Journal Article

Trimodal integration of visual, tactile and auditory signals for the perception of sequences of events

MPG Authors
/persons/resource/persons83831

Bresciani, J-P
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

/persons/resource/persons83874

Dammeier, F
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Research Group Multisensory Perception and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

/persons/resource/persons83906

Ernst, MO
Research Group Multisensory Perception and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

External Resources
No external resources have been provided
Full texts (restricted access)
No full texts are currently released for your IP range.
Full texts (freely accessible)
No freely accessible full texts are available in PuRe
Supplementary material (freely accessible)
No freely accessible supplementary materials are available
Citation

Bresciani, J.-P., Dammeier, F., & Ernst, M. (2008). Trimodal integration of visual, tactile and auditory signals for the perception of sequences of events. Brain Research Bulletin, 75(6), 753-760. Retrieved from http://dx.doi.org/10.1016/j.brainresbull.2008.01.009.


Citation link: https://hdl.handle.net/11858/00-001M-0000-0013-CA73-2
Abstract
We investigated the interactions between visual, tactile and auditory sensory signals for the perception of sequences of events. Sequences of flashes, taps and beeps were presented simultaneously. For each session, subjects were instructed to count the number of events presented in one modality (Target) and to ignore the stimuli presented in the other modalities (Background). The number of events presented in the background sequence could differ from the number of events in the target sequence. For each session, we quantified the Background-evoked bias by comparing subjects’ responses with and without Background (Target presented alone). Nine combinations of vision, touch and audition were tested. In all but two sessions, the Background significantly biased the Target. Vision was the most susceptible to Background-evoked bias and the least efficient in biasing the other two modalities. By contrast, audition was the least susceptible to Background-evoked bias and the most efficient in biasing the other two modalities. These differences were strongly correlated with the relative reliability of each modality. In line with this, the evoked biases were larger when the Background consisted of two modalities instead of only one. These results show that for the perception of sequences of events: (1) vision, touch and audition are automatically integrated; (2) the respective contributions of the three modalities to the integrated percept differ; (3) the relative contribution of each modality depends on its relative reliability (1/variability); (4) task-irrelevant stimuli carry more weight when presented in two rather than only one modality.