Record


Released

Poster

Automatic integration of visual, tactile and auditory signals for the perception of sequences of events

MPG Authors
http://pubman.mpdl.mpg.de/cone/persons/resource/persons83831

Bresciani,  J-P
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons83874

Dammeier,  F
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Research Group Multisensory Perception and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons83906

Ernst,  MO
Research Group Multisensory Perception and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

External resources
There are no external resources available
Full texts (freely accessible)
There are no freely accessible full texts available
Supplementary material (freely accessible)
There are no freely accessible supplementary materials available
Citation

Bresciani, J.-P., Dammeier, F., & Ernst, M. (2006). Automatic integration of visual, tactile and auditory signals for the perception of sequences of events. Poster presented at 29th European Conference on Visual Perception, St. Petersburg, Russia.


Citation link: http://hdl.handle.net/11858/00-001M-0000-0013-D0A1-B
Abstract
Sequences of visual flashes, tactile taps, and auditory beeps were presented simultaneously. For each session, subjects were instructed to count the number of events presented in one modality (focal modality) and to ignore the other modalities (background). The number of events presented in the background modality(ies) could differ from the number of events in the focal modality. The experiment consisted of nine different sessions, covering all nine combinations of visual, tactile, and auditory signals. In each session, the perceived number of events in the focal modality was significantly influenced by the background signal(s). The visual modality, which had the largest intrinsic variance (focal modality presented alone), was the most susceptible to background-evoked bias and the least efficient in biasing the other two modalities. Conversely, the auditory modality, which had the smallest intrinsic variance, was the least susceptible to background-evoked bias and the most efficient in biasing the other two modalities. These results show that visual, tactile, and auditory sensory signals tend to be automatically integrated for the perception of sequences of events. They also suggest that the relative weight of each sensory signal in the integration process depends on its intrinsic relative reliability.
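
The final sentence's claim that each signal's weight depends on its relative reliability matches the standard maximum-likelihood cue-combination model, in which each modality's estimate is weighted by its inverse variance. The abstract does not spell this model out, so the following is a minimal Python sketch under that assumption; the variance values are purely illustrative, not data from the poster.

# Minimal sketch of reliability-weighted (maximum-likelihood) cue combination.
# Assumption: each signal's weight is proportional to its inverse variance
# (its reliability). Numbers below are illustrative, not data from the poster.

def combine_estimates(estimates, variances):
    """Return the reliability-weighted combined estimate and its variance."""
    reliabilities = [1.0 / v for v in variances]
    total = sum(reliabilities)
    weights = [r / total for r in reliabilities]
    combined = sum(w * e for w, e in zip(weights, estimates))
    combined_variance = 1.0 / total
    return combined, combined_variance

# Example: a low-variance auditory count (4 beeps) dominates a high-variance
# visual count (5 flashes), so the combined estimate is pulled toward 4.
est, var = combine_estimates(estimates=[4.0, 5.0], variances=[0.2, 1.0])
print(est, var)  # approx. 4.17, 0.17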