
Released

Journal Article

Brain responses to auditory and visual stimulus offset: Shared representations of temporal edges

MPS-Authors
http://pubman.mpdl.mpg.de/cone/persons/resource/persons84187

Lehmann C, Esposito F, di Salle F, Federspiel A, Bach DR, Scheffler K
Department High-Field Magnetic Resonance, Max Planck Institute for Biological Cybernetics, Max Planck Society;

Locator
There are no locators available
Fulltext (public)
There are no public fulltexts available
Supplementary Material (public)
There is no public supplementary material available
Citation

Herdener, M., Lehmann, C., Esposito, F., di Salle, F., Federspiel, A., Bach, D. R., Scheffler, K., & Seifritz, E. (2009). Brain responses to auditory and visual stimulus offset: Shared representations of temporal edges. Human Brain Mapping, 30(3), 725-733. doi:10.1002/hbm.20539


Cite as: http://hdl.handle.net/11858/00-001M-0000-0013-C587-F
Abstract
Edges are crucial for the formation of coherent objects from sequential sensory inputs within a single modality. Moreover, temporally coincident boundaries of perceptual objects across different sensory modalities facilitate crossmodal integration. Here, we used functional magnetic resonance imaging (fMRI) to examine the neural basis of temporal edge detection across modalities. Onsets of sensory inputs are related not only to the detection of an edge but also to the processing of novel sensory inputs. Thus, we used transitions from input to rest (offsets) as convenient stimuli for studying the neural underpinnings of visual and acoustic edge detection per se. We found, besides modality-specific patterns, shared visual and auditory offset-related activity in the superior temporal sulcus and insula of the right hemisphere. Our data suggest that right-hemispheric regions known to be involved in multisensory processing are crucial for the detection of edges in the temporal domain across both visual and auditory modalities. This operation is likely to facilitate crossmodal object feature binding based on temporal coincidence.