
Released

Book Chapter

Audiovisual Cross-Modal Correspondences in the General Population

MPG Authors

Parise, CV
Research Group Multisensory Perception and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

External Resources
There are no external resources on record
Full Texts (restricted access)
There are currently no full texts released for your IP range.
Full Texts (freely accessible)
There are no freely accessible full texts available in PuRe
Supplementary Material (freely accessible)
There are no freely accessible supplementary materials available
Citation

Parise, C. (2013). Audiovisual Cross-Modal Correspondences in the General Population. In J. Simner & E. Hubbard (Eds.), Oxford Handbook of Synesthesia (pp. 790-815). Oxford, UK: Oxford University Press.


Citation link: https://hdl.handle.net/11858/00-001M-0000-001A-126F-5
Abstract
For more than a century now, researchers have acknowledged the existence of seemingly arbitrary crossmodal congruency effects between dimensions of sensory stimuli in the general (i.e., non-synesthetic) population. Such phenomena, known by a variety of terms including 'crossmodal correspondences', involve individual stimulus properties, rely on a crossmodal mapping of unisensory features, and appear to be shared by the majority of individuals. In other words, members of the general population share underlying preferences for specific pairings across the senses (e.g., preferring certain shapes to accompany certain sounds). Crossmodal correspondences between complementary sensory cues have often been referred to as synesthetic correspondences but, we would argue, differ from full-blown synesthetic experiences in a number of important ways, including the fact that there are no idiosyncratic concurrent sensations. Recent psychophysical evidence suggests that such crossmodal correspondences can modulate multisensory integration by helping to resolve the crossmodal binding problem. Here, we propose a model to account for the effects of crossmodal correspondences between complementary auditory and visual cues and critically review their relation to full-blown synesthesia.