
Released

Meeting Abstract

Auditory and audiovisual specificity for processing communication signals in the superior temporal lobe

MPS-Authors
/persons/resource/persons84132

Perrodin,  C
Department Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

/persons/resource/persons84136

Petkov,  CI
Department Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

/persons/resource/persons84063

Logothetis,  Nikos K
Department Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

/persons/resource/persons84006

Kayser,  Christoph
Research Group Physiology of Sensory Integration, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

Citation

Perrodin, C., Petkov, C. I., Logothetis, N. K., & Kayser, C. (2014). Auditory and audiovisual specificity for processing communication signals in the superior temporal lobe. In 15th International Multisensory Research Forum (IMRF 2014) (p. 28).


Cite as: https://hdl.handle.net/21.11116/0000-0001-33C9-3
Abstract
Effective social interactions can depend on the receiver combining vocal and facial content into a coherent audiovisual representation of communication signals. Neuroimaging studies have identified face- or voice-sensitive areas in the primate temporal lobe, some of which have been proposed as candidate regions for face-voice integration. So far, however, neurons in these areas have been studied primarily in their respective sensory modality. Moreover, these higher-level sensory areas are typically not prominent in current models of multisensory processing, unlike early sensory and association cortices. It has thus remained unclear how audiovisual influences arise at the neuronal level within such regions, especially in comparison with classically defined multisensory regions in temporal association cortex. Here I will present data exploring auditory (voice) and visual (face) influences on neuronal responses to vocalizations, obtained using extracellular recordings targeting a voice-sensitive region of the anterior supratemporal plane and the neighboring superior temporal sulcus (STS) in awake rhesus macaques. Our findings suggest that within the superior temporal lobe, neurons in voice-sensitive cortex specialize in the auditory analysis of vocal features, whereas congruency-sensitive visual influences emerge to a greater extent in STS neurons. These results help clarify the audiovisual representation of communication signals at two stages of the sensory pathway in primate superior temporal regions, and are consistent with reversed gradients of functional specificity in unisensory vs. multisensory processing along their respective hierarchies.