  Multisensory integration of dynamic voices and faces in the monkey brain

Perrodin, C., Kayser, C., Logothetis, N. K., & Petkov, C. I. (2008). Multisensory integration of dynamic voices and faces in the monkey brain. Poster presented at 9th Conference of the Junior Neuroscientists of Tübingen (NeNa 2008), Ellwangen, Germany.

Creators:
Perrodin, C. (1, 2), Author
Kayser, C. (1, 2), Author
Logothetis, N. K. (1), Author
Petkov, C. I. (1), Author
Affiliations:
1. Department Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_1497798
2. Research Group Physiology of Sensory Integration, Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_1497808

Content

Free keywords: -
Abstract: Primates are social animals whose communication is based on their conspecifics' vocalizations and facial expressions. Although much work to date has studied the unimodal representation of vocal or facial information, little is known about how the nervous system processes communication signals from different sensory modalities and combines them into a coherent audiovisual percept. The brains of human and nonhuman primates are thought to evaluate vocal expressions and facial information separately, in specialized 'voice' and 'face' brain regions, but we wondered whether cross-sensory interactions were already evident at the neuronal level in these typically unimodal regions. Using movies of vocalizing humans and monkeys as stimuli, we recorded extracellularly from the auditory cortex of a macaque monkey, targeting his 'voice' region in the right hemisphere. Within a multifactorial design, we evaluated how these auditory neurons responded to different sensory modalities (auditory or visual) or to combinations of modalities (audiovisual). We also analyzed the responses for species-specific effects (human/monkey speaker) and call-type specificity (coo/grunt), as well as for speaker familiarity, size, and identity. Following the approach of the original fMRI study localizing the monkey voice region, our recordings identified a voice-area 'cluster' in this animal. Within this auditory cluster of sites, we observed a significant visual influence on both the local field potential (LFP) and the spiking activity (AMUA), and found that 30 of the sites showed audiovisual interactions in the LFP signals, and 38 in the AMUA. Grunts were especially effective stimuli for this region, and rather than showing a specialization for monkey vocalizations, the region also responded strongly to human vocalizations. Our results provide evidence for visual influences in what has been characterized as an auditory 'voice' area, suggesting that at least the 'voice' regions are influenced by the visual modality. Voices and faces seem to interact already in traditionally unisensory brain areas, rather than cross-sensory information being combined only in higher-level, associative or multisensory regions of the brain.
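
The abstract does not specify how audiovisual interactions were quantified; a common criterion in this literature is (super-)additivity, in which the bimodal response is compared against the sum of the two unimodal responses. The following minimal Python sketch illustrates such a test on synthetic response amplitudes for a single recording site; the condition means, trial counts, and the use of a two-sample t-test are illustrative assumptions, not the authors' method.

import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)

# Synthetic trial-wise response amplitudes for one recording site
# (arbitrary units). All values below are illustrative assumptions.
n_trials = 40
auditory = rng.normal(loc=1.0, scale=0.2, size=n_trials)     # voice alone
visual = rng.normal(loc=0.1, scale=0.2, size=n_trials)       # face alone
audiovisual = rng.normal(loc=1.4, scale=0.2, size=n_trials)  # voice + face

# Additivity test: does the bimodal response differ from the sum of the
# unimodal responses? A significant difference (super- or sub-additive)
# would count as an audiovisual interaction at this site.
additive_prediction = auditory + visual
t_stat, p_value = stats.ttest_ind(audiovisual, additive_prediction)

print(f"AV mean: {audiovisual.mean():.2f}, A+V mean: {additive_prediction.mean():.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Site classified as showing an audiovisual interaction")

In practice such a test would be run per site on the measured LFP and AMUA amplitudes, and the counts of significant sites reported, which is consistent with the per-signal site counts given in the abstract.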

Details

Language(s): -
 Dates: 2008-10
 Publication Status: Issued
 Pages: -
 Publishing info: -
 Table of Contents: -
 Rev. Type: -
 Identifiers: URI: http://www.neuroschool-tuebingen-nena.de/index.php?id=284
BibTex Citekey: PerrodinKLP2008
 Degree: -

Event

Title: 9th Conference of the Junior Neuroscientists of Tübingen (NeNa 2008)
Place of Event: Ellwangen, Germany
Start-/End Date: -
