
Released

Poster

Multisensory integration of dynamic voices and faces in the monkey brain

MPS-Authors
http://pubman.mpdl.mpg.de/cone/persons/resource/persons84132

Perrodin,  C
Department Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Research Group Physiology of Sensory Integration, Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons84006

Kayser,  C
Research Group Physiology of Sensory Integration, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Department Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons84063

Logothetis,  NK
Department Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons84136

Petkov,  CI
Department Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Max Planck Society;

Locator
There are no locators available
Fulltext (public)
There are no public fulltexts available
Supplementary Material (public)
There is no public supplementary material available
Citation

Perrodin, C., Kayser, C., Logothetis, N., & Petkov, C. (2008). Multisensory integration of dynamic voices and faces in the monkey brain. Poster presented at 9th Conference of the Junior Neuroscientists of Tübingen (NeNa 2008), Ellwangen, Germany.


Cite as: http://hdl.handle.net/11858/00-001M-0000-0013-C6DD-9
Abstract
Primates are social animals whose communication is based on their conspecifics' vocalizations and facial expressions. Although much work to date has studied the unimodal representation of vocal or facial information, little is known about how the nervous system processes communication signals from different sensory modalities and combines them into a coherent audiovisual percept. It is thought that the brains of human and nonhuman primates evaluate vocal expressions and facial information separately in specialized 'voice' and 'face' brain regions, but we wondered whether cross-sensory interactions were already evident at the neuronal level in these typically unimodal brain regions. Using movies of vocalizing humans and monkeys as stimuli, we recorded extracellularly from the auditory cortex of a macaque monkey, targeting its 'voice' region in the right hemisphere. Within a multi-factorial design we evaluated how these auditory neurons responded to different sensory modalities (auditory or visual) or combinations of modalities (audiovisual). We also analyzed the responses for species-specific effects (human/monkey speaker), call-type specificity (coo/grunt), as well as speaker familiarity, size and identity. Following the approach in the original fMRI study localizing the monkey voice region, our recordings identified a voice area 'cluster' in this animal. Within this auditory cluster of sites, we observed a significant visual influence on both the local field potential (LFP) and the spiking activity (AMUA), and found that 30 of the sites showed audiovisual interactions in the LFP signals, and 38 in the AMUA. Grunts were especially effective stimuli for this region, and rather than showing a specialization for monkey vocalizations, the region also responded strongly to human vocalizations. Our results provide evidence for visual influences in what has been characterized as an auditory 'voice' area, suggesting that at least the 'voice' regions are influenced by the visual modality. Voices and faces thus seem to interact already in traditionally unisensory brain areas, rather than cross-sensory information being combined only in higher-level, associative or multisensory regions of the brain.
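
The abstract reports audiovisual interactions in the LFP and AMUA signals but does not spell out the statistical criterion used. A common test in multisensory physiology is to ask whether the bimodal (AV) response at a site deviates from the additive prediction of the two unimodal responses (A + V). The Python sketch below illustrates one such test with a permutation procedure on simulated per-trial data; the function name, parameters, and data are illustrative assumptions, not the analysis actually used in this study.

import numpy as np

def av_interaction_test(resp_a, resp_v, resp_av, n_perm=10000, seed=0):
    """Permutation test for a non-additive audiovisual interaction at one site.

    resp_a, resp_v, resp_av: 1-D arrays of per-trial responses (e.g. mean
    LFP amplitude or AMUA rate in a fixed post-stimulus window) for
    auditory-only, visual-only and audiovisual trials.  The test asks
    whether the AV response deviates from the additive prediction A + V.
    """
    rng = np.random.default_rng(seed)
    n = min(len(resp_a), len(resp_v), len(resp_av))
    # "Synthetic AV" trials: sums of randomly paired A-only and V-only
    # trials, i.e. what a purely additive site would produce.
    synthetic = rng.permutation(resp_a)[:n] + rng.permutation(resp_v)[:n]
    real = resp_av[:n]
    observed = real.mean() - synthetic.mean()

    # Null distribution: shuffle real/synthetic labels and recompute the
    # difference in means.
    pooled = np.concatenate([real, synthetic])
    null = np.empty(n_perm)
    for i in range(n_perm):
        perm = rng.permutation(pooled)
        null[i] = perm[:n].mean() - perm[n:].mean()
    p = (np.abs(null) >= abs(observed)).mean()  # two-sided p-value
    return observed, p

# Example with simulated data for a superadditive site.
rng = np.random.default_rng(1)
a = rng.normal(1.0, 0.5, 60)   # auditory-only responses
v = rng.normal(0.2, 0.5, 60)   # visual-only responses
av = rng.normal(1.8, 0.5, 60)  # audiovisual responses, above A + V on average
effect, p = av_interaction_test(a, v, av)
print(f"interaction = {effect:.2f}, p = {p:.4f}")

A site would be counted as showing an audiovisual interaction when p falls below the chosen significance threshold; the same procedure can be run separately on LFP and AMUA measures from each recording site.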