Record

  Multisensory integration of dynamic voices and faces in the monkey brain

Perrodin, C., Kayser, C., Logothetis, N., & Petkov, C. (2008). Multisensory integration of dynamic voices and faces in the monkey brain. Poster presented at 9th Conference of the Junior Neuroscientists of Tübingen (NeNa 2008), Ellwangen, Germany.

External references

Creators

Creators:
Perrodin, C.1, 2, Author
Kayser, C.1, 2, Author
Logothetis, N. K.1, Author
Petkov, C. I.1, Author
Affiliations:
1Department Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_1497798              
2Research Group Physiology of Sensory Integration, Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_1497808              

Content

Keywords: -
Abstract: Primates are social animals whose communication is based on their conspecifics' vocalizations and facial expressions. Although much work to date has studied the unimodal representation of vocal or facial information, little is known about how the nervous system processes communication signals from different sensory modalities and combines them into a coherent audiovisual percept. It is thought that the brains of human and nonhuman primates evaluate vocal expressions and facial information separately in specialized 'voice' and 'face' brain regions, but we wondered whether cross-sensory interactions were already evident at the neuronal level in these typically unimodal brain regions. Using movies of vocalizing humans and monkeys as stimuli, we recorded extracellularly from the auditory cortex of a macaque monkey, targeting its 'voice' region in the right hemisphere. Within a multifactorial design, we evaluated how these auditory neurons responded to different sensory modalities (auditory or visual) or combinations of modalities (audiovisual). We also analyzed the responses for species-specific effects (human/monkey speaker) and call-type specificity (coo/grunt), as well as speaker familiarity, size, and identity. Following the approach of the original fMRI study localizing the monkey voice region, our recordings identified a voice-area 'cluster' in this animal. Within this auditory cluster of sites, we observed a significant visual influence on both the local field potential (LFP) and the spiking activity (AMUA), and found that 30 of the sites showed audiovisual interactions in the LFP signals, and 38 in the AMUA. Grunts were especially effective stimuli for this region, and rather than a specialization for monkey vocalizations, human vocalizations also elicited strong responses. Our results provide evidence for visual influences in what has been characterized as an auditory 'voice' area, suggesting that at least the 'voice' regions are influenced by the visual modality. Voices and faces thus seem to interact already in traditionally unisensory brain areas, rather than cross-sensory information being combined only in higher-level, associative or multisensory regions of the brain.
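
Audiovisual interactions of the kind reported in the abstract are commonly quantified by comparing the combined (AV) response against the additive prediction from the unimodal responses (A + V). The short Python sketch below runs such an additivity test on simulated trial data. It is purely illustrative, not the authors' analysis pipeline: every name and number in it (resp_a, resp_v, resp_av, the simulated response values) is a hypothetical placeholder.

import numpy as np

rng = np.random.default_rng(0)

def interaction_index(resp_a, resp_v, resp_av):
    # Relative deviation of the AV response from the additive A + V prediction.
    additive = resp_a.mean() + resp_v.mean()
    return (resp_av.mean() - additive) / abs(additive)

def permutation_test(resp_a, resp_v, resp_av, n_perm=10000):
    # Null distribution: sums of randomly re-paired A and V trials.
    # Two-sided p-value: how often a null mean lies at least as far from the
    # null's center as the observed AV mean does.
    observed = resp_av.mean()
    null = np.empty(n_perm)
    for i in range(n_perm):
        a = rng.choice(resp_a, size=resp_av.size, replace=True)
        v = rng.choice(resp_v, size=resp_av.size, replace=True)
        null[i] = (a + v).mean()
    center = null.mean()
    p = (np.sum(np.abs(null - center) >= abs(observed - center)) + 1) / (n_perm + 1)
    return observed, p

# Simulated trial responses (arbitrary units) for one recording site;
# the AV mean is set above A + V, i.e. a superadditive interaction.
resp_a = rng.normal(1.0, 0.2, 40)   # auditory-only trials
resp_v = rng.normal(0.3, 0.2, 40)   # visual-only trials
resp_av = rng.normal(1.6, 0.2, 40)  # audiovisual trials

print("interaction index: %+.2f" % interaction_index(resp_a, resp_v, resp_av))
obs, p = permutation_test(resp_a, resp_v, resp_av)
print("AV mean = %.2f, p = %.4f" % (obs, p))

Under this criterion, a site would be flagged as showing an audiovisual interaction when its AV response deviates reliably from the additive prediction (small p), which is one common operational test for multisensory integration at a recording site.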

Details

Language(s): -
Date: 2008-10
Publication status: Published
Pages: -
Place, Publisher, Edition: -
Table of contents: -
Type of review: -
Identifiers: URI: http://www.neuroschool-tuebingen-nena.de/index.php?id=284
BibTeX citekey: PerrodinKLP2008
Degree type: -

Event

Title: 9th Conference of the Junior Neuroscientists of Tübingen (NeNa 2008)
Venue: Ellwangen, Germany
Start/End date: -

Decision

Project information

Source