Record

Released

Poster

Movie presentation allows mapping of retinotopy, color, and face-related activity in the anesthetized monkey brain

MPG Authors
http://pubman.mpdl.mpg.de/cone/persons/resource/persons83797

Bartels, A
Department Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons83787

Augath, M
Department Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons84091

Moutoussis, K
Department Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons84063

Logothetis, NK
Department Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Max Planck Society;

External resources
No external resources are available
Full texts (publicly accessible)
No publicly accessible full texts are available
Supplementary material (publicly accessible)
No publicly accessible supplementary material is available
Citation

Bartels, A., Augath, M., Moutoussis, K., Zeki, S., & Logothetis, N. (2005). Movie presentation allows mapping of retinotopy, color, and face-related activity in the anesthetized monkey brain. Poster presented at 35th Annual Meeting of the Society for Neuroscience (Neuroscience 2005), Washington, DC, USA.


Citation link: http://hdl.handle.net/11858/00-001M-0000-0013-D3C5-5
Abstract
In traditional functional magnetic resonance imaging (fMRI), carefully controlled stimuli are used to reveal cortical regions that respond differentially to distinct stimuli. In human fMRI studies we have shown that the varying intensity of features such as faces or color seen in a movie can be used to map feature-selective regions, such as the human V4 complex for color, or superior temporal regions (STS) and lateral fusiform cortex (FFA) for faces (Bartels & Zeki, 2004). Here we applied the same paradigm in the anesthetized monkey to identify regions involved in processing various low- and high-level features. The advantage of this approach is that effects of attention or eye movements can be excluded. We found that the BOLD signal in V1 was best predicted by changes in frame-by-frame pixel intensities (contrast changes), compared to measures of contrast, luminance, or spatial frequency. The BOLD signal in response to contrast changes was specific enough to reveal the retinotopy of V1 and V2 as a function of their spatial location throughout the movie. Color variations correlated most strongly with the BOLD signal in V4 and weakly along the STS. Face-specific responses extended along the STS and partly overlapped with the regions also responsive to color. We conclude that, in monkey as in man, movies - even though uncontrolled - allow surprisingly specific mapping of high- as well as low-level features, down to retinotopy. In addition, regions identified this way may more realistically reflect processing in natural, more complex, and dynamic environments.
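The core analysis described above builds a stimulus regressor from frame-by-frame pixel-intensity changes and correlates it with a BOLD time series. The following is a minimal illustrative sketch of that idea, not the authors' actual pipeline: the movie frames, the BOLD series, and the noise level are all synthetic placeholders, and the real analysis would additionally convolve the regressor with a hemodynamic response function.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical movie: T grayscale frames of H x W pixel intensities.
T, H, W = 200, 32, 32
frames = rng.random((T, H, W))

# Frame-by-frame intensity-change regressor (mean absolute pixel
# difference between consecutive frames), the low-level feature the
# abstract reports as the best predictor of V1 BOLD.
change = np.abs(np.diff(frames, axis=0)).mean(axis=(1, 2))
change = np.concatenate([[0.0], change])      # pad to length T
change = (change - change.mean()) / change.std()

# Toy "BOLD" time series: the regressor plus noise (purely synthetic,
# standing in for a measured voxel time course).
bold = change + 0.5 * rng.standard_normal(T)

# Pearson correlation between regressor and BOLD signal.
r = np.corrcoef(change, bold)[0, 1]
print(f"correlation: {r:.2f}")
```

In the actual study, a map of such correlations across voxels (with an HRF-convolved regressor and proper statistics) is what localizes feature-selective regions.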