
Record


Released

Conference Paper

Neural system identification for large populations separating "what" and "where"

MPG Authors

Bethge,  M
Max Planck Institute for Biological Cybernetics, Max Planck Society;
Research Group Computational Vision and Neuroscience, Max Planck Institute for Biological Cybernetics, Max Planck Society;

External Resources

Link
(any full text)

Full texts (restricted access)
No full texts are currently released for your IP range.
Full texts (freely accessible)
No freely accessible full texts are available in PuRe.
Supplementary material (freely accessible)
No freely accessible supplementary materials are available.
Citation

Klindt, D., Ecker, A., Euler, T., & Bethge, M. (2018). Neural system identification for large populations separating "what" and "where". In I. Guyon, U. von Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, et al. (Eds.), Advances in Neural Information Processing Systems 30 (pp. 3507-3517). Red Hook, NY, USA: Curran.


Citation link: https://hdl.handle.net/21.11116/0000-0000-C369-E
Abstract
Neuroscientists classify neurons into different types that perform similar computations at different locations in the visual field. Traditional neural system identification methods do not capitalize on this separation of "what" and "where". Learning deep convolutional feature spaces shared among many neurons provides an exciting path forward, but the architectural design needs to account for data limitations: While new experimental techniques enable recordings from thousands of neurons, experimental time is limited so that one can sample only a small fraction of each neuron's response space. Here, we show that a major bottleneck for fitting convolutional neural networks (CNNs) to neural data is the estimation of the individual receptive field locations -- a problem that has only been scratched at the surface thus far. We propose a CNN architecture with a sparse pooling layer factorizing the spatial (where) and feature (what) dimensions. Our network scales well to thousands of neurons and short recordings and can be trained end-to-end. We evaluate this architecture on ground-truth data to explore the challenges and limitations of CNN-based system identification. Moreover, we show that our network model outperforms the current state-of-the-art system identification model of mouse primary visual cortex on a publicly available dataset.
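The factorized readout described in the abstract can be illustrated with a small sketch: a shared convolutional feature space produces K feature maps, and each neuron's predicted response is a product of a per-neuron spatial mask ("where") and a per-neuron feature-weight vector ("what") applied to those maps. This is a minimal NumPy illustration under assumed shapes and variable names (`F`, `where`, `what` are illustrative, not the authors' implementation; the paper additionally imposes sparsity and positivity constraints not shown here).

```python
import numpy as np

rng = np.random.default_rng(0)
K, H, W = 8, 16, 16   # feature channels and spatial grid of the shared CNN output
N = 5                 # number of recorded neurons

F = rng.standard_normal((K, H, W))       # shared convolutional feature maps
where = rng.standard_normal((N, H, W))   # per-neuron spatial mask (sparse in the paper)
what = rng.standard_normal((N, K))       # per-neuron feature weights

# Factorized readout: r_n = sum_{k,x,y} what[n,k] * where[n,x,y] * F[k,x,y]
responses = np.einsum('nk,nxy,kxy->n', what, where, F)
print(responses.shape)  # one scalar response per neuron: (5,)
```

The factorization replaces a full readout tensor of size N x K x H x W with N x (K + H x W) parameters, which is what lets the model scale to thousands of neurons from short recordings.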