  Bayesian Neural System identification: error bars, receptive fields and neural couplings

Gerwinn, S., Seeger, M., Zeck, G., & Bethge, M. (2006). Bayesian Neural System identification: error bars, receptive fields and neural couplings. Talk presented at 7th Conference of the Junior Neuroscientists of Tuebingen (NeNa 2006). Oberjoch, Germany.

Creators:
Gerwinn, S. (1, 2), Author
Seeger, M. (2), Author
Zeck, G., Author
Bethge, M. (1), Author
Affiliations:
1: Research Group Computational Vision and Neuroscience, Max Planck Institute for Biological Cybernetics, Max Planck Society (ou_1497805)
2: Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society (ou_1497795)

Content

Keywords: -
Abstract: The task of system identification lies at the heart of neural data analysis. Bayesian system identification methods provide a powerful toolbox which allows one to make inferences over stimulus-neuron and neuron-neuron dependencies in a principled way. Rather than reporting only the most likely parameters, the posterior distribution obtained in the Bayesian approach informs us about the range of parameter values that are consistent with the observed data and the assumptions made. In other words, Bayesian receptive fields always come with error bars. Since the amount of data from neural recordings is limited, the error bars are as important as the receptive field itself. Here we apply a recently developed approximation of Bayesian inference to a multi-cell response model consisting of a set of coupled units, each of which is a Linear-Nonlinear-Poisson (LNP) cascade neuron model. The instantaneous firing rate of each unit depends multiplicatively on both the spike-train history of the units and the stimulus. Parameter fitting in this model has been shown to be a convex optimization problem (Paninski 2004) that can be solved efficiently, scaling linearly with the number of events, neurons, and history size. By doing inference in such a model one can estimate excitatory and inhibitory interactions between the neurons as well as the dependence on the stimulus. In addition, the Bayesian framework allows one not only to put error bars on the inferred parameter values but also to quantify the predictive power of the model in terms of the marginal likelihood. As a sanity check of the new technique, and also to explore its limitations, we first verify for artificially generated data that we are able to infer the true underlying model. Then we apply the method to recordings from retinal ganglion cells (RGCs) responding to white-noise (m-sequence) stimulation. The figure shows both the inferred receptive fields (lower) and the confidence range of the sorted pixel values (upper) when using different fractions of the data (0, 10, 50, and 100%). We also compare the results with the receptive fields derived with classical linear correlation analysis and maximum likelihood estimation.
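To illustrate the kind of model described in the abstract, the following Python sketch (not the authors' code) fits one unit of a coupled LNP model with an exponential nonlinearity by MAP estimation and reads off error bars from the curvature of the log posterior (a Laplace approximation, used here as a simple stand-in for the approximate Bayesian inference method referred to above). The design matrix, prior precision, and all names are illustrative assumptions.

    import numpy as np
    from scipy.optimize import minimize

    def neg_log_posterior(w, X, y, prior_prec):
        # X: (T, D) design matrix with stimulus features and the spike-history
        #    terms of all units; y: (T,) spike counts of the unit being fitted;
        #    w: (D,) stacked stimulus kernel and coupling filters (illustrative).
        u = X @ w                         # linear stage
        rate = np.exp(u)                  # exponential nonlinearity: stimulus and
                                          # history act multiplicatively on the rate
        nll = np.sum(rate - y * u)        # Poisson negative log-likelihood (convex in w)
        return nll + 0.5 * prior_prec * np.sum(w ** 2)   # Gaussian prior

    def fit_unit(X, y, prior_prec=1.0):
        D = X.shape[1]
        res = minimize(neg_log_posterior, np.zeros(D),
                       args=(X, y, prior_prec), method="L-BFGS-B")
        w_map = res.x
        # Laplace approximation: posterior covariance ~ inverse Hessian at the MAP,
        # yielding error bars on the receptive field and coupling weights.
        rate = np.exp(X @ w_map)
        hessian = X.T @ (rate[:, None] * X) + prior_prec * np.eye(D)
        posterior_cov = np.linalg.inv(hessian)
        return w_map, np.sqrt(np.diag(posterior_cov))

With the exponential nonlinearity the negative log posterior is convex, so the MAP estimate found this way is unique, mirroring the convexity result (Paninski 2004) cited in the abstract.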

Details

Language(s):
Date: 2006-11
Publication status: Published
Pages: -
Place, Publisher, Edition: -
Table of contents: -
Type of review: -
Identifiers: URI: http://www.neuroschool-tuebingen-nena.de/index.php?id=284
BibTeX citekey: GerwinnSZB2006
Degree type: -

Event

Title: 7th Conference of the Junior Neuroscientists of Tuebingen (NeNa 2006)
Venue: Oberjoch, Germany
Start/End date: -
