Released

Poster

Bayesian Neural System identification: error bars, receptive fields and neural couplings

MPS-Authors
Gerwinn, S
Research Group Computational Vision and Neuroscience, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

Seeger, M
Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

Bethge, M
Research Group Computational Vision and Neuroscience, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

Fulltext (public)

Göttingen-2007-Gerwinn.pdf
(Any fulltext), 35KB

Supplementary Material (public)
There is no public supplementary material available
Citation

Gerwinn, S., Seeger, M., Zeck, G., & Bethge, M. (2007). Bayesian Neural System identification: error bars, receptive fields and neural couplings. Poster presented at 7th Meeting of the German Neuroscience Society, 31st Göttingen Neurobiology Conference, Göttingen, Germany.


Cite as: https://hdl.handle.net/11858/00-001M-0000-0013-CE2F-B
Abstract
The task of system identification lies at the heart of neural data analysis. Bayesian system identification methods provide a powerful toolbox which allows one to make inferences about stimulus-neuron and neuron-neuron dependencies in a principled way. Rather than reporting only the most likely parameters, the posterior distribution obtained in the Bayesian approach informs us about the range of parameter values that are consistent with the observed data and the assumptions made. In other words, Bayesian receptive fields always come with error bars. Since the amount of data from neural recordings is limited, these error bars are as important as the receptive field itself.
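
Schematically, writing w for the receptive-field and coupling parameters and D for the recorded spike data (notation introduced here for illustration, not taken from the poster), the Bayesian approach can be summarized as

p(w \mid D) \propto p(D \mid w)\, p(w), \qquad \hat{w} = \mathbb{E}[w \mid D], \qquad \Delta w_k \approx \sqrt{\operatorname{Var}[w_k \mid D]},

so that each inferred coefficient is reported together with a measure of its posterior spread rather than as a bare point estimate.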
Here we apply a recently developed approximation of Bayesian inference to a multi-cell response model consisting of a set of coupled units, each of which is a Linear-Nonlinear-Poisson (LNP) cascade neuron model. The instantaneous firing rate of each unit depends multiplicatively on both the spike-train history of the units and the stimulus. Parameter fitting in this model has been shown to be a convex optimization problem (Paninski 2004) that can be solved efficiently, scaling linearly in the number of events, neurons, and history size. By doing inference in such a model one can estimate excitatory and inhibitory interactions between the neurons as well as the dependence on the stimulus. In addition, the Bayesian framework allows one not only to put error bars on the inferred parameter values but also to quantify the predictive power of the model in terms of the marginal likelihood.
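
A standard way to write such a coupled LNP model, consistent with the multiplicative description above (the symbols k_i, h_{ij}, b_i are chosen here for illustration and do not appear in the abstract), is

\lambda_i(t) = \exp\!\Big( k_i^\top x(t) + \sum_j \sum_{\tau > 0} h_{ij}(\tau)\, y_j(t-\tau) + b_i \Big) = e^{b_i}\, e^{k_i^\top x(t)} \prod_{j,\tau} e^{h_{ij}(\tau)\, y_j(t-\tau)},

where x(t) is the stimulus, y_j the spike trains, k_i the receptive field of unit i and h_{ij} its coupling filters. Because the exponential turns the sum into a product, stimulus and spike-history terms act multiplicatively on the rate; the resulting Poisson log-likelihood \sum_{i,t} \big( y_i(t) \log \lambda_i(t) - \lambda_i(t)\, \Delta t \big) is concave in the parameters, which is the convexity property cited above, and the marginal likelihood is the evidence integral p(D) = \int p(D \mid \theta)\, p(\theta)\, d\theta.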
As a sanity check of the new technique, and also to explore its limitations, we first verify on artificially generated data that we are able to infer the true underlying model. Then we apply the method to recordings from retinal ganglion cells (RGCs) responding to white-noise (m-sequence) stimulation. The figure shows both the inferred receptive fields (lower) and the confidence range of the sorted pixel values (upper) when using different fractions of the data (0, 10, 50, and 100%). We also compare the results with the receptive fields derived from classical linear correlation analysis and maximum likelihood estimation.
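
As a minimal, self-contained sketch (not the code behind the poster), the following Python/NumPy snippet fits a single LNP unit with an exponential nonlinearity to simulated white-noise data by maximum likelihood and also computes the spike-triggered average used in classical linear correlation analysis; the data, variable names and parameter values are illustrative assumptions.

import numpy as np
from scipy.optimize import minimize

# Illustrative sketch: Poisson GLM (LNP with exponential nonlinearity) fitted by ML.
rng = np.random.default_rng(0)
T, D = 5000, 20                    # time bins, stimulus dimensions (arbitrary choices)
dt = 0.01                          # bin width in seconds
X = rng.standard_normal((T, D))    # white-noise stimulus (stand-in for an m-sequence)
w_true = np.exp(-0.5 * ((np.arange(D) - 8) / 2.0) ** 2)   # smooth toy "receptive field"
y = rng.poisson(np.exp(X @ w_true) * dt)                  # simulated spike counts

def neg_log_lik(w, X, y, dt):
    # Negative Poisson log-likelihood (up to constants); convex in w for exp nonlinearity.
    u = X @ w
    return np.sum(np.exp(u) * dt - y * u)

def neg_log_lik_grad(w, X, y, dt):
    lam = np.exp(X @ w) * dt
    return X.T @ (lam - y)

res = minimize(neg_log_lik, np.zeros(D), args=(X, y, dt),
               jac=neg_log_lik_grad, method="L-BFGS-B")
w_ml = res.x                        # maximum-likelihood receptive field

w_sta = (X.T @ y) / y.sum()         # spike-triggered average (linear correlation analysis)

print("correlation(ML,  true): %.3f" % np.corrcoef(w_ml, w_true)[0, 1])
print("correlation(STA, true): %.3f" % np.corrcoef(w_sta, w_true)[0, 1])

Extending this sketch to the Bayesian setting described above amounts to placing a prior on w and approximating the full posterior, and hence the error bars and the marginal likelihood, instead of returning only the optimum.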