
Released

Poster

Flexible Bayesian inference for mechanistic models of neural dynamics

MPS-Authors

Bassetto,  G
Former Research Group Neural Computation and Behaviour, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;


Nonnenmacher,  M
Former Research Group Neural Computation and Behaviour, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;


Macke,  J
Max Planck Institute for Biological Cybernetics, Max Planck Society;
Center of Advanced European Studies and Research (caesar), Max Planck Society;
Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society;

Citation

Goncalves, P., Lueckmann, J.-M., Bassetto, G., Nonnenmacher, M., & Macke, J. (2017). Flexible Bayesian inference for mechanistic models of neural dynamics. Poster presented at Computational and Systems Neuroscience Meeting (COSYNE 2017), Salt Lake City, UT, USA.


Cite as: https://hdl.handle.net/21.11116/0000-0000-C507-A
Abstract
One of the central goals of computational neuroscience is to understand the dynamics of single neurons and neural ensembles. However, linking mechanistic models of neural dynamics to empirical observations of neural activity has been challenging. Statistical inference is only possible for a few models of neural dynamics (e.g. GLMs), and no generally applicable, effective statistical inference algorithms are available. As a consequence, comparisons between models and data are either qualitative or rely on manual parameter tweaking, parameter fitting using heuristics, or brute-force search. Furthermore, parameter-fitting approaches typically return a single best-fitting estimate, but do not characterize the entire space of models that would be consistent with the data. We overcome this limitation by presenting a general method for Bayesian inference on mechanistic models of neural dynamics. Our approach can be applied in a ‘black box’ manner to a wide range of neural models without requiring model-specific modifications. In particular, it extends to models without explicit likelihoods (e.g. most spiking networks). We achieve this goal by building on recent advances in likelihood-free Bayesian inference (Papamakarios and Murray 2016, Moreno et al. 2016): the key idea is to simulate multiple datasets from different parameters, and then to train a probabilistic neural network which approximates the mapping from data to posterior distribution. We illustrate this approach using Hodgkin-Huxley models of single neurons and models of spiking networks. On simulated data, estimated posterior distributions recover ground-truth parameters and reveal the manifold of parameters for which the model exhibits the same behaviour. On in-vitro recordings of membrane voltages, we recover multivariate posteriors over biophysical parameters, and the resulting voltage traces accurately match the empirical data. Our approach will enable neuroscientists to perform Bayesian inference on complex neural dynamics models without having to design model-specific algorithms, closing the gap between biophysical and statistical approaches to neural dynamics.
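
The simulate-then-train idea described in the abstract can be sketched compactly. The snippet below is a minimal, illustrative example and not the authors' implementation: it assumes a toy one-parameter simulator and approximates the posterior with a single Gaussian whose mean and log-standard-deviation are predicted by a small network, whereas the poster uses mixture density networks and mechanistic simulators such as Hodgkin-Huxley models. All names, priors, and settings in the code are hypothetical.

```python
# Minimal sketch of likelihood-free posterior estimation in the spirit of
# Papamakarios & Murray (2016): draw parameters from the prior, simulate data,
# then train a network mapping data to an approximate posterior over parameters.
# Simulator, prior, and network sizes are illustrative assumptions, not the poster's models.
import torch
import torch.nn as nn

torch.manual_seed(0)

def simulator(theta):
    # Hypothetical one-parameter "mechanistic model": theta -> noisy summary statistic.
    return torch.tanh(theta) + 0.05 * torch.randn_like(theta)

# 1) Simulate a training set from the prior.
n_sims = 5000
theta = torch.rand(n_sims, 1) * 4.0 - 2.0   # uniform prior on [-2, 2]
x = simulator(theta)                        # simulated data / summary statistics

# 2) Probabilistic network: data x -> mean and log-std of a Gaussian over theta.
net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 2))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for epoch in range(500):
    out = net(x)
    mu, log_sigma = out[:, :1], out[:, 1:]
    # Negative log-likelihood of the true parameters under the predicted posterior.
    nll = (log_sigma + 0.5 * ((theta - mu) / log_sigma.exp()) ** 2).mean()
    opt.zero_grad()
    nll.backward()
    opt.step()

# 3) Amortized inference: condition the trained network on an "observed" data point.
x_obs = torch.tensor([[0.5]])
with torch.no_grad():
    out = net(x_obs)
print(f"approximate posterior: mean={out[0, 0].item():.2f}, std={out[0, 1].exp().item():.2f}")
```

Once trained, the network can be conditioned on new observations without re-running the simulator; the mixture-density variant used in the poster additionally captures multimodal posteriors and the parameter manifolds mentioned in the abstract.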