How to identify a model for spatial vision?


Wichmann, F. A.
Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society;


Cite as:
Dold, H., & Wichmann, F. A. (2011). How to identify a model for spatial vision? Poster presented at the Computational and Systems Neuroscience Meeting (COSYNE 2011), Salt Lake City, UT, USA.

Abstract:
Empirical evidence gathered in experimental work often leads to computational models that help make progress towards a more complete understanding of the phenomenon under study. This increased knowledge in turn enables the design of better subsequent experiments. In the case of psychophysical experiments and models of spatial vision (multi-channel linear/nonlinear cascade models), this experiment-modeling cycle resulted in an almost factory-like production of data. Experimental variants ranged from detection and discrimination to summation and adaptation experiments. While the model was certainly productive in terms of experimental output, it is not yet clear to what extent the experimental data really helped to identify the model. We use Markov chain Monte Carlo sampling to estimate the parameter posterior distributions of an image-driven spatial vision model. Inspection of the posterior distribution allows us to draw conclusions about whether the model is well parametrized, about the applicability of individual model components, and about the explanatory power of the data. Specifically, we show that Minkowski pooling over channels as a decision stage does not allow all parameters of an upstream static nonlinearity to be recovered from typical experimental data. Our results, and the approach in general, are relevant not only for psychophysics: the method should be applicable to computational models in spatial vision and in the neurosciences more broadly.
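The ingredients named in the abstract can be illustrated in a short sketch. The code below is not the authors' model: the transducer form, parameter values, likelihood, and sampler settings are all illustrative assumptions. It shows a Naka-Rushton-style static nonlinearity per channel, Minkowski pooling over channels as the decision stage, and a minimal Metropolis sampler drawing from the parameter posterior given simulated data.

```python
import numpy as np

def minkowski_pool(responses, m):
    """Minkowski pooling over channel responses: (sum_i r_i^m)^(1/m).
    m = 1 is plain summation; large m approximates max-pooling."""
    r = np.asarray(responses, dtype=float)
    return np.sum(r ** m) ** (1.0 / m)

def static_nonlinearity(s, p, c):
    """Illustrative Naka-Rushton-style transducer (hypothetical form):
    r = s^p / (s^p + c)."""
    return s ** p / (s ** p + c)

def log_likelihood(params, stimuli, responses, sigma=0.05):
    """Gaussian log-likelihood of pooled model outputs (toy choice)."""
    p, c, m = params
    if p <= 0 or c <= 0 or m < 1:          # flat priors within bounds
        return -np.inf
    pred = np.array([minkowski_pool(static_nonlinearity(s, p, c), m)
                     for s in stimuli])
    return -0.5 * np.sum((pred - responses) ** 2) / sigma ** 2

def metropolis(log_like, init, stimuli, responses,
               n_steps=2000, step=0.05, seed=0):
    """Minimal random-walk Metropolis sampler over the parameters."""
    rng = np.random.default_rng(seed)
    x = np.array(init, dtype=float)
    ll = log_like(x, stimuli, responses)
    samples = []
    for _ in range(n_steps):
        prop = x + rng.normal(0.0, step, size=x.size)
        ll_prop = log_like(prop, stimuli, responses)
        if np.log(rng.uniform()) < ll_prop - ll:   # accept/reject
            x, ll = prop, ll_prop
        samples.append(x.copy())
    return np.array(samples)

# Simulate data from known parameters, then sample the posterior.
true_p, true_c, true_m = 2.0, 0.1, 3.0            # hypothetical values
rng = np.random.default_rng(1)
stimuli = [rng.uniform(0.1, 1.0, size=4) for _ in range(30)]
responses = np.array([minkowski_pool(static_nonlinearity(s, true_p, true_c),
                                     true_m) for s in stimuli])
samples = metropolis(log_likelihood, init=(1.5, 0.2, 2.0),
                     stimuli=stimuli, responses=responses)
```

Inspecting the spread and correlations of `samples` is the kind of posterior diagnosis the abstract describes: strongly correlated or flat directions in the posterior indicate parameters (here, of the nonlinearity versus the pooling exponent) that the data cannot jointly identify.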