  Implicit Wiener Series for Estimating Nonlinear Receptive Fields

Franz, M., Macke, J., Saleem, A., & Schultz, S. (2007). Implicit Wiener Series for Estimating Nonlinear Receptive Fields. Poster presented at 31st Göttingen Neurobiology Conference, Göttingen, Germany.

Creators

Creators:
Franz, MO (1), Author
Macke, JH (1, 2), Author
Saleem, A, Author
Schultz, SR, Author
Affiliations:
(1) Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_1497795
(2) Research Group Computational Vision and Neuroscience, Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_1497805

Content

Abstract: The representation of the nonlinear response properties of a neuron by a Wiener series expansion has enjoyed a certain popularity in the past, but its application has been limited to rather low-dimensional and weakly nonlinear systems due to the exponential growth of the number of terms that have to be estimated. A recently developed estimation method [1] uses the kernel techniques widely employed in the machine learning community to implicitly represent the Wiener series as an element of an abstract dot product space. In contrast to the classical estimation methods for the Wiener series, the estimation complexity of the implicit representation is linear in the input dimensionality and independent of the degree of nonlinearity. From the neural system identification point of view, the proposed estimation method has several advantages:
1. Due to the linear dependence of the estimation complexity on input dimensionality, system identification can also be carried out for systems acting on high-dimensional inputs such as images or video sequences.
2. Compared to classical cross-correlation techniques (such as spike-triggered average or covariance estimates), similar accuracies can be achieved with a considerably smaller amount of data.
3. The new technique does not require white noise as input, but works for arbitrary classes of input signals such as, e.g., natural image patches.
4. Regularisation concepts from machine learning can be used to identify systems with noise-contaminated output signals.
We present an application of the implicit Wiener series to find the low-dimensional stimulus subspace that accounts for most of the neuron's activity. We approximate the second-order term of a full Wiener series model with a set of parallel cascades, each consisting of a linear receptive field followed by a static nonlinearity; this type of approximation is known as the reduced set technique in machine learning. We compare our results on simulated and physiological datasets to existing identification techniques in terms of prediction performance and accuracy of the obtained subspaces.
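The estimation idea summarised in the abstract can be illustrated with a minimal sketch: kernel ridge regression with a degree-2 inhomogeneous polynomial kernel stands in for the implicit Wiener-series estimator, and the second-order term is then decomposed into parallel cascades (linear filter plus static nonlinearity) by an eigen-decomposition. This is not the poster's implementation; the simulated energy-model neuron, filters, and all parameter values below are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): an implicit second-order Wiener-series
# estimate via kernel ridge regression with an inhomogeneous polynomial kernel,
# followed by recovery of the dominant quadratic receptive-field subspace.
# All filters, sizes, and regularisation values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Simulated neuron: energy-model-style quadratic response to two hidden filters.
d, n = 40, 1500                      # stimulus dimensionality, number of samples
t = np.arange(d)
w1 = np.exp(-0.5 * ((t - 15) / 3.0) ** 2) * np.cos(0.6 * t)
w2 = np.exp(-0.5 * ((t - 25) / 3.0) ** 2) * np.sin(0.6 * t)
w1, w2 = w1 / np.linalg.norm(w1), w2 / np.linalg.norm(w2)

X = rng.standard_normal((n, d))                                   # stimuli (rows)
y = (X @ w1) ** 2 + (X @ w2) ** 2 + 0.2 * rng.standard_normal(n)  # noisy responses

# Implicit Wiener-series estimate: the expansion up to degree p is represented
# implicitly by the kernel k(x, x') = (1 + x.x')^p; ridge regression with this
# Gram matrix has cost linear in the input dimensionality d.
degree, lam = 2, 1.0
K = (1.0 + X @ X.T) ** degree                    # polynomial Gram matrix (n x n)
alpha = np.linalg.solve(K + lam * np.eye(n), y)  # expansion coefficients

def predict(X_new):
    """Predicted response f(x) = sum_i alpha_i * (1 + x_i . x)^degree."""
    return ((1.0 + X_new @ X.T) ** degree) @ alpha

print("train correlation:", np.corrcoef(predict(X), y)[0, 1])

# For degree 2, f(x) = c + b.x + x^T H x with H = sum_i alpha_i x_i x_i^T.
# Eigen-decomposing H rewrites the second-order term as parallel cascades of a
# linear filter followed by a static (squaring) nonlinearity, i.e. a
# low-dimensional stimulus subspace driving the quadratic response.
H = X.T @ (alpha[:, None] * X)
eigvals, eigvecs = np.linalg.eigh(H)
order = np.argsort(np.abs(eigvals))[::-1]
subspace = eigvecs[:, order[:2]]                 # two dominant filter directions

# Check how much of the true hidden filters the recovered 2-D subspace captures.
proj = subspace @ subspace.T
print("capture of w1:", np.linalg.norm(proj @ w1))  # close to 1 if recovered
print("capture of w2:", np.linalg.norm(proj @ w2))
```

Truncating the eigen-decomposition to the few dominant directions plays the role of the reduced set approximation mentioned in the abstract: the full second-order kernel is replaced by a small number of linear-filter-plus-nonlinearity cascades.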

Details

 Dates: 2007-04
 Publication Status: Issued
 Identifiers: URI: http://www.neuro.uni-goettingen.de/nbc.php?sel=archiv
BibTex Citekey: 4265

Event

Title: 31st Göttingen Neurobiology Conference
Place of Event: Göttingen, Germany
