  Likelihood Estimation in Deep Belief Networks

Theis, L., Gerwinn, S., Sinz, F., & Bethge, M. (2010). Likelihood Estimation in Deep Belief Networks. Poster presented at Bernstein Conference on Computational Neuroscience (BCCN 2010), Berlin, Germany.

Creators:
Theis, L.¹, Author
Gerwinn, S.¹,², Author
Sinz, F.¹, Author
Bethge, M.¹, Author
Affiliations:
¹ Research Group Computational Vision and Neuroscience, Max Planck Institute for Biological Cybernetics, Max Planck Society (ou_1497805)
² Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society (ou_1497795)

Content

Free keywords: -
Abstract: Many models have been proposed to capture the statistical regularities in natural image patches. The average log-likelihood on unseen data offers a canonical way to quantify and compare the performance of statistical models. A class of models that has recently gained popularity for modeling complexly structured data is formed by deep belief networks. Analyses of these models, however, have typically been based on samples from the model, because the model likelihood is computationally intractable. In this study, we investigate whether the apparent ability of a particular deep belief network to capture higher-order statistical regularities in natural images is also reflected in its likelihood. Specifically, we derive a consistent estimator for the likelihood of deep belief networks that is conceptually simpler and more readily applicable than the previously published method [1]. Using this estimator, we evaluate a three-layer deep belief network and compare its density estimation performance with that of other models trained on small patches of natural images. In contrast to an earlier analysis based solely on samples, we provide evidence that the deep belief network under study is not a good model for natural images by showing that it is outperformed even by very simple models. Further, we confirm existing results indicating that adding more layers to the network has only a small effect on the likelihood if each layer is trained well enough. Finally, we offer a possible explanation for both the observed performance and the small effect of additional layers by analyzing a best-case scenario of the greedy learning algorithm commonly used to train this class of models.
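The abstract does not reproduce the estimator itself. As a purely illustrative sketch of the general importance-sampling principle behind such consistent likelihood estimators — approximating an intractable sum over hidden states by averaging reweighted samples from a tractable proposal — the following code estimates the unnormalized marginal of a tiny binary RBM and checks it against exact enumeration. The model size, parameters, and the uniform proposal are assumptions chosen for illustration, not the method of the poster.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny binary RBM with energy E(v, h) = -(v' W h + b' v + c' h).
# Dimensions are kept small so the exact sum over hidden states is tractable.
V, H = 4, 8  # illustrative sizes, not from the paper
W = rng.normal(scale=0.5, size=(V, H))
b = rng.normal(scale=0.1, size=V)
c = rng.normal(scale=0.1, size=H)

def exact_unnorm_marginal(v):
    """Exact sum_h exp(-E(v, h)) by enumerating all 2^H hidden states."""
    total = 0.0
    for i in range(2 ** H):
        h = np.array([(i >> j) & 1 for j in range(H)], dtype=float)
        total += np.exp(v @ W @ h + b @ v + c @ h)
    return total

def importance_estimate(v, n_samples=200_000):
    """Consistent importance-sampling estimate of sum_h exp(-E(v, h)),
    using a uniform proposal q(h) = 2^-H over binary hidden states:
        sum_h f(h) = 2^H * E_{h ~ Uniform}[f(h)]."""
    h = rng.integers(0, 2, size=(n_samples, H)).astype(float)
    weights = np.exp(h @ (W.T @ v + c) + b @ v)  # vectorized exp(-E(v, h))
    return (2 ** H) * weights.mean()

v = rng.integers(0, 2, size=V).astype(float)
exact = exact_unnorm_marginal(v)
approx = importance_estimate(v)
```

As the number of samples grows, the estimate converges to the exact sum (consistency), which is the property the abstract appeals to; the paper's actual estimator targets the full multi-layer likelihood rather than this toy marginal.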

Details

Language(s): -
 Dates: 2010-10
 Publication Status: Issued
 Pages: -
 Publishing info: -
 Table of Contents: -
 Rev. Type: -
 Degree: -

Event

Title: Bernstein Conference on Computational Neuroscience (BCCN 2010)
Place of Event: Berlin, Germany
Start-/End Date: -
