  Likelihood Estimation in Deep Belief Networks

Theis, L., Gerwinn, S., Sinz, F., & Bethge, M. (2010). Likelihood Estimation in Deep Belief Networks. Poster presented at Bernstein Conference on Computational Neuroscience (BCCN 2010), Berlin, Germany. doi:10.3389/conf.fncom.2010.51.00116.

External References

Description: -
OA-Status: -
Creators

Creators:
Theis, L. (1, 2), Author
Gerwinn, S. (1, 2), Author
Sinz, F. (1, 2), Author
Bethge, M. (1, 2), Author
Affiliations:
1 Research Group Computational Vision and Neuroscience, Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_1497805
2 Max Planck Institute for Biological Cybernetics, Max Planck Society, Spemannstrasse 38, 72076 Tübingen, DE, ou_1497794

Content

Keywords: -
Abstract: Many models have been proposed to capture the statistical regularities in natural image patches.
The average log-likelihood on unseen data offers a canonical way to quantify and compare the performance of statistical models. A class of models that has recently gained increasing popularity for the task of modeling complexly structured data is formed by deep belief networks. Analyses of these models, however, have typically been based on samples from the model, because the model likelihood is computationally intractable.
In this study, we investigate whether the apparent ability of a particular deep belief network to capture higher-order statistical regularities in natural images is also reflected in the likelihood. Specifically, we derive a consistent estimator for the likelihood of deep belief networks that is conceptually simpler and more readily applicable than the previously published method [1]. Using this estimator, we evaluate a three-layer deep belief network and compare its density estimation performance with the performance of other models trained on small patches of natural images. In contrast to an earlier analysis based solely on samples, we provide evidence that the deep belief network under study is not a good model for natural images by showing that it is outperformed even by very simple models. Further, we confirm existing results indicating that adding more layers to the network has only a small effect on the likelihood if each layer of the model is trained well enough.
Finally, we offer a possible explanation for both the observed performance and the small effect of additional layers by analyzing a best-case scenario of the greedy learning algorithm commonly used for training this class of models.
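The abstract's central quantity, the model log-likelihood, requires the intractable partition function of the network. As a hedged illustration of the general idea (a sampling-based partition-function estimate, validated against exact enumeration on a model small enough to sum over), here is a minimal sketch for a toy binary RBM, the building block of a deep belief network. All sizes, parameter values, and the uniform proposal distribution are assumptions chosen for illustration; this is not the estimator derived in the poster.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny binary RBM: 6 visible, 4 hidden units (small enough that the
# partition function can be computed exactly by enumerating all states).
nv, nh = 6, 4
W = rng.normal(scale=0.5, size=(nv, nh))  # weights (assumed values)
b = rng.normal(scale=0.1, size=nv)        # visible biases
c = rng.normal(scale=0.1, size=nh)        # hidden biases

def free_energy(v):
    """Unnormalized log-probability of visible vector(s) v, hiddens summed out:
    F(v) = b.v + sum_j log(1 + exp(c_j + v.W_j))."""
    return v @ b + np.sum(np.logaddexp(0.0, v @ W + c), axis=-1)

# Exact log partition function: log-sum-exp over all 2^nv visible states.
all_v = np.array([[(i >> k) & 1 for k in range(nv)]
                  for i in range(2 ** nv)], dtype=float)
logZ_exact = np.logaddexp.reduce(free_energy(all_v))

# Simple importance-sampling estimate with a uniform proposal q(v) = 2^-nv:
# Z = 2^nv * E_q[exp(F(v))], estimated by a Monte Carlo average.
n_samples = 100_000
v = rng.integers(0, 2, size=(n_samples, nv)).astype(float)
logZ_est = (np.logaddexp.reduce(free_energy(v))
            - np.log(n_samples) + nv * np.log(2.0))

# Log-likelihood of a data vector under the model.
x = all_v[3]
log_px = free_energy(x) - logZ_exact
print(f"exact  log Z: {logZ_exact:.4f}")
print(f"IS est log Z: {logZ_est:.4f}")
print(f"log p(x)    : {log_px:.4f}")
```

The uniform proposal works here only because the toy state space is minuscule; for real deep belief networks the proposal must be chosen far more carefully, which is exactly the difficulty the poster's estimator addresses.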

Details

Language(s): -
Date: 2010-09
Publication status: Published online
Pages: -
Place, publisher, edition: -
Table of contents: -
Review type: -
Identifiers: DOI: 10.3389/conf.fncom.2010.51.00116
BibTeX citekey: 6704
Degree type: -

Event

Title: Bernstein Conference on Computational Neuroscience (BCCN 2010)
Venue: Berlin, Germany
Start/End date: 2010-09-27 - 2010-10-01


Source 1

Title: Frontiers in Computational Neuroscience
Abbreviated title: Front Comput Neurosci
Source genre: Journal
Creators: -
Affiliations: -
Place, publisher, edition: Lausanne : Frontiers Research Foundation
Pages: -
Volume / Issue: 2010 (Conference Abstract: Bernstein Conference on Computational Neuroscience)
Article number: -
Start / End page: -
Identifier: Other: 1662-5188
CoNE: https://pure.mpg.de/cone/journals/resource/1662-5188