Item Details

  Information, learning and falsification

Balduzzi, D. (2011). Information, learning and falsification. In NIPS 2011 Philosophy and Machine Learning Workshop (pp. 1-4).


Basic Information

Item Type: Conference Paper

Files


Related URLs


Creators

Creator:
Balduzzi, D.1, Author
Affiliations:
1 Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_1497795

Content

Keywords: -
Abstract: There are (at least) three approaches to quantifying information. The first, algorithmic information or Kolmogorov complexity, takes events as strings and, given a universal Turing machine, quantifies the information content of a string as the length of the shortest program producing it [1]. The second, Shannon information, takes events as belonging to ensembles and quantifies the information resulting from observing the given event in terms of the number of alternate events that have been ruled out [2]. The third, statistical learning theory, has introduced measures of capacity that control (in part) the expected risk of classifiers [3]. These capacities quantify the expectations regarding future data that learning algorithms embed into classifiers. Solomonoff and Hutter have applied algorithmic information to prove remarkable results on universal induction. Shannon information provides the mathematical foundation for communication and coding theory. However, both approaches have shortcomings. Algorithmic information is not computable, severely limiting its practical usefulness. Shannon information refers to ensembles rather than actual events: it makes no sense to compute the Shannon information of a single string – or rather, there are many answers to this question depending on how a related ensemble is constructed. Although there are asymptotic results linking algorithmic and Shannon information, it is unsatisfying that there is such a large gap – a difference in kind – between the two measures. This note describes a new method of quantifying information, effective information, that links algorithmic information to Shannon information, and also links both to capacities arising in statistical learning theory [4, 5]. After introducing the measure, we show that it provides a non-universal analog of Kolmogorov complexity. We then apply it to derive basic capacities in statistical learning theory: empirical VC-entropy and empirical Rademacher complexity. A nice byproduct of our approach is an interpretation of the explanatory power of a learning algorithm in terms of the number of hypotheses it falsifies [6], counted in two different ways for the two capacities. We also discuss how effective information relates to information gain, Shannon and mutual information.
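
As a brief illustrative aside (not from the paper; the data and hypothesis class below are made-up toy examples), two of the standard quantities the abstract refers to can be computed directly: the Shannon surprisal of a single event, measured as bits of ruled-out alternatives, and a Monte-Carlo estimate of the empirical Rademacher complexity of a finite hypothesis class. A minimal Python sketch under those assumptions:

    import math
    import random

    def shannon_surprisal(p_event):
        """Shannon information of observing an event with probability p_event:
        -log2 p, i.e. the number of bits of alternatives ruled out."""
        return -math.log2(p_event)

    def empirical_rademacher(hypotheses, xs, n_samples=2000, rng=None):
        """Monte-Carlo estimate of the empirical Rademacher complexity of a
        finite hypothesis class on the sample xs; hypotheses map x -> {-1, +1}."""
        rng = rng or random.Random(0)
        n = len(xs)
        total = 0.0
        for _ in range(n_samples):
            sigma = [rng.choice((-1, 1)) for _ in range(n)]  # random sign pattern
            best = max(sum(s * h(x) for s, x in zip(sigma, xs)) / n
                       for h in hypotheses)
            total += best
        return total / n_samples

    if __name__ == "__main__":
        # A fair coin flip rules out half of the alternatives: 1 bit.
        print(shannon_surprisal(0.5))
        # Toy class of threshold classifiers on four points (hypothetical example).
        xs = [0.1, 0.4, 0.6, 0.9]
        thresholds = (0.0, 0.25, 0.5, 0.75, 1.0)
        hypotheses = [lambda x, t=t: 1 if x > t else -1 for t in thresholds]
        print(empirical_rademacher(hypotheses, xs))

The richer the hypothesis class, the more easily it fits random sign patterns, which is the sense in which such capacities bound the expectations a learning algorithm builds into its classifiers.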

Details

Language: -
Date: 2011-12
Publication Status: Published
Pages: -
Publishing Info: -
Table of Contents: -
Peer Review: -
Identifiers (DOI, ISBN, etc.): URI: http://www.dsi.unive.it/PhiMaLe2011/Program.html
BibTex Citekey: Balduzzi2011_5
Degree: -

Event

Title: NIPS 2011 Philosophy and Machine Learning Workshop
Place of Event: Sierra Nevada, Spain
Start / End Date: -

Legal Case


Project Information


Source 1

Title: NIPS 2011 Philosophy and Machine Learning Workshop
Genre: Proceedings
Creator(s) / Editor(s): -
Affiliations: -
Publisher, Place: -
Pages: -
Volume / Issue: -
Sequence Number: -
Start / End Page: 1 - 4
Identifiers (ISBN, ISSN, DOI, etc.): -