  Statistical Learning with Similarity and Dissimilarity Functions

von Luxburg, U. (2004). Statistical Learning with Similarity and Dissimilarity Functions. Berlin, Germany: Logos Verlag.

Basic information

Genre: Book


Creators

Creator:
von Luxburg, U1, 2, 3, Author
Affiliations:
1Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_1497795
2Max Planck Institute for Biological Cybernetics, Max Planck Society, Spemannstrasse 38, 72076 Tübingen, DE, ou_1497794
3Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society, Spemannstrasse 38, 72076 Tübingen, DE, ou_1497797

Content

Keywords: -
Abstract: This work explores statistical properties of machine learning algorithms from different perspectives. Questions arising in both supervised and unsupervised learning are investigated, covering diverse issues such as the convergence of algorithms, the speed of convergence, generalization bounds, and how statistical properties can be used in practical machine learning applications. All topics covered have the common feature that the properties of the similarity or dissimilarity function on the data play an important role.

Learning is the process of inferring general rules from given examples. The examples are instances of some input space (pattern space), and the rules can consist of some general observation about the structure of the input space, or have the form of a functional dependency between the input and some output space. Two types of learning problems are considered: classification and clustering. In both problems, the goal is to divide the input space into several regions such that objects within the same region "belong together" and "are different" from the objects in the other regions. The difference between the two problems is that classification is a supervised learning technique, while clustering is unsupervised.
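The supervised/unsupervised distinction above can be illustrated with a minimal sketch on hypothetical one-dimensional data: a nearest-neighbor classifier (labels available) versus a simple gap-based clustering (labels unavailable). The data values, the `gap` threshold, and both function names are illustrative assumptions, not taken from the book.

```python
# Hypothetical toy data: six points on the real line.
points = [1.0, 1.2, 0.9, 5.0, 5.3, 4.8]
labels = ["a", "a", "a", "b", "b", "b"]  # labels exist only in the supervised setting


def classify_1nn(x, points, labels):
    """Supervised (classification): predict the label of the nearest labeled example,
    using |x - x_i| as the dissimilarity."""
    nearest = min(range(len(points)), key=lambda i: abs(points[i] - x))
    return labels[nearest]


def cluster_by_gap(points, gap=1.0):
    """Unsupervised (clustering): sort the points and start a new cluster
    whenever the distance to the previous point exceeds `gap`."""
    ordered = sorted(points)
    clusters = [[ordered[0]]]
    for prev, cur in zip(ordered, ordered[1:]):
        if cur - prev > gap:
            clusters.append([])
        clusters[-1].append(cur)
    return clusters


classify_1nn(1.1, points, labels)  # -> "a"
cluster_by_gap(points)             # -> [[0.9, 1.0, 1.2], [4.8, 5.0, 5.3]]
```

In both routines the decision depends only on the distance between points, which is the common thread the abstract emphasizes.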

Machine learning algorithms are usually designed to deal with either similarities or dissimilarities. In general it is recommended to choose an algorithm that can deal with the type of data given, but sometimes it may become necessary to convert similarities into dissimilarities or vice versa. In some situations this can be done without losing information, especially if the similarities and distances are defined by a scalar product in a Euclidean space. If this is not the case, several heuristics can be invoked. The general idea is to transform a similarity into a dissimilarity function, or vice versa, by applying a monotonically decreasing function. This accords with the general intuition that a distance is small if the similarity is large, and vice versa. The connection between information theory and learning can be exploited in everyday machine learning applications.
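Both conversions described above can be sketched in a few lines. This is a minimal illustration, not the book's implementation: the toy points are assumptions, and the Gaussian-style transform is just one choice of monotonically decreasing function. The lossless direction uses the standard identity that holds when similarity is a scalar product: d(x_i, x_j)^2 = <x_i, x_i> + <x_j, x_j> - 2<x_i, x_j>.

```python
import math

# Hypothetical toy points whose similarity is the Euclidean scalar product.
points = [(0.0, 0.0), (3.0, 4.0), (6.0, 8.0)]


def dot(p, q):
    return sum(a * b for a, b in zip(p, q))


# Gram matrix of similarities: S[i][j] = <x_i, x_j>.
S = [[dot(p, q) for q in points] for p in points]


def gram_to_distance(S, i, j):
    """Lossless conversion when the similarity is a scalar product:
    d(x_i, x_j) = sqrt(S_ii + S_jj - 2 * S_ij)."""
    return math.sqrt(S[i][i] + S[j][j] - 2 * S[i][j])


def distance_to_similarity(d, scale=1.0):
    """Heuristic conversion: any monotonically decreasing function works;
    here an illustrative Gaussian-style transform exp(-d^2 / scale)."""
    return math.exp(-d * d / scale)


gram_to_distance(S, 0, 1)  # -> 5.0, the Euclidean distance from (0,0) to (3,4)
```

The heuristic direction is not invertible in general (different distances can map to similar values once `scale` compresses them), which is why the text singles out the scalar-product case as the one where no information is lost.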

Details

Language:
Date: 2004
Publication status: Issued
Pages: 166
Publishing info: Berlin, Germany : Logos Verlag
Table of contents: -
Review: -
Identifiers (DOI, ISBN, etc.): BibTex Citekey: 2836
ISBN: 978-3-8325-0767-1
Degree: -


Source 1

Title: MPI Series in Biological Cybernetics
Genre: Series
Creator(s) / Editor(s):
Affiliations:
Publisher, place: -
Pages: -
Volume / issue: 10
Sequence number: -
Start / end page: -
Identifiers (ISBN, ISSN, DOI, etc.): -