
Item Details

  Sparse Multiscale Gaussian Process Regression

Walder, C., Kim, K., & Schölkopf, B. (2008). Sparse Multiscale Gaussian Process Regression. In W., Cohen, A., McCallum, & S., Roweis (Eds.), ICML '08: Proceedings of the 25th international conference on Machine learning (pp. 1112-1119). New York, NY, USA: ACM Press.


Basic Information

Genre: Conference Paper



Creators

Creators:
Walder, C1, 2, 3, Author
Kim, KI2, 4, Author
Schölkopf, B2, 4, Author
Affiliations:
1Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_1497797              
2Max Planck Institute for Biological Cybernetics, Max Planck Society, Spemannstrasse 38, 72076 Tübingen, DE, ou_1497794              
3Project group: Cognitive Engineering, Max Planck Institute for Biological Cybernetics, Max Planck Society, Spemannstrasse 38, 72076 Tübingen, DE, ou_2528702              
4Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_1497795              

Content

Keywords: -
Abstract: Most existing sparse Gaussian process (g.p.) models seek computational advantages by basing their computations on a set of m basis functions that are the covariance function of the g.p. with one of its two inputs fixed. We generalise this for the case of the Gaussian covariance function, by basing our computations on m Gaussian basis functions with arbitrary diagonal covariance matrices (or length scales). For a fixed number of basis functions and any given criteria, this additional flexibility permits approximations no worse and typically better than was previously possible. We perform gradient-based optimisation of the marginal likelihood, which costs O(m²n) time where n is the number of data points, and compare the method to various other sparse g.p. methods. Although we focus on g.p. regression, the central idea is applicable to all kernel-based algorithms, and we also provide some results for the support vector machine (s.v.m.) and kernel ridge regression (k.r.r.). Our approach outperforms the other methods, particularly for the case of very few basis functions, i.e. a very high sparsity ratio.
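The core idea in the abstract, using m Gaussian basis functions where each basis has its own diagonal length scales, can be illustrated with a minimal sketch. This is not the authors' full method (which optimises basis centres and length scales via gradient ascent on the marginal likelihood); it is a simplified ridge-regression-style fit with fixed, hand-chosen hyperparameters, and all function names and parameter values below are illustrative assumptions.

```python
import numpy as np

def gaussian_basis(X, centers, length_scales):
    """Evaluate m Gaussian basis functions with per-basis diagonal length scales.

    X:             (n, d) input points
    centers:       (m, d) basis centres
    length_scales: (m, d) per-basis, per-dimension length scales
    Returns Phi:   (n, m) with Phi[i, j] = exp(-0.5 * sum_d ((x_i - c_j)_d / l_jd)^2)
    """
    diff = X[:, None, :] - centers[None, :, :]          # (n, m, d)
    z = (diff / length_scales[None, :, :]) ** 2
    return np.exp(-0.5 * z.sum(axis=-1))                # (n, m)

def fit_sparse_multiscale(X, y, centers, length_scales, noise=0.1):
    """Fit basis weights by regularised least squares.

    Forming Phi.T @ Phi dominates the cost at O(m^2 n), matching the
    complexity quoted in the abstract for one objective evaluation.
    """
    Phi = gaussian_basis(X, centers, length_scales)     # (n, m)
    m = centers.shape[0]
    A = Phi.T @ Phi + noise**2 * np.eye(m)              # (m, m) system
    return np.linalg.solve(A, Phi.T @ y)                # basis weights w

def predict(X_new, centers, length_scales, w):
    """Predictive mean at new inputs: Phi_new @ w."""
    return gaussian_basis(X_new, centers, length_scales) @ w
```

Because each basis carries its own length scales, a fixed budget of m bases can mix broad and narrow components; in the paper this extra flexibility is what allows approximations no worse, and typically better, than fixing all bases to the covariance function's length scale.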

Details

Language:
Date: 2008-07
Publication status: Published
Pages: -
Publishing info: -
Table of Contents: -
Review: -
Identifiers: DOI: 10.1145/1390156.1390296
BibTex Citekey: 5121
Degree: -

Event

Title: 25th International Conference on Machine Learning (ICML 2008)
Place of Event: Helsinki, Finland
Start-/End Date: 2008-07-05 - 2008-07-09


Source 1

Title: ICML '08: Proceedings of the 25th international conference on Machine learning
Source Type: Proceedings
Creators:
Cohen, WW, Editor
McCallum, A, Editor
Roweis, ST, Editor
Affiliations:
-
Publ. Info: New York, NY, USA : ACM Press
Pages: -
Volume / Issue: -
Sequence Number: -
Start / End Page: 1112 - 1119
Identifier: ISBN: 978-1-60558-205-4