
Item Details


Released

Conference Paper

Learning output kernels with block coordinate descent

MPS-Authors

Dinuzzo, F.
Dept. Empirical Inference, Max Planck Institute for Intelligent Systems, Max Planck Society;

External Resource
Fulltext (restricted access)
Fulltext (public)
There are no public fulltexts available
Supplementary Material (public)
There is no public supplementary material available
Citation

Dinuzzo, F., Ong, C., Gehler, P., & Pillonetto, G. (2011). Learning output kernels with block coordinate descent. In L. Getoor & T. Scheffer (Eds.), 28th International Conference on Machine Learning (ICML 2011) (pp. 49-56). Madison, WI, USA: International Machine Learning Society.


Cite as: https://hdl.handle.net/11858/00-001M-0000-0013-BB22-D
Abstract
We propose a method to learn simultaneously a vector-valued function and a kernel between its components. The obtained kernel can be used both to improve learning performance and to reveal structures in the output space which may be important in their own right. Our method is based on the solution of a suitable regularization problem over a reproducing kernel Hilbert space (RKHS) of vector-valued functions. Although the regularized risk functional is non-convex, we show that it is invex, implying that all local minimizers are global minimizers. We derive a block-wise coordinate descent method that efficiently exploits the structure of the objective functional. Then, we empirically demonstrate that the proposed method can improve classification accuracy. Finally, we provide a visual interpretation of the learned kernel matrix for some well-known datasets.
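
The abstract describes an alternating scheme: fix the output kernel L and solve for the function's coefficients, then fix the coefficients and update L. The sketch below illustrates that block coordinate descent pattern on a simplified separable-kernel least-squares functional, Q(C, L) = ||Y - K C L||_F^2 / (2*lam) + tr(L C^T K C) / 2. It is a minimal sketch, not the authors' implementation: the RBF input kernel, the regularization weight lam, and especially the heuristic L-update with PSD projection are assumptions made here for illustration.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    """Gaussian RBF input kernel (an assumption; any PSD kernel works)."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

def okl_block_descent(K, Y, lam=0.1, n_iter=30):
    """Illustrative block coordinate descent for output kernel learning.

    Alternates two blocks on the simplified functional
        Q(C, L) = ||Y - K C L||_F^2 / (2*lam) + tr(L C^T K C) / 2:
      1) C-step: with L fixed, stationarity gives the Sylvester equation
         K C L + lam * C = Y, solved via eigendecompositions of K and L;
      2) L-step: with C fixed, the unconstrained minimizer followed by
         projection onto the PSD cone (a heuristic stand-in for the
         paper's exact L-update).
    """
    m = Y.shape[1]
    L = np.eye(m)                       # start from independent outputs
    wK, UK = np.linalg.eigh(K)          # K is fixed; decompose it once
    for _ in range(n_iter):
        # --- C-step: solve K C L + lam * C = Y elementwise in the
        #     joint eigenbases of K and L ------------------------------
        wL, UL = np.linalg.eigh(L)
        Yt = UK.T @ Y @ UL
        C = UK @ (Yt / (np.outer(wK, wL) + lam)) @ UL.T
        # --- L-step (heuristic): unconstrained minimizer of Q in L,
        #     i.e. (KC)^T(KC) L = (KC)^T Y - 0.5*lam*C^T K C, then
        #     symmetrize and clip eigenvalues to stay PSD ---------------
        M = K @ C
        A = M.T @ M + 1e-8 * np.eye(m)  # small ridge for stability
        B = M.T @ Y - 0.5 * lam * (C.T @ K @ C)
        L = np.linalg.solve(A, B)
        L = 0.5 * (L + L.T)
        w, U = np.linalg.eigh(L)
        L = (U * np.clip(w, 0.0, None)) @ U.T
    return C, L

# Example usage on synthetic data: predictions on the training inputs
# are K @ C @ L, and the learned L encodes similarity between outputs.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3))
Y = X @ rng.normal(size=(3, 4)) + 0.1 * rng.normal(size=(40, 4))
K = rbf_kernel(X, gamma=0.5)
C, L = okl_block_descent(K, Y, lam=0.1)
print(np.round(L, 2))
```

As in the abstract's visual interpretation, the off-diagonal entries of the learned L indicate which output components the method treats as related; the paper's actual updates and invexity guarantee apply to its own functional, not to this simplified variant.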