Abstract:
We address the problem of learning a symmetric positive definite matrix. The central issue is to design
parameter updates that preserve positive definiteness. Our updates are motivated by the von
Neumann divergence. Rather than treating the most general case, we focus on two key applications
that exemplify our methods: on-line learning with a simple square loss, and finding a symmetric
positive definite matrix subject to linear constraints. The updates generalize the exponentiated gradient
(EG) update and AdaBoost, respectively: the parameter is now a symmetric positive definite
matrix of trace one instead of a probability vector (which in this context is a diagonal positive definite
matrix with trace one). The generalized updates use matrix logarithms and exponentials to
preserve positive definiteness. Most importantly, we show how the derivation and the analyses of
the original EG update and AdaBoost generalize to the non-diagonal case. We apply the resulting
matrix exponentiated gradient (MEG) update and DefiniteBoost to the problem of learning a kernel
matrix from distance measurements.
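For intuition, here is a minimal numerical sketch of one trace-one MEG step of the kind described above: the gradient step is taken in the matrix-log domain and mapped back with the matrix exponential, which keeps the parameter symmetric positive definite, and a trace normalization keeps it trace one. The helper names, the learning rate eta, and the toy data are illustrative assumptions, not values from the paper.

```python
import numpy as np

def sym_funm(W, f):
    """Apply a scalar function f to the eigenvalues of a symmetric matrix."""
    vals, vecs = np.linalg.eigh(W)
    return (vecs * f(vals)) @ vecs.T

def meg_update(W, grad, eta):
    """One matrix exponentiated gradient (MEG) step.

    W    : symmetric positive definite matrix of trace one
    grad : loss gradient at W, assumed symmetric
    eta  : learning rate (illustrative value, not from the paper)
    """
    # Step in the matrix-log domain; the matrix exponential of a
    # symmetric matrix is always symmetric positive definite.
    M = sym_funm(sym_funm(W, np.log) - eta * grad, np.exp)
    return M / np.trace(M)  # renormalize the trace to one

# Toy usage with placeholder data:
n = 3
W = np.eye(n) / n            # uniform trace-one starting matrix
G = np.random.randn(n, n)
G = (G + G.T) / 2            # symmetrize the "gradient"
W = meg_update(W, G, eta=0.1)
print(np.trace(W), np.linalg.eigvalsh(W))  # trace ~1, all eigenvalues > 0
```

Note the analogy to the vector case: restricted to diagonal matrices, the matrix log/exp reduce to componentwise log/exp and the step recovers the ordinary EG update on a probability vector.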