
Released

Journal Article

Matrix Exponentiated Gradient Updates for On-line Learning and Bregman Projection

MPG Authors
/persons/resource/persons84265

Tsuda,  K
Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

/persons/resource/persons84153

Rätsch,  G
Friedrich Miescher Laboratory, Max Planck Society;

External Resources
Full texts (restricted access)
No full texts are currently released for your IP range.
Full texts (freely accessible)
No freely accessible full texts are available in PuRe
Supplementary material (freely accessible)
No freely accessible supplementary materials are available
Citation

Tsuda, K., Rätsch, G., & Warmuth, M. (2005). Matrix Exponentiated Gradient Updates for On-line Learning and Bregman Projection. The Journal of Machine Learning Research, 6, 995-1018.


Citation link: https://hdl.handle.net/11858/00-001M-0000-0013-D54F-C
Abstract
We address the problem of learning a symmetric positive definite matrix. The central issue is to design
parameter updates that preserve positive definiteness. Our updates are motivated by the von
Neumann divergence. Rather than treating the most general case, we focus on two key applications
that exemplify our methods: on-line learning with a simple square loss, and finding a symmetric
positive definite matrix subject to linear constraints. The updates generalize the exponentiated gradient
(EG) update and AdaBoost, respectively: the parameter is now a symmetric positive definite
matrix of trace one instead of a probability vector (which in this context is a diagonal positive definite
matrix with trace one). The generalized updates use matrix logarithms and exponentials to
preserve positive definiteness. Most importantly, we show how the derivation and the analyses of
the original EG update and AdaBoost generalize to the non-diagonal case. We apply the resulting
matrix exponentiated gradient (MEG) update and DefiniteBoost to the problem of learning a kernel
matrix from distance measurements.
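The MEG update described in the abstract can be illustrated with a short sketch. The following is a minimal, illustrative implementation (not the authors' code) of one MEG step for the square loss (tr(WX) − y)², assuming a symmetric instance matrix X; the update moves in the matrix-logarithm domain, exponentiates back, and renormalizes to trace one, which keeps the parameter symmetric positive definite:

```python
import numpy as np

def sym_expm(A):
    """Matrix exponential of a symmetric matrix via eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return (V * np.exp(w)) @ V.T

def sym_logm(A):
    """Matrix logarithm of a symmetric positive definite matrix."""
    w, V = np.linalg.eigh(A)
    return (V * np.log(w)) @ V.T

def meg_update(W, X, y, eta):
    """One matrix exponentiated gradient step for the square loss
    (tr(W X) - y)^2, with learning rate eta.

    W : symmetric positive definite matrix with trace one
    X : symmetric instance matrix
    """
    grad = 2.0 * (np.trace(W @ X) - y) * X     # gradient of the loss w.r.t. W
    M = sym_expm(sym_logm(W) - eta * grad)     # update in the log domain
    return M / np.trace(M)                     # renormalize to trace one

# Example: start from the maximally uncertain parameter W = I/d
# (the matrix analogue of the uniform probability vector).
rng = np.random.default_rng(0)
d = 4
A = rng.standard_normal((d, d))
X = (A + A.T) / 2                              # a random symmetric instance
W = np.eye(d) / d
W = meg_update(W, X, y=0.5, eta=0.1)           # W stays SPD with trace one
```

When X and W commute (e.g. both diagonal), the matrix exponential and logarithm act componentwise on the eigenvalues and the step reduces to the ordinary vector EG update, which is the generalization the abstract refers to.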