Record

Released

Conference Paper

Fast Kernel ICA using an Approximate Newton Method

MPG Authors
/persons/resource/persons83994

Jegelka, S
Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

/persons/resource/persons83946

Gretton, A
Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

External Resources
Full texts (freely accessible)
No freely accessible full texts are available in PuRe.
Supplementary material (freely accessible)
No freely accessible supplementary materials are available.
Citation

Shen, H., Jegelka, S., & Gretton, A. (2007). Fast Kernel ICA using an Approximate Newton Method. In M. Meila, & X. Shen (Eds.), Artificial Intelligence and Statistics, 21-24 March 2007, San Juan, Puerto Rico (pp. 476-483). Madison, WI, USA: International Machine Learning Society.


Citation link: https://hdl.handle.net/11858/00-001M-0000-0013-CE7B-0
Abstract
Recent approaches to independent component analysis (ICA) have used
kernel independence measures to obtain very good performance,
particularly where classical methods experience difficulty (for
instance, sources with near-zero kurtosis). We present Fast Kernel ICA
(FastKICA), a novel optimisation technique for one such kernel
independence measure, the Hilbert-Schmidt independence criterion
(HSIC). Our search procedure uses an approximate Newton method on the
special orthogonal group, where we estimate the Hessian locally about
independence. We employ incomplete Cholesky decomposition to
efficiently compute the gradient and approximate Hessian. FastKICA results in more accurate solutions at a given cost
compared with gradient descent, and is relatively insensitive to local minima
when initialised far from independence. These properties allow kernel approaches to be
extended to problems with larger numbers of sources and observations.
Our method is competitive with other modern and classical ICA
approaches in both speed and accuracy.