
Released

Journal Article

Analysis of fixed-point and coordinate descent algorithms for regularized kernel methods

MPS-Authors

Dinuzzo,  F.
Dept. Empirical Inference, Max Planck Institute for Intelligent Systems, Max Planck Society

Citation

Dinuzzo, F. (2011). Analysis of fixed-point and coordinate descent algorithms for regularized kernel methods. IEEE Transactions on Neural Networks, 22(10), 1576-1587. doi:10.1109/TNN.2011.2164096.


Cite as: https://hdl.handle.net/11858/00-001M-0000-0010-4C2E-1
Abstract
In this paper, we analyze the convergence of two general classes of optimization algorithms for regularized kernel methods with a convex loss function and quadratic norm regularization. The first methodology is a new class of algorithms based on fixed-point iterations that are well suited to parallel implementation and can be used with any convex loss function. The second methodology is based on coordinate descent and generalizes techniques previously proposed for linear support vector machines; it exploits the structure of additively separable loss functions to compute the solutions of line searches in closed form. Both methodologies are easy to implement. We also show how to remove the non-differentiability of the objective functional by exactly reformulating a convex regularization problem as an unconstrained differentiable stabilization problem.
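
To make the two methodologies concrete, the following is a minimal Python sketch of both ideas for the squared loss, not the paper's own pseudocode: a damped fixed-point iteration derived from the stationarity condition c = -L'(y, Kc) / (2λ) on the kernel expansion coefficients, and a cyclic coordinate descent loop whose one-dimensional line searches have closed-form solutions because the loss is additively separable. The function names, the damping rule, the stopping criterion, and the sample data are illustrative assumptions.

```python
import numpy as np

def fixed_point_fit(K, y, lam, loss_grad, step, tol=1e-8, max_iter=5000):
    """Damped fixed-point iteration for
        min_c  sum_i L(y_i, (K c)_i) + lam * c' K c.
    Stationarity yields the map c = -loss_grad(y, K c) / (2 * lam);
    `step` is an illustrative relaxation factor, not taken from the paper."""
    c = np.zeros(len(y))
    for _ in range(max_iter):
        c_new = (1.0 - step) * c - step * loss_grad(y, K @ c) / (2.0 * lam)
        if np.linalg.norm(c_new - c) <= tol * (1.0 + np.linalg.norm(c)):
            return c_new
        c = c_new
    return c

def coordinate_descent_fit(K, y, lam, sweeps=200):
    """Cyclic coordinate descent for the squared loss,
        min_c  ||y - K c||^2 + lam * c' K c,
    where each line search along a coordinate is solved in closed form."""
    n = len(y)
    c, Kc = np.zeros(n), np.zeros(n)
    for _ in range(sweeps):
        for i in range(n):
            k_i = K[:, i]
            # Exact minimizer of the objective along coordinate i.
            delta = (k_i @ (y - Kc) - lam * Kc[i]) / (k_i @ k_i + lam * K[i, i])
            c[i] += delta
            Kc += delta * k_i          # keep the model outputs K c up to date
    return c

def squared_loss_grad(y, z):
    """Derivative of L(y, z) = (y - z)^2 with respect to z."""
    return -2.0 * (y - z)

# Toy problem with a Gaussian kernel (illustrative data, not from the paper).
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))
K = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
y = np.sin(X[:, 0])
lam = 0.1

# Choose the damping so the fixed-point map contracts on this problem.
step = 1.0 / (1.0 + np.linalg.eigvalsh(K).max() / lam)
c_fp = fixed_point_fit(K, y, lam, squared_loss_grad, step)
c_cd = coordinate_descent_fit(K, y, lam)
```

For the squared loss, both routines approximate the solution of (K + λI)c = y. Note that the update in fixed_point_fit is computed from the current c as a whole, so all coordinates can be evaluated independently, which is what makes the fixed-point scheme amenable to parallel implementation.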