Abstract:
In this paper we present a primal-dual decomposition algorithm for
support vector machine training. As with existing methods that use
very small working sets (such as Sequential Minimal
Optimization (SMO), Successive Over-Relaxation (SOR) or
the Kernel Adatron (KA)), our method scales well, is
straightforward to implement, and does not require an external QP
solver. Unlike SMO, SOR, and KA, however, the method is applicable to a
wide range of SVM formulations, regardless of the number of
equality constraints involved. The effectiveness of the algorithm
is demonstrated on an SVM variant that is particularly difficult in
this respect, namely semi-parametric support vector regression.
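To make the difficulty concrete, here is a hedged sketch of where the extra equality constraints arise; the abstract does not spell out the formulation, so the notation below assumes the standard semi-parametric SVR setup, in which the kernel expansion is augmented with $m$ parametric basis functions $\phi_j$:

$$f(x) = \sum_{i=1}^{n} (\alpha_i - \alpha_i^*)\, k(x_i, x) \;+\; \sum_{j=1}^{m} \beta_j\, \phi_j(x),$$

whose dual problem carries one equality constraint per basis function,

$$\sum_{i=1}^{n} (\alpha_i - \alpha_i^*)\, \phi_j(x_i) = 0, \qquad j = 1, \dots, m.$$

Standard SVR corresponds to the single basis function $\phi_1 \equiv 1$, giving the familiar lone constraint $\sum_i (\alpha_i - \alpha_i^*) = 0$; working-set methods such as SMO exploit that single constraint directly and do not extend naturally to $m > 1$.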