Abstract:
Convex learning algorithms, such as Support Vector Machines (SVMs), are
often seen as highly desirable because they offer strong practical
properties and are amenable to theoretical analysis. However, in this work
we show how non-convexity can provide scalability advantages over
convexity. We show how concave-convex programming can be applied to produce
(i) faster SVMs where training errors are no longer support vectors, and
(ii) much faster Transductive SVMs.
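
The concave-convex programming (CCCP) idea behind result (i) can be illustrated with the ramp loss, which is a difference of two hinge functions: R_s(z) = H_1(z) - H_s(z) with H_a(z) = max(0, a - z). Below is a minimal, hypothetical sketch (not the paper's implementation): each outer CCCP step linearizes the concave part -H_s at the current weights, leaving a convex hinge problem that is solved here by plain subgradient descent on a linear model. All function and parameter names are illustrative assumptions.

```python
import numpy as np

def cccp_ramp_svm(X, y, lam=0.1, s=-1.0, outer=5, inner=200, lr=0.05):
    """Hypothetical CCCP sketch for a ramp-loss linear SVM.

    Ramp loss: R_s(z) = H_1(z) - H_s(z), with hinge H_a(z) = max(0, a - z).
    Each outer iteration fixes beta_i = 1 for examples whose margin falls
    below s (the concave part is active there) and linearizes -H_s, so the
    inner problem is convex and solved by subgradient descent.
    """
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(outer):
        # Linearize the concave part at the current w: beta_i marks
        # examples with margin y_i <w, x_i> below s.
        beta = (y * (X @ w) < s).astype(float)
        for _ in range(inner):
            margins = y * (X @ w)
            g = lam * w                       # gradient of (lam/2)||w||^2
            active = margins < 1.0            # hinge H_1 subgradient support
            g -= (active * y) @ X / n         # subgradient of mean hinge loss
            g += (beta * y) @ X / n           # gradient of linearized -H_s term
            w -= lr * g
    return w
```

Examples driven past the margin s are excluded from the convex part at the next iteration, which is why, as the abstract states, training errors need no longer become support vectors.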