Abstract:
We revisit the multiclass support vector machine (SVM) and generalize
the formulation to convex loss functions and joint feature maps. Motivated by
recent work [Chapelle, 2006], we use the logistic loss and softmax to enable gradient-based
primal optimization. Kernels are incorporated via kernel principal component
analysis (KPCA), which naturally leads to approximation methods for large-scale
problems. We investigate similarities to and differences from previous multiclass SVM
approaches. Experimental comparisons to previous approaches and to the popular
one-vs-rest SVM are presented on several datasets.
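The pipeline sketched in the abstract can be illustrated with a minimal example: compute an explicit feature map from kernel PCA, then minimize a regularized softmax (multiclass logistic) loss by plain gradient descent in the primal. This is a hedged sketch under standard textbook definitions of KPCA and the softmax loss, not the paper's actual implementation; all function names, hyperparameters, and the toy data are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    # Gaussian RBF kernel matrix between rows of X and rows of Y (assumed kernel choice)
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kpca_features(X, n_components=10, gamma=0.5):
    # Kernel PCA: project data onto the top eigenvectors of the centered kernel matrix,
    # yielding an explicit finite-dimensional feature map for primal optimization.
    K = rbf_kernel(X, X, gamma)
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    Kc = H @ K @ H
    w, V = np.linalg.eigh(Kc)
    idx = np.argsort(w)[::-1][:n_components]     # keep leading components
    w, V = w[idx], V[:, idx]
    return Kc @ (V / np.sqrt(np.maximum(w, 1e-12)))

def softmax(Z):
    # Numerically stable softmax over class scores
    Z = Z - Z.max(axis=1, keepdims=True)
    E = np.exp(Z)
    return E / E.sum(axis=1, keepdims=True)

def train_softmax(Phi, y, n_classes, lam=1e-2, lr=0.5, steps=500):
    # Gradient descent on the L2-regularized multiclass logistic (softmax) loss
    n, d = Phi.shape
    W = np.zeros((d, n_classes))
    Y = np.eye(n_classes)[y]                     # one-hot labels
    for _ in range(steps):
        P = softmax(Phi @ W)
        grad = Phi.T @ (P - Y) / n + lam * W     # exact gradient of the smooth objective
        W -= lr * grad
    return W

# Toy usage: three well-separated Gaussian blobs
rng = np.random.default_rng(0)
centers = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])
X = np.vstack([c + 0.5 * rng.standard_normal((30, 2)) for c in centers])
y = np.repeat(np.arange(3), 30)

Phi = kpca_features(X, n_components=10)
W = train_softmax(Phi, y, n_classes=3)
acc = (softmax(Phi @ W).argmax(axis=1) == y).mean()
```

Because the KPCA map is explicit, the smooth softmax objective can be optimized with any first-order method, and truncating the number of components gives the large-scale approximation the abstract alludes to.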