
Released

Conference Paper

Training and Approximation of a Primal Multiclass Support Vector Machine

MPG Authors
/persons/resource/persons84331

Zien, A
Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;
Friedrich Miescher Laboratory, Max Planck Society;

/persons/resource/persons84118

Ong, CS
Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;
Friedrich Miescher Laboratory, Max Planck Society;

Full texts (restricted access)
No full texts are currently released for your IP range.
Full texts (freely accessible)
There are no freely accessible full texts available in PuRe
Supplementary material (freely accessible)
There are no freely accessible supplementary materials available
Citation

Zien, A., Bona, F., & Ong, C. (2007). Training and Approximation of a Primal Multiclass Support Vector Machine. In C. Skiadas (Ed.), XIIth International Conference on Applied Stochastic Models and Data Analysis (ASMDA 2007) (pp. 1-8).


Citation link: https://hdl.handle.net/11858/00-001M-0000-0013-CD6D-7
Abstract
We revisit the multiclass support vector machine (SVM) and generalize the formulation to convex loss functions and joint feature maps. Motivated by recent work [Chapelle, 2006], we use the logistic loss and softmax to enable gradient-based primal optimization. Kernels are incorporated via kernel principal component analysis (KPCA), which naturally leads to approximation methods for large-scale problems. We investigate similarities and differences to previous multiclass SVM approaches. Experimental comparisons to previous approaches and to the popular one-vs-rest SVM are presented on several datasets.
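The gradient-based primal training that the abstract refers to can be illustrated with a minimal sketch: a linear multiclass model trained in the primal with the softmax/logistic (cross-entropy) loss and L2 regularization, optimized by plain gradient descent. This is a toy illustration under simplifying assumptions (linear feature map instead of KPCA features, synthetic data, all names and hyperparameters hypothetical), not the authors' implementation.

```python
import numpy as np

def softmax(scores):
    # Row-wise softmax with the usual max-shift for numerical stability.
    z = scores - scores.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train_primal(X, y, n_classes, reg=1e-2, lr=0.5, n_iter=500):
    # Primal gradient descent on L2-regularized softmax cross-entropy.
    n, d = X.shape
    W = np.zeros((d, n_classes))
    Y = np.eye(n_classes)[y]              # one-hot targets, shape (n, k)
    for _ in range(n_iter):
        P = softmax(X @ W)                # predicted class probabilities
        grad = X.T @ (P - Y) / n + reg * W
        W -= lr * grad
    return W

# Toy 3-class problem: well-separated Gaussian blobs.
rng = np.random.default_rng(0)
centers = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])
X = np.vstack([rng.normal(c, 0.5, size=(50, 2)) for c in centers])
y = np.repeat(np.arange(3), 50)

# Standardize features and append a constant column as a bias term.
X = (X - X.mean(axis=0)) / X.std(axis=0)
Xb = np.hstack([X, np.ones((len(X), 1))])

W = train_primal(Xb, y, n_classes=3)
pred = np.argmax(Xb @ W, axis=1)
print(f"training accuracy: {(pred == y).mean():.2f}")
```

In the paper's setting, the raw inputs would be replaced by (approximate) KPCA features, which keeps the optimization in a finite-dimensional primal while still incorporating a kernel.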