
Released

Journal Article

A Compression Approach to Support Vector Model Selection

MPG Authors
/persons/resource/persons76237

von Luxburg, U
Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

/persons/resource/persons83824

Bousquet, O
Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

/persons/resource/persons84193

Schölkopf, B
Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

Full texts (restricted access)
No full texts are currently released for your IP range.
Full texts (freely accessible)
No freely accessible full texts are available in PuRe
Supplementary material (freely accessible)
No freely accessible supplementary material is available
Citation

von Luxburg, U., Bousquet, O., & Schölkopf, B. (2004). A Compression Approach to Support Vector Model Selection. The Journal of Machine Learning Research, 5, 293-323.


Citation link: https://hdl.handle.net/11858/00-001M-0000-0013-D96B-E
Abstract
In this paper we investigate connections between statistical learning theory and data compression on the basis of support vector machine (SVM) model selection. Inspired by several generalization bounds, we construct "compression coefficients" for SVMs which measure the amount by which the training labels can be compressed by a code built from the separating hyperplane. The main idea is to relate the coding precision to geometrical concepts such as the width of the margin or the shape of the data in the feature space. The compression coefficients derived in this way combine well-known quantities such as the radius-margin term R^2/rho^2, the eigenvalues of the kernel matrix, and the number of support vectors. To test whether they are useful in practice, we ran model selection experiments on benchmark data sets. We found that compression coefficients can fairly accurately predict the parameters for which the test error is minimized.
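The model-selection idea in the abstract can be illustrated with a short, hedged sketch. The code below is not the authors' compression coefficient: it only scores each candidate kernel parameter with a rough radius-margin proxy (R^2 * ||w||^2 plus the support-vector count) and picks the parameter with the smallest score, whereas the paper's coefficients also involve the kernel-matrix eigenvalues and explicit coding precisions, which are omitted here. The toy dataset, the RBF parameter grid, and the centroid-based approximation of R^2 are illustrative assumptions, and scikit-learn is assumed as the SVM solver.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

# Toy data standing in for the benchmark sets used in the paper.
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

def radius_margin_score(X, y, gamma, C=1.0):
    """Proxy score: R^2/rho^2 plus the number of support vectors."""
    K = rbf_kernel(X, X, gamma=gamma)
    clf = SVC(kernel="precomputed", C=C).fit(K, y)

    # ||w||^2 in feature space from the dual coefficients; for the
    # canonical hyperplane rho = 1/||w||, so R^2/rho^2 = R^2 * ||w||^2.
    sv = clf.support_
    alpha_y = clf.dual_coef_.ravel()            # values are alpha_i * y_i
    w_norm_sq = float(alpha_y @ K[np.ix_(sv, sv)] @ alpha_y)

    # Approximate R^2 by the largest squared distance of any point to the
    # centroid of the data in feature space (a crude surrogate for the
    # radius of the smallest enclosing ball).
    r_sq = float(np.max(np.diag(K) - 2.0 * K.mean(axis=1) + K.mean()))

    return r_sq * w_norm_sq + len(sv)

gammas = np.logspace(-3, 1, 9)                  # illustrative parameter grid
scores = [radius_margin_score(X, y, g) for g in gammas]
print("gamma minimizing the proxy score:", gammas[int(np.argmin(scores))])
```

As in the experiments described above, the selected parameter would then be compared against the one minimizing test error; the sketch only shows the scoring-and-argmin mechanics, not the paper's actual coding scheme.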