Abstract
While Support Vector Machines (SVMs) with kernels offer great flexibility and prediction performance on many application problems, their practical use is often hindered by the following two problems. Both problems can be traced back to the number of Support Vectors (SVs), which is known to generally grow linearly with the data set size [1]. First, training is slower than for other methods and for linear SVMs, where recent advances in training algorithms have vastly improved training time. Second, the classification rule \(h(x)={\rm sign} \left[\sum^{\#SV}_{i=1} \alpha_i K(x_i, x)\right]\) is too expensive to evaluate in many applications when the number of SVs is large.
This is an extended abstract of an article published in the machine learning journal [3].
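To make the prediction-cost issue concrete, the following is a minimal sketch (not the authors' code; the RBF kernel, its parameter `gamma`, and all data are illustrative assumptions) showing that evaluating \(h(x)\) requires one kernel evaluation per support vector, so prediction cost grows linearly with #SV:

```python
import numpy as np

def rbf_kernel(xi, x, gamma=0.5):
    """RBF kernel K(xi, x) = exp(-gamma * ||xi - x||^2) (illustrative choice)."""
    diff = xi - x
    return np.exp(-gamma * np.dot(diff, diff))

def predict(support_vectors, alphas, x):
    """Kernel SVM decision h(x): sign of the weighted kernel sum over all SVs.
    Cost is O(#SV) kernel evaluations per test point."""
    score = sum(a * rbf_kernel(sv, x) for sv, a in zip(support_vectors, alphas))
    return 1 if score >= 0 else -1

# Toy model with 3 support vectors -> 3 kernel evaluations per prediction.
svs = [np.array([0.0, 0.0]), np.array([1.0, 1.0]), np.array([2.0, 0.0])]
alphas = [1.0, -0.5, 0.8]
label = predict(svs, alphas, np.array([0.5, 0.5]))
```

With thousands of SVs, this per-prediction sum becomes the dominant cost, which is what motivates training sparser kernel classifiers.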
References
[1] Steinwart, I.: Sparseness of support vector machines. JMLR 4, 1071–1105 (2003)
[2] Wu, M., Schölkopf, B., Bakir, G.H.: A direct method for building sparse kernel learning algorithms. JMLR 7, 603–624 (2006)
[3] Joachims, T., Yu, C.-N.J.: Sparse Kernel SVMs via Cutting-Plane Training. Machine Learning (2009), doi:10.1007/s10994-009-5126-6
Copyright information
© 2009 Springer-Verlag Berlin Heidelberg
Cite this paper
Joachims, T., Yu, C.-N.J. (2009). Sparse Kernel SVMs via Cutting-Plane Training. In: Buntine, W., Grobelnik, M., Mladenić, D., Shawe-Taylor, J. (eds) Machine Learning and Knowledge Discovery in Databases. ECML PKDD 2009. Lecture Notes in Computer Science, vol 5781. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-04180-8_8
DOI: https://doi.org/10.1007/978-3-642-04180-8_8
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-04179-2
Online ISBN: 978-3-642-04180-8