Sparse Kernel SVMs via Cutting-Plane Training

  • Thorsten Joachims
  • Chun-Nam John Yu
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5781)


While Support Vector Machines (SVMs) with kernels offer great flexibility and prediction performance on many application problems, their practical use is often hindered by the following two problems. Both problems can be traced back to the number of Support Vectors (SVs), which is known to generally grow linearly with the data set size [1]. First, training is slower than for other methods and for linear SVMs, where recent advances in training algorithms vastly improved training time. Second, since the prediction rule takes the form \(h(x)={\rm sign} \left[\sum^{\#SV}_{i=1} \alpha_iK(x_i, x)\right]\), it is too expensive to evaluate in many applications when the number of SVs is large.
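The cost issue in the prediction rule above can be seen directly in code: every evaluation of \(h(x)\) requires one kernel computation per support vector. The following is a minimal sketch of such a kernel expansion, assuming an RBF kernel for concreteness (the rule holds for any kernel \(K\)); the function names and toy data are illustrative, not from the paper.

```python
import math

def rbf_kernel(x, z, gamma=0.5):
    # K(x, z) = exp(-gamma * ||x - z||^2), one common kernel choice
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, z)))

def predict(x, support_vectors, alphas, gamma=0.5):
    # h(x) = sign( sum_i alpha_i * K(x_i, x) )
    # Note: cost is linear in the number of support vectors,
    # which is the expense the paper aims to reduce.
    score = sum(a * rbf_kernel(sv, x, gamma)
                for sv, a in zip(support_vectors, alphas))
    return 1 if score >= 0 else -1

# Toy model: two support vectors with opposite coefficients.
svs = [[0.0, 0.0], [2.0, 2.0]]
alphas = [1.0, -1.0]
print(predict([0.1, 0.1], svs, alphas))  # near the positive SV -> 1
print(predict([1.9, 1.9], svs, alphas))  # near the negative SV -> -1
```

Since the loop runs over all SVs, a model with tens of thousands of support vectors pays that full cost on every single prediction, which motivates training sparser kernel expansions.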


References

  1. Steinwart, I.: Sparseness of support vector machines. JMLR 4, 1071–1105 (2003)
  2. Wu, M., Schölkopf, B., Bakir, G.H.: A direct method for building sparse kernel learning algorithms. JMLR 7, 603–624 (2006)
  3. Joachims, T., Yu, C.-N.J.: Sparse Kernel SVMs via Cutting-Plane Training. Machine Learning (2009), doi:10.1007/s10994-009-5126-6

Copyright information

© Springer-Verlag Berlin Heidelberg 2009

Authors and Affiliations

  • Thorsten Joachims
  • Chun-Nam John Yu
  1. Dept. of Computer Science, Cornell University, Ithaca, USA
