Selecting a Reduced Set for Building Sparse Support Vector Regression in the Primal

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 4426)

Abstract

Recent work shows that support vector machines (SVMs) can be solved efficiently in the primal. This paper follows that line of research and shows how to build sparse support vector regression (SVR) in the primal, yielding a scalable, sparse support vector regression algorithm named SSVR-SRS. Empirical comparisons show that the number of basis functions the proposed algorithm requires to achieve accuracy close to that of SVR is far smaller than the number of support vectors of SVR.
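The paper itself is not reproduced on this page, so the following is only a minimal, hypothetical sketch of the general idea the abstract describes: greedily selecting a small reduced set of basis points and refitting a kernel regressor in the primal over that set. The sketch substitutes squared loss for SVR's ε-insensitive loss and scores candidates matching-pursuit style by correlation with the residual; the function names and selection criterion are illustrative assumptions, not the authors' SSVR-SRS.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian RBF kernel matrix between the rows of A and B."""
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

def greedy_sparse_primal_regression(X, y, n_basis=20, lam=1e-3, gamma=1.0):
    """Greedy reduced-set kernel regression solved in the primal.

    Simplifications vs. the paper: squared loss stands in for SVR's
    epsilon-insensitive loss, and candidates are scored by the
    correlation of their kernel column with the current residual,
    which need not match SSVR-SRS's actual selection criterion.
    """
    K_all = rbf_kernel(X, X, gamma)       # column j = basis function k(., x_j)
    selected = []
    residual = y.astype(float).copy()
    beta = np.zeros(0)
    for _ in range(n_basis):
        scores = np.abs(K_all.T @ residual)
        scores[selected] = -np.inf        # never reselect a basis point
        selected.append(int(np.argmax(scores)))
        K = K_all[:, selected]            # n x m design matrix
        Kmm = K_all[np.ix_(selected, selected)]
        # Primal normal equations for
        #   min_beta ||y - K beta||^2 + lam * beta^T Kmm beta
        beta = np.linalg.solve(K.T @ K + lam * Kmm, K.T @ y)
        residual = y - K @ beta
    return np.array(selected), beta

# Toy usage: fit a noisy sine with 15 basis points instead of ~200 SVs.
rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)
idx, beta = greedy_sparse_primal_regression(X, y, n_basis=15, gamma=0.5)
y_hat = rbf_kernel(X, X[idx], gamma=0.5) @ beta
print("RMSE:", np.sqrt(np.mean((y - y_hat) ** 2)))
```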

Author information

Authors: L. Bo, L. Wang, L. Jiao

Editor information

Editors: Zhi-Hua Zhou, Hang Li, Qiang Yang

Copyright information

© 2007 Springer Berlin Heidelberg

About this paper

Cite this paper

Bo, L., Wang, L., Jiao, L. (2007). Selecting a Reduced Set for Building Sparse Support Vector Regression in the Primal. In: Zhou, Z.-H., Li, H., Yang, Q. (eds) Advances in Knowledge Discovery and Data Mining. PAKDD 2007. Lecture Notes in Computer Science (LNAI), vol 4426. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-71701-0_7

  • DOI: https://doi.org/10.1007/978-3-540-71701-0_7

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-71700-3

  • Online ISBN: 978-3-540-71701-0

  • eBook Packages: Computer Science (R0)
