Abstract
Recent work shows that support vector machines (SVMs) can be solved efficiently in the primal. This paper follows that line of research and shows how to build a sparse support vector regression (SVR) model in the primal, yielding a scalable, sparse SVR algorithm named SSVR-SRS. Empirical comparisons show that the number of basis functions the proposed algorithm requires to reach an accuracy close to that of SVR is far smaller than the number of support vectors of standard SVR.
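To make the idea concrete, the snippet below sketches primal SVR over a reduced basis set: greedily add one basis function at a time, then refit all coefficients in the primal. This is a minimal illustration, not the authors' SSVR-SRS implementation; it assumes an RBF kernel, substitutes a ridge-style least-squares refit for the paper's primal loss minimization, and uses a matching-pursuit-style selection heuristic. All names (`rbf_kernel`, `sparse_svr`, and so on) are hypothetical.

```python
# Sketch of primal SVR over a reduced basis set. Illustrative only:
# the selection criterion and the least-squares refit are assumptions,
# not the SSVR-SRS procedure from the paper.
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # K[i, j] = exp(-gamma * ||A[i] - B[j]||^2)
    d2 = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

def fit_primal(K_sub, K_reg, y, lam=1e-3):
    # Refit coefficients over the current reduced set by regularized
    # least squares (a stand-in for Newton steps on a differentiable
    # epsilon-insensitive loss).
    n_basis = K_sub.shape[1]
    A = K_sub.T @ K_sub + lam * K_reg + 1e-10 * np.eye(n_basis)
    return np.linalg.solve(A, K_sub.T @ y)

def sparse_svr(X, y, n_basis=20, gamma=1.0, lam=1e-3):
    K = rbf_kernel(X, X, gamma)  # full kernel matrix (n x n)
    selected, residual = [], y.copy()
    for _ in range(n_basis):
        # Greedy step: pick the training point whose kernel column is
        # most correlated with the current residual.
        scores = np.abs(K.T @ residual)
        scores[selected] = -np.inf
        selected.append(int(np.argmax(scores)))
        # Refit all coefficients on the enlarged reduced set.
        K_sub = K[:, selected]
        K_reg = K[np.ix_(selected, selected)]
        beta = fit_primal(K_sub, K_reg, y, lam)
        residual = y - K_sub @ beta
    return np.array(selected), beta

# Toy usage: noisy sine regression with a 20-function expansion.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
idx, beta = sparse_svr(X, y, n_basis=20, gamma=0.5)
pred = rbf_kernel(X, X[idx], 0.5) @ beta
print("RMSE:", np.sqrt(np.mean((pred - y) ** 2)))
```

The key point this sketch shares with the paper's setting is that sparsity is imposed directly, by fixing the expansion to a small selected subset, rather than hoping the dual solution of SVR happens to be sparse.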
Cite this paper
Bo, L., Wang, L., Jiao, L. (2007). Selecting a Reduced Set for Building Sparse Support Vector Regression in the Primal. In: Zhou, Z.H., Li, H., Yang, Q. (eds.) Advances in Knowledge Discovery and Data Mining. PAKDD 2007. Lecture Notes in Computer Science, vol. 4426. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-71701-0_7