Abstract
Least Squares Proximal Support Vector Regression (LSPSVR) requires only a single matrix inversion to obtain the Lagrange multipliers, as opposed to solving a Quadratic Programming Problem (QPP) as in the conventional SVM formulation. However, like other least-squares-based methods, LSPSVR lacks sparseness: most of the Lagrange multipliers are non-zero, so determining the regressor requires a large number of data points. A large zero norm of the vector of Lagrange multipliers inevitably leads to a large kernel matrix, which is unsuitable for fast regression on large datasets. This paper shows how the LSPSVR formulation may be recast into one that also minimizes the zero norm of the vector of Lagrange multipliers, thereby imposing sparseness. Experimental results on benchmark data show that the number of support vectors can be reduced significantly without a concomitant increase in error.
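The paper's exact LSPSVR formulation is not reproduced on this page, so the following is only a rough sketch of the single-linear-solve idea the abstract describes: it implements the classical LS-SVR dual system of Suykens (one linear solve in place of a QPP), followed by a naive prune-and-refit pass standing in for the paper's zero-norm minimization. The function names, kernel choice, parameters, and the pruning heuristic are all assumptions for illustration, not the authors' method.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian (RBF) kernel matrix between row-sample matrices A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def ls_svr_fit(X, y, C=10.0, gamma=1.0):
    """One linear solve (no QPP) for the classical LS-SVR dual system:
        [K + I/C  1] [alpha]   [y]
        [  1^T    0] [  b  ] = [0]
    Used here only as a stand-in for the paper's LSPSVR solve."""
    n = len(y)
    K = rbf_kernel(X, X, gamma)
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = K + np.eye(n) / C
    A[:n, n] = A[n, :n] = 1.0
    sol = np.linalg.solve(A, np.append(y, 0.0))
    return sol[:n], sol[n]          # alpha (dense: all n entries non-zero), bias b

def predict(Xtr, alpha, b, Xte, gamma=1.0):
    return rbf_kernel(Xte, Xtr, gamma) @ alpha + b

def sparsify(X, y, C=10.0, gamma=1.0, keep=0.3):
    """Naive sparsification heuristic (NOT the paper's zero-norm recast):
    retain only the fraction `keep` of points with the largest |alpha|
    and refit on that reduced support set."""
    alpha, _ = ls_svr_fit(X, y, C, gamma)
    m = max(2, int(keep * len(y)))
    idx = np.argsort(-np.abs(alpha))[:m]   # indices of retained support vectors
    a_s, b_s = ls_svr_fit(X[idx], y[idx], C, gamma)
    return idx, a_s, b_s
```

The dense solve yields as many non-zero multipliers as training points, which is the lack of sparseness the abstract refers to; the pruning pass illustrates the goal (fewer support vectors at comparable error) without replicating the zero-norm formulation itself.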
Copyright information
© 2009 Springer-Verlag Berlin Heidelberg
Cite this paper
Jayadeva, Shah, S., Chandra, S. (2009). Zero Norm Least Squares Proximal SVR. In: Chaudhury, S., Mitra, S., Murthy, C.A., Sastry, P.S., Pal, S.K. (eds) Pattern Recognition and Machine Intelligence. PReMI 2009. Lecture Notes in Computer Science, vol 5909. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-11164-8_7
Print ISBN: 978-3-642-11163-1
Online ISBN: 978-3-642-11164-8