
Zero Norm Least Squares Proximal SVR

  • Jayadeva
  • Sameena Shah
  • Suresh Chandra
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5909)

Abstract

Least Squares Proximal Support Vector Regression (LSPSVR) requires only a single matrix inversion to obtain the Lagrange multipliers, as opposed to solving a Quadratic Programming Problem (QPP) as in the conventional SVM optimization problem. However, like other least squares based methods, LSPSVR suffers from a lack of sparseness: most of the Lagrange multipliers are non-zero, so evaluating the regression hyperplane requires a large number of data points. A large zero norm of the vector of Lagrange multipliers (i.e., many non-zero entries) inevitably leads to a large kernel expansion, which is unsuitable for fast regression on large datasets. This paper shows how the LSPSVR formulation may be recast into one that also minimizes the zero norm of the vector of Lagrange multipliers, and in effect imposes sparseness. Experimental results on benchmark data show that a significant decrease in the number of support vectors can be achieved without a concomitant increase in error.
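
The two ingredients described above can be sketched concretely. The snippet below is a minimal illustration, not the paper's formulation: it assumes the standard least-squares SVR saddle-point system (a single linear solve for the bias and multipliers) as a stand-in for LSPSVR, and it approximates zero-norm minimization with a multiplicative reweighting loop in the spirit of Weston et al., pruning points whose multipliers collapse toward zero. The RBF kernel, all function names, and all parameter values are illustrative assumptions.

    # A minimal sketch (assumptions noted above): standard LS-SVR solved with one
    # linear system, then a multiplicative-reweighting loop that approximates
    # zero-norm minimization by pruning near-zero multipliers.
    import numpy as np

    def rbf_kernel(A, B, gamma=1.0):
        """Gaussian (RBF) kernel matrix between the rows of A and B."""
        sq = np.sum(A ** 2, 1)[:, None] + np.sum(B ** 2, 1)[None, :] - 2.0 * A @ B.T
        return np.exp(-gamma * sq)

    def lssvr_fit(X, y, C=10.0, gamma=1.0):
        """Dense LS-SVR: one (n+1)x(n+1) solve gives the bias and all multipliers."""
        n = len(y)
        K = rbf_kernel(X, X, gamma)
        A = np.zeros((n + 1, n + 1))
        A[0, 1:] = 1.0                      # [ 0   1^T     ] [b    ]   [0]
        A[1:, 0] = 1.0                      # [ 1   K + I/C ] [alpha] = [y]
        A[1:, 1:] = K + np.eye(n) / C
        sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
        return sol[0], sol[1:]              # bias, Lagrange multipliers

    def sparse_fit(X, y, C=10.0, gamma=1.0, iters=15, tol=1e-6):
        """Approximate zero-norm minimization over the kernel expansion:
        scale each kernel column by a weight, refit by regularized least squares,
        multiply the weights by |beta|, and drop columns whose weight collapses."""
        n = len(y)
        active = np.arange(n)               # indices of surviving support vectors
        z = np.ones(n)                      # multiplicative weights

        def fit(cols, w):
            Phi = np.hstack([rbf_kernel(X, X[cols], gamma) * w[None, :],
                             np.ones((n, 1))])           # last column carries the bias
            reg = np.eye(Phi.shape[1]) / C
            reg[-1, -1] = 0.0                            # bias is not penalized
            return np.linalg.solve(Phi.T @ Phi + reg, Phi.T @ y)

        for _ in range(iters):
            beta = fit(active, z)[:-1]
            z = z * np.abs(beta)                         # shrink small multipliers
            keep = z > tol
            active, z = active[keep], z[keep]
        theta = fit(active, np.ones(len(active)))        # final unweighted refit
        return active, theta[:-1], theta[-1]             # support set, alpha, bias

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        X = np.sort(rng.uniform(-3.0, 3.0, (120, 1)), axis=0)
        y = np.sinc(X[:, 0]) + 0.05 * rng.standard_normal(120)
        _, dense_alpha = lssvr_fit(X, y)
        support, alpha, b = sparse_fit(X, y)
        print("non-zero multipliers, dense LS-SVR:", int(np.sum(np.abs(dense_alpha) > 1e-6)))
        print("non-zero multipliers, sparsified  :", len(support))

The demo simply compares the number of non-zero multipliers before and after the reweighting pass; the benchmark results reported in the paper are obtained with its own zero-norm LSPSVR formulation, not with this sketch.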

Keywords

SVR, sparse representation, zero-norm, proximal, least squares

Copyright information

© Springer-Verlag Berlin Heidelberg 2009

Authors and Affiliations

  • Jayadeva (1)
  • Sameena Shah (1)
  • Suresh Chandra (2)

  1. Dept. of Electrical Engineering, Indian Institute of Technology, New Delhi, India
  2. Dept. of Mathematics, Indian Institute of Technology, New Delhi, India
