Computational Optimization and Applications, Volume 47, Issue 3, pp. 431–453

Using an iterative linear solver in an interior-point method for generating support vector machines

Open Access


This paper concerns the generation of support vector machine classifiers for solving the pattern recognition problem in machine learning. A method is proposed based on interior-point methods for convex quadratic programming. At each iteration, this interior-point method computes its step by solving a linear system with a preconditioned conjugate gradient method, using a novel preconditioner. An implementation is developed by adapting the object-oriented package OOQP to the problem structure. Numerical results are provided, and computational experience is discussed.
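The abstract describes solving the linear systems inside an interior-point iteration with a preconditioned conjugate gradient (PCG) method rather than a direct factorization. A minimal sketch of that idea is below. It is not the paper's implementation: the factor `V`, the diagonal `d` (standing in for the barrier terms), and the Jacobi preconditioner are illustrative assumptions; the system `(D + V Vᵀ) z = b` mimics the diagonal-plus-low-rank structure that arises when a linear-kernel SVM QP is solved by an interior-point method, and the matrix-vector product never forms the dense n × n matrix.

```python
import numpy as np

def pcg(matvec, b, precond, tol=1e-8, maxiter=200):
    """Preconditioned conjugate gradient for an SPD system, matrix-free."""
    x = np.zeros_like(b)
    r = b - matvec(x)
    z = precond(r)
    p = z.copy()
    rz = r @ z
    for _ in range(maxiter):
        Ap = matvec(p)
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = precond(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

rng = np.random.default_rng(0)
n, k = 500, 10                     # n training points, rank-k factor
V = rng.standard_normal((n, k))    # e.g. diag(y) @ X for a linear kernel
d = 1.0 + rng.random(n)            # positive diagonal from the barrier terms

# (D + V V^T) u without ever forming the n x n matrix
matvec = lambda u: d * u + V @ (V.T @ u)
# Jacobi (diagonal) preconditioner: the diagonal of D + V V^T
jacobi = 1.0 / (d + np.einsum('ij,ij->i', V, V))
precond = lambda r: jacobi * r

b = rng.standard_normal(n)
z = pcg(matvec, b, precond)
print(np.linalg.norm(matvec(z) - b))  # residual should be small
```

Because the system is a positive diagonal plus a rank-k term, its eigenvalues cluster and CG converges in far fewer than n iterations; this is the kind of structure a purpose-built preconditioner, like the one the paper proposes, can exploit further.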


Keywords: Machine learning · Support vector machines · Quadratic programming · Interior-point methods · Krylov-space methods · Matrix-free preconditioning



Copyright information

© The Author(s) 2009

Authors and Affiliations

  1. University of Wisconsin, Madison, USA
  2. Computational Sciences and Mathematical Research Division, Sandia National Laboratories, Livermore, USA
