
PAL-Hom method for QP and an application to LP

  • Guoqiang Wang
  • Bo Yu

Abstract

In this paper, a proximal augmented Lagrangian homotopy (PAL-Hom) method for solving convex quadratic programming problems is proposed. The method uses the proximal augmented Lagrangian method as the outer iteration. To solve the proximal augmented Lagrangian subproblems, a homotopy method is presented as the inner iteration. The homotopy method tracks the piecewise-linear solution path of a parametric quadratic programming problem whose start problem has a known approximate solution as its exact solution and whose target problem is the subproblem to be solved. To improve the performance of the homotopy method, the accelerated proximal gradient method is used to obtain a sufficiently good approximate solution, which in turn yields a good prediction of the optimal active set. Moreover, a sorting technique for the Cholesky factor update, as well as an \(\varepsilon \)-relaxation technique for checking primal-dual feasibility and correcting the active sets, is presented to improve the efficiency and robustness of the homotopy method. In addition, a proximal-point-based AL-Hom method, which is shown to converge in a finite number of steps, is applied to linear programming. Numerical experiments on randomly generated problems, problems from the CUTEr and Netlib test collections, support vector machines (SVMs) and contact problems of elasticity demonstrate that PAL-Hom is faster than active-set and parametric active-set methods and is competitive with interior-point methods and with specialized algorithms designed for specific models (e.g., the sequential minimal optimization method for SVMs).
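To make the structure of the outer iteration concrete, the following is a minimal sketch of a generic proximal augmented Lagrangian step (in the spirit of Rockafellar's scheme cited in the references) for a convex quadratic program in the illustrative standard form \(\min_x \tfrac{1}{2}x^\top Qx + c^\top x\) subject to \(Ax = b\), \(x \ge 0\). This standard form and the parameters \(\sigma\) (penalty) and \(\tau\) (proximal weight) are assumptions made here for illustration; they are not the exact formulation or update rules of the paper:

\[
x^{k+1} \approx \arg\min_{x \ge 0} \; \tfrac{1}{2}x^\top Qx + c^\top x + (\lambda^k)^\top (Ax - b) + \tfrac{\sigma}{2}\,\|Ax - b\|^2 + \tfrac{1}{2\tau}\,\|x - x^k\|^2,
\qquad
\lambda^{k+1} = \lambda^k + \sigma\,(Ax^{k+1} - b).
\]

Each such subproblem is itself a strongly convex, bound-constrained quadratic program; in the abstract's terms, this is the subproblem that the inner homotopy method solves by tracking the piecewise-linear solution path from a start problem whose exact solution is the approximate solution supplied by the accelerated proximal gradient method.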

Keywords

Convex quadratic programming · Linear programming · Proximal point method · Augmented Lagrangian method · Homotopy

Mathematics Subject Classification

90C05 · 90C20

Notes

Acknowledgements

The authors would like to thank Xiaoliang Song (School of Mathematical Sciences, Dalian University of Technology) for his valuable suggestions, which led to improvements in this paper. This research was supported by the National Natural Science Foundation of China (Grant Nos. 11571061, 11401075 and 11701065) and the Fundamental Research Funds for the Central Universities (Grant Nos. DUT16LK05 and DUT17LK14).

References

  1. Averick, B.M., Carter, R.G., Xue, G.L., Moré, J.J.: The MINPACK-2 test problem collection. Technical Report, Argonne National Laboratory, IL, USA (1992)
  2. Bertsekas, D.P.: Nonlinear Programming. Athena Scientific, Belmont (1999)
  3. Best, M.J.: An algorithm for the solution of the parametric quadratic programming problem. CORR 82-14, Department of Combinatorics and Optimization, University of Waterloo, Canada (1982)
  4. Best, M.J.: An Algorithm for the Solution of the Parametric Quadratic Programming Problem. Springer, Berlin (1996)
  5. Bongartz, I., Conn, A.R., Gould, N., Toint, P.L.: CUTE: constrained and unconstrained testing environment. ACM Trans. Math. Softw. (TOMS) 21(1), 123–160 (1995)
  6. Buys, J.D.: Dual Algorithms for Constrained Optimization Problems. Brondder-Offset, Rotterdam (1972)
  7. Chang, C.C., Lin, C.J.: LIBSVM: a library for support vector machines. ACM Trans. Intell. Syst. Technol. 2(3), 27:1–27:27 (2011). http://www.csie.ntu.edu.tw/~cjlin/libsvm
  8. Conn, A.R., Gould, N.I.M., Toint, P.L.: A globally convergent augmented Lagrangian algorithm for optimization with general constraints and simple bounds. SIAM J. Numer. Anal. 28(2), 545–572 (1991)
  9. Conn, A.R., Gould, N.I.M., Toint, P.L.: LANCELOT: A Fortran Package for Large-Scale Nonlinear Optimization (Release A). Springer, Berlin (2013)
  10. Cornuejols, G., Tütüncü, R.: Optimization Methods in Finance. Cambridge University Press, Cambridge (2006)
  11. Dostál, Z., Friedlander, A., Santos, S.A.: Augmented Lagrangians with adaptive precision control for quadratic programming with simple bounds and equality constraints. SIAM J. Optim. 13(4), 1120–1140 (2003)
  12. Dostál, Z., Gomes, F.A.M., Santos, S.A.: Duality-based domain decomposition with natural coarse-space for variational inequalities. J. Comput. Appl. Math. 126(1–2), 397–415 (2000)
  13. Dostál, Z., Gomes, F.A.M., Santos, S.A.: Solution of contact problems by FETI domain decomposition with natural coarse space projections. Comput. Methods Appl. Mech. Eng. 190(13–14), 1611–1627 (2000)
  14. Fan, R.E., Chen, P.H., Lin, C.J.: Working set selection using second order information for training support vector machines. J. Mach. Learn. Res. 6(Dec), 1889–1918 (2005)
  15. Ferreau, H.J.: An Online Active Set Strategy for Fast Solution of Parametric Quadratic Programs with Applications to Predictive Engine Control. University of Heidelberg, Heidelberg (2006)
  16. Ferreau, H.J., Bock, H.G., Diehl, M.: An online active set strategy to overcome the limitations of explicit MPC. Int. J. Robust Nonlinear Control 18(8), 816–830 (2008)
  17. Ferreau, H.J., Kirches, C., Potschka, A., Bock, H.G., Diehl, M.: qpOASES: a parametric active-set algorithm for quadratic programming. Math. Progr. Comput. 6(4), 327–363 (2014)
  18. Fletcher, R.: A general quadratic programming algorithm. J. Inst. Math. Appl. 7(1), 76–91 (1971)
  19. Fletcher, R.: Stable reduced Hessian updates for indefinite quadratic programming. Math. Progr. 87(2), 251–264 (2000)
  20. Forsgren, A., Gill, P.E., Wong, E.: Primal and dual active-set methods for convex quadratic programming. Math. Progr. 159(1–2), 469–508 (2016)
  21. Gay, D.M.: Electronic mail distribution of linear programming test problems. Math. Progr. Soc. COAL Newsl. 13, 10–12 (1985)
  22. Gill, P.E., Murray, W., Saunders, M.A.: User's Guide for QPOPT 1.0: A Fortran Package for Quadratic Programming. Technical Report SOL 95-4, Systems Optimization Laboratory, Dept. of Operations Research, Stanford University (1995)
  23. Gill, P.E., Murray, W., Saunders, M.A.: User's Guide for SNOPT Version 7: Software for Large-Scale Linear and Quadratic Programming. Report NA 05-2, Department of Mathematics, University of California, San Diego (2008)
  24. Gill, P.E., Murray, W., Saunders, M.A., Tomlin, J.A., Wright, M.H.: On projected Newton barrier methods for linear programming and an equivalence to Karmarkar's projective method. Math. Progr. 36(2), 183–209 (1986)
  25. Gill, P.E., Wong, E.: Methods for convex and general quadratic programming. Math. Progr. Comput. 7(1), 71–112 (2015)
  26. Gould, N.I.: An algorithm for large-scale quadratic programming. IMA J. Numer. Anal. 11(3), 299–324 (1991)
  27. Hager, W.W., Zhang, H.: A new active set algorithm for box constrained optimization. SIAM J. Optim. 17(2), 526–557 (2006)
  28. Hestenes, M.R.: Multiplier and gradient methods. J. Optim. Theory Appl. 4(5), 303–320 (1969)
  29. Karmarkar, N.: A new polynomial-time algorithm for linear programming. In: Proceedings of the Sixteenth Annual ACM Symposium on Theory of Computing, pp. 302–311. ACM (1984)
  30. Lichman, M.: UCI machine learning repository (2013). http://archive.ics.uci.edu/ml
  31. Lin, C.J., Moré, J.J.: Newton's method for large bound-constrained optimization problems. SIAM J. Optim. 9(4), 1100–1127 (1999)
  32. Mangasarian, O.L.: Iterative solution of linear programs. SIAM J. Numer. Anal. 18(4), 606–614 (1981)
  33. Mangasarian, O.L., Meyer, R.R.: Nonlinear perturbation of linear programs. SIAM J. Control Optim. 17(6), 745–752 (1979)
  34. Mehrotra, S.: On the implementation of a primal-dual interior point method. SIAM J. Optim. 2(4), 575–601 (1992)
  35. Nesterov, Y.: Smooth minimization of non-smooth functions. Math. Progr. 103(1), 127–152 (2005)
  36. Nesterov, Y.: Gradient methods for minimizing composite objective function. Technical Report, Center for Operations Research and Econometrics (CORE), Catholic University of Louvain (2007)
  37. Osuna, E., Freund, R., Girosi, F.: An improved training algorithm for support vector machines. In: Proceedings of the VII IEEE Workshop Neural Networks for Signal Processing, pp. 276–285. IEEE (1997)
  38. Powell, M.J.D.: A method for nonlinear constraints in minimization problems. In: Fletcher, R. (ed.) Optimization, pp. 283–298. Academic Press, London (1969)
  39. Ritter, K.: On Parametric Linear and Quadratic Programming Problems. Technical Report, DTIC Document (1981)
  40. Ritter, K., Meyer, M.: A method for solving nonlinear maximum-problems depending on parameters. Nav. Res. Logist. (NRL) 14(2), 147–162 (1967)
  41. Rockafellar, R.T.: Augmented Lagrangians and applications of the proximal point algorithm in convex programming. Math. Oper. Res. 1(2), 97–116 (1976)
  42. Sra, S., Nowozin, S., Wright, S.J.: Optimization for Machine Learning. MIT Press, Cambridge (2012)
  43. Wächter, A., Biegler, L.T.: On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming. Math. Progr. 106(1), 25–57 (2006)
  44. Wright, S.J.: Implementing proximal point methods for linear programming. J. Optim. Theory Appl. 65(3), 531–554 (1990)
  45. Wright, S.J.: Primal-Dual Interior-Point Methods. SIAM, Philadelphia (1997)
  46. Yuan, Y.X.: Analysis on a superlinearly convergent augmented Lagrangian method. Acta Math. Sin. Engl. Ser. 30(1), 1–10 (2014)
  47. Zhang, Y.: Solving large-scale linear programs by interior-point methods under the MATLAB environment. Optim. Methods Softw. 10(1), 1–31 (1998)

Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2019

Authors and Affiliations

  1. School of Mathematical Sciences, Dalian University of Technology, Dalian, People's Republic of China
