A second-order optimality condition with first- and second-order complementarity associated with global convergence of algorithms

Abstract

We develop a new notion of second-order complementarity with respect to the tangent subspace related to second-order necessary optimality conditions, introduced by means of so-called tangent multipliers. We prove that around a local minimizer a second-order stationarity residual can be driven to zero while controlling the growth of the Lagrange multipliers and the tangent multipliers, which yields a new second-order optimality condition that holds without constraint qualifications and is stronger than previous conditions associated with the global convergence of algorithms. We prove that second-order variants of the augmented Lagrangian method (under an additional smoothness assumption based on the Łojasiewicz inequality) and of interior point methods generate sequences satisfying our optimality condition. We also present a companion minimal constraint qualification, weaker than those known for second-order methods, under which the usual global convergence results to a classical second-order stationary point hold. Finally, our optimality condition naturally suggests a definition of second-order stationarity suitable for the computation of iteration complexity bounds and for the definition of stopping criteria.
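
For context, here is a minimal sketch, in standard notation rather than the paper's exact formulation, of the classical second-order condition on the tangent subspace that the abstract builds on, together with a schematic $\varepsilon$-relaxed form of the kind a stopping criterion can test. For the problem of minimizing $f(x)$ subject to $h(x) = 0$ and $g(x) \le 0$, with Lagrangian $L(x,\lambda,\mu) = f(x) + \lambda^T h(x) + \mu^T g(x)$ and $\mu \ge 0$, the classical condition at a point $x^*$ reads
\[
\nabla_x L(x^*,\lambda,\mu) = 0, \qquad \mu_i g_i(x^*) = 0 \ \text{for all } i, \qquad d^T \nabla^2_{xx} L(x^*,\lambda,\mu)\, d \ge 0
\]
for every $d$ in the tangent subspace, that is, every $d$ with $\nabla h_j(x^*)^T d = 0$ for all $j$ and $\nabla g_i(x^*)^T d = 0$ for all active $i$. A schematic approximate version at an iterate $(x_k,\lambda_k,\mu_k)$ relaxes this to
\[
\|\nabla_x L(x_k,\lambda_k,\mu_k)\| \le \varepsilon, \qquad \min\{-g_i(x_k),\, \mu_{k,i}\} \le \varepsilon, \qquad d^T \nabla^2_{xx} L(x_k,\lambda_k,\mu_k)\, d \ge -\varepsilon \|d\|^2
\]
for $d$ in a suitable approximation of the tangent subspace; controlling the growth of the multipliers $(\lambda_k,\mu_k)$, together with the new tangent multipliers, along such sequences is where the paper's contribution enters.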

Keywords

Second-order optimality conditions · Complementarity · Global convergence · Constraint qualifications

Mathematics Subject Classification

90C46 · 90C30

Acknowledgements

This work was supported by FAPESP (Grants 2013/05475-7 and 2016/02092-8) and CNPq.

Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2018

Authors and Affiliations

  1. Department of Applied Mathematics, University of São Paulo, São Paulo, Brazil
  2. Department of Management Science and Engineering, Stanford University, Stanford, USA