A gradient-type algorithm with backward inertial steps associated to a nonconvex minimization problem

  • Cristian Daniel Alecsa
  • Szilárd Csaba László
  • Adrian Viorel
Original Paper

Abstract

We investigate a gradient-type algorithm with a backward inertial step for the minimization of a nonconvex differentiable function. We show that the generated sequences converge to a critical point of the objective function, provided that a regularization of the objective function satisfies the Kurdyka-Łojasiewicz property. Further, we derive convergence rates, formulated in terms of the Łojasiewicz exponent, for both the generated sequences and the objective function values. Finally, we present numerical experiments comparing our scheme with well-known algorithms from the literature.
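The abstract does not spell out the iteration itself. As a rough illustration only, the sketch below implements a generic inertial gradient step of heavy-ball type, x_{n+1} = x_n + β(x_n − x_{n−1}) − s∇f(x_n), applied to a simple nonconvex function; the paper's backward inertial scheme places the inertial term differently, and all names and parameter values here are illustrative assumptions, not the authors' exact algorithm.

```python
def inertial_gradient(grad, x0, step=0.1, beta=0.5, iters=500, tol=1e-10):
    """Generic inertial (heavy-ball style) gradient iteration:
    x_{n+1} = x_n + beta*(x_n - x_{n-1}) - step*grad(x_n).
    Illustrative sketch only; not the paper's exact backward inertial scheme."""
    x_prev, x = x0, x0
    for _ in range(iters):
        x_next = x + beta * (x - x_prev) - step * grad(x)
        x_prev, x = x, x_next
        if abs(x - x_prev) < tol:  # successive iterates have stabilized
            break
    return x

# Nonconvex test function f(x) = x^4/4 - x^2/2, critical points at -1, 0, 1.
grad = lambda x: x**3 - x
res = inertial_gradient(grad, x0=2.0)  # converges to the critical point x = 1
```

Started from x0 = 2, the iterates spiral into the nearby local minimizer at x = 1, illustrating the kind of critical-point convergence the paper establishes under the Kurdyka-Łojasiewicz property.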

Keywords

Inertial algorithm · Nonconvex optimization · Kurdyka-Łojasiewicz inequality · Convergence rate

Mathematics Subject Classification (2010)

90C26 · 90C30 · 65K10

Acknowledgments

The authors are thankful to three anonymous reviewers for remarks and suggestions which helped us to improve the quality of the paper.

Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2019

Authors and Affiliations

  1. Tiberiu Popoviciu Institute of Numerical Analysis, Romanian Academy, Cluj-Napoca, Romania
  2. Department of Mathematics, Babes-Bolyai University, Cluj-Napoca, Romania
  3. Department of Mathematics, Technical University of Cluj-Napoca, Cluj-Napoca, Romania
