Nonconvex Proximal Incremental Aggregated Gradient Method with Linear Convergence

  • Wei Peng
  • Hui Zhang
  • Xiaoya Zhang
Regular Paper

Abstract

In this paper, we study the proximal incremental aggregated gradient (PIAG) algorithm for minimizing the sum of L-smooth, possibly nonconvex component functions and a proper closed convex function. By exploiting the L-smoothness and an error bound condition, we show that the method still enjoys the desired linear convergence properties even for nonconvex minimization. In particular, the generated sequence of iterates converges globally to the set of stationary points. Moreover, we give an explicitly computable stepsize threshold that guarantees R-linear convergence of both the objective values and the iterates.
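
For concreteness, the following Python sketch implements a basic PIAG iteration for one common instance of this problem class, taking the convex term g to be an ℓ1 regularizer so that its proximal map is soft-thresholding. This is a minimal sketch under illustrative assumptions: the cyclic component order, the ℓ1 choice of g, and the names piag and soft_threshold are ours rather than the paper's, and the stepsize threshold from the analysis is not reproduced here.

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal map of t * ||x||_1, one common choice for the convex term g.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def piag(grads, x0, stepsize, lam, n_iters=1000):
    # Minimal PIAG sketch (illustrative, not the paper's code) for
    #   min_x  sum_i f_i(x) + lam * ||x||_1,
    # where grads[i](x) returns the gradient of the i-th L-smooth component.
    # The theory requires `stepsize` below a threshold depending on L and the
    # maximum delay of the stored gradients (here at most len(grads)).
    n = len(grads)
    x = x0.copy()
    g_store = [grads[i](x) for i in range(n)]   # possibly outdated gradients
    v = np.sum(g_store, axis=0)                 # aggregated gradient
    for k in range(n_iters):
        i = k % n                    # cyclic component selection (one choice)
        new_gi = grads[i](x)         # refresh a single component gradient
        v += new_gi - g_store[i]     # update the aggregate incrementally
        g_store[i] = new_gi
        # proximal (soft-thresholding) step on the convex part
        x = soft_threshold(x - stepsize * v, stepsize * lam)
    return x
```

Updating the aggregate v by the difference of one refreshed component gradient keeps the per-iteration cost at a single gradient evaluation, and each stored gradient is then delayed by at most n iterations, matching the bounded-delay setting under which PIAG-type methods are typically analyzed.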

Keywords

Error bound · Linear convergence · Nonconvex · Incremental aggregated gradient

Mathematics Subject Classification

90C26 · 90C06 · 90C15

Acknowledgements

We are grateful for the support of the National Natural Science Foundation of China (No. 11501569). We are also indebted to the anonymous reviewers and Dr. Wenbo Wang for their comments and careful proofreading.

Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2019

Authors and Affiliations

  1. Department of Mathematics, National University of Defense Technology, Changsha, China