Nonconvex Proximal Incremental Aggregated Gradient Method with Linear Convergence
In this paper, we study the proximal incremental aggregated gradient (PIAG) algorithm for minimizing the sum of L-smooth nonconvex component functions and a proper closed convex function. By exploiting the L-smooth property and an error bound condition, we show that the method retains the desired linear convergence properties even for nonconvex minimization. In particular, we prove that the generated iterative sequence converges globally to the stationary point set. Moreover, we give an explicitly computable stepsize threshold that guarantees R-linear convergence of both the objective values and the iterates.
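To make the method concrete, the following is a minimal sketch of a PIAG-type iteration, not the authors' implementation: each iteration refreshes the gradient of one component (so the aggregated gradient uses delayed component gradients) and then takes a proximal step. All names (`piag`, `soft_threshold`, the cyclic refresh order) are illustrative assumptions, and the convex quadratic components below are only a toy instance of the L-smooth setting; the convex function is taken to be `lam * ||x||_1`, whose proximal operator is soft-thresholding.

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of t * ||.||_1 (componentwise soft-thresholding).
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def piag(grads, x0, alpha, lam, n_iters=1000):
    """Sketch of a proximal incremental aggregated gradient iteration.

    Minimizes sum_i f_i(x) + lam * ||x||_1, where `grads[i]` returns
    the gradient of the i-th L-smooth component f_i.  `alpha` is the
    stepsize, which must stay below a threshold depending on L and the
    gradient delay (the paper gives an explicit bound; none is checked here).
    """
    m = len(grads)
    x = x0.copy()
    # Stored (possibly stale) gradient of each component, evaluated at x0.
    table = [g(x0) for g in grads]
    agg = sum(table)  # aggregated gradient: sum of the stored gradients
    for k in range(n_iters):
        i = k % m                    # cyclically refresh one component
        new_gi = grads[i](x)
        agg += new_gi - table[i]     # update the aggregate incrementally
        table[i] = new_gi
        # Proximal step using the aggregated (delayed) gradient.
        x = soft_threshold(x - alpha * agg, alpha * lam)
    return x
```

As a usage example, with components f_i(x) = ½‖x − c_i‖², the minimizer of the composite objective has the closed form `soft_threshold(mean(c_i), lam/m)`, which the iteration approaches for a sufficiently small stepsize.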
Keywords: Error bound · Linear convergence · Nonconvex · Incremental aggregated gradient
Mathematics Subject Classification: 90C26 · 90C06 · 90C15
We are grateful for the support of the National Natural Science Foundation of China (No. 11501569). We are also obliged to the anonymous reviewers and Dr. Wenbo Wang for their comments and careful proofreading.