
Journal of Systems Science and Complexity, Volume 20, Issue 3, pp 416–428

Global Convergence of the Dai-Yuan Conjugate Gradient Method with Perturbations

  • Changyu Wang
  • Meixia Li
Article

Abstract

In this paper, the authors propose a class of Dai-Yuan (abbreviated DY) conjugate gradient methods with linesearch in the presence of perturbations, for general functions and uniformly convex functions respectively. The iterate formula is $x_{k+1} = x_k + \alpha_k (s_k + \omega_k)$, where the main direction $s_k$ is generated by the DY conjugate gradient method, $\omega_k$ is a perturbation term, and the stepsize $\alpha_k$ is determined by a linesearch and does not necessarily tend to zero in the limit. The authors prove global convergence of these methods under mild conditions. Preliminary computational experience is also reported.
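To make the iterate scheme concrete, the following is a minimal sketch in Python, not the authors' implementation: it pairs the standard DY direction update with a user-supplied perturbation term, and an Armijo-style backtracking linesearch stands in for the linesearch analyzed in the paper. The names f, grad_f, and perturbation are placeholders for the objective, its gradient, and the perturbation model.

import numpy as np

def perturbed_dy_cg(f, grad_f, x0, perturbation, max_iter=1000, tol=1e-8,
                    sigma=1e-4, shrink=0.5, max_backtracks=50):
    """Illustrative sketch of x_{k+1} = x_k + alpha_k * (s_k + w_k),
    with s_k from the Dai-Yuan update and w_k a perturbation term."""
    x = np.asarray(x0, dtype=float)
    g = grad_f(x)
    s = -g                                    # first main direction: steepest descent
    for k in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        w = perturbation(k, x)                # perturbation term omega_k
        d = s + w                             # direction actually searched along
        # Armijo backtracking stands in for the paper's linesearch; the cap
        # keeps the loop finite even if d fails to be a descent direction.
        alpha, fx, slope = 1.0, f(x), g.dot(d)
        for _ in range(max_backtracks):
            if f(x + alpha * d) <= fx + sigma * alpha * slope:
                break
            alpha *= shrink
        x_new = x + alpha * d
        g_new = grad_f(x_new)
        # Dai-Yuan coefficient: beta_k = ||g_{k+1}||^2 / (s_k^T (g_{k+1} - g_k)),
        # guarded against a vanishing denominator for robustness in this sketch.
        denom = s.dot(g_new - g)
        beta_dy = g_new.dot(g_new) / denom if abs(denom) > 1e-16 else 0.0
        s = -g_new + beta_dy * s              # next DY main direction
        x, g = x_new, g_new
    return x

For instance, with f(x) = 0.5 * x.dot(x), grad_f(x) = x, and a perturbation such as lambda k, x: np.random.randn(x.size) / (k + 2)**2, whose magnitude shrinks as k grows, the iterates settle near the minimizer, illustrating convergence despite the perturbed directions.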

Keywords

Conjugate gradient method · global convergence · perturbation · uniformly convex



Copyright information

© Springer Science + Business Media, LLC 2007

Authors and Affiliations

  1. Institute of Operations Research, Qufu Normal University, Qufu, China
  2. Department of Mathematics, Weifang University, Weifang, China
