Two-parameter scaled memoryless BFGS methods with a nonmonotone choice for the initial step length

  • Saman Babaie-Kafaki
  • Zohre Aminifard
Original Paper

Abstract

A class of two-parameter scaled memoryless BFGS methods is developed for solving unconstrained optimization problems. The scaling parameters are then determined so as to improve the condition number of the corresponding memoryless BFGS update. It is shown that, for uniformly convex objective functions, the search directions of the method satisfy the sufficient descent condition, which leads to global convergence. To achieve convergence for general functions, a revised version of the method is developed based on the Li–Fukushima modified secant equation. To enhance the performance of the methods, a nonmonotone scheme for computing the initial value of the step length in the line search procedure is suggested. Numerical experiments on a set of unconstrained optimization test problems from the CUTEr collection demonstrate the efficiency of the proposed algorithms in the sense of the Dolan–Moré performance profile.
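
For context, the classical self-scaling memoryless BFGS update that this class of methods generalizes can be sketched as follows. With $s_k = x_{k+1} - x_k$, $y_k = g_{k+1} - g_k$, and a single Oren–Spedicato-type scaling parameter $\theta_k > 0$ [23], the scaled memoryless BFGS approximation of the inverse Hessian and the corresponding search direction are

\[
H_{k+1} = \theta_k I - \theta_k \, \frac{s_k y_k^T + y_k s_k^T}{s_k^T y_k}
        + \left(1 + \theta_k \, \frac{\|y_k\|^2}{s_k^T y_k}\right) \frac{s_k s_k^T}{s_k^T y_k},
\qquad
d_{k+1} = -H_{k+1} g_{k+1}.
\]

The two-parameter family developed in the paper generalizes this one-parameter form, with the parameters chosen to improve the condition number $\kappa(H_{k+1})$.

For general (possibly nonconvex) functions, the curvature condition $s_k^T y_k > 0$ can fail, destroying positive definiteness of $H_{k+1}$. The Li–Fukushima modified secant equation [20], as commonly stated in the follow-up literature, replaces $y_k$ by

\[
\bar{y}_k = y_k + h_k s_k, \qquad
h_k = C \|g_k\|^r + \max\left\{ -\frac{s_k^T y_k}{\|s_k\|^2},\ 0 \right\}, \quad C, r > 0,
\]

so that $\bar{y}_k^T s_k \ge C \|g_k\|^r \|s_k\|^2 > 0$ holds regardless of convexity.

The nonmonotone ingredient can be illustrated by the classical acceptance test of Grippo, Lampariello, and Lucidi [18]: a step length $\alpha_k$ is accepted when

\[
f(x_k + \alpha_k d_k) \le \max_{0 \le j \le m(k)} f(x_{k-j}) + \delta \alpha_k g_k^T d_k,
\qquad \delta \in (0, 1),
\]

where the memory $m(k) \le \min\{m(k-1) + 1, M\}$ allows temporary increases in $f$. The scheme proposed here instead uses nonmonotone information to set the initial trial value of $\alpha_k$ in the line search.

The reported comparisons use Dolan–Moré performance profiles [14]. The following minimal Python sketch (a hypothetical helper, not the authors' benchmarking code) computes the profile of each solver, i.e., the fraction of problems it solves within a factor $\tau$ of the best solver:

    import numpy as np

    def performance_profile(T):
        # T[p, s]: cost (e.g., CPU time) of solver s on problem p; np.inf marks a failure.
        ratios = T / T.min(axis=1, keepdims=True)      # performance ratios r_{p, s}
        taus = np.unique(ratios[np.isfinite(ratios)])  # breakpoints of the step functions
        # rho[i, s]: fraction of problems solver s solves within factor taus[i] of the best
        rho = np.array([(ratios <= tau).mean(axis=0) for tau in taus])
        return taus, rho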

Keywords

Unconstrained optimization · Quasi-Newton method · Memoryless BFGS update · Global convergence · Line search

Mathematics Subject Classification (2010)

90C53 · 65K05

Notes

Acknowledgements

This research was supported in part by grant 96013024 from the Iran National Science Foundation (INSF), and in part by the Research Council of Semnan University. The authors thank the anonymous reviewers for their valuable comments and suggestions, which helped to improve the quality of this work. They are also grateful to Professor Michael Navon for providing the line search code.

References

  1. Andrei, N.: A scaled BFGS preconditioned conjugate gradient algorithm for unconstrained optimization. Appl. Math. Lett. 20(6), 645–650 (2007)
  2. Andrei, N.: Scaled memoryless BFGS preconditioned conjugate gradient algorithm for unconstrained optimization. Optim. Methods Softw. 22(4), 561–571 (2007)
  3. Andrei, N.: Another hybrid conjugate gradient algorithm for unconstrained optimization. Numer. Algorithms 47(2), 143–156 (2008)
  4. Andrei, N.: Accelerated scaled memoryless BFGS preconditioned conjugate gradient algorithm for unconstrained optimization. Eur. J. Oper. Res. 204(3), 410–420 (2010)
  5. Andrei, N.: An adaptive scaled BFGS method for unconstrained optimization. Numer. Algorithms 77(2), 413–432 (2018)
  6. Andrei, N.: A double parameter scaled BFGS method for unconstrained optimization. J. Comput. Appl. Math. 332, 26–44 (2018)
  7. Babaie-Kafaki, S.: A modified scaled memoryless BFGS preconditioned conjugate gradient method for unconstrained optimization. 4OR 11(4), 361–374 (2013)
  8. Babaie-Kafaki, S.: Two modified scaled nonlinear conjugate gradient methods. J. Comput. Appl. Math. 261(5), 172–182 (2014)
  9. Babaie-Kafaki, S.: On optimality of the parameters of self-scaling memoryless quasi-Newton updating formulae. J. Optim. Theory Appl. 167(1), 91–101 (2015)
  10. Babaie-Kafaki, S.: A modified scaling parameter for the memoryless BFGS updating formula. Numer. Algorithms 72(2), 425–433 (2016)
  11. Babaie-Kafaki, S., Fatemi, M., Mahdavi-Amiri, N.: Two effective hybrid conjugate gradient algorithms based on modified BFGS updates. Numer. Algorithms 58(3), 315–331 (2011)
  12. Babaie-Kafaki, S., Ghanbari, R.: A modified scaled conjugate gradient method with global convergence for nonconvex functions. Bull. Belg. Math. Soc. Simon Stevin 21(3), 465–477 (2014)
  13. Birgin, E., Martínez, J.M.: A spectral conjugate gradient method for unconstrained optimization. Appl. Math. Optim. 43(2), 117–128 (2001)
  14. Dolan, E.D., Moré, J.J.: Benchmarking optimization software with performance profiles. Math. Program. 91(2, Ser. A), 201–213 (2002)
  15. Gilbert, J.C., Nocedal, J.: Global convergence properties of conjugate gradient methods for optimization. SIAM J. Optim. 2(1), 21–42 (1992)
  16. Gould, N.I.M., Orban, D., Toint, P.L.: CUTEr: a constrained and unconstrained testing environment, revisited. ACM Trans. Math. Softw. 29(4), 373–394 (2003)
  17. Gould, N.I.M., Scott, J.: A note on performance profiles for benchmarking software. ACM Trans. Math. Softw. 43(2), Art. 15, 5 (2016)
  18. Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for Newton's method. SIAM J. Numer. Anal. 23(4), 707–716 (1986)
  19. Hager, W.W., Zhang, H.: Algorithm 851: CG_Descent, a conjugate gradient method with guaranteed descent. ACM Trans. Math. Softw. 32(1), 113–137 (2006)
  20. Li, D.H., Fukushima, M.: A modified BFGS method and its global convergence in nonconvex minimization. J. Comput. Appl. Math. 129(1–2), 15–35 (2001)
  21. Liao, A.: Modifying the BFGS method. Oper. Res. Lett. 20(4), 171–177 (1997)
  22. Nocedal, J., Wright, S.J.: Numerical Optimization. Springer, New York (2006)
  23. Oren, S.S., Spedicato, E.: Optimal conditioning of self-scaling variable metric algorithms. Math. Program. 10(1), 70–90 (1976)
  24. Ou, Y.: A note on the global convergence theorem of accelerated adaptive Perry conjugate gradient methods. J. Comput. Appl. Math. 332, 101–106 (2018)
  25. Sugiki, K., Narushima, Y., Yabe, H.: Globally convergent three-term conjugate gradient methods that use secant conditions and generate descent search directions for unconstrained optimization. J. Optim. Theory Appl. 153(3), 733–757 (2012)
  26. Sun, W., Yuan, Y.X.: Optimization Theory and Methods: Nonlinear Programming. Springer, New York (2006)
  27. Watkins, D.S.: Fundamentals of Matrix Computations. Wiley, New York (2002)
  28. Zhou, W., Zhang, L.: A nonlinear conjugate gradient method based on the MBFGS secant condition. Optim. Methods Softw. 21(5), 707–714 (2006)

Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2019

Authors and Affiliations

  1. Department of Mathematics, Faculty of Mathematics, Statistics and Computer Science, Semnan University, Semnan, Iran
