Computational Optimization and Applications, Volume 57, Issue 1, pp 27–43

A nonmonotone approximate sequence algorithm for unconstrained nonlinear optimization



A new nonmonotone algorithm is proposed and analyzed for unconstrained nonlinear optimization. The nonmonotone techniques applied in this algorithm are based on the estimate sequence proposed by Nesterov (Introductory Lectures on Convex Optimization: A Basic Course, 2004) for convex optimization. Under proper assumptions, global convergence of the algorithm is established for minimizing a general nonlinear objective function with Lipschitz continuous derivatives. For convex objective functions, the algorithm maintains the optimal convergence rate of convex optimization. In the numerical experiments, the algorithm is specified by employing safe-guarded nonlinear conjugate gradient search directions. Numerical results show that the nonmonotone algorithm performs significantly better than its monotone counterpart on the unconstrained optimization problems in the CUTEr library (Bongartz et al., ACM Trans. Math. Softw. 21:123–160, 1995).
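The estimate-sequence machinery behind the nonmonotone acceptance test is developed in the paper itself and is not reproduced here. As a rough illustration of the underlying idea, the sketch below implements a standard accelerated gradient scheme of the kind derived from Nesterov's estimate-sequence analysis, which attains the optimal rate for convex problems with Lipschitz gradients. The function name, momentum update, and test problem are assumptions made for illustration only; this is not the authors' nonmonotone conjugate gradient algorithm.

```python
import numpy as np

def nesterov_accelerated_gradient(grad, x0, L, max_iter=1000, tol=1e-8):
    """Illustrative sketch of an accelerated gradient scheme in the spirit of
    Nesterov's estimate-sequence analysis (optimal rate for smooth convex f).

    grad : callable returning the gradient of the objective
    x0   : starting point (numpy array)
    L    : Lipschitz constant of the gradient
    """
    x = x0.copy()
    y = x0.copy()
    t = 1.0                            # momentum parameter
    for _ in range(max_iter):
        g = grad(y)
        if np.linalg.norm(g) <= tol:
            break
        x_new = y - g / L              # gradient step from the extrapolated point
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # extrapolation step
        x, t = x_new, t_new
    return x

# Usage example on a convex quadratic f(x) = 0.5 * x^T A x - b^T x
if __name__ == "__main__":
    A = np.array([[3.0, 0.5], [0.5, 1.0]])
    b = np.array([1.0, -2.0])
    grad = lambda x: A @ x - b
    L_const = np.linalg.eigvalsh(A).max()   # Lipschitz constant of the gradient
    x_star = nesterov_accelerated_gradient(grad, np.zeros(2), L_const)
    print("approximate minimizer:", x_star)
```

The paper's contribution, by contrast, is to carry this estimate-sequence viewpoint over to general (possibly nonconvex) objectives through a nonmonotone acceptance condition, while preserving the optimal rate in the convex case.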


Keywords: Gradient methods · Nonmonotone algorithm · Unconstrained optimization · Convex estimate sequence · Optimal convergence rate · Nonlinear conjugate gradient methods


References

  1. Bongartz, I., Conn, A.R., Gould, N.I.M., Toint, P.L.: CUTE: constrained and unconstrained testing environments. ACM Trans. Math. Softw. 21, 123–160 (1995)
  2. Dai, Y.H., Zhang, H.: An adaptive two-point stepsize gradient algorithm. Numer. Algorithms 27, 377–385 (2001)
  3. Dai, Y.H., Yuan, Y.Y.: A nonlinear conjugate gradient method with a strong global convergence property. SIAM J. Optim. 10, 177–182 (1999)
  4. Dolan, E.D., Moré, J.J.: Benchmarking optimization software with performance profiles. Math. Program. 91, 201–213 (2002)
  5. Ghadimi, S., Lan, G.: Accelerated gradient methods for nonconvex nonlinear and stochastic optimization. Technical Report, Department of Industrial and Systems Engineering, University of Florida (2013)
  6. Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for Newton's method. SIAM J. Numer. Anal. 23, 707–716 (1986)
  7. Hager, W.W., Zhang, H.: A new conjugate gradient method with guaranteed descent and an efficient line search. SIAM J. Optim. 16, 170–192 (2005)
  8. Hager, W.W., Zhang, H.: A new active set algorithm for box constrained optimization. SIAM J. Optim. 17, 526–557 (2006)
  9. Hager, W.W., Zhang, H.: A survey of nonlinear conjugate gradient methods. Pac. J. Optim. 2, 35–58 (2006)
  10. Hager, W.W., Zhang, H.: The limited memory conjugate gradient method. Technical Report (November 6, 2012)
  11. Nemirovski, A., Juditsky, A., Lan, G., Shapiro, A.: Robust stochastic approximation approach to stochastic programming. SIAM J. Optim. 19, 1574–1609 (2009)
  12. Nesterov, Y.: A method for unconstrained convex minimization problem with the rate of convergence \(\mathcal{O}(1/k^{2})\). Dokl. Akad. Nauk SSSR 269, 543–547 (1983). Translated as Soviet Math. Dokl.
  13. Nesterov, Y.: Introductory Lectures on Convex Optimization: A Basic Course. Kluwer Academic, Norwell (2004)
  14. Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM J. Optim. 14, 1043–1056 (2004)

Copyright information

© Springer Science+Business Media New York 2013

Authors and Affiliations

  1. Department of Mathematics, Louisiana State University, Baton Rouge, USA