
A regularized Newton method without line search for unconstrained optimization

Computational Optimization and Applications

An Erratum to this article was published on 01 July 2016

Abstract

In this paper, we propose a regularized Newton method without line search. The proposed method controls a regularization parameter instead of a step size in order to guarantee global convergence. We show that the proposed algorithm has the following convergence properties: (a) it is globally convergent under appropriate conditions; (b) it converges superlinearly under the local error bound condition; and (c) the number of iterations required to obtain an approximate solution \(x\) satisfying \(\Vert \nabla f(x) \Vert \le \varepsilon\) is bounded above by \(O(\varepsilon^{-2})\), where \(f\) is the objective function and \(\varepsilon\) is a given positive constant.
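
The abstract does not spell out the concrete update rule, so the following Python sketch only illustrates the general class of methods described: the Newton system is regularized by \(\mu I\), and \(\mu\) is adjusted from the ratio of actual to predicted reduction instead of performing a line search. This is not the authors' exact algorithm; the function name and all parameter values (mu0, sigma1, sigma2, eta1, eta2, the tolerances, and the accept/reject thresholds) are illustrative assumptions.

```python
import numpy as np

def regularized_newton(f, grad, hess, x0, mu0=1.0, sigma1=4.0, sigma2=0.25,
                       eta1=0.25, eta2=0.75, eps=1e-6, max_iter=1000):
    """Sketch of a regularized Newton iteration without line search.

    Each step solves (H_k + mu_k I) d = -g_k, and the regularization
    parameter mu_k (not a step size) is increased or decreased according
    to the ratio of actual to predicted reduction, much like a
    trust-region radius update.  Illustrative only.
    """
    x, mu = np.asarray(x0, dtype=float), mu0
    for k in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= eps:           # approximate stationarity reached
            return x, k
        H = hess(x)
        n = H.shape[0]
        # Regularized Newton direction; mu > 0 helps keep the system well posed
        d = np.linalg.solve(H + mu * np.eye(n), -g)
        pred = -(g @ d + 0.5 * d @ H @ d)      # predicted decrease of the quadratic model
        ared = f(x) - f(x + d)                 # actual decrease of f
        rho = ared / pred if pred > 0 else -np.inf
        if rho >= eta1:                        # sufficient agreement: accept the full step
            x = x + d
        if rho < eta1:                         # poor agreement: strengthen regularization
            mu *= sigma1
        elif rho >= eta2:                      # very good agreement: relax regularization
            mu = max(mu * sigma2, 1e-12)
        # otherwise keep mu unchanged
    return x, max_iter

# Example usage: minimize the Rosenbrock function from a standard starting point
if __name__ == "__main__":
    f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
    grad = lambda x: np.array([-2*(1 - x[0]) - 400*x[0]*(x[1] - x[0]**2),
                               200*(x[1] - x[0]**2)])
    hess = lambda x: np.array([[2 - 400*(x[1] - 3*x[0]**2), -400*x[0]],
                               [-400*x[0], 200.0]])
    x_star, iters = regularized_newton(f, grad, hess, np.array([-1.2, 1.0]))
    print(x_star, iters)
```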



Author information


Correspondence to Nobuo Yamashita.


About this article

Cite this article

Ueda, K., Yamashita, N. A regularized Newton method without line search for unconstrained optimization. Comput Optim Appl 59, 321–351 (2014). https://doi.org/10.1007/s10589-014-9656-x

