
Convergence and evaluation-complexity analysis of a regularized tensor-Newton method for solving nonlinear least-squares problems

  • Nicholas I. M. Gould
  • Tyrone Rees
  • Jennifer A. Scott

Abstract

Given a twice continuously differentiable vector-valued function r(x), a local minimizer of \(\Vert r(x)\Vert_2\) is sought. We propose and analyse tensor-Newton methods, in which r(x) is replaced locally by its second-order Taylor approximation. Convergence is controlled by regularization of various orders. We establish global convergence to a first-order critical point of \(\Vert r(x)\Vert_2\), and provide function-evaluation bounds that agree with the best-known bounds for methods using second derivatives. Numerical experiments comparing tensor-Newton methods with regularized Gauss–Newton and Newton methods demonstrate the practical performance of the newly proposed method.
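To make the construction concrete, the following is a minimal sketch of the model underlying a tensor-Newton step; the notation (Jacobian \(J\), residual Hessians \(\nabla^2 r_i\), regularization weight \(\sigma_k\) and order \(p\)) is assumed here for illustration rather than quoted from the paper. At an iterate \(x_k\), each residual is replaced by its second-order Taylor approximation

\[ t_k(s) \;=\; r(x_k) + J(x_k)\,s + \tfrac{1}{2}\bigl(s^T \nabla^2 r_i(x_k)\, s\bigr)_{i=1}^{m}, \]

and a trial step is computed by approximately minimizing the regularized model

\[ m_k(s) \;=\; \tfrac{1}{2}\Vert t_k(s)\Vert_2^2 \;+\; \frac{\sigma_k}{p}\Vert s\Vert_2^{p}, \]

where \(\sigma_k > 0\) is adapted from iteration to iteration and \(p\) is the order of the regularization. Discarding the quadratic term in \(t_k\) recovers a regularized Gauss–Newton (Levenberg–Marquardt-type) model, the natural baseline for the numerical comparisons mentioned above.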

Keywords

Nonlinear least-squares · Levenberg–Marquardt · Trust-region methods · Data fitting


Acknowledgements

The authors are grateful to two referees and the editor for their very helpful comments on the original draft of this paper.

Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2019

Authors and Affiliations

  1. STFC Rutherford Appleton Laboratory, Chilton, Didcot, UK
  2. Department of Mathematics and Statistics, University of Reading, Reading, UK
