Analysis of Convergence

  • Chapter in: Optimization
  • Part of the book series: Springer Texts in Statistics (STS, volume 95)

Abstract

Proving convergence of the various optimization algorithms is a delicate exercise. In general, it is helpful to consider local and global convergence patterns separately. The local convergence rate of an algorithm provides a useful benchmark for comparing it to other algorithms. On this basis, Newton’s method wins hands down. However, the tradeoffs are subtle. Besides the sheer number of iterations until convergence, the computational complexity and numerical stability of an algorithm are critically important. The MM algorithm is often the epitome of numerical stability and computational simplicity. Scoring lies somewhere between Newton’s method and the MM algorithm. It tends to converge more quickly than the MM algorithm and to behave more stably than Newton’s method. Quasi-Newton methods also occupy this intermediate zone. Because the issues are complex, all of these algorithms survive and prosper in certain computational niches.
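
To make the rate comparison concrete: an iteration x_{n+1} = M(x_n) with fixed point x* converges linearly when |x_{n+1} - x*| <= c |x_n - x*| for some constant c < 1, and quadratically when |x_{n+1} - x*| <= C |x_n - x*|^2, so that the number of correct digits roughly doubles per step. The following sketch is illustrative only and not taken from the chapter: the objective cosh(x), the curvature bound L = cosh(2), and the helper names newton_step and mm_step are all assumptions made here to contrast a Newton iteration with a quadratic-majorization MM iteration on a one-dimensional problem.

```python
import math

# Minimize f(x) = cosh(x); the unique minimum is at x* = 0.
# f'(x) = sinh(x) and f''(x) = cosh(x).

def newton_step(x):
    # Newton's method: x <- x - f'(x)/f''(x) = x - tanh(x).
    # Converges quadratically near x* (here faster still, since f is even).
    return x - math.tanh(x)

# MM via a quadratic majorizer: on [-2, 2] we have f''(x) <= L = cosh(2),
# so g(x | x_n) = f(x_n) + f'(x_n)(x - x_n) + (L/2)(x - x_n)^2 lies above f
# and touches it at x_n. Minimizing g yields x <- x - f'(x)/L, a descent
# step that converges linearly with asymptotic rate about 1 - 1/cosh(2).
L = math.cosh(2.0)

def mm_step(x):
    return x - math.sinh(x) / L

for name, step in (("Newton", newton_step), ("MM", mm_step)):
    x = 1.0  # common starting point inside [-2, 2]
    errors = []
    for _ in range(8):
        x = step(x)
        errors.append(abs(x))
    print(name, " ".join("%.2e" % e for e in errors))
```

Running the sketch shows Newton's error collapsing to machine precision within a few iterations, while the MM errors shrink by a nearly constant factor (about 0.73) per step. The tradeoff mirrors the abstract: the MM update needs only a gradient and a one-time curvature bound and decreases f monotonically, whereas Newton's method buys its speed with a second-derivative evaluation at every iterate.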

Copyright information

© 2013 Springer Science+Business Media New York

Cite this chapter

Lange, K. (2013). Analysis of Convergence. In: Optimization. Springer Texts in Statistics, vol 95. Springer, New York, NY. https://doi.org/10.1007/978-1-4614-5838-8_12
