
Local and Global Convergence

Part of the book series: Statistics and Computing (SCO)

Abstract

Proving convergence of the various optimization algorithms is a delicate exercise. In general, it is helpful to consider local and global convergence patterns separately. The local convergence rate of an algorithm provides a useful benchmark for comparing it to other algorithms. On this basis, Newton’s method wins hands down. However, the tradeoffs are subtle. Besides the sheer number of iterations until convergence, the computational complexity and numerical stability of an algorithm are critically important. The MM algorithm is often the epitome of numerical stability and computational simplicity. Scoring lies somewhere between these two extremes. It tends to converge more quickly than the MM algorithm and to behave more stably than Newton’s method. Quasi-Newton methods also occupy this intermediate zone. Because the issues are complex, all of these algorithms survive and prosper in certain computational niches.
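To make the rate comparison concrete, here is a minimal Python sketch (not taken from the chapter; the objective function, starting point, and step size are illustrative choices). It contrasts Newton's method, whose local convergence is quadratic, with a fixed-step gradient iteration, whose linear rate is analogous to the linear rate typically exhibited by MM algorithms, on the toy problem of minimizing f(x) = exp(x) - 2x.

```python
import math

# Toy problem: minimize f(x) = exp(x) - 2x, whose unique minimizer is x* = log(2).
# Derivatives: f'(x) = exp(x) - 2 and f''(x) = exp(x).
XSTAR = math.log(2.0)

def newton(x, iters=6):
    """Newton's method: the error is roughly squared at each step (quadratic rate)."""
    errors = []
    for _ in range(iters):
        x = x - (math.exp(x) - 2.0) / math.exp(x)
        errors.append(abs(x - XSTAR))
    return errors

def fixed_step_gradient(x, step=0.3, iters=6):
    """Fixed-step gradient iteration: the error shrinks by a roughly constant
    factor per step (linear rate), analogous to the typical rate of MM
    algorithms. This is only an analogy, not the MM algorithm itself."""
    errors = []
    for _ in range(iters):
        x = x - step * (math.exp(x) - 2.0)
        errors.append(abs(x - XSTAR))
    return errors

x0 = 1.5
print("Newton errors:  ", ["%.2e" % e for e in newton(x0)])
print("Gradient errors:", ["%.2e" % e for e in fixed_step_gradient(x0)])
# Newton roughly doubles the number of correct digits per iteration, while the
# fixed-step iteration gains digits at a constant, much slower pace.
```

Running the sketch shows Newton reaching machine precision within a handful of iterations, while the linearly convergent iteration still carries visible error; this is the benchmark sense in which Newton's method "wins hands down" locally, before stability and per-iteration cost are weighed.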

Author information

Correspondence to Kenneth Lange.

Copyright information

© 2010 Springer New York

About this chapter

Cite this chapter

Lange, K. (2010). Local and Global Convergence. In: Numerical Analysis for Statisticians. Statistics and Computing. Springer, New York, NY. https://doi.org/10.1007/978-1-4419-5945-4_15
