Abstract
Proving convergence of the various optimization algorithms is a delicate exercise. In general, it is helpful to consider local and global convergence patterns separately. The local convergence rate of an algorithm provides a useful benchmark for comparing it to other algorithms. On this basis, Newton’s method wins hands down. However, the tradeoffs are subtle. Besides the sheer number of iterations until convergence, the computational complexity and numerical stability of an algorithm are critically important. The MM algorithm is often the epitome of numerical stability and computational simplicity. Scoring lies somewhere between Newton’s method and the MM algorithm. It tends to converge more quickly than the MM algorithm and to behave more stably than Newton’s method. Quasi-Newton methods also occupy this intermediate zone. Because the issues are complex, all of these algorithms survive and prosper in certain computational niches.
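The local convergence gap the abstract describes can be seen on a toy problem. The sketch below, a hypothetical illustration not taken from the chapter, compares Newton's method against a fixed-step gradient iteration (standing in for a linearly convergent MM-style scheme) on the objective f(x) = x − ln x, which is minimized at x = 1. Quadratic convergence roughly doubles the number of correct digits per step; linear convergence gains a fixed number of digits per step.

```python
def newton(x, tol=1e-10, max_iter=200):
    # Newton's method for f(x) = x - ln(x): f'(x) = 1 - 1/x, f''(x) = 1/x**2,
    # so the update x - f'(x)/f''(x) simplifies to 2x - x**2.
    # Near x = 1 the error squares each step (quadratic convergence).
    for k in range(max_iter):
        if abs(x - 1.0) < tol:
            return x, k
        x = 2.0 * x - x * x
    return x, max_iter

def fixed_step_gradient(x, s=0.5, tol=1e-10, max_iter=500):
    # Fixed-step gradient descent on the same objective, a stand-in for a
    # linearly convergent MM-style algorithm: near x = 1 the error shrinks
    # by roughly the constant factor |1 - s*f''(1)| = 0.5 per step.
    for k in range(max_iter):
        if abs(x - 1.0) < tol:
            return x, k
        x = x - s * (1.0 - 1.0 / x)
    return x, max_iter

x_n, k_n = newton(0.6)
x_g, k_g = fixed_step_gradient(0.6)
print(f"Newton:   {k_n} iterations, x = {x_n:.12f}")
print(f"Gradient: {k_g} iterations, x = {x_g:.12f}")
```

In a run of this sketch, Newton's method reaches the tolerance in a handful of iterations while the fixed-step scheme needs dozens, consistent with the quadratic-versus-linear local rates the abstract contrasts; what the iteration counts do not show is Newton's extra cost per step (second derivatives) and its weaker global stability, which is where MM-style algorithms earn their keep.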
© 2013 Springer Science+Business Media New York
Lange, K. (2013). Analysis of Convergence. In: Optimization. Springer Texts in Statistics, vol 95. Springer, New York, NY. https://doi.org/10.1007/978-1-4614-5838-8_12
Print ISBN: 978-1-4614-5837-1
Online ISBN: 978-1-4614-5838-8
eBook Packages: Mathematics and Statistics (R0)