Optimization, pp. 137–154

The EM Algorithm

  • Kenneth Lange
Part of the Springer Texts in Statistics book series (STS)

Abstract

Maximum likelihood is the dominant form of estimation in applied statistics. Because closed-form solutions to likelihood equations are the exception rather than the rule, numerical methods for finding maximum likelihood estimates are of paramount importance. In this chapter we study maximum likelihood estimation by the EM algorithm [25, 88, 93], a special case of the MM algorithm. At the heart of every EM algorithm is some notion of missing data. Data can be missing in the ordinary sense of a failure to record certain observations on certain cases. Data can also be missing in a theoretical sense. We can think of the E, or expectation, step of the algorithm as filling in the missing data. This action replaces the loglikelihood of the observed data by a minorizing function. This surrogate function is then maximized in the M step. Because the surrogate function is usually much simpler than the likelihood, we can often solve the M step analytically. The price we pay for this simplification is that the EM algorithm is iterative. Reconstructing the missing data is bound to be slightly wrong if the parameters do not already equal their maximum likelihood estimates.
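To make the E and M steps concrete, the following is a minimal sketch, not taken from the chapter, of EM for a two-component univariate normal mixture. Here the missing data are the unobserved component labels; the E step computes their conditional expectations (responsibilities), and the M step maximizes the resulting surrogate in closed form. All function and variable names are illustrative.

```python
import numpy as np

def em_normal_mixture(y, n_iter=100):
    """EM for a two-component univariate normal mixture.

    Missing data: the unobserved component label of each observation.
    E step: posterior membership probabilities (responsibilities).
    M step: closed-form updates of the mixing proportion, means, variances.
    """
    # Crude initial guesses (illustrative only)
    pi, mu1, mu2 = 0.5, np.percentile(y, 25), np.percentile(y, 75)
    s1 = s2 = np.var(y)

    def normal_pdf(x, mu, var):
        return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

    for _ in range(n_iter):
        # E step: fill in the missing labels by their conditional expectations
        a = pi * normal_pdf(y, mu1, s1)
        b = (1 - pi) * normal_pdf(y, mu2, s2)
        w = a / (a + b)

        # M step: maximize the surrogate (weighted complete-data loglikelihood)
        pi = w.mean()
        mu1 = np.sum(w * y) / np.sum(w)
        mu2 = np.sum((1 - w) * y) / np.sum(1 - w)
        s1 = np.sum(w * (y - mu1) ** 2) / np.sum(w)
        s2 = np.sum((1 - w) * (y - mu2) ** 2) / np.sum(1 - w)

    return pi, mu1, mu2, s1, s2

# Example usage on simulated data
rng = np.random.default_rng(0)
y = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 1.5, 700)])
print(em_normal_mixture(y))
```

Each iteration increases the observed-data loglikelihood, and the closed-form M step illustrates why the surrogate is easier to maximize than the likelihood itself.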

Keywords

Random Vector; Conditional Expectation; Markov Chain Model; Conditional Density; Multivariate Normal Distribution

Copyright information

© Springer Science+Business Media New York 2004

Authors and Affiliations

  • Kenneth Lange
  1. Department of Biomathematics and Human Genetics, UCLA School of Medicine, Los Angeles, USA
