The EM Algorithm

  • Martin A. Tanner
Chapter
Part of the Springer Series in Statistics book series (SSS)

Abstract

In the previous chapters, we examined various methods that are applied directly to the likelihood or to the posterior density. In this and the following chapters, we examine the data augmentation algorithms, including the EM algorithm and the data augmentation algorithm. All of these data augmentation algorithms share a common approach to problems: rather than performing a complicated maximization or simulation, one augments the observed data with “stuff” (latent data) that simplifies the calculation and subsequently performs a series of simple maximizations or simulations. This “stuff” can be the “missing” data or parameter values. The principle of data augmentation can then be stated as follows: Augment the observed data Y with latent data Z so that the augmented posterior distribution p(θ | Y, Z) is “simple.” Make use of this simplicity in maximizing/marginalizing/calculating/sampling the observed posterior p(θ | Y).
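To make the principle concrete, the sketch below applies it to a two-component Gaussian mixture with a common variance. This particular model, and the function name em_two_gaussians, are illustrative assumptions rather than the chapter's own example: the component labels play the role of the latent data Z, the complete-data maximization given Z reduces to weighted averages, and EM alternates an expectation over Z (E-step) with that simple maximization (M-step).

```python
# Minimal EM sketch for a two-component Gaussian mixture (illustrative model,
# not taken from the chapter). Observed data Y = the draws; latent data Z =
# the unknown component labels. Given Z the maximization is simple, so EM
# alternates expected labels (E-step) with closed-form updates (M-step).
import numpy as np

def em_two_gaussians(y, n_iter=50):
    # crude starting values
    pi, mu1, mu2, sigma = 0.5, y.min(), y.max(), y.std()
    for _ in range(n_iter):
        # E-step: posterior probability that each observation belongs to
        # component 1 (the common normalizing constant cancels because the
        # two components share sigma)
        d1 = pi * np.exp(-0.5 * ((y - mu1) / sigma) ** 2)
        d2 = (1 - pi) * np.exp(-0.5 * ((y - mu2) / sigma) ** 2)
        w = d1 / (d1 + d2)
        # M-step: weighted maximum likelihood updates, each a simple closed form
        pi = w.mean()
        mu1 = np.sum(w * y) / np.sum(w)
        mu2 = np.sum((1 - w) * y) / np.sum(1 - w)
        var = np.sum(w * (y - mu1) ** 2 + (1 - w) * (y - mu2) ** 2) / len(y)
        sigma = np.sqrt(var)
    return pi, mu1, mu2, sigma

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    y = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 1.0, 700)])
    print(em_two_gaussians(y))
```

Each iteration replaces one hard problem (maximizing the observed-data likelihood directly) with two easy ones, which is exactly the trade the principle of data augmentation describes.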


Copyright information

© Springer-Verlag New York, Inc. 1996

Authors and Affiliations

  • Martin A. Tanner
  1. Department of Statistics, Northwestern University, Evanston, USA