Abstract
In the previous chapters, we examined various methods that are applied directly to the likelihood or to the posterior density. In this and the following chapters, we examine data augmentation algorithms, including the EM algorithm and the Data Augmentation algorithm. All of these algorithms share a common approach to problems: rather than performing one complicated maximization or simulation, one augments the observed data with “stuff” (latent data) that simplifies the calculation, and then performs a series of simple maximizations or simulations. This “stuff” can be “missing” data or parameter values. The principle of data augmentation can then be stated as follows: Augment the observed data Y with latent data Z so that the augmented posterior distribution p(θ | Y, Z) is “simple.” Make use of this simplicity in maximizing/marginalizing/calculating/sampling the observed posterior p(θ | Y).
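As a minimal illustration of this principle (not taken from the chapter), consider a two-component Gaussian mixture with unit variances. If the latent component labels Z were observed, the complete-data maximization would be trivial: weighted sample means and a sample proportion. EM exploits this by alternating an E-step (compute the expected labels given current parameters) with an M-step (the simple closed-form maximization). The function name `em_mixture` and the specific initialization are illustrative choices, not notation from the book:

```python
import math
import random

def em_mixture(y, iters=200):
    """EM for a two-component Gaussian mixture with unit variances.

    Parameters: component means mu1, mu2 and mixing weight pi.
    The latent labels Z make the complete-data M-step a closed form.
    """
    # Crude but serviceable initialization: the two extremes of the data.
    mu1, mu2, pi = min(y), max(y), 0.5
    for _ in range(iters):
        # E-step: posterior probability r_i that y_i came from component 1,
        # i.e. E[Z_i | y_i, current parameters].
        r = []
        for yi in y:
            a = pi * math.exp(-0.5 * (yi - mu1) ** 2)
            b = (1.0 - pi) * math.exp(-0.5 * (yi - mu2) ** 2)
            r.append(a / (a + b))
        # M-step: with the expected labels in hand, the maximizers are
        # just weighted averages and a weighted proportion.
        s = sum(r)
        mu1 = sum(ri * yi for ri, yi in zip(r, y)) / s
        mu2 = sum((1.0 - ri) * yi for ri, yi in zip(r, y)) / (len(y) - s)
        pi = s / len(y)
    return mu1, mu2, pi

# Simulated data: 200 draws near -2 and 200 draws near 3.
random.seed(0)
y = [random.gauss(-2.0, 1.0) for _ in range(200)] \
    + [random.gauss(3.0, 1.0) for _ in range(200)]
mu1, mu2, pi = em_mixture(y)
```

Each iteration replaces the intractable observed-data maximization with the simple augmented-data one, which is exactly the principle stated above.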
Copyright information
© 1996 Springer-Verlag New York, Inc.
Cite this chapter
Tanner, M.A. (1996). The EM Algorithm. In: Tools for Statistical Inference. Springer Series in Statistics. Springer, New York, NY. https://doi.org/10.1007/978-1-4612-4024-2_4
Publisher Name: Springer, New York, NY
Print ISBN: 978-1-4612-8471-0
Online ISBN: 978-1-4612-4024-2