This chapter covers a class of models where a rather simple distribution is made more complex, and less informative, by a mechanism that mixes together several known or unknown distributions. This representation is naturally called a mixture of distributions. Inference about the parameters of the mixture components and the weights is called mixture estimation, while recovery of the original distribution of each observation is called classification (or, more exactly, unsupervised classification, to distinguish it from the supervised classification discussed in Chapter 8). Both aspects almost always require advanced computational tools, since even the representation of the posterior distribution may be complicated. In fact, Bayesian inference for these models was not correctly treated until the introduction of MCMC algorithms in the early 1990s.
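To fix ideas, a k-component mixture density can be written as

f(x \mid \theta, p) = \sum_{j=1}^{k} p_j\, f_j(x \mid \theta_j), \qquad p_j \ge 0, \quad \sum_{j=1}^{k} p_j = 1,

and classification amounts to recovering, for each observation x_i, the latent component indicator z_i \in \{1, \dots, k\} that generated it. As an illustration of why MCMC makes this inference tractable, the following sketch implements the standard data-augmentation Gibbs sampler for a two-component normal mixture with unit variances; the prior choices (a Beta(1, 1) prior on the weight and N(0, 10^2) priors on the means) and all numerical values are assumptions made for the example, not the chapter's specific setup.

    import numpy as np

    rng = np.random.default_rng(0)

    # Simulate data from a two-component normal mixture (illustrative values).
    n, p_true = 200, 0.3
    z = rng.random(n) < p_true
    x = np.where(z, rng.normal(-2.0, 1.0, n), rng.normal(2.0, 1.0, n))

    # Gibbs sampler: Beta(1, 1) prior on the weight p, N(0, 10^2) priors on
    # the two means, variances fixed at 1 (all hypothetical choices).
    T = 2000
    p, mu = 0.5, np.array([-1.0, 1.0])
    draws = np.empty((T, 3))
    for t in range(T):
        # 1. Allocate each observation to a component given current parameters.
        w1 = p * np.exp(-0.5 * (x - mu[0]) ** 2)
        w2 = (1 - p) * np.exp(-0.5 * (x - mu[1]) ** 2)
        z = rng.random(n) < w1 / (w1 + w2)
        n1 = z.sum()
        # 2. Update the weight from its Beta full conditional.
        p = rng.beta(1 + n1, 1 + n - n1)
        # 3. Update each mean from its normal full conditional.
        for j, mask in enumerate((z, ~z)):
            nj, sj = mask.sum(), x[mask].sum()
            var = 1.0 / (nj + 1.0 / 100)
            mu[j] = rng.normal(var * sj, np.sqrt(var))
        draws[t] = p, mu[0], mu[1]

    # Posterior means of (p, mu1, mu2) after a burn-in of 500 sweeps.
    print(draws[500:].mean(axis=0))

Each sweep alternates between simulating the allocations given the parameters and the parameters given the allocations. Label switching, a well-known difficulty with mixture posteriors, is ignored here for brevity.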
This chapter is also the right place to introduce the concept of “variable dimension models,” where the structure (dimension) of the model is determined a posteriori from the data. This opens new perspectives for Bayesian inference, such as model averaging, but calls for specific simulation algorithms such as reversible jump MCMC.
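In generic notation (not the book's specific derivation), a variable dimension model places a prior on the model index k as well as on the parameter \theta_k of model k, leading to the joint posterior

\pi(k, \theta_k \mid x) \propto \pi(k)\, \pi(\theta_k \mid k)\, f(x \mid k, \theta_k).

A reversible jump move from (k, \theta_k) to (k', \theta_{k'}), built by drawing an auxiliary variable u \sim q(u) and applying a one-to-one map (\theta_{k'}, u') = T(\theta_k, u), is accepted with probability

\min\left\{1,\ \frac{\pi(k', \theta_{k'} \mid x)\, \rho(k' \to k)\, q'(u')}{\pi(k, \theta_k \mid x)\, \rho(k \to k')\, q(u)} \left|\det \frac{\partial(\theta_{k'}, u')}{\partial(\theta_k, u)}\right|\right\},

where \rho denotes the probability of proposing each move and the Jacobian term accounts for the change of dimension. This is Green's (1995) general formulation rather than the particular sampler developed in the chapter.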