Diagnosing Convergence

  • Christian P. Robert
  • George Casella
Part of the Springer Texts in Statistics book series (STS)

Abstract

In previous chapters, we have presented the theoretical foundations of MCMC algorithms and shown that, under fairly general conditions, the chains produced by these algorithms are ergodic, or even geometrically ergodic. While such developments are obviously necessary, they are nonetheless insufficient from the point of view of implementing MCMC methods: they do not directly yield ways of controlling the chain produced by an algorithm (in the sense of a stopping rule guaranteeing that the number of iterations is sufficient). In other words, while necessary as mathematical proofs of the validity of MCMC algorithms, general convergence results do not tell us when to stop these algorithms and produce our estimates. For instance, the mixture model of Example 10.18 is fairly well behaved from a theoretical point of view, but Figure 10.3 indicates that the number of iterations used is definitely insufficient.
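To make the distinction concrete, the following sketch runs several random-walk Metropolis chains from overdispersed starting values and computes the Gelman–Rubin potential scale reduction factor, one common convergence diagnostic (not the specific method of this chapter). The standard-normal target, the starting points, and the chain length are illustrative assumptions, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def metropolis_chain(n_iter, start, scale=1.0):
    """Random-walk Metropolis chain targeting a standard normal density."""
    x = start
    chain = np.empty(n_iter)
    for t in range(n_iter):
        prop = x + rng.normal(scale=scale)
        # Accept with probability min(1, pi(prop)/pi(x)) for pi = N(0, 1)
        if np.log(rng.uniform()) < 0.5 * (x**2 - prop**2):
            x = prop
        chain[t] = x
    return chain

def gelman_rubin(chains):
    """Potential scale reduction factor (R-hat) from m parallel chains."""
    chains = np.asarray(chains)
    m, n = chains.shape
    means = chains.mean(axis=1)
    B = n * means.var(ddof=1)               # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()   # within-chain variance
    var_hat = (n - 1) / n * W + B / n       # pooled variance estimate
    return np.sqrt(var_hat / W)

# Several chains started from overdispersed initial values
chains = [metropolis_chain(2000, s) for s in (-10.0, 0.0, 5.0, 10.0)]
rhat = gelman_rubin(chains)
print(rhat)  # values near 1 suggest, but never guarantee, convergence
```

No fixed number of iterations certified by an ergodicity theorem appears anywhere above: the decision to stop rests entirely on empirical diagnostics such as this one, which is precisely the gap this chapter addresses.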

Keywords

Markov chain · Stationary distribution · Gibbs sampler · Importance sampling · Transition kernel
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.

Copyright information

© Springer Science+Business Media New York 2004

Authors and Affiliations

  • Christian P. Robert¹
  • George Casella²
  1. CEREMADE, Université Paris Dauphine, Paris Cedex 16, France
  2. Department of Statistics, University of Florida, Gainesville, USA