
Simulation and Markov Chain Monte Carlo


Part of the book series: Springer Texts in Statistics ((STS))

Abstract

Simulation is a computer-based exploratory exercise that aids in understanding how the behavior of a random, or even a deterministic, process changes in response to changes in input or the environment. It is essentially the only option left when exact mathematical calculations are impossible, or require an amount of effort that the user is not willing to invest. Even when the mathematical calculations are quite doable, a preliminary simulation can be very helpful in guiding the researcher to theorems that were not a priori obvious or conjectured, and also in identifying the more productive corners of a particular problem. Although simulation in itself is a machine-based exercise, credible simulation must be based on appropriate theory: a simulation algorithm must be theoretically justified before we use it. This chapter gives a fairly broad introduction to the classic theory and techniques of probabilistic simulation, and also to some of the modern advances in simulation, particularly Markov chain Monte Carlo (MCMC) methods based on ergodic Markov chain theory.
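As a concrete illustration of the MCMC methods the chapter introduces, here is a minimal random-walk Metropolis sampler in Python. It is a generic sketch, not the chapter's own code: the function name `metropolis`, the Gaussian proposal, and the standard normal target are all illustrative choices, and the target density is supplied on the log scale only up to a normalizing constant, which is precisely the situation in which MCMC is useful.

```python
import math
import random

def metropolis(log_target, x0, n_steps, proposal_sd=1.0, seed=0):
    """Random-walk Metropolis sampler for a one-dimensional target.

    log_target: unnormalized log-density of the target distribution.
    Returns the full chain of sampled states.
    """
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(n_steps):
        # Propose a symmetric Gaussian random-walk move.
        y = x + rng.gauss(0.0, proposal_sd)
        # Accept with probability min(1, pi(y)/pi(x)), on the log scale.
        if math.log(rng.random()) < log_target(y) - log_target(x):
            x = y
        samples.append(x)
    return samples

# Target: standard normal, via its unnormalized log-density -x^2/2.
draws = metropolis(lambda x: -0.5 * x * x, x0=0.0, n_steps=50_000)
mean = sum(draws) / len(draws)
var = sum((d - mean) ** 2 for d in draws) / len(draws)
```

By the ergodic theory underlying MCMC, the empirical mean and variance of the chain converge to those of the target (here 0 and 1), even though successive draws are dependent; assessing how fast they converge is the subject of the convergence-rate and diagnostic literature surveyed in the chapter.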



Author information


Correspondence to Anirban DasGupta.


Copyright information

© 2011 Springer Science+Business Media, LLC


Cite this chapter

DasGupta, A. (2011). Simulation and Markov Chain Monte Carlo. In: Probability for Statistics and Machine Learning. Springer Texts in Statistics. Springer, New York, NY. https://doi.org/10.1007/978-1-4419-9634-3_19

