Abstract
Simulation is a computer-based exploratory exercise that aids in understanding how the behavior of a random or even a deterministic process changes in response to changes in input or the environment. It is essentially the only option left when exact mathematical calculations are impossible, or require an amount of effort that the user is not willing to invest. Even when the mathematical calculations are quite doable, a preliminary simulation can be very helpful in guiding the researcher to theorems that were not a priori obvious or conjectured, and also in identifying the more productive corners of a particular problem. Although simulation in itself is a machine-based exercise, credible simulation must be based on appropriate theory: a simulation algorithm must be theoretically justified before we use it. This chapter gives a fairly broad introduction to the classic theory and techniques of probabilistic simulation, and also to some of the modern advances in simulation, particularly Markov chain Monte Carlo (MCMC) methods based on ergodic Markov chain theory.
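As a minimal illustration of the MCMC methods the chapter introduces, the sketch below implements a random-walk Metropolis sampler targeting the standard normal distribution. The function name, proposal scale, and seed are illustrative choices, not taken from the chapter; because the Gaussian random-walk proposal is symmetric, the Metropolis acceptance ratio reduces to the ratio of target densities.

```python
import math
import random

def metropolis_normal(n_samples, proposal_sd=1.0, seed=0):
    """Random-walk Metropolis sampler targeting the standard normal
    density, using a symmetric Gaussian proposal (illustrative sketch)."""
    rng = random.Random(seed)
    x = 0.0
    samples = []
    for _ in range(n_samples):
        y = x + rng.gauss(0.0, proposal_sd)
        # Symmetric proposal: acceptance probability is min(1, pi(y)/pi(x)),
        # and log(pi(y)/pi(x)) = -(y^2 - x^2)/2 for the standard normal target.
        log_ratio = -0.5 * (y * y - x * x)
        if rng.random() < math.exp(min(0.0, log_ratio)):
            x = y
        samples.append(x)
    return samples

draws = metropolis_normal(20000)
mean = sum(draws) / len(draws)
var = sum((d - mean) ** 2 for d in draws) / len(draws)
```

By the ergodic theorem for this chain, the empirical mean and variance of the draws should approach the target's values (0 and 1) as the number of samples grows, which is what a quick run confirms.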
© 2011 Springer Science+Business Media, LLC
Cite this chapter
DasGupta, A. (2011). Simulation and Markov Chain Monte Carlo. In: Probability for Statistics and Machine Learning. Springer Texts in Statistics. Springer, New York, NY. https://doi.org/10.1007/978-1-4419-9634-3_19
Print ISBN: 978-1-4419-9633-6
Online ISBN: 978-1-4419-9634-3