Markov Chain Monte Carlo

  • Ronald W. Shonkwiler
  • Franklin Mendivil
Part of the Undergraduate Texts in Mathematics book series (UTM)


A Markov chain extends the idea of a single probabilistic experiment on the outcome space Ω to a sequence of experiments on Ω, one for every t = 0, 1, …. Letting X_t denote the t-th outcome, we say that the process moves from state X_{t-1} to state X_t on the t-th iteration. The other major novelty here is that the probabilities governing the next move can depend on the present state. In fact, it is usually the case that from any given state x it is possible to move to only a small subset of Ω, called the neighborhood of x.
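The mechanism described above can be sketched in a few lines of code. The three-state "weather" chain below is a hypothetical example, not one from the text: each row of the transition table P gives the probabilities of the next state given only the current one.

```python
import random

# Hypothetical 3-state Markov chain: P[x][y] is the probability of
# moving from state x to state y. Each row sums to 1, and the next
# move depends only on the current state, not on the history.
P = {
    "sunny":  {"sunny": 0.7, "cloudy": 0.2, "rainy": 0.1},
    "cloudy": {"sunny": 0.3, "cloudy": 0.4, "rainy": 0.3},
    "rainy":  {"sunny": 0.2, "cloudy": 0.4, "rainy": 0.4},
}

def step(state, rng):
    """Draw the next state from the row of P for the current state."""
    neighbors = list(P[state])
    weights = [P[state][s] for s in neighbors]
    return rng.choices(neighbors, weights=weights, k=1)[0]

def run_chain(x0, n, seed=0):
    """Generate the sequence X_0, X_1, ..., X_n starting from x0."""
    rng = random.Random(seed)
    path = [x0]
    for _ in range(n):
        path.append(step(path[-1], rng))
    return path

print(run_chain("sunny", 10))
```

Here the "neighborhood" of each state happens to be all of Ω; in larger chains most rows of P would be zero, so `step` would only ever move to a small set of neighboring states.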

It turns out that this setup of a recurring probabilistic process has wide applicability. Examples include the moment-to-moment changes of a thermodynamic system, the changes in a species’ DNA sequence wrought by mutations, the step-by-step folding of a protein molecule, the day-to-day price of a stock, and a gambler’s fortune from gamble to gamble, among many others. The crucial aspect of a Markov chain is that the system must evolve from one moment to the next in a random way, but depending only on the state of the system at the given moment and not on the entire history.

As the process is performed repeatedly, what conclusions can be drawn about it?





Copyright information

© Springer Science+Business Media, LLC 2009

Authors and Affiliations

  1. School of Mathematics, Georgia Institute of Technology, Atlanta, USA
  2. Department of Mathematics and Statistics, Acadia University, Wolfville, Canada
