Explorations in Monte Carlo Methods, pp. 101–138

# Markov Chain Monte Carlo

## Abstract

A *Markov chain* extends the idea of a single probabilistic experiment on the outcome space Ω to a sequence of experiments on Ω, one for every *t* = 0, 1, …. Letting *X_t* denote the *t*th outcome, we say that the process moves from state *X_{t−1}* to state *X_t* on the *t*th iteration. The other major novelty here is that the probabilities governing the next move can depend on the present state. In fact, it is usually the case that from any given state *x* it is possible to move only to a small subset of Ω, called the *neighborhood* of *x*.
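The setup above can be sketched in a few lines of code. The following is a minimal illustration, not taken from the text: a chain on a hypothetical three-element outcome space Ω = {0, 1, 2}, where each row of the (invented) transition matrix `P` gives the probabilities of moving out of one state, and the zero entries define each state's neighborhood.

```python
import random

# Hypothetical transition matrix for a chain on Omega = {0, 1, 2}.
# Row x lists the probabilities of moving from state x to each state;
# the nonzero entries in row x define the neighborhood of x.
P = [
    [0.50, 0.50, 0.00],  # from state 0, only {0, 1} are reachable
    [0.25, 0.50, 0.25],  # from state 1, every state is reachable
    [0.00, 0.50, 0.50],  # from state 2, only {1, 2} are reachable
]

def step(x):
    """One move of the chain: draw X_t given the present state X_{t-1} = x."""
    return random.choices(range(len(P)), weights=P[x])[0]

def run(x0, n_steps):
    """Generate the sequence X_0, X_1, ..., X_n starting from state x0."""
    path = [x0]
    for _ in range(n_steps):
        path.append(step(path[-1]))
    return path
```

Note that `step` consults only the present state `x`, exactly as the definition requires: the probabilities governing the next move depend on where the chain is now, not on how it got there.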

It turns out that this setup of a recurring probabilistic process has wide applicability. Some examples are the changes from moment to moment of a thermodynamic system, the changes in a species' DNA sequence wrought by mutations, the step-by-step folding of a protein molecule, the day-to-day price of a stock, a gambler's fortune from gamble to gamble, and many others. The crucial aspect of a Markov chain is that the system evolves from one moment to the next in a random way that depends only on the state of the system at the given moment, and not on the entire history.
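The gambler's fortune mentioned above makes the "no memory of the history" property concrete. In the hypothetical sketch below (the win probability `p` and stakes are assumptions, not from the text), the next fortune is computed from the current fortune alone:

```python
import random

def next_fortune(fortune, p=0.5):
    """One gamble: the fortune rises or falls by 1 with probability p and 1 - p.
    Only the current fortune is consulted -- the Markov property."""
    return fortune + (1 if random.random() < p else -1)

def play(start, n_gambles, p=0.5):
    """Record the gambler's fortune over a sequence of gambles."""
    fortunes = [start]
    for _ in range(n_gambles):
        fortunes.append(next_fortune(fortunes[-1], p))
    return fortunes
```

Two gamblers who both hold 10 units at some moment face identical prospects from then on, regardless of whether one arrived there by winning and the other by losing.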

As the process is performed repeatedly, what conclusions can be drawn about it?

## Keywords

Markov chain, Markov chain Monte Carlo, transition matrix, Ising model, Markov chain Monte Carlo algorithm