
Convergence of Markov Chains

Part of the Universitext book series (UTX)
We consider a Markov chain X with invariant distribution π and investigate conditions under which the distribution of X_n converges to π as n → ∞. Essentially, it is necessary and sufficient that the state space of the chain cannot be decomposed into subspaces
  • that the chain does not leave, or

  • that the chain visits only periodically, e.g., only for odd n or only for even n (see the sketch after this list).
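To make the convergence statement concrete, here is a minimal numerical sketch, not taken from the chapter: the cycle length, the transition matrix P and the uniform π below are illustrative assumptions. It builds the transition matrix of a lazy random walk on a cycle of six states, which is irreducible and aperiodic, and prints how far the distribution of X_n is from π in total variation.

```python
import numpy as np

# Lazy random walk on a cycle of 6 states (illustrative choice, not from the chapter).
# Holding with probability 1/2 removes the period-2 behaviour of the plain walk on an
# even cycle; stepping to either neighbour with probability 1/4 keeps it irreducible.
n_states = 6
P = np.zeros((n_states, n_states))
for i in range(n_states):
    P[i, i] = 0.5                       # hold (laziness -> aperiodicity)
    P[i, (i - 1) % n_states] = 0.25     # step left
    P[i, (i + 1) % n_states] = 0.25     # step right

# P is doubly stochastic, so the uniform distribution is invariant.
pi = np.full(n_states, 1.0 / n_states)

# Start deterministically in state 0 and track the distribution mu_n = mu_0 P^n.
mu = np.zeros(n_states)
mu[0] = 1.0
for n in range(1, 51):
    mu = mu @ P
    if n % 10 == 0:
        # Total variation distance to pi shrinks geometrically in n.
        tv = 0.5 * np.abs(mu - pi).sum()
        print(f"n = {n:2d}   ||mu_n - pi||_TV = {tv:.6f}")
```

The holding probability is what removes the periodicity named in the second bullet: without it, the walk on a cycle of even length alternates between the even-numbered and odd-numbered states, so its distribution oscillates and does not converge to π.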

Keywords

Markov Chain · Random Walk · Transition Matrix · Ising Model · Gibbs Sampler

Copyright information

© Springer-Verlag London Limited 2008
