Markov Chains and Their Convergence
When running an MCMC sampler, one is often struck by the fact that the sampler can produce desirable random samples from a target distribution by making a series of local changes to an arbitrary initial state. It is therefore natural to ask: What makes this operation work? Why can we obtain "typical samples" from a target distribution by conducting a series of local moves? A basic tool for studying the theoretical properties of these Monte Carlo algorithms is Markov chain theory.
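The phenomenon described above can be illustrated with a minimal sketch of a random-walk Metropolis sampler (one common MCMC scheme, not necessarily the one developed in this chapter): starting from an arbitrary state, each step proposes a small local move and accepts or rejects it based on the target density. The target here is a standard normal chosen purely for illustration, and all function names are hypothetical.

```python
import math
import random


def metropolis_sample(log_target, x0, n_steps, step_size=1.0, seed=0):
    """Random-walk Metropolis: repeated local moves from an arbitrary start.

    log_target: unnormalized log-density of the target distribution.
    Returns the list of visited states (the Markov chain's trajectory).
    """
    rng = random.Random(seed)
    x = x0
    chain = []
    for _ in range(n_steps):
        # Propose a small local change to the current state.
        proposal = x + rng.uniform(-step_size, step_size)
        # Accept with probability min(1, pi(proposal) / pi(x)).
        log_ratio = log_target(proposal) - log_target(x)
        if math.log(rng.random()) < log_ratio:
            x = proposal
        chain.append(x)
    return chain


# Illustrative target: standard normal, log pi(x) = -x^2/2 (up to a constant).
log_std_normal = lambda x: -0.5 * x * x

# Start far from the target's mass; the chain still converges to it.
chain = metropolis_sample(log_std_normal, x0=10.0, n_steps=20000)
burned = chain[5000:]  # discard burn-in before estimating moments
mean = sum(burned) / len(burned)
var = sum((x - mean) ** 2 for x in burned) / len(burned)
```

Despite the arbitrary initial state `x0=10.0`, the post-burn-in sample mean and variance approach the target's values (0 and 1), which is exactly the convergence behavior that Markov chain theory explains.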
Keywords: Markov Chain · Transition Rule · Target Distribution · Simple Random Walk · Coupling Time