General Conditional Sampling
The fundamental idea underlying all Markov chain Monte Carlo algorithms is the construction of implementable Markov transition rules that leave the target distribution π(x) invariant. Although the Metropolis-Hastings recipe for constructing such a Markov chain is simple and powerful, a potential problem with the Metropolis algorithm, as explained in the previous chapter, is that the proposal function is often chosen out of convenience and can be somewhat too "arbitrary." In contrast, the Markov transition rules of the Gibbs sampler are built upon conditional distributions derived from the target distribution π(x). In this chapter, we describe a more general form of conditional sampling, partial resampling, introduced in Goodman and Sokal (1989) and generalized in Liu and Sabatti (2000).
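To make the contrast concrete, the following sketch (not from this chapter; a minimal illustration, assuming a standard bivariate normal target with correlation ρ) shows a two-component Gibbs sampler. Because each full conditional of this target is itself Gaussian — x | y ~ N(ρy, 1 − ρ²) and symmetrically for y | x — alternately sampling from the conditionals leaves the joint target invariant, with no proposal function to tune.

```python
import math
import random

def gibbs_bivariate_normal(rho, n_iter, seed=0):
    """Gibbs sampler for a standard bivariate normal with correlation rho.

    Each full conditional is Gaussian: x | y ~ N(rho*y, 1 - rho^2),
    and symmetrically for y | x, so each update is an exact draw from
    a conditional of the target and the joint target stays invariant.
    """
    rng = random.Random(seed)
    sd = math.sqrt(1.0 - rho * rho)   # conditional standard deviation
    x, y = 0.0, 0.0                   # arbitrary starting point
    samples = []
    for _ in range(n_iter):
        x = rng.gauss(rho * y, sd)    # draw x from pi(x | y)
        y = rng.gauss(rho * x, sd)    # draw y from pi(y | x)
        samples.append((x, y))
    return samples

# Discard an initial burn-in, then check marginal mean and E[xy] ~ rho.
draws = gibbs_bivariate_normal(rho=0.8, n_iter=20000)
burn = draws[5000:]
mean_x = sum(x for x, _ in burn) / len(burn)
corr = sum(x * y for x, y in burn) / len(burn)
```

Unlike a Metropolis chain, no candidate is ever rejected here; the price is that the full conditionals must be available in a form one can sample from exactly.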
Keywords: Conditional Distribution · Gibbs Sampler · Markov Random Field · Target Distribution · Markov Chain Monte Carlo Algorithm