Population-Based Monte Carlo Methods
In parallel tempering (Section 10.4), the target distribution is embedded into a larger system that hosts a number of similar distributions differing from one another only in a temperature parameter. Parallel Markov chains are then run to sample from these distributions simultaneously. The key step that makes PT effective, and that connects the multiple distributions in the augmented system, is to propose configuration exchanges between two adjacent sampling chains. The effectiveness of this configuration-swap step can be loosely attributed to a population-based “learning” strategy: in high-temperature chains, radically different new configurations are allowed to arise, whereas in lower-temperature chains, a configuration is given opportunities to refine itself. By making exchanges, we can retain and improve the good configurations generated in the population by passing them to the low-temperature chains. One may feel, however, that this exchange step is a rather minimal form of interaction among the multiple chains in the “population.” More active interactions, such as those employed in a genetic algorithm, might be more helpful. In this chapter, we follow this thought to venture into population-based Monte Carlo strategies.
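The scheme described above can be sketched in a few lines of Python. This is a minimal illustration, not the book's implementation: it assumes a hypothetical one-dimensional double-well energy function, targets of the form π_β(x) ∝ exp(−β E(x)), a within-chain Metropolis move, and the standard swap acceptance probability min{1, exp[(β_i − β_j)(E(x_i) − E(x_j))]} for adjacent temperatures.

```python
import math
import random

def energy(x):
    # Hypothetical double-well energy with modes near x = -1 and x = 1;
    # the target at inverse temperature beta is proportional to exp(-beta * energy(x)).
    return (x * x - 1.0) ** 2

def parallel_tempering(betas, n_iter, step=0.5, seed=0):
    """Run one walker per inverse temperature in `betas` (ordered hot to cold)."""
    rng = random.Random(seed)
    xs = [rng.uniform(-2.0, 2.0) for _ in betas]
    for _ in range(n_iter):
        # Within-chain Metropolis update at each temperature.
        for i, beta in enumerate(betas):
            prop = xs[i] + rng.gauss(0.0, step)
            log_a = -beta * (energy(prop) - energy(xs[i]))
            if math.log(rng.random()) < log_a:
                xs[i] = prop
        # Propose a configuration swap between a random adjacent pair of chains.
        i = rng.randrange(len(betas) - 1)
        log_r = (betas[i] - betas[i + 1]) * (energy(xs[i]) - energy(xs[i + 1]))
        if math.log(rng.random()) < log_r:
            xs[i], xs[i + 1] = xs[i + 1], xs[i]
    return xs

# Hot chains (small beta) roam between modes; the cold chain (beta = 1.0)
# refines configurations it inherits through swaps.
states = parallel_tempering(betas=[0.2, 0.5, 1.0], n_iter=2000)
```

The swap acceptance ratio follows from detailed balance on the product distribution ∏_i π_{β_i}(x_i): exchanging x_i and x_j multiplies the joint density by exp[(β_i − β_j)(E(x_i) − E(x_j))].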
Keywords: Markov chain Monte Carlo · Crossover operator · Hidden unit · Target distribution · Markov chain Monte Carlo sampler