Population-Based Monte Carlo Methods

  • Jun S. Liu
Part of the Springer Series in Statistics book series (SSS)


In parallel tempering (PT; Section 10.4), the target distribution is embedded into a larger system that hosts a number of similar distributions differing from one another only in a temperature parameter. Parallel Markov chain Monte Carlo simulations are then conducted to sample from these distributions simultaneously. An important step, which makes PT effective and which connects the multiple distributions in the augmented system, is to propose configuration exchanges between two adjacent sampling chains. The attractiveness of this configuration-swap step can be loosely attributed to a population-based “learning” strategy: in high-temperature states, radically different new configurations are allowed to arise, whereas in lower-temperature states, a configuration is given opportunities to refine itself. By making exchanges, we can retain and improve the good configurations generated in the population by moving them into low-temperature chains. However, one may feel that this “exchange” step is a rather minimal interaction among the multiple chains in the “population.” More active interactions, such as those employed in a genetic algorithm, might be more helpful. In this chapter, we follow this thought to venture into population-based Monte Carlo strategies.
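The exchange step described above can be sketched in a few lines. The following is a minimal illustration, not code from the chapter: each chain runs a Metropolis update on the tempered target π(x)^{1/T}, and after every sweep a swap between a random adjacent pair of chains is proposed and accepted with the usual Metropolis ratio. The bimodal target and all parameter values are illustrative assumptions.

```python
import math
import random

def parallel_tempering(log_target, temps, n_iter, step=1.0, seed=0):
    """Sketch of parallel tempering: one Metropolis chain per temperature,
    with a swap proposal between an adjacent pair of chains each sweep."""
    rng = random.Random(seed)
    x = [0.0] * len(temps)  # current state of each chain
    samples = []            # draws from the cold (T = 1) chain
    for _ in range(n_iter):
        # Within-chain Metropolis update at each temperature
        for i, T in enumerate(temps):
            prop = x[i] + rng.gauss(0.0, step)
            log_a = (log_target(prop) - log_target(x[i])) / T
            if rng.random() < math.exp(min(0.0, log_a)):
                x[i] = prop
        # Propose exchanging configurations of a random adjacent pair
        i = rng.randrange(len(temps) - 1)
        log_a = (1.0 / temps[i] - 1.0 / temps[i + 1]) * (
            log_target(x[i + 1]) - log_target(x[i]))
        if rng.random() < math.exp(min(0.0, log_a)):
            x[i], x[i + 1] = x[i + 1], x[i]
        samples.append(x[0])  # record only the target-temperature chain
    return samples

# Illustrative bimodal target: equal mixture of N(-3, 1) and N(3, 1)
def log_target(x):
    return math.log(math.exp(-0.5 * (x + 3) ** 2)
                    + math.exp(-0.5 * (x - 3) ** 2))

samples = parallel_tempering(log_target, temps=[1.0, 3.0, 9.0], n_iter=20000)
```

A plain Metropolis chain at T = 1 would rarely cross the barrier between the two modes; here the hot chains roam freely and the swap moves pass their discoveries down to the cold chain, which is exactly the population-based "learning" described above.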





Copyright information

© Springer Science+Business Media New York 2004

Authors and Affiliations

  • Jun S. Liu
  1. Department of Statistics, Harvard University, Cambridge, USA
