Markov Random Fields and Gibbs Sampling

Markov chain theory models sequences of random variables with certain dependencies among themselves. In this chapter, we present Markov random field theory, which extends Markov chains to enable the modeling of local structures and interactions among random variables. Because Markov random fields can naturally model signals with both spatial and temporal configurations, they have been widely applied in areas such as image processing, computer vision, and multimedia computing.

We start this chapter by introducing important concepts and definitions of Markov random fields, followed by a description of Gibbs distributions and their equivalence to Markov random fields. We also describe the Gibbs sampling method, a special case of the Markov chain Monte Carlo method described in the previous chapter. At the end of this chapter, we provide a case study that describes our original work applying Markov random fields to the video foreground object segmentation task.
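To make the Gibbs sampling idea mentioned above concrete, the following is a minimal sketch (not drawn from the chapter itself) of a Gibbs sampler on a simple binary Markov random field, the Ising model. The function name, parameters, and the choice of the Ising model are illustrative assumptions; the key point is that each site is resampled from its full conditional distribution given only its Markov blanket (its grid neighbors).

```python
import math
import random

def gibbs_sample_ising(size=8, beta=0.8, sweeps=200, seed=0):
    """Gibbs sampling on a size x size Ising-model MRF (illustrative sketch).

    Each site takes a value in {-1, +1}; the joint distribution is the
    Gibbs distribution p(x) proportional to exp(beta * sum over neighboring
    pairs of x_i * x_j), which is Markov with respect to the grid graph.
    """
    rng = random.Random(seed)
    grid = [[rng.choice([-1, 1]) for _ in range(size)] for _ in range(size)]

    def neighbor_sum(i, j):
        # Sum of the 4-connected neighbors of site (i, j) inside the grid.
        total = 0
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < size and 0 <= nj < size:
                total += grid[ni][nj]
        return total

    for _ in range(sweeps):
        for i in range(size):
            for j in range(size):
                # Full conditional of site (i, j) given its Markov blanket:
                # p(x_ij = +1 | neighbors) = sigmoid(2 * beta * neighbor_sum)
                p_plus = 1.0 / (1.0 + math.exp(-2.0 * beta * neighbor_sum(i, j)))
                grid[i][j] = 1 if rng.random() < p_plus else -1
    return grid
```

Each full pass over the grid is one sweep of the Markov chain; after many sweeps the grid is approximately a draw from the Gibbs distribution, which is what makes the method a special case of Markov chain Monte Carlo.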


Keywords: Gibbs sampling, Markov random field, Markov chain Monte Carlo method, foreground object, Gibbs distribution


Copyright information

© Springer Science+Business Media, LLC 2007