
Abstract

The goal of segmentation is to partition an image into a finite set of regions that are homogeneous in some (e.g., statistical) sense; segmentation is thus an intrinsically discrete problem. Bayesian approaches to segmentation use priors to impose spatial coherence; the discrete nature of segmentation demands priors defined on discrete-valued label fields, which leads to difficult combinatorial problems.

This paper presents a formulation that allows using continuous priors, namely Gaussian fields, for image segmentation. Our approach completely avoids the combinatorial nature of standard Bayesian approaches to segmentation. Moreover, it is fully general: it can be used in supervised, unsupervised, or semi-supervised modes, and with any probabilistic observation model (intensity, multispectral, or texture features).
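
The Gaussian field prior mentioned above is, concretely, a quadratic log-density on a real-valued field defined over the pixel grid. The following minimal Python sketch is given only as an illustration: it builds the graph Laplacian of a 4-connected pixel grid and evaluates the corresponding (unnormalized) log-prior. The 4-connected neighborhood and the smoothness weight tau are assumptions made here, not details taken from the paper.

```python
import numpy as np

def grid_laplacian(h, w):
    """Combinatorial graph Laplacian of a 4-connected h-by-w pixel grid."""
    n = h * w
    L = np.zeros((n, n))
    for r in range(h):
        for c in range(w):
            i = r * w + c
            # connect each pixel to its right and bottom neighbors (4-connectivity)
            for dr, dc in ((0, 1), (1, 0)):
                rr, cc = r + dr, c + dc
                if rr < h and cc < w:
                    j = rr * w + cc
                    L[i, i] += 1; L[j, j] += 1
                    L[i, j] -= 1; L[j, i] -= 1
    return L

def gaussian_field_log_prior(z, L, tau=1.0):
    """Unnormalized Gaussian-field log-prior: -(tau/2) * z^T L z.

    The quadratic form penalizes spatially rough fields z, so this prior
    favors smooth real-valued fields over the image grid.
    """
    return -0.5 * tau * float(z @ L @ z)
```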

To use continuous priors for image segmentation, we adopt a formulation that is common in Bayesian machine learning: the introduction of hidden real-valued fields to which the region labels are probabilistically related. Since these hidden fields are real-valued, we can adopt any type of spatial prior for continuous-valued fields, such as a Gaussian prior. We show how, under this model, Bayesian maximum a posteriori (MAP) segmentation is carried out by a (generalized) EM algorithm. Experiments on synthetic and real data show that the proposed approach performs very well at a low computational cost.
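
To make the generalized-EM scheme described above concrete, here is a toy Python sketch under stated assumptions: the hidden real-valued fields are linked to label probabilities through a multinomial logistic (softmax) function, the observation model enters only through per-pixel class log-likelihoods, and the M-step is a single fixed-step gradient-ascent update on the penalized expected complete log-likelihood. The function names and these specific choices are illustrative, not the exact updates of the paper.

```python
import numpy as np

def softmax(z):
    """Row-wise softmax: per-pixel class probabilities from real-valued fields."""
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def gem_segment(loglik, L, tau=1.0, step=0.5, n_iter=200):
    """Toy generalized-EM loop for hidden-field segmentation.

    loglik : (n_pixels, K) array of log p(y_i | class k) from any observation model.
    L      : (n_pixels, n_pixels) graph Laplacian encoding the Gaussian field prior.
    Returns hard per-pixel labels and the posterior class probabilities.
    """
    n, K = loglik.shape
    z = np.zeros((n, K))                        # one hidden real-valued field per class
    for _ in range(n_iter):
        # E-step: posterior class probabilities given the current fields,
        # i.e., softmax prior times likelihood, renormalized per pixel.
        p = softmax(z + loglik)
        # Generalized M-step: one gradient-ascent step combining the data term
        # (p - softmax(z)) and the Gaussian-prior gradient (-tau * L z).
        grad = (p - softmax(z)) - tau * (L @ z)
        z = z + step * grad
    p = softmax(z + loglik)
    return p.argmax(axis=1), p
```

For a small synthetic test, grid_laplacian from the earlier sketch can supply L, and loglik can hold the per-pixel log-densities of, e.g., two Gaussian intensity models; increasing tau yields smoother, more spatially coherent label maps.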

Keywords

Image segmentation · Observation model · Gaussian prior · Posterior class probability · IEEE CVPR


Copyright information

© Springer-Verlag Berlin Heidelberg 2005

Authors and Affiliations

  • Mário A. T. Figueiredo
  1. Instituto de Telecomunicações and Department of Electrical and Computer Engineering, Instituto Superior Técnico, Lisboa, Portugal
