
Accurate Modeling of Marginal Signal Distributions in 2D/3D Images

  • Ayman S. El-Baz
  • Georgy Gimel’farb
Chapter

Abstract

This chapter outlines how to model a multimodal empirical probability density or distribution function with a linear combination of continuous or discrete Gaussians (in the discrete case, the LCDG). The model is learned (estimated) in two expectation-maximization (EM) based steps: (a) a close initial approximation, followed by (b) an iterative refinement. Experiments show that the model approximates both the prominent modes of a complex function and the transitions between them more accurately than a conventional probability mixture, and hence that the proposed LCDG model can provide an accurate initial segmentation for any segmentation framework. A minimal sketch of the baseline estimation step is given after the abstract.
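
To make the two-step procedure concrete, the following is a minimal NumPy sketch of the conventional baseline only: EM fitting of a positive-weight Gaussian mixture to an empirical 1-D gray-level histogram. It is not the chapter's LCDG algorithm, which additionally admits signed subordinate components to model the transitions between modes; the function name em_gaussian_mixture, the even-spread initialization, and the synthetic bimodal histogram in the usage example are all illustrative assumptions, not material from the chapter.

    import numpy as np

    def em_gaussian_mixture(centers, counts, K, n_iter=200):
        """Fit a K-component Gaussian mixture to an empirical histogram
        (bin centers `centers`, bin counts `counts`) by EM.

        Illustrative sketch of the conventional positive-weight mixture
        that the chapter compares against; it is NOT the LCDG itself.
        """
        p = counts / counts.sum()                 # empirical distribution
        # Assumed initialization: means spread evenly over the signal range.
        w = np.full(K, 1.0 / K)
        mu = np.linspace(centers.min(), centers.max(), K)
        sd = np.full(K, (centers.max() - centers.min()) / (2.0 * K))
        for _ in range(n_iter):
            # E-step: posterior responsibility of each component per bin.
            pdf = np.exp(-0.5 * ((centers[:, None] - mu) / sd) ** 2) \
                  / (sd * np.sqrt(2.0 * np.pi))
            resp = w * pdf
            resp /= resp.sum(axis=1, keepdims=True) + 1e-300
            # M-step: histogram-weighted updates of the parameters.
            nk = (p[:, None] * resp).sum(axis=0)  # per-component mass
            w = nk / nk.sum()
            mu = (p[:, None] * resp * centers[:, None]).sum(axis=0) / (nk + 1e-300)
            var = (p[:, None] * resp * (centers[:, None] - mu) ** 2).sum(axis=0)
            sd = np.maximum(np.sqrt(var / (nk + 1e-300)), 1e-3)  # avoid collapse
        return w, mu, sd

    # Usage on a synthetic bimodal gray-level histogram (0..255):
    rng = np.random.default_rng(0)
    samples = np.concatenate([rng.normal(80, 12, 50_000),
                              rng.normal(170, 20, 50_000)])
    counts, edges = np.histogram(samples, bins=256, range=(0, 256))
    centers = 0.5 * (edges[:-1] + edges[1:])
    w, mu, sd = em_gaussian_mixture(centers, counts.astype(float), K=2)

In the chapter's scheme, a fit of this kind corresponds to step (a), the close initial approximation; step (b) then iteratively refines the deviations between the fitted model and the empirical distribution, which is where a plain mixture tends to misrepresent the transitions between modes.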

Copyright information

© Springer Science+Business Media, LLC 2011

Authors and Affiliations

  1. BioImaging Laboratory, Department of Bioengineering, University of Louisville, Louisville, USA
