
Image and Pattern Clustering


Abstract

Clustering, or grouping samples that share similar features, is a recurrent problem in computer vision and pattern recognition. The core element of a clustering algorithm is its similarity measure, and information theory offers a wide range of such measures (not all of them proper metrics) whose optimization inspires clustering algorithms. Information theory also provides theoretical frameworks and principles for formulating the clustering problem and deriving effective algorithms. Clustering is closely related to the segmentation problem, already presented in Chapter 3; in both problems, finding the optimal number of clusters or regions is a challenging task. In the present chapter we cover this question in depth, exploring several criteria for model order selection.
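As a rough illustration of model order selection, the sketch below fits Gaussian mixtures with an increasing number of components and keeps the count that minimizes the Bayesian Information Criterion (BIC). This is a hedged stand-in, not the chapter's own (entropy-based) criteria: the synthetic data, the candidate range of component counts, and the use of scikit-learn's GaussianMixture are illustrative assumptions.

```python
# Minimal sketch: choosing the number of mixture components via BIC.
# The data and the range k = 1..6 are illustrative assumptions.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic data: three well-separated 2-D Gaussian blobs.
X = np.vstack([
    rng.normal(loc=c, scale=0.5, size=(100, 2))
    for c in ([0, 0], [5, 0], [0, 5])
])

# Fit a mixture for each candidate component count and record its BIC.
bics = []
for k in range(1, 7):
    gmm = GaussianMixture(n_components=k, random_state=0).fit(X)
    bics.append(gmm.bic(X))

best_k = int(np.argmin(bics)) + 1  # BIC: lower is better
print("BIC per k:", dict(enumerate(bics, start=1)))
print("Selected number of clusters:", best_k)
```

On data like the above, the BIC curve typically bottoms out at k = 3, matching the number of generating blobs; the chapter's entropy-based criteria address the same question with information-theoretic quantities instead of a penalized likelihood.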

All of these concepts are developed through the description and discussion of several information-theoretic clustering algorithms: Gaussian mixtures, the Information Bottleneck, Robust Information Clustering (RIC), and IT-based Mean Shift. At the end of the chapter we also discuss basic strategies for forming clustering ensembles.
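To make the Information Bottleneck concrete, here is a minimal sketch of its agglomerative variant (reference 8 below): clusters are merged greedily, at each step fusing the pair that loses the least mutual information with the relevance variable Y, where the merge cost is the weighted Jensen-Shannon divergence between the conditionals p(y|c). The toy joint distribution and the stopping point (two clusters) are illustrative assumptions, not values from the chapter.

```python
# Hedged sketch of agglomerative Information Bottleneck (Slonim & Tishby).
import numpy as np

def merge_cost(p_c, p_y_given_c, i, j):
    """Information loss for merging clusters i and j:
    (p_i + p_j) times the Jensen-Shannon divergence of p(y|i), p(y|j)."""
    pi, pj = p_c[i], p_c[j]
    w_i, w_j = pi / (pi + pj), pj / (pi + pj)
    p_bar = w_i * p_y_given_c[i] + w_j * p_y_given_c[j]
    def kl(p, q):
        mask = p > 0
        return np.sum(p[mask] * np.log2(p[mask] / q[mask]))
    js = w_i * kl(p_y_given_c[i], p_bar) + w_j * kl(p_y_given_c[j], p_bar)
    return (pi + pj) * js

# Toy joint distribution over 4 items (rows) and 3 relevance values (cols).
p_xy = np.array([[0.20, 0.05, 0.00],
                 [0.15, 0.10, 0.00],
                 [0.00, 0.05, 0.20],
                 [0.00, 0.05, 0.20]])
p_c = p_xy.sum(axis=1)             # cluster marginals p(c)
p_y_given_c = p_xy / p_c[:, None]  # conditionals p(y|c)
clusters = [[i] for i in range(len(p_c))]

while len(clusters) > 2:           # illustrative stopping point
    # Find the cheapest merge among all current cluster pairs.
    pairs = [(i, j) for i in range(len(clusters))
             for j in range(i + 1, len(clusters))]
    i, j = min(pairs, key=lambda ij: merge_cost(p_c, p_y_given_c, *ij))
    # Merge j into i, updating p(c) and p(y|c).
    p_new = p_c[i] + p_c[j]
    p_y_given_c[i] = (p_c[i] * p_y_given_c[i]
                      + p_c[j] * p_y_given_c[j]) / p_new
    p_c[i] = p_new
    clusters[i] += clusters[j]
    p_c = np.delete(p_c, j)
    p_y_given_c = np.delete(p_y_given_c, j, axis=0)
    del clusters[j]

print("Final clusters of items:", clusters)
```

On this toy distribution the zero-cost merge of items 2 and 3 (identical conditionals) happens first, followed by the cheap merge of items 0 and 1, recovering the two groups that are informative about Y.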

Keywords

Mutual Information · Gaussian Mixture Model · Shannon Entropy · Expectation Maximization Algorithm · Pattern Cluster

Key References

  1. A. Peñalver, F. Escolano, and J.M. Sáez. “EBEM: An Entropy-Based EM Algorithm for Gaussian Mixture Models”. International Conference on Pattern Recognition, Hong Kong, China (2006)
  2. A. Peñalver, F. Escolano, and J.M. Sáez. “Two Entropy-Based Methods for Learning Unsupervised Gaussian Mixture Models”. SSPR/SPR, LNCS (2006)
  3. M. Figueiredo and A.K. Jain. “Unsupervised Learning of Finite Mixture Models”. IEEE Transactions on Pattern Analysis and Machine Intelligence 24(3): 381–396 (2002)
  4. N. Srebro, G. Shakhnarovich, and S. Roweis. “An Investigation of Computational and Informational Limits in Gaussian Mixture Clustering”. International Conference on Machine Learning (2006)
  5. N. Tishby, F. Pereira, and W. Bialek. “The Information Bottleneck Method”. 37th Allerton Conference on Communication, Control and Computing (1999)
  6. N. Slonim, N. Friedman, and N. Tishby. “Multivariate Information Bottleneck”. Neural Computation 18: 1739–1789 (2006)
  7. J. Goldberger, S. Gordon, and H. Greenspan. “Unsupervised Image-Set Clustering Using an Information Theoretic Framework”. IEEE Transactions on Image Processing 15(2): 449–458 (2006)
  8. N. Slonim and N. Tishby. “Agglomerative Information Bottleneck”. In Proceedings of Neural Information Processing Systems (1999)
  9. W. Punch, A. Topchy, and A. Jain. “Clustering Ensembles: Models of Consensus and Weak Partitions”. IEEE Transactions on Pattern Analysis and Machine Intelligence 27(12): 1866–1881 (2005)

Copyright information

© Springer Verlag London Limited 2009
