
k*-Means — A Generalized k-Means Clustering Algorithm with Unknown Cluster Number

  • Yiu-ming Cheung
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2412)

Abstract

This paper presents a new clustering technique named the STepwise Automatic Rival-penalized (STAR) k-means algorithm, denoted k*-means, which is a generalized version of the conventional k-means algorithm (MacQueen 1967). Not only is the new algorithm applicable to ellipse-shaped data clusters rather than just the ball-shaped ones handled by k-means, but it can also perform appropriate clustering without knowing the cluster number, by gradually penalizing the winning chance of extra seed points during the learning competition. Although the existing RPCL algorithm (Xu et al. 1993) can likewise select the cluster number automatically, by driving extra seed points far away from the input data set, its performance is highly sensitive to the choice of the de-learning rate, and to the best of our knowledge there is as yet no theoretical result to guide that choice. In contrast, the proposed k*-means algorithm does not need to determine this rate at all. We qualitatively analyze its rival-penalized mechanism, and the experimental results justify the analysis well.
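
To make the rival-penalization idea concrete, below is a minimal sketch of the classical RPCL update (Xu et al. 1993) that the abstract contrasts with k*-means: each input attracts its nearest seed point while the second-nearest one, the rival, is pushed away. This is an illustrative sketch, not the paper's algorithm; the function name rpcl, the learning rate eta_w, and the de-learning rate eta_r are hypothetical, and the frequency-sensitive winner selection of the full RPCL is omitted for brevity.

    import numpy as np

    def rpcl(X, k, eta_w=0.05, eta_r=0.002, n_iter=100, seed=0):
        """Minimal sketch of Rival Penalized Competitive Learning.

        For each input, the winning seed point moves toward the input
        (learning rate eta_w) while the second winner, the "rival", is
        pushed away (de-learning rate eta_r). With k larger than the
        true cluster number, the extra seed points are gradually driven
        out of the input data region. All rates here are assumptions,
        not values from the paper.
        """
        rng = np.random.default_rng(seed)
        # Initialize k seed points at randomly chosen inputs.
        seeds = X[rng.choice(len(X), size=k, replace=False)].astype(float)
        for _ in range(n_iter):
            for x in X[rng.permutation(len(X))]:
                dist = np.linalg.norm(seeds - x, axis=1)
                winner, rival = np.argsort(dist)[:2]
                seeds[winner] += eta_w * (x - seeds[winner])  # attract winner
                seeds[rival] -= eta_r * (x - seeds[rival])    # repel rival
        return seeds

After training, seed points left far from all inputs mark the extra clusters and can be discarded, which is how RPCL reveals the cluster number. The outcome, however, hinges on the value of eta_r, and that is precisely the sensitivity the proposed k*-means avoids by penalizing the winning chance of extra seed points instead.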

Keywords

Cluster number, seed point, true cluster, competitive learning, input data space

References

  1. S.C. Ahalt, A.K. Krishnamurthy, P. Chen, and D.E. Melton, “Competitive Learning Algorithms for Vector Quantization”, Neural Networks, Vol. 3, pp. 277–291, 1990.
  2. H. Akaike, “Information Theory and an Extension of the Maximum Likelihood Principle”, Proceedings of the Second International Symposium on Information Theory, pp. 267–281, 1973.
  3. H. Akaike, “A New Look at the Statistical Model Identification”, IEEE Transactions on Automatic Control, Vol. AC-19, pp. 716–723, 1974.
  4. H. Bozdogan, “Model Selection and Akaike’s Information Criterion: The General Theory and Its Analytical Extensions”, Psychometrika, Vol. 52, No. 3, pp. 345–370, 1987.
  5. J.B. MacQueen, “Some Methods for Classification and Analysis of Multivariate Observations”, Proceedings of the 5th Berkeley Symposium on Mathematical Statistics and Probability, Vol. 1, Berkeley, Calif.: University of California Press, pp. 281–297, 1967.
  6. G. Schwarz, “Estimating the Dimension of a Model”, The Annals of Statistics, Vol. 6, No. 2, pp. 461–464, 1978.
  7. L. Xu, “How Many Clusters?: A Ying-Yang Machine Based Theory for a Classical Open Problem in Pattern Recognition”, Proceedings of the IEEE International Conference on Neural Networks, Vol. 3, pp. 1546–1551, 1996.
  8. L. Xu, “Bayesian Ying-Yang Machine, Clustering and Number of Clusters”, Pattern Recognition Letters, Vol. 18, No. 11–13, pp. 1167–1178, 1997.
  9. L. Xu, A. Krzyżak, and E. Oja, “Rival Penalized Competitive Learning for Clustering Analysis, RBF Net, and Curve Detection”, IEEE Transactions on Neural Networks, Vol. 4, pp. 636–648, 1993. A preliminary version appeared in Proceedings of the 1992 International Joint Conference on Neural Networks, Vol. 2, pp. 665–670, 1992.

Copyright information

© Springer-Verlag Berlin Heidelberg 2002

Authors and Affiliations

  • Yiu-ming Cheung
  1. Department of Computer Science, Hong Kong Baptist University, Hong Kong
