Setting the Number of Clusters in K-Means Clustering

  • Myung-Hoe Huh

Summary

K-means clustering is an efficient non-hierarchical clustering method that has become widely used in data mining. In applying the method, however, one needs to specify k, the number of clusters, a priori. In this short paper, we propose an exploratory procedure for setting k using Euclidean and/or Mahalanobis inter-point distances.
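
As a rough illustration only (not the procedure proposed in the paper, which is given in the full text), the Python sketch below fits k-means for several candidate values of k and reports the average within-cluster inter-point distance under both Euclidean and Mahalanobis metrics. The use of scikit-learn's KMeans, SciPy's pdist, the Iris data, and the elbow-style comparison are assumptions made for this example.

# Illustrative sketch only: compare candidate values of k by the average
# within-cluster inter-point distance under Euclidean and Mahalanobis
# metrics. This is NOT the procedure proposed in the paper; it only
# illustrates the kind of distance-based diagnostic the abstract mentions.
import numpy as np
from scipy.spatial.distance import pdist
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris

X = load_iris().data                           # Iris data, as in the keywords
VI = np.linalg.inv(np.cov(X, rowvar=False))    # inverse covariance for Mahalanobis

def mean_within_cluster_distance(X, labels, metric, **kwargs):
    # Average pairwise distance between points sharing a cluster label.
    dists = []
    for c in np.unique(labels):
        members = X[labels == c]
        if len(members) > 1:
            dists.append(pdist(members, metric=metric, **kwargs).mean())
    return float(np.mean(dists))

for k in range(2, 7):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    d_euc = mean_within_cluster_distance(X, labels, "euclidean")
    d_mah = mean_within_cluster_distance(X, labels, "mahalanobis", VI=VI)
    print(f"k={k}: mean within-cluster distance  Euclidean={d_euc:.3f}  Mahalanobis={d_mah:.3f}")

A pronounced drop in these averages that levels off beyond some k is one exploratory signal for choosing the number of clusters.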

Keywords

Mahalanobis distance, Iris data, multivariate normal distribution, exploratory procedure, rock crab

Copyright information

© The Institute of Statistical Mathematics 2002

Authors and Affiliations

  • Myung-Hoe Huh
  1. Dept. of Statistics, Korea University, Seoul, Korea
