Selecting the Minkowski Exponent for Intelligent K-Means with Feature Weighting

  • Renato Cordeiro de Amorim
  • Boris Mirkin
Part of the Springer Optimization and Its Applications book series (SOIA, volume 92)


Abstract

Recently, a three-stage version of K-Means has been introduced, in which not only clusters and their centers but also feature weights are adjusted to minimize the summed p-th power of the Minkowski p-distance between entities and the centroids of their clusters. The value of the Minkowski exponent p appears to be instrumental in the method's ability to recover clusters hidden in data. This paper addresses the problem of finding the best p for a Minkowski metric-based version of K-Means in each of two settings: semi-supervised and unsupervised. Experimental evidence is presented that the solutions found with the proposed approaches are sufficiently close to the optimum.
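The clustering criterion described above can be sketched as follows. This is a minimal illustrative implementation, not the authors' code: the function name, variable names (X, labels, centroids, weights), and the exact placement of the weight exponent are assumptions in the style of Minkowski weighted K-Means criteria.

```python
import numpy as np

def weighted_minkowski_criterion(X, labels, centroids, weights, p):
    """Summed p-th power of the feature-weighted Minkowski p-distance
    between each entity and the centroid of its cluster.

    Illustrative sketch only; not the authors' notation or code.
    X         : (n_entities, n_features) data matrix
    labels    : cluster index of each entity
    centroids : (n_clusters, n_features) cluster centers
    weights   : (n_features,) feature weights
    p         : Minkowski exponent
    """
    total = 0.0
    for i, x in enumerate(X):
        c = centroids[labels[i]]
        # Each feature's contribution |x_v - c_v|^p is scaled by its
        # weight raised to the power p before summing.
        total += np.sum((weights ** p) * np.abs(x - c) ** p)
    return total
```

Minimizing this quantity over cluster assignments, centroids, and feature weights (with the weights constrained to sum to one per cluster in the published method) is what makes the choice of p consequential: different exponents change both the distance geometry and the influence of each feature.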


Keywords: Clustering · Minkowski metric · Feature weighting · K-Means



Copyright information

© Springer Science+Business Media New York 2014

Authors and Affiliations

  1. Department of Computing, Glyndŵr University, Wrexham, UK
  2. Department of Data Analysis and Machine Intelligence, National Research University Higher School of Economics, Moscow, Russian Federation
  3. Department of Computer Science, Birkbeck, University of London, London, UK
