Feature Selection for Unsupervised Learning via Comparison of Distance Matrices

  • Stephan Dreiseitl
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8111)


Feature selection for unsupervised learning is generally harder than for supervised learning, because the former lacks the class information of the latter, and thus an obvious way to measure the quality of a feature subset. In this paper, we propose a new method based on representing data sets by their distance matrices and judging feature combinations by how well the distance matrix computed from only these features resembles the distance matrix of the full data set. Using artificial data for which the relevant features were known, we observed that the results depend on the data dimensionality, the fraction of relevant features, the overlap between clusters in the relevant feature subspaces, and the measure used to compare distance matrices. Our method consistently achieved detection rates of relevant features above 80% across a wide variety of experimental configurations.
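The core idea described in the abstract can be sketched in a few lines of code. This is an illustrative reconstruction, not the authors' exact implementation: here Pearson correlation of the upper-triangular distance entries stands in for the similarity measure, whereas the paper compares several ways of measuring the similarity of distance matrices.

```python
import numpy as np

def distance_matrix(X):
    # Pairwise Euclidean distances between the rows of X.
    diff = X[:, None, :] - X[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

def subset_score(X, features):
    # Similarity between the distance matrix of the full data set and
    # the distance matrix computed from a feature subset. Pearson
    # correlation of the upper-triangular entries is used here as one
    # possible similarity measure (an assumption for this sketch).
    full = distance_matrix(X)
    sub = distance_matrix(X[:, features])
    iu = np.triu_indices(X.shape[0], k=1)
    return np.corrcoef(full[iu], sub[iu])[0, 1]

# Toy example: two well-separated clusters in two informative
# features, plus one pure-noise feature.
rng = np.random.default_rng(0)
cluster1 = rng.normal(0.0, 0.1, size=(20, 2))
cluster2 = rng.normal(3.0, 0.1, size=(20, 2))
informative = np.vstack([cluster1, cluster2])
noise = rng.normal(0.0, 1.0, size=(40, 1))
X = np.hstack([informative, noise])

# The informative subset should reproduce the full distance matrix
# far better than the noise feature alone.
print(subset_score(X, [0, 1]) > subset_score(X, [2]))
```

A feature-selection loop would then rank candidate subsets by `subset_score` and keep the highest-scoring one.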


Keywords: Unsupervised feature selection · Feature extraction · Dimensionality reduction · Distance matrix similarity





Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Stephan Dreiseitl
  1. Dept. of Software Engineering, Upper Austria University of Applied Sciences, Hagenberg, Austria
