Multimedia Tools and Applications, Volume 77, Issue 22, pp 29531–29549

Unsupervised feature selection with graph learning via low-rank constraint

  • Guangquan Lu
  • Bo Li
  • Weiwei Yang
  • Jian Yin

Abstract

Feature selection is one of the most important machine learning procedures, and it is widely used as a preprocessing step for classification and clustering methods. High-dimensional features are common in big data, and their characteristics hinder data processing, so spectral feature selection algorithms have received increasing attention from researchers. Most feature selection methods, however, treat the task as two separate steps: they first learn a similarity matrix from the original feature space (which may contain redundant features), and then conduct data clustering. Because of this limitation, they do not achieve good performance on classification and clustering tasks in big data applications. To address this problem, we propose an unsupervised feature selection method within a graph learning framework, which reduces the influence of redundant features and simultaneously imposes a low-rank constraint on the weight matrix. Moreover, we design a new objective function to handle this problem. We evaluate our approach on six benchmark datasets, and the empirical classification results show that it outperforms state-of-the-art feature selection approaches.
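
To make the idea concrete, here is a minimal sketch of the kind of joint objective the abstract describes, assuming a data matrix X, a feature weight matrix W, a learned similarity matrix S with graph Laplacian L_S, trade-off parameters α and β, and a target rank r; this notation is illustrative and is not necessarily the authors' exact formulation:

$$\min_{W,\,S}\ \operatorname{tr}\!\left(W^{\top} X^{\top} L_S\, X W\right) \;+\; \alpha \lVert S \rVert_F^2 \;+\; \beta \lVert W \rVert_{2,1} \quad \text{s.t.}\quad \operatorname{rank}(W) \le r,\;\; S\mathbf{1} = \mathbf{1},\;\; S \ge 0.$$

In a sketch of this kind, the trace term couples graph learning (through S) with feature weighting (through W), the ℓ2,1-norm drives the rows of W that correspond to redundant features toward zero, and the rank constraint encodes the low-rank assumption on the weight matrix; optimizing W and S jointly avoids the two-step pipeline criticized above.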

Keywords

Graph learning · Feature selection · Spectral clustering

Acknowledgements

This work is supported by the Research Foundation of the Science and Technology Plan Project of Guangdong Province (2013A011403001, 2014B030301007, 2015A030401057, 2016B030307002), by the Program for Science Research and Technology Development of Guangxi Province (15248003-8), and by the Science and Technology Development Project of Wuzhou (2014B01039). We would also like to thank the anonymous reviewers for their helpful comments and suggestions.

Copyright information

© Springer Science+Business Media, LLC 2017

Authors and Affiliations

  1. Institute of Logic and Cognition, Department of Philosophy, Sun Yat-sen University, Guangzhou, China
  2. Guangdong Key Laboratory of Big Data Analysis and Processing, Guangzhou, People's Republic of China
