Multimedia Tools and Applications, Volume 77, Issue 22, pp 29551–29572

Hypergraph expressing low-rank feature selection algorithm

  • Yue Fang
  • Yangding Li (corresponding author)
  • Cong Lei
  • Yonggang Li
  • Xuelian Deng


Dimensionality reduction has attracted extensive attention in machine learning. It usually comprises two types: feature selection and subspace learning. Many researchers have demonstrated that dimensionality reduction is meaningful for real applications. Unfortunately, most of these works use feature selection and subspace learning independently. This paper proposes a novel supervised feature selection algorithm that takes subspace learning into account. Specifically, it employs an ℓ2,1-norm regularizer and an ℓ2,p-norm regularizer to conduct sample denoising and feature selection, respectively, by exploring the correlation structure of the data. It then imposes two constraints (i.e., a hypergraph constraint and a low-rank constraint) to capture the local structure and the global structure of the data, respectively. Finally, it uses an alternating optimization framework that iteratively optimizes each parameter while fixing the others until the algorithm converges. Extensive experiments show that the new supervised feature selection method achieves excellent results on eighteen public data sets.
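The core mechanism the abstract describes — an ℓ2,1-norm regularizer that drives whole rows of the projection matrix toward zero, solved by alternating between a closed-form update and a reweighting step — can be sketched with a minimal iteratively reweighted least squares (IRLS) solver. This is an illustrative simplification under stated assumptions: it omits the paper's hypergraph and low-rank constraints and the ℓ2,p-norm term, and the function name and parameters below are hypothetical, not the authors' implementation.

```python
import numpy as np

def l21_feature_selection(X, Y, lam=1.0, n_iter=50, eps=1e-8):
    """IRLS sketch for  min_W ||X W - Y||_F^2 + lam * ||W||_{2,1}.

    X: (n_samples, n_features) data matrix
    Y: (n_samples, n_targets) label/response matrix
    Returns W whose row norms rank the features: rows shrunk
    toward zero mark features that can be discarded.
    """
    n, d = X.shape
    D = np.eye(d)  # reweighting matrix, updated each iteration
    W = np.zeros((d, Y.shape[1]))
    for _ in range(n_iter):
        # Closed-form update with the other variable (D) held fixed.
        W = np.linalg.solve(X.T @ X + lam * D, X.T @ Y)
        # Reweight: D_ii = 1 / (2 ||w_i||_2); eps guards division by zero.
        row_norms = np.sqrt((W ** 2).sum(axis=1)) + eps
        D = np.diag(1.0 / (2.0 * row_norms))
    return W
```

Selecting the top-k features then amounts to sorting the rows of W by their ℓ2 norms; the alternating fix-one-optimize-the-other pattern here mirrors, in miniature, the optimization framework the abstract refers to.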


Keywords: Hypergraph · Low-rank · Feature selection



This work was supported in part by the China Key Research Program (Grant No. 2016YFB1000905), the China 973 Program (Grant No. 2013CB329404), the China 1000-Plan National Distinguished Professorship, the National Natural Science Foundation of China (Grant Nos. 61573270, 61672177, 61363009 and 81701780), the Guangxi Natural Science Foundation (Grant No. 2015GXNSFCB139011), the Guangxi High Institutions Program of Introducing 100 High-Level Overseas Talents, the Guangxi Collaborative Innovation Center of Multi-Source Information Integration and Intelligent Processing, the Research Fund of Guangxi Key Lab of MIMS (16-A-01-01 and 16-A-01-02), the Guangxi Bagui Teams for Innovation and Research, and the Innovation Project of Guangxi Graduate Education (Grants YCSW2017065, XYCSZ2017064 and XYCSZ2017067).



Copyright information

© Springer Science+Business Media, LLC 2017

Authors and Affiliations

  • Yue Fang¹
  • Yangding Li¹ (corresponding author)
  • Cong Lei¹
  • Yonggang Li¹
  • Xuelian Deng²

  1. College of Computer Science and Information Technology, Guangxi Normal University, Guilin, People’s Republic of China
  2. College of Public Health and Management, Guangxi University of Chinese Medicine, Nanning, People’s Republic of China
