
Hypergraph expressing low-rank feature selection algorithm

Multimedia Tools and Applications

Abstract

Dimensionality reduction has attracted extensive attention in machine learning. It usually comprises two types of methods: feature selection and subspace learning. Many previous studies have demonstrated that dimensionality reduction is valuable in real applications. Unfortunately, most of these works apply feature selection and subspace learning independently. This paper proposes a novel supervised feature selection algorithm that incorporates subspace learning. Specifically, it employs an ℓ2,1-norm regularizer and an ℓ2,p-norm regularizer to conduct sample denoising and feature selection, respectively, by exploiting the correlation structure of the data. It then imposes two constraints, a hypergraph constraint and a low-rank constraint, to capture the local and the global structure of the data, respectively. Finally, an alternating optimization framework iteratively updates each variable while fixing the others until the algorithm converges. Extensive experiments show that the proposed supervised feature selection method achieves strong results on eighteen public data sets.
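The abstract's core ingredient, an ℓ2,1-norm regularizer that drives whole rows of the projection matrix to zero so that features can be ranked and selected, is commonly optimized with iteratively reweighted least squares. The sketch below shows only that component, not the paper's full model (it omits the ℓ2,p term and the hypergraph and low-rank constraints); the function name, the regularization weight `lam`, and the iteration count are illustrative assumptions, not from the paper.

```python
import numpy as np

def l21_feature_selection(X, Y, lam=1.0, n_iter=50, eps=1e-8):
    """Rank features by approximately solving
        min_W ||X W - Y||_F^2 + lam * ||W||_{2,1}
    with iteratively reweighted least squares (IRLS).

    X: (n_samples, n_features) data matrix.
    Y: (n_samples, n_targets) label/target matrix.
    Returns feature indices sorted by decreasing row norm of W.
    """
    d = X.shape[1]
    D = np.eye(d)  # reweighting diagonal, initialized to identity
    for _ in range(n_iter):
        # Closed-form update for the current weights:
        # W = (X^T X + lam * D)^{-1} X^T Y
        W = np.linalg.solve(X.T @ X + lam * D, X.T @ Y)
        # Refresh the diagonal from the row norms of W; small rows get
        # large penalties next round, pushing them toward zero.
        row_norms = np.sqrt((W ** 2).sum(axis=1)) + eps
        D = np.diag(1.0 / (2.0 * row_norms))
    scores = np.linalg.norm(W, axis=1)  # row-sparsity-based feature scores
    return np.argsort(scores)[::-1]
```

Because the ℓ2,1 penalty couples all targets in each row, a feature is kept or discarded jointly across all classes, which is what makes this norm a feature selector rather than a generic shrinkage term.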




Acknowledgments

This work was supported in part by the China Key Research Program (Grant No. 2016YFB1000905), the China 973 Program (Grant No. 2013CB329404), the China 1000-Plan National Distinguished Professorship, the National Natural Science Foundation of China (Grants No. 61573270, 61672177, 61363009 and 81701780), the Guangxi Natural Science Foundation (Grant No. 2015GXNSFCB139011), the Guangxi High Institutions Program of Introducing 100 High-Level Overseas Talents, the Guangxi Collaborative Innovation Center of Multi-Source Information Integration and Intelligent Processing, the Research Fund of Guangxi Key Lab of MIMS (16-A-01-01 and 16-A-01-02), the Guangxi Bagui Teams for Innovation and Research, and the Innovation Project of Guangxi Graduate Education under Grants YCSW2017065, XYCSZ2017064 and XYCSZ2017067.

Author information

Correspondence to Yangding Li.


Cite this article

Fang, Y., Li, Y., Lei, C. et al. Hypergraph expressing low-rank feature selection algorithm. Multimed Tools Appl 77, 29551–29572 (2018). https://doi.org/10.1007/s11042-017-5235-3
