A Deep-Layer Feature Selection Method Based on Deep Neural Networks

  • Chen Qiao
  • Ke-Feng Sun
  • Bin Li
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10942)

Abstract

Inspired by the sparse mechanism of the biological nervous system, we propose a novel feature selection algorithm, the features back-selection (FBS) method, which is built on a deep learning architecture. In contrast to existing feature selection methods, FBS is not a shallow-layer approach: it takes a global perspective, tracing the abstract features learned at the top of a deep neural network back, step by step, to the key feature sites of the raw data. On the MNIST data, the FBS method performs quite well at locating the original important pixels of the digit images. The results show that FBS not only identifies the features relevant to the learning task while maintaining high prediction accuracy, but also reduces both data storage space and computational complexity.
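The abstract does not spell out the back-selection procedure itself. As a rough illustration only, the minimal Python sketch below traces importance from the top layer of a trained network back to input pixel sites through the learned weight matrices; the magnitude-based relevance rule, the function name back_select, and the keep_per_layer parameter are our assumptions for illustration, not the authors' algorithm.

    import numpy as np

    def back_select(weights, keep_per_layer):
        # Walk from the top layer down to the input layer.
        # weights[l] has shape (n_in, n_out) for layer l; start by
        # treating every top-layer unit as important.
        # NOTE: scoring by absolute weight magnitude is an assumed
        # simplification, not necessarily the FBS rule from the paper.
        selected = np.arange(weights[-1].shape[1])
        for W, k in zip(reversed(weights), reversed(keep_per_layer)):
            # Score each lower-layer unit by the total absolute weight
            # it contributes to the currently selected upper-layer units.
            relevance = np.abs(W[:, selected]).sum(axis=1)
            # Keep the k highest-scoring units and recurse one layer down.
            selected = np.argsort(relevance)[-k:]
        return selected  # indices into the raw input (e.g. MNIST pixels)

    # Toy usage: a 784-256-64-10 network with random (untrained) weights;
    # trace the 10 output units back to 150 candidate pixel sites.
    rng = np.random.default_rng(0)
    weights = [rng.standard_normal((784, 256)),
               rng.standard_normal((256, 64)),
               rng.standard_normal((64, 10))]
    key_pixels = back_select(weights, keep_per_layer=[150, 128, 32])
    print(key_pixels.shape)  # (150,)

The layer-by-layer structure matches the abstract's "step by step" tracing from abstract top-layer features down to original feature sites; with trained weights, the returned indices would mark the input pixels most strongly connected to the task-relevant deep features.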

Keywords

Features back-selection · Deep neural networks · Deep-layer architecture · Key sites

Acknowledgment

This research was supported by NSFC Nos. 11471006 and 11101327, National Science and Technology Cooperation Program of China (No. 2015DFA81780), and the Fundamental Research Funds for the Central Universities (No. xjj2017126).


Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  1. School of Mathematics and Statistics, Xi'an Jiaotong University, Xi'an, China
