
A comparative study of neural-network feature weighting

  • Tongfeng Sun
  • Shifei Ding
  • Pin Li
  • Wei Chen

Abstract

Many feature weighting methods have been proposed in recent years to evaluate the saliencies of features. Neural-network (NN) feature weighting, a supervised method, is founded on the mapping from input features to output decisions and is implemented by evaluating the sensitivity of the network outputs to its inputs. Through training on sample data, the NN comes to embody the saliencies of the input features implicitly. The partial derivatives of the outputs with respect to the inputs of the trained NN are then calculated to measure the sensitivity of the outputs to each input feature, which transforms the NN's implicit feature weighting into explicit feature weighting. The purpose of this paper is to probe further into the principle of NN feature weighting and to evaluate its performance through a comparative study of the NN feature weighting method and state-of-the-art weighting methods under the same working conditions. This study is motivated by the lack of direct and comprehensive comparative studies of the NN feature weighting method. Experiments on UCI repository data sets, face data sets and self-built data sets show that the NN feature weighting method achieves superior performance under different conditions and has promising prospects. Compared with the other existing methods, the NN feature weighting method can be used in more complex conditions, provided that an NN can work in those conditions. As decision data, the outputs can be labels, real numbers or integers. In particular, when the outputs are continuous, feature weights can be calculated without discretizing the outputs.
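The derivative-based weighting described above can be made concrete with a short sketch. The following is a minimal illustration, not the authors' implementation: it assumes PyTorch and scikit-learn are available, trains a small MLP on the Iris data, and scores each feature by the mean absolute partial derivative of the network outputs with respect to that input. The architecture, training loop, and normalization are illustrative choices.

```python
# Minimal sketch of NN feature weighting via partial derivatives.
# Assumes PyTorch and scikit-learn; all hyperparameters are illustrative.
import torch
import torch.nn as nn
from sklearn.datasets import load_iris

# Load a small benchmark data set (4 features, 3 classes).
X_np, y_np = load_iris(return_X_y=True)
X = torch.tensor(X_np, dtype=torch.float32)
y = torch.tensor(y_np, dtype=torch.long)

# Train a small MLP classifier on the full batch.
net = nn.Sequential(nn.Linear(X.shape[1], 16), nn.Tanh(), nn.Linear(16, 3))
opt = torch.optim.Adam(net.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()
for _ in range(500):
    opt.zero_grad()
    loss_fn(net(X), y).backward()
    opt.step()

# Sensitivity analysis: average |d y_k / d x_j| over all samples and outputs.
X_req = X.clone().requires_grad_(True)
out = net(X_req)                               # shape: (n_samples, n_outputs)
saliency = torch.zeros(X.shape[1])
for k in range(out.shape[1]):                  # one backward pass per output unit
    grad_k = torch.autograd.grad(out[:, k].sum(), X_req, retain_graph=True)[0]
    saliency += grad_k.abs().mean(dim=0)       # per-feature mean |partial derivative|
weights = saliency / saliency.sum()            # normalize into feature weights
print(weights)
```

In practice the inputs would typically be standardized before training so that derivative magnitudes are comparable across features; the normalization into weights that sum to one is likewise a common convention rather than a requirement of the method.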

Keywords

Neural-network feature weighting · Partial derivative · Implicit feature weighting · Explicit feature weighting


Acknowledgements

The authors would like to thank the editor and the anonymous reviewers for their valuable comments and constructive suggestions. This work was jointly supported by the National Natural Science Foundation of China (No. 61672522), the National Natural Science Foundation and Shanxi Provincial People’s Government Jointly Funded Project of China for Coal Base and Low Carbon (No. U1510115) and the China Postdoctoral Science Foundation (No. 2016M601910).


Copyright information

© Springer Nature B.V. 2019

Authors and Affiliations

  1. School of Computer Science and Technology, China University of Mining and Technology, Xuzhou, China
  2. Mine Digitization Engineering Research Center of the Ministry of Education, China University of Mining and Technology, Xuzhou, China
