
A Reliable Small Sample Classification Algorithm by Elman Neural Network Based on PLS and GA

  • Weikuan Jia (corresponding author)
  • Dean Zhao
  • Ling Ding
  • Yuanjie Zheng

Abstract

Small samples with high feature dimension and few instances cause serious problems when a traditional Elman neural network is applied to them directly: poor learning ability, a redundant network structure, and incomplete training, which in turn lead to low operating efficiency and poor recognition precision. In this paper, an optimized Elman neural network classification algorithm based on partial least squares (PLS) and a genetic algorithm (GA), denoted PLS-GA-Elman, is established by combining the theory of PLS and GA with the nature of the Elman neural network. The new algorithm first reduces the feature dimension of the small sample by PLS to obtain relatively ideal low-dimensional data, which reduces the number of the neural network's inputs and simplifies its structure. GA is then used to optimize the connection weights, the threshold values, and the number of hidden neurons; encoding these components separately and evolving them simultaneously alleviates the incomplete training caused by the small number of samples and improves training speed and generalization ability, yielding an optimal Elman neural network. This twice-consecutive optimization is the basis for a precise classification model. Experimental analysis shows that both the operating efficiency and the classification precision of the new algorithm are improved.
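The two-stage pipeline described above can be illustrated with a short sketch: PLS first compresses the high-dimensional inputs into a few score components, and a GA then evolves the Elman network's packed weight and threshold vector, with classification accuracy as the fitness. This is a minimal, hedged sketch only; the function names, population size, mutation scale, and fixed hidden-layer size are illustrative assumptions and not the authors' implementation (in the paper the GA additionally encodes and evolves the number of hidden neurons).

# Minimal sketch of the PLS-GA-Elman idea (illustrative assumptions throughout;
# not the authors' reference implementation).
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)


def pls_reduce(X, y, n_components=3):
    """Step 1: compress high-dimensional inputs into a few PLS score components."""
    Y = np.eye(int(y.max()) + 1)[y]             # one-hot targets for supervised PLS
    pls = PLSRegression(n_components=n_components).fit(X, Y)
    return pls.transform(X), pls


def elman_forward(w, X, n_in, n_hidden, n_out):
    """Forward pass of an Elman network whose weights and thresholds are packed in w."""
    i = 0
    W_in = w[i:i + n_in * n_hidden].reshape(n_in, n_hidden); i += n_in * n_hidden
    W_ctx = w[i:i + n_hidden * n_hidden].reshape(n_hidden, n_hidden); i += n_hidden * n_hidden
    b_h = w[i:i + n_hidden]; i += n_hidden
    W_out = w[i:i + n_hidden * n_out].reshape(n_hidden, n_out); i += n_hidden * n_out
    b_o = w[i:i + n_out]
    context = np.zeros((X.shape[0], n_hidden))  # context (recurrent) layer starts at zero
    for _ in range(2):                          # a couple of recurrent updates
        hidden = np.tanh(X @ W_in + context @ W_ctx + b_h)
        context = hidden
    return hidden @ W_out + b_o


def ga_train(X, y, n_hidden=5, pop=40, gens=60):
    """Step 2: evolve the packed weight vector with a simple GA; fitness is accuracy."""
    n_in, n_out = X.shape[1], int(y.max()) + 1
    dim = n_in * n_hidden + n_hidden * n_hidden + n_hidden + n_hidden * n_out + n_out
    P = rng.normal(0.0, 1.0, (pop, dim))

    def fitness(w):
        return (elman_forward(w, X, n_in, n_hidden, n_out).argmax(1) == y).mean()

    for _ in range(gens):
        scores = np.array([fitness(w) for w in P])
        elite = P[scores.argsort()[::-1][:pop // 2]]                  # selection of the best half
        n_kids = pop - len(elite)
        parents = elite[rng.integers(0, len(elite), (n_kids, 2))]     # pair parents at random
        cut = rng.integers(1, dim, n_kids)[:, None]                   # single-point crossover
        kids = np.where(np.arange(dim) < cut, parents[:, 0], parents[:, 1])
        kids += rng.normal(0.0, 0.1, kids.shape) * (rng.random(kids.shape) < 0.1)  # mutation
        P = np.vstack([elite, kids])
    return P[np.argmax([fitness(w) for w in P])]


# Example use on toy data: reduce with PLS, then evolve the Elman classifier.
X = rng.normal(size=(60, 30))                   # 60 samples, 30 features (a small sample)
y = (X[:, 0] + X[:, 1] > 0).astype(int)
X_low, _ = pls_reduce(X, y, n_components=3)
best_w = ga_train(X_low, y)

Fixing the hidden-layer size keeps the flat-vector chromosome simple; in the paper's formulation the hidden-neuron count is encoded alongside the weights and thresholds and evolved simultaneously with them.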

Keywords

Small sample · Elman neural network · Partial least squares · Genetic algorithm · PLS-GA-Elman algorithm

Notes

Acknowledgments

The authors thank the anonymous reviewers and the Editor-in-Chief, Douglas Steinley, Ph.D., whose suggestions helped to improve this paper.

Funding Information

This work is supported by the Natural Science Foundation of Shandong Province, China (No. ZR2017BC013); the China Postdoctoral Science Foundation (No. 2018M630797); the Shandong Province Higher Educational Science and Technology Program (No. J18KA308); the National Natural Science Foundation of China (Nos. 31571571, 61572300); and the Taishan Scholar Program of Shandong Province, China (No. TSHW201502038).


Copyright information

© The Classification Society 2019

Authors and Affiliations

  • Weikuan Jia (1, 2), corresponding author
  • Dean Zhao (2)
  • Ling Ding (3)
  • Yuanjie Zheng (1, 2, 4)

  1. School of Information Science and Engineering, Shandong Normal University, Jinan, China
  2. Key Lab of Intelligent Computing & Information Security in Universities of Shandong, Shandong Normal University, Jinan, China
  3. School of Electrical and Information Engineering, Shandong Normal University, Jinan, China
  4. Institute of Life Sciences, Shandong Provincial Key Laboratory for Distributed Computer Software Novel Technology, and Key Lab of Intelligent Information Processing, Shandong Normal University, Jinan, China
