A new intelligent pattern classifier based on deep-thinking

  • Zhenyi Shen
  • Zhihong Man
  • Zhenwei Cao
  • Jinchuan Zheng
Extreme Learning Machine and Deep Learning Networks


A new intelligent pattern classifier based on human thinking logic is developed in this paper, aiming to approximate the optimal design process and avoid the matrix inverse computation required in conventional classifier designs. The proposed classifier has no parameters to be determined via mathematical optimization. Instead, it first uses correlation principles to construct the pattern clusters. Middle-level feature vectors are then extracted from the statistical information of the correlations between the input data and the members of each pattern cluster. For accurate classification, advanced feature vectors are generated from the moments of the middle-level feature vectors. Bayesian inference is then applied to make decisions from the weighted sum of the advanced feature components. In addition, a real-time fine-tuning loop (layer) is designed to adaptively “widen” the border of each pattern clustering region so that input data can be classified directly once they fall within one of the clustering regions. An experiment on the classification of handwritten digit images from the MNIST database demonstrates the excellent performance and effectiveness of the proposed intelligent pattern classifier.


Keywords: Deep-thinking pattern classifier · Bayesian inference · Unsupervised learning · Correlation principle
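As a rough illustration of the pipeline the abstract describes (correlation-based pattern clusters, correlation statistics as middle-level features, and a Bayesian-style weighted decision), the sketch below is a minimal Python approximation under assumed details: the specific statistics (mean and standard deviation of correlations), the feature weights, and the uniform class priors are illustrative choices, not the authors' actual design, and the fine-tuning layer is omitted.

```python
import numpy as np

def correlation(x, y):
    # Pearson-style correlation between two flattened image vectors.
    x = x - x.mean()
    y = y - y.mean()
    return float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y) + 1e-12))

def middle_level_features(x, clusters):
    # For each pattern cluster, summarise the correlations between the
    # input and the cluster members by simple statistics (mean, std).
    # (Which statistics are used is an assumption for this sketch.)
    feats = []
    for members in clusters:
        corrs = [correlation(x, m) for m in members]
        feats.append([np.mean(corrs), np.std(corrs)])
    return np.array(feats)

def classify(x, clusters, weights=(1.0, -0.5), priors=None):
    # Bayesian-style decision: score each class by a weighted sum of its
    # feature components (reward high mean correlation, penalise spread),
    # scaled by the class prior; the largest score wins. The weights and
    # the uniform priors here are hypothetical placeholders.
    feats = middle_level_features(x, clusters)
    if priors is None:
        priors = np.full(len(clusters), 1.0 / len(clusters))
    scores = priors * (feats @ np.asarray(weights))
    return int(np.argmax(scores))
```

On two synthetic "pattern clusters" built around distinct prototype vectors, the decision reduces to picking the class whose members correlate most strongly and consistently with the input.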



The first author has received financial support from the China Scholarship Council (No. 201606630032).

Compliance with ethical standards

Conflict of interest

In the present work, we have not used any material from previously published sources. Therefore, we have no conflict of interest.



Copyright information

© Springer-Verlag London Ltd., part of Springer Nature 2019

Authors and Affiliations

  1. Faculty of Science, Engineering and Technology, Swinburne University of Technology, Hawthorn, Melbourne, Australia
