A new intelligent pattern classifier based on deep-thinking
A new intelligent pattern classifier based on human thinking logic is developed in this paper, aiming to approximate the optimal design process and to avoid the matrix-inverse computation required in conventional classifier designs. The proposed classifier has no parameters to be determined via mathematical optimization. Instead, it first uses correlation principles to construct the pattern clusters. Middle-level feature vectors are then extracted from the statistics of the correlations between the input data and the members of each pattern cluster. For accurate classification, advanced feature vectors are generated from the moments of the middle-level feature vectors. Bayesian inference is then applied to make decisions from a weighted sum of the advanced feature components. In addition, a real-time fine-tuning loop (layer) adaptively “widens” the border of each pattern-clustering region so that input data can be classified directly once they fall within one of the clustering regions. An experiment on classifying handwritten digit images from the MNIST database demonstrates the performance and effectiveness of the proposed intelligent pattern classifier.
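The pipeline described above (correlation-based clusters, statistical middle-level features, moment-based advanced features, Bayesian decision) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the choice of Pearson correlation, the use of mean and standard deviation as the middle-level statistics, the particular moments, and all function names are assumptions made for illustration.

```python
import numpy as np

def correlation(x, y):
    # Pearson correlation between two flattened pattern vectors.
    x = x - x.mean()
    y = y - y.mean()
    denom = np.linalg.norm(x) * np.linalg.norm(y)
    return float(x @ y / denom) if denom > 0 else 0.0

def middle_level_features(x, clusters):
    # For each pattern cluster, summarize the correlations between the
    # input and the cluster's stored exemplars by simple statistics
    # (mean and standard deviation -- an illustrative choice).
    feats = []
    for exemplars in clusters:
        corrs = np.array([correlation(x, e) for e in exemplars])
        feats.append([corrs.mean(), corrs.std()])
    return np.array(feats)            # shape: (num_classes, 2)

def advanced_features(mid):
    # Higher-order moments of the middle-level features (assumption:
    # powers of the per-class mean correlation).
    m = mid[:, 0]
    return np.stack([m, m**2, m**3], axis=1)   # (num_classes, 3)

def classify(x, clusters, weights, priors):
    # A weighted sum of the advanced feature components gives a class
    # score; a naive Bayes-style posterior then selects the class.
    adv = advanced_features(middle_level_features(x, clusters))
    scores = adv @ weights                      # (num_classes,)
    likelihood = np.exp(scores - scores.max())  # stabilized
    posterior = likelihood * priors
    posterior /= posterior.sum()
    return int(np.argmax(posterior)), posterior
```

In this sketch an input is assigned to the class whose cluster exemplars it correlates with most strongly, weighted by the class priors; the real-time border-widening loop of the paper is not modeled here.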
Keywords: Deep-thinking pattern classifier · Bayesian inference · Unsupervised learning · Correlation principle
The first author has received financial support from the Chinese Scholarship Council (No. 201606630032).
Compliance with ethical standards
Conflict of interest
In the present work, we have not used any material from previously published sources; therefore, we have no conflict of interest.