A random-weighted plane-Gaussian artificial neural network
Multilayer perceptrons (MLPs) and radial basis function networks (RBFNs) have received considerable attention in data classification and regression. As a bridge between MLP and RBFN, the plane-Gaussian (PG) network can exhibit globality and locality simultaneously through its so-called PG activation function. Because these networks tune their weights by back propagation or clustering in the training phase, they all suffer from slow convergence, long training times, and a tendency to fall into local minima. To speed up training, random projection technologies such as the extreme learning machine (ELM) have flourished in recent decades. In this paper, we propose a random-weighted PG network, termed RwPG. Instead of the plane clustering used in the PG network, RwPG assigns random values to the hidden-layer weights and then computes the output weights analytically by matrix inversion. Compared with PG and ELM, the proposed RwPG has four advantages: (1) We prove that RwPG is also a universal approximator. (2) It inherits the geometrical interpretation of the PG network and is thus well suited to capturing linearity in data, especially for plane-distributed cases. (3) Its training speed is comparable to that of ELM and significantly faster than that of the PG network. (4) Owing to its random-weight technology, RwPG is likely to escape local-extremum problems. Finally, experiments on artificial and benchmark datasets demonstrate its advantages.
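The training scheme described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: it assumes a plane-Gaussian activation of the form exp(-(w^T x + b)^2 / (2σ^2)), i.e., a Gaussian response in the distance of a sample to a random hyperplane, and it uses the Moore-Penrose pseudoinverse for the analytic output-weight solution, as in ELM. The function names and the shared bandwidth sigma are illustrative choices.

```python
import numpy as np

def plane_gaussian(X, W, b, sigma=1.0):
    # Plane-Gaussian activation: the response of each hidden unit decays
    # with the squared signed distance of a sample to its hyperplane
    # w^T x + b = 0 (assumed form; a shared bandwidth sigma is used here).
    D = X @ W + b                          # (n_samples, n_hidden)
    return np.exp(-(D ** 2) / (2.0 * sigma ** 2))

def train_rwpg(X, T, n_hidden, sigma=1.0, seed=None):
    # Hidden-layer weights are drawn at random and never tuned; only the
    # output weights beta are obtained analytically by least squares
    # through the matrix pseudoinverse of the hidden-layer output H.
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = plane_gaussian(X, W, b, sigma)
    beta = np.linalg.pinv(H) @ T           # minimum-norm least-squares solution
    return W, b, beta

def predict_rwpg(X, W, b, beta, sigma=1.0):
    return plane_gaussian(X, W, b, sigma) @ beta
```

Because no gradient iterations are involved, the cost of training is dominated by the single pseudoinverse, which is why the training speed is comparable to ELM rather than to the clustering-based PG network.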
Keywords: Matrix generalized inverse · Plane-Gaussian artificial neural network · Random weight
We thank the anonymous editors and reviewers for their valuable comments and suggestions. We thank Dr. Liyong Fu, professor at the Chinese Academy of Forestry, for his academic advice on deep networks during our revisions. This research was supported in part by the Central Public-interest Scientific Institution Basal Research Fund (Grant No. CAFYBB2019QD003), the Natural Science Foundation of China under Grants 31670554 and 61871444, the Jiangsu Science Foundation under Grants BK20161527 and BK20171453, and the Postgraduate Research and Practice Innovation Program of Jiangsu Province (SJKY19_0907).
XY proposed the learning method and wrote the manuscript. HY and ZF designed the experiments. XF, FZ, and QY analyzed the experimental results and gave advice on the manuscript.
Compliance with ethical standards
Conflict of interest
The authors declare that they have no conflict of interest regarding this work.