Abstract
The plasticity of our brain gives us a remarkable ability to learn about the world. Although great successes have been achieved in many fields, few bio-inspired methods have mimicked this ability: they become infeasible when the data are time-varying and large in scale, because they require all training data to be loaded into memory. Furthermore, even the popular deep convolutional neural network (CNN) models have relatively fixed structures. Through an incremental PCANet, this paper explores a lifelong learning framework that achieves plasticity in both feature and classifier construction. The proposed model comprises three parts: Gabor filters followed by a max-pooling layer, which provide shift and scale tolerance to input samples; a cascade of incremental PCA stages, which provides plasticity in feature extraction; and an incremental SVM, which provides plasticity in classifier construction. Unlike a CNN, the plasticity in our model requires no back-propagation (BP) process and does not need a huge number of parameters. Experimental results validate the plasticity of our model in both feature and classifier construction, and further support the physiological hypothesis that higher layers exhibit greater plasticity than lower layers.
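The following is a minimal sketch, not the authors' code, of the three-stage pipeline the abstract describes: a Gabor filter bank with max-pooling, incremental PCA, and an incremental (online) linear SVM. The filter parameters, pooling size, single PCA stage standing in for the cascade, and the use of scikit-learn's IncrementalPCA and SGDClassifier are illustrative assumptions, not details taken from the paper.

```python
# Sketch of: Gabor filters + max-pooling -> incremental PCA -> incremental SVM.
# All hyperparameters below are assumptions for illustration only.
import numpy as np
from scipy.signal import convolve2d
from sklearn.decomposition import IncrementalPCA
from sklearn.linear_model import SGDClassifier

def gabor_kernel(theta, sigma=2.0, lam=4.0, size=9):
    """Real part of a Gabor kernel at orientation theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def gabor_maxpool(img, pool=2, n_orient=4):
    """Stage 1: Gabor filter bank followed by max-pooling for shift/scale tolerance."""
    maps = []
    for k in range(n_orient):
        resp = convolve2d(img, gabor_kernel(np.pi * k / n_orient), mode='same')
        h, w = resp.shape[0] // pool * pool, resp.shape[1] // pool * pool
        pooled = resp[:h, :w].reshape(h // pool, pool, w // pool, pool).max(axis=(1, 3))
        maps.append(pooled.ravel())
    return np.concatenate(maps)

# Stages 2 and 3: both models learn from mini-batches via partial_fit, so new
# data can arrive over time without reloading the whole training set.
ipca = IncrementalPCA(n_components=32)
svm = SGDClassifier(loss='hinge')          # online surrogate for a linear SVM

def learn_batch(images, labels, classes):
    """Update the feature extractor and classifier from one incoming batch."""
    feats = np.stack([gabor_maxpool(im) for im in images])
    ipca.partial_fit(feats)                              # plasticity of feature extraction
    svm.partial_fit(ipca.transform(feats), labels, classes=classes)  # plasticity of classifier

def predict(images):
    feats = np.stack([gabor_maxpool(im) for im in images])
    return svm.predict(ipca.transform(feats))

# Toy usage: random 28x28 "images" arriving in two successive batches.
rng = np.random.default_rng(0)
classes = np.array([0, 1])
for _ in range(2):
    batch = rng.random((64, 28, 28))
    learn_batch(batch, rng.integers(0, 2, 64), classes)
print(predict(rng.random((4, 28, 28))))
```

The key design point is that every stage exposes an incremental update, so the pipeline never needs the full training set in memory at once, which is the lifelong-learning property the abstract emphasizes.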