
Incremental PCANet: A Lifelong Learning Framework to Achieve the Plasticity of both Feature and Classifier Constructions

  • Conference paper
  • Published in: Advances in Brain Inspired Cognitive Systems (BICS 2016)
  • Part of the book series: Lecture Notes in Computer Science (LNAI, volume 10023)

Abstract

The plasticity of the brain gives us a remarkable ability to learn about and understand the world. Although great successes have been achieved in many fields, few bio-inspired methods have mimicked this ability, and they become infeasible when the data are time-varying and large in scale, because they require all training data to be loaded into memory at once. Furthermore, even popular deep convolutional neural network (CNN) models have relatively fixed structures. Through incremental PCANet, this paper explores a lifelong learning framework that achieves plasticity in both feature and classifier construction. The proposed model comprises three parts: Gabor filters followed by a max-pooling layer, which provide shift and scale tolerance to input samples; cascaded incremental PCA, which provides plasticity in feature extraction; and an incremental SVM, which provides plasticity in classifier construction. Unlike a CNN, the plasticity in our model involves no backpropagation (BP) and does not require a huge number of parameters. Experimental results validate the plasticity of our model in both feature and classifier construction, and further support the physiological hypothesis that higher layers exhibit greater plasticity than lower layers.
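To make the three-stage design concrete, here is a minimal Python sketch of the pipeline, not the authors' implementation: it substitutes scikit-learn's IncrementalPCA for the cascaded incremental PCA (collapsed to a single stage for brevity) and SGDClassifier with hinge loss as an incrementally trainable linear SVM. The Gabor-bank parameters, patch size, pooling size, and the assumption of fixed-size 28x28 inputs are all illustrative choices.

```python
# Illustrative sketch (not the paper's code) of the three stages:
#   1. fixed Gabor filter bank + max-pooling (shift/scale tolerance)
#   2. incremental PCA as an online-updatable feature extractor
#   3. SGDClassifier with hinge loss as an incremental linear SVM
import numpy as np
from scipy.signal import convolve2d
from skimage.filters import gabor_kernel
from skimage.measure import block_reduce
from sklearn.decomposition import IncrementalPCA
from sklearn.linear_model import SGDClassifier

# Stage 1: a fixed bank of four oriented Gabor filters (hypothetical
# frequency/orientation settings, chosen only for illustration).
GABOR_BANK = [np.real(gabor_kernel(frequency=0.25, theta=t))
              for t in np.linspace(0, np.pi, 4, endpoint=False)]

def gabor_maxpool(img, pool=2):
    """Filter an image with the Gabor bank, then max-pool each map."""
    maps = [convolve2d(img, k, mode='same') for k in GABOR_BANK]
    return [block_reduce(m, (pool, pool), np.max) for m in maps]

def extract_patches(fmap, size=7):
    """Flatten all overlapping size x size patches of a feature map."""
    h, w = fmap.shape
    return np.array([fmap[i:i + size, j:j + size].ravel()
                     for i in range(h - size + 1)
                     for j in range(w - size + 1)])

# Stage 2: incremental PCA; partial_fit updates the learned filters
# from new data without revisiting old samples.
ipca = IncrementalPCA(n_components=8)

# Stage 3: an online linear SVM, kept plastic via partial_fit as well.
svm = SGDClassifier(loss="hinge")

def train_on_batch(images, labels, classes):
    """One lifelong-learning step: update features, then the classifier."""
    feats = []
    for img in images:  # assumes fixed-size (e.g. 28x28) grayscale inputs
        patches = np.vstack([extract_patches(m) for m in gabor_maxpool(img)])
        ipca.partial_fit(patches)                      # feature plasticity
        feats.append(ipca.transform(patches).ravel())  # per-image descriptor
    svm.partial_fit(np.array(feats), labels, classes=classes)  # classifier plasticity
```

Because the PCA basis keeps evolving with each partial_fit call, descriptors computed early in training gradually drift out of the current feature space; balancing that feature-level plasticity against classifier stability is the kind of trade-off the framework targets.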

Author information

Correspondence to Zhaoxiang Zhang.

Copyright information

© 2016 Springer International Publishing AG

About this paper

Cite this paper

Hao, WL., Zhang, Z. (2016). Incremental PCANet: A Lifelong Learning Framework to Achieve the Plasticity of both Feature and Classifier Constructions. In: Liu, CL., Hussain, A., Luo, B., Tan, K., Zeng, Y., Zhang, Z. (eds) Advances in Brain Inspired Cognitive Systems. BICS 2016. Lecture Notes in Computer Science (LNAI), vol 10023. Springer, Cham. https://doi.org/10.1007/978-3-319-49685-6_27

  • DOI: https://doi.org/10.1007/978-3-319-49685-6_27

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-49684-9

  • Online ISBN: 978-3-319-49685-6

  • eBook Packages: Computer Science (R0)
