A Learning Automata-Based Compression Scheme for Convolutional Neural Network

  • Shuai Feng
  • Haonan Guo
  • Jichao Yang
  • Zhengwu Xu
  • Shenghong Li
Conference paper
Part of the Lecture Notes in Electrical Engineering book series (LNEE, volume 517)


The convolutional neural network (CNN) has proven to be the state-of-the-art technique for image classification. In general, improvements in CNN recognition accuracy are accompanied by increases in structural complexity. Beyond accuracy, however, computational resources and inference speed must also be considered in many settings. We therefore propose an efficient compression scheme based on learning automata, a reinforcement learning method commonly used to select an optimal action. The proposed method enables a trained CNN to discard insignificant convolution kernels according to actual deployment requirements. Experimental results show that the proposed scheme effectively reduces the number of convolution kernels at the expense of only a slight loss in classification accuracy.
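The abstract's idea can be illustrated with a minimal sketch. The paper's exact update rule and reward signal are not reproduced here; the code below assumes a two-action (keep/prune) linear reward-inaction (L_RI) automaton per kernel, and uses a hypothetical per-kernel importance score and threshold as a stand-in for measuring the accuracy drop after removing a kernel.

```python
import random

class LearningAutomaton:
    """Two-action (keep = 0, prune = 1) linear reward-inaction automaton."""

    def __init__(self, n_actions=2, lr=0.1):
        self.p = [1.0 / n_actions] * n_actions  # uniform initial action probabilities
        self.lr = lr

    def choose(self):
        """Sample an action according to the current probability vector."""
        r, acc = random.random(), 0.0
        for i, pi in enumerate(self.p):
            acc += pi
            if r <= acc:
                return i
        return len(self.p) - 1

    def reward(self, action):
        """L_RI update: on reward, shift probability mass toward the chosen
        action; on penalty, do nothing (inaction). The vector stays normalized."""
        for i in range(len(self.p)):
            if i == action:
                self.p[i] += self.lr * (1.0 - self.p[i])
            else:
                self.p[i] -= self.lr * self.p[i]

def prune_kernels(kernel_importance, threshold, steps=500, seed=0):
    """Train one automaton per kernel. The (hypothetical) environment rewards
    'prune' when the kernel's importance falls below the threshold and rewards
    'keep' otherwise. Returns indices of kernels selected for removal."""
    random.seed(seed)
    automata = [LearningAutomaton() for _ in kernel_importance]
    for _ in range(steps):
        for la, imp in zip(automata, kernel_importance):
            a = la.choose()
            if (a == 1) == (imp < threshold):
                la.reward(a)  # chosen action agrees with the environment
    return [i for i, la in enumerate(automata) if la.p[1] > 0.5]
```

With importance scores `[0.9, 0.05, 0.6, 0.02]` and a threshold of `0.1`, the automata for kernels 1 and 3 converge toward the prune action, while the others converge toward keep. In the paper's setting the reward would instead come from evaluating the compressed network's classification accuracy.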


Keywords: Convolutional neural network · Learning automata · Redundant convolution kernels



This research work is funded by the National Key Research and Development Project of China (2016YFB0801003) and the Sichuan Province & University Cooperation (Key Program) of the Science & Technology Department of Sichuan Province (2018JZ0050).


References

  1. Lecun, Y., et al.: Gradient-based learning applied to document recognition. Proc. IEEE 86(11), 2278–2324 (1998)
  2. Krizhevsky, A., Sutskever, I., Hinton, G.E.: ImageNet classification with deep convolutional neural networks. In: Advances in Neural Information Processing Systems, pp. 1097–1105. Curran Associates Inc. (2012)
  3. Guo, H., et al.: A new learning automata based pruning method to train deep neural networks. IEEE Internet Things J. PP(99), 1 (2017)
  4. Tsetlin, M.L.: Automaton Theory and Modeling of Biological Systems. Academic Press, New York (1973)
  5. Han, S., Mao, H., Dally, W.J.: Deep compression: compressing deep neural networks with pruning, trained quantization and Huffman coding. arXiv preprint arXiv:1510.00149 (2015)
  6. Thathachar, M.A.L., Sastry, P.S.: Varieties of learning automata: an overview. IEEE Trans. Syst. Man Cybern. B Cybern. 32(6), 711–722 (2002)
  7. Zipser, D., Andersen, R.A.: A back-propagation programmed network that simulates response properties of a subset of posterior parietal neurons. Nature 331(6158), 679–684 (1988)
  8. Mostafaei, H., Meybodi, M.R.: Maximizing lifetime of target coverage in wireless sensor networks using learning automata. Wirel. Pers. Commun. 71(2), 1461–1477 (2013)

Copyright information

© Springer Nature Singapore Pte Ltd. 2020

Authors and Affiliations

  • Shuai Feng (1), corresponding author
  • Haonan Guo (1)
  • Jichao Yang (1)
  • Zhengwu Xu (1)
  • Shenghong Li (1)

  1. School of Cyber Security, Shanghai Jiaotong University, Shanghai, China
