Structured Network Pruning via Adversarial Multi-indicator Architecture Selection

Abstract

Network pruning facilitates deploying convolutional neural networks (CNNs) on resource-limited embedded devices. Pruning as much redundant network structure as possible while preserving accuracy is challenging. Most existing CNN compression methods iteratively prune the "least important" filters and retrain the pruned network layer by layer, which may lead to a sub-optimal solution. In this paper, an end-to-end structured network pruning method based on adversarial multi-indicator architecture selection (AMAS) is presented. Pruning is carried out by aligning the output of the pruned network with that of the baseline network in a generative adversarial framework. Furthermore, to efficiently find the optimal pruned architecture under constrained resources, an adversarial fine-tuning network selection strategy is designed, in which two contradictory indicators, the number of pruned channels and the classification accuracy of the network, are considered. Experiments on SVHN show that AMAS reduces FLOPs by 75.37% and parameters by 74.42% while even improving accuracy by 0.36% for ResNet-110. On CIFAR-10, it reduces FLOPs by 77.08% and removes 73.98% of the parameters of GoogLeNet with negligible accuracy cost. In particular, it prunes 56.87% of FLOPs and 59.18% of parameters while increasing accuracy by 0.49% for ResNet-110, significantly outperforming state-of-the-art methods.
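The selection strategy above balances two contradictory indicators: the number of pruned channels (efficiency) and classification accuracy. As a minimal illustration of that trade-off — not the paper's actual adversarial fine-tuning algorithm — the sketch below filters hypothetical candidate architectures, each summarized as a (pruned-channel count, validation accuracy) pair, down to the Pareto-optimal set in which no candidate is beaten on both indicators at once:

```python
from typing import List, Tuple

def pareto_select(candidates: List[Tuple[int, float]]) -> List[Tuple[int, float]]:
    """Keep candidates that are not dominated on (pruned channels, accuracy).

    Candidate j dominates candidate i if j prunes at least as many channels
    AND reaches at least as high accuracy, strictly better in one indicator.
    """
    front = []
    for i, (p_i, a_i) in enumerate(candidates):
        dominated = any(
            (p_j >= p_i and a_j >= a_i) and (p_j > p_i or a_j > a_i)
            for j, (p_j, a_j) in enumerate(candidates)
            if j != i
        )
        if not dominated:
            front.append((p_i, a_i))
    return front

if __name__ == "__main__":
    # Hypothetical (pruned channel count, validation accuracy) candidates
    cands = [(120, 0.935), (180, 0.921), (180, 0.915), (90, 0.930), (240, 0.902)]
    print(pareto_select(cands))  # keeps the three non-dominated trade-offs
```

In AMAS the final choice among such non-dominated architectures is made adversarially during fine-tuning rather than by a fixed rule; the Pareto filter only conveys why neither indicator can be optimized in isolation.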



Data Availability

The datasets analyzed during the current study are publicly available. The CIFAR-10 database can be downloaded at http://www.cs.toronto.edu/~kriz/cifar.html and the SVHN database at http://ufldl.stanford.edu/housenumbers/


Acknowledgements

This work was supported by the National Natural Science Foundation of China (Grant No. 61573168).

Author information


Corresponding author

Correspondence to Ying Chen.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Wei, Y., Chen, Y. Structured Network Pruning via Adversarial Multi-indicator Architecture Selection. Circuits Syst Signal Process (2021). https://doi.org/10.1007/s00034-021-01668-y


Keywords

  • Convolutional neural networks
  • Network pruning
  • Architecture selection
  • Generative adversarial learning
  • Model compression