Studying the plasticity in deep convolutional neural networks using random pruning

  • Deepak Mittal
  • Shweta Bhardwaj
  • Mitesh M. Khapra
  • Balaraman Ravindran
Special Issue Paper

Abstract

Recently, there has been a lot of work on pruning filters from deep convolutional neural networks (CNNs) with the intention of reducing computations. The key idea is to rank the filters based on a certain criterion (say, \(l_1\)-norm, average percentage of zeros, etc.) and retain only the top-ranked filters. Once the low-scoring filters are pruned away, the remainder of the network is fine-tuned and is shown to give performance comparable to the original unpruned network. In this work, we report experiments which suggest that the comparable performance of the pruned network is not due to the specific criterion chosen, but due to the inherent plasticity of deep neural networks which allows them to recover from the loss of pruned filters once the rest of the filters are fine-tuned. Specifically, we show counterintuitive results wherein by randomly pruning 25–50% filters from deep CNNs we are able to obtain the same performance as obtained by using state-of-the-art pruning methods. We empirically validate our claims by doing an exhaustive evaluation with VGG-16 and ResNet-50. Further, we also evaluate a real-world scenario where a CNN trained on all 1000 ImageNet classes needs to be tested on only a small set of classes at test time (say, only animals). We create a new benchmark dataset from ImageNet to evaluate such class-specific pruning and show that even here a random pruning strategy gives close to state-of-the-art performance. Lastly, unlike existing approaches which mainly focus on the task of image classification, in this work we also report results on object detection and image segmentation. We show that using a simple random pruning strategy, we can achieve significant speedup in object detection (74% improvement in fps) while retaining the same accuracy as that of the original Faster-RCNN model. Similarly, we show that the performance of a pruned segmentation network is actually very similar to that of the original unpruned SegNet.
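The comparison at the heart of the abstract — rank-and-keep pruning by a criterion such as the \(l_1\)-norm versus simply keeping a random subset of filters — can be sketched in a few lines. The paper's own code is not shown here; this is a minimal NumPy illustration with a hypothetical convolutional weight tensor and an assumed keep fraction, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical conv-layer weights: (num_filters, in_channels, kH, kW)
weights = rng.standard_normal((64, 3, 3, 3))

def l1_keep(weights, keep_frac):
    """Criterion-based pruning: keep the top-ranked filters by l1-norm."""
    norms = np.abs(weights).sum(axis=(1, 2, 3))        # one score per filter
    k = int(round(keep_frac * len(norms)))
    return np.sort(np.argsort(norms)[::-1][:k])        # indices of kept filters

def random_keep(weights, keep_frac, rng):
    """Random pruning: keep a uniformly random subset of filters."""
    k = int(round(keep_frac * len(weights)))
    return np.sort(rng.choice(len(weights), size=k, replace=False))

keep = random_keep(weights, 0.75, rng)   # prune 25% of filters at random
pruned = weights[keep]                    # fine-tune the remaining network afterwards
print(pruned.shape)                       # (48, 3, 3, 3)
```

In either case the pruned network is then fine-tuned; the paper's claim is that, after fine-tuning, the random choice of `keep` recovers accuracy comparable to the criterion-based choice.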

Keywords

Deep learning · Filter pruning · Model compression · Convolutional neural networks

Notes

Acknowledgements

We thank the Robert Bosch Centre for Data Science and AI (RBC-DSAI) and Intel India for supporting this research.


Copyright information

© Springer-Verlag GmbH Germany, part of Springer Nature 2019

Authors and Affiliations

  1. Department of Computer Science and Engineering, Robert Bosch Centre for Data Science and AI (RBC-DSAI), Indian Institute of Technology Madras, Chennai, India