
An Efficient Binary Search Based Neuron Pruning Method for ConvNet Condensation

  • Conference paper
  • Neural Information Processing (ICONIP 2017)
  • Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 10635)

Abstract

Convolutional neural networks (CNNs) have been widely applied in the field of computer vision. Nowadays, the architecture of CNNs is becoming more and more complex, involving more layers and more neurons per layer. The increased depth and width of CNNs lead to greatly increased computational and memory costs, which may limit the practical utility of CNNs. However, as demonstrated in previous research, CNNs of complex architecture may contain considerable redundancy in terms of hidden neurons. In this work, we propose a magnitude-based, binary-search neuron pruning method which selectively prunes neurons to shrink the network size while preserving the performance of the original, unpruned model. Compared to some existing neuron pruning methods, the proposed method achieves a higher compression rate while automatically determining the number of neurons to be pruned per hidden layer in an efficient way.
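
To make the idea concrete, below is a minimal NumPy sketch of a magnitude-based, binary-search pruning step of the kind described above. It is an illustration under stated assumptions rather than the paper's exact procedure: the per-neuron score (sum of absolute incoming weights), the tolerance tol, and the helper evaluate are hypothetical stand-ins for the paper's magnitude criterion and validation protocol.

    import numpy as np

    def neuron_magnitudes(W):
        # Importance score per hidden neuron: sum of absolute incoming weights.
        # (Assumed criterion for illustration; W has shape (n_neurons, n_inputs).)
        return np.abs(W).sum(axis=1)

    def binary_search_prune_count(evaluate, order, baseline_acc, tol):
        # Binary-search the largest number of lowest-magnitude neurons that can be
        # removed from one layer while validation accuracy stays within tol of the
        # unpruned baseline. evaluate(keep_mask) returns the accuracy of the model
        # with the masked-out neurons removed; order lists neuron indices from
        # least to most important.
        n_neurons = len(order)
        lo, hi, best = 0, n_neurons, 0
        while lo <= hi:
            k = (lo + hi) // 2                    # candidate number of neurons to prune
            keep = np.ones(n_neurons, dtype=bool)
            keep[order[:k]] = False
            if evaluate(keep) >= baseline_acc - tol:
                best, lo = k, k + 1               # accuracy preserved: try pruning more
            else:
                hi = k - 1                        # accuracy dropped too far: prune fewer
        return best

    # Per-layer usage (W, evaluate and baseline_acc come from the actual model):
    # order = np.argsort(neuron_magnitudes(W))
    # n_prune = binary_search_prune_count(evaluate, order, baseline_acc, tol=0.01)

Because the search is binary, only about log2(n) accuracy evaluations are needed per layer, instead of one evaluation for every candidate pruning count.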


Notes

  1. The fine-tuning process is the same as https://github.com/dmlc/mxnet-notebooks/blob/master/python/how_to/finetune.ipynb; a brief sketch of this style of fine-tuning is given below.
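
For readers unfamiliar with that notebook, a minimal sketch of fine-tuning a pruned network with MXNet's module API might look as follows; the checkpoint prefix 'pruned-model', the dummy data, and all hyperparameters are placeholders rather than settings from the paper.

    import mxnet as mx
    import numpy as np

    # Dummy data so the sketch is self-contained; in practice these iterators wrap the
    # real training/validation sets, with shapes matching the network's input layer.
    X = np.random.rand(32, 3, 224, 224).astype('float32')
    y = np.random.randint(0, 10, 32)
    train_iter = mx.io.NDArrayIter(X, y, batch_size=8, label_name='softmax_label')
    val_iter   = mx.io.NDArrayIter(X, y, batch_size=8, label_name='softmax_label')

    # Load the pruned network from an MXNet checkpoint ('pruned-model' and epoch 0 are placeholders).
    sym, arg_params, aux_params = mx.model.load_checkpoint('pruned-model', 0)

    mod = mx.mod.Module(symbol=sym, context=mx.cpu())
    mod.fit(train_iter,
            eval_data=val_iter,
            arg_params=arg_params,
            aux_params=aux_params,
            allow_missing=True,   # parameters absent from the checkpoint are freshly initialised
            optimizer='sgd',
            optimizer_params={'learning_rate': 0.001, 'momentum': 0.9},
            eval_metric='acc',
            num_epoch=5,
            batch_end_callback=mx.callback.Speedometer(batch_size=8, frequent=10))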


Acknowledgements

This research is supported by the Chinese Scholarship Council.

Author information

Corresponding author

Correspondence to Boyu Zhang.


Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Zhang, B., Qin, A.K., Chan, J. (2017). An Efficient Binary Search Based Neuron Pruning Method for ConvNet Condensation. In: Liu, D., Xie, S., Li, Y., Zhao, D., El-Alfy, E.S. (eds) Neural Information Processing. ICONIP 2017. Lecture Notes in Computer Science, vol 10635. Springer, Cham. https://doi.org/10.1007/978-3-319-70096-0_20

Download citation

  • DOI: https://doi.org/10.1007/978-3-319-70096-0_20

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-70095-3

  • Online ISBN: 978-3-319-70096-0

  • eBook Packages: Computer Science (R0)
