Abstract
In this work, we propose a heuristic genetic algorithm (GA) for pruning convolutional neural networks (CNNs) according to a multi-objective trade-off among error, computation, and sparsity. In our experiments, we apply the approach to prune a pre-trained LeNet on the MNIST dataset, reducing the parameter size by 95.42% and achieving a 16× speedup of convolutional-layer computation with negligible accuracy loss, by emphasizing sparsity and computation, respectively. Our empirical study suggests that the GA is a viable alternative pruning approach with competitive compression performance. Additionally, compared with state-of-the-art approaches, the GA can automatically prune CNNs according to the multi-objective importance encoded in a pre-defined fitness function.
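The abstract does not specify the chromosome encoding or the exact fitness function, so the following is only a minimal sketch of the general idea: it assumes a binary mask over filters as the chromosome and a weighted sum of error, remaining computation, and sparsity as the fitness. The helper hooks `evaluate_error` and `flops_fraction`, and the weights `w_err`, `w_comp`, `w_sparse`, are hypothetical names introduced for illustration, not the paper's actual interface.

```python
import random

def fitness(mask, evaluate_error, flops_fraction,
            w_err=1.0, w_comp=0.5, w_sparse=0.5):
    """Lower is better: penalize validation error and remaining
    computation, reward sparsity. Weights are illustrative."""
    error = evaluate_error(mask)            # validation error with mask applied
    comp = flops_fraction(mask)             # fraction of original FLOPs kept
    sparsity = 1.0 - sum(mask) / len(mask)  # fraction of weights pruned
    return w_err * error + w_comp * comp - w_sparse * sparsity

def evolve(pop, fit, n_keep=10, p_mut=0.01, generations=50):
    """Simple elitist GA over binary pruning masks (lists of 0/1)."""
    for _ in range(generations):
        pop.sort(key=fit)                   # ascending: best masks first
        parents = pop[:n_keep]
        children = []
        while len(children) < len(pop) - n_keep:
            a, b = random.sample(parents, 2)
            cut = random.randrange(len(a))  # one-point crossover
            child = a[:cut] + b[cut:]
            # bit-flip mutation with probability p_mut per gene
            child = [g ^ (random.random() < p_mut) for g in child]
            children.append(child)
        pop = parents + children
    return min(pop, key=fit)

# Usage (with user-supplied hooks):
#   best_mask = evolve(initial_population,
#                      lambda m: fitness(m, evaluate_error, flops_fraction))
```

Shifting the weights `w_sparse` and `w_comp` corresponds to the emphasis on sparsity versus computation mentioned in the abstract, which is what yields the parameter-reduction and speedup results, respectively.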
Cite this paper
Yang, C., An, Z., Li, C., Diao, B., Xu, Y. (2019). Multi-objective Pruning for CNNs Using Genetic Algorithm. In: Tetko, I., Kůrková, V., Karpov, P., Theis, F. (eds.) Artificial Neural Networks and Machine Learning – ICANN 2019: Deep Learning. Lecture Notes in Computer Science, vol. 11728. Springer, Cham. https://doi.org/10.1007/978-3-030-30484-3_25