
An Improved Capsule Network Based on Newly Reconstructed Network and the Method of Sharing Parameters

  • Chunyan Lu
  • Shukai Duan
  • Lidan Wang
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11554)

Abstract

The capsule network is considered state of the art in the field of computer vision. However, it requires a large amount of storage space because of its large number of parameters. In this paper, we adopt two methods to address this problem. First, we propose sharing the parameters of the capsule layer, which reduces the network's parameters by 18% compared with the original. Second, we redesign the structure of the reconstruction network to replace the original one, reducing the network's parameters by 16%. Combining the two methods reduces the parameters further, by 34%. Finally, we apply the improved capsule network to MNIST handwritten digit recognition; its accuracy is almost the same as, or even slightly higher than, that of the original capsule network, and the reconstructed images also smooth out noise. This article provides new ideas for future optimization of various capsule networks.
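
The abstract does not spell out the sharing scheme, but against the baseline capsule network of Sabour et al. (2017), where each of the 1152 primary capsules (32 channels on a 6 × 6 grid, with 8-dimensional poses) owns its own 8 → 16 transformation matrix per digit class, the reported 18% saving is consistent with sharing one matrix per channel across the 36 grid positions. The sketch below illustrates that idea; the class name SharedWeightDigitCaps and the share flag are hypothetical names of ours, the sharing scheme is an assumption, and the dynamic-routing step is omitted.

    import torch
    import torch.nn as nn

    class SharedWeightDigitCaps(nn.Module):
        """Computes prediction vectors u_hat for a digit-capsule layer.

        With share=False, each of the 32 * 36 = 1152 primary capsules has
        its own 8 -> 16 matrix per class (1,474,560 weights, as in the
        original CapsNet); with share=True, one matrix per channel is
        broadcast over the 36 grid positions (40,960 weights).
        """

        def __init__(self, n_channels=32, grid=36, in_dim=8,
                     n_classes=10, out_dim=16, share=True):
            super().__init__()
            g = 1 if share else grid
            self.W = nn.Parameter(
                0.01 * torch.randn(n_channels, g, n_classes, out_dim, in_dim))

        def forward(self, u):
            # u: (batch, 32, 36, 8) primary-capsule pose vectors.
            u = u.unsqueeze(3).unsqueeze(-1)        # (B, 32, 36, 1, 8, 1)
            W = self.W.unsqueeze(0)                 # (1, 32, g, 10, 16, 8)
            # matmul broadcasts the shared matrices over the grid axis.
            u_hat = torch.matmul(W, u).squeeze(-1)  # (B, 32, 36, 10, 16)
            return u_hat                            # input to dynamic routing

    shared = SharedWeightDigitCaps(share=True)
    full = SharedWeightDigitCaps(share=False)
    print(sum(p.numel() for p in shared.parameters()))  # 40960
    print(sum(p.numel() for p in full.parameters()))    # 1474560

Under these assumptions, sharing removes 1,474,560 − 40,960 = 1,433,600 weights, roughly 17% of the baseline's ≈8.2M total parameters, which is consistent with the 18% figure in the abstract; the baseline decoder (fully connected 160 → 512 → 1024 → 784, ≈1.4M weights) similarly accounts for about the 16% saving attributed to replacing the reconstruction network.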

Keywords

Capsule network · Shared parameters · Reconstructed network


Acknowledgments

This work was supported by the National Natural Science Foundation of China under Grant 61571372, 61672436 and 61601376, the Fundamental Research Funds for the Central Universities under Grant XDJK2016A001 and XDJK2017A005, and the Fundamental Science and Advanced Technology Research Foundation of Chongqing under Grant cstc2016jcyjA0547.


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. College of Electronic and Information Engineering, Southwest University, Chongqing, China
  2. National & Local Joint Engineering Laboratory of Intelligent Transmission and Control Technology, Chongqing, China
  3. Brain-inspired Computing & Intelligent Control of Chongqing Key Lab, Chongqing, China
