A Sparse Substitute for Deconvolution Layers in GANs

  • Juzheng Li
  • Pengfei Ge
  • Chuan-Xian Ren
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11257)

Abstract

Generative adversarial networks are useful tools in image generation tasks, but training and running them is relatively slow due to the large number of parameters introduced by their generators. In this paper, S-Deconv, a sparse drop-in substitute for deconvolution layers, is proposed to alleviate this issue. S-Deconv decouples reshaping the input tensor from reweighting it: the input is first processed into the desired form by a fixed sparse filter and then reweighted by a learnable one. By doing so, S-Deconv exploits sparsity to reduce both the number of learnable parameters and the total number of parameters. Our experiments on Fashion-MNIST, CelebA and Anime-Faces verify the feasibility of our method. We also give another interpretation of our method from the perspective of regularization.
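The abstract describes S-Deconv only at a high level, so the following is a minimal PyTorch sketch of the stated idea, not the authors' implementation: a frozen, randomly sparsified transposed convolution performs the reshaping, and a learnable 1x1 convolution performs the reweighting. The class name SDeconv, the random sparsification scheme, and all hyper-parameters are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SDeconv(nn.Module):
    """Illustrative sketch of an S-Deconv layer (assumed structure):
    a fixed sparse transposed convolution reshapes/upsamples the input,
    then a learnable 1x1 convolution reweights the result."""

    def __init__(self, in_ch, out_ch, kernel_size=4, stride=2,
                 padding=1, sparsity=0.9):
        super().__init__()
        # Fixed filter: upsamples the input into the desired shape.
        # Its weights are randomly sparsified once and then frozen.
        self.reshape = nn.ConvTranspose2d(
            in_ch, in_ch, kernel_size,
            stride=stride, padding=padding, bias=False)
        with torch.no_grad():
            keep = (torch.rand_like(self.reshape.weight) > sparsity).float()
            self.reshape.weight.mul_(keep)  # zero out most weights
        self.reshape.weight.requires_grad_(False)  # not learnable
        # Learnable filter: a cheap 1x1 convolution reweights channels.
        self.reweight = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=True)

    def forward(self, x):
        return self.reweight(self.reshape(x))

# Usage: swap in for nn.ConvTranspose2d(in_ch, out_ch, 4, 2, 1) in a
# DCGAN-style generator. Learnable weights drop from
# in_ch * out_ch * 4 * 4 to in_ch * out_ch (plus out_ch biases).
layer = SDeconv(128, 64)
y = layer(torch.randn(8, 128, 16, 16))  # -> shape (8, 64, 32, 32)
```

Under these assumptions, only the 1x1 reweighting convolution is trained, while the frozen sparse filter contributes no learnable parameters, which is consistent with the abstract's claim of reducing both learnable and total parameter counts.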

Keywords

Generative adversarial networks · Sparsity · Deconvolution

Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  1. School of Mathematics, Sun Yat-sen University, Guangzhou, China
  2. Shenzhen Research Institute, Sun Yat-sen University, Shenzhen, China
