A Step Beyond Generative Multi-adversarial Networks

  • Aman Singh
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11010)


In this paper we modify the structure of, and introduce a new formulation for, generative adversarial networks (GANs) to improve their performance. Our approach builds on the discriminating capability of the Generative Multi-Adversarial Network (GMAN), a variant of GANs. GANs in general have the advantage of accelerated training in the initial phase thanks to the minimax objective, while GMAN can provide reliable training on the original dataset. We explored several avenues for improvement, including automatic regulation, boosting with AdaBoost, and a new Generative Adversarial Metric (GAM). In our design, the images generated from noisy samples are reused by the generator instead of adding new samples. Experimental results show that our image generation strategy produces higher-resolution, higher-quality samples than standard GANs. Furthermore, the number of iterations and the time required for quantitative evaluation are greatly reduced with our method.
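As background for how GMAN combines several discriminators into a single training signal, the following is a minimal sketch of the softmax-weighted aggregation described in the GMAN paper: each discriminator reports a loss, and a temperature parameter (here called `lam`, an illustrative name) interpolates between averaging over all discriminators (`lam = 0`) and taking the hardest one (`lam` large). The function and variable names are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def gman_objective(disc_losses, lam=1.0):
    """Softmax-weighted aggregation of per-discriminator losses.

    lam = 0     -> plain mean over discriminators
    lam -> inf  -> approaches the maximum (hardest discriminator)
    """
    v = np.asarray(disc_losses, dtype=float)
    # Subtract the max before exponentiating for numerical stability.
    w = np.exp(lam * (v - v.max()))
    w /= w.sum()
    return float(np.dot(w, v))
```

For example, with three discriminator losses `[1.0, 2.0, 3.0]`, `lam=0.0` yields their mean, 2.0, while a large temperature such as `lam=50.0` yields a value close to the maximum, 3.0 — illustrating how the generator can be trained against a soft or hard ensemble of critics.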


Keywords: Multi-discriminators · Generators · Adversarial · Minimax · GAN · GMAN · GAM



Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  1. Department of Computing Science, University of Alberta, Edmonton, Canada
