TiedGAN: Multi-domain Image Transformation Networks

  • Mohammad Ahangar Kiasari
  • Dennis Singh Moirangthem
  • Jonghong Kim
  • Minho Lee
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11306)

Abstract

Recently, domain transformation has become a popular challenge for deep generative networks. CycleGAN, a well-known recent domain transformation model, has shown good performance in transforming images from one domain to another. However, CycleGAN does not scale well to multi-domain transformation problems because of its high model complexity. In this paper, we propose TiedGAN to achieve multi-domain image transformation with reduced complexity. Our experimental results indicate that the proposed model performs comparably to CycleGAN while alleviating the complexity issue in the multi-domain transformation task.
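The complexity issue mentioned above can be illustrated with a simple count (a sketch only; the helper name is hypothetical and the exact TiedGAN architecture is given in the full paper): CycleGAN trains two generators per domain pair (A→B and B→A), so covering every ordered pair of n domains with independent pairwise models requires n·(n−1) generators, which grows quadratically.

```python
# Hypothetical helper illustrating pairwise CycleGAN scaling.
# CycleGAN uses two generators per domain pair (A->B and B->A),
# so translating between every ordered pair of n domains with
# independent pairwise models needs n * (n - 1) generators.
def pairwise_generators(n_domains: int) -> int:
    """Generators needed for all ordered domain pairs."""
    return n_domains * (n_domains - 1)

for n in (2, 3, 5, 10):
    print(n, pairwise_generators(n))
# 2 domains -> 2 generators, but 10 domains -> 90 generators
```

This quadratic growth is the multi-domain bottleneck the paper targets; TiedGAN's reduction of this cost is described in the paper body, not reproduced here.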

Keywords

Generative models · Generative adversarial networks · Image domain transformation

Notes

Acknowledgements

This work was partly supported by Institute for Information & Communications Technology Promotion (IITP) grant funded by the Korea government (MSIT) (2016-0-00564, Development of Intelligent Interaction Technology Based on Context Awareness and Human Intention Understanding) (50%) and by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP) (No. NRF-2016R1A2A2A05921679) (50%).

References

  1. Goodfellow, I., et al.: Generative adversarial nets. In: Advances in Neural Information Processing Systems, pp. 2672–2680 (2014)
  2. Liu, M.Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. In: Advances in Neural Information Processing Systems, pp. 700–708 (2017)
  3. Liu, M.Y., Tuzel, O.: Coupled generative adversarial networks. In: Advances in Neural Information Processing Systems, pp. 469–477 (2016)
  4. Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proceedings of International Conference on Computer Vision (ICCV) (2015)
  5. Makhzani, A., Shlens, J., Jaitly, N., Goodfellow, I.: Adversarial autoencoders. In: International Conference on Learning Representations (2016)
  6. Radford, A., Metz, L., Chintala, S.: Unsupervised representation learning with deep convolutional generative adversarial networks. In: International Conference on Learning Representations (2016)
  7. Tolstikhin, I.O., Gelly, S., Bousquet, O., Simon-Gabriel, C.J., Schölkopf, B.: AdaGAN: boosting generative models. In: Advances in Neural Information Processing Systems, pp. 5424–5433 (2017)
  8. Zhu, J.Y., Park, T., Isola, P., Efros, A.A.: Unpaired image-to-image translation using cycle-consistent adversarial networks. In: IEEE International Conference on Computer Vision (2017)

Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Mohammad Ahangar Kiasari (1)
  • Dennis Singh Moirangthem (1)
  • Jonghong Kim (1)
  • Minho Lee (1)
  1. School of Electronics Engineering, Kyungpook National University, Daegu, South Korea