
Modular Domain-to-Domain Translation Network

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 11141)

Abstract

We present a method for constructing and training a deep domain-to-domain translation network: two datasets describing the same classes (the source and the target domain) are used to train a deep network that translates a pattern from the source domain into its counterpart form in the target domain. We introduce a hierarchical architecture that encapsulates information about the target domain by embedding individually pre-trained networks; this deep hierarchical architecture is then trained as one unified network. Using this approach, we show that samples from the source domain are translated into the target-domain format both when a one-to-one correspondence between the samples of the two domains is available and when this correspondence information is absent. In our experiments, the translation is good as long as the target-domain dataset yields good classification results when trained on by itself. We use either a distorted version of the MNIST dataset or the SVHN dataset as the source domain and MNIST as the target domain, and we visualize and evaluate the resulting translations. We also discuss the proposed model's relation to conditional Generative Adversarial Networks, and we argue that deep learning can benefit from such strictly hierarchical architectures.
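
To make the training scheme concrete, here is a minimal PyTorch sketch of the idea as we read it from the abstract: a translator network maps a source-domain image (SVHN-like, 3x32x32) to target-domain format (MNIST-like, 1x28x28), a classifier pre-trained on the target domain alone is embedded behind it, and the whole stack is then trained as one unified network. All layer sizes, the MLP design, and the use of class labels as the sole supervision are illustrative assumptions, not the authors' actual architecture.

```python
import torch
import torch.nn as nn

# Hypothetical translator: maps a source-domain image (e.g. SVHN, 3x32x32)
# to target-domain format (e.g. MNIST, 1x28x28). Layer sizes are illustrative.
class Translator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(3 * 32 * 32, 1024), nn.ReLU(),
            nn.Linear(1024, 28 * 28), nn.Sigmoid(),  # pixel intensities in [0, 1]
        )

    def forward(self, x):
        return self.net(x).view(-1, 1, 28, 28)

# Hypothetical target-domain classifier, pre-trained on MNIST by itself
# (pre-training loop omitted here).
class TargetClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(28 * 28, 256), nn.ReLU(),
            nn.Linear(256, 10),
        )

    def forward(self, x):
        return self.net(x)

# Step 1: pre-train the classifier on the target domain alone (omitted).
classifier = TargetClassifier()

# Step 2: embed the pre-trained classifier behind the translator and train
# the unified network. The class label of each source image supervises the
# classifier's output, pushing the translator to emit target-domain-like images.
translator = Translator()
unified = nn.Sequential(translator, classifier)
optimizer = torch.optim.Adam(unified.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

src_batch = torch.randn(8, 3, 32, 32)   # stand-in for a batch of SVHN images
labels = torch.randint(0, 10, (8,))     # their digit classes

logits = unified(src_batch)
loss = loss_fn(logits, labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

When a one-to-one correspondence between source and target samples is available, a pixel-level reconstruction loss against the paired target image could be added to the classification loss; the sketch above covers only the correspondence-free case.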



Author information


Corresponding author

Correspondence to Savvas Karatsiolis.



Copyright information

© 2018 Springer Nature Switzerland AG

About this paper


Cite this paper

Karatsiolis, S., Schizas, C.N., Petkov, N. (2018). Modular Domain-to-Domain Translation Network. In: Kůrková, V., Manolopoulos, Y., Hammer, B., Iliadis, L., Maglogiannis, I. (eds) Artificial Neural Networks and Machine Learning – ICANN 2018. Lecture Notes in Computer Science, vol. 11141. Springer, Cham. https://doi.org/10.1007/978-3-030-01424-7_42


  • DOI: https://doi.org/10.1007/978-3-030-01424-7_42

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-01423-0

  • Online ISBN: 978-3-030-01424-7

  • eBook Packages: Computer Science, Computer Science (R0)
