Progress in Artificial Intelligence, Volume 8, Issue 1, pp. 73–82

Conditional multichannel generative adversarial networks with an application to traffic signs representation learning

  • Farzin Ghorban
  • Narges Milani
  • Daniel Schugk
  • Lutz Roese-Koerner
  • Yu Su
  • Dennis Müller
  • Anton Kummert
Regular Paper


Generative adversarial networks (GANs) are known to produce photorealistic representations. However, we show in this study that this only holds when the input channels come from a regular RGB camera sensor. To alleviate this shortcoming, we propose a general solution, which we refer to as multichannel GANs (MCGANs). In contrast to existing approaches, MCGANs can process multiple channels with different textures and resolutions. This is achieved by using known concepts in deep learning, such as weight sharing and specially separated convolutions. The proposed pipeline enables particular kernels to learn low-level characteristics from the different channels without the need for exhaustive hyper-parameter tuning. We demonstrate the improved representational ability of the framework on traffic sign samples captured by a camera with a so-called red-clear-clear-clear pixel topology. Furthermore, we extend our solution by applying the concept of conditions, which offers a whole spectrum of new features, especially for the generation of traffic signs. Throughout this paper, we further discuss relevant applications for the generated synthetic data.
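The conditioning mechanism mentioned in the abstract can be illustrated with a minimal sketch: in a conditional GAN, the generator receives the class label alongside the noise vector, typically as a one-hot encoding concatenated to the latent input. The dimensions below (a 100-dimensional noise vector and 43 traffic sign classes, as in the GTSRB benchmark) are illustrative assumptions, not details taken from this paper.

```python
import numpy as np

def one_hot(label: int, num_classes: int) -> np.ndarray:
    """Encode a class label as a one-hot vector."""
    v = np.zeros(num_classes)
    v[label] = 1.0
    return v

def conditional_input(z: np.ndarray, label: int, num_classes: int) -> np.ndarray:
    """Assemble the generator input of a conditional GAN:
    the noise vector z concatenated with the one-hot class label."""
    return np.concatenate([z, one_hot(label, num_classes)])

rng = np.random.default_rng(0)
z = rng.standard_normal(100)               # latent noise vector
x = conditional_input(z, label=3, num_classes=43)
print(x.shape)  # (143,)
```

Feeding the label this way lets a single generator synthesize samples of a requested sign class on demand, which is what makes conditional variants attractive for targeted synthetic data generation.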


Keywords: Machine learning · Deep neural networks · Artificial vision · Representation learning · Generative models · Synthesizing data



Copyright information

© Springer-Verlag GmbH Germany, part of Springer Nature 2018

Authors and Affiliations

  1. University of Wuppertal, Wuppertal, Germany
  2. Delphi Deutschland GmbH, Wuppertal, Germany
