Aerial GANeration: Towards Realistic Data Augmentation Using Conditional GANs

  • Stefan Milz
  • Tobias Rüdiger
  • Sebastian Süss
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11130)

Abstract

Environmental perception for autonomous aerial vehicles is a rising field. In recent years, convolutional neural networks have driven strong gains in both accuracy and efficiency, and the community has established data sets for benchmarking several kinds of algorithms. However, public data for multi-sensor approaches is rare or not large enough to train highly accurate algorithms. For this reason, we propose a method to generate multi-sensor data sets using realistic data augmentation based on conditional generative adversarial networks (cGANs). cGANs have shown impressive results for image-to-image translation, and we use this principle for sensor simulation, so there is no need for expensive and complex 3D engines. Our method encodes ground truth data, e.g., semantics or object boxes that can be drawn randomly, into the conditional image to generate realistic, consistent sensor data. We validate our method for aerial object detection and semantic segmentation on visual data, as well as for 3D Lidar reconstruction, using the ISPRS and DOTA data sets. We demonstrate qualitative accuracy improvements for state-of-the-art object detection (YOLO) using our augmentation technique.
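
To make the principle concrete, the following is a minimal sketch of the augmentation idea described above: randomly drawn object boxes are rasterized into a conditional label image, which a pix2pix-style generator (cf. [4, 10]) translates into a synthetic sensor image. This is an illustrative sketch assuming PyTorch; the names render_condition and TinyGenerator, the box coordinates, and the toy network are our own placeholders, not the authors' implementation.

    import torch
    import torch.nn as nn

    def render_condition(boxes, size=(256, 256)):
        # Rasterize randomly drawn object boxes into a one-channel label map
        # that serves as the conditional input image (the ground truth encoding).
        cond = torch.zeros(1, 1, *size)
        for (x0, y0, x1, y1) in boxes:
            cond[0, 0, y0:y1, x0:x1] = 1.0  # mark box interior as "object"
        return cond

    class TinyGenerator(nn.Module):
        # Stand-in for the U-Net generator of pix2pix [4, 10]; real models
        # are far deeper, but the conditioning mechanism is the same.
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=4, stride=2, padding=1),
                nn.ReLU(),
                nn.ConvTranspose2d(16, 3, kernel_size=4, stride=2, padding=1),
                nn.Tanh(),
            )

        def forward(self, cond):
            return self.net(cond)  # conditional label image -> synthetic RGB

    # Draw boxes at random, encode them, and synthesize a sensor image.
    boxes = [(40, 60, 90, 110), (150, 30, 200, 80)]
    generator = TinyGenerator()
    fake_rgb = generator(render_condition(boxes))  # shape (1, 3, 256, 256)

In a trained setup, the generator would be learned adversarially against a discriminator on paired (condition, sensor) images; the synthesized samples can then augment the training set of a downstream detector such as YOLO.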

Keywords

Conditional GANs · Sensor fusion · Aerial perception · Object detection · Semantic segmentation · 3D reconstruction

Notes

Acknowledgement

The authors would like to thank their families, especially their wives (Julia, Isabell, Caterina) and children (Til, Liesbeth, Karl, Fritz, Frieda), for their strong mental support.

References

  1. Khoshelham, K., Díaz Vilariño, L., Peter, M., Kang, Z., Acharya, D.: The ISPRS benchmark on indoor modelling. In: ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W7, pp. 367–372 (2017)
  2. Xia, G., et al.: DOTA: a large-scale dataset for object detection in aerial images. CoRR abs/1711.10398 (2017)
  3. Redmon, J., Farhadi, A.: YOLO9000: better, faster, stronger. CoRR abs/1612.08242 (2016)
  4. Isola, P., Zhu, J., Zhou, T., Efros, A.A.: Image-to-image translation with conditional adversarial networks. CoRR abs/1611.07004 (2016)
  5. Goodfellow, I.J., et al.: Generative adversarial networks (2014)
  6. Hinton, G.E., Osindero, S., Teh, Y.W.: A fast learning algorithm for deep belief nets. Neural Comput. 18(7), 1527–1554 (2006)
  7. Tran, N.T., Bui, T.A., Cheung, N.M.: Generative adversarial autoencoder networks (2018)
  8. Mirza, M., Osindero, S.: Conditional generative adversarial nets. CoRR abs/1411.1784 (2014)
  9. Zhu, J., Park, T., Isola, P., Efros, A.A.: Unpaired image-to-image translation using cycle-consistent adversarial networks. CoRR abs/1703.10593 (2017)
  10. Ronneberger, O., Fischer, P., Brox, T.: U-Net: convolutional networks for biomedical image segmentation. CoRR abs/1505.04597 (2015)
  11. Li, C., Wand, M.: Precomputed real-time texture synthesis with Markovian generative adversarial networks. CoRR abs/1604.04382 (2016)

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Spleenlab, Saalburg-Ebersdorf, Germany