Aerial GANeration: Towards Realistic Data Augmentation Using Conditional GANs
Environmental perception for autonomous aerial vehicles is a rising field. In recent years, convolutional neural networks have driven strong gains in both accuracy and efficiency, and the community has established data sets for benchmarking several kinds of algorithms. However, public data for multi-sensor approaches is rare, or not large enough to train highly accurate algorithms. For this reason, we propose a method to generate multi-sensor data sets through realistic data augmentation based on conditional generative adversarial networks (cGANs). cGANs have shown impressive results for image-to-image translation; we apply this principle to sensor simulation, so no expensive and complex 3D engine is needed. Our method encodes ground truth data, e.g. semantics or object boxes that can be drawn randomly, into the conditional image in order to generate realistic, consistent sensor data. We validate the method for aerial object detection and semantic segmentation on visual data, as well as 3D Lidar reconstruction, using the ISPRS and DOTA data sets. We demonstrate qualitative accuracy improvements for a state-of-the-art object detector (YOLO) trained with our augmentation technique.
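The conditional encoding described above can be sketched as follows: randomly drawn object boxes are rasterized into a one-hot label image, which a trained pix2pix-style generator would translate into a realistic sensor image. This is a minimal illustration with NumPy only; the function names, the channel layout, and the image size are assumptions, not the paper's exact encoding, and the generator call itself is omitted.

```python
import numpy as np

def sample_random_boxes(n, size=(256, 256), n_classes=3, rng=None):
    """Draw n random axis-aligned ground-truth boxes (x0, y0, x1, y1, class).
    Hypothetical helper: stands in for the paper's random box drawing."""
    rng = rng or np.random.default_rng(0)
    boxes = []
    for _ in range(n):
        x0 = int(rng.integers(0, size[1] - 32))
        y0 = int(rng.integers(0, size[0] - 32))
        w, h = rng.integers(8, 32, size=2)
        boxes.append((x0, y0, x0 + int(w), y0 + int(h),
                      int(rng.integers(0, n_classes))))
    return boxes

def make_conditional_image(boxes, size=(256, 256), n_classes=3):
    """Rasterize boxes into a one-hot conditional image, the kind of
    input a pix2pix-style cGAN generator expects (assumed encoding)."""
    cond = np.zeros((*size, n_classes), dtype=np.float32)
    for (x0, y0, x1, y1, cls) in boxes:
        cond[y0:y1, x0:x1, cls] = 1.0
    return cond

boxes = sample_random_boxes(5)
cond = make_conditional_image(boxes)
# A trained generator G would then synthesize a sensor image consistent
# with the drawn boxes: fake_image = G(cond). G is not implemented here.
```

In the paired (pix2pix) setting assumed here, the generator is trained on (conditional image, real sensor image) pairs; at augmentation time, new conditional images with freely placed boxes yield new labeled training samples at no labeling cost.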
Keywords: Conditional GANs · Sensor fusion · Aerial perception · Object detection · Semantic segmentation · 3D reconstruction
The authors would like to thank their families, especially their wives (Julia, Isabell, Caterina) and children (Til, Liesbeth, Karl, Fritz, Frieda), for their strong mental support.
- 1. Khoshelham, K., Díaz Vilariño, L., Peter, M., Kang, Z., Acharya, D.: The ISPRS benchmark on indoor modelling. In: ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W7, pp. 367–372 (2017)
- 2. Xia, G., et al.: DOTA: a large-scale dataset for object detection in aerial images. CoRR abs/1711.10398 (2017)
- 3. Redmon, J., Farhadi, A.: YOLO9000: better, faster, stronger. CoRR abs/1612.08242 (2016)
- 4. Isola, P., Zhu, J., Zhou, T., Efros, A.A.: Image-to-image translation with conditional adversarial networks. CoRR abs/1611.07004 (2016)
- 5. Goodfellow, I.J., et al.: Generative adversarial networks (2014)
- 7. Tran, N.T., Bui, T.A., Cheung, N.M.: Generative adversarial autoencoder networks (2018)
- 8. Mirza, M., Osindero, S.: Conditional generative adversarial nets. CoRR abs/1411.1784 (2014)
- 9. Zhu, J., Park, T., Isola, P., Efros, A.A.: Unpaired image-to-image translation using cycle-consistent adversarial networks. CoRR abs/1703.10593 (2017)
- 10. Ronneberger, O., Fischer, P., Brox, T.: U-Net: convolutional networks for biomedical image segmentation. CoRR abs/1505.04597 (2015)
- 11. Li, C., Wand, M.: Precomputed real-time texture synthesis with Markovian generative adversarial networks. CoRR abs/1604.04382 (2016)