
Procedural Synthesis of Remote Sensing Images for Robust Change Detection with Neural Networks

  • Maria Kolos
  • Anton Marin
  • Alexey Artemov
  • Evgeny Burnaev (corresponding author)
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11555)

Abstract

Data-driven methods such as convolutional neural networks (CNNs) are known to deliver state-of-the-art performance on image recognition tasks when the training data are abundant. However, in some instances, such as change detection in remote sensing images, annotated data cannot be obtained in sufficient quantities. In this work, we propose a simple and efficient method for creating realistic targeted synthetic datasets in the remote sensing domain, leveraging the opportunities offered by game development engines. We provide a description of the pipeline for procedural geometry generation and rendering as well as an evaluation of the efficiency of produced datasets in a change detection scenario. Our evaluations demonstrate that our pipeline helps to improve the performance and convergence of deep learning models when the amount of real-world data is severely limited.
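The abstract's central idea, pretraining on abundant synthetic imagery before fine-tuning on the scarce real data, can be illustrated with a minimal two-stage training schedule. The sketch below is our own hedged illustration, not the authors' pipeline; the function name and parameters are hypothetical:

```python
import random

def make_training_schedule(synthetic, real,
                           pretrain_epochs=5, finetune_epochs=5, seed=0):
    """Build a two-stage schedule: first epochs over the large synthetic
    set, then epochs over the small real set. Each entry is a tuple
    (stage, epoch_index, shuffled_sample_list)."""
    rng = random.Random(seed)
    schedule = []
    # Stage 1: pretrain on procedurally generated image pairs.
    for epoch in range(pretrain_epochs):
        batch = synthetic[:]          # copy so shuffling is non-destructive
        rng.shuffle(batch)
        schedule.append(("pretrain", epoch, batch))
    # Stage 2: fine-tune on the limited annotated real-world pairs.
    for epoch in range(finetune_epochs):
        batch = real[:]
        rng.shuffle(batch)
        schedule.append(("finetune", epoch, batch))
    return schedule
```

In practice each sample here would stand for a co-registered image pair with a change mask, and the schedule would drive a segmentation network; the split into a long synthetic stage and a short real stage is what the evaluation in the paper measures.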

Keywords

Remote sensing · Deep learning · Synthetic imagery


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Maria Kolos (2)
  • Anton Marin (2)
  • Alexey Artemov (1)
  • Evgeny Burnaev (1), corresponding author
  1. ADASE, Skolkovo Institute of Science and Technology, Moscow, Russia
  2. Aeronet Group, Skolkovo Institute of Science and Technology, Moscow, Russia
