
Deep Learning Approach in Aerial Imagery for Supporting Land Search and Rescue Missions

  • Dunja Božić-Štulić
  • Željko Marušić
  • Sven Gotovac

Abstract

In this paper, we propose a novel approach to person detection in UAV aerial images for search and rescue tasks in Mediterranean and sub-Mediterranean landscapes. Person detection in very high spatial resolution images is challenging because the target objects are relatively small and often camouflaged by the surrounding environment. The proposed method first reduces the search space with a visual attention algorithm that detects the salient, or most prominent, segments of the image. To discard non-relevant salient regions, we then classify each candidate region with pre-trained and fine-tuned convolutional neural networks (CNNs) and retain only those regions most likely to contain a person. To train and test the model, we compiled a dedicated database, HERIDAL, which contains over 68,750 aerial image patches of wilderness for training and approximately 500 labelled full-size real-world images for testing. The proposed method achieves a detection rate of 88.9% and a precision of 34.8%, outperforming the mean-shift-segmentation-based system currently used by Croatian mountain search and rescue (SAR) teams (IPSAR). We also trained and tested a state-of-the-art region proposal network, Faster R-CNN (Ren et al. 2015, CoRR arXiv:1506.01497), on the HERIDAL database; it achieved comparable but slightly worse results than our proposed method.
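To make the two-stage pipeline concrete, the short Python sketch below mirrors its structure: a saliency map proposes candidate regions, and a classifier confirms which of them contain a person. This is only an illustrative outline under stated assumptions, not the authors' implementation: the saliency step uses a generic spectral-residual approximation rather than the wavelet-based attention model used in the paper, and person_score is a hypothetical hook where a pre-trained, fine-tuned CNN classifier would be plugged in.

    import numpy as np
    from scipy import ndimage

    def spectral_residual_saliency(gray):
        """Generic spectral-residual saliency map, scaled to [0, 1]."""
        spectrum = np.fft.fft2(gray.astype(np.float64))
        log_amplitude = np.log1p(np.abs(spectrum))
        phase = np.angle(spectrum)
        # The "residual" is the log-amplitude minus its local average.
        residual = log_amplitude - ndimage.uniform_filter(log_amplitude, size=3)
        saliency = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
        saliency = ndimage.gaussian_filter(saliency, sigma=3)
        return (saliency - saliency.min()) / (saliency.max() - saliency.min() + 1e-12)

    def candidate_regions(saliency, thresh=0.7):
        """Threshold the saliency map and return bounding boxes of salient blobs."""
        labels, _ = ndimage.label(saliency > thresh)
        return [(sl[1].start, sl[0].start, sl[1].stop, sl[0].stop)  # (x0, y0, x1, y1)
                for sl in ndimage.find_objects(labels)]

    def person_score(patch):
        """Hypothetical hook: a fine-tuned CNN would score the patch here."""
        return 0.0  # replace with the classifier's person-class probability

    def detect_persons(gray, cnn_thresh=0.5):
        """Stage 1: salient region proposals. Stage 2: keep CNN-confirmed regions."""
        saliency = spectral_residual_saliency(gray)
        detections = []
        for (x0, y0, x1, y1) in candidate_regions(saliency):
            if person_score(gray[y0:y1, x0:x1]) >= cnn_thresh:
                detections.append((x0, y0, x1, y1))
        return detections

In practice the candidate patches would be resized to the CNN's input resolution before scoring and overlapping detections merged, but the control flow above captures how the attention stage prunes the search space before the more expensive CNN stage runs.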

Keywords

Convolutional neural networks · RCNN · Salient object detection · Unmanned aerial vehicles (UAV) · Search and rescue (SAR) · Image database

Notes

Acknowledgements

This research was carried out in part within the framework of the IPSAR project, University of Split, Croatia. It was also partly supported by the Federal Ministry of Education and Science, Bosnia and Herzegovina, through Grant NG 05-39-2945-3/16 to the Faculty of Science and Education, University of Mostar. We thank NVIDIA Corporation for the GPU donation through the NVIDIA GPU Education Center program at the University of Mostar.

References

  1. Angelova, A., Krizhevsky, A., Vanhoucke, V., Ogale, A., & Ferguson, D. (2015). Real-time pedestrian detection with deep network cascades. In Proceedings of BMVC 2015.
  2. Gaszczak, A., Han, J., & Breckon, T. P. (2011). Real-time people and vehicle detection from UAV imagery. In Proceedings of SPIE (Vol. 7878). https://doi.org/10.1117/12.876663.
  3. Borji, A., Cheng, M. M., Hou, Q., Jiang, H., & Li, J. (2014). Salient object detection: A survey. arXiv preprint arXiv:1411.5878.
  4. Chen, C., Liu, M. Y., Tuzel, O., & Xiao, J. (2017). R-CNN for small object detection. In S. H. Lai, V. Lepetit, K. Nishino, & Y. Sato (Eds.), Computer Vision—ACCV 2016 (pp. 214–230). Cham: Springer.
  5. Daubechies, I. (1992). Ten lectures on wavelets. Philadelphia, PA: Society for Industrial and Applied Mathematics.
  6. Eggert, C., Brehm, S., Winschel, A., Zecha, D., & Lienhart, R. (2017). A closer look: Small object detection in faster R-CNN. In 2017 IEEE international conference on multimedia and expo (ICME) (pp. 421–426).  https://doi.org/10.1109/ICME.2017.8019550.
  7. Enzweiler, M., & Gavrila, D. M. (2009). Monocular pedestrian detection: Survey and experiments. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(12), 2179–2195. https://doi.org/10.1109/TPAMI.2008.260.
  8. Girshick, R. B. (2015). Fast R-CNN. CoRR arXiv:1504.08083.
  9. Girshick, R. B., Donahue, J., Darrell, T., & Malik, J. (2013). Rich feature hierarchies for accurate object detection and semantic segmentation. CoRR arXiv:1311.2524.
  10. Gotovac, S., Papić, V., & Marušić, Ž. (2016). Analysis of saliency object detection algorithms for search and rescue operations. In 24th International conference on software, telecommunications and computer networks (SoftCOM) (pp. 1–6).  https://doi.org/10.1109/SOFTCOM.2016.7772118.
  11. He, K., Zhang, X., Ren, S., & Sun, J. (2015). Deep residual learning for image recognition. CoRR arXiv:1512.03385.
  12. Hosang, J., Omran, M., Benenson, R., & Schiele, B. (2015). Taking a deeper look at pedestrians. In IEEE conference on computer vision and pattern recognition (CVPR).
  13. Imamoglu, N., Lin, W., & Fang, Y. (2013). A saliency detection model using low-level features based on wavelet transform. IEEE Transactions on Multimedia, 15(1), 96–105. https://doi.org/10.1109/TMM.2012.2225034.
  14. Koch, C., & Ullman, S. (1987). Shifts in selective visual attention: Towards the underlying neural circuitry (pp. 115–141). Dordrecht: Springer. https://doi.org/10.1007/978-94-009-3833-5_5.
  15. Koester, R. (2008). Lost person behavior: A search and rescue guide on where to look for land, air, and water. dbS Productions. https://books.google.hr/books?id=YQeSIAAACAAJ.
  16. Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). Imagenet classification with deep convolutional neural networks. In Proceedings of the 25th international conference on neural information processing systems—Volume 1, Curran Associates Inc., USA, NIPS’12 (pp. 1097–1105). http://dl.acm.org/citation.cfm?id=2999134.2999257.
  17. LeCun, Y., Bottou, L., Bengio, Y., & Haffner, P. (1998). Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11), 2278–2324.
  18. Leroy, J., Riche, N., Mancas, M., Gosselin, B., & Dutoit, T. (2014). SuperRare: An object-oriented saliency algorithm based on superpixels rarity.
  19. Li, J., Levine, M. D., An, X., Xu, X., & He, H. (2016). Visual saliency based on scale-space analysis in the frequency domain. CoRR arXiv:1605.01999.
  20. Musić, J., Orović, I., Marasović, T., Papić, V., & Stanković, S. (2016). Gradient compressive sensing for image data reduction in UAV based search and rescue in the wild. Mathematical Problems in Engineering, 2016. https://doi.org/10.1155/2016/6827414.
  21. Ren, S., He, K., Girshick, R. B., & Sun, J. (2015). Faster R-CNN: Towards real-time object detection with region proposal networks. CoRR arXiv:1506.01497.
  22. Rudol, P., & Doherty, P. (2008). Human body detection and geolocalization for UAV search and rescue missions using color and thermal imagery. In 2008 IEEE aerospace conference (pp. 1–8).  https://doi.org/10.1109/AERO.2008.4526559.
  23. Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., et al. (2014). Imagenet large scale visual recognition challenge. CoRR arXiv:1409.0575.
  24. Simonyan, K., & Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. CoRR arXiv:1409.1556.
  25. Sokalski, J., Breckon, T. P., & Cowling, I. (2010). Automatic salient object detection in UAV imagery. In Proceedings of the 25th International Unmanned Air Vehicle Systems conference (pp. 1–12).
  26. Syrotuck, W., & Syrotuck, J. (2000). Analysis of lost person behavior: An aid to search planning. Barkleigh Productions. https://books.google.hr/books?id=3rWDAAAACAAJ.
  27. Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., et al. (2015). Going deeper with convolutions. In 2015 IEEE conference on computer vision and pattern recognition (CVPR) (pp. 1–9).  https://doi.org/10.1109/CVPR.2015.7298594.
  28. Tian, Y., Luo, P., Wang, X., & Tang, X. (2015). Deep learning strong parts for pedestrian detection. In 2015 IEEE international conference on computer vision (ICCV) (pp. 1904–1912).
  29. Treisman, A. M., & Gelade, G. (1980). A feature-integration theory of attention. Cognitive Psychology, 12(1), 97–136.
  30. Turić, H., Dujmić, H., & Papić, V. (2010). Two-stage segmentation of aerial images for search and rescue. Information Technology and Control, 39, 138–145.
  31. Viola, P., Jones, M. J., & Snow, D. (2003). Detecting pedestrians using patterns of motion and appearance. In Proceedings ninth IEEE international conference on computer vision (Vol. 2, pp. 734–741).  https://doi.org/10.1109/ICCV.2003.1238422.
  32. Yuan, P., Zhong, Y., & Yuan, Y. (2017). Faster R-CNN with region proposal refinement.
  33. Zendel, O., Murschitz, M., Humenberger, M., & Herzner, W. (2017). How good is my test data? Introducing safety analysis for computer vision. International Journal of Computer Vision, 125(1–3), 95–109. https://doi.org/10.1007/s11263-017-1020-z.
  34. Zhang, L., Lin, L., Liang, X., & He, K. (2016). Is Faster R-CNN doing well for pedestrian detection? In B. Leibe, J. Matas, N. Sebe, & M. Welling (Eds.), Computer Vision—ECCV 2016 (pp. 443–457). Cham: Springer.

Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2019

Authors and Affiliations

  1. Faculty of Electrical Engineering, Mechanical Engineering and Naval Architecture, University of Split, Split, Croatia
  2. Faculty of Science and Education, University of Mostar, Mostar, Bosnia and Herzegovina
