Foreground Object Segmentation in RGB–D Data Implemented on GPU

Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 1196)

Abstract

This paper presents a GPU implementation of two foreground object segmentation algorithms, the Gaussian Mixture Model (GMM) and the Pixel-Based Adaptive Segmenter (PBAS), modified to support RGB–D data. The joint use of colour (RGB) and depth (D) data improves segmentation accuracy, especially in the case of colour camouflage, illumination changes and shadows. Three GPUs were used to accelerate the computations: the embedded NVIDIA Jetson TX2 (Pascal architecture), the mobile NVIDIA GeForce GTX 1050M (Pascal architecture) and the high-performance NVIDIA GeForce RTX 2070 (Turing architecture). Segmentation accuracy comparable to previously published work was obtained, and the GPU platforms enabled real-time image processing. In addition, the system has been adapted to work with two Intel RGB–D sensors: the RealSense D415 and D435.
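
To make the processing scheme concrete, the sketch below shows a hypothetical per-pixel CUDA kernel that fuses colour and depth for background subtraction. It is a deliberate simplification of the methods named in the abstract: it keeps a single running Gaussian per pixel over an (R, G, B, D) feature instead of a full GMM or PBAS sample model, and all names and parameter values (bgMean, bgVar, alpha = 0.01, k = 2.5) are illustrative assumptions rather than the authors' code.

    // Minimal per-pixel RGB-D background-subtraction kernel (sketch, not the paper's code).
    // Assumptions: one running Gaussian per pixel over an (R,G,B,D) feature; depth is
    // pre-scaled to 0..255 and packed into the fourth channel; bgMean is initialised
    // from the first frame and bgVar with a positive value before the first call.
    #include <cuda_runtime.h>

    __global__ void rgbdBackgroundSubtraction(
            const uchar4* __restrict__ frame,   // packed R,G,B + scaled depth
            float4*        bgMean,              // per-pixel running mean of (R,G,B,D)
            float4*        bgVar,               // per-pixel running variance
            unsigned char* mask,                // output: 255 = foreground, 0 = background
            int width, int height,
            float alpha,                        // learning rate, e.g. 0.01
            float k)                            // match threshold in standard deviations, e.g. 2.5
    {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x >= width || y >= height) return;
        int idx = y * width + x;

        uchar4 p = frame[idx];
        float4 f = make_float4(p.x, p.y, p.z, p.w);   // current RGB-D feature
        float4 m = bgMean[idx];
        float4 v = bgVar[idx];

        // Per-channel squared distance test against k standard deviations;
        // the depth channel is treated like an extra colour channel here.
        float4 d = make_float4(f.x - m.x, f.y - m.y, f.z - m.z, f.w - m.w);
        bool match = (d.x * d.x <= k * k * v.x) && (d.y * d.y <= k * k * v.y) &&
                     (d.z * d.z <= k * k * v.z) && (d.w * d.w <= k * k * v.w);

        mask[idx] = match ? 0 : 255;

        // Running-average update of the background model, applied only to matching
        // pixels so that foreground objects are not absorbed into the background.
        if (match) {
            bgMean[idx] = make_float4(m.x + alpha * d.x, m.y + alpha * d.y,
                                      m.z + alpha * d.z, m.w + alpha * d.w);
            // Variance update with a small floor to avoid degenerate zero variance.
            bgVar[idx] = make_float4(
                fmaxf(4.f, (1.f - alpha) * v.x + alpha * d.x * d.x),
                fmaxf(4.f, (1.f - alpha) * v.y + alpha * d.y * d.y),
                fmaxf(4.f, (1.f - alpha) * v.z + alpha * d.z * d.z),
                fmaxf(4.f, (1.f - alpha) * v.w + alpha * d.w * d.w));
        }
    }

    // Host-side launch sketch: one thread per pixel, 16x16 blocks.
    void segmentFrame(const uchar4* dFrame, float4* dMean, float4* dVar,
                      unsigned char* dMask, int w, int h)
    {
        dim3 block(16, 16);
        dim3 grid((w + block.x - 1) / block.x, (h + block.y - 1) / block.y);
        rgbdBackgroundSubtraction<<<grid, block>>>(dFrame, dMean, dVar, dMask,
                                                   w, h, 0.01f, 2.5f);
    }

In practice the per-pixel independence of such kernels is what makes GMM- and PBAS-style background subtraction map well to GPUs: every pixel's model is read, tested and updated without communication with its neighbours.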

Keywords

Foreground object segmentation · Background subtraction · RGB–D · GPU · GMM · PBAS · Intel RealSense

Notes

Acknowledgements

The work presented in this paper was supported by the AGH University of Science and Technology project no. 16.16.120.773.


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. AGH University of Science and Technology, Kraków, Poland
