BMOG: boosted Gaussian Mixture Model with controlled complexity for background subtraction

  • Isabel Martins
  • Pedro Carvalho
  • Luís Corte-Real
  • José Luis Alba-Castro
Original Article

Abstract

Developing robust and universal methods for unsupervised segmentation of moving objects in video sequences has proved to be a challenging task that has attracted the attention of many researchers over the last decades. State-of-the-art methods are, in general, computationally heavy, which prevents their use in real-time applications. This research addresses the problem by proposing a robust and computationally efficient method, coined BMOG, that significantly boosts the performance of a widely used method based on a Mixture of Gaussians. The proposed solution explores a novel classification mechanism that combines the discrimination capabilities of the color space with hysteresis, together with a dynamic learning rate for the background model update. The complexity of BMOG is kept low, making it suitable for real-time applications. BMOG was objectively evaluated on the ChangeDetection.net 2014 benchmark. An exhaustive set of experiments was conducted, and a detailed analysis of the results, using two complementary types of metrics, revealed that BMOG achieves an excellent compromise between performance and complexity.
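The following is a minimal, self-contained sketch of the two mechanisms named in the abstract, a hysteresis double threshold for classification and a state-dependent learning rate for the model update, written for illustration only. It is not the authors' BMOG implementation: it keeps a single Gaussian per pixel instead of a full mixture, operates on whatever colour representation the input frames use, and the class name and all parameter values (HysteresisGaussianBG, t_low, t_high, alpha_bg, alpha_fg, init_var) are hypothetical choices made for this example.

import numpy as np

class HysteresisGaussianBG:
    """Illustrative single-Gaussian-per-pixel background model with a
    hysteresis double threshold and a state-dependent learning rate.
    This is NOT the BMOG algorithm (which maintains a full mixture of
    Gaussians per pixel); all parameter values are hypothetical."""

    def __init__(self, shape, t_low=2.5, t_high=4.0,
                 alpha_bg=0.01, alpha_fg=0.001, init_var=15.0 ** 2):
        self.mean = np.zeros(shape, dtype=np.float32)          # per-pixel mean
        self.var = np.full(shape, init_var, dtype=np.float32)  # per-pixel variance
        self.t_low, self.t_high = t_low, t_high                # hysteresis thresholds
        self.alpha_bg, self.alpha_fg = alpha_bg, alpha_fg      # dynamic learning rates
        self.prev_fg = np.zeros(shape[:2], dtype=bool)         # previous label map
        self.initialised = False

    def apply(self, frame):
        frame = frame.astype(np.float32)
        if not self.initialised:                # bootstrap the model with the first frame
            self.mean[...] = frame
            self.initialised = True
        # Normalised squared distance to the model, summed over colour channels.
        d2 = ((frame - self.mean) ** 2 / self.var).sum(axis=-1)
        # Hysteresis classification: a pixel currently labelled foreground only
        # returns to background when it falls below the LOW threshold, while a
        # background pixel must exceed the HIGH threshold to become foreground.
        fg = np.where(self.prev_fg, d2 > self.t_low ** 2, d2 > self.t_high ** 2)
        self.prev_fg = fg
        # Dynamic learning rate: adapt background pixels faster than pixels
        # suspected to be foreground, so moving objects are not absorbed.
        alpha = np.where(fg, self.alpha_fg, self.alpha_bg)[..., None]
        self.mean += alpha * (frame - self.mean)
        self.var += alpha * ((frame - self.mean) ** 2 - self.var)
        return (fg * 255).astype(np.uint8)      # binary foreground mask

# Hypothetical usage on a stream of (H, W, 3) frames:
#   model = HysteresisGaussianBG(shape=(480, 640, 3))
#   for frame in frames:
#       mask = model.apply(frame)

In the paper's method the same two ideas are applied on top of a full per-pixel mixture of Gaussians, which is what keeps the approach robust while the added complexity remains low.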

Keywords

GMM · MOG · Background subtraction · Change detection · Foreground segmentation · Background model

Notes

Acknowledgements

This work has received financial support from the Xunta de Galicia (Agrupación Estratéxica Consolidada de Galicia accreditation 2016–2019) and the European Union (European Regional Development Fund, ERDF) under research contract GRC2014/024 (Modalidade: Grupos de Referencia Competitiva 2014), and from the project "TEC4Growth: Pervasive Intelligence, Enhancers and Proofs of Concept with Industrial Impact/NORTE-01-0145-FEDER-000020", financed by the North Portugal Regional Operational Programme (NORTE 2020) under the PORTUGAL 2020 Partnership Agreement and through the European Regional Development Fund (ERDF). It was also partially supported by MOG CLOUD SETUP (N17561), likewise funded by the Norte Portugal Regional Operational Programme (NORTE 2020) under the PORTUGAL 2020 Partnership Agreement and through the ERDF.

References

  1. Balcilar M, Amasyali MF, Sonmez AC (2014) Moving object detection using lab2000hl color space with spatial and temporal smoothing. Appl Math Inf Sci 8(4):1755–1766
  2. Balcilar M, Karabiber F, Sonmez AC (2013) Performance analysis of lab2000hl color space for background subtraction. In: IEEE international symposium on innovations in intelligent systems and applications (INISTA), 2013, pp 1–6
  3. Benezeth Y, Jodoin PM, Emile B, Laurent H, Rosenberger C (2010) Comparative study of background subtraction algorithms. J Electron Imaging 19(3):033003. https://doi.org/10.1117/1.3456695
  4. Bouwmans T (2011) Recent advanced statistical background modeling for foreground detection: a systematic survey. Recent Pat Comput Sci 4(3):147–176
  5. Bouwmans T (2014) Traditional and recent approaches in background modeling for foreground detection: an overview. Comput Sci Rev 11:31–66. https://doi.org/10.1016/j.cosrev.2014.04.001
  6. Bouwmans T, Baf FE, Vachon B (2008) Background modeling using mixture of Gaussians for foreground detection: a survey. Recent Pat Comput Sci 1(3):219–237
  7. Bouwmans T, Silva C, Marghes C, Zitouni MS, Bhaskar H, Frelicot C (2016) On the role and the importance of features for background modeling and foreground detection. CoRR arXiv:1611.09099
  8. Bouwmans T, Zahzah EH (2014) Robust PCA via principal component pursuit: a review for a comparative evaluation in video surveillance. Comput Vis Image Underst 122:22–34. https://doi.org/10.1016/j.cviu.2013.11.009
  9. Brutzer S, Höferlin B, Heidemann G (2011) Evaluation of background subtraction techniques for video surveillance. In: IEEE conference on computer vision and pattern recognition (CVPR), 2011, pp 1937–1944
  10. Cardoso JS, Carvalho P, Teixeira LF, Corte-Real L (2009) Partition-distance methods for assessing spatial segmentations of images and videos. Comput Vis Image Underst 113(7):811–823. https://doi.org/10.1016/j.cviu.2009.02.001
  11. Cardoso JS, Corte-Real L (2005) Toward a generic evaluation of image segmentation. IEEE Trans Image Process 14(11):1773–1782. https://doi.org/10.1109/TIP.2005.854491
  12. ChangeDetection.NET (CDNET) (2017) http://www.changedetection.net. Accessed July 2017
  13. Cheng J, Yang J, Zhou Y, Cui Y (2006) Flexible background mixture models for foreground segmentation. Image Vis Comput 24(5):473–482. https://doi.org/10.1016/j.imavis.2006.01.018
  14. Cucchiara R, Grana C, Piccardi M, Prati A (2003) Detecting moving objects, ghosts, and shadows in video streams. IEEE Trans Pattern Anal Mach Intell 25(10):1337–1342. https://doi.org/10.1109/TPAMI.2003.1233909
  15. Dickinson P, Hunter A, Appiah K (2009) A spatially distributed model for foreground segmentation. Image Vis Comput 27(9):1326–1335
  16. Elgammal A, Duraiswami R, Harwood D, Davis L (2002) Background and foreground modeling using nonparametric kernel density for visual surveillance. Proc IEEE 90:1151–1163
  17. Elhabian SY, El-Sayed KM, Ahmed SH (2008) Moving object detection in spatial domain using background removal techniques: state-of-art. Recent Pat Comput Sci 1:32–54
  18. Goyette N, Jodoin PM, Porikli F, Konrad J, Ishwar P (2012) Changedetection.net: a new change detection benchmark dataset. In: 2012 IEEE computer society conference on computer vision and pattern recognition workshops, pp 1–8. https://doi.org/10.1109/CVPRW.2012.6238919
  19. Harville M, Gordon G, Woodfill J (2001) Foreground segmentation using adaptive mixture models in color and depth. In: Proceedings of the IEEE workshop on detection and recognition of events in video
  20. Jiang S, Lu X (2017) WeSamBE: a weight-sample-based method for background subtraction. IEEE Trans Circuits Syst Video Technol PP(99):1–1. https://doi.org/10.1109/TCSVT.2017.2711659
  21. KaewTraKulPong P, Bowden R (2001) An improved adaptive background mixture model for real-time tracking with shadow detection. In: 2nd European workshop on advanced video based surveillance systems (AVBS)
  22. Kristensen F, Nilsson P, Owall V (2006) Background segmentation beyond RGB. In: Narayanan PJ et al (eds) Proceedings of the 7th Asian conference on computer vision (ACCV 2006), vol 3852, pp 602–612. Springer, Berlin
  23. Lee DS (2005) Effective Gaussian mixture learning for video background subtraction. IEEE Trans Pattern Anal Mach Intell 27(5):827–832. https://doi.org/10.1109/TPAMI.2005.102
  24. Lissner I, Urban P (2012) Toward a unified color space for perception-based image processing. IEEE Trans Image Process 21(3):1153–1168
  25. Liu Y, Ai H, Xu G (2001) Moving object detection and tracking based on background subtraction. In: Proceedings of SPIE, vol 4554, pp 62–66. https://doi.org/10.1117/12.441618
  26. López-Rubio FJ, López-Rubio E (2015) Features for stochastic approximation based foreground detection. Comput Vis Image Underst 133:30–50. https://doi.org/10.1016/j.cviu.2014.12.007
  27. Mahalanobis PC (1936) On the generalised distance in statistics. Proc Natl Inst Sci India 2(1):49–55
  28. OpenCV (2017) Open source computer vision library: opencv.org. Accessed June 2017
  29. Sobral A, Vacavant A (2014) A comprehensive review of background subtraction algorithms evaluated with synthetic and real videos. Comput Vis Image Underst 122:4–21. https://doi.org/10.1016/j.cviu.2013.12.005
  30. St-Charles PL, Bilodeau GA, Bergevin R (2015) SuBSENSE: a universal change detection method with local adaptive sensitivity. IEEE Trans Image Process 24(1):359–373. https://doi.org/10.1109/TIP.2014.2378053
  31. Stauffer C, Grimson W (1999) Adaptive background mixture models for real-time tracking. In: IEEE computer society conference on computer vision and pattern recognition, 1999, vol 2, p 252
  32. SuBSENSE (2016) https://bitbucket.org/pierre_luc_st_charles/subsense. Accessed May 2016
  33. Varadarajan S, Miller P, Zhou H (2015) Region-based mixture of Gaussians modelling for foreground detection in dynamic scenes. Pattern Recognit 48(11):3488–3503. https://doi.org/10.1016/j.patcog.2015.04.016
  34. Wang H, Miller P (2011) Regularized online mixture of Gaussians for background subtraction. In: 2011 8th IEEE international conference on advanced video and signal-based surveillance (AVSS), pp 249–254. https://doi.org/10.1109/AVSS.2011.6027331
  35. Wang Y, Jodoin PM, Porikli F, Konrad J, Benezeth Y, Ishwar P (2014) CDnet 2014: an expanded change detection benchmark dataset. In: 2014 IEEE conference on computer vision and pattern recognition workshops, pp 393–400. https://doi.org/10.1109/CVPRW.2014.126
  36. Xu Y, Dong J, Zhang B, Xu D (2016) Background modeling methods in video analysis: a review and comparative evaluation. CAAI Trans Intell Technol 1(1):43–60. https://doi.org/10.1016/j.trit.2016.03.005
  37. Yang L, Cheng H, Su J, Li X (2016) Pixel-to-model distance for robust background reconstruction. IEEE Trans Circuits Syst Video Technol 26(5):903–916. https://doi.org/10.1109/TCSVT.2015.2424052
  38. Zivkovic Z (2004) Improved adaptive Gaussian mixture model for background subtraction. In: Proceedings of the 17th international conference on pattern recognition (ICPR 2004), vol 2, pp 28–31
  39. Zivkovic Z, van der Heijden F (2006) Efficient adaptive density estimation per image pixel for the task of background subtraction. Pattern Recognit Lett 27(7):773–780

Copyright information

© Springer-Verlag London Ltd., part of Springer Nature 2018

Authors and Affiliations

  1. ISEP, School of Engineering, Polytechnic Institute of Porto, Porto, Portugal
  2. Signal Theory and Communications Department, University of Vigo, 36310 Vigo, Spain
  3. INESC TEC, Porto, Portugal
  4. Faculty of Engineering, University of Porto, Porto, Portugal
