Memetic Computing, Volume 10, Issue 1, pp 53–61

Improved visual background extractor with adaptive range change

  • Shiyu Yang
  • Kuangrong Hao
  • Yongsheng Ding
  • Jian Liu
Regular Research Paper

Abstract

The visual background extractor (ViBe) has become one of the best motion object detection algorithms because of its good detection results and low memory requirements. However, the ViBe model cannot self-adjust the value range of the parameter that controls the number of samples chosen from the background template. In this paper, two models are proposed to help automatically change the parameter range in different environments. The blink energy model can detect dynamic backgrounds by increasing the range, while the object probability model can prevent corrosion of motion objects by decreasing the range. The experimental results show that our proposed method can both accurately recognize dynamic backgrounds and efficiently prevent object corrosion. In addition, our method shows better performance on benchmark datasets than several commonly used detection algorithms.
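For context, the per-pixel decision that the paper's two models tune can be sketched as follows. This is a minimal illustration of standard ViBe (Barnich and Van Droogenbroeck), not the adaptive method proposed here: `min_matches` stands for the parameter whose range the blink energy and object probability models adjust, and the values N=20, R=20, min_matches=2 are the commonly cited ViBe defaults, assumed for illustration only.

```python
import random

# Sketch of the core ViBe per-pixel test. Each pixel keeps N_SAMPLES past
# background values; it is classified as background when at least
# `min_matches` of those samples lie within RADIUS of the current value.
# `min_matches` is the parameter whose value range the paper adapts.
N_SAMPLES = 20   # background samples kept per pixel (common default)
RADIUS = 20      # intensity distance for a sample to count as a match

def is_background(pixel, samples, min_matches=2):
    """Return True if `pixel` matches at least `min_matches` samples."""
    matches = 0
    for s in samples:
        if abs(pixel - s) < RADIUS:
            matches += 1
            if matches >= min_matches:
                return True
    return False

def update_model(pixel, samples, subsampling=16):
    """Conservative random update: with probability 1/subsampling,
    overwrite one randomly chosen sample with the current value."""
    if random.randrange(subsampling) == 0:
        samples[random.randrange(len(samples))] = pixel
```

Raising `min_matches` makes the test stricter (fewer pixels accepted as background, which helps in dynamic scenes), while lowering it keeps moving objects from being eaten into by the background model; the paper's contribution is deciding that range automatically per scene.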


Keywords: Object detection · ViBe · Self-adjust · Blink energy · Object probability



This work was supported in part by the Key Project of the National Natural Science Foundation of China (No. 61134009), the National Natural Science Foundation of China (Nos. 61473077, 61473078, 61503075, 61603090), Cooperative research funds of the National Natural Science Funds Overseas and Hong Kong and Macao scholars (No. 61428302), National Key Research and Development Plan from Ministry of Science and Technology (2016YFB0302700), Program for Changjiang Scholars from the Ministry of Education, International Collaborative Project of the Shanghai Committee of Science and Technology (No. 16510711100), Innovation Program of Shanghai Municipal Education Commission (No. 14ZZ067), Shanghai Science and Technology Promotion Project from Shanghai Municipal Agriculture Commission (No. 2016-1-5-12), Shanghai Pujiang Program (No. 15PJ1400100), and the Fundamental Research Funds for the Central Universities (No. 2232015D3-32).

Copyright information

© Springer-Verlag Berlin Heidelberg 2017

Authors and Affiliations

  • Shiyu Yang (1)
  • Kuangrong Hao (1)
  • Yongsheng Ding (1)
  • Jian Liu (1)

  1. Engineering Research Center of Digitized Textile and Apparel Technology at Ministry of Education, College of Information Sciences and Technology, Donghua University, Shanghai, People’s Republic of China