Computational Visual Media, Volume 3, Issue 3, pp. 273–284

Practical automatic background substitution for live video

  • Haozhi Huang
  • Xiaonan Fang
  • Yufei Ye
  • Songhai Zhang
  • Paul L. Rosin
Open Access
Research Article

Abstract

In this paper we present a novel automatic background substitution approach for live video. The objective of background substitution is to extract the foreground from the input video and combine it with a new background. We use a color line model to improve the Gaussian mixture model in the background cut method, yielding a binary foreground segmentation that is less sensitive to brightness differences. From this high-quality binary segmentation, we automatically create a reliable trimap for alpha matting, which refines the segmentation boundary. To make the composite more realistic, an automatic foreground color adjustment step makes the foreground appear consistent with the new background. Compared to previous approaches, our method produces higher quality binary segmentation results, and to the best of our knowledge it is the first automatic, integrated background substitution system that runs in real time, making it practical for everyday applications.
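
To make the pipeline concrete, the sketch below mirrors the three stages the abstract describes: binary foreground segmentation, trimap-based boundary refinement, and composition over the new background. It is an illustrative approximation rather than the authors' implementation: OpenCV's MOG2 Gaussian mixture subtractor stands in for the color-line-augmented background cut model, a simple blur of the trimap stands in for real-time alpha matting, the foreground color adjustment step is omitted, and the file name and parameters are hypothetical.

    import cv2
    import numpy as np

    def make_trimap(binary_fg, band=5):
        # Erode the foreground and the background to leave a thin "unknown"
        # strip along the segmentation boundary for matting.
        kernel = np.ones((band, band), np.uint8)
        sure_fg = cv2.erode(binary_fg, kernel)
        sure_bg = cv2.erode(255 - binary_fg, kernel)
        trimap = np.full_like(binary_fg, 128)   # unknown region
        trimap[sure_fg > 0] = 255               # definite foreground
        trimap[sure_bg > 0] = 0                 # definite background
        return trimap

    def substitute_background(frame, new_bg, subtractor):
        # Stage 1: binary segmentation (MOG2 stands in for the paper's model).
        mask = subtractor.apply(frame)
        _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)  # drop shadows
        mask = cv2.medianBlur(mask, 5)
        # Stage 2: trimap plus a crude soft alpha (placeholder for alpha matting).
        alpha = make_trimap(mask).astype(np.float32) / 255.0
        alpha = cv2.GaussianBlur(alpha, (7, 7), 0)[:, :, None]
        # Stage 3: composite I = alpha * F + (1 - alpha) * B.
        new_bg = cv2.resize(new_bg, (frame.shape[1], frame.shape[0]))
        return (alpha * frame + (1.0 - alpha) * new_bg).astype(np.uint8)

    if __name__ == "__main__":
        cap = cv2.VideoCapture(0)                         # live webcam input
        background = cv2.imread("new_background.jpg")     # hypothetical file name
        subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            out = substitute_background(frame, background, subtractor)
            cv2.imshow("background substitution", out)
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break
        cap.release()
        cv2.destroyAllWindows()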

Keywords

background substitution; background replacement; background subtraction; alpha matting

Acknowledgements

We thank the reviewers for their valuable comments. This work was supported by the National High-Tech R&D Program of China (Project No. 2012AA011903), the National Natural Science Foundation of China (Project No. 61373069), the Research Grant of Beijing Higher Institution Engineering Research Center, and Tsinghua–Tencent Joint Laboratory for Internet Innovation Technology.

Supplementary material

41095_2016_74_MOESM1_ESM.mp4 (10.4 MB)
Supplementary material, approximately 10.3 MB.

Copyright information

© The Author(s) 2016

Open Access The articles published in this journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors and Affiliations

  • Haozhi Huang (1)
  • Xiaonan Fang (1)
  • Yufei Ye (1)
  • Songhai Zhang (1)
  • Paul L. Rosin (2)

  1. Department of Computer Science, Tsinghua University, Beijing, China
  2. School of Computer Science and Informatics, Cardiff University, Cardiff, UK
