Abstract
Extracting temporally coherent alpha mattes from video is an important but challenging problem in post-production. Previous video matting systems are highly sensitive to initial conditions and image noise, and thus cannot reliably produce stable alpha mattes free of temporal jitter. In this paper we propose an improved video matting system with two new components: (1) an accurate trimap propagation mechanism that sets up the initial matting conditions in a temporally coherent way; and (2) a temporal matte filter that improves the temporal coherence of the mattes while preserving the matte structure on individual frames. Experimental results show that, compared with previous methods, the two new components yield alpha mattes with better temporal coherence.
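The idea behind a temporal matte filter can be illustrated with a minimal sketch. This is not the paper's actual filter; it is a simple, hypothetical structure-preserving temporal average in which neighbouring frames contribute only where their alpha values are similar, so hard matte edges are not washed out while frame-to-frame jitter is suppressed. The function name, window `radius`, and similarity scale `sigma` are illustrative assumptions.

```python
import numpy as np

def temporal_alpha_filter(alphas, radius=2, sigma=0.1):
    """Sketch of a structure-preserving temporal smoother for alpha mattes.

    alphas : list of HxW float arrays in [0, 1], one per frame.
    radius : temporal window half-width (frames).
    sigma  : alpha-similarity scale; smaller values preserve edges more.
    """
    alphas = [np.asarray(a, dtype=np.float64) for a in alphas]
    out = []
    for t, a_t in enumerate(alphas):
        acc = np.zeros_like(a_t)
        wsum = np.zeros_like(a_t)
        for dt in range(-radius, radius + 1):
            s = t + dt
            if 0 <= s < len(alphas):
                a_s = alphas[s]
                # Similarity weight: frames whose alpha differs strongly at a
                # pixel (e.g. across a matte edge) contribute little there.
                w = np.exp(-((a_s - a_t) ** 2) / (2.0 * sigma ** 2))
                acc += w * a_s
                wsum += w
        out.append(acc / wsum)
    return out
```

A real system would additionally motion-compensate the neighbouring frames before averaging, since a static temporal window blurs moving boundaries; the sketch above only conveys the edge-aware weighting idea.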
Copyright information
© 2011 Springer-Verlag Berlin Heidelberg
Cite this paper
Bai, X., Wang, J., Simons, D. (2011). Towards Temporally-Coherent Video Matting. In: Gagalowicz, A., Philips, W. (eds) Computer Vision/Computer Graphics Collaboration Techniques. MIRAGE 2011. Lecture Notes in Computer Science, vol 6930. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-24136-9_6
DOI: https://doi.org/10.1007/978-3-642-24136-9_6
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-24135-2
Online ISBN: 978-3-642-24136-9