Dynamic Color Flow: A Motion-Adaptive Color Model for Object Segmentation in Video

  • Xue Bai
  • Jue Wang
  • Guillermo Sapiro
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6315)

Abstract

Accurately modeling object colors, and features in general, plays a critical role in video segmentation and analysis. Commonly used color models, such as global Gaussian mixtures, localized Gaussian mixtures, and pixel-wise adaptive ones, often fail to accurately represent the object appearance in complicated scenes, thereby leading to segmentation errors. We introduce a new color model, Dynamic Color Flow, which, unlike previous approaches, incorporates motion estimation into color modeling in a probabilistic framework and adaptively changes model parameters to match the local properties of the motion. The proposed model accurately and reliably describes changes in the scene's appearance caused by motion across frames. We show how to apply this color model to both the foreground and background layers in a balanced way for efficient object segmentation in video. Experimental results show that, compared with previous approaches, our model provides more accurate foreground and background estimations, leading to more efficient video object cutout systems.
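The core idea the abstract describes, propagating colors along estimated motion and letting the color model's spatial support adapt to how reliable that motion is, can be illustrated with a toy sketch. This is not the authors' implementation; the function name, parameters (e.g. `base_sigma_c`), and the simple Gaussian kernels are illustrative assumptions, and a real system would use a proper optical flow estimator and per-layer models.

```python
import numpy as np

def dynamic_color_flow_probability(prev_frame, flow, flow_uncertainty,
                                   pixel, color, base_sigma_c=0.05):
    """Toy sketch: score how well `color`, observed at `pixel` in the next
    frame, agrees with colors propagated from `prev_frame` along `flow`.

    prev_frame:       (H, W, 3) float array, colors in [0, 1]
    flow:             (H, W, 2) forward flow (dx, dy) per pixel
    flow_uncertainty: (H, W) estimated std-dev of the flow, in pixels
    pixel:            (x, y) location in the next frame
    color:            (3,) observed color at `pixel`
    """
    H, W, _ = prev_frame.shape
    px, py = pixel

    # Predict where every previous-frame pixel lands in the next frame.
    ys, xs = np.mgrid[0:H, 0:W]
    dest_x = xs + flow[..., 0]
    dest_y = ys + flow[..., 1]

    # Motion-adaptive spatial kernel: unreliable flow spreads a pixel's
    # color contribution over a larger neighborhood.
    sigma_s = 1.0 + flow_uncertainty          # (H, W), in pixels

    d2 = (dest_x - px) ** 2 + (dest_y - py) ** 2
    w_spatial = np.exp(-d2 / (2.0 * sigma_s ** 2))

    # Color kernel: agreement between propagated and observed colors.
    c2 = np.sum((prev_frame - color) ** 2, axis=-1)
    w_color = np.exp(-c2 / (2.0 * base_sigma_c ** 2))

    # Normalized kernel-density-style probability at the query pixel.
    weights = w_spatial * w_color
    return float(weights.sum() / (w_spatial.sum() + 1e-12))
```

With zero flow uncertainty this collapses to a tight per-pixel color check; as uncertainty grows, the model smoothly approaches a wider, region-level color distribution, which is the adaptive behavior the abstract refers to.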

Keywords

Motion estimation · Color model · Object segmentation · Video object · Video segmentation
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.


Copyright information

© Springer-Verlag Berlin Heidelberg 2010

Authors and Affiliations

  • Xue Bai, University of Minnesota, Minneapolis, USA
  • Jue Wang, Adobe Systems, Seattle, USA
  • Guillermo Sapiro, University of Minnesota, Minneapolis, USA