
A Generative Method for Textured Motion: Analysis and Synthesis

  • Yizhou Wang
  • Song-Chun Zhu
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2350)

Abstract

Natural scenes contain rich stochastic motion patterns characterized by the movement of a large number of small elements, such as falling snow, rain, flying birds, fireworks, and waterfalls. In this paper, we call these motion patterns textured motion and present a generative method that combines statistical models and algorithms from both texture and motion analysis. The generative method covers the following three aspects. 1). Photometrically, an image is represented as a superposition of linear bases in an atomic decomposition using an over-complete dictionary, such as Gabor or Laplacian bases. Such a base representation is known to be generic for natural images, and it is low dimensional, as the number of bases is often about 100 times smaller than the number of pixels. 2). Geometrically, each moving element (called a moveton), such as an individual snowflake or bird, is represented by a deformable template consisting of a group of spatially adjacent bases. These templates are learned through clustering. 3). Dynamically, the movetons are tracked through the image sequence by a stochastic algorithm that maximizes the posterior probability. A classic second-order Markov chain model is adopted for the motion dynamics. The sources and sinks of the movetons are modeled by birth and death maps. We adopt an EM-like stochastic gradient algorithm to infer the hidden variables: the bases, the movetons, the birth/death maps, and the parameters of the dynamics. The learned models are also verified by synthesizing random textured motion sequences that are visually similar to the observed sequences.
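To make the dynamics component concrete, the sketch below fits and samples a second-order autoregressive Markov chain for a single moveton trajectory, in the spirit of aspect 3 above. This is a minimal sketch, not the authors' implementation: the function names (fit_ar2, synthesize), the least-squares fitting step, and the toy trajectory are assumptions introduced here for illustration; the paper's actual inference uses the EM-like stochastic gradient algorithm described in the abstract.

    # Minimal sketch (assumed, not from the paper): a second-order
    # Markov chain x_t = A1 x_{t-1} + A2 x_{t-2} + b + noise for one
    # moveton's image position, fit by least squares and then sampled.
    import numpy as np

    def fit_ar2(track):
        """Fit x_t = A1 x_{t-1} + A2 x_{t-2} + b + noise.
        `track` is a (T, d) array of moveton positions (d = 2 for image coords)."""
        T, d = track.shape
        X = np.hstack([track[1:T-1], track[0:T-2], np.ones((T-2, 1))])  # regressors
        Y = track[2:T]                                                  # targets x_t
        W, *_ = np.linalg.lstsq(X, Y, rcond=None)
        A1, A2, b = W[:d].T, W[d:2*d].T, W[2*d]
        sigma = (Y - X @ W).std(axis=0)                                 # residual noise scale
        return A1, A2, b, sigma

    def synthesize(A1, A2, b, sigma, x0, x1, steps, rng=np.random.default_rng(0)):
        """Roll the fitted chain forward to generate a new random trajectory."""
        xs = [np.asarray(x0, float), np.asarray(x1, float)]
        for _ in range(steps):
            xs.append(A1 @ xs[-1] + A2 @ xs[-2] + b + rng.normal(0.0, sigma))
        return np.stack(xs)

    # Toy usage: fit to a noisy drifting "snowflake" track, then synthesize a new one.
    t = np.arange(60, dtype=float)
    obs = np.stack([2.0 * t + np.random.randn(60), 0.05 * t**2], axis=1)
    params = fit_ar2(obs)
    new_track = synthesize(*params, x0=obs[0], x1=obs[1], steps=58)

The second-order (rather than first-order) dependence lets the chain carry velocity information, which is why such models can reproduce smooth, inertia-like motion of falling or flying elements.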

Keywords

Motion dynamics · Markov chain model · Atomic decomposition · Texture synthesis · Texture modeling


Copyright information

© Springer-Verlag Berlin Heidelberg 2002

Authors and Affiliations

  • Yizhou Wang (1)
  • Song-Chun Zhu (1)
  1. Dept. of Computer and Information Science, Ohio State University, Columbus, USA
