Abstract
Natural scenes contain rich stochastic motion patterns characterized by the movement of a large number of small elements, such as falling snow, rain, flying birds, fireworks, and waterfalls. In this paper, we call these motion patterns textured motion and present a generative method that combines statistical models and algorithms from both texture and motion analysis. The generative method has three aspects. (1) Photometrically, an image is represented as a superposition of linear bases in an atomic decomposition over an over-complete dictionary, such as Gabor or Laplacian bases. This base representation is known to be generic for natural images, and it is low-dimensional, as the number of bases is often 100 times smaller than the number of pixels. (2) Geometrically, each moving element (called a moveton), such as an individual snowflake or bird, is represented by a deformable template, i.e., a group of spatially adjacent bases. These templates are learned through clustering. (3) Dynamically, the movetons are tracked through the image sequence by a stochastic algorithm that maximizes a posterior probability. A classic second-order Markov chain model is adopted for the motion dynamics, and the sources and sinks of the movetons are modeled by birth and death maps. We adopt an EM-like stochastic gradient algorithm to infer the hidden variables: the bases, movetons, birth/death maps, and the parameters of the dynamics. The learned models are verified by synthesizing random textured motion sequences that bear a visual appearance similar to the observed sequences.
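To illustrate the atomic decomposition step, here is a minimal matching-pursuit sketch (in the style of Mallat and Zhang) on a toy 1-D signal with a random over-complete dictionary. The paper uses Gabor/Laplacian bases on images; the random dictionary, signal size, and sparsity level below are placeholder assumptions, not the paper's setup:

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_bases):
    """Greedy atomic decomposition: at each step, pick the dictionary
    atom most correlated with the current residual and subtract its
    projection. `dictionary` has unit-norm columns (atoms)."""
    residual = signal.astype(float).copy()
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(n_bases):
        correlations = dictionary.T @ residual
        k = np.argmax(np.abs(correlations))
        coeffs[k] += correlations[k]
        residual -= correlations[k] * dictionary[:, k]
    return coeffs, residual

# Toy over-complete dictionary: 64-sample signal, 256 unit-norm random atoms,
# so the code is low-dimensional relative to the signal if few atoms are used.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)
x = 2.0 * D[:, 10] - 1.5 * D[:, 99]   # sparse ground-truth signal
coeffs, res = matching_pursuit(x, D, n_bases=8)
print(np.linalg.norm(res) / np.linalg.norm(x))  # small: most energy captured
```

A handful of greedy steps captures most of the signal energy, which is the sense in which the base representation is far lower-dimensional than the pixel representation.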
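The second-order Markov chain dynamics can be sketched as a linear AR(2) model on a moveton's state. The 2-D position state, coefficients, and noise level below are illustrative stand-ins, not the paper's fitted parameters; the least-squares fit stands in for the parameter-estimation part of the EM-like algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 2-D moveton trajectory driven by a second-order Markov model:
#   p_t = a1 * p_{t-1} + a2 * p_{t-2} + noise
a1, a2 = 1.8, -0.81            # stable oscillatory dynamics (double root at 0.9)
T = 500
p = np.zeros((T, 2))
p[0] = rng.standard_normal(2)
p[1] = rng.standard_normal(2)
for t in range(2, T):
    p[t] = a1 * p[t - 1] + a2 * p[t - 2] + 0.05 * rng.standard_normal(2)

# Recover the AR(2) coefficients from the observed trajectory by least
# squares, pooling both coordinates into a single regression.
A = np.vstack([np.column_stack([p[1:-1, d], p[:-2, d]]) for d in range(2)])
y = np.concatenate([p[2:, d] for d in range(2)])
est, *_ = np.linalg.lstsq(A, y, rcond=None)
print(est)  # roughly [1.8, -0.81]
```

Once such dynamics are estimated per moveton, new trajectories can be simulated forward from the fitted model, which is the basis of the synthesis-by-sampling verification described in the abstract.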
© 2002 Springer-Verlag Berlin Heidelberg
Wang, Y., Zhu, SC. (2002). A Generative Method for Textured Motion: Analysis and Synthesis. In: Heyden, A., Sparr, G., Nielsen, M., Johansen, P. (eds) Computer Vision — ECCV 2002. ECCV 2002. Lecture Notes in Computer Science, vol 2350. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-47969-4_39
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-43745-1
Online ISBN: 978-3-540-47969-7