Tracking through Optical Snow

  • Michael S. Langer
  • Richard Mann
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2525)


Optical snow is a natural type of image motion that results when the observer moves laterally relative to a cluttered 3D scene. An example is an observer moving past a bush or through a forest, or a stationary observer viewing falling snow. Optical snow is unlike standard motion models in computer vision, such as optical flow or layered motion, since those models rest on spatial continuity assumptions. For optical snow, spatial continuity cannot be assumed because the motion is characterized by dense depth discontinuities. In previous work, we considered the special case of parallel optical snow. Here we generalize that model to allow for non-parallel optical snow. The new model describes a situation in which a laterally moving observer tracks an isolated moving object in an otherwise static 3D cluttered scene. We argue that, despite the complexity of the motion, sufficient constraints remain to allow such an observer to navigate through the scene while tracking a moving object.
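The abstract's claim can be illustrated with a minimal sketch (not the authors' implementation; all numerical values here are illustrative assumptions). Under the standard pinhole motion-field model, a purely lateral observer translation Tx gives each scene point an image velocity vx = -f·Tx/Z. In a cluttered scene the depths Z of neighbouring points are densely mixed, so the image contains a one-parameter family of parallel velocities with a wide spread of speeds, rather than a spatially smooth flow field:

```python
import numpy as np

rng = np.random.default_rng(0)
f = 1.0          # focal length (arbitrary units, assumed)
Tx = 0.5         # lateral observer translation (assumed)
Z = rng.uniform(1.0, 10.0, size=1000)   # cluttered scene: random depths

vx = -f * Tx / Z          # horizontal image velocity of each point
vy = np.zeros_like(vx)    # pure lateral translation: no vertical component

# All velocities share one direction ("parallel optical snow")...
assert np.all(vy == 0)
# ...but their speeds, proportional to inverse depth, span an order of
# magnitude, so adjacent pixels at different depths move very differently.
print(np.abs(vx).min(), np.abs(vx).max())   # roughly 0.05 .. 0.5
```

The non-parallel case studied in this paper arises when the observer also rotates to track a moving object: the tracking rotation adds a common velocity offset, so the image velocities are no longer parallel, yet they still form a constrained one-parameter family.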







Copyright information

© Springer-Verlag Berlin Heidelberg 2002

Authors and Affiliations

  • Michael S. Langer, School of Computer Science, McGill University, Montreal, Canada
  • Richard Mann, School of Computer Science, University of Waterloo, Waterloo, Canada
