Tracking through Optical Snow
Optical snow is a natural type of image motion that results when the observer moves laterally relative to a cluttered 3D scene. Examples include an observer moving past a bush or through a forest, or a stationary observer viewing falling snow. Optical snow is unlike standard motion models in computer vision, such as optical flow or layered motion, since those models rest on spatial continuity assumptions. For optical snow, spatial continuity cannot be assumed because the motion is characterized by dense depth discontinuities. In previous work, we considered the special case of parallel optical snow. Here we generalize that model to allow for non-parallel optical snow. The new model describes a situation in which a laterally moving observer tracks an isolated moving object in an otherwise static, cluttered 3D scene. We argue that, despite the complexity of the motion, sufficient constraints remain to allow such an observer to navigate through the scene while tracking a moving object.
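As a rough sketch of the parallel special case mentioned above (the notation here is an assumption for illustration, following the authors' earlier frequency-domain formulation of optical snow, not necessarily the paper's own symbols): in parallel optical snow, all points move in a common image direction but with depth-dependent speeds, which confines the image power spectrum to a one-parameter family of planes.

```latex
% Parallel optical snow (illustrative notation, not the paper's):
% every image point has velocity alpha * (u_T, v_T), where the
% direction (u_T, v_T) is shared and the speed alpha varies with depth.
%
% A translating pattern with velocity (u, v) concentrates its power
% spectrum on the plane  u f_x + v f_y + f_t = 0  in frequency space.
% Substituting (u, v) = alpha (u_T, v_T) gives the family of planes
\[
  \alpha \,(u_T f_x + v_T f_y) + f_t = 0 , \qquad \alpha \in [\alpha_{\min}, \alpha_{\max}],
\]
% all of which contain the common line
% \{ u_T f_x + v_T f_y = 0,\; f_t = 0 \} through the origin,
% producing a "bowtie" signature in the frequency domain.
```

The non-parallel generalization treated in this paper relaxes the assumption of a single shared direction \((u_T, v_T)\), as happens when the observer tracks an independently moving object while translating.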
Keywords: Motion Plane · Surface Patch · Image Motion · Image Velocity · Spatial Continuity