Real-Time, Parallel Motion Tracking of Three Dimensional Objects From Spatiotemporal Sequences

  • Chapter
Parallel Algorithms for Machine Intelligence and Vision

Part of the book series: Symbolic Computation

Abstract

A major issue in computer vision is the interpretation of three-dimensional (3D) motion of moving objects from a continuous stream of two-dimensional (2D) images. In this paper we consider the problem where the 3D motion of an object corresponding to a known 3D model is to be tracked using only the motion of 2D features in the stream of images. Two general solution paradigms for this problem are characterized: (1) motion-searching, which hypothesizes and tests 3D motion parameters, and (2) motion-calculating, which uses back-projection to directly estimate 3D motion from image-feature motion. Two new algorithms for computing 3D motion based on these two paradigms are presented. One of the major novel aspects of both algorithms is their use of the assumption that the input image stream is spatiotemporally dense. This constraint is psychologically plausible since it is also used by the short-range motion processes in the human visual system.
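To make the contrast concrete, the sketch below is a hypothetical Python analogue of the motion-searching paradigm: because the image stream is spatiotemporally dense, the inter-frame motion is small, so a coarse grid of small rotation and translation hypotheses around the current pose can be tested by projecting the known 3D model and scoring the fit against the observed 2D features. All names, the perturbation grid, and the unit-focal-length camera are illustrative assumptions, not the chapter's implementation.

```python
# Hypothetical sketch of the "motion-searching" paradigm: hypothesize small
# inter-frame motions (the dense sampling keeps them small) and keep the one
# whose projected model features best match the observed 2D features.
import itertools
import numpy as np

def project(points_3d, pose):
    """Apply a rigid pose (3x3 R, 3-vector t) and a unit-focal-length perspective projection."""
    R, t = pose
    cam = points_3d @ R.T + t                  # model -> camera coordinates
    return cam[:, :2] / cam[:, 2:3]            # perspective divide

def rot_xyz(rx, ry, rz):
    """Rotation about x, then y, then z."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    return (np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]]) @
            np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]]) @
            np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]]))

def search_motion(model_pts, pose, observed_2d, dr=0.01, dt=0.02):
    """Test a coarse grid of small motion hypotheses around the current pose."""
    R0, t0 = pose
    best, best_err = pose, np.inf
    for rx, ry, rz in itertools.product((-dr, 0.0, dr), repeat=3):
        for dx, dy, dz in itertools.product((-dt, 0.0, dt), repeat=3):
            R = rot_xyz(rx, ry, rz) @ R0
            t = t0 + np.array([dx, dy, dz])
            err = np.sum((project(model_pts, (R, t)) - observed_2d) ** 2)
            if err < best_err:
                best, best_err = (R, t), err
    return best, best_err

# Example: recover a small motion of a unit cube's corners between two frames.
cube = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)], dtype=float)
true_pose = (rot_xyz(0.01, 0.0, -0.01), np.array([0.0, 0.0, 5.0]))
observed = project(cube, true_pose)
est_pose, err = search_motion(cube, (np.eye(3), np.array([0.0, 0.0, 5.0])), observed)
```

A motion-calculating algorithm avoids this explicit search by solving for the motion parameters directly from the feature displacements, in the spirit of the Newton-style sketch given after the acknowledgments below.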

Processing a temporally unbounded spatiotemporal image, combined with the resource constraint of finite image-buffer memory, requires real-time throughput rates for our algorithms. Consequently, another major focus of this paper is the development of real-time, parallel implementations that achieve the required throughput. Implementations of both algorithms are described using an Aspex Pipe for low-level image-feature computations and a Sequent Symmetry for high-level, model-based computations. The Pipe, a pipelined image processor, is tightly coupled with the Sequent, and semaphores are used for synchronization between the two. Design and parallel-implementation issues of both algorithms are discussed in detail.
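As a rough illustration of that coupling, the following Python sketch mimics the producer/consumer structure with threads and semaphores over a small ring of frame buffers. It is only an analogue of the synchronization scheme; the buffer count, placeholder feature extraction, and all names are assumptions, not the actual Pipe/Sequent code.

```python
# Hypothetical analogue of the low-level / high-level coupling: one thread
# stands in for the pipelined image processor producing per-frame features,
# another for the model-based tracker consuming them, with counting semaphores
# guarding a small ring of buffers (the finite image-buffer memory).
import threading

N_BUFFERS = 4                                  # finite image-buffer memory
buffers = [None] * N_BUFFERS
empty = threading.Semaphore(N_BUFFERS)         # slots free for the producer
full = threading.Semaphore(0)                  # frames ready for the consumer

def low_level_stage(n_frames):
    for frame in range(n_frames):
        features = {"frame": frame, "edges": []}   # placeholder feature extraction
        empty.acquire()                            # block if all buffers are in use
        buffers[frame % N_BUFFERS] = features
        full.release()                             # signal a frame is ready

def high_level_stage(n_frames):
    for frame in range(n_frames):
        full.acquire()                             # block until the next frame's features arrive
        features = buffers[frame % N_BUFFERS]
        # ... update the 3D pose estimate from `features` here ...
        empty.release()                            # return the slot to the producer

producer = threading.Thread(target=low_level_stage, args=(10,))
consumer = threading.Thread(target=high_level_stage, args=(10,))
producer.start(); consumer.start()
producer.join(); consumer.join()
```

The two counting semaphores enforce both constraints named above: the producer blocks when the finite buffer memory is full, and the consumer blocks until the low-level results for the next frame are ready.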

The support of NSF under Grant Nos. IRI-8802436 and CCR-8612407, and NASA under Grant No. NAGW-975 is gratefully acknowledged. Code for the Newton-Raphson convergence procedure was provided by David Lowe.
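For readers unfamiliar with the kind of Newton-Raphson convergence procedure mentioned here, the sketch below shows a generic Gauss-Newton-style pose refinement that linearizes the model-to-image projection error and solves for a parameter correction at each iteration, which is the flavor of update a motion-calculating (back-projection) algorithm relies on. The six-parameter pose representation, the finite-difference Jacobian, and all names are illustrative assumptions rather than Lowe's code.

```python
# Hypothetical Gauss-Newton pose refinement: iterate corrections to the pose
# parameters that reduce the 2D reprojection error of the known 3D model.
import numpy as np

def rot_xyz(rx, ry, rz):
    """Rotation about x, then y, then z (same convention as the earlier sketch)."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    return (np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]]) @
            np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]]) @
            np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]]))

def residuals(params, model_pts, observed_2d):
    """Stacked 2D reprojection errors for pose params (rx, ry, rz, tx, ty, tz)."""
    rx, ry, rz, tx, ty, tz = params
    cam = model_pts @ rot_xyz(rx, ry, rz).T + np.array([tx, ty, tz])
    proj = cam[:, :2] / cam[:, 2:3]
    return (proj - observed_2d).ravel()

def refine_pose(params, model_pts, observed_2d, iters=10, eps=1e-6):
    params = np.asarray(params, dtype=float)
    for _ in range(iters):
        r = residuals(params, model_pts, observed_2d)
        # Finite-difference Jacobian of the residuals w.r.t. the six parameters.
        J = np.empty((r.size, 6))
        for j in range(6):
            dp = np.zeros(6); dp[j] = eps
            J[:, j] = (residuals(params + dp, model_pts, observed_2d) - r) / eps
        # Gauss-Newton step: least-squares solution of the linearized system.
        delta, *_ = np.linalg.lstsq(J, -r, rcond=None)
        params = params + delta
        if np.linalg.norm(delta) < 1e-8:
            break
    return params
```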

Copyright information

© 1990 Springer-Verlag New York Inc.

About this chapter

Cite this chapter

Verghese, G., Gale, K.L., Dyer, C.R. (1990). Real-Time, Parallel Motion Tracking of Three Dimensional Objects From Spatiotemporal Sequences. In: Kumar, V., Gopalakrishnan, P.S., Kanal, L.N. (eds) Parallel Algorithms for Machine Intelligence and Vision. Symbolic Computation. Springer, New York, NY. https://doi.org/10.1007/978-1-4612-3390-9_9

  • DOI: https://doi.org/10.1007/978-1-4612-3390-9_9

  • Publisher Name: Springer, New York, NY

  • Print ISBN: 978-1-4612-7994-5

  • Online ISBN: 978-1-4612-3390-9

  • eBook Packages: Springer Book Archive
