X Vision: Combining image warping and geometric constraints for fast visual tracking

  • Gregory D. Hager
  • Kentaro Toyama
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1065)

Abstract

In this article, we describe X Vision, a modular, portable framework for visual tracking. X Vision is designed to be a programming environment for real-time vision that provides high performance on standard workstations outfitted with a simple digitizer. It consists of a small set of image-level tracking primitives and a framework for combining these primitives into complex tracking systems. Efficiency and robustness are achieved by propagating geometric and temporal constraints down to the feature detection level, where image warping and specialized image processing are combined to perform feature detection quickly and robustly. We illustrate how useful, robust tracking systems can be constructed from simple combinations of a few basic primitives together with the appropriate task-specific constraints.
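The constraint-propagation scheme the abstract describes can be sketched in a few lines: cheap image-level primitives whose detection is simplified by warping, composed into higher-level features that push geometric constraints back down. The following C++ sketch is purely illustrative and hypothetical; the EdgeState, EdgeTracker, and CornerTracker names and the stubbed detection routine are our own, not the authors' API. A corner is tracked as the intersection of two edge trackers, and the computed intersection is fed back to constrain each primitive's state.

    // Hypothetical sketch (not the X Vision API) of the paper's idea:
    // image-level primitives plus composite features that propagate
    // geometric constraints back down to feature detection.
    #include <cmath>
    #include <iostream>

    const double kPi = 3.14159265358979323846;

    // State of a tracked edge segment: position and orientation in the image.
    struct EdgeState {
        double x, y;      // window center (pixels)
        double theta;     // edge orientation (radians)
    };

    // A basic tracking primitive. Each cycle it (1) warps a small window to a
    // canonical frame using the predicted state, (2) runs cheap 1-D edge
    // detection there, and (3) maps the measured offset back to the image.
    class EdgeTracker {
    public:
        explicit EdgeTracker(EdgeState s) : state_(s) {}

        void update(/* const Image& frame */) {
            // After warping, detection reduces to a 1-D search for a step
            // edge perpendicular to the segment; stubbed here.
            double offset = detectOffsetInWarpedWindow();
            state_.x += offset * std::cos(state_.theta + kPi / 2.0);
            state_.y += offset * std::sin(state_.theta + kPi / 2.0);
        }

        EdgeState state() const { return state_; }
        void constrain(const EdgeState& s) { state_ = s; }  // top-down correction

    private:
        double detectOffsetInWarpedWindow() { return 0.0; }  // placeholder
        EdgeState state_;
    };

    // A composite feature: a corner tracked as the intersection of two edge
    // trackers. After each low-level update it intersects the two lines and
    // feeds the constrained states back to the primitives.
    class CornerTracker {
    public:
        CornerTracker(EdgeTracker a, EdgeTracker b) : a_(a), b_(b) {}

        void update() {
            a_.update();
            b_.update();
            double ix, iy;
            if (intersect(a_.state(), b_.state(), ix, iy)) {
                a_.constrain({ix, iy, a_.state().theta});
                b_.constrain({ix, iy, b_.state().theta});
            }
        }

    private:
        // Lines through (x, y) with direction theta; solve for intersection.
        static bool intersect(const EdgeState& p, const EdgeState& q,
                              double& ix, double& iy) {
            double d = std::sin(q.theta - p.theta);
            if (std::fabs(d) < 1e-9) return false;  // near-parallel: no constraint
            double t = ((q.x - p.x) * std::sin(q.theta) -
                        (q.y - p.y) * std::cos(q.theta)) / d;
            ix = p.x + t * std::cos(p.theta);
            iy = p.y + t * std::sin(p.theta);
            return true;
        }

        EdgeTracker a_, b_;
    };

    int main() {
        // Two roughly perpendicular edges forming a corner near (100, 100).
        CornerTracker corner(EdgeTracker({100, 100, 0.0}),
                             EdgeTracker({100, 100, kPi / 2}));
        corner.update();  // one tracking cycle
        std::cout << "corner updated\n";
    }

In the real system, the detection step would warp a small window by the predicted rotation so that edge localization becomes a one-dimensional search; it is stubbed here only to keep the sketch self-contained and compilable.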

Keywords

Feature Tracking, Composite Feature, Edge Segment, Image Warping, Simple Edge

Copyright information

© Springer-Verlag Berlin Heidelberg 1996

Authors and Affiliations

  • Gregory D. Hager (1)
  • Kentaro Toyama (1)

  1. Department of Computer Science, Yale University, New Haven