
Annotation of Video by Alignment to Reference Imagery

  • Keith J. Hanna
  • Harpreet S. Sawhney
  • Rakesh Kumar
  • Y. Guo
  • S. Samarasekara
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1883)

Abstract

Video is widespread as an entertainment and information source in consumer, military, and broadcast television applications. Typically, however, the video is simply presented to the viewer, with only minimal manipulation. Examples include chroma-keying (often used in news and weather broadcasts), where specific color components are detected and used to control the video source. In the past few years, the advent of digital video and increases in computational power have meant that more complex manipulation can be performed. In this paper we present some highlights of our work in annotating video by aligning features extracted from the video to a reference set of features.
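As an illustration of the kind of minimal manipulation mentioned above, the sketch below keys a frame against a color range and composites a second image into the keyed region. It is not taken from the paper; the OpenCV/NumPy calls, the HSV green band, and the function name are illustrative assumptions.

```python
import cv2
import numpy as np

def chroma_key(frame, insert, lower_hsv=(35, 80, 40), upper_hsv=(85, 255, 255)):
    """Replace pixels of `frame` that fall in a keyed colour range with the
    corresponding pixels of `insert` (both images must have the same size)."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Binary mask of the keyed colour range (here a generic green band).
    mask = cv2.inRange(hsv, np.array(lower_hsv), np.array(upper_hsv))
    # Switch between the two sources per pixel using the mask.
    return np.where(mask[..., None] > 0, insert, frame)
```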

Video insertion and annotation require manipulation of the video stream to composite synthetic imagery and information with real video imagery. The manipulation may involve only the 2D image space or the full 3D scene space. The key problems to be solved are: (i) indexing and matching to determine the location of insertion, (ii) stable and jitter-free tracking to compute the time variation of the camera, and (iii) seamlessly blended insertion for an authentic viewing experience. We highlight our approach to these problems through three example scenarios: (i) 2D synthetic pattern insertion in live video, (ii) annotation of aerial imagery through geo-registration with stored reference imagery and annotations, and (iii) 3D object insertion into video of a 3D scene.
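The 2D insertion scenario can be made concrete with a rough sketch: align the current frame to a stored reference image, then warp an overlay placed in reference coordinates into the frame. This is not the authors' pipeline; it uses generic ORB feature matching and a RANSAC homography, and the function name, parameters, and compositing step are illustrative assumptions.

```python
import cv2
import numpy as np

def align_and_insert(frame, reference, overlay, overlay_corners_ref):
    """Warp `overlay` into `frame`, given its four corner positions
    (`overlay_corners_ref`) expressed in `reference` image coordinates."""
    # Detect and match sparse features between the reference and the current frame.
    orb = cv2.ORB_create(2000)
    kp_ref, des_ref = orb.detectAndCompute(reference, None)
    kp_cur, des_cur = orb.detectAndCompute(frame, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_ref, des_cur), key=lambda m: m.distance)[:200]
    pts_ref = np.float32([kp_ref[m.queryIdx].pt for m in matches])
    pts_cur = np.float32([kp_cur[m.trainIdx].pt for m in matches])

    # Robustly estimate the reference-to-frame homography.
    H, _ = cv2.findHomography(pts_ref, pts_cur, cv2.RANSAC, 3.0)

    # Place the overlay in reference coordinates, then map it into the frame.
    h, w = overlay.shape[:2]
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    H_place = cv2.getPerspectiveTransform(src, np.float32(overlay_corners_ref))
    size = (frame.shape[1], frame.shape[0])
    warped = cv2.warpPerspective(overlay, H @ H_place, size)
    mask = cv2.warpPerspective(np.full((h, w), 255, np.uint8), H @ H_place, size)
    return np.where(mask[..., None] > 0, warped, frame)
```

In a live stream this alignment would be run per frame (or tracked incrementally), and the hard binary mask would typically be feathered so the insertion blends seamlessly, in the spirit of problem (iii) above.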

Keywords

Reference Image, Video Frame, Video Stream, Current Image, Correlation Surface



Copyright information

© Springer-Verlag Berlin Heidelberg 2000

Authors and Affiliations

  • Keith J. Hanna (1)
  • Harpreet S. Sawhney (1)
  • Rakesh Kumar (1)
  • Y. Guo (1)
  • S. Samarasekara (1)

  1. Sarnoff Corporation, Princeton
