
Tracking and Rendering Using Dynamic Textures on Geometric Structure from Motion

  • Dana Cobzas
  • Martin Jagersand
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2351)

Abstract

Estimating geometric structure from uncalibrated images accurately enough for high-quality rendering is difficult. We present a method in which only coarse geometric structure is tracked and estimated from a moving camera. Instead, a precise model of the intensity image variation is obtained by overlaying a dynamic, time-varying texture on the structure. This captures small-scale variations (e.g. non-planarity of the rendered surfaces, small camera geometry distortions, and tracking errors). The dynamic texture is estimated and coded much as in movie compression, but parameterized by 6D pose instead of time, hence allowing the interpolation and extrapolation of new poses in the rendering and animation phase. We show experiments tracking and re-animating natural scenes, as well as evaluating geometric and image-intensity accuracy on specially constructed test scenes.
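The core idea of the abstract — factor the texture variation into a small basis, as in movie-style coding, but index the blending coefficients by 6D pose rather than time — can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the array names, the SVD basis, and the simple linear pose-to-coefficient map are all assumptions made for the example.

```python
import numpy as np

# Assumed training data (not from the paper):
#   poses:    (N, 6) array of 6D camera poses for the training images
#   textures: (N, P) array, each row a texture patch (already warped
#             onto the coarse geometry) flattened to P pixels

def fit_dynamic_texture(poses, textures, k=5):
    """Factor texture variation into a k-dimensional basis and fit a
    linear map from 6D pose to the blending coefficients."""
    mean = textures.mean(axis=0)
    # SVD of the mean-subtracted textures gives the basis images
    _, _, Vt = np.linalg.svd(textures - mean, full_matrices=False)
    basis = Vt[:k]                        # (k, P) basis texture images
    coeffs = (textures - mean) @ basis.T  # (N, k) per-image coefficients
    # Least-squares linear map from [pose, 1] to coefficients
    A = np.hstack([poses, np.ones((len(poses), 1))])
    W, *_ = np.linalg.lstsq(A, coeffs, rcond=None)
    return mean, basis, W

def render_texture(pose, mean, basis, W):
    """Synthesize the texture for a new 6D pose (interpolating, or
    mildly extrapolating, the training poses)."""
    c = np.append(pose, 1.0) @ W          # predicted coefficients
    return mean + c @ basis
```

In this sketch the pose-to-coefficient map is linear for simplicity; any smooth regressor over the 6D pose space would serve the same role of letting the renderer synthesize textures at poses never observed during training.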

Keywords

Training image, geometric error, static texture, dynamic texture, factorization algorithm
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.



Copyright information

© Springer-Verlag Berlin Heidelberg 2002

Authors and Affiliations

  • Dana Cobzas (1)
  • Martin Jagersand (1)
  1. Department of Computing Science, University of Alberta, Canada
