
What Does the Scene Look Like from a Scene Point?

  • M. Irani
  • T. Hassner
  • P. Anandan
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2351)

Abstract

In this paper we examine the problem of synthesizing virtual views from scene points within the scene, i.e., from scene points which are imaged by the real cameras. On the one hand, this provides a simple way of defining the position of the virtual camera in an uncalibrated setting. On the other hand, it implies extreme changes in viewpoint between the virtual and real cameras. Such extreme changes in viewpoint are not typical of most New-View-Synthesis (NVS) problems.

In our algorithm the virtual view is obtained by aligning and comparing all the projections of each line-of-sight emerging from the “virtual camera” center in the input views. In contrast to most previous NVS algorithms, our approach requires neither prior correspondence estimation nor any explicit 3D reconstruction. It can handle any number of input images while simultaneously using the information from all of them; in practice, however, very few images are usually enough to provide reasonable synthesis quality. We show results on real images as well as on synthetic images with ground truth.
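
To make the line-of-sight idea concrete, the following minimal Python sketch renders the colour along a single ray from the virtual camera centre by projecting candidate points on that ray into each input view and picking the most photo-consistent sample. This is only an illustration under simplifying assumptions: it presumes known 3x4 projection matrices and explicit depth sampling, whereas the paper works in an uncalibrated setting and aligns the projections of the whole line of sight without reconstructing 3D points. The names render_ray and project, and all parameters, are hypothetical.

import numpy as np

def project(P, X):
    """Project homogeneous 3D points X (4xN) with a 3x4 camera matrix P; return 2xN pixel coordinates."""
    x = P @ X
    return x[:2] / x[2]

def render_ray(c, d, images, cameras, depths):
    """Colour seen along one line of sight from the virtual centre c in direction d.

    The ray is sampled at the given depths; each 3D sample is projected into every
    input view, and the sample whose projections agree best (lowest colour variance
    across views) supplies the output colour. (Assumed calibrated setup, unlike the paper.)
    """
    best_color, best_score = None, np.inf
    for t in depths:
        X = np.append(c + t * d, 1.0)          # homogeneous 3D sample on the ray
        samples = []
        for img, P in zip(images, cameras):
            u, v = project(P, X.reshape(4, 1)).ravel()
            ui, vi = int(round(u)), int(round(v))
            if 0 <= vi < img.shape[0] and 0 <= ui < img.shape[1]:
                samples.append(img[vi, ui].astype(float))
        if len(samples) < 2:                   # need at least two views to compare
            continue
        samples = np.stack(samples)
        score = samples.var(axis=0).sum()      # photo-consistency of the projections
        if score < best_score:
            best_score, best_color = score, samples.mean(axis=0)
    return best_color

Repeating this per ray direction yields the full virtual view; the paper replaces the per-depth search with a direct alignment and comparison of the ray's projections in the input images.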

Keywords

Novel-view synthesis · Synthesis without structure or motion

Copyright information

© Springer-Verlag Berlin Heidelberg 2002

Authors and Affiliations

  • M. Irani (1)
  • T. Hassner (1)
  • P. Anandan (2)
  1. Dept. of Computer Science and Applied Mathematics, The Weizmann Institute of Science, Rehovot, Israel
  2. Microsoft Research, Redmond
