View Synthesis with Occlusion Reasoning Using Quasi-Sparse Feature Correspondences

  • David Jelinek
  • Camillo J. Taylor
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2351)

Abstract

The goal of most image based rendering systems can be stated as follows: given a set of pictures taken from various vantage points, synthesize the image that would be obtained from a novel viewpoint. In this paper we present a novel approach to view synthesis which hinges on the observation that human viewers tend to be quite sensitive to the motion of features in the image corresponding to intensity discontinuities or edges. Our system focuses its efforts on recovering the 3D position of these features so that their motions can be synthesized correctly. In the current implementation these feature points are recovered from image sequences by employing the epipolar plane image (EPI) analysis techniques proposed by Bolles, Baker, and Marimont. The output of this procedure resembles the output of an edge extraction system where the edgels are augmented with accurate depth information. This method has the advantage of producing accurate depth estimates for most of the salient features in the scene including those corresponding to occluding contours. We will demonstrate that it is possible to produce compelling novel views based on this information.
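The depth cue that EPI analysis exploits can be sketched in a few lines: for a camera translating laterally at constant speed, each scene point traces a straight line in the epipolar plane image, and that line's slope (in pixels per frame) is inversely proportional to the point's depth. A minimal illustration follows; the focal length, baseline, and slope values are hypothetical, chosen only to show the relation, and are not taken from the paper.

```python
def depth_from_epi_slope(slope_px_per_frame, focal_px, baseline_per_frame):
    """For a laterally translating camera, a scene point at depth Z traces
    a line in the epipolar plane image (EPI) with slope = f * b / Z,
    so Z = f * b / slope.  (Standard EPI geometry, hypothetical values.)"""
    return focal_px * baseline_per_frame / slope_px_per_frame

# Example: focal length 500 px, 1 cm of camera motion per frame.
# A feature drifting 2.5 px per frame lies at 500 * 0.01 / 2.5 = 2.0 m.
Z = depth_from_epi_slope(slope_px_per_frame=2.5,
                         focal_px=500.0,
                         baseline_per_frame=0.01)
```

Because edgels are sharply localized, fitting such lines to edge tracks in the EPI yields the accurate per-feature depths the abstract refers to, including on occluding contours where dense stereo typically fails.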

The paper will also describe a principled approach to reasoning about the 3D structure of the scene based on the quasi-sparse features returned by the EPI analysis. This analysis allows us to correctly reproduce occlusion and disocclusion effects in the synthetic views without requiring dense correspondences. Importantly, the technique could also be used to analyze and refine the 3-D results returned by range finders, stereo systems or structure from motion algorithms. Results obtained by applying the proposed techniques to actual image data sets are presented.
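One simple way to see how reprojected edge features can reproduce occlusion and disocclusion is to render the recovered 3D edgels into the novel view with a z-buffer, so that a nearer edgel hides a farther one landing on the same pixel. The sketch below is an illustrative stand-in under a standard pinhole-camera model, not the paper's actual occlusion-reasoning algorithm; all function and parameter names are assumptions.

```python
import numpy as np

def render_edgels(points_3d, colors, K, R, t, h, w):
    """Project 3D edge points into a novel view (intrinsics K, pose R, t)
    and resolve visibility with a z-buffer: a nearer edgel overwrites a
    farther one that projects to the same pixel."""
    img = np.zeros((h, w, 3))
    zbuf = np.full((h, w), np.inf)
    cam = (R @ points_3d.T + t.reshape(3, 1)).T      # world -> camera frame
    for (X, Y, Z), c in zip(cam, colors):
        if Z <= 0:
            continue                                  # behind the camera
        u, v, _ = K @ np.array([X / Z, Y / Z, 1.0])   # perspective projection
        ui, vi = int(round(u)), int(round(v))
        if 0 <= vi < h and 0 <= ui < w and Z < zbuf[vi, ui]:
            zbuf[vi, ui] = Z                          # nearer point wins
            img[vi, ui] = c
    return img
```

A per-pixel depth test like this handles occlusion only where features project; the paper's contribution is reasoning about surface geometry between the quasi-sparse features, which a plain z-buffer does not address.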

Keywords

Structure from Motion · Surface Geometry · Image-Based Rendering

References

  1. R. C. Bolles, H. H. Baker, and D. H. Marimont. Epipolar-plane image analysis: An approach to determining structure from motion. International Journal of Computer Vision, 1(1):7–55, 1987.
  2. S. E. Chen and L. Williams. View interpolation for image synthesis. In SIGGRAPH, pages 279–288, August 1993.
  3. Brian Curless and Marc Levoy. A volumetric method for building complex models from range images. In Proceedings of SIGGRAPH 96, Computer Graphics Proceedings, Annual Conference Series, pages 303–312, New Orleans, LA, August 4–9 1996. ACM SIGGRAPH.
  4. Y. Genc and J. Ponce. Parameterized image varieties: A novel approach to the analysis and synthesis of image sequences. In International Conference on Computer Vision, pages 11–16, January 1998.
  5. Steven J. Gortler, Radek Grzeszczuk, Richard Szeliski, and Michael Cohen. The lumigraph. In Proceedings of SIGGRAPH 96, Computer Graphics Proceedings, Annual Conference Series, pages 43–54, New Orleans, LA, August 4–9 1996. ACM SIGGRAPH.
  6. V. Hlavac, A. Leonardis, and T. Werner. Automatic selection of reference views for image-based scene representations. In European Conference on Computer Vision, pages 526–535, 1996.
  7. T. Kanade, P. W. Rander, and P. J. Narayanan. Virtualized reality: Constructing virtual worlds from real scenes. IEEE Multimedia, 4(1):34–47, 1997.
  8. S. Laveau and O. D. Faugeras. 3-D scene representation as a collection of images. In International Conference on Pattern Recognition, pages 689–691, 1994.
  9. Marc Levoy and Pat Hanrahan. Light field rendering. In Proceedings of SIGGRAPH 96, Computer Graphics Proceedings, Annual Conference Series, pages 31–42, New Orleans, LA, August 4–9 1996. ACM SIGGRAPH.
  10. Maxime Lhuillier and Long Quan. Image interpolation by joint view triangulation. In Proc. IEEE Conf. on Vision and Patt. Recog., pages 139–145, 1999.
  11. Maxime Lhuillier and Long Quan. Edge-constrained joint view triangulation for image interpolation. In Proc. IEEE Conf. on Vision and Patt. Recog., pages 218–224, 2000.
  12. Ko Nishino, Yoichi Sato, and Katsushi Ikeuchi. Eigen-texture method: Appearance compression based on 3D model. In Proc. IEEE Conf. on Vision and Patt. Recog., pages 618–624, 1999.
  13. M. Pollefeys, L. Van Gool, and M. Proesmans. Euclidean 3D reconstruction from image sequences with variable focal lengths. In European Conference on Computer Vision, pages 31–42, 1996.
  14. W. Press, B. Flannery, S. Teukolsky, and W. Vetterling. Numerical Recipes in C. Cambridge University Press, 1988.
  15. Y. Sato, M. D. Wheeler, and K. Ikeuchi. Object shape and reflectance modeling from observation. In Proceedings of SIGGRAPH 97, Computer Graphics Proceedings, Annual Conference Series, pages 379–387. ACM SIGGRAPH, August 1997.
  16. Steven Seitz and Charles R. Dyer. View morphing. In Proceedings of SIGGRAPH 96, Computer Graphics Proceedings, Annual Conference Series, pages 21–30, New Orleans, LA, August 4–9 1996. ACM SIGGRAPH.
  17. J. A. Sethian. Level Set Methods: Evolving Interfaces in Geometry, Fluid Mechanics, Computer Vision and Material Sciences. Cambridge University Press, 1996.
  18. Jonathan Shade, Steven Gortler, Li-wei He, and Richard Szeliski. Layered depth images. In Proceedings of SIGGRAPH 98, Computer Graphics Proceedings, Annual Conference Series, pages 231–242. ACM SIGGRAPH, August 1998.
  19. Heung-Yeung Shum and Li-Wei He. Rendering with concentric mosaics. In SIGGRAPH, pages 299–306, August 1999.
  20. Ioannis Stamos and Peter Allen. 3-D model construction using range and image data. In IEEE Conference on Computer Vision and Pattern Recognition, 2000.
  21. Hai Tao and Harpreet Sawhney. Global matching criterion and color segmentation based stereo. In Workshop on the Applications of Computer Vision, pages 246–253, December 2000.
  22. Hai Tao, Harpreet Sawhney, and Rakesh Kumar. A global matching framework for stereo computation. In International Conference on Computer Vision, pages 532–539, 2001.
  23. T. Werner, R. D. Hersch, and V. Hlavac. Rendering real-world objects using view interpolation. In International Conference on Computer Vision, pages 957–962, 1995.
  24. Masanobu Yamamoto. The Image Sequence Analysis of Three-Dimensional Dynamic Scenes. PhD thesis, Electrotechnical Laboratory, Agency of Industrial Science and Technology, May 1988.

Copyright information

© Springer-Verlag Berlin Heidelberg 2002

Authors and Affiliations

  • David Jelinek (1)
  • Camillo J. Taylor (1)

  1. GRASP Laboratory, CIS Department, University of Pennsylvania, Philadelphia