Non-redundant rendering for efficient multi-view scene discretization

Abstract

A powerful approach for managing scene complexity is to sample the scene with a set of images. However, conventional images from nearby viewpoints have a high level of redundancy, which reduces scene sampling efficiency. We present non-redundant rendering, which detects and avoids redundant samples as the image is computed. We show that non-redundant rendering leads to improved scene sampling quality according to several view-independent and view-dependent metrics, compared both to conventional scene discretization using redundant images and to depth peeling. Non-redundant images have a higher degree of fragmentation and, therefore, conventional approaches for scene reconstruction from samples are ineffective. We present a novel reconstruction approach that is well suited to scene discretization by non-redundant rendering. Finally, we apply non-redundant rendering and scene reconstruction techniques to soft shadow rendering, where we show that our approach has an accuracy advantage over conventional images and over depth peeling.
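To illustrate the core idea, here is a toy sketch (not the paper's actual algorithm, which is not reproduced on this page): a sequence of views discretizes a scene, and each view keeps a sample only if no earlier view has already captured that surface point. The surface-point IDs and the `non_redundant_discretize` helper are illustrative assumptions.

```python
def non_redundant_discretize(views):
    """views: a list of sets of hashable surface-sample IDs, one set per
    viewpoint. Returns, per view, the list of kept (non-redundant) samples."""
    seen = set()           # surface samples already captured by earlier views
    kept_per_view = []
    for visible in views:
        # Skip samples that an earlier view has already stored: this is the
        # redundancy check, applied as each view's "image" is computed.
        kept = [s for s in visible if s not in seen]
        seen.update(kept)
        kept_per_view.append(kept)
    return kept_per_view

# Three nearby viewpoints see heavily overlapping parts of the scene,
# but each surface sample is stored exactly once across the view set.
views = [{1, 2, 3, 4}, {2, 3, 4, 5}, {3, 4, 5, 6}]
kept = non_redundant_discretize(views)
print(kept)
```

In this toy setting the three overlapping views store 6 unique samples instead of 12, which is the efficiency gain the abstract attributes to avoiding redundant samples.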



Acknowledgments

This work was supported in part by the National Natural Science Foundation of China through Projects 61272349, 61190121 and 61190125, and by the National High Technology Research and Development Program of China through 863 Program No. 2013AA01A604. Naiwen Xie gratefully acknowledges financial support from the China Scholarship Council (CSC) through No. 201506020037.

Author information

Correspondence to Naiwen Xie.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (avi 53473 KB)


About this article

Cite this article

Xie, N., Wang, L. & Popescu, V. Non-redundant rendering for efficient multi-view scene discretization. Vis Comput 33, 1555–1569 (2017). https://doi.org/10.1007/s00371-016-1300-6

Keywords

  • Scene sampling
  • Sampling redundancy
  • Non-redundant sampling