
Depth texture synthesis for high-resolution reconstruction of large scenes

  • Original Paper
  • Machine Vision and Applications

Abstract

Large scenes such as building facades and other architectural constructions often contain repeating elements such as identical windows and brick patterns. In this paper, we present a novel approach that improves the resolution and geometry of 3D meshes of large scenes containing such repeating elements. By leveraging structure-from-motion (SfM) reconstruction and an off-the-shelf depth sensor, our approach captures a small sample of the scene in high resolution and automatically propagates that information to similar regions of the scene. Using RGB and SfM depth information as a guide and simple geometric primitives as a canvas, our approach extends the high-resolution mesh by exploiting powerful, image-based texture synthesis techniques. The final results improve on standard SfM reconstruction by providing higher detail. Compared to full RGBD reconstruction, our approach requires less manual labor, and it is much cheaper than LiDAR-based solutions.
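
The high-level pipeline described above, capturing a small high-resolution exemplar and extending it over similar regions using RGB and coarse SfM depth as a guide, can be illustrated with a short sketch. The Python below is a hypothetical, simplified version, not the authors' implementation: the function name, patch size, and brute-force nearest-neighbour search are assumptions, standing in for the faster image-based texture synthesis machinery the abstract refers to. It shows how guide-space matching could drive the copying of high-resolution depth onto a geometric-primitive canvas.

# Minimal illustrative sketch (assumption, not the paper's exact algorithm):
# patch-based depth synthesis guided by RGB + coarse SfM depth. A small exemplar
# region carries high-resolution depth; for every patch of the target "canvas"
# we look up the exemplar patch whose guide channels match best and copy its
# high-resolution depth, averaging overlapping contributions.
import numpy as np

def synthesize_depth(target_guide, exemplar_guide, exemplar_depth,
                     patch=7, stride=3):
    # target_guide:   (H, W, C) guide channels (RGB + coarse SfM depth) over the
    #                 region to upgrade, rendered on the geometric-primitive canvas.
    # exemplar_guide: (h, w, C) the same guide channels over the captured sample.
    # exemplar_depth: (h, w)    high-resolution depth of the captured sample.
    # Returns an (H, W) synthesized high-resolution depth map.
    H, W, _ = target_guide.shape
    h, w, _ = exemplar_guide.shape
    out = np.zeros((H, W))
    weight = np.zeros((H, W))

    # Collect candidate exemplar patches once, flattened for vectorized matching.
    ys, xs = np.mgrid[0:h - patch + 1:stride, 0:w - patch + 1:stride]
    coords = np.stack([ys.ravel(), xs.ravel()], axis=1)
    candidates = np.stack([exemplar_guide[y:y + patch, x:x + patch].ravel()
                           for y, x in coords])

    for ty in range(0, H - patch + 1, stride):
        for tx in range(0, W - patch + 1, stride):
            query = target_guide[ty:ty + patch, tx:tx + patch].ravel()
            # Match in guide space: agreement of appearance and coarse geometry.
            dist = np.sum((candidates - query) ** 2, axis=1)
            by, bx = coords[np.argmin(dist)]
            # Paste the matched high-resolution depth patch, blending overlaps.
            out[ty:ty + patch, tx:tx + patch] += exemplar_depth[by:by + patch,
                                                                bx:bx + patch]
            weight[ty:ty + patch, tx:tx + patch] += 1.0

    return out / np.maximum(weight, 1.0)

The simple averaging of overlapping patches is only the most basic blending choice; the full method operates on meshes of large scenes and relies on more sophisticated, image-based texture synthesis and blending.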

Author information

Corresponding author

Correspondence to Jean-François Lalonde.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This work was supported by the NSERC/Creaform Industrial Research Chair on 3D Scanning.

About this article

Cite this article

Labrie-Larrivée, F., Laurendeau, D. & Lalonde, JF. Depth texture synthesis for high-resolution reconstruction of large scenes. Machine Vision and Applications 30, 795–806 (2019). https://doi.org/10.1007/s00138-019-01030-y
