Predicting the Next Best View for 3D Mesh Refinement

  • Luca Morreale
  • Andrea Romanoni
  • Matteo Matteucci
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 867)


3D reconstruction is a core task in many applications such as robot navigation or site inspection. Finding the best poses from which to capture part of the scene is one of the most challenging problems in this field, and it goes under the name of Next Best View. Recently, many volumetric methods have been proposed; they choose the Next Best View by reasoning in a 3D voxelized space and by finding the pose that minimizes the uncertainty encoded in the voxels. Such methods are effective, but they do not scale well, since the underlying representation requires a huge amount of memory. In this paper we propose a novel mesh-based approach that focuses the Next Best View on the worst reconstructed regions of the environment. We define a photo-consistency index to evaluate the accuracy of the model, and an energy function over the worst regions of the mesh that takes into account the mutual parallax with respect to the previous cameras, the angle of incidence of the viewing ray to the surface, and the visibility of the region. We tested our approach on a well-known dataset and achieved state-of-the-art results.



Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Luca Morreale (1)
  • Andrea Romanoni (1)
  • Matteo Matteucci (1)

  1. Politecnico di Milano, Milan, Italy
