
Optimal View Path Planning for Visual SLAM

  • Sebastian Haner
  • Anders Heyden
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6688)

Abstract

In experimental design and 3D reconstruction it is desirable to minimize the number of observations required to reach a prescribed estimation accuracy. Many approaches in the literature attempt to find the next best view from which to measure, and iterate this procedure. This paper discusses a continuous optimization method for finding a whole set of future imaging locations which minimize the reconstruction error of observed geometry along with the distance traveled by the camera between these locations. A computationally efficient iterative algorithm targeted toward application within real-time SLAM systems is presented and tested on simulated data.
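
The abstract does not give the paper's actual formulation, so the following is only a minimal sketch of the kind of joint objective it describes: a single continuous cost over a whole set of future camera positions that combines predicted reconstruction uncertainty with the distance traveled between viewpoints. The bearing-only measurement model, the noise level SIGMA, the trade-off weight LAMBDA, and the use of SciPy's BFGS optimizer are illustrative assumptions, not the authors' algorithm.

```python
# Hedged sketch (not the paper's method): jointly optimize a set of future
# viewpoints by minimizing predicted landmark uncertainty plus path length.
import numpy as np
from scipy.optimize import minimize

SIGMA = 0.01   # assumed bearing-noise level (hypothetical)
LAMBDA = 0.1   # assumed weight on travel distance vs. reconstruction error

def view_information(cam, landmark):
    """Information a single view from `cam` contributes about `landmark`.

    A bearing measurement constrains the landmark only perpendicular to the
    viewing ray, so the information is (I - d d^T) scaled by 1/(sigma*r)^2,
    with d the unit viewing direction and r the range."""
    diff = landmark - cam
    r = np.linalg.norm(diff)
    d = diff / r
    return (np.eye(3) - np.outer(d, d)) / (SIGMA * r) ** 2

def objective(x, landmarks, start, prior_info):
    """Predicted reconstruction error plus distance traveled for views `x`."""
    cams = x.reshape(-1, 3)
    cost = 0.0
    for j, lm in enumerate(landmarks):
        info = prior_info[j].copy()
        for cam in cams:
            info += view_information(cam, lm)
        # Trace of the predicted covariance = total positional uncertainty.
        cost += np.trace(np.linalg.inv(info))
    # Distance from the current pose through all planned viewpoints in order.
    path = np.vstack([start, cams])
    cost += LAMBDA * np.sum(np.linalg.norm(np.diff(path, axis=0), axis=1))
    return cost

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    landmarks = rng.uniform(-1, 1, size=(20, 3)) + np.array([0.0, 0.0, 5.0])
    start = np.zeros(3)
    # Weak isotropic prior keeps every information matrix invertible.
    prior_info = [1e-3 * np.eye(3) for _ in landmarks]
    # Initialize three future viewpoints near the current camera position.
    x0 = (start + 0.1 * rng.standard_normal((3, 3))).ravel()
    res = minimize(objective, x0, args=(landmarks, start, prior_info),
                   method="BFGS")
    print("planned viewpoints:\n", res.x.reshape(-1, 3))
```

Because the per-view information matrices enter the cost smoothly, all planned viewpoints can be refined jointly by a gradient-based optimizer, in contrast to greedily selecting one next best view at a time and iterating.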

Keywords

Next best view planning, path optimization, SLAM


Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Sebastian Haner
  • Anders Heyden
  1. Centre for Mathematical Sciences, Lund University, Sweden
