
Large Scale Dense Visual Inertial SLAM

Field and Service Robotics

Part of the book series: Springer Tracts in Advanced Robotics ((STAR,volume 113))

Abstract

In this paper we present a novel large-scale SLAM system that combines dense stereo vision with inertial tracking. The system divides space into a grid and allocates GPU memory only for grid cells that contain surface information; a rolling-grid approach allows the system to scale to large outdoor environments. A dense visual-inertial tracking pipeline incrementally localizes the stereo cameras against the scene. The proposed system is tested on both a simulated dataset and several real-world datasets captured under different lighting (illumination changes), motion (slow and fast), and weather (snow, sunny) conditions. Unlike structured-light RGB-D systems, the proposed system works both indoors and outdoors, at scales beyond single rooms or desktop scenes. Crucially, the system is able to leverage inertial measurements for robust tracking when visual measurements do not suffice. Results demonstrate effective operation on simulated and real data, both indoors and outdoors, under varying lighting conditions.
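The lazy-allocation rolling grid described above can be illustrated with a minimal sketch. This is not the authors' implementation (which allocates GPU memory); it is an assumed CPU-side analogue in which cells are created only when a surface point first lands in them, and cells outside a window around the camera are freed as the camera moves. The class name `RollingGrid` and all parameters are hypothetical.

```python
class RollingGrid:
    """Sketch of a rolling voxel grid with lazy cell allocation.

    Cells are keyed by integer grid coordinates and allocated only when
    surface data is inserted into them. When the camera moves, cells
    outside a fixed window around it are freed, keeping memory bounded
    (the "rolling" behaviour that enables large-scale outdoor mapping).
    """

    def __init__(self, cell_size=1.0, window=2):
        self.cell_size = cell_size
        self.window = window      # half-width of the active window, in cells
        self.cells = {}           # (i, j, k) -> list of surface points
        self.center = (0, 0, 0)   # grid cell containing the camera

    def _key(self, point):
        # Map a 3D point to the integer coordinates of its containing cell.
        return tuple(int(c // self.cell_size) for c in point)

    def insert_surface_point(self, point):
        # Allocate the containing cell only on first use.
        self.cells.setdefault(self._key(point), []).append(point)

    def roll_to(self, camera_position):
        # Recentre the active window on the camera and free far cells.
        self.center = self._key(camera_position)
        w = self.window
        self.cells = {
            k: v for k, v in self.cells.items()
            if all(abs(k[a] - self.center[a]) <= w for a in range(3))
        }


grid = RollingGrid(cell_size=1.0, window=1)
grid.insert_surface_point((0.2, 0.3, 0.1))   # allocates cell (0, 0, 0)
grid.insert_surface_point((5.5, 0.0, 0.0))   # allocates cell (5, 0, 0)
grid.roll_to((5.0, 0.0, 0.0))                # frees cells far from the camera
print(sorted(grid.cells))                    # -> [(5, 0, 0)]
```

The key design point is that memory scales with observed surface area inside the active window, not with the total volume traversed.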



Author information

Correspondence to Lu Ma.

Copyright information

© 2016 Springer International Publishing Switzerland

About this chapter

Cite this chapter

Ma, L., Falquez, J.M., McGuire, S., Sibley, G. (2016). Large Scale Dense Visual Inertial SLAM. In: Wettergreen, D., Barfoot, T. (eds) Field and Service Robotics. Springer Tracts in Advanced Robotics, vol 113. Springer, Cham. https://doi.org/10.1007/978-3-319-27702-8_10

Download citation

  • DOI: https://doi.org/10.1007/978-3-319-27702-8_10

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-27700-4

  • Online ISBN: 978-3-319-27702-8

  • eBook Packages: Engineering (R0)
