
Accuracy enhancement for the front-end tracking algorithm of RGB-D SLAM

  • Fuwen Hu
  • Jingli Cheng
  • Yunchang Bao
  • Yunhua He (corresponding author)
Original Research Paper

Abstract

Robust and accurate simultaneous localization and mapping (SLAM) in working scenarios is an essential competence for performing mobile robotic tasks autonomously. A substantial body of research indicates that extracting point features from RGB-D data, taking the images and the depth data into account simultaneously, increases the robustness and precision of visual odometry, whether used as a standalone localization system or as the front end of a pose-based SLAM system. However, owing to pure rotation, sudden movements, motion blur, noise, and large depth variations, RGB-D SLAM systems often suffer from tracking loss during data association. The front-end tracking process of ORB-SLAM screens candidate frames step by step, which makes tracking loss more likely. To address these problems, this work improves the ORB-SLAM front-end tracking algorithm by combining a uniform-speed (constant-velocity) model that tracks the most recent effectively tracked frame with an algorithm that matches against nearby frames. Three sequences with pronounced motion blur, selected from the TUM RGB-D datasets, are then used to verify the effect of the improved front-end architecture. The experimental results suggest that the proposed scheme not only effectively increases the number of tracked frames, but also reduces the computational cost by roughly a factor of two while preserving trajectory accuracy.
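The abstract describes the modification only at a high level; the following Python sketch illustrates one plausible reading of that tracking policy. It is a hedged illustration, not the authors' implementation: `Frame`, `FrontEndTracker`, and `match_and_optimize` are hypothetical names, and the matching step is stubbed where a real system would run ORB feature matching and motion-only pose optimization.

```python
# Minimal sketch (assumption, not the authors' code): a constant-velocity
# motion model that predicts from the last effectively tracked frame, with
# a fallback that matches against several nearby frames before declaring
# tracking lost. All names here are illustrative placeholders.

from collections import deque
from dataclasses import dataclass, field

import numpy as np


@dataclass
class Frame:
    features: object  # e.g. ORB keypoints + descriptors in a real system
    pose: np.ndarray = field(default_factory=lambda: np.eye(4))  # 4x4 world-from-camera


def match_and_optimize(frame, ref, initial_pose):
    """Placeholder for ORB matching + motion-only pose optimization.

    A real front end would project the reference frame's map points with
    `initial_pose`, match ORB descriptors, optimize the pose, and return
    the inlier count. Stubbed here so the sketch runs standalone.
    """
    frame.pose = initial_pose  # pretend the predicted pose was accepted
    return 100                 # pretend inlier count


class FrontEndTracker:
    def __init__(self, window=5, min_inliers=30):
        self.effective_frames = deque(maxlen=window)  # recent well-tracked frames
        self.velocity = np.eye(4)  # relative motion from the last tracking success
        self.min_inliers = min_inliers

    def track(self, frame):
        # Step 1: constant-velocity prediction from the last effective frame.
        if self.effective_frames:
            ref = self.effective_frames[-1]
            predicted = self.velocity @ ref.pose
            if match_and_optimize(frame, ref, predicted) >= self.min_inliers:
                return self._accept(frame, ref)

        # Step 2: fall back to matching against other nearby effective frames
        # (newest first) instead of going straight to relocalization.
        for ref in reversed(self.effective_frames):
            if match_and_optimize(frame, ref, ref.pose) >= self.min_inliers:
                return self._accept(frame, ref)

        # Step 3: first frame, or tracking lost -> caller relocalizes.
        if not self.effective_frames:
            self.effective_frames.append(frame)
            return frame.pose
        return None

    def _accept(self, frame, ref):
        # Update the velocity as the relative motion between the two poses.
        self.velocity = frame.pose @ np.linalg.inv(ref.pose)
        self.effective_frames.append(frame)
        return frame.pose


if __name__ == "__main__":
    tracker = FrontEndTracker()
    for _ in range(3):
        print(tracker.track(Frame(features=None)))
```

The key design point, as we read the abstract, is that a failure of the constant-velocity step falls through to nearby-frame matching rather than directly to relocalization, which is what would increase the number of successfully tracked frames under motion blur while avoiding the full step-by-step screening on every frame.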

Keywords

Visual SLAM · Robotic vision · RGB-D data · Data association · Tracking


Copyright information

© Springer-Verlag GmbH Germany, part of Springer Nature 2019

Authors and Affiliations

  • Fuwen Hu (1)
  • Jingli Cheng (1)
  • Yunchang Bao (1)
  • Yunhua He (2, corresponding author)
  1. School of Mechanical and Material Engineering, North China University of Technology, Beijing, China
  2. Department of Mechanical and Electronic Engineering, Shandong University of Science and Technology, Tai’an, China
