A Vision-Based Assistance Key Differentiator for Helicopters Autonomous Scalable Missions

  • Rémi Girard (corresponding author)
  • Sébastien Mavromatis
  • Jean Sequeira
  • Nicolas Belanger
  • Guillaume Anoufa
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11650)

Abstract

In the coming years, incremental automation will be the main challenge in the development of highly versatile helicopter technologies. To support this effort, vision-based systems are becoming a mandatory technological foundation for helicopter avionics. Among the advantages that computer vision can provide for flight assistance, navigation in GPS-denied environments is an important focus for Airbus because it is relevant to various types of missions. This position paper surveys the available SLAM algorithms, along with their advantages and limitations, for addressing vision-based navigation problems for helicopters. The reasons why Visual SLAM (VSLAM) is of interest for our application are detailed. For an embedded helicopter application, the VSLAM algorithm must be robustified, with a special focus on the data model to be exchanged with the autopilot. Finally, we discuss future decisional architecture principles from the perspective of making vision-based navigation the fourth contributing agent in a wider distributed intelligence system composed of the autopilot, the flight management system and the crew.
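As a concrete illustration of the pose-estimation building block named in the abstract, the sketch below shows one monocular visual-odometry step in Python with OpenCV. It is an assumption for illustration only, not the system described in the paper: the intrinsics in K are placeholder values for a hypothetical calibrated camera, and a real embedded pipeline would add keyframing, mapping, loop closure and failure handling.

```python
import cv2
import numpy as np

# Hypothetical camera intrinsics (focal lengths, principal point);
# in practice these come from calibrating the onboard camera.
K = np.array([[718.856,   0.0,   607.19],
              [  0.0,   718.856, 185.22],
              [  0.0,     0.0,     1.0]])

orb = cv2.ORB_create(nfeatures=2000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def relative_pose(img_prev, img_curr):
    """Estimate the relative rotation R and unit-scale translation t
    between two consecutive grayscale frames."""
    kp1, des1 = orb.detectAndCompute(img_prev, None)
    kp2, des2 = orb.detectAndCompute(img_curr, None)
    matches = matcher.match(des1, des2)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # RANSAC rejects outlier correspondences; the essential matrix
    # encodes the epipolar geometry between the two views.
    E, mask = cv2.findEssentialMat(pts1, pts2, K,
                                   method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)
    # Decompose E and keep the (R, t) that places points in front of
    # both cameras. Monocular translation is known only up to scale.
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t
```

Because monocular translation is recovered only up to an unknown scale, such a front end cannot stand alone; this is one concrete reason why the abstract stresses exchanging a well-defined data model with the autopilot, which can provide metric scale from inertial measurements.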

Keywords

Visual SLAM · Vision-based navigation · Helicopters · Autonomous · Pose estimation · 3D reconstruction

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Rémi Girard (1, 2), corresponding author
  • Sébastien Mavromatis (1)
  • Jean Sequeira (1)
  • Nicolas Belanger (2)
  • Guillaume Anoufa (3)

  1. Aix Marseille University, CNRS, LIS, Marseille, France
  2. Airbus, Marignane, France
  3. Capgemini, Paris, France
