SLAM and Vision-based Humanoid Navigation

Olivier Stasse

Reference work entry


In order for humanoid robots to move autonomously in a complex environment, they have to perceive it, build an appropriate representation, localize themselves in it, and decide which motion to realize. The relationship between the environment and the robot is rather complex, as some parts are obstacles to avoid, others are possible supports for locomotion, and still others are objects to manipulate. The affordances of the objects and the environment may call for quite complex motions, ranging from bimanual manipulation to whole-body motion generation. In this chapter, we introduce tools to realize vision-based humanoid navigation. The general structure of such a system is depicted in Fig. 1. It is the classical perception-action loop: from the sensor signals, information is extracted; this information is then used to localize the robot and build a representation of the environment. This process is the subject of the second section. Finally, a motion is planned and sent to the robot control system. The third section describes several approaches to implementing visual navigation in the context of humanoid robotics.
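The perception-action loop described above can be sketched in code. The sketch below is purely illustrative: every class and method name (`PerceptionActionLoop`, `extract_features`, `localize_and_map`, `plan_motion`) is hypothetical and stands in for a real perception, SLAM, or footstep-planning module, under the assumption of a simple 2D pose and a landmark list as the map.

```python
# Minimal, illustrative sketch of the perception-action loop: sense ->
# extract information -> localize and map (SLAM) -> plan a motion command.
# All names are hypothetical placeholders for real robot software modules.

from dataclasses import dataclass, field


@dataclass
class Map:
    """Toy environment representation: a flat list of landmarks."""
    landmarks: list = field(default_factory=list)


class PerceptionActionLoop:
    def __init__(self):
        self.map = Map()
        self.pose = (0.0, 0.0, 0.0)  # (x, y, heading) estimate

    def extract_features(self, sensor_signal):
        # Placeholder for feature extraction (e.g. keypoints in an image).
        return [f for f in sensor_signal if f is not None]

    def localize_and_map(self, features):
        # Placeholder SLAM step: extend the map and update the pose estimate.
        self.map.landmarks.extend(features)
        x, y, heading = self.pose
        self.pose = (x + 0.1, y, heading)  # dummy odometry increment
        return self.pose

    def plan_motion(self, goal):
        # Placeholder planner: a single "step" command toward the goal.
        x, y, _ = self.pose
        gx, gy = goal
        return ("step", gx - x, gy - y)

    def step(self, sensor_signal, goal):
        # One iteration of the loop; the result would go to the controller.
        features = self.extract_features(sensor_signal)
        self.localize_and_map(features)
        return self.plan_motion(goal)


loop = PerceptionActionLoop()
command = loop.step(["corner", None, "edge"], goal=(1.0, 0.0))
```

In a real system each placeholder is a substantial component (dense or feature-based SLAM for `localize_and_map`, a footstep planner for `plan_motion`), but the data flow between them follows this same cycle.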


Keywords: SLAM · Factor graph · Nonlinear filter · Nonlinear optimization



The author's work was funded by the European Commission through the EUROC Project.



Copyright information

© Springer Nature B.V. 2019

Authors and Affiliations

  1. Gepetto Team, LAAS-CNRS, Toulouse, France
