Vision-Based Topological Navigation: An Implicit Solution to Loop Closure

  • Youcef Mezouar
  • Jonathan Courbon
  • Philippe Martinet
Reference work entry


Autonomous navigation using a single camera is a challenging and active field of research. Among the different approaches, visual memory-based navigation strategies have gained increasing interest in the last few years. They consist in representing the mobile robot's environment with topologically organized visual features gathered in a database (the visual memory). Basically, the navigation process from a visual memory can be split into three stages: (1) visual memory acquisition, (2) initial localization, and (3) path planning and following (refer to Fig. 53.1). Importantly, this framework allows accurate autonomous navigation without explicitly using a loop-closure strategy. The goal of this chapter is to provide the reader with an illustrative example of such a strategy.
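
To make the three stages concrete, the sketch below models the visual memory as a topological graph of key images: acquisition adds and links key images, localization selects the key image that best matches the current view, and navigation plans a path of key images through the graph and follows it with a vision-based controller. This is a minimal illustration under stated assumptions, not the chapter's implementation; the names (VisualMemory, match_score, reach_key_image), the choice of networkx for the graph, and the abstracted feature matching and control law are all hypothetical.

```python
# Minimal sketch of the three-stage visual-memory navigation pipeline.
# Feature extraction/matching and the vision-based control law are
# deliberately abstracted away; all names here are illustrative only.

import networkx as nx  # hypothetical choice for the topological graph


class VisualMemory:
    """Topological visual memory: key images are nodes, traversable
    transitions between neighbouring key images are edges."""

    def __init__(self):
        self.graph = nx.Graph()

    # Stage 1: visual memory acquisition (e.g. during a human-guided drive)
    def add_key_image(self, image_id, features):
        self.graph.add_node(image_id, features=features)

    def link(self, image_id_a, image_id_b):
        self.graph.add_edge(image_id_a, image_id_b)

    # Stage 2: initial localization -- return the key image whose stored
    # features best match those extracted from the current camera view
    def localize(self, current_features, match_score):
        return max(
            self.graph.nodes,
            key=lambda n: match_score(self.graph.nodes[n]["features"],
                                      current_features),
        )

    # Stage 3a: path planning -- shortest sequence of key images from the
    # current node to the goal node in the topological graph
    def plan(self, start_id, goal_id):
        return nx.shortest_path(self.graph, start_id, goal_id)


# Stage 3b: path following -- each key image of the planned path is reached
# in turn by a vision-based controller (not shown; e.g. image-based servoing).
def follow(memory, path, reach_key_image):
    for image_id in path:
        reach_key_image(memory.graph.nodes[image_id]["features"])
```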


Keywords: Mobile Robot · Interest Point · Visual Memory · Navigation Task · Autonomous Navigation



Copyright information

© Springer-Verlag London Ltd. 2012

Authors and Affiliations

  • Youcef Mezouar (1, 3)
  • Jonathan Courbon (1, 3)
  • Philippe Martinet (2, 3)
  1. Clermont Université, Université Blaise Pascal, LASMEA, France
  2. Clermont Université, IFMA, LASMEA, France
  3. CNRS, LASMEA, France
