
Angle-Based Navigation Using the 1D Trifocal Tensor

  • Miguel Aranda
  • Gonzalo López-Nicolás
  • Carlos Sagüés
Chapter
Part of the Advances in Industrial Control book series (AIC)

Abstract

The first problem addressed in the monograph is how to enable mobile robots to navigate autonomously toward specific positions in an environment. Vision sensors have often been used for this purpose, supporting a behavior known as visual homing, in which the robot’s target location is defined by an image. This chapter describes a novel visual homing methodology for robots moving in a planar environment. The visual information employed consists of a set of omnidirectional images previously acquired at different locations in the environment (including the goal position), together with the current image taken by the robot. One contribution is an algorithm that computes the relative angles between all these locations by estimating the 1D trifocal tensor between views and applying an indirect angle estimation procedure. The tensor is particularly well suited to planar motion and lends the technique important robustness properties. A further contribution is a novel control law that drives the robot to the goal using only these angles, with no range information involved. The method thus exploits the strengths of omnidirectional vision, which provides a wide field of view and very precise angular information. The chapter includes a formal proof of the stability of the proposed control law, and the performance of the navigation method is illustrated through simulations and several sets of experiments with real images captured by cameras on board mobile robotic platforms.
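
To make the underlying geometry concrete: under planar motion, each image feature reduces to a bearing angle θ, i.e., a 1D projective point u = (cos θ, sin θ). Three views of the same landmark satisfy the trilinear constraint Σ_ijk T_ijk u_i v_j w_k = 0, where T is the 2×2×2 1D trifocal tensor, so each point correspondence yields one linear equation in the eight tensor entries, and seven or more correspondences determine T up to scale. The Python sketch below shows this standard linear estimation step; the function name and the least-squares formulation are illustrative choices, not the chapter’s implementation.

    import numpy as np

    def estimate_1d_trifocal_tensor(theta1, theta2, theta3):
        """Estimate the 1D trifocal tensor (up to scale) from N >= 7
        matched bearing angles (radians) observed in three views of a
        planar scene. Returns a 2x2x2 array T satisfying the trilinear
        constraint sum_ijk T[i,j,k]*u_i*v_j*w_k = 0 in least squares."""
        u = np.column_stack([np.cos(theta1), np.sin(theta1)])  # N x 2
        v = np.column_stack([np.cos(theta2), np.sin(theta2)])
        w = np.column_stack([np.cos(theta3), np.sin(theta3)])
        # One linear equation per correspondence in the 8 unknowns T_ijk.
        A = np.einsum('ni,nj,nk->nijk', u, v, w).reshape(len(u), 8)
        # Null vector of A: the right singular vector associated with the
        # smallest singular value (least-squares solution under ||T|| = 1).
        _, _, Vt = np.linalg.svd(A)
        return Vt[-1].reshape(2, 2, 2)

The second ingredient is range-free control. The chapter develops and proves stability for its own control law based on the angles recovered from the tensor; purely as an illustration of how a unicycle robot can be driven from bearing information alone, a common angle-only scheme looks as follows (the gains and structure are hypothetical, and the snippet continues the one above):

    def bearing_only_step(goal_bearing, k_v=0.3, k_w=1.0):
        """Illustrative angle-only unicycle step (not the chapter's
        control law; gains k_v, k_w are hypothetical). goal_bearing is
        the goal direction in the robot frame, in radians, as produced
        by an angle estimation stage; no distance to the goal is used.
        Returns (v, omega): forward and angular velocity commands."""
        v = k_v * max(0.0, np.cos(goal_bearing))  # reduce forward speed when misaligned
        omega = k_w * goal_bearing                # rotate to cancel the bearing error
        return v, omega

In such a scheme the forward speed is modulated only by alignment with the goal bearing, so no range estimate is ever required, which is precisely the property the chapter’s control law exploits.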

Keywords

Ground plane · Goal location · Relative angle · Goal position · Point correspondence


Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Miguel Aranda (1)
  • Gonzalo López-Nicolás (2)
  • Carlos Sagüés (2)
  1. ISPR, SIGMA Clermont, Institut Pascal, Aubière, France
  2. Instituto de Investigación en Ingeniería de Aragón, Universidad de Zaragoza, Zaragoza, Spain
