Vision-Based Navigation Strategies

  • Darius Burschka
Part of the Lecture Notes in Electrical Engineering book series (LNEE, volume 8)

In this chapter, we focus on the localization task using a video camera mounted on a mobile system. Localization poses several challenges. The first is the accurate estimation of the 3D pose parameters from the available sensor data. Another is performing localization in situations where the reference points, or landmarks as we will call them in the following text, are not known a priori and must be estimated in parallel with the localization process. We propose systems that are capable of simultaneous localization of the camera and navigation relative to obstacles in the world.
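One classical building block for the pose-estimation challenge mentioned above is the least-squares alignment of two corresponding 3D point sets (the SVD-based method of Arun, Huang, and Blostein). The sketch below is illustrative only, not the chapter's own algorithm: it assumes the landmarks are already known in world coordinates and have been observed (e.g., via stereo) in the camera frame, and recovers the rigid transform between the two frames.

```python
import numpy as np

def estimate_pose(landmarks_world, landmarks_cam):
    """Least-squares rigid pose (R, t) mapping world-frame landmarks onto
    their camera-frame observations, following the SVD method of
    Arun et al. Both inputs are (N, 3) arrays of corresponding points."""
    cw = landmarks_world.mean(axis=0)   # centroid in the world frame
    cc = landmarks_cam.mean(axis=0)     # centroid in the camera frame
    # Cross-covariance matrix of the centered point sets
    H = (landmarks_world - cw).T @ (landmarks_cam - cc)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) in the recovered rotation
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cc - R @ cw
    return R, t
```

With at least three non-collinear landmark correspondences the transform is determined uniquely; in practice the fit is applied to many noisy correspondences, where the least-squares formulation averages out measurement error.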


Keywords: Mobile Robot · Obstacle Avoidance · Sensor Reading · Visual Servoing · Reference View





Copyright information

© Springer Science+Business Media, LLC 2008

Authors and Affiliations

  • Darius Burschka, Department of Computer Science, Technische Universität München, Germany
