Combination of Geometrical and Statistical Methods for Visual Navigation of Autonomous Robots

  • Naoya Ohnishi
  • Atsushi Imiya
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5604)

Abstract

For visual navigation of an autonomous robot, detecting a collision-free direction from an image or image sequence captured by an imaging system mounted on the robot is a fundamental task. This collision-free direction provides the next view to which the robot directs its attention for computing the subsequent collision-free direction. Therefore, the robot requires a cyclic mechanism that directs attention to a view and computes the collision-free direction from that view. We combine a geometric method for free-space detection with a statistical method for visual navigation of the mobile robot. First, we present a random-sampling-based method for the detection of free space. Second, we present a statistical method for the computation of the collision-avoiding direction. The robot finds free space using the visual potential defined from a series of views captured by a monocular camera system mounted on the robot to observe the scene in front of it. We examine the statistical properties of the gradient field of the visual potential and show that the principal component of this gradient field yields the attention direction of the mobile robot for collision avoidance. Experimental results of navigating the mobile robot in synthetic and real environments are presented.
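The abstract gives no formulas, but its central step, taking the first principal component of the visual-potential gradient field as the attention direction, can be sketched compactly. The following Python/NumPy fragment is a minimal illustration only, not the authors' implementation: the toy Gaussian potential stands in for the paper's visual potential (which is built from views captured by the robot's camera), and the image size and function name are assumptions made for the example.

```python
import numpy as np

def principal_gradient_direction(potential: np.ndarray) -> np.ndarray:
    """First principal component of the gradient field of a 2-D
    potential, returned as a unit 2-vector (illustrative sketch)."""
    # Gradient field: one 2-vector (gx, gy) per pixel.
    gy, gx = np.gradient(potential)
    G = np.stack([gx.ravel(), gy.ravel()], axis=1)   # (N, 2) samples
    # Principal component = eigenvector of the 2x2 sample covariance
    # with the largest eigenvalue.
    cov = np.cov(G, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)           # ascending order
    direction = eigvecs[:, np.argmax(eigvals)]
    return direction / np.linalg.norm(direction)

# Toy stand-in for the visual potential: a smooth bump around one
# obstacle in a 64x64 view (an assumption for this example).
y, x = np.mgrid[0:64, 0:64]
potential = np.exp(-((x - 40) ** 2 + (y - 32) ** 2) / 200.0)
print(principal_gradient_direction(potential))
```

Note that an eigenvector fixes only an axis, not a signed heading; a real controller would still have to choose the sign that points toward free space rather than toward the obstacle.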

Keywords

Mobile Robot · Optical Flow · Ground Plane · Error Ratio · Robot Navigation


Copyright information

© Springer-Verlag Berlin Heidelberg 2009

Authors and Affiliations

  • Naoya Ohnishi (1)
  • Atsushi Imiya (2)
  1. School of Science and Technology, Chiba University, Japan
  2. Institute of Media and Information Technology, Chiba University, Chiba, Japan
