Abstract
This chapter introduces the basic concepts of autonomous navigation for mobile robots and the utility of vision as the sensing mechanism for achieving the desired objectives. It discusses the broad categories of vision-based navigation in indoor and outdoor environments, introduces the prominent research directions in this context, and presents the main modalities of obstacle detection and avoidance.
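To give a concrete flavor of one obstacle-detection modality mentioned above, the following is a minimal, illustrative sketch (not from the chapter itself) in the spirit of monocular appearance-based methods: a reference patch at the bottom of the image, assumed to show unobstructed ground directly in front of the robot, supplies a color model, and pixels deviating strongly from it are flagged as potential obstacles. All function names, thresholds, and the synthetic test image are hypothetical choices for this sketch.

```python
# Illustrative sketch only: appearance-based obstacle detection from a
# single color image, assuming the bottom rows of the frame show free ground.
import numpy as np

def detect_obstacles(image: np.ndarray, patch_rows: int = 10,
                     threshold: float = 40.0) -> np.ndarray:
    """Return a boolean mask where True marks likely obstacle pixels.

    image: H x W x 3 uint8 array; the bottom `patch_rows` rows are
    assumed to be unobstructed ground (a hypothetical assumption).
    """
    ground = image[-patch_rows:].reshape(-1, 3).astype(float)
    mean = ground.mean(axis=0)                     # mean ground color
    dist = np.linalg.norm(image.astype(float) - mean, axis=2)
    return dist > threshold                        # far from ground color => obstacle

# Synthetic test: a gray floor with a bright red box in the middle.
img = np.full((120, 160, 3), 128, dtype=np.uint8)
img[40:70, 60:100] = (255, 0, 0)                   # the "obstacle"
mask = detect_obstacles(img)
print(mask[50, 80], mask[110, 80])                 # True False
```

Real systems of this kind (e.g., color-histogram segmentation) add temporal filtering and adaptive color models; this sketch only shows the core idea of classifying pixels against a ground appearance model.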
© 2013 Springer-Verlag Berlin Heidelberg
Cite this chapter
Chatterjee, A., Rakshit, A., Singh, N.N. (2013). Mobile Robot Navigation. In: Vision Based Autonomous Robot Navigation. Studies in Computational Intelligence, vol 455. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-33965-3_1
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-33964-6
Online ISBN: 978-3-642-33965-3
eBook Packages: Engineering (R0)