Summary
Landmark-based navigation in unknown, unstructured environments is far from solved. The current bottleneck is the fast detection of reliable visual references in the image stream as the robot moves. In our research, we decoupled the navigation issues from this visual bottleneck by first using artificial landmarks that could be easily detected and identified. Once the navigation system was working, we developed a strategy to detect and track salient regions along image streams using only on-line pixel sampling. This strategy continuously updates the means and covariances of the salient regions, and creates, deletes, and merges regions according to the sample flow. Regions detected as salient can serve as potential landmarks for the navigation task.
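The sampling strategy described above can be illustrated with a minimal sketch. Here each salient region is modelled as a Gaussian over pixel coordinates whose mean and covariance are updated incrementally from salient samples; regions are created when a sample matches none, merged when their centres come close, and deleted when their support decays. All thresholds, the `saliency_fn` callback, and the class itself are hypothetical illustrations, not the chapter's actual implementation.

```python
import numpy as np

class SalientRegionTracker:
    """Sketch of sampling-based salient-region tracking (illustrative only).
    Each region: Gaussian over pixel coordinates, plus a support weight."""

    def __init__(self, merge_dist=3.0, match_dist=3.0, min_weight=0.5, decay=0.95):
        self.regions = []          # each: {"mean": (2,), "cov": (2,2), "weight": float}
        self.merge_dist = merge_dist   # Euclidean gate for merging region centres
        self.match_dist = match_dist   # Mahalanobis gate for sample-to-region matching
        self.min_weight = min_weight   # regions below this support are deleted
        self.decay = decay             # per-frame support decay

    def _mahalanobis(self, region, p):
        d = p - region["mean"]
        return float(np.sqrt(d @ np.linalg.inv(region["cov"]) @ d))

    def update(self, frame, saliency_fn, n_samples=200, threshold=0.5):
        h, w = frame.shape[:2]
        for r in self.regions:            # unobserved regions fade out
            r["weight"] *= self.decay
        ys = np.random.randint(0, h, n_samples)
        xs = np.random.randint(0, w, n_samples)
        for y, x in zip(ys, xs):
            if saliency_fn(frame, y, x) < threshold:
                continue                  # only salient samples drive the regions
            p = np.array([x, y], dtype=float)
            best = min(self.regions, key=lambda r: self._mahalanobis(r, p), default=None)
            if best is not None and self._mahalanobis(best, p) < self.match_dist:
                # incremental mean/covariance update with learning rate 1/weight
                best["weight"] += 1.0
                lr = 1.0 / best["weight"]
                d = p - best["mean"]
                best["mean"] += lr * d
                best["cov"] = (1 - lr) * best["cov"] + lr * np.outer(d, d)
            else:
                # unmatched salient sample seeds a new region
                self.regions.append({"mean": p, "cov": 25.0 * np.eye(2), "weight": 1.0})
        self._merge()
        self.regions = [r for r in self.regions if r["weight"] > self.min_weight]

    def _merge(self):
        merged = []
        for r in sorted(self.regions, key=lambda r: -r["weight"]):
            for m in merged:
                if np.linalg.norm(r["mean"] - m["mean"]) < self.merge_dist:
                    # fold the lighter region into the heavier one
                    wt = m["weight"] + r["weight"]
                    m["mean"] = (m["weight"] * m["mean"] + r["weight"] * r["mean"]) / wt
                    m["weight"] = wt
                    break
            else:
                merged.append(r)
        self.regions = merged
```

With, say, a brightness-threshold `saliency_fn`, repeated calls to `update` over an image stream would keep a small set of Gaussian regions locked onto the persistently salient areas, which could then be handed to the navigation layer as candidate landmarks.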
© 2008 Springer-Verlag Berlin Heidelberg
Cite this chapter
Celaya, E., Albarral, JL., Jiménez, P., Torras, C. (2008). Visually-Guided Robot Navigation: From Artificial to Natural Landmarks. In: Laugier, C., Siegwart, R. (eds) Field and Service Robotics. Springer Tracts in Advanced Robotics, vol 42. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-75404-6_27
Print ISBN: 978-3-540-75403-9
Online ISBN: 978-3-540-75404-6