
Definition and Extraction of Visual Landmarks for Indoor Robot Navigation

  • Conference paper

Methods and Applications of Artificial Intelligence (SETN 2002)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 2308)


Abstract

This paper presents a new method for defining and extracting visual landmarks for indoor navigation using a single camera. The approach treats navigation from point A to point B as navigation through intermediate positions, each signified by the recognition of local landmarks. To avoid the pose problem, we seek scene representations that rely on clustered corners of physical objects on corridor walls. These representations are scale- and translation-invariant and allow the construction of a metric that matches landmarks pre-detected during a learning phase against landmarks extracted from images captured at run time. The validity of our approach has been verified experimentally.
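The abstract does not spell out the corner detector, the clustering rule, or the matching metric, so the following is only a minimal sketch of the general idea: detect corners, group nearby corners into clusters, normalize each cluster so its coordinates are translation- and scale-independent, and compare clusters with a simple distance. OpenCV's goodFeaturesToTrack, the grouping threshold cluster_eps, and the helper names extract_corner_clusters, normalize_cluster, and cluster_distance are all assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch only: the paper's actual detector, clustering rule and
# matching metric are not reproduced here.
import cv2
import numpy as np

def extract_corner_clusters(gray, max_corners=200, cluster_eps=40.0):
    """Detect corners and group nearby ones into clusters (hypothetical grouping rule)."""
    pts = cv2.goodFeaturesToTrack(gray, maxCorners=max_corners,
                                  qualityLevel=0.01, minDistance=5)
    if pts is None:
        return []
    pts = pts.reshape(-1, 2)
    clusters = []
    # Naive single-pass grouping: attach a corner to the first cluster whose
    # centroid is closer than cluster_eps pixels, otherwise start a new cluster.
    for p in pts:
        for c in clusters:
            if np.linalg.norm(p - np.mean(c, axis=0)) < cluster_eps:
                c.append(p)
                break
        else:
            clusters.append([p])
    return [np.array(c) for c in clusters if len(c) >= 3]

def normalize_cluster(cluster):
    """Translation invariance: subtract the centroid; scale invariance: divide by spread."""
    centered = cluster - cluster.mean(axis=0)
    spread = np.linalg.norm(centered, axis=1).mean()
    return centered / (spread + 1e-9)

def cluster_distance(a, b):
    """Crude symmetric nearest-point distance between two normalized clusters."""
    d_ab = np.mean([np.min(np.linalg.norm(b - p, axis=1)) for p in a])
    d_ba = np.mean([np.min(np.linalg.norm(a - q, axis=1)) for q in b])
    return 0.5 * (d_ab + d_ba)
```

Under these assumptions, a landmark seen at run time would be assigned to the stored (learning-phase) landmark that minimizes cluster_distance; the metric the authors actually construct may differ.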




Copyright information

© 2002 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Kosmopoulos, D.I., Chandrinos, K.V. (2002). Definition and Extraction of Visual Landmarks for Indoor Robot Navigation. In: Vlahavas, I.P., Spyropoulos, C.D. (eds) Methods and Applications of Artificial Intelligence. SETN 2002. Lecture Notes in Computer Science, vol 2308. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-46014-4_36

  • DOI: https://doi.org/10.1007/3-540-46014-4_36

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-43472-6

  • Online ISBN: 978-3-540-46014-5

  • eBook Packages: Springer Book Archive
