
Supervised reinforcement learning: Application to a wall following behaviour in a mobile robot

  • Conference paper
Tasks and Methods in Applied Artificial Intelligence (IEA/AIE 1998)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 1416)

Abstract

In this work we describe the design of a control approach in which supervised reinforcement learning combines the potential of learning with prior knowledge of the task at hand, resulting in rapid convergence to the desired behaviour as well as increased stability of the learning process. We have tested our approach in the design of a basic behaviour pattern in mobile robotics, that of wall following. We have carried out several experiments, obtaining good results which confirm the utility and advantages derived from the use of our approach.
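The abstract does not reproduce the algorithm itself, so the following is only a generic illustration of the idea of supervised reinforcement learning: a tabular Q-learner whose exploration is biased toward a hand-coded supervisor policy encoding prior task knowledge. All names, state/action/reward definitions, and parameter values here are invented for this sketch and are not taken from the paper.

```python
import random

random.seed(0)  # deterministic run for this illustration

# Toy wall-following setup: 8 coarse wall-distance states,
# 3 actions (0 = turn toward wall, 1 = straight, 2 = turn away).
N_STATES, N_ACTIONS = 8, 3
ALPHA, GAMMA = 0.2, 0.9

Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

def supervisor(state):
    """Prior knowledge: steer toward the desired-distance band (states 3-4)."""
    if state < 3:
        return 2          # too close to the wall: turn away
    if state > 4:
        return 0          # too far: turn toward the wall
    return 1              # inside the band: go straight

def choose_action(state, trust):
    """With probability `trust`, follow the supervisor; otherwise exploit Q."""
    if random.random() < trust:
        return supervisor(state)
    return max(range(N_ACTIONS), key=lambda a: Q[state][a])

def step(state, action):
    """Toy dynamics: the action shifts the distance state; reward peaks in band."""
    drift = action - 1                            # -1, 0, +1
    next_state = min(N_STATES - 1, max(0, state + drift))
    reward = 1.0 if 3 <= next_state <= 4 else -0.1
    return next_state, reward

def train(episodes=200, steps=30):
    trust = 0.9                                   # rely heavily on the supervisor at first
    for _ in range(episodes):
        s = random.randrange(N_STATES)
        for _ in range(steps):
            a = choose_action(s, trust)
            s2, r = step(s, a)
            # Standard Q-learning update; the supervisor only biases exploration.
            Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
            s = s2
        trust = max(0.05, trust * 0.98)           # hand control over as learning proceeds

train()
```

The supervisor accelerates convergence by steering early exploration toward sensible actions, while the decaying `trust` schedule lets the learned value function take over once it is reliable.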

This work has been made possible thanks to Xunta de Galicia, project XUGA20608B, and to the availability of a Nomad200 mobile robot acquired through an infrastructure project funded by Xunta de Galicia.




Editor information

Angel Pasqual del Pobil, José Mira, Moonis Ali


Copyright information

© 1998 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Iglesias, R., Regueiro, C.V., Correa, J., Barro, S. (1998). Supervised reinforcement learning: Application to a wall following behaviour in a mobile robot. In: Pasqual del Pobil, A., Mira, J., Ali, M. (eds) Tasks and Methods in Applied Artificial Intelligence. IEA/AIE 1998. Lecture Notes in Computer Science, vol 1416. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-64574-8_416


  • DOI: https://doi.org/10.1007/3-540-64574-8_416


  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-64574-0

  • Online ISBN: 978-3-540-69350-5

  • eBook Packages: Springer Book Archive
