A Gaussian Process Reinforcement Learning Algorithm with Adaptability and Minimal Tuning Requirements

  • Conference paper
Artificial Neural Networks and Machine Learning – ICANN 2014

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 8681)

Abstract

We present a novel Bayesian reinforcement learning algorithm that addresses the problems of model bias and exploration overhead. The algorithm combines aspects of several state-of-the-art reinforcement learning methods that use Gaussian Process models to make fuller use of online data samples. It employs a smooth reward function whose value is derived directly from the environment state, and it handles continuous states and actions in a coherent way while minimizing the need for expert knowledge in parameter tuning. We analyse and discuss the practical benefits of the selected approach in comparison to more traditional methodological choices, and illustrate the use of the algorithm in a motor control problem involving a simulated two-link arm.
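
To make the abstract's two key ingredients concrete, the sketch below pairs a Gaussian Process model fitted to online transition samples with a smooth reward computed directly from the state. This is a minimal illustration, not the authors' algorithm: the saturating exponential reward (a PILCO-style choice, cf. Deisenroth and Rasmussen), scikit-learn's GaussianProcessRegressor, and the toy one-dimensional dynamics are all assumptions introduced here.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

def smooth_reward(state, target, width=0.5):
    # Saturating exponential reward, smooth in the state. Assumption:
    # a PILCO-style choice, not necessarily the paper's exact function.
    d2 = np.sum((state - target) ** 2, axis=-1)
    return np.exp(-d2 / (2.0 * width ** 2))

# Toy 1-D dynamics standing in for the environment (hypothetical).
def step(s, a):
    return 0.9 * s + 0.1 * a + 0.01 * rng.standard_normal(s.shape)

# Online transition samples (s, a) -> s'.
S = rng.uniform(-1.0, 1.0, size=(50, 1))
A = rng.uniform(-1.0, 1.0, size=(50, 1))
S_next = step(S, A)

# GP dynamics model; kernel hyperparameters are set by marginal-likelihood
# optimisation during fitting, which is what keeps hand-tuning small.
gp = GaussianProcessRegressor(
    kernel=RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-3),
    normalize_y=True,
)
gp.fit(np.hstack([S, A]), S_next.ravel())

# Model-based action selection: one-step lookahead, scoring candidate
# actions by the smooth reward of the GP's predicted next state.
s = np.full((21, 1), 0.8)                      # current state, tiled
candidates = np.linspace(-1.0, 1.0, 21).reshape(-1, 1)
pred_next = gp.predict(np.hstack([s, candidates])).reshape(-1, 1)
best_a = candidates[np.argmax(smooth_reward(pred_next, target=np.zeros(1)))]
print("greedy action from the learned model:", best_a)
```

Letting the GP's marginal-likelihood optimisation set the kernel hyperparameters corresponds, loosely, to the minimal-tuning property the abstract claims; a full algorithm would replace the one-step lookahead with multi-step policy optimisation under the model.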




Copyright information

© 2014 Springer International Publishing Switzerland

Cite this paper

Strahl, J., Honkela, T., Wagner, P. (2014). A Gaussian Process Reinforcement Learning Algorithm with Adaptability and Minimal Tuning Requirements. In: Wermter, S., et al. Artificial Neural Networks and Machine Learning – ICANN 2014. ICANN 2014. Lecture Notes in Computer Science, vol 8681. Springer, Cham. https://doi.org/10.1007/978-3-319-11179-7_47

  • DOI: https://doi.org/10.1007/978-3-319-11179-7_47

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-11178-0

  • Online ISBN: 978-3-319-11179-7

  • eBook Packages: Computer Science (R0)
