
Qualitative Transfer for Reinforcement Learning with Continuous State and Action Spaces

  • Esteban O. Garcia
  • Enrique Munoz de Cote
  • Eduardo F. Morales
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8258)

Abstract

In this work we present a novel approach for transferring knowledge between reinforcement learning tasks with continuous states and actions, where the transition and policy functions are approximated by Gaussian Processes (GPs). The novelty of the proposed approach lies in transferring qualitative knowledge between tasks: we reuse the hyper-parameters of the GP that represents the state transition function in the source task, which encode qualitative knowledge about the type of transition function the target task is likely to have. We show that the proposed technique constrains the hyper-parameter search space and thereby accelerates learning. We performed experiments varying the relevance given to the hyper-parameters transferred from the source task to the target task and show, in general, a clear improvement in overall performance compared to a state-of-the-art reinforcement learning algorithm for continuous state and action spaces without transfer.
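To make the transfer mechanism concrete, below is a minimal sketch (not the authors' code) of the idea described in the abstract, using scikit-learn GPs in place of the paper's GP transition models: a GP is fit to hypothetical source-task transition data, its learned kernel hyper-parameters (signal variance and length-scales) are extracted, and the target-task GP's hyper-parameter search is initialized at and bounded around those values. All data, names, and the bound-widening factor are illustrative assumptions, not details taken from the paper.

# Minimal sketch: transferring GP kernel hyper-parameters learned on a source
# task to initialize and constrain the hyper-parameter search on a target task.
# scikit-learn GPs with an RBF kernel stand in for the paper's GP transition
# models; the data sets below are synthetic placeholders.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel as C

rng = np.random.default_rng(0)

# Hypothetical source-task transitions: inputs are (state, action) tuples,
# outputs are one next-state dimension.
source_X = rng.uniform(-1.0, 1.0, size=(200, 3))
source_y = np.sin(source_X[:, 0]) + 0.5 * source_X[:, 2] + 0.01 * rng.standard_normal(200)

# Learn the source transition model with a broad hyper-parameter search.
source_kernel = C(1.0, (1e-3, 1e3)) * RBF(length_scale=np.ones(3),
                                          length_scale_bounds=(1e-2, 1e2))
gp_source = GaussianProcessRegressor(kernel=source_kernel,
                                     n_restarts_optimizer=10,
                                     normalize_y=True).fit(source_X, source_y)

# Extract the learned hyper-parameters (signal variance and length-scales);
# they encode qualitative knowledge about the shape of the transition function.
src_signal = gp_source.kernel_.k1.constant_value
src_lengths = gp_source.kernel_.k2.length_scale

# Target task: fewer samples. Start the hyper-parameter search at the source
# values and keep the optimization bounds narrow around them, constraining the
# search space. The factor 10.0 is an illustrative choice, not from the paper.
target_X = rng.uniform(-1.0, 1.0, size=(30, 3))
target_y = 1.2 * np.sin(target_X[:, 0]) + 0.4 * target_X[:, 2] + 0.01 * rng.standard_normal(30)

spread = 10.0
target_kernel = (C(src_signal, (src_signal / spread, src_signal * spread))
                 * RBF(length_scale=src_lengths,
                       length_scale_bounds=(src_lengths.min() / spread,
                                            src_lengths.max() * spread)))
gp_target = GaussianProcessRegressor(kernel=target_kernel,
                                     n_restarts_optimizer=2,
                                     normalize_y=True).fit(target_X, target_y)

print("source length-scales:", np.round(src_lengths, 3))
print("target length-scales:", np.round(gp_target.kernel_.k2.length_scale, 3))

Narrowing the bounds rather than fixing the transferred hyper-parameters outright is one way to vary how strongly the source knowledge constrains the target model, loosely mirroring the paper's experiments on the relevance given to the transferred hyper-parameters.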

Keywords

Transfer learning · Reinforcement learning · Gaussian Processes · Hyper-parameters


Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Esteban O. Garcia (1)
  • Enrique Munoz de Cote (1)
  • Eduardo F. Morales (1)

  1. Instituto Nacional de Astrofísica, Óptica y Electrónica, Puebla, México
