
Relational Macros for Transfer in Reinforcement Learning

  • Conference paper
Inductive Logic Programming (ILP 2007)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 4894)

Abstract

We describe an application of inductive logic programming to transfer learning. Transfer learning is the use of knowledge learned in a source task to improve learning in a related target task. The tasks we work with are in reinforcement-learning domains. Our approach transfers relational macros, which are finite-state machines in which the transition conditions and the node actions are represented by first-order logical clauses. We use inductive logic programming to learn a macro that characterizes successful behavior in the source task, and then use the macro for decision-making in the early learning stages of the target task. Through experiments in the RoboCup simulated soccer domain, we show that Relational Macro Transfer via Demonstration (RMT-D) from a source task can provide a substantial head start in the target task.
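
The abstract describes a relational macro as a finite-state machine whose node actions and transition conditions are first-order clauses learned with ILP, followed by a demonstration period in which the target-task learner follows the macro. The Python sketch below only illustrates that structure under simplifying assumptions: the node names, the distance threshold, the propositional (rather than first-order) conditions, and the `env`/`demonstrate` helpers are hypothetical and not taken from the paper.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Optional, Tuple

State = Dict[str, float]  # simplified world state: named numeric features


@dataclass
class MacroNode:
    """One node of the macro; its rule picks an action for the current state."""
    name: str
    action_rule: Callable[[State], Optional[str]]


@dataclass
class RelationalMacro:
    """Finite-state machine: nodes choose actions, transitions switch nodes."""
    nodes: Dict[str, MacroNode]
    transitions: List[Tuple[str, Callable[[State], bool], str]]
    start: str
    current: str = ""

    def reset(self) -> None:
        self.current = self.start

    def step(self, state: State) -> Optional[str]:
        # Take the first transition whose condition fires, then act in that node.
        for src, cond, dst in self.transitions:
            if src == self.current and cond(state):
                self.current = dst
                break
        return self.nodes[self.current].action_rule(state)


# Hypothetical two-node macro for a keep-away-style task: hold the ball while
# no opponent is near, otherwise pass (stand-ins for learned first-order clauses).
macro = RelationalMacro(
    nodes={
        "hold": MacroNode("hold", lambda s: "hold_ball"),
        "pass": MacroNode("pass", lambda s: "pass_to_open_teammate"),
    },
    transitions=[
        ("hold", lambda s: s["nearest_opponent_dist"] < 5.0, "pass"),
        ("pass", lambda s: s["nearest_opponent_dist"] >= 5.0, "hold"),
    ],
    start="hold",
)


def demonstrate(macro: RelationalMacro, env, episodes: int) -> None:
    """Demonstration period in the target task: follow the macro for the first
    few episodes before handing control back to standard RL. The `env` API used
    here (reset()/step() returning (state, done)) is assumed, not real."""
    for _ in range(episodes):
        macro.reset()
        state, done = env.reset(), False
        while not done:
            state, done = env.step(macro.step(state))
```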

Editor information

Hendrik Blockeel, Jan Ramon, Jude Shavlik, Prasad Tadepalli

Copyright information

© 2008 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Torrey, L., Shavlik, J., Walker, T., Maclin, R. (2008). Relational Macros for Transfer in Reinforcement Learning. In: Blockeel, H., Ramon, J., Shavlik, J., Tadepalli, P. (eds) Inductive Logic Programming. ILP 2007. Lecture Notes in Computer Science, vol 4894. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-78469-2_25

  • DOI: https://doi.org/10.1007/978-3-540-78469-2_25

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-78468-5

  • Online ISBN: 978-3-540-78469-2

  • eBook Packages: Computer Science (R0)
