Abstract
We describe an application of inductive logic programming to transfer learning. Transfer learning is the use of knowledge learned in a source task to improve learning in a related target task. The tasks we work with are in reinforcement-learning domains. Our approach transfers relational macros, which are finite-state machines in which the transition conditions and the node actions are represented by first-order logical clauses. We use inductive logic programming to learn a macro that characterizes successful behavior in the source task, and then use the macro for decision-making in the early learning stages of the target task. Through experiments in the RoboCup simulated soccer domain, we show that Relational Macro Transfer via Demonstration (RMT-D) from a source task can provide a substantial head start in the target task.
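The macro structure described above can be sketched in code: a finite-state machine whose transitions and per-node action choices fire on relational conditions evaluated against the current state. This is a minimal illustrative sketch, not the paper's implementation; the class names, the dictionary-based state, and the KeepAway-style predicates (`opponent_close`, `open_teammate`) are all hypothetical stand-ins for the first-order clauses Aleph would learn.

```python
# Sketch of a relational macro: a finite-state machine in which both the
# transition conditions and the node actions are selected by (here,
# propositionalized) relational rules. All names are illustrative.

class MacroNode:
    def __init__(self, name, action_rules):
        self.name = name
        # action_rules: ordered (condition, action) pairs; the first
        # condition that holds in the current state selects the action.
        self.action_rules = action_rules


class RelationalMacro:
    def __init__(self, nodes, transitions, start):
        self.nodes = nodes              # node name -> MacroNode
        self.transitions = transitions  # (src, dst) -> condition on state
        self.current = start

    def step(self, state):
        # Follow the first outgoing transition whose condition holds...
        for (src, dst), cond in self.transitions.items():
            if src == self.current and cond(state):
                self.current = dst
                break
        # ...then pick an action using the current node's rules.
        for cond, action in self.nodes[self.current].action_rules:
            if cond(state):
                return action(state)
        return None


# Toy two-node macro: hold the ball until an opponent closes in, then pass.
hold = MacroNode("hold", [(lambda s: True, lambda s: "hold_ball")])
kick = MacroNode("pass", [(lambda s: s["open_teammate"],
                           lambda s: f"pass_to({s['open_teammate']})")])
macro = RelationalMacro(
    nodes={"hold": hold, "pass": kick},
    transitions={("hold", "pass"): lambda s: s["opponent_close"]},
    start="hold")

print(macro.step({"opponent_close": False, "open_teammate": None}))  # hold_ball
print(macro.step({"opponent_close": True, "open_teammate": "t2"}))   # pass_to(t2)
```

In the paper's actual setting the conditions are first-order logical clauses learned by ILP from successful source-task episodes, and the macro demonstrates behavior during early target-task learning rather than replacing the learned policy.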
© 2008 Springer-Verlag Berlin Heidelberg
Cite this paper
Torrey, L., Shavlik, J., Walker, T., Maclin, R. (2008). Relational Macros for Transfer in Reinforcement Learning. In: Blockeel, H., Ramon, J., Shavlik, J., Tadepalli, P. (eds) Inductive Logic Programming. ILP 2007. Lecture Notes in Computer Science, vol 4894. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-78469-2_25
Print ISBN: 978-3-540-78468-5
Online ISBN: 978-3-540-78469-2