Abstract
A general game player is an agent capable of taking as input a description of a game’s rules in a formal language and proceeding to play without any subsequent human input. To do well, such an agent should learn from experience with past games and transfer the learned knowledge to new problems. We introduce a graph-based method for identifying previously encountered games and formally prove its robustness. We then describe how the same basic approach can be used to identify similar but non-identical games. We apply this technique to automate domain mapping for value function transfer and to speed up reinforcement learning on variants of previously played games. Our approach is fully implemented, and we present empirical results in a general game playing system.
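The core idea of graph-based game identification is to encode a game's rules as a graph and test whether two games are structurally the same up to a renaming of symbols, i.e., whether their rule graphs are isomorphic. The sketch below is a minimal illustration of that idea, not the authors' actual rule-graph construction: the toy "rule graphs" and the brute-force isomorphism check (feasible only for very small graphs) are assumptions for demonstration purposes.

```python
from itertools import permutations

def isomorphic(g1, g2):
    """Brute-force directed-graph isomorphism test for small graphs.

    g1, g2: dicts mapping each node to the set of its successor nodes.
    Returns True iff some bijection between the node sets maps the
    edge set of g1 exactly onto the edge set of g2.
    """
    nodes1, nodes2 = sorted(g1), sorted(g2)
    if len(nodes1) != len(nodes2):
        return False
    edges1 = {(u, v) for u in g1 for v in g1[u]}
    edges2 = {(u, v) for u in g2 for v in g2[u]}
    if len(edges1) != len(edges2):
        return False
    # Try every bijection; accept if it carries edges1 onto edges2.
    for perm in permutations(nodes2):
        m = dict(zip(nodes1, perm))
        if all((m[u], m[v]) in edges2 for (u, v) in edges1):
            return True
    return False

# Two tiny hypothetical "rule graphs": the same dependency structure
# under renamed symbols, plus a structurally different game.
tictactoe = {"cell": {"line"}, "line": {"goal"}, "goal": set()}
renamed   = {"sq":   {"row"},  "row":  {"win"},  "win":  set()}
different = {"a": {"b", "c"}, "b": set(), "c": set()}

print(isomorphic(tictactoe, renamed))    # same game, symbols renamed
print(isomorphic(tictactoe, different))  # structurally distinct
```

A practical system would replace the brute-force search with a scalable matcher (e.g., a VF2-style algorithm) and would derive the graphs from the formal rule description rather than hand-coding them, but the identification criterion, isomorphism of rule graphs, is the same.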
© 2007 Springer-Verlag Berlin Heidelberg
Kuhlmann, G., Stone, P. (2007). Graph-Based Domain Mapping for Transfer Learning in General Games. In: Kok, J.N., Koronacki, J., Mantaras, R.L.d., Matwin, S., Mladenič, D., Skowron, A. (eds) Machine Learning: ECML 2007. Lecture Notes in Computer Science, vol 4701. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-74958-5_20
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-74957-8
Online ISBN: 978-3-540-74958-5