Abstract
When the number of possible moves in each state of a game becomes very high, standard methods for computer game playing are no longer feasible. We present an approach for learning to play such a game from human expert games. The high complexity of the action space is dealt with by collapsing the very large set of allowable actions into a small set of categories according to their semantic intent, while the complexity of the state space is handled by representing the states of collections of pieces by a few relevant features in a location-independent way. The state-action mappings implicit in the expert games are then learnt using neural networks. Experiments compare this approach to methods that have previously been applied to this domain.
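The abstract's three ideas can be illustrated with a small sketch: concrete moves are collapsed into a few semantic categories, a collection of pieces is summarised by a few location-independent features, and the expert's state-to-category mapping is learnt from examples. This is not the paper's code; all names, the toy data, and the use of a multiclass perceptron as a stand-in for the neural network are assumptions made for illustration only.

```python
# Illustrative sketch (assumed names/data, not the paper's implementation):
# 1) action abstraction, 2) location-independent state features,
# 3) a simple linear learner in place of the paper's neural network.

def action_category(move):
    """Collapse a concrete move (toy dict) into one of three semantic intents."""
    if move["target"] == "enemy":
        return 0  # "attack"
    if move["distance"] > 0:
        return 1  # "advance"
    return 2      # "hold"

def state_features(pieces):
    """Summarise a collection of pieces by a few relative, location-free features."""
    n = len(pieces)
    avg_strength = sum(p["strength"] for p in pieces) / n
    frac_threatened = sum(p["threatened"] for p in pieces) / n
    return [avg_strength, frac_threatened, 1.0]  # constant bias term

def predict(w, x):
    """Pick the category whose weight row scores the feature vector highest."""
    scores = [sum(wi * xi for wi, xi in zip(row, x)) for row in w]
    return max(range(3), key=scores.__getitem__)

def train(examples, epochs=1000, lr=0.1):
    """Multiclass perceptron over expert (feature-vector, category) pairs."""
    w = [[0.0] * 3 for _ in range(3)]  # one weight row per category
    for _ in range(epochs):
        for x, y in examples:
            pred = predict(w, x)
            if pred != y:  # nudge scores toward the expert's chosen category
                for j in range(3):
                    w[y][j] += lr * x[j]
                    w[pred][j] -= lr * x[j]
    return w
```

Trained on a handful of toy expert decisions (for instance, a strong untroubled force that attacked, a weak threatened force that held), the learner recovers the expert's category choice for those states; the paper's actual experiments use neural networks on human expert games rather than this linear stand-in.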
Copyright information
© 2002 Springer-Verlag Berlin Heidelberg
Cite this paper
Kråkenes, T., Halck, O.M. (2002). Learning to Play a Highly Complex Game from Human Expert Games. In: Elomaa, T., Mannila, H., Toivonen, H. (eds) Machine Learning: ECML 2002. Lecture Notes in Computer Science, vol 2430. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-36755-1_18
Print ISBN: 978-3-540-44036-9
Online ISBN: 978-3-540-36755-0