Search, Abstractions and Learning in Real-Time Strategy Games

A Dissertation Summary
  • Nicolas A. Barriga
Dissertation and Habilitation Abstracts

Abstract

Real-time strategy games' large state and action spaces pose a significant hurdle to traditional AI techniques. We propose decomposing the game into sub-problems and integrating the partial solutions into action scripts that can serve as abstract actions for a search or machine-learning algorithm. The resulting high-level algorithm makes sound strategic choices and can then be combined with a low-level search algorithm to refine tactical choices. We show strong results in SparCraft, StarCraft: Brood War, and μRTS against state-of-the-art agents. We expect advances in RTS AI to be applicable to commercial video games for playtesting and game balancing, while also having possible real-world applications.
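The core idea above, searching over a handful of hand-written action scripts instead of the raw unit-command space, can be illustrated with a minimal sketch. Everything here (the toy state, the three scripts, the evaluation function) is a hypothetical stand-in for illustration, not the dissertation's actual implementation, and the look-ahead is single-player rather than the adversarial search used in the thesis.

```python
# Sketch of "scripts as abstract actions": the search layer chooses among a
# few scripts and evaluates each choice by forward simulation, collapsing an
# enormous low-level action space into a branching factor of three.
from dataclasses import dataclass

@dataclass
class ToyState:
    my_army: int
    enemy_army: int
    my_economy: int

# Each script maps a state to a successor state, abstracting away the many
# low-level commands it would issue in a real RTS game.
def rush(s: ToyState) -> ToyState:
    # Attack immediately: trade armies, favoring the attacker.
    loss = min(s.my_army, s.enemy_army)
    return ToyState(s.my_army - loss // 2, s.enemy_army - loss, s.my_economy)

def boom(s: ToyState) -> ToyState:
    # Grow the economy while the enemy builds up.
    return ToyState(s.my_army, s.enemy_army + 1, s.my_economy + 3)

def build_army(s: ToyState) -> ToyState:
    # Convert economy into units.
    gain = s.my_economy // 2
    return ToyState(s.my_army + gain, s.enemy_army + 1, s.my_economy - gain)

SCRIPTS = {"rush": rush, "boom": boom, "build_army": build_army}

def evaluate(s: ToyState) -> int:
    # Crude evaluation: material lead plus a small economy bonus.
    return (s.my_army - s.enemy_army) * 2 + s.my_economy

def best_script(s: ToyState, depth: int = 2) -> str:
    # Look ahead over sequences of scripts and return the best first choice.
    def search(state: ToyState, d: int) -> int:
        if d == 0:
            return evaluate(state)
        return max(search(f(state), d - 1) for f in SCRIPTS.values())
    return max(SCRIPTS, key=lambda name: search(SCRIPTS[name](s), depth - 1))
```

The payoff of the abstraction is visible in `best_script`: the search branches over three scripts per step instead of over every legal unit command, which is what makes look-ahead tractable in full-scale RTS games.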

Keywords

Real-time strategy games · Game tree search · Deep convolutional neural networks · Evolutionary algorithms

References

  1. Barriga NA (2017) Search, abstractions and learning in real-time strategy games. Ph.D. thesis, University of Alberta
  2. Barriga NA, Stanescu M, Buro M (2014) Building placement optimization in real-time strategy games. In: Workshop on artificial intelligence in adversarial real-time games, AIIDE
  3. Barriga NA, Stanescu M, Buro M (2015) Puppet Search: enhancing scripted behaviour by look-ahead search with applications to real-time strategy games. In: Eleventh annual AAAI conference on artificial intelligence and interactive digital entertainment (AIIDE), pp 9–15
  4. Barriga NA, Stanescu M, Buro M (2017) Combining scripted behavior with game tree search for stronger, more robust game AI. In: Game AI Pro 3: collected wisdom of game AI professionals, chap 14. CRC Press
  5. Barriga NA, Stanescu M, Buro M (2017) Combining strategic learning and tactical search in real-time strategy games. In: Thirteenth annual AAAI conference on artificial intelligence and interactive digital entertainment (AIIDE)
  6. Barriga NA, Stanescu M, Buro M (2017) Game tree search based on non-deterministic action scripts in real-time strategy games. IEEE Transactions on Computational Intelligence and AI in Games (TCIAIG)
  7. Bowling M, Burch N, Johanson M, Tammelin O (2015) Heads-up limit hold'em poker is solved. Science 347(6218):145–149. https://doi.org/10.1126/science.1259433
  8. Churchill D (2013) SparCraft: open source StarCraft combat simulation. http://code.google.com/p/sparcraft/
  9. Mnih V, Kavukcuoglu K, Silver D, Rusu AA, Veness J, Bellemare MG, Graves A, Riedmiller M, Fidjeland AK, Ostrovski G et al (2015) Human-level control through deep reinforcement learning. Nature 518(7540):529–533
  10. Ontañón S (2013) The combinatorial multi-armed bandit problem and its application to real-time strategy games. In: AIIDE
  11. Ontañón S (2017) Combinatorial multi-armed bandits for real-time strategy games. J Artif Intell Res 58:665–702
  12. Ontañón S, Barriga NA, Silva CR, Moraes RO, Lelis LH (2018) The first MicroRTS artificial intelligence competition. AI Mag 39(1):75–83
  13. Schaeffer J, Lake R, Lu P, Bryant M (1996) CHINOOK: the world man-machine checkers champion. AI Mag 17(1):21–29
  14. Shannon C (1950) A chess-playing machine. Sci Am 182:48–51
  15. Shannon CE (1950) Programming a computer for playing chess. Philosophical Magazine, ser 7, vol 41, no 314 (first presented at the National IRE Convention, March 9, 1949, New York, USA)
  16. Silver D, Huang A, Maddison CJ, Guez A, Sifre L, van den Driessche G, Schrittwieser J, Antonoglou I, Panneershelvam V, Lanctot M et al (2016) Mastering the game of Go with deep neural networks and tree search. Nature 529(7587):484–489
  17. Silver D, Schrittwieser J, Simonyan K, Antonoglou I, Huang A, Guez A, Hubert T, Baker L, Lai M, Bolton A et al (2017) Mastering the game of Go without human knowledge. Nature 550(7676):354
  18. Tesauro G (1994) TD-Gammon, a self-teaching backgammon program, reaches master-level play. Neural Comput 6(2):215–219
  19. Vinyals O, Babuschkin I, Chung J, Mathieu M, Jaderberg M, Czarnecki WM, Dudzik A, Huang A, Georgiev P, Powell R, Ewalds T, Horgan D, Kroiss M, Danihelka I, Agapiou J, Oh J, Dalibard V, Choi D, Sifre L, Sulsky Y, Vezhnevets S, Molloy J, Cai T, Budden D, Paine T, Gulcehre C, Wang Z, Pfaff T, Pohlen T, Wu Y, Yogatama D, Cohen J, McKinney K, Smith O, Schaul T, Lillicrap T, Apps C, Kavukcuoglu K, Hassabis D, Silver D (2019) AlphaStar: mastering the real-time strategy game StarCraft II. https://deepmind.com/blog/alphastar-mastering-real-time-strategy-game-starcraft-ii/

Copyright information

© Gesellschaft für Informatik e.V. and Springer-Verlag GmbH Germany, part of Springer Nature 2019

Authors and Affiliations

  1. Universidad de Talca, Talca, Chile