RTS AI Problems and Techniques

Living reference work entry in: Encyclopedia of Computer Graphics and Games

Synonyms

AI; Artificial intelligence; Game AI; Real-time strategy games; RTS games

Definition

Real-time strategy (RTS) games are a subgenre of strategy games in which players build an economy (gathering resources and constructing a base) and military power (training units and researching technologies) in order to defeat their opponents (by destroying their army and base). Artificial intelligence problems related to RTS games concern the behavior of an artificial player: learning how to play, acquiring an understanding of the game and its environment, and predicting and inferring game situations from context and sparse information.
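
To make the two halves of this definition concrete, the following is a minimal, purely illustrative sketch (all names and numbers are hypothetical, not taken from any cited system) of a scripted artificial player that balances economy (workers, resources) against military (soldiers):

```python
from dataclasses import dataclass


@dataclass
class PlayerState:
    """Hypothetical model of the two sides of RTS play named in the
    definition: an economy (resources, workers) and a military (soldiers)."""
    minerals: int = 50
    workers: int = 4
    soldiers: int = 0


def step(state: PlayerState) -> str:
    """One decision step of a scripted artificial player: gather income,
    then spend on either economy or military, favoring economy early on
    (a common hand-authored RTS heuristic)."""
    state.minerals += 8 * state.workers  # each worker gathers 8 minerals per step
    if state.workers < 10 and state.minerals >= 50:
        state.minerals -= 50
        state.workers += 1
        return "train worker"
    if state.minerals >= 100:
        state.minerals -= 100
        state.soldiers += 1
        return "train soldier"
    return "wait"


if __name__ == "__main__":
    s = PlayerState()
    for _ in range(20):
        step(s)
    print(s)  # economy capped at 10 workers, army growing
```

A real RTS agent replaces this fixed priority rule with the learning, prediction, and inference techniques that are the subject of this entry, and must do so under real-time constraints and partial observability.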

Introduction

The field of real-time strategy (RTS) game artificial intelligence (AI) has advanced significantly in the past few years, partially thanks to competitions such as the “ORTS RTS Game AI Competition” (held from 2006 to 2009), the “AIIDE StarCraft AI Competition” (held since 2010), and the “CIG StarCraft RTS AI Competition”...


References and Further Reading

  • Aamodt, A., Plaza, E.: Case-based reasoning: foundational issues, methodological variations, and system approaches. Artif. Intell. Commun. 7(1), 39–59 (1994)

  • Aha, D.W., Molineaux, M., Ponsen, M.J.V.: Learning to win: case-based plan selection in a real-time strategy game. In: ICCBR, pp. 5–20. Chicago, USA (2005)

  • Avery, P., Louis, S., Avery, B.: Evolving coordinated spatial tactics for autonomous entities using influence maps. In: Proceedings of the 5th International Conference on Computational Intelligence and Games, CIG '09, pp. 341–348. IEEE Press, Piscataway. http://dl.acm.org/citation.cfm?id=1719293.1719350 (2009)

  • Balla, R.K., Fern, A.: UCT for tactical assault planning in real-time strategy games. In: International Joint Conference on Artificial Intelligence, IJCAI, pp. 40–45. Morgan Kaufmann Publishers, San Francisco (2009)

  • Buro, M.: Real-time strategy games: a new AI research challenge. In: IJCAI 2003, International Joint Conferences on Artificial Intelligence, pp. 1534–1535. Acapulco, Mexico (2003)

  • Buro, M., Churchill, D.: Real-time strategy game competitions. AI Mag. 33(3), 106–108 (2012)

  • Cadena, P., Garrido, L.: Fuzzy case-based reasoning for managing strategic and tactical reasoning in StarCraft. In: Batyrshin, I.Z., Sidorov, G. (eds.) MICAI (1). Lecture Notes in Computer Science, vol. 7094, pp. 113–124. Springer, Puebla (2011)

  • Čertický, M.: Implementing a wall-in building placement in StarCraft with declarative programming. CoRR abs/1306.4460 (2013). http://arxiv.org/abs/1306.4460

  • Čertický, M., Čertický, M.: Case-based reasoning for army compositions in real-time strategy games. In: Proceedings of the Scientific Conference of Young Researchers, pp. 70–73. Baku, Azerbaijan (2013)

  • Chung, M., Buro, M., Schaeffer, J.: Monte Carlo planning in RTS games. In: IEEE Symposium on Computational Intelligence and Games (CIG), Colchester, UK (2005)

  • Churchill, D., Buro, M.: Build order optimization in StarCraft. In: Proceedings of AIIDE, pp. 14–19. Palo Alto, USA (2011)

  • Churchill, D., Saffidine, A., Buro, M.: Fast heuristic search for RTS game combat scenarios. In: Artificial Intelligence and Interactive Digital Entertainment Conference (AIIDE 2012), Palo Alto, USA (2012)

  • Danielsiek, H., Stuer, R., Thom, A., Beume, N., Naujoks, B., Preuss, M.: Intelligent moving of groups in real-time strategy games. In: 2008 IEEE Symposium on Computational Intelligence and Games, pp. 71–78. Perth, Australia (2008)

  • Demyen, D., Buro, M.: Efficient triangulation-based pathfinding. In: Proceedings of the 21st National Conference on Artificial Intelligence, vol. 1, pp. 942–947. Boston, USA (2006)

  • Dereszynski, E., Hostetler, J., Fern, A., Dietterich, T., Hoang, T.T., Udarbe, M.: Learning probabilistic behavior models in real-time strategy games. In: Artificial Intelligence and Interactive Digital Entertainment (AIIDE), Palo Alto, USA (2011)

  • Forbus, K.D., Mahoney, J.V., Dill, K.: How qualitative spatial reasoning can improve strategy game AIs. IEEE Intell. Syst. 17, 25–30 (2002). doi:10.1109/MIS.2002.1024748

  • Geib, C.W., Goldman, R.P.: A probabilistic plan recognition algorithm based on plan tree grammars. Artif. Intell. 173, 1101–1132 (2009)

  • Hagelbäck, J.: Potential-field based navigation in StarCraft. In: CIG (IEEE), Granada, Spain (2012)

  • Hagelbäck, J., Johansson, S.J.: Dealing with fog of war in a real time strategy game environment. In: CIG (IEEE), pp. 55–62. Perth, Australia (2008)

  • Hagelbäck, J., Johansson, S.J.: A multiagent potential field-based bot for real-time strategy games. Int. J. Comput. Games Technol. 2009, 4:1–4:10 (2009)

  • Hale, D.H., Youngblood, G.M., Dixit, P.N.: Automatically-generated convex region decomposition for real-time spatial agent navigation in virtual worlds. In: Artificial Intelligence and Interactive Digital Entertainment (AIIDE), pp. 173–178. http://www.aaai.org/Papers/AIIDE/2008/AIIDE08-029.pdf (2008)

  • Hladky, S., Bulitko, V.: An evaluation of models for predicting opponent positions in first-person shooter video games. In: CIG (IEEE), Perth, Australia (2008)

  • Hoang, H., Lee-Urban, S., Muñoz-Avila, H.: Hierarchical plan representations for encoding strategic game AI. In: AIIDE, pp. 63–68. Marina del Rey, USA (2005)

  • Houlette, R., Fu, D.: The ultimate guide to FSMs in games. In: AI Game Programming Wisdom 2. Charles River Media, Hingham (2003)

  • Hsieh, J.L., Sun, C.T.: Building a player strategy model by analyzing replays of real-time strategy games. In: IJCNN, pp. 3106–3111. Hong Kong (2008)

  • Jaidee, U., Muñoz-Avila, H., Aha, D.W.: Case-based learning in goal-driven autonomy agents for real-time strategy combat tasks. In: Proceedings of the ICCBR Workshop on Computer Games, pp. 43–52. Greenwich, UK (2011)

  • Jaidee, U., Muñoz-Avila, H.: CLASSQ-L: a Q-learning algorithm for adversarial real-time strategy games. In: Eighth Artificial Intelligence and Interactive Digital Entertainment Conference, Palo Alto, USA (2012)

  • Kabanza, F., Bellefeuille, P., Bisson, F., Benaskeur, A.R., Irandoust, H.: Opponent behaviour recognition for real-time strategy games. In: AAAI Workshops, Atlanta, USA (2010)

  • Koenig, S., Likhachev, M.: D* Lite. In: AAAI/IAAI, pp. 476–483. Edmonton, Canada (2002)

  • Liu, L., Li, L.: Regional cooperative multi-agent Q-learning based on potential field. In: Fourth International Conference on Natural Computation (ICNC '08), vol. 6, pp. 535–539. IEEE (2008)

  • Madeira, C., Corruble, V., Ramalho, G.: Designing a reinforcement learning-based adaptive AI for large-scale strategy games. In: AI and Interactive Digital Entertainment Conference, AIIDE (AAAI), Marina del Rey, USA (2006)

  • Marthi, B., Russell, S., Latham, D., Guestrin, C.: Concurrent hierarchical reinforcement learning. In: International Joint Conference on Artificial Intelligence, IJCAI, pp. 779–785. Edinburgh, UK (2005)

  • Miles, C.E.: Co-evolving Real-Time Strategy Game Players. ProQuest (2007)

  • Miles, C., Louis, S.J.: Co-evolving real-time strategy game playing influence map trees with genetic algorithms. In: Proceedings of the International Congress on Evolutionary Computation, Portland (2006)

  • Mishra, K., Ontañón, S., Ram, A.: Situation assessment for plan retrieval in real-time strategy games. In: ECCBR, pp. 355–369. Trier, Germany (2008)

  • Molineaux, M., Aha, D.W., Moore, P.: Learning continuous action models in a real-time strategy environment. In: FLAIRS Conference, pp. 257–262. Coconut Grove, USA (2008)

  • Ontañón, S.: The combinatorial multi-armed bandit problem and its application to real-time strategy games. In: AIIDE, Boston, USA (2013)

  • Ontañón, S., Mishra, K., Sugandh, N., Ram, A.: Learning from demonstration and case-based planning for real-time strategy games. In: Prasad, B. (ed.) Soft Computing Applications in Industry. Studies in Fuzziness and Soft Computing, vol. 226, pp. 293–310. Springer, Berlin (2008)

  • Ontañón, S., Mishra, K., Sugandh, N., Ram, A.: On-line case-based planning. Comput. Intell. 26(1), 84–119 (2010)

  • Othman, N., Decraene, J., Cai, W., Hu, N., Gouaillard, A.: Simulation-based optimization of StarCraft tactical AI through evolutionary computation. In: CIG (IEEE), Granada, Spain (2012)

  • Perkins, L.: Terrain analysis in real-time strategy games: an integrated approach to choke point detection and region decomposition. In: Artificial Intelligence and Interactive Digital Entertainment Conference (AIIDE 2010), pp. 168–173 (2010)

  • Ponsen, M., Spronck, P.: Improving adaptive game AI with evolutionary learning. In: Computer Games: Artificial Intelligence, Design and Education (CGAIDE 2004), pp. 389–396. University of Wolverhampton (2004)

  • Pottinger, D.C.: Terrain analysis for real-time strategy games. In: Proceedings of Game Developers Conference 2000, San Francisco, USA (2000)

  • Preuss, M., Beume, N., Danielsiek, H., Hein, T., Naujoks, B., Piatkowski, N., Stuer, R., Thom, A., Wessing, S.: Towards intelligent team composition and maneuvering in real-time strategy games. IEEE Trans. Comput. Intell. AI Games (TCIAIG) 2(2), 82–98 (2010)

  • Reynolds, C.W.: Steering behaviors for autonomous characters. In: Proceedings of the Game Developers Conference, pp. 763–782 (1999)

  • Richoux, F., Uriarte, A., Ontañón, S.: Walling in strategy games via constraint optimization. In: Artificial Intelligence and Interactive Digital Entertainment Conference (AIIDE 2014) (2014)

  • Schadd, F., Bakkes, S., Spronck, P.: Opponent modeling in real-time strategy games. In: GAMEON, pp. 61–70. Bologna, Italy (2007)

  • Sharma, M., Holmes, M., Santamaria, J., Irani, A., Isbell, C.L., Ram, A.: Transfer learning in real-time strategy games using hybrid CBR/RL. In: International Joint Conference on Artificial Intelligence, IJCAI, Hyderabad, India (2007)

  • Smith, G., Avery, P., Houmanfar, R., Louis, S.: Using co-evolved RTS opponents to teach spatial tactics. In: CIG (IEEE), Copenhagen, Denmark (2010)

  • Sturtevant, N.: Benchmarks for grid-based pathfinding. IEEE Transactions on Computational Intelligence and AI in Games. http://web.cs.du.edu/sturtevant/papers/benchmarks.pdf (2012)

  • Sutton, R.S., Barto, A.G.: Reinforcement Learning: An Introduction. Adaptive Computation and Machine Learning. The MIT Press, Cambridge, MA (1998)

  • Synnaeve, G., Bessière, P.: A Bayesian model for opening prediction in RTS games with application to StarCraft. In: Proceedings of the 2011 IEEE CIG, Seoul, South Korea (2011a)

  • Synnaeve, G., Bessière, P.: A Bayesian model for plan recognition in RTS games applied to StarCraft. In: Proceedings of the Seventh Artificial Intelligence and Interactive Digital Entertainment Conference (AIIDE 2011), pp. 79–84. AAAI Press, Palo Alto, USA (2011b)

  • Synnaeve, G., Bessière, P.: A Bayesian model for RTS units control applied to StarCraft. In: Proceedings of IEEE CIG 2011, Seoul, South Korea (2011c)

  • Synnaeve, G., Bessière, P.: A dataset for StarCraft AI & an example of armies clustering. In: AIIDE Workshop on AI in Adversarial Real-Time Games, Seoul, South Korea (2012a)

  • Synnaeve, G., Bessière, P.: Special tactics: a Bayesian approach to tactical decision-making. In: CIG (IEEE), Granada, Spain (2012b)

  • Treuille, A., Cooper, S., Popović, Z.: Continuum crowds. ACM Trans. Graph. 25(3), 1160–1168 (2006)

  • Uriarte, A., Ontañón, S.: Kiting in RTS games using influence maps. In: Eighth Artificial Intelligence and Interactive Digital Entertainment Conference, Palo Alto, USA (2012)

  • Uriarte, A., Ontañón, S.: Game-tree search over high-level game states in RTS games. In: Artificial Intelligence and Interactive Digital Entertainment Conference (AIIDE 2014). AAAI Press (2014)

  • Weber, B.G., Mateas, M.: A data mining approach to strategy prediction. In: IEEE Symposium on Computational Intelligence and Games (CIG), Milan, Italy (2009)

  • Weber, B.G., Mateas, M., Jhala, A.: Applying goal-driven autonomy to StarCraft. In: Artificial Intelligence and Interactive Digital Entertainment (AIIDE), Palo Alto, USA (2010a)

  • Weber, B.G., Mawhorter, P., Mateas, M., Jhala, A.: Reactive planning idioms for multi-scale game AI. In: IEEE Symposium on Computational Intelligence and Games (CIG), Copenhagen, Denmark (2010b)

  • Weber, B.G., Mateas, M., Jhala, A.: Building human-level AI for real-time strategy games. In: Proceedings of the AIIDE Fall Symposium on Advances in Cognitive Systems. AAAI Press, Stanford (2011a)

  • Weber, B.G., Mateas, M., Jhala, A.: A particle model for state estimation in real-time strategy games. In: Proceedings of AIIDE, pp. 103–108. AAAI Press, Stanford (2011b)

  • Wender, S., Watson, I.: Applying reinforcement learning to small scale combat in the real-time strategy game StarCraft: Brood War. In: CIG (IEEE), Granada, Spain (2012)

  • Wintermute, S., Xu, J.Z., Laird, J.E.: SORTS: a human-level approach to real-time strategy AI. In: AI and Interactive Digital Entertainment Conference, AIIDE (AAAI), pp. 55–60. Palo Alto, USA (2007)

  • Young, J., Hawes, N.: Evolutionary learning of goal priorities in a real-time strategy game. In: Artificial Intelligence and Interactive Digital Entertainment Conference (AIIDE 2012) (2012)
Author information

Correspondence to Santiago Ontañón.

Copyright information

© 2015 Springer International Publishing Switzerland

About this entry

Cite this entry

Ontañón, S., Synnaeve, G., Uriarte, A., Richoux, F., Churchill, D., Preuss, M. (2015). RTS AI Problems and Techniques. In: Lee, N. (eds) Encyclopedia of Computer Graphics and Games. Springer, Cham. https://doi.org/10.1007/978-3-319-08234-9_17-1

  • DOI: https://doi.org/10.1007/978-3-319-08234-9_17-1

  • Publisher Name: Springer, Cham

  • Online ISBN: 978-3-319-08234-9

  • eBook Packages: Springer Reference Computer Sciences; Reference Module Computer Science and Engineering
