
Strategizing Game Playing Using Evolutionary Approach

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 11508)

Abstract

Since the inception of evolutionary algorithms, the capabilities of genetic algorithms have been showcased through games. This paper proposes the use of a genetic algorithm for the game of Tetris. An evolutionary approach is used to design a Tetris bot. The proposed approach uses a novel set of parameters to decide which move the bot should make for each falling tetromino. These parameters represent the genes of a chromosome. Each individual is allowed to play the game once; once the entire population has played, it undergoes crossover and mutation. In this way, the parameters are evolved to obtain a better bot. The fittest evolved bot is then allowed to simulate 200 rounds of Tetris, during which its actions are recorded. Finally, the Frequent Pattern Growth (FP-Growth) algorithm, a data mining technique, is applied to the stored actions to mine association rules and identify the strategies the evolved bot uses to play the game.
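As a complement to the abstract, the following is a minimal sketch of the evolutionary loop it describes, assuming a population of weight vectors (chromosomes) whose genes score candidate placements of each falling tetromino. The feature names, population size, and genetic operators below are illustrative placeholders rather than the paper's actual "novel set of parameters", and play_game is a toy stand-in for a full Tetris simulation.

```python
import random

# Hypothetical board-evaluation features; the paper's actual "novel set of
# parameters" is not listed in the abstract, so these are placeholders.
FEATURES = ["aggregate_height", "completed_lines", "holes", "bumpiness"]

def random_chromosome():
    # One gene (weight) per evaluation feature.
    return [random.uniform(-1.0, 1.0) for _ in FEATURES]

def play_game(chromosome):
    # Toy stand-in for one game of Tetris: in the real bot, every legal
    # placement of the falling tetromino would be scored as the weighted sum
    # of the board features, and fitness would be the game outcome (e.g.
    # lines cleared). A fixed quadratic keeps this sketch runnable.
    toy_optimum = [-0.5, 0.8, -0.7, -0.2]
    return -sum((g - t) ** 2 for g, t in zip(chromosome, toy_optimum))

def crossover(a, b):
    # Single-point crossover between two parent chromosomes.
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def mutate(chromosome, rate=0.1):
    # Perturb each gene with a small probability.
    return [g + random.gauss(0.0, 0.2) if random.random() < rate else g
            for g in chromosome]

def evolve(pop_size=50, generations=30):
    population = [random_chromosome() for _ in range(pop_size)]
    for _ in range(generations):
        # Each individual plays the game once; the population then undergoes
        # selection, crossover and mutation.
        ranked = sorted(population, key=play_game, reverse=True)
        survivors = ranked[: pop_size // 2]
        children = [mutate(crossover(random.choice(survivors),
                                     random.choice(survivors)))
                    for _ in range(pop_size - len(survivors))]
        population = survivors + children
    return max(population, key=play_game)

if __name__ == "__main__":
    best = evolve()
    print(dict(zip(FEATURES, best)))
```

The knowledge-extraction step can be sketched in the same spirit. The snippet below assumes the third-party mlxtend library's FP-Growth implementation and a hypothetical action log; the paper specifies only that the evolved bot's actions over 200 simulated rounds are mined with the Frequent Pattern Growth algorithm to obtain association rules.

```python
# Assumes the third-party mlxtend and pandas libraries; the paper names only
# the FP-Growth algorithm, not a particular implementation.
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import fpgrowth

# Hypothetical action log: one transaction per placed tetromino, recording
# board context and the move the evolved bot chose (illustrative items only).
actions = [
    ["piece=I", "deep_well_right", "move=drop_far_right"],
    ["piece=S", "flat_surface", "move=rotate_once"],
    ["piece=I", "deep_well_right", "move=drop_far_right"],
    ["piece=I", "deep_well_right", "move=drop_far_right"],
]

# One-hot encode the transactions and mine frequent itemsets with FP-Growth.
te = TransactionEncoder()
onehot = pd.DataFrame(te.fit(actions).transform(actions), columns=te.columns_)
frequent = fpgrowth(onehot, min_support=0.5, use_colnames=True)

# Association rules such as {piece=I, deep_well_right} -> {move=drop_far_right}
# can then be read off by comparing itemset supports:
# confidence(A -> B) = support(A and B) / support(A).
print(frequent.sort_values("support", ascending=False))
```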



Author information

Corresponding author

Correspondence to Abhinav Nagpal.


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Nagpal, A., Gabrani, G. (2019). Strategizing Game Playing Using Evolutionary Approach. In: Rutkowski, L., Scherer, R., Korytkowski, M., Pedrycz, W., Tadeusiewicz, R., Zurada, J. (eds.) Artificial Intelligence and Soft Computing. ICAISC 2019. Lecture Notes in Computer Science (LNAI), vol. 11508. Springer, Cham. https://doi.org/10.1007/978-3-030-20912-4_44


  • DOI: https://doi.org/10.1007/978-3-030-20912-4_44

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-20911-7

  • Online ISBN: 978-3-030-20912-4

  • eBook Packages: Computer Science, Computer Science (R0)
