
Effects of Input Addition in Learning for Adaptive Games: Towards Learning with Structural Changes

  • Conference paper
Applications of Evolutionary Computation (EvoApplications 2019)

Abstract

Adaptive Games (AG) involve a controller agent that continuously monitors player actions and the game state in order to tweak a set of game parameters, maintaining or maximizing an objective such as the flow measure defined by Csíkszentmihályi. This can be framed as a Reinforcement Learning (RL) problem, so classical Machine Learning (ML) approaches can be used. On the other hand, many games naturally exhibit incremental gameplay, where new actions and elements are progressively introduced or removed to smooth the player’s learning curve or to bring variety into the game. This makes the RL situation unusual, because the controller agent’s input/output signature can change over the course of learning. In this paper, we study this unusual “protean” learning (PL) situation. In particular, we assess how the learner can rely on its past shapes and experience to keep improving across signature changes, without restarting the learning from scratch on each change. We first develop a rigorous formalization of the PL problem. Then, we address the first elementary signature change, input addition, with Recurrent Neural Networks (RNNs) in an idealized PL situation. As a first result, we find that it is possible to benefit from prior learning in RNNs even if the past controller agent signature has fewer inputs. The use of PL in AG thus remains encouraged. Investigating output addition, input/output removal, and translating these results to generic PL are left for future work.
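To make the “input addition” change concrete, here is a minimal sketch, assuming a single-layer PyTorch GRU controller (PyTorch and GRU-style RNNs appear in the paper’s tooling, but the helper grow_input and the zero-initialization strategy below are illustrative assumptions, not the authors’ exact procedure). The weights learned before the signature change are copied into a wider network; the weight columns attached to the added inputs start at zero, so the grown controller initially behaves exactly like the old one and learning resumes rather than restarts.

    import torch
    import torch.nn as nn

    def grow_input(old: nn.GRU, n_new: int) -> nn.GRU:
        """Hypothetical helper: widen a single-layer GRU by n_new inputs,
        reusing the learned weights instead of restarting from scratch."""
        new = nn.GRU(old.input_size + n_new, old.hidden_size)
        with torch.no_grad():
            # Input-to-hidden weights: copy the learned columns, zero the
            # columns for the added inputs so they are ignored at first.
            new.weight_ih_l0.zero_()
            new.weight_ih_l0[:, :old.input_size] = old.weight_ih_l0
            # Recurrent weights and biases are unaffected by input growth.
            new.weight_hh_l0.copy_(old.weight_hh_l0)
            new.bias_ih_l0.copy_(old.bias_ih_l0)
            new.bias_hh_l0.copy_(old.bias_hh_l0)
        return new

    # Usage: the game adds a fourth input signal mid-training.
    net = nn.GRU(3, 8)        # controller trained so far with 3 inputs
    net = grow_input(net, 1)  # same hidden size, 4-input signature
    out, h = net(torch.randn(10, 1, 4))  # (seq_len, batch, inputs)

Zero columns are only one plausible initialization; small random values would also let gradient descent discover the new inputs, at the cost of an initial behavioral perturbation.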



Author information


Corresponding author

Correspondence to Iago Bonnici.


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Bonnici, I., Gouaïch, A., Michel, F. (2019). Effects of Input Addition in Learning for Adaptive Games: Towards Learning with Structural Changes. In: Kaufmann, P., Castillo, P. (eds) Applications of Evolutionary Computation. EvoApplications 2019. Lecture Notes in Computer Science, vol 11454. Springer, Cham. https://doi.org/10.1007/978-3-030-16692-2_12


  • DOI: https://doi.org/10.1007/978-3-030-16692-2_12

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-16691-5

  • Online ISBN: 978-3-030-16692-2

  • eBook Packages: Computer Science (R0)
