Applying a Neural Network Architecture with Spatio-Temporal Connections to the Maze Exploration

  • Dmitry Filin
  • Aleksandr I. Panov
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 636)

Abstract

We present a reinforcement learning model that consists of a modified neural-network architecture with spatio-temporal connections, known as the Temporal Hebbian Self-Organizing Map (THSOM). A number of experiments were conducted to test the model on the maze-solving problem. The algorithm demonstrates stable learning, building near-optimal routes. This work describes the agent's behavior in mazes of varying complexity, as well as the influence of the model's parameters on the length of the formed paths.
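The abstract describes a THSOM: a self-organizing map whose neurons carry, in addition to the usual spatial prototypes, a temporal weight matrix strengthened Hebbian-style between successively winning neurons. The sketch below is a minimal, simplified illustration of that idea, not the authors' implementation; the class name, the learning rates, and the blending parameter `beta` are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

class THSOM:
    """Minimal sketch of a Temporal Hebbian Self-Organizing Map.

    Each neuron has spatial weights W[i] (an input prototype) and a
    temporal matrix T, where T[i, j] is strengthened (Hebbian-style)
    whenever neuron j wins immediately after neuron i.
    """

    def __init__(self, n_neurons, dim, lr_s=0.3, lr_t=0.2, beta=0.5):
        self.W = rng.random((n_neurons, dim))      # spatial prototypes
        self.T = np.zeros((n_neurons, n_neurons))  # temporal links
        self.lr_s, self.lr_t, self.beta = lr_s, lr_t, beta
        self.prev = None  # index of the previous winner

    def step(self, x):
        # Spatial similarity: negative distance to every prototype.
        spatial = -np.linalg.norm(self.W - x, axis=1)
        # Temporal bias: outgoing links from the previous winner.
        temporal = self.T[self.prev] if self.prev is not None else 0.0
        bmu = int(np.argmax(spatial + self.beta * temporal))
        # SOM-style update: pull the winner's prototype toward the input.
        self.W[bmu] += self.lr_s * (x - self.W[bmu])
        # Hebbian strengthening of the observed transition prev -> bmu,
        # saturating at 1.
        if self.prev is not None:
            self.T[self.prev, bmu] += self.lr_t * (1.0 - self.T[self.prev, bmu])
        self.prev = bmu
        return bmu

# Example: feed a short repeating sequence of 2-D observations, as an
# agent walking a fixed loop in a maze might produce.
net = THSOM(n_neurons=4, dim=2)
for _ in range(50):
    for x in ([0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]):
        net.step(np.asarray(x))
```

After training on a repeated sequence, the entries of `T` along the frequently observed transitions grow toward 1, so the map encodes the order of states as well as their positions. How such temporal links are then used to extract a route through the maze is specific to the paper's method and not reproduced here.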

Acknowledgements

The reported study was supported by RFBR, research Projects No. 16-37-60055 and No. 15-07-06214.


Copyright information

© Springer International Publishing AG 2018

Authors and Affiliations

  1. National Research University Higher School of Economics, Moscow, Russia
  2. Federal Research Center “Computer Science and Control” of Russian Academy of Sciences, Moscow, Russia
