
Part of the book series: Advances in Intelligent Systems and Computing (AISC, volume 1156)


Abstract

The paper describes the possibilities and basic means of constructing real-time intelligent systems on the basis of an integrated approach. A multi-agent approach, flexible decision-search algorithms, and forecasting algorithms based on reinforcement learning are used. The architectures of the forecasting module, the deep reinforcement learning module, and the forecasting subsystem as a whole are presented. Results of computer simulation of reinforcement learning algorithms based on temporal differences are reported, together with recommendations for their use in multi-agent systems.
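The simulated algorithms themselves are not reproduced on this page. As a rough, hypothetical illustration of the temporal-difference family mentioned in the abstract, the sketch below implements tabular Q-learning (one TD method) on a toy corridor task; the environment, state/action encoding, and hyperparameter values are assumptions for illustration only, not the authors' implementation.

```python
# Minimal sketch (assumption, not the paper's code): tabular Q-learning,
# a temporal-difference method, on a toy 1-D corridor environment.
import random

N_STATES = 6          # states 0..5; state 5 is terminal with reward 1
ACTIONS = (-1, +1)    # move left / move right
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1   # illustrative hyperparameters

def step(state, action):
    """Toy dynamics: deterministic move, reward 1 only on reaching the goal."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    done = nxt == N_STATES - 1
    return nxt, reward, done

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(500):
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection
        if random.random() < EPS:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r, done = step(s, a)
        # Temporal-difference (Q-learning) update:
        # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        best_next = 0.0 if done else max(Q[(s2, act)] for act in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2

# Learned state values (max over actions) for the toy task
print({s: max(Q[(s, a)] for a in ACTIONS) for s in range(N_STATES)})
```

Replacing the max over next-state actions with the value of the action actually selected turns this update into SARSA, another temporal-difference method commonly compared in such simulations.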

The work was supported by the Russian Foundation for Basic Research, projects № 18-01-00201 a, 18-01-00459 a, 18-51-00007 Bel-a, and 18-29-03088 MK.



Author information

Corresponding author

Correspondence to A. P. Eremeev.


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Eremeev, A.P., Kozhukhov, A.A., Gerasimova, A.E. (2020). Implementation of the Real-Time Intelligent System Based on the Integration Approach. In: Kovalev, S., Tarassov, V., Snasel, V., Sukhanov, A. (eds) Proceedings of the Fourth International Scientific Conference “Intelligent Information Technologies for Industry” (IITI’19). IITI 2019. Advances in Intelligent Systems and Computing, vol 1156. Springer, Cham. https://doi.org/10.1007/978-3-030-50097-9_11
