Abstract
In this study, we attempt to extract knowledge from learning results that an autonomous agent collects across multiple environments. Common factors of the environments are extracted by applying non-negative matrix factorization (NMF) to the set of learning results of a reinforcement learning agent. In conventional transfer learning with agent knowledge management, the knowledge database grows as the number of experienced tasks increases, and the cost of knowledge selection rises accordingly. The proposed approach yields an agent that can adapt to multiple environments without increasing the cost of knowledge selection.
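The factorization step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes each environment's learning result is a flattened, non-negative Q-table, stacks them into a matrix, and applies the Lee–Seung multiplicative updates for NMF so that the rows of H act as shared behavioral factors and W gives each environment's mixing weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: flattened Q-tables from 5 environments,
# one row per environment. NMF requires non-negative entries.
V = np.abs(rng.normal(size=(5, 40)))

k = 2  # number of common factors to extract (an assumed choice)
W = rng.random((5, k)) + 0.1   # per-environment factor weights
H = rng.random((k, 40)) + 0.1  # shared factors over state-action pairs

# Lee & Seung multiplicative updates minimizing ||V - WH||_F^2;
# a small epsilon guards against division by zero.
for _ in range(300):
    H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
    W *= (V @ H.T) / (W @ H @ H.T + 1e-9)

err = np.linalg.norm(V - W @ H)
```

A new agent entering an unseen environment could then be initialized from the shared factors in H (for example, a weighted combination of its rows) instead of searching a growing database of full Q-tables.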
Acknowledgements
This work was supported by JSPS KAKENHI Grant-in-Aid for Young Scientists (B) Number 15K16295 and Grant-in-Aid for Scientific Research (C) Number 19K04887.
Copyright information
© 2019 Springer Nature Switzerland AG
Cite this paper
Saitoh, F. (2019). Knowledge Reuse of Learning Agent Based on Factor Information of Behavioral Rules. In: Gedeon, T., Wong, K., Lee, M. (eds) Neural Information Processing. ICONIP 2019. Communications in Computer and Information Science, vol 1142. Springer, Cham. https://doi.org/10.1007/978-3-030-36808-1_40
Print ISBN: 978-3-030-36807-4
Online ISBN: 978-3-030-36808-1