Abstract
Existing reinforcement learning approaches suffer in dynamic multiagent environments such as RoboCup competitions because the policies of other agents may change: their behaviors can cause sudden shifts in the state transition probabilities, whose constancy is necessary for learning to converge. A modular learning approach can address this problem if the learning agent assigns each module to one situation in which that module can regard the state transition probabilities as constant. This paper presents a method of modular learning in a multiagent environment by which the learning agent adapts its behaviors to the situations that result from the other agent's behaviors. A learning schedule is introduced to avoid the complexity of autonomous situation assignment.
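The core idea of the abstract can be illustrated with a minimal sketch: each module keeps its own Q-table together with a count-based transition model, and the module whose model best explains recent experience is selected to act, on the premise that transition probabilities are stationary within one module's situation. All names (`ModuleRL`, `select_module`) and the likelihood-based selection rule are illustrative assumptions, not the paper's actual algorithm, and the paper's learning schedule is omitted here.

```python
import random
from collections import defaultdict

class ModuleRL:
    """One learning module: a Q-table plus a transition-count model.

    Each module is meant to cover one 'situation' in which the state
    transition probabilities can be treated as constant, so ordinary
    Q-learning converges within the module.
    """

    def __init__(self, n_actions, alpha=0.1, gamma=0.9):
        self.q = defaultdict(float)                          # (s, a) -> value
        self.counts = defaultdict(lambda: defaultdict(int))  # (s, a) -> {s2: n}
        self.n_actions = n_actions
        self.alpha, self.gamma = alpha, gamma

    def update(self, s, a, r, s2):
        # Standard Q-learning update within this module.
        best_next = max(self.q[(s2, a2)] for a2 in range(self.n_actions))
        self.q[(s, a)] += self.alpha * (r + self.gamma * best_next - self.q[(s, a)])
        # Also record the observed transition for situation matching.
        self.counts[(s, a)][s2] += 1

    def likelihood(self, s, a, s2):
        """Estimated probability of observing s -> s2 under action a."""
        total = sum(self.counts[(s, a)].values())
        if total == 0:
            return 1e-3  # small default for unseen (s, a) pairs (illustrative)
        return self.counts[(s, a)][s2] / total

def select_module(modules, recent):
    """Pick the module whose transition model best explains recent experience.

    `recent` is a list of (s, a, s2) transitions from the last few steps.
    """
    def score(m):
        p = 1.0
        for (s, a, s2) in recent:
            p *= m.likelihood(s, a, s2)
        return p
    return max(modules, key=score)
```

Selecting by transition likelihood is one simple way to detect which situation (i.e., which behavior mode of the other agent) is currently active; the paper's scheduling instead controls when each module is allowed to learn, sidestepping fully autonomous assignment.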
Copyright information
© 2005 Springer-Verlag Berlin Heidelberg
Cite this paper
Takahashi, Y., Edazawa, K., Asada, M. (2005). Modular Learning System and Scheduling for Behavior Acquisition in Multi-agent Environment. In: Nardi, D., Riedmiller, M., Sammut, C., Santos-Victor, J. (eds) RoboCup 2004: Robot Soccer World Cup VIII. RoboCup 2004. Lecture Notes in Computer Science, vol 3276. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-32256-6_51
DOI: https://doi.org/10.1007/978-3-540-32256-6_51
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-25046-3
Online ISBN: 978-3-540-32256-6