Abstract
Training agents in a virtual crowd to achieve a task can be accomplished by letting the agents learn by trial and error and share information with one another. Since sharing can help agents reach optimal behavior more quickly, which type of sharing yields the fastest learning? This paper groups sharing into three categories: realistic, unrealistic, and no sharing. Realistic sharing takes place only among agents in close physical proximity, whereas unrealistic sharing lets agents share regardless of location. This paper demonstrates that all sharing methods converge to similar policies, so the methods are compared instead by their learning rates, communication frequencies, and total run times. Results show that the unrealistic-centralized sharing method, in which all agents update a common learning module, is the most effective of the sharing methods tested.
Copyright information
© 2012 Springer-Verlag Berlin Heidelberg
Cite this paper
Cunningham, B., Cao, Y. (2012). Levels of Realism for Cooperative Multi-Agent Reinforcement Learning. In: Tan, Y., Shi, Y., Ji, Z. (eds) Advances in Swarm Intelligence. ICSI 2012. Lecture Notes in Computer Science, vol 7331. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-30976-2_69
DOI: https://doi.org/10.1007/978-3-642-30976-2_69
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-30975-5
Online ISBN: 978-3-642-30976-2