Learning Multi-agent Strategies in Multi-stage Collaborative Games
An alternative approach to learning decision strategies in multi-state, multi-agent systems is presented here. The method uses a game-theoretic construction, is model free, and does not rely on direct communication between the agents in the system. Limited experiments show that the method can find the Nash equilibrium point for a three-player multi-stage game, and that it converges more quickly than a comparable co-evolution method.
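The paper's exact algorithm is not reproduced in this abstract, but the independent-learner setting it builds on can be sketched in a few lines: each agent keeps its own action values and updates them from the shared reward alone, with no model of the game and no communication. The payoff function, learning rate, and exploration scheme below are illustrative assumptions, not the paper's parameters; the game is a simple cooperative coordination game whose pure Nash equilibria are the two matching joint actions.

```python
import random

# Cooperative 2x2 coordination game (illustrative, not from the paper):
# both agents receive 1 when their actions match, 0 otherwise.
# The pure Nash equilibria are the joint actions (0, 0) and (1, 1).
def payoff(a: int, b: int) -> float:
    return 1.0 if a == b else 0.0

def train(episodes: int = 5000, alpha: float = 0.1,
          eps: float = 0.1, seed: int = 0):
    """Two independent learners: each sees only its own action values
    and the shared reward -- no model, no communication."""
    rng = random.Random(seed)
    q = [[0.0, 0.0], [0.0, 0.0]]  # per-agent value of each action
    for _ in range(episodes):
        acts = []
        for i in range(2):
            if rng.random() < eps:                     # explore
                acts.append(rng.randrange(2))
            else:                                      # exploit
                acts.append(max(range(2), key=lambda a: q[i][a]))
        r = payoff(*acts)
        for i in range(2):
            # Each agent updates toward the reward it observed.
            q[i][acts[i]] += alpha * (r - q[i][acts[i]])
    return q

q = train()
greedy = [max(range(2), key=lambda a: q[i][a]) for i in range(2)]
```

After training, the two agents' greedy actions coincide on one of the pure equilibria, even though neither agent ever observed the other's choice: the shared reward alone is enough to align their value estimates in this simple game.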
Keywords: Nash Equilibrium · Reinforcement Learning · Mixed Strategy · Pure Strategy · Independent Learner