Multi-agent System Environment Based on Repeated Local Effect Functions
This paper discusses the behavior of agents in a multi-agent environment from the viewpoint of game theory, assuming that the agents reach a probabilistic (mixed-strategy) Nash equilibrium through reinforcement learning. It is well known that this behavior has poor properties: for instance, a Nash equilibrium need not correspond to a Pareto optimum, and convergence of the learning process cannot be guaranteed. This makes it difficult to develop a multi-agent system in which agents carry out cooperative work. This paper takes a different approach, employing mixed Nash strategies based on a correlation technique over Local Effect Functions. The model is useful for achieving cooperation among agents, and we assess the convergence of learning through practical experiments.
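The gap between Nash equilibria and Pareto optimality mentioned above can be illustrated with a standard textbook game (a hypothetical example, not taken from the paper): in the Prisoner's Dilemma, the unique pure Nash equilibrium is mutual defection, which every player would prefer to escape. The sketch below enumerates pure-strategy profiles and checks both properties directly; the payoff values are the conventional ones.

```python
# Hypothetical illustration: the Prisoner's Dilemma shows that a Nash
# equilibrium need not be Pareto optimal.
from itertools import product

# payoff[(a1, a2)] = (reward to player 1, reward to player 2)
# "C" = cooperate, "D" = defect; standard textbook values.
payoff = {
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}
actions = ["C", "D"]

def is_nash(a1, a2):
    """No player gains by unilaterally deviating from (a1, a2)."""
    u1, u2 = payoff[(a1, a2)]
    return (all(payoff[(d, a2)][0] <= u1 for d in actions) and
            all(payoff[(a1, d)][1] <= u2 for d in actions))

def is_pareto(a1, a2):
    """No other profile makes a player better off without hurting the other."""
    u1, u2 = payoff[(a1, a2)]
    return not any(
        v1 >= u1 and v2 >= u2 and (v1, v2) != (u1, u2)
        for v1, v2 in payoff.values()
    )

nash = [p for p in product(actions, actions) if is_nash(*p)]
pareto = [p for p in product(actions, actions) if is_pareto(*p)]
print(nash)    # only ("D", "D") is a Nash equilibrium
print(pareto)  # ("D", "D") is absent: it is dominated by ("C", "C")
```

The unique Nash equilibrium ("D", "D") fails the Pareto check because ("C", "C") gives both players a strictly higher payoff, which is exactly the dilemma that motivates looking beyond independent Nash play toward correlated strategies.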
Keywords: Multi-Agent System · Game Theory · Correlated Technique · Nash Equilibrium · Local Effect Games