
COllective INtelligence with Sequences of Actions

Coordinating Actions in Multi-agent Systems
  • Pieter Jan ’t Hoen
  • Sander M. Bohte
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2837)

Abstract

The design of a Multi-Agent System (MAS) to perform well on a collective task is non-trivial. Straightforward application of learning in a MAS can lead to suboptimal solutions as agents compete or interfere. The COllective INtelligence (COIN) framework of Wolpert et al. proposes an engineering solution for MASs in which agents learn to focus on actions that support a common task. As a case study, we investigate the performance of COIN on representative token-retrieval problems found to be difficult for agents using classic Reinforcement Learning (RL). We further investigate several techniques from RL (model-based learning, Q(λ)) to scale application of the COIN framework. Lastly, the COIN framework is extended to improve performance on sequences of actions.
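
At the core of COIN is the choice of each agent's private utility. A standard choice, the Wonderful Life Utility (WLU) of Wolpert and Tumer [17, 18], pays an agent its marginal contribution to the world utility G(z): the value of G with the agent's action included, minus G with the agent "clamped" out. The sketch below is a minimal illustration rather than the paper's actual setup: it pairs WLU reward shaping with per-agent tabular Q-learning on a toy single-state token-coverage task, and the task definition, agent count, and learning parameters are all assumptions made for brevity.

    import numpy as np

    # Illustrative sketch of COIN-style reward shaping with the Wonderful
    # Life Utility (WLU), after Wolpert & Tumer [17, 18]. The toy task,
    # agent count, and parameters are assumptions, not the paper's setup.

    N_AGENTS, N_ACTIONS = 4, 4           # 4 agents each choose one of 4 "tokens"
    ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

    rng = np.random.default_rng(0)
    Q = np.zeros((N_AGENTS, N_ACTIONS))  # one tabular learner per agent,
                                         # single shared state for brevity

    def world_utility(joint_action):
        # Global reward G(z): number of distinct tokens the collective covers.
        return len(set(joint_action))

    def wonderful_life_utility(joint_action, i):
        # WLU for agent i: G(z) minus G(z) with agent i clamped out,
        # i.e. agent i's marginal contribution to the world utility.
        return world_utility(joint_action) - world_utility(
            joint_action[:i] + joint_action[i + 1:])

    def epsilon_greedy(q_row):
        if rng.random() < EPSILON:
            return int(rng.integers(N_ACTIONS))
        return int(np.argmax(q_row))

    for episode in range(2000):
        actions = [epsilon_greedy(Q[i]) for i in range(N_AGENTS)]
        for i, a in enumerate(actions):
            # Each agent learns from its private WLU rather than the raw
            # global G(z), aligning selfish updates with the collective task.
            r = wonderful_life_utility(actions, i)
            Q[i, a] += ALPHA * (r + GAMMA * Q[i].max() - Q[i, a])

    print("greedy joint action:", [int(np.argmax(Q[i])) for i in range(N_AGENTS)])

Under WLU, an agent whose token duplicates another agent's receives zero reward, so the learners are pushed toward covering distinct tokens. With the raw global reward G(z), every agent would receive the same signal regardless of its own contribution, which is exactly the credit-assignment problem COIN is designed to address.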

Keywords

Reinforcement Learning · Multiagent System · COllective INtelligence · Emergent Behavior · Reinforcement Learning Algorithm

References

  1. Barto, A., Mahadevan, S.: Recent advances in hierarchical reinforcement learning. Discrete Event Dynamic Systems (2003) (to appear)
  2. Guestrin, C., Lagoudakis, M., Parr, R.: Coordinated reinforcement learning. In: Proceedings of the Nineteenth International Conference on Machine Learning (ICML 2002) (2002)
  3. Hardin, G.: The tragedy of the commons. Science 162, 1243–1248 (1968)
  4. Lauer, M., Riedmiller, M.: An algorithm for distributed reinforcement learning in cooperative multi-agent systems. In: Proceedings of the Seventeenth International Conference on Machine Learning, pp. 535–542. Morgan Kaufmann, San Francisco (2000)
  5. Mitchell, T.: Machine Learning. McGraw-Hill, New York (1997)
  6. Personal communication with A. Agogino
  7. Menache, I., Mannor, S., Shimkin, N.: Q-Cut – dynamic discovery of sub-goals in reinforcement learning. In: Elomaa, T., Mannila, H., Toivonen, H. (eds.) ECML 2002. LNCS (LNAI), vol. 2430, pp. 295–306. Springer, Heidelberg (2002)
  8. Sutton, R., Barto, A.: Reinforcement Learning: An Introduction. MIT Press, Cambridge (1998)
  9. Thrun, S.B.: Efficient exploration in reinforcement learning. Technical Report CMU-CS-92-102, Carnegie Mellon University, Pittsburgh, Pennsylvania (1992)
  10. Tumer, K., Agogino, A., Wolpert, D.: Learning sequences of actions in collectives of autonomous agents. In: Autonomous Agents & Multiagent Systems, part 1, pp. 378–385. ACM Press, New York (2002)
  11. Tumer, K., Wolpert, D.: COllective INtelligence and Braess’ paradox. In: Proceedings of the Seventeenth National Conference on Artificial Intelligence, Austin, pp. 104–109 (2000)
  12. Watkins, C.J.C.H., Dayan, P.: Q-learning. Machine Learning 8, 279–292 (1992)
  13. Weiss, G.: A multiagent framework for planning, reacting, and learning. Technical Report FKI-233-99, Institut für Informatik, Technische Universität München (1999)
  14. Wellman, M.P.: The economic approach to artificial intelligence. ACM Computing Surveys 28(4es), 14–15 (1996)
  15. Wellman, M.P.: Market-oriented programming: some early lessons. In: Clearwater, S. (ed.) Market-Based Control: A Paradigm for Distributed Resource Allocation. World Scientific, River Edge (1996)
  16. Wiering, M.: Explorations in Efficient Reinforcement Learning. PhD thesis, University of Amsterdam (1999)
  17. Wolpert, D., Tumer, K.: An introduction to COllective INtelligence. Technical Report NASA-ARC-IC-99-63, NASA Ames Research Center (1999). A shorter version appears in: Bradshaw, J.M. (ed.) Handbook of Agent Technology. AAAI Press/MIT Press (1999)
  18. Wolpert, D., Tumer, K.: Optimal payoff functions for members of collectives. Advances in Complex Systems (2001) (in press)
  19. Wolpert, D.H., Tumer, K., Frank, J.: Using collective intelligence to route internet traffic. In: Advances in Neural Information Processing Systems 11, pp. 952–958, Denver (1998)
  20. Wolpert, D.H., Wheeler, K.R., Tumer, K.: General principles of learning-based multi-agent systems. In: Etzioni, O., Müller, J.P., Bradshaw, J.M. (eds.) Proceedings of the Third Annual Conference on Autonomous Agents (AGENTS 1999), pp. 77–83. ACM Press, New York (1999)

Copyright information

© Springer-Verlag Berlin Heidelberg 2003

Authors and Affiliations

  • Pieter Jan ’t Hoen¹
  • Sander M. Bohte¹

  1. CWI, Centre for Mathematics and Computer Science, Amsterdam, The Netherlands
