
Providing Effective Access to Shared Resources: A COIN Approach

  • Stéphane Airiau
  • Sandip Sen
  • David H. Wolpert
  • Kagan Tumer
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2977)

Abstract

Managers of systems of shared resources typically have many separate goals. Examples are efficient utilization of the resources among their users and ensuring that no user’s satisfaction in the system falls below a preset minimal level. Since such goals will usually conflict with one another, the manager must, either implicitly or explicitly, determine the relative importance of the goals, encapsulating that into an overall utility function rating the possible behaviors of the entire system. Here we demonstrate a distributed, robust, and adaptive way to optimize that overall function. Our approach is to interpose adaptive agents between each user and the system, where each such agent works to maximize its own private utility function. In turn, each such agent’s function should be both relatively easy for the agent to learn to optimize and “aligned” with the system manager’s overall utility function, an overall function that is based on but in general different from the satisfaction functions of the individual users. To ensure this we enhance the COllective INtelligence (COIN) framework to incorporate user satisfaction functions in the overall utility function of the system manager, and accordingly in the associated private utility functions assigned to the users’ agents. We present experimental evaluations of different COIN-based private utility functions and demonstrate that those COIN-based functions outperform some natural alternatives.
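The abstract does not spell out the form of the private utilities, but a common choice in the COIN literature is a difference utility: the agent is rewarded by its marginal effect on the overall (world) utility, obtained by re-evaluating the world utility with the agent clamped to a null action. The following is a minimal sketch under that assumption; the toy resource-congestion world utility, the function names, and the parameters are illustrative and are not the model evaluated in the paper.

    import random

    NUM_RESOURCES = 5
    NULL_ACTION = None  # clamping an agent removes its load from the system


    def world_utility(joint_action, capacity=3):
        """Toy world utility: reward use of resources, penalise overload."""
        loads = [0] * NUM_RESOURCES
        for action in joint_action:
            if action is not None:
                loads[action] += 1
        # Each resource contributes its load up to capacity and loses value
        # for every user beyond capacity (congestion). Higher is better.
        return sum(min(load, capacity) - max(0, load - capacity) for load in loads)


    def difference_utility(joint_action, agent_index):
        """Private utility of one agent: its marginal effect on the world utility."""
        clamped = list(joint_action)
        clamped[agent_index] = NULL_ACTION
        return world_utility(joint_action) - world_utility(clamped)


    # Usage: each agent evaluates only its own action through its private utility.
    num_agents = 8
    joint = [random.randrange(NUM_RESOURCES) for _ in range(num_agents)]
    print("G(z)   =", world_utility(joint))
    print("D_0(z) =", difference_utility(joint, 0))

A private utility of this shape is easier for an individual agent to learn (its own action strongly influences its reward) while remaining aligned with the world utility, which is the property the abstract emphasizes.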

Keywords

Utility Function · Multiagent System · Task Type · COllective INtelligence · Decay Factor



Copyright information

© Springer-Verlag Berlin Heidelberg 2004

Authors and Affiliations

  • Stéphane Airiau (1)
  • Sandip Sen (1)
  • David H. Wolpert (2)
  • Kagan Tumer (2)

  1. Mathematical and Computer Sciences Department, University of Tulsa, Tulsa, USA
  2. NASA Ames Research Center, Moffett Field, USA
