How Commitment Leads to Coordination: The Effect of Individual Reasoning Strategies on Multi-Agent Interaction

  • M. E. Pollack
Part of the Philosophical Studies Series book series (PSSP, volume 72)


Most agents, human or artificial, are situated in dynamic environments, i.e., environments in which the agent is not itself the only cause of change. Moreover, all agents have computational resource limits: their reasoning processes are not instantaneous, but take time. A dynamic environment may change during the time an agent is reasoning, and, indeed, may change in ways that undermine the very assumptions underlying the ongoing reasoning. Thus an agent that blindly pushes forward with a reasoning task, without regard to the amount of time it is taking or the changes meanwhile going on in the environment, is not likely to make rational decisions. Agents in dynamic environments need a way of deciding what to reason about, when, and for how long. Recognition of this fact has led to a number of proposals for mechanisms for controlling reasoning (Russell and Wefald 1991; Dean and Boddy 1988; Horvitz et al. 1989; Zilberstein 1993; Dean et al. 1993).
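The trade-off described above can be illustrated with a minimal sketch of one such control mechanism, a filtering/commitment strategy: rather than re-deliberating whenever the environment presents a new option, the agent stays committed to its current choice unless the newcomer passes a cheap filter test. All names, values, and the override threshold below are hypothetical illustrations, not the paper's actual mechanism.

```python
def filter_override(new_value, committed_value, threshold=2.0):
    """Cheap filter test: trigger costly deliberation only when the new
    option's value exceeds the committed option's value by `threshold`."""
    return new_value >= committed_value + threshold

def run_agent(option_values, threshold=2.0, deliberation_cost=1.0):
    """Process a stream of option values arriving from a dynamic
    environment; return (value of the final commitment, total
    reasoning cost paid)."""
    committed = None
    reasoning_cost = 0.0
    for value in option_values:
        # Deliberate only when uncommitted or when the filter is overridden;
        # otherwise remain committed and pay nothing for reasoning.
        if committed is None or filter_override(value, committed, threshold):
            reasoning_cost += deliberation_cost
            committed = value
    return committed, reasoning_cost
```

With a stream like `[5, 6, 8, 4]` and a threshold of 2, the agent deliberates only twice (for the initial commitment and for the markedly better option 8), ignoring the marginal option 6 entirely. A higher threshold makes the agent more committed and cheaper to run, at the risk of missing genuinely better opportunities as the environment changes.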






  1. Bicchieri, C., Pollack, M.E., Rovelli, C., and Tsamardinos, I. (1997): The potential for the evolution of cooperation among web agents. International Journal of Computer-Human Systems. To appear.
  2. Bratman, M.E., Israel, D.J., and Pollack, M.E. (1988): Plans and resource-bounded practical reasoning. Computational Intelligence 4, 349–355.
  3. Dean, T., and Boddy, M. (1988): An analysis of time-dependent planning, in Proceedings of the Seventh National Conference on Artificial Intelligence (AAAI-88), St. Paul, MN, 49–54.
  4. Durfee, E.H., Lesser, V.R., and Corkill, D.D. (1987): Cooperation through communication in a distributed problem solving network, in M.N. Huhns (ed.), Distributed Artificial Intelligence, Morgan Kaufmann Publishers, 29–58.
  5. Ephrati, E., Pollack, M.E., and Ur, S. (1995): Deriving multi-agent coordination through filtering strategies, in Proceedings of the 14th International Joint Conference on Artificial Intelligence (IJCAI), 679–685.
  6. Goldman, C., and Rosenschein, J.S. (1994): Emergent coordination through the use of cooperative state-changing rules, in Proceedings of the Twelfth National Conference on Artificial Intelligence (AAAI-94), 408–413.
  7. Hammond, K. (1989): Opportunistic memory, in Proceedings of the 11th International Joint Conference on Artificial Intelligence, Detroit, MI.
  8. Horvitz, E.J., Cooper, G.F., and Heckerman, D.E. (1989): Reflection and action under scarce resources: Theoretical principles and empirical study, in Proceedings of the Eleventh International Joint Conference on Artificial Intelligence, Detroit, MI, 1121–1127.
  9. Kolodner, J. (ed.) (1988): Proceedings of the DARPA Case-Based Reasoning Workshop. Morgan Kaufmann.
  10. Moses, Y., and Tennenholtz, M. (1990): Artificial social systems, part I: Basic principles. Technical Report CS90-12, Weizmann Institute, Rehovot, Israel.
  11. Moses, Y., and Tennenholtz, M. (1992): On computational aspects of artificial social systems, in Proceedings of the 11th International Workshop on Distributed Artificial Intelligence, Glen Arbor, Michigan, 267–283.
  12. Pollack, M.E. (1991): Overloading intentions for efficient practical reasoning. Noûs 25(4), 513–536.
  13. Pollack, M.E. (1992): The uses of plans. Artificial Intelligence 57, 43–68.
  14. Pollack, M.E., Joslin, D., Nunes, A., Ur, S., and Ephrati, E. (1994): Experimental investigation of an agent-commitment strategy. Technical Report 94-31, Univ. of Pittsburgh Dept. of Computer Science, Pittsburgh, PA.
  15. Pollack, M.E., and Ringuette, M. (1990): Introducing the Tileworld: Experimentally evaluating agent architectures, in Proceedings of the Eighth National Conference on Artificial Intelligence, Boston, MA, 183–189.
  16. Rosenschein, J.S., and Zlotkin, G. (1994): Rules of Encounter. MIT Press, Cambridge, MA.
  17. Russell, S.J., and Wefald, E.H. (1991): Do the Right Thing. MIT Press, Cambridge, MA.
  18. Simon, H.A. (1957): Models of Man. Macmillan Press, New York.
  19. Sycara, K.P. (1988): Resolving goal conflicts via negotiation, in Proceedings of the Seventh National Conference on Artificial Intelligence (AAAI-88), 245–250.
  20. Zilberstein, S. (1993): Operational rationality through compilation of anytime algorithms. Ph.D. Dissertation, Computer Science Division, University of California, Berkeley.

Copyright information

© Springer Science+Business Media Dordrecht 1998

Authors and Affiliations

  • M. E. Pollack
  1. Department of Computer Science and Intelligent Systems Program, University of Pittsburgh, Pittsburgh, USA
