
User Modeling, pp. 389–400

Agent Modeling in Antiair Defense

  • Sanguk Noh
  • Piotr J. Gmytrasiewicz
Part of the International Centre for Mechanical Sciences book series (CISM, volume 383)

Abstract

This research addresses rational decision making and coordination among antiair units whose mission is to defend a specified territory from a number of attacking missiles. The automated units have to decide which missiles to attempt to intercept, given the characteristics of the threat and the other units' anticipated actions, so as to minimize the expected overall damage to the defended territory. An automated defense unit therefore needs to model the other agents, either human or automated, that control the other defense batteries. For the purpose of this case study, we assume that the units cannot communicate among themselves, say, due to an imposed radio silence. We use the Recursive Modeling Method (RMM), which enables an agent to select its rational action by examining the expected utility of its alternative behaviors, and to coordinate with other agents by modeling their decision making in a distributed multiagent environment. We describe how decision making using RMM is applied to the antiair defense domain and present experimental results comparing the performance of coordinating teams consisting of RMM agents, of human agents, and of mixed RMM and human teams.
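The expected-utility computation that RMM performs can be sketched in miniature. In the hypothetical two-battery scenario below, the threat values, the shared hit probability, and the uniform level-0 model are all illustrative assumptions, not the paper's actual parameters: each unit picks the target that minimizes expected territorial damage, modeling the other unit's choice recursively down to a no-information uniform model at the bottom of the hierarchy.

```python
# Hypothetical scenario: three incoming missiles with assumed warhead
# values, and a single assumed interception probability per shot.
THREATS = {"m1": 10.0, "m2": 6.0, "m3": 3.0}
P_HIT = 0.8

def expected_damage(joint):
    """Expected total damage given the tuple of targets each unit fires at."""
    dmg = 0.0
    for missile, value in THREATS.items():
        shots = sum(1 for target in joint if target == missile)
        dmg += value * (1 - P_HIT) ** shots  # damage only if every shot misses
    return dmg

def best_response(others_dist):
    """Target minimizing expected damage, averaged over a probability
    distribution describing the other unit's choice."""
    def eu(my_target):
        return sum(p * expected_damage((my_target, other))
                   for other, p in others_dist.items())
    return min(THREATS, key=eu)

def rmm_choice(depth):
    """RMM-style sketch: at depth 0 the other unit is modeled as firing
    uniformly at random; each deeper level best-responds to the level below."""
    if depth == 0:
        uniform = {m: 1.0 / len(THREATS) for m in THREATS}
        return best_response(uniform)
    other = rmm_choice(depth - 1)
    return best_response({other: 1.0})
```

With these numbers, the level-0 model sends a unit after the biggest threat, while one level of recursion lets a unit anticipate exactly that and cover the second missile instead, so two units reasoning at adjacent depths split the targets without communicating.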

Keywords

Payoff Matrix · Learning Classifier System · Plan Recognition · Independent Team



Copyright information

© Springer-Verlag Wien 1997

Authors and Affiliations

  • Sanguk Noh
  • Piotr J. Gmytrasiewicz
  1. Department of Computer Science and Engineering, University of Texas at Arlington, USA
