Agent Modeling in Antiair Defense
This research addresses rational decision making and coordination among antiair units whose mission is to defend a specified territory against a number of attacking missiles. The automated units must decide which missiles to attempt to intercept, given the characteristics of the threat and the other units' anticipated actions, so as to minimize the expected overall damage to the defended territory. An automated defense unit therefore needs to model the other agents, whether human or automated, that control the other defense batteries. For the purposes of this case study, we assume that the units cannot communicate among themselves, say, due to an imposed radio silence. We use the Recursive Modeling Method (RMM), which enables an agent to select its rational action by examining the expected utility of its alternative behaviors, and to coordinate with other agents by modeling their decision making in a distributed multiagent environment. We describe how decision making using RMM applies to the antiair defense domain and present experimental results comparing the performance of coordinating teams consisting of RMM agents, of human agents, and of mixed RMM and human agents.
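The RMM decision procedure described above can be sketched in a few lines: an agent evaluates the expected utility of each of its own actions against a probability distribution over the other agent's actions, and it derives that distribution by recursively modeling the other agent's decision making one level deeper. The code below is a minimal illustration, not the paper's implementation; the two-unit, two-missile scenario and all payoff numbers are hypothetical, and for simplicity both agents are assumed to share the same (symmetric) payoff matrix.

```python
# Illustrative sketch of Recursive Modeling Method (RMM) decision making.
# Hypothetical scenario: two defense units each pick one of two incoming
# missiles ("A" or "B") to intercept, without communicating.

ACTIONS = ["A", "B"]

# PAYOFF[(mine, other)] = expected damage avoided (hypothetical numbers).
# Covering different missiles is best; doubling up on one wastes a shot.
PAYOFF = {
    ("A", "A"): 3, ("A", "B"): 5,
    ("B", "A"): 5, ("B", "B"): 2,
}

def rmm_choice(depth):
    """Return (best_action, expected_utilities) at a given modeling depth.

    depth 0: no model of the other agent -- assume a uniform distribution
             over its actions.
    depth k: model the other agent as an RMM reasoner at depth k - 1 and
             predict it plays its best response (symmetric payoffs assumed).
    """
    if depth == 0:
        probs = {a: 1.0 / len(ACTIONS) for a in ACTIONS}
    else:
        other_best, _ = rmm_choice(depth - 1)
        probs = {a: (1.0 if a == other_best else 0.0) for a in ACTIONS}
    # Expected utility of each of my actions under the predicted distribution.
    eu = {mine: sum(probs[o] * PAYOFF[(mine, o)] for o in ACTIONS)
          for mine in ACTIONS}
    return max(eu, key=eu.get), eu

best, eu = rmm_choice(2)
```

At depth 0 the agent prefers "A" under a uniform model; at depth 1 it expects the other to take "A" and so switches to "B"; at depth 2 it expects the other to reason at depth 1 and take "B", so it returns to "A". This alternation is exactly the kind of nested "what the other expects me to do" reasoning that RMM makes explicit.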
Keywords: Payoff Matrix, Learning Classifier System, Plan Recognition, Independent Team