Abstract
Despite significant recent advances in decision-theoretic frameworks for reasoning about multiagent teams, little attention has been paid to applying such frameworks in adversarial domains, where the agent team may face security threats from other agents. This paper focuses on domains where such threats are caused by unseen adversaries whose actions or payoffs are unknown. In such domains, action randomization is recognized as a key technique for degrading an adversary's ability to predict and exploit an agent's or agent team's actions. Unfortunately, such randomization poses two key challenges. First, randomization can reduce the expected reward (quality) of the agent team's plans, so we must provide guarantees on such rewards. Second, randomization leads to miscoordination within teams. While communication within an agent team can help alleviate this miscoordination, in many real domains communication is unavailable or only scarcely available. To address these challenges, this paper provides the following contributions. First, we recall the Multiagent Constrained MDP (MCMDP) framework, which enables policy generation for a team of agents where each agent may have a limited (communication) resource or none at all. Second, since randomized policies generated directly for MCMDPs lead to miscoordination, we introduce a transformation algorithm that converts the MCMDP into a transformed MCMDP incorporating explicit communication and no-communication actions. Third, we show that incorporating randomization results in a non-linear program, and that the unavailability or limited availability of communication adds non-convex constraints to this non-linear program. Finally, we experimentally illustrate the benefits of our work.
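The trade-off the abstract describes, randomization buys unpredictability but can cost expected reward, can be sketched in miniature. The following is a toy illustration, not the paper's MCMDP formulation: for a single state with hypothetical per-action rewards, we find the most-random (maximum-entropy) action distribution whose expected reward stays above a chosen threshold, solved as a small non-linear program. All numbers and names here (`rewards`, `beta`) are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical per-action rewards and a required expected-reward floor.
rewards = np.array([10.0, 6.0, 2.0])
beta = 7.0

def neg_entropy(p):
    """Negative Shannon entropy; minimizing this maximizes randomness."""
    p = np.clip(p, 1e-12, 1.0)
    return np.sum(p * np.log(p))

constraints = [
    # Probabilities must sum to one (simplex constraint).
    {"type": "eq", "fun": lambda p: np.sum(p) - 1.0},
    # Expected reward must meet the guarantee: r . p >= beta.
    {"type": "ineq", "fun": lambda p: rewards @ p - beta},
]

p0 = np.ones(3) / 3  # start from the uniform (most random) policy
res = minimize(neg_entropy, p0, bounds=[(0.0, 1.0)] * 3,
               constraints=constraints)
p = res.x
print("policy:", p, "expected reward:", rewards @ p)
```

Because the uniform policy here earns only 6.0 in expectation, the reward constraint binds and the solver shifts probability toward the high-reward action while keeping the policy as random as the guarantee allows. In the paper's setting this program is harder: the decision variables are multiagent policies, and limited communication adds non-convex constraints.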
© 2009 Springer-Verlag Berlin Heidelberg
Cite this paper
Paruchuri, P., Tambe, M., Ordóñez, F., Kraus, S. (2009). Coordinating Randomized Policies for Increasing Security in Multiagent Systems. In: Barley, M., Mouratidis, H., Unruh, A., Spears, D., Scerri, P., Massacci, F. (eds) Safety and Security in Multiagent Systems. Lecture Notes in Computer Science(), vol 4324. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-04879-1_14
Print ISBN: 978-3-642-04878-4
Online ISBN: 978-3-642-04879-1