
Coordinating Randomized Policies for Increasing Security in Multiagent Systems

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 4324)

Abstract

Despite significant recent advances in decision-theoretic frameworks for reasoning about multiagent teams, little attention has been paid to applying such frameworks in adversarial domains, where the agent team may face security threats from other agents. This paper focuses on domains where such threats come from unseen adversaries whose actions and payoffs are unknown. In such domains, action randomization is recognized as a key technique for degrading an adversary’s ability to predict and exploit the actions of an agent or agent team. Unfortunately, randomization raises two key challenges. First, it can reduce the expected reward (quality) of the team’s plans, so we must provide guarantees on that reward. Second, it can cause miscoordination within the team. While communication within an agent team can alleviate miscoordination, in many real domains communication is unavailable or only scarcely available. To address these challenges, this paper provides the following contributions. First, we recall the Multiagent Constrained MDP (MCMDP) framework, which enables policy generation for a team of agents where each agent has limited or no communication resources. Second, since randomized policies generated directly for MCMDPs lead to miscoordination, we introduce a transformation algorithm that converts the MCMDP into a transformed MCMDP incorporating explicit communication and no-communication actions. Third, we show that incorporating randomization yields a non-linear program, and that the unavailability or limited availability of communication adds non-convex constraints to that program. Finally, we experimentally illustrate the benefits of our approach.
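For concreteness, the single-agent building block behind such randomization can be sketched as an entropy-maximizing non-linear program over the MDP's occupancy measures x(s,a) (the dual LP variables), with a floor E_min on expected reward. This follows the policy-randomization formulation of Paruchuri et al. (AAMAS 2006) that the abstract recalls; the notation below (P, r, α, γ, E_min) is standard MDP notation assumed for illustration, not taken from this paper:

$$
\begin{aligned}
\max_{x \ge 0}\quad & -\sum_{s,a} x(s,a)\,\log\frac{x(s,a)}{\sum_{a'} x(s,a')} \\
\text{s.t.}\quad & \sum_{a} x(s,a) - \gamma \sum_{s',a'} P(s \mid s',a')\,x(s',a') = \alpha(s) \quad \forall s, \\
& \sum_{s,a} x(s,a)\,r(s,a) \;\ge\; E_{\min},
\end{aligned}
$$

with the randomized policy recovered as $\pi(a \mid s) = x(s,a)/\sum_{a'} x(s,a')$. The entropy objective is what makes the program non-linear; in the team (MCMDP) setting, tying the agents' flow variables together through scarce communication resources is what adds the non-convex constraints mentioned above. A minimal numerical sketch of the single-agent program, with made-up transitions and rewards and a generic SLSQP solver (an assumption-laden illustration, not the paper's algorithm):

```python
# Hedged sketch: maximize occupancy-weighted policy entropy in a toy
# discounted MDP, subject to Bellman flow constraints and a floor on
# expected discounted reward. All numbers are invented for illustration.
import numpy as np
from scipy.optimize import minimize

S, A = 2, 2                      # states, actions (toy sizes)
gamma = 0.9                      # discount factor
alpha = np.array([0.5, 0.5])     # initial state distribution
# P[s_next, s, a]: probability of reaching s_next from s under action a
P = np.zeros((S, S, A))
P[:, 0, 0] = [0.8, 0.2]; P[:, 0, 1] = [0.3, 0.7]
P[:, 1, 0] = [0.5, 0.5]; P[:, 1, 1] = [0.1, 0.9]
r = np.array([[1.0, 0.2],        # r[s, a]: immediate rewards
              [0.0, 0.8]])
E_min = 4.0                      # required expected discounted reward

def neg_entropy(x_flat):
    x = x_flat.reshape(S, A)
    pi = x / x.sum(axis=1, keepdims=True)     # policy pi(a|s)
    return (x * np.log(pi + 1e-12)).sum()     # negative weighted entropy

def flow(x_flat):
    # sum_a x(s,a) - gamma * sum_{s',a'} P(s|s',a') x(s',a') - alpha(s) = 0
    x = x_flat.reshape(S, A)
    inflow = np.einsum('ska,ka->s', P, x)
    return x.sum(axis=1) - gamma * inflow - alpha

constraints = [
    {'type': 'eq',   'fun': flow},
    {'type': 'ineq', 'fun': lambda x: (x.reshape(S, A) * r).sum() - E_min},
]
x0 = np.full(S * A, (1.0 / (1.0 - gamma)) / (S * A))   # uniform start
res = minimize(neg_entropy, x0, method='SLSQP', constraints=constraints,
               bounds=[(1e-9, None)] * (S * A))
pi = res.x.reshape(S, A)
pi /= pi.sum(axis=1, keepdims=True)
print("randomized policy pi(a|s):\n", pi)
```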





Copyright information

© 2009 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Paruchuri, P., Tambe, M., Ordóñez, F., Kraus, S. (2009). Coordinating Randomized Policies for Increasing Security in Multiagent Systems. In: Barley, M., Mouratidis, H., Unruh, A., Spears, D., Scerri, P., Massacci, F. (eds) Safety and Security in Multiagent Systems. Lecture Notes in Computer Science, vol 4324. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-04879-1_14


  • DOI: https://doi.org/10.1007/978-3-642-04879-1_14

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-04878-4

  • Online ISBN: 978-3-642-04879-1

  • eBook Packages: Computer Science, Computer Science (R0)
