Mixed-Policy Asynchronous Deep Q-Learning

Conference paper

Part of the book series: Advances in Intelligent Systems and Computing (AISC, volume 694)

Abstract

There are many open issues and challenges in the reinforcement learning field, such as handling high-dimensional environments. Function approximators, such as deep neural networks, have been used successfully in both single- and multi-agent environments with high-dimensional state spaces. The multi-agent learning paradigm faces additional problems, due to the effect of several agents learning simultaneously in the environment. One of its main concerns is how to learn mixed policies that prevent opponents from exploiting them in competitive environments, thus achieving a Nash equilibrium. We propose an extension to the deep-learning paradigm of several algorithms that achieve Nash equilibria in single-state games. We compare their deep-learning and table-based implementations, and demonstrate how the Weighted Policy Learner (WPL) algorithm is able to achieve an equilibrium strategy in a complex environment, where agents must find each other in an infinite-state game and then play a modified version of the Rock-Paper-Scissors game.
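As a concrete illustration of the family of single-state algorithms the paper extends, below is a minimal sketch of the tabular Weighted Policy Learner in self-play on standard Rock-Paper-Scissors. This is not the paper's deep asynchronous variant: the step sizes, episode count, and helper names (wpl_update, payoff) are illustrative assumptions.

import numpy as np

def wpl_update(pi, q, eta):
    # WPL weights the policy gradient by (1 - pi[a]) when an action's
    # advantage is positive and by pi[a] when it is negative.
    v = pi @ q                                   # expected value under pi
    delta = q - v                                # per-action advantage
    pi = pi + eta * delta * np.where(delta > 0, 1.0 - pi, pi)
    pi = np.clip(pi, 1e-6, None)                 # crude projection back onto the simplex
    return pi / pi.sum()

# Row player's payoff: entry [i, j] is the reward of action i against action j.
payoff = np.array([[0., -1., 1.],
                   [1., 0., -1.],
                   [-1., 1., 0.]])

rng = np.random.default_rng(0)
pi_a = np.ones(3) / 3                            # start from the uniform policy
pi_b = np.ones(3) / 3
q_a = np.zeros(3)
q_b = np.zeros(3)
alpha, eta = 0.1, 0.01                           # assumed step sizes

for _ in range(20000):
    a = rng.choice(3, p=pi_a)
    b = rng.choice(3, p=pi_b)
    # In a single-state game the Q-values reduce to running reward averages.
    q_a[a] += alpha * (payoff[a, b] - q_a[a])
    q_b[b] += alpha * (-payoff[a, b] - q_b[b])   # zero-sum: opponent gets the negation
    pi_a = wpl_update(pi_a, q_a, eta)
    pi_b = wpl_update(pi_b, q_b, eta)

print(pi_a, pi_b)  # both should end near the mixed equilibrium (1/3, 1/3, 1/3)

The asymmetric weighting is the heart of WPL: updates shrink as an action's probability approaches the boundary of the simplex, which damps the cyclic dynamics of plain gradient ascent and lets both policies settle near the mixed (1/3, 1/3, 1/3) Nash equilibrium instead of orbiting it.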



Acknowledgements

The first author is supported by FCT (Portuguese Foundation for Science and Technology) under grant PD/BD/113963/2015. This research was partially supported by IEETA and LIACC. The work was also funded by project EuRoC, reference 608849, from the FP7-2013-NMP-ICT-FOF call.

Author information


Correspondence to David Simões.


Copyright information

© 2018 Springer International Publishing AG

About this paper


Cite this paper

Simões, D., Lau, N., Reis, L.P. (2018). Mixed-Policy Asynchronous Deep Q-Learning. In: Ollero, A., Sanfeliu, A., Montano, L., Lau, N., Cardeira, C. (eds) ROBOT 2017: Third Iberian Robotics Conference. ROBOT 2017. Advances in Intelligent Systems and Computing, vol 694. Springer, Cham. https://doi.org/10.1007/978-3-319-70836-2_11


  • DOI: https://doi.org/10.1007/978-3-319-70836-2_11

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-70835-5

  • Online ISBN: 978-3-319-70836-2

  • eBook Packages: Engineering, Engineering (R0)
