
Averaged-A3C for Asynchronous Deep Reinforcement Learning

  • Conference paper

Neural Information Processing (ICONIP 2018)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 11303)

Abstract

In recent years, Deep Reinforcement Learning (DRL) has achieved unprecedented success in high-dimensional and large-scale tasks. However, the instability and variability of DRL algorithms strongly affect their performance. To alleviate this problem, the Asynchronous Advantage Actor-Critic (A3C) algorithm uses the advantage function to update the policy and value networks, but the advantage function itself still exhibits considerable variance. To reduce this variance, we propose a new A3C algorithm called Averaged Asynchronous Advantage Actor-Critic (Averaged-A3C). Averaged-A3C extends A3C by averaging previously learned state-value estimates when computing the advantage function, which yields a more stable training procedure and improved performance. We evaluate the new algorithm on several games from the Atari 2600 and MuJoCo environments. Experimental results show that Averaged-A3C effectively improves the agent's performance and the stability of the training process compared to the original A3C algorithm.
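
The abstract does not reproduce the paper's equations, but the averaging idea it describes can be sketched. The following is a minimal, hypothetical Python sketch (not the authors' code): it assumes the value estimates produced by the K most recent value-network snapshots are kept, averaged element-wise, and used as the baseline and bootstrap when forming the standard n-step A3C advantage.

# A minimal, hypothetical sketch of the averaging idea described in the abstract
# (not the authors' released code). It keeps the state-value estimates produced by
# the K most recent value-network snapshots, averages them element-wise, and uses
# the averaged values as baseline and bootstrap in the usual n-step A3C advantage.

from collections import deque
import numpy as np

class AveragedValueBaseline:
    """Stores the last K per-state value estimates and returns their mean."""

    def __init__(self, k=5):
        self.history = deque(maxlen=k)

    def add(self, values):
        # values: array of V(s_t), ..., V(s_{t+n}) from one value-network snapshot
        self.history.append(np.asarray(values, dtype=np.float64))

    def averaged(self):
        # Element-wise mean over the stored snapshots' estimates
        return np.mean(np.stack(self.history), axis=0)

def nstep_advantages(rewards, avg_values, gamma=0.99):
    """n-step advantage A_t = R_t - V_avg(s_t), where the n-step return R_t is
    bootstrapped with the averaged estimate of the final state, V_avg(s_{t+n})."""
    n = len(rewards)
    returns = np.zeros(n)
    running = avg_values[n]          # bootstrap from averaged V(s_{t+n})
    for t in reversed(range(n)):
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns - avg_values[:n]  # subtract averaged baseline V_avg(s_t)

# Example usage with dummy numbers (illustrative only):
baseline = AveragedValueBaseline(k=3)
for _ in range(3):                                  # three past snapshots
    baseline.add(np.random.randn(6))                # V(s_t), ..., V(s_{t+5})
adv = nstep_advantages(rewards=[1.0, 0.0, 0.0, 1.0, 0.0],
                       avg_values=baseline.averaged())

How the averaged estimate is maintained during asynchronous training, and the exact form of the averaged target, are defined in the full text of the paper.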



Author information

Corresponding author: Xiao-Fang Zhang



Copyright information

© 2018 Springer Nature Switzerland AG

About this paper


Cite this paper

Chen, S., Zhang, XF., Wu, JJ., Liu, D. (2018). Averaged-A3C for Asynchronous Deep Reinforcement Learning. In: Cheng, L., Leung, A., Ozawa, S. (eds) Neural Information Processing. ICONIP 2018. Lecture Notes in Computer Science, vol 11303. Springer, Cham. https://doi.org/10.1007/978-3-030-04182-3_25

  • DOI: https://doi.org/10.1007/978-3-030-04182-3_25

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-04181-6

  • Online ISBN: 978-3-030-04182-3

  • eBook Packages: Computer Science, Computer Science (R0)
