Abstract
Among the many variants of reinforcement learning (RL), an important class of problems is those in which the state and action spaces are continuous. Autonomous robots, autonomous vehicles, and optimal control are examples of such problems that lend themselves naturally to RL-based algorithms. In this paper, we introduce a prioritized form of a combination of state-of-the-art approaches, Deep Q-learning (DQN) and Deep Deterministic Policy Gradient (DDPG), to outperform earlier results on continuous state and action space problems. Our experiments also employ parameter noise during training, yielding deep RL models that are more robust and that significantly outperform the earlier results. We believe these results are a valuable addition to work on continuous state and action space problems.
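The abstract names two algorithmic ingredients, prioritized experience replay and parameter-space noise, without showing them. Below is a minimal NumPy sketch of both, assuming a proportional replay buffer and Gaussian perturbation of the actor's weight vector; the names (PrioritizedReplayBuffer, perturb_parameters, adapt_sigma) and hyperparameter values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

class PrioritizedReplayBuffer:
    """Proportional prioritized experience replay (Schaul et al., 2015).

    Transitions are sampled with probability p_i^alpha / sum_k p_k^alpha,
    where p_i = |TD error| + eps; importance-sampling weights correct the
    bias this sampling introduces into the critic update.
    """

    def __init__(self, capacity, alpha=0.6, eps=1e-6):
        self.capacity = capacity
        self.alpha = alpha
        self.eps = eps
        self.storage = []
        self.priorities = np.zeros(capacity, dtype=np.float64)
        self.pos = 0

    def add(self, transition):
        # New transitions get the current maximum priority so they are
        # guaranteed to be replayed at least once.
        max_prio = self.priorities.max() if self.storage else 1.0
        if len(self.storage) < self.capacity:
            self.storage.append(transition)
        else:
            self.storage[self.pos] = transition
        self.priorities[self.pos] = max_prio
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size, beta=0.4):
        prios = self.priorities[: len(self.storage)] ** self.alpha
        probs = prios / prios.sum()
        idx = np.random.choice(len(self.storage), batch_size, p=probs)
        # Importance-sampling weights, normalized by the maximum weight.
        weights = (len(self.storage) * probs[idx]) ** (-beta)
        weights /= weights.max()
        batch = [self.storage[i] for i in idx]
        return batch, idx, weights

    def update_priorities(self, idx, td_errors):
        self.priorities[idx] = np.abs(td_errors) + self.eps


def perturb_parameters(params, sigma, rng=np.random):
    """Parameter-space noise (Plappert et al., 2017): perturb the actor's
    weight vector directly instead of adding noise to its actions."""
    return params + rng.normal(0.0, sigma, size=params.shape)


def adapt_sigma(sigma, action_distance, target_distance=0.2, factor=1.01):
    """Grow or shrink sigma so the perturbed policy's actions stay roughly
    target_distance away from the unperturbed policy's actions."""
    return sigma * factor if action_distance < target_distance else sigma / factor
```

In a DDPG-style training loop, the critic's TD errors would be fed back through update_priorities, the perturbed copy of the actor would collect experience while the unperturbed actor is used for evaluation, and sigma would be adapted from the measured distance between the two policies' actions.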
Copyright information
© 2019 Springer Nature Switzerland AG
About this paper
Cite this paper
Mangannavar, R., Srinivasaraghavan, G. (2019). Learning Agents with Prioritization and Parameter Noise in Continuous State and Action Space. In: Lu, H., Tang, H., Wang, Z. (eds.) Advances in Neural Networks – ISNN 2019. Lecture Notes in Computer Science, vol. 11554. Springer, Cham. https://doi.org/10.1007/978-3-030-22796-8_22
DOI: https://doi.org/10.1007/978-3-030-22796-8_22
Published:
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-22795-1
Online ISBN: 978-3-030-22796-8
eBook Packages: Computer Science, Computer Science (R0)