Abstract
We consider the problem of online learning in a Markov decision process (MDP) with finite states but continuous actions. This setting generalizes both the traditional problem of learning an MDP with finitely many states and actions, and the so-called continuum-armed bandit problem, which has continuous actions but no state. Building on previous work for these two problems, we propose a new algorithm that dynamically discretizes the action spaces and learns to play strategies over the discretized actions, refining them as time evolves. With high probability, our algorithm achieves a T-step regret of order roughly \(T^{\frac{d+1}{d+2}}\), where d is a near-optimality dimension we introduce to capture the hardness of learning the MDP.
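The full algorithm is in the paper itself, but the idea the abstract describes, dynamically discretizing a continuous action space and playing optimistically over the current discretization, can be sketched in the stateless special case (the continuum-armed bandit). The following is a minimal illustrative sketch, not the authors' method: the names (`Cell`, `adaptive_discretization`), the midpoint-play rule, and the split-after-k-pulls refinement rule are all assumptions made for the example.

```python
import math

class Cell:
    """A subinterval of the action space [0, 1] with running reward statistics."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
        self.pulls = 0
        self.total = 0.0

    def ucb(self, t):
        # Unsampled cells are tried first; the width term (hi - lo) upper-bounds
        # the discretization error under a Lipschitz assumption on the reward.
        if self.pulls == 0:
            return float("inf")
        mean = self.total / self.pulls
        bonus = math.sqrt(2 * math.log(t) / self.pulls)
        return mean + bonus + (self.hi - self.lo)

def adaptive_discretization(reward_fn, horizon, split_after=20, min_width=1e-3):
    """UCB over a dynamically refined partition of [0, 1] (illustrative sketch)."""
    cells = [Cell(0.0, 1.0)]
    for t in range(1, horizon + 1):
        cell = max(cells, key=lambda c: c.ucb(t))
        reward = reward_fn((cell.lo + cell.hi) / 2)  # play the cell's midpoint
        cell.pulls += 1
        cell.total += reward
        # Refine a well-sampled cell, so the discretization becomes finer
        # exactly in the regions the algorithm keeps playing.
        if cell.pulls >= split_after and cell.hi - cell.lo > min_width:
            mid = (cell.lo + cell.hi) / 2
            cells.remove(cell)
            cells.extend([Cell(cell.lo, mid), Cell(mid, cell.hi)])
    # Return the cell with the best empirical mean reward.
    return max(cells, key=lambda c: c.total / max(c.pulls, 1))
```

For a reward function peaked at 0.7, e.g. `adaptive_discretization(lambda a: 1.0 - abs(a - 0.7), 2000)`, the returned cell's midpoint lands near 0.7: cells far from the peak are split rarely, while cells near it are split repeatedly, which is what drives the regret's dependence on a near-optimality dimension rather than on a uniform discretization grid.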
© 2015 Springer International Publishing Switzerland
Cite this paper
Hong, Y.-T., Lu, C.-J. (2015). Online Learning in Markov Decision Processes with Continuous Actions. In: Chaudhuri, K., Gentile, C., Zilles, S. (eds) Algorithmic Learning Theory. ALT 2015. Lecture Notes in Computer Science, vol 9355. Springer, Cham. https://doi.org/10.1007/978-3-319-24486-0_20
Print ISBN: 978-3-319-24485-3
Online ISBN: 978-3-319-24486-0
eBook Packages: Computer Science (R0)