Abstract
Temporal difference (TD) learning is a form of approximate reinforcement learning that uses incremental learning updates. For large, stochastic and dynamic systems, however, it remains an open question how to analyse the convergence and parameter sensitivity of TD algorithms, since no general methodology exists. Moreover, such analysis is expensive: convergence and sensitivity metrics are typically obtained only by rerunning an experiment with different parameter values. In this paper, we apply the TD(λ) learning control algorithm with a linear function approximation technique known as tile coding to help a soccer agent learn optimal control processes. The aim of this paper is to propose a methodology for analysing performance so that a set of optimal parameter values for the TD(λ) learning algorithm can be selected adaptively.
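To make the abstract's setting concrete, the following is a minimal sketch of a TD(λ) value update over tile-coded features, assuming a one-dimensional state, accumulating eligibility traces, and binary features. The function names, tiling sizes, and parameter values are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def tile_features(x, n_tilings=4, n_tiles=8, lo=0.0, hi=1.0):
    """Map a scalar state x in [lo, hi] to one active tile index per
    tiling; each tiling is offset slightly to overlap the others.
    (Illustrative tile coding, not the paper's exact scheme.)"""
    feats = []
    width = (hi - lo) / n_tiles
    for t in range(n_tilings):
        offset = t * width / n_tilings
        idx = int((x - lo + offset) / width)
        idx = min(idx, n_tiles)  # offsets add one extra tile per tiling
        feats.append(t * (n_tiles + 1) + idx)
    return feats

def td_lambda_update(w, z, feats, reward, feats_next, alpha, gamma, lam):
    """One TD(lambda) prediction step with linear function approximation
    and accumulating eligibility traces over binary features."""
    v = sum(w[f] for f in feats)            # V(s) = w . phi(s)
    v_next = sum(w[f] for f in feats_next)  # V(s')
    delta = reward + gamma * v_next - v     # TD error
    z *= gamma * lam                        # decay all traces
    for f in feats:
        z[f] += 1.0                         # accumulate on active features
    w += alpha * delta * z                  # gradient-style weight update
    return w, z
```

The step size α, discount γ, and trace decay λ are exactly the parameters whose sensitivity the paper proposes to analyse: each combination would otherwise require a separate experimental run.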
Copyright information
© 2007 Springer-Verlag Berlin Heidelberg
Cite this paper
Leng, J., Jain, L., Fyfe, C. (2007). Convergence Analysis on Approximate Reinforcement Learning. In: Zhang, Z., Siekmann, J. (eds) Knowledge Science, Engineering and Management. KSEM 2007. Lecture Notes in Computer Science(), vol 4798. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-76719-0_12
Print ISBN: 978-3-540-76718-3
Online ISBN: 978-3-540-76719-0