Reinforcement Learning: Insights from Interesting Failures in Parameter Selection
We investigate reinforcement learning methods, specifically the temporal difference learning algorithm TD(λ), on game-learning tasks. Small modifications in algorithm setup and parameter choice can determine success or failure to learn. We demonstrate that small differences in the input features significantly influence the learning process. By selecting the right feature set, we obtained good results within only 1/100 of the learning steps reported in the literature. We develop different metrics for measuring success in a reproducible manner, and we discuss why linear output functions are often preferable to sigmoid output functions.
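The core TD(λ) update with a linear value function can be sketched as follows. This is a minimal illustration, not the paper's implementation; the step size, discount factor, trace-decay parameter, and feature vectors are illustrative assumptions.

```python
import numpy as np

def td_lambda_update(w, e, x, x_next, reward, alpha=0.1, gamma=1.0, lam=0.7):
    """One TD(lambda) step with linear function approximation.

    w      : weight vector of the linear value function V(s) = w . x(s)
    e      : eligibility trace vector (same shape as w)
    x      : feature vector of the current state (assumed encoding)
    x_next : feature vector of the successor state
    Returns the updated (w, e).  alpha, gamma, lam are example settings.
    """
    v = w @ x                              # value estimate of current state
    v_next = w @ x_next                    # value estimate of successor state
    delta = reward + gamma * v_next - v    # TD error
    e = gamma * lam * e + x                # accumulate eligibility trace
    w = w + alpha * delta * e              # credit earlier states via the trace
    return w, e
```

With a linear output, the gradient of the value estimate is simply the feature vector itself, which is one reason linear outputs keep the update cheap and well-conditioned compared to a sigmoid output.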
Keywords: Neuron, Strategic Game, Learning Agent, Board Position, Reinforcement Learning Agent