Reinforcement Learning Based on Extreme Learning Machine
The extreme learning machine (ELM) offers good generalization performance together with a simple structure and low computational cost. In this paper, these merits are exploited for reinforcement learning: using an ELM to approximate the Q function can increase learning speed. However, since the number of hidden-layer nodes equals the number of samples, a large sample set seriously slows learning. To address this, a rolling time-window mechanism is introduced into the algorithm, which reduces the size of the sample space to a certain extent. Finally, the proposed algorithm is compared on a boat problem with reinforcement learning based on a traditional BP neural network. Simulation results show that the proposed algorithm is faster and more effective.
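The approach described above can be sketched as follows: an ELM whose hidden-layer weights are random and fixed, whose output weights are solved in closed form via the Moore-Penrose pseudoinverse, and whose training set is bounded by a rolling time-window. All class and parameter names here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

class ELMQApproximator:
    """Minimal ELM regressor for Q-value targets with a rolling time-window."""

    def __init__(self, n_inputs, n_hidden, window=200, seed=0):
        rng = np.random.default_rng(seed)
        # Hidden-layer weights and biases are random and never trained.
        self.W = rng.normal(size=(n_inputs, n_hidden))
        self.b = rng.normal(size=n_hidden)
        self.beta = np.zeros(n_hidden)  # output weights, solved in closed form
        self.window = window            # rolling time-window size
        self.X, self.y = [], []         # sample buffer

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def add_sample(self, x, q_target):
        self.X.append(x)
        self.y.append(q_target)
        # Rolling time-window: discard the oldest samples so the
        # buffer (and hence the training cost) stays bounded.
        if len(self.X) > self.window:
            self.X.pop(0)
            self.y.pop(0)

    def fit(self):
        H = self._hidden(np.asarray(self.X))
        # ELM training step: output weights via the pseudoinverse,
        # a single least-squares solve rather than iterative backprop.
        self.beta = np.linalg.pinv(H) @ np.asarray(self.y)

    def predict(self, X):
        return self._hidden(np.asarray(X)) @ self.beta
```

Because training is one linear solve over at most `window` samples, refitting after each episode is cheap, which is the speed advantage over iterative BP training that the paper highlights.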
Keywords: Extreme learning machine · Neural network · Q learning · Rolling time-window · Boat problem