Abstract
We propose a motor control model based on reinforcement learning (RL). The model is inspired by organizational principles of the cerebral cortex, specifically cortical maps and the functional hierarchy of the brain's sensory and motor areas. Self-Organizing Maps (SOMs) have proven useful for modeling cortical topological maps. One SOM quantizes the input space in response to real-valued state information, and a second SOM represents the action space. We combine the Q-learning algorithm with a neighborhood update function over these maps, so that the Q-function need not be stored as a large table over a very large number of states or a continuous action space. The resulting model maps a continuous input space to a continuous action space.
© 2012 Springer-Verlag Berlin Heidelberg
Cite this paper
Uang, CH., Liou, JW., Liou, CY. (2012). Self-Organizing Reinforcement Learning Model. In: Pan, JS., Chen, SM., Nguyen, N.T. (eds) Intelligent Information and Database Systems. ACIIDS 2012. Lecture Notes in Computer Science, vol 7196. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-28487-8_22
Print ISBN: 978-3-642-28486-1
Online ISBN: 978-3-642-28487-8