Reinforcement Learning in MirrorBot
For this special session on EU projects in the area of NeuroIT, we review the progress of the MirrorBot project, with particular emphasis on its relation to reinforcement learning and on future perspectives. Models inspired by cortical mirror neurons not only enable a system to understand its own actions; they also help to overcome the curse of dimensionality in reinforcement learning. Reinforcement learning, which is primarily linked to the basal ganglia, is a powerful method for teaching an agent such as a robot a goal-directed action strategy. Its main limitation is that the perceived situation must be mapped to a state space, which grows exponentially with input dimensionality. Cortex-inspired computation can alleviate this problem by pre-processing sensory information and by supplying motor primitives that serve as modules for a superordinate reinforcement learning scheme.
Keywords: State Space · Motor Cortex · Motor Unit · Reinforcement Learning · Target Object
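The division of labour described in the abstract can be sketched in code: a cortex-like pre-processing stage compresses a high-dimensional observation into a small discrete state space, and the reinforcement learner selects among motor primitives rather than low-level motor commands, so the Q-table stays tiny. This is a minimal toy illustration, not the MirrorBot implementation; the state mapping, the primitive names, and the environment dynamics are all invented for the example.

```python
import random

N_STATES = 4                                     # abstract states after pre-processing (assumption)
PRIMITIVES = ["approach", "grasp", "retract"]    # hypothetical motor primitives
GOAL_STATE = 3

def preprocess(raw):
    """Map a high-dimensional raw observation to a compact state index.
    Stands in for cortex-inspired pre-processing (toy assumption)."""
    return int(sum(raw)) % N_STATES

def step(state, action):
    """Toy environment: 'approach' advances toward the goal state,
    other primitives leave it unchanged; reward 1.0 only at the goal."""
    if action == "approach":
        state = min(state + 1, GOAL_STATE)
    return state, (1.0 if state == GOAL_STATE else 0.0)

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning over (abstract state, motor primitive) pairs."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in PRIMITIVES}
    for _ in range(episodes):
        state = 0
        for _ in range(10):
            # epsilon-greedy choice among motor primitives
            if rng.random() < epsilon:
                action = rng.choice(PRIMITIVES)
            else:
                action = max(PRIMITIVES, key=lambda a: q[(state, a)])
            nxt, reward = step(state, action)
            best_next = max(q[(nxt, a)] for a in PRIMITIVES)
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = nxt
            if state == GOAL_STATE:
                break
    return q

q = train()
policy = {s: max(PRIMITIVES, key=lambda a: q[(s, a)]) for s in range(N_STATES)}
```

Because the learner operates on a handful of abstract states and primitives, the Q-table has only `N_STATES * len(PRIMITIVES)` entries, whereas tabulating the raw sensory input would grow exponentially with its dimensionality.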