Learning Sensory-Motor Coordination by Experimentation and Reinforcement Learning

  • Christian Mannes
Part of the Informatik-Fachberichte book series (INFORMATIK, volume 252)


This work shows how a neural network can learn a motor control task by trial and error using a reinforcement learning scheme, exemplified by a system that learns to focus an “eye” on moving objects or on salient parts of pictures. No explicit knowledge about the details of the “oculomotor system” is used during training. The system described is embedded in an environment in which it acts. It can perceive the changes it causes in its environment and evaluates them with respect to a goal implicit in its architecture. The network arrives at its solutions by correlating visual input with random gestures (experimentation) through a reinforcement learning scheme that makes use of “heterosynaptic modulation,” as proposed by Reeke & Edelman (1989). Through learning, the performance of the system gradually improves, so that random move generation becomes obsolete. Simulations have shown that the system is able to learn to track moving objects, as well as to trace the contours of stationary pictures.
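The scheme described above — random motor "gestures" whose sensory consequences are evaluated against an implicit goal, with reward modulating the learned sensory-motor associations until exploration becomes unnecessary — can be illustrated with a deliberately simplified sketch. The code below is not the paper's network architecture; it is a minimal reward-modulated associative learner for a hypothetical one-dimensional gaze-control task, with all names, state/action ranges, and parameters chosen for illustration.

```python
import random

# Illustrative sketch: reward-modulated learning of 1-D gaze control.
# States: discretized retinal offset of the target. Actions: eye movements.
# These ranges and all parameter values are assumptions, not from the paper.
OFFSETS = range(-2, 3)
ACTIONS = range(-2, 3)

# Association strengths between a sensed offset and a motor action.
w = {(s, a): 0.0 for s in OFFSETS for a in ACTIONS}

def choose_action(s, epsilon):
    """Random 'gesture' with probability epsilon, else the best learned action."""
    if random.random() < epsilon:
        return random.choice(list(ACTIONS))
    return max(ACTIONS, key=lambda a: w[(s, a)])

def train(episodes=2000, lr=0.1):
    for t in range(episodes):
        # Exploration fades out as learning proceeds, so random move
        # generation gradually becomes obsolete.
        epsilon = max(0.05, 1.0 - t / 1000)
        s = random.choice(list(OFFSETS))   # target appears at some offset
        a = choose_action(s, epsilon)
        # Internal evaluation: positive reward only when the gaze lands
        # on the target (the goal implicit in the architecture).
        reward = 1.0 if a == s else -1.0
        # Reward-modulated correlation update of the sensory-motor weight.
        w[(s, a)] += lr * (reward - w[(s, a)])

random.seed(0)
train()
# The learned policy maps each sensed offset to the movement that cancels it.
policy = {s: max(ACTIONS, key=lambda a: w[(s, a)]) for s in OFFSETS}
print(policy)
```

The essential point the sketch shares with the system described here is the credit-assignment mechanism: no teacher supplies the correct movement; the system only receives a scalar evaluation of the consequences of its own randomly generated actions.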




  1. Carpenter G. A. & Grossberg S. (1987): A Massively Parallel Architecture for a Self-Organizing Neural Pattern Recognition Machine. Computer Vision, Graphics, and Image Processing 37, 54–115. Reprinted in: Neural Networks and Natural Intelligence. A Bradford Book, MIT Press, Cambridge, Mass. 1988.
  2. Edelman G. M. (1978): Group Selection and Phasic Reentrant Signalling: A Theory of Higher Brain Function. In: Edelman G. M., Mountcastle V. B. (Eds.): The Mindful Brain. MIT Press, Cambridge, Massachusetts.
  3. Kohonen T. (1982): Self-organized formation of topologically correct feature maps. Biological Cybernetics 43, 59–69.
  4. Kuperstein M. & Rubinstein J. (1989): Implementation of an Adaptive Neural Controller for Sensory-Motor Coordination. In: Pfeifer R., Schreter Z., Fogelman-Soulié F., Steels L. (Eds.): Connectionism in Perspective, pp. 49–61. Elsevier, Amsterdam.
  5. v. d. Malsburg C. (1973): Self-organization of orientation sensitive cells in the striate cortex. Kybernetik 14, 85–100.
  6. Pabon J. & Gossard D. (1988): Connectionist Networks for Learning Coordinated Motion in Autonomous Systems. In: Proc. AAAI 1988.
  7. Reeke G. N., Sporns O. & Edelman G. M. (1989): Synthetic Neural Modelling: Comparisons of Population and Connectionist Approaches. In: Pfeifer R., Schreter Z., Fogelman-Soulié F., Steels L. (Eds.): Connectionism in Perspective, pp. 113–139. Elsevier, Amsterdam.
  8. Rumelhart D. E. & Zipser D. (1986): Feature Discovery by Competitive Learning. In: Rumelhart D. E., McClelland J. L. (Eds.): Parallel Distributed Processing, Vol. 1. MIT Press, Cambridge, Massachusetts.
  9. Yamaguchi Y., Fukushima K., Yasuda M. & Nagata S. (1971): Electronic Retina. NHK Laboratories Note 141.

Copyright information

© Springer-Verlag Berlin Heidelberg 1990

Authors and Affiliations

  • Christian Mannes
  1. Austrian Research Institute for Artificial Intelligence, Austria
