Incremental Acquisition of Complex Behaviour by Structured Evolution

  • S. Perkins
  • G. Hayes
Conference paper


In practice, general-purpose learning algorithms are not sufficient by themselves to allow robots to acquire complex skills — domain knowledge from a human designer is needed to bias the learning in order to achieve success. In this paper we argue that there are good ways and bad ways of supplying this bias and we present a novel evolutionary architecture that supports our particular approach. Results from preliminary experiments are presented in which we attempt to evolve a simple tracking behaviour in simulation.







Copyright information

© Springer-Verlag Wien 1998

Authors and Affiliations

  • S. Perkins (1)
  • G. Hayes (1)
  1. Department of Artificial Intelligence, University of Edinburgh, Edinburgh, Scotland
