Evolutionary Optimization of Neural Networks for Reinforcement Learning Algorithms

  • H. Braun
  • T. Ragg
Conference paper


In this paper we study, to our knowledge for the first time, the combination of two powerful approaches: evolutionary topology optimization (ENZO) and temporal difference learning (TD(λ)). Temporal difference learning has proven to be a well-suited technique for learning strategies that solve reinforcement problems with neural network models, while evolutionary topology optimization is currently the most efficient network optimization technique. We demonstrate the power of the approach on two benchmarks, a labyrinth problem and the game Nine Men's Morris. We conclude that this combination of evolutionary and reinforcement learning algorithms is a suitable framework that exploits the advantages of both methods, leading to small, high-performing networks for reinforcement problems.
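
To make the temporal difference component of the abstract concrete, the following is a minimal sketch of TD(λ) value learning with eligibility traces on a toy corridor task. The task, the one-hot feature encoding, and all hyperparameters are assumptions chosen for this illustration; they stand in for the evolved network topologies and the benchmarks actually used by the authors.

# Minimal sketch of TD(lambda) with eligibility traces on a toy corridor task.
# Illustrative only: the task, features, and hyperparameters are assumptions,
# not the authors' ENZO + TD(lambda) setup.
import numpy as np

n_states = 7                     # corridor states 0..6; 0 and 6 are terminal
alpha, gamma, lam = 0.1, 1.0, 0.8

def features(s):
    """One-hot state encoding (stands in for a network's value output)."""
    x = np.zeros(n_states)
    x[s] = 1.0
    return x

w = np.zeros(n_states)           # weights of a linear value function
rng = np.random.default_rng(0)

for episode in range(200):
    s = 3                        # start in the middle of the corridor
    e = np.zeros(n_states)       # eligibility trace
    while s not in (0, n_states - 1):
        s_next = s + rng.choice([-1, 1])                 # random policy
        reward = 1.0 if s_next == n_states - 1 else 0.0
        v = w @ features(s)
        v_next = 0.0 if s_next in (0, n_states - 1) else w @ features(s_next)
        delta = reward + gamma * v_next - v              # TD error
        e = gamma * lam * e + features(s)                # accumulate trace
        w += alpha * delta * e                           # TD(lambda) update
        s = s_next

print(np.round(w[1:-1], 2))      # learned values of the non-terminal states

Run as-is, the learned values approach the true expected returns of the random walk (1/6, 2/6, ..., 5/6), illustrating how the TD error is propagated backwards through the eligibility trace.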







Copyright information

© Springer-Verlag Wien 1998

Authors and Affiliations

  • H. Braun 1
  • T. Ragg 1

  1. Institute of Logic, Complexity and Deduction Systems, University of Karlsruhe, Karlsruhe, Germany
