Q-Learning and Parallelism in Evolutionary Rule Based Systems

  • Antonella Giani
  • Fabrizio Baiardi
  • Antonina Starita
Conference paper


We present PANIC (Parallelism And Neural networks In Classifier systems), a parallel learning system that uses a genetic algorithm to evolve behavioral strategies encoded as sets of rules. The fitness of an individual is evaluated through a learning mechanism, QCA (Q-Credit Assignment), which assigns credit to individual rules. QCA evaluates a rule according to the context in which it is applied. This new mechanism, based on Q-learning and implemented through a multi-layer feed-forward neural network, has been devised to solve the rule sharing problem posed by traditional credit assignment methods. To overcome the heavy computational cost of this approach, we propose a decentralized and asynchronous parallel model of the genetic algorithm.
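To make the abstract's description more concrete, the sketch below combines a standard Q-learning update with a small feed-forward network that scores a rule together with the context in which it fires, which is the general kind of mechanism QCA is based on. This is a minimal illustrative sketch under our own assumptions: the `QNetwork` class, the `encode` helper, the layer sizes, and all hyperparameters are hypothetical and are not the authors' QCA or PANIC implementation.

```python
import numpy as np

class QNetwork:
    """Small feed-forward network mapping a (context, rule) encoding to a
    scalar Q-value estimate. Illustrative only: layer sizes, encoding and
    training details are assumptions, not the QCA implementation."""

    def __init__(self, n_inputs, n_hidden=16, lr=0.05, rng=None):
        rng = rng or np.random.default_rng(0)
        self.W1 = rng.normal(0.0, 0.1, (n_hidden, n_inputs))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.1, n_hidden)
        self.b2 = 0.0
        self.lr = lr

    def forward(self, x):
        h = np.tanh(self.W1 @ x + self.b1)   # hidden activations
        return h, self.W2 @ h + self.b2      # scalar Q estimate

    def predict(self, x):
        return self.forward(x)[1]

    def train_step(self, x, target):
        """One gradient step on 0.5 * (Q(x) - target)^2, backpropagated by hand."""
        h, q = self.forward(x)
        err = q - target
        gW2 = err * h                          # output-layer gradients
        gb2 = err
        gh = err * self.W2 * (1.0 - h ** 2)    # hidden-layer gradients through tanh
        gW1 = np.outer(gh, x)
        gb1 = gh
        self.W2 -= self.lr * gW2
        self.b2 -= self.lr * gb2
        self.W1 -= self.lr * gW1
        self.b1 -= self.lr * gb1


def q_update(net, encode, context, rule, reward, next_context, next_rules, gamma=0.9):
    """Q-learning style credit assignment for one rule firing:
    target = reward + gamma * max over rules applicable in the next context.
    `encode` is a hypothetical helper that turns a (context, rule) pair into a vector."""
    if next_rules:
        best_next = max(net.predict(encode(next_context, r)) for r in next_rules)
    else:
        best_next = 0.0
    net.train_step(encode(context, rule), reward + gamma * best_next)
```

Because the network conditions its estimate on the context as well as the rule, a rule shared by several behavioral sequences can receive different credit in each, which is the rule sharing issue the abstract attributes to traditional credit assignment schemes.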


Keywords: Genetic Algorithm · Reinforcement Learning · Rule Sharing · Learning Classifier System · Credit Assignment





Copyright information

© Springer-Verlag/Wien 1995

Authors and Affiliations

  • Antonella Giani (1)
  • Fabrizio Baiardi (1)
  • Antonina Starita (1)

  1. Dipartimento di Informatica, Università di Pisa, Pisa, Italy
