
Q-learning in Evolutionary Rule Based Systems

  • Antonella Giani
  • Fabrizio Baiardi
  • Antonina Starita
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 866)

Abstract

PANIC (Parallelism And Neural networks In Classifier systems), an Evolutionary Rule Based System (ERBS) that evolves behavioral strategies encoded as sets of rules, is presented. PANIC assigns credit to rules through a new mechanism, Q-Credit Assignment (QCA), based on Q-learning. By taking into account the context in which a rule is applied, QCA is more accurate than classical methods when a single rule can fire in different situations. QCA is implemented through a multi-layer feed-forward neural network.
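The Q-learning update that QCA builds on can be illustrated with a minimal tabular sketch. Note that this is not the paper's implementation: PANIC approximates the Q-function with a multi-layer feed-forward network rather than a table, and the state/action names and parameters below are illustrative assumptions.

```python
# Minimal tabular Q-learning update (the rule underlying QCA).
# Q maps (state, action) pairs to estimated discounted return.
def q_update(Q, state, action, reward, next_state, actions,
             alpha=0.1, gamma=0.9):
    """Move Q(s, a) toward the TD target r + gamma * max_a' Q(s', a')."""
    best_next = max(Q.get((next_state, a), 0.0) for a in actions)
    old = Q.get((state, action), 0.0)
    # Temporal-difference error: target minus current estimate.
    td_error = reward + gamma * best_next - old
    Q[(state, action)] = old + alpha * td_error
    return Q[(state, action)]

# Hypothetical two-action example: after a reward of 1.0, the estimate
# for ("s0", "right") moves from 0.0 toward the target by a factor alpha.
Q = {}
q_update(Q, "s0", "right", 1.0, "s1", actions=["left", "right"])
```

In the paper's setting the table lookup is replaced by a neural network that receives the context (the situation in which the rule fires) as input, which is what lets QCA give the same rule different credit in different situations.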

Keywords

Evolutionary Rule · Credit Assignment · Effector Message · Temporal Difference Error · Temporal Difference Method



Copyright information

© Springer-Verlag Berlin Heidelberg 1994

Authors and Affiliations

  • Antonella Giani¹
  • Fabrizio Baiardi¹
  • Antonina Starita¹

  1. Dipartimento di Informatica, Università di Pisa, Pisa, Italy
