Reinforcement Learning for Rule Generation

  • D. Vogiatzis
  • A. Stafylopatis
Conference paper


The algorithm extracts propositional rules from a labeled data set. The constituent parts of a rule are the features of the data set, each accompanied by an interval of activation, together with a label denoting the class. Initially, the input space is partitioned into tiles. The algorithm then composes the largest possible orthogonal intervals out of these tiles. Once intervals have been created for each feature, the rule receives credit according to its classification ability, and this credit is used to improve the rule. We have obtained encouraging results on five classification problems: the Iris data set, the concentric data, the four Gaussians, the Pima Indians data set, and the image segmentation data set.
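The tiling-and-merging idea outlined in the abstract can be sketched as follows. This is a minimal illustration, not the authors' exact procedure: the equal-width tiling, the choice of which tiles to merge, and the accuracy-based credit signal are all assumptions made for the example.

```python
import numpy as np

def make_tiles(lo, hi, n_tiles):
    """Partition a feature's range [lo, hi] into equal-width tiles."""
    edges = np.linspace(lo, hi, n_tiles + 1)
    return list(zip(edges[:-1], edges[1:]))

def rule_credit(intervals, X, y, target_class):
    """Credit a rule by its classification accuracy (an assumed credit signal).

    The rule fires when every feature value falls inside its interval,
    predicting `target_class`; otherwise it predicts the other class
    (binary problem assumed here).
    """
    fires = np.ones(len(X), dtype=bool)
    for j, (lo, hi) in enumerate(intervals):
        fires &= (X[:, j] >= lo) & (X[:, j] <= hi)
    predicted = np.where(fires, target_class, 1 - target_class)
    return float(np.mean(predicted == y))

# Toy data: class 0 inside the box |x| <= 0.5 on both features, class 1 outside.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
y = (np.abs(X).max(axis=1) > 0.5).astype(int)

# A candidate rule built from merged adjacent tiles on each feature:
tiles = make_tiles(-1.0, 1.0, 8)           # eight tiles per feature
interval = (tiles[2][0], tiles[5][1])      # merge tiles 2..5 -> [-0.5, 0.5]
credit = rule_credit([interval, interval], X, y, target_class=0)
```

A search over which adjacent tiles to merge, driven by this credit, would then grow each interval as large as possible without degrading classification ability.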


Keywords: Hidden Layer · Input Space · Rule Extraction · Classification Ability · Eligibility Trace
These keywords were added by machine and not by the authors.





Copyright information

© Springer-Verlag Wien 2001

Authors and Affiliations

  • D. Vogiatzis (1)
  • A. Stafylopatis (1)
  1. Department of Electrical and Computer Engineering, National Technical University of Athens, Athens, Greece
