Efficient Parallel Learning in Classifier Systems

  • U. Hartmann
Conference paper


Classifier systems are simple production systems working on binary messages of a fixed length. Genetic algorithms are employed in classifier systems in order to discover new classifiers. We use methods from computational complexity theory to analyse the inherent difficulty of learning in classifier systems, so our results do not depend on any particular (possibly genetic) learning algorithm. The paper formalises this rule-discovery or learning problem for classifier systems, which has been proved to be hard in general. It is then proved that restrictions on two distinct learning problems lead to problems in NC, i.e. problems which are efficiently solvable in parallel.
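The paper's formalisation is not reproduced on this page; as a minimal sketch of the matching step in a classifier system (following the standard Holland formulation, not necessarily the paper's exact definitions), conditions are strings over {0, 1, #}, where '#' is a wildcard, and messages are binary strings of the same fixed length:

```python
# Minimal sketch of classifier-condition matching, following the standard
# Holland-style formulation: a condition is a string over {0, 1, #} ('#'
# is a wildcard) and a message is a binary string of the same length.
# The classifier/message names below are illustrative, not from the paper.

def matches(condition: str, message: str) -> bool:
    """Return True if the ternary condition matches the binary message."""
    return len(condition) == len(message) and all(
        c == '#' or c == m for c, m in zip(condition, message)
    )

# A classifier fires when its condition matches some message on the
# current message list; fired classifiers post their action messages.
classifiers = {"1#0": "111", "0##": "011"}  # condition -> action message
message_list = ["100", "110"]
fired = [action for cond, action in classifiers.items()
         if any(matches(cond, msg) for msg in message_list)]
```

Since each condition bit can be checked against each message bit independently, this matching step is a natural candidate for the kind of parallel (PRAM) treatment the paper analyses.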


Keywords: Classifier Condition, Classifier System, Disjunctive Normal Form, Probably Approximately Correct, Parallel Random Access Machine



Copyright information

© Springer-Verlag/Wien 1993

Authors and Affiliations

  • U. Hartmann
    1. Department of Informatics, University of Dortmund, Essen 1, Germany
