XCS and the Monk’s Problems

  • Shaun Saxon
  • Alwyn Barry
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1813)

Abstract

It has been known for some time that Learning Classifier Systems (LCS) [15] have potential for application as Data Mining tools. Parodi and Bonelli [25] applied the Boole LCS [36] to the Lymphography data set and reported classification rates of 82%. More recent work, such as GA-Miner [10], has sought to extend the application of the GA-based classification system to larger commercial data sets, introducing more complex attribute encoding techniques, static niching, and hybrid genetic operators in order to address the problems presented by large search spaces. Despite these results, the traditional LCS formulation has shown itself to be unreliable in the formation of accurate optimal generalisations, which are vital for the reduction of results to a human-readable form. XCS [39,40] has been shown to be capable of generating a complete and optimally accurate mapping of a test environment [18] and therefore presents a new opportunity for the application of Learning Classifier Systems to the classification task in Data Mining. As part of a continuing research effort, this paper presents some first results in the application of XCS to a particular Data Mining task. It demonstrates that XCS is able to produce a classification performance and a rule set that exceed those of most current Machine Learning techniques when applied to the Monk’s problems [34].
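To make the generalisation capability referred to above concrete, the following is a minimal illustrative sketch (not the authors' implementation) of how an XCS-style classifier condition over the ternary alphabet {0, 1, #} matches binary-encoded instances; the 6-bit attribute encoding and the rule strings are hypothetical.

```python
# Minimal sketch of XCS-style ternary condition matching.
# The 6-bit encoding of two Monk's attributes below is illustrative only,
# not the encoding used in the paper.

def matches(condition: str, state: str) -> bool:
    """Return True if the ternary condition matches the binary state.

    '#' is the don't-care symbol: it matches either 0 or 1, which is how
    a single XCS rule can cover (generalise over) many input states.
    """
    return all(c == '#' or c == s for c, s in zip(condition, state))


if __name__ == "__main__":
    # Hypothetical instance: two attributes, 3 bits each.
    state = "010110"

    # A maximally general, still-accurate rule leaves irrelevant bits as '#'.
    general_rule = "01####"
    specific_rule = "010110"

    print(matches(general_rule, state))   # True: the rule covers 2**4 = 16 states
    print(matches(specific_rule, state))  # True: the rule covers exactly 1 state
```

Accurate optimal generalisations of this kind (maximally general conditions that remain consistently accurate) are what allow an evolved population to be reduced to a compact, human-readable rule set.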

Keywords

Genetic Algorithm, Data Mining, Machine Learning Technique, Data Mining Technique, Target Concept


References

  1. J. Bala and E. Bloedorn. Applying various AQ programs to the MONK’s problems: Results and brief descriptions of the methods. In The MONK’s Problems: A Performance Comparison of Different Learning Algorithms, 1991.
  2. Alwyn Barry. The XCS classifier system. Technical report, Faculty of Computer Science and Mathematics, University of the West of England, 1998.
  3. C. Blake, E. Keogh, and C. J. Merz. UCI Repository of Machine Learning Databases. http://www.ics.uci.edu/~mlearn/MLRepository.html, 1998. Irvine, CA: University of California, Dept. of Information and Computer Science.
  4. L. B. Booker, D. E. Goldberg, and J. H. Holland. Classifier systems and genetic algorithms. Artificial Intelligence, 40:235–282, September 1989.
  5. Lashon B. Booker. Triggered rule discovery in classifier systems. In J. David Schaffer, editor, Proceedings of the 3rd International Conference on Genetic Algorithms, pages 265–274, George Mason University, June 1989. Morgan Kaufmann.
  6. Lashon B. Booker. Representing attribute-based concepts in a classifier system. In Gregory J. E. Rawlins, editor, Foundations of Genetic Algorithms, pages 115–127. Morgan Kaufmann, San Mateo, 1991.
  7. L. Breiman, J. H. Friedman, R. A. Olshen, and C. J. Stone. Classification and Regression Trees. Wadsworth, 1984.
  8. Lawrence Davis. Handbook of Genetic Algorithms. Van Nostrand Reinhold, New York, 1991.
  9. U. M. Fayyad, G. Piatetsky-Shapiro, and P. Smyth. From data mining to knowledge discovery: An overview. In Advances in Knowledge Discovery and Data Mining, 1996.
  10. I. W. Flockhart. GA-Miner: Parallel data mining and hierarchical genetic algorithms. Technical Report EPCC-AIKMS-GA-MINER-REPORT 1.0, University of Edinburgh, 1995.
  11. A. Giordana and F. Neri. Search-intensive concept induction. Evolutionary Computation, 3(4):375–416, 1996.
  12. David E. Goldberg. Genetic Algorithms in Search, Optimization & Machine Learning. Addison-Wesley, Reading, MA, 1989.
  13. David P. Greene and Stephen F. Smith. Competition-based induction of decision models from examples. Machine Learning, 13:229–257, 1993.
  14. David P. Greene and Stephen F. Smith. Using coverage as a model-building constraint in learning classifier systems. Evolutionary Computation, 2(1), 1994.
  15. John H. Holland. Escaping brittleness: The possibilities of general-purpose learning algorithms applied to parallel rule-based systems. In Ryszard S. Michalski, Jaime G. Carbonell, and Tom M. Mitchell, editors, Machine Learning: An Artificial Intelligence Approach, volume 2, pages 593–623. Morgan Kaufmann, San Mateo, California, 1986.
  16. Cezary Z. Janikow and Zbigniew Michalewicz. An experimental comparison of binary and floating point representations in genetic algorithms. In Lashon B. Booker and Richard K. Belew, editors, Proceedings of the 4th International Conference on Genetic Algorithms, pages 31–36, San Diego, CA, July 1991. Morgan Kaufmann.
  17. G. V. Kass. An exploratory technique for investigating large quantities of categorical data. Applied Statistics, 29:119–127, 1980.
  18. Tim Kovacs. Evolving optimal populations with XCS classifier systems. Technical Report CS-96-17 and CSRP-96-17 (also Master’s thesis), School of Computer Science, University of Birmingham, UK, 1996.
  19. Tim Kovacs. XCS classifier system reliably evolves accurate, complete and minimal representations for Boolean functions. In Roy, Chawdhry, and Pant, editors, Soft Computing in Engineering Design and Manufacturing, pages 59–68. Springer-Verlag, London, 1997.
  20. H. Lu, R. Setiono, and H. Liu. NeuroRule: A connectionist approach to data mining. In Umeshwar Dayal, Peter M. D. Gray, and Shojiro Nishio, editors, VLDB ’95: Proceedings of the 21st International Conference on Very Large Data Bases, Zurich, Switzerland, September 11–15, 1995, pages 478–489. Morgan Kaufmann, 1995.
  21. D. Michie, D. J. Spiegelhalter, and C. C. Taylor, editors. Machine Learning, Neural and Statistical Classification. Ellis Horwood, 1994.
  22. J. M. O. Mitchell. Classical statistical methods. In D. Michie, D. J. Spiegelhalter, and C. C. Taylor, editors, Machine Learning, Neural and Statistical Classification, pages 17–28. Ellis Horwood, 1994.
  23. Tom M. Mitchell. Machine Learning. McGraw-Hill, 1997.
  24. R. Molina, N. Perez de la Blanca, and C. C. Taylor. Modern statistical techniques. In D. Michie, D. J. Spiegelhalter, and C. C. Taylor, editors, Machine Learning, Neural and Statistical Classification, pages 29–49. Ellis Horwood, 1994.
  25. A. Parodi and P. Bonelli. The animat and the physician. In J. A. Meyer and S. W. Wilson, editors, From Animals to Animats: Proceedings of the First International Conference on Simulation of Adaptive Behavior (SAB-90). MIT Press, 1990.
  26. W. F. Punch, E. D. Goodman, Min Pei, Lai Chia-Shun, P. Hoyland, and R. Enbody. Further research on feature selection and classification using genetic algorithms. In Proceedings of the Fifth International Conference on Genetic Algorithms, pages 557–564, 1993.
  27. M. L. Raymer, W. F. Punch, E. D. Goodman, and L. A. Kuhn. Genetic programming for improved data mining: An application to the biochemistry of protein interactions. In John R. Koza, David E. Goldberg, David B. Fogel, and Rick L. Riolo, editors, Genetic Programming 1996: Proceedings of the First Annual Conference, pages 375–380, Stanford University, CA, USA, 28–31 July 1996. MIT Press. Also available as Technical Report GARAGe96-04-01.
  28. Rick L. Riolo. Bucket brigade performance: II. Default hierarchies. In John J. Grefenstette, editor, Genetic Algorithms and their Applications: Proceedings of the Second International Conference on Genetic Algorithms (ICGA-2), pages 196–201. Lawrence Erlbaum Associates, New Jersey, 1987.
  29. S. Ronald. Robust encodings in genetic algorithms: A survey of encoding issues. In Proceedings of the IEEE Conference on Evolutionary Computation, pages 43–48, 1997.
  30. R. Rohwer, M. Wynne-Jones, and F. Wysotzki. Neural networks. In D. Michie, D. J. Spiegelhalter, and C. C. Taylor, editors, Machine Learning, Neural and Statistical Classification. Ellis Horwood, 1994.
  31. Shaun Saxon. Data mining techniques. Technical report, Faculty of Computer Science and Mathematics, University of the West of England, 1998.
  32. N. Schraudolph and R. Belew. Dynamic parameter encoding for genetic algorithms. Technical Report CS-90-175, University of California, 1992.
  33. W. M. Spears and K. A. DeJong. Using genetic algorithms for supervised concept learning. In N. G. Bourbakis, editor, Artificial Intelligence Methods and Applications. World Scientific, 1992.
  34. S. B. Thrun, J. Bala, E. Bloedorn, I. Bratko, B. Cestnik, J. Cheng, K. DeJong, S. Džeroski, S. E. Fahlman, D. Fisher, R. Hamann, K. Kaufman, S. Keller, I. Kononenko, J. Kreuziger, R. S. Michalski, T. Mitchell, P. Pachowicz, Y. Reich, H. Vafaie, W. Van de Velde, W. Wenzel, J. Wnek, and J. Zhang. The MONK’s problems: A performance comparison of different learning algorithms. Technical Report CS-91-197, Carnegie Mellon University, Pittsburgh, PA, 1991.
  35. C. J. C. H. Watkins. Learning from Delayed Rewards. PhD thesis, King’s College, Cambridge, May 1989.
  36. Stewart W. Wilson. Knowledge growth in an artificial animal. In John J. Grefenstette, editor, Proceedings of an International Conference on Genetic Algorithms and their Applications, pages 16–23, Pittsburgh, PA, July 1985. Lawrence Erlbaum Associates.
  37. Stewart W. Wilson. Bid competition and specificity reconsidered. Complex Systems, 2(6):705–723, 1988.
  38. Stewart W. Wilson. ZCS: A zeroth level classifier system. Evolutionary Computation, 2(1):1–18, 1994.
  39. Stewart W. Wilson. Classifier fitness based on accuracy. Evolutionary Computation, 3(2):149–175, 1995.
  40. Stewart W. Wilson. Generalization in the XCS classifier system. In J. Koza et al., editors, Genetic Programming 1998: Proceedings of the Third Annual Conference, San Francisco, CA, 1998. Morgan Kaufmann.

Copyright information

© Springer-Verlag Berlin Heidelberg 2000

Authors and Affiliations

  • Shaun Saxon (1)
  • Alwyn Barry (2)
  1. The Database Group, Colston Centre, Bristol, UK
  2. Faculty of Computer Studies and Mathematics, University of the West of England, Bristol, UK