Symbolic Classifiers: Conditions to Have Good Accuracy Performance

  • Conference paper
Selecting Models from Data

Part of the book series: Lecture Notes in Statistics (LNS, volume 89)

Abstract

Symbolic classifiers from Artificial Intelligence compete with classifiers from the established and emerging fields of statistics and neural networks. The traditional view is that symbolic classifiers are attractive because they are easier to use, are faster, and produce human-understandable rules. However, as this paper shows through a comparison of fourteen established state-of-the-art symbolic, statistical, and neural classifiers on eight large real-world problems, symbolic classifiers also achieve superior, or at least comparable, accuracy when the characteristics of the data suit them. These data characteristics are measured using a set of statistical and qualitative descriptors first proposed in the present and related work. This has implications for algorithm users and method designers: the strengths of the various algorithms can be exploited in applications, and superior features of other algorithms can be incorporated into existing ones.
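
To make the methodology concrete, the sketch below mirrors the paper's two ingredients on a small scale: computing statistical descriptors of a dataset, and comparing one representative classifier from each family (symbolic, statistical, neural) by cross-validated accuracy. It is a minimal illustration assuming scikit-learn and SciPy; the dataset, descriptors, and classifiers are modern stand-ins, not the fourteen algorithms, eight problems, or exact descriptor set studied in the paper.

```python
# Illustrative sketch only: modern scikit-learn stand-ins for the paper's
# classifier families and data descriptors, not the original algorithms.
import numpy as np
from scipy.stats import skew, kurtosis
from sklearn.datasets import load_wine
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

X, y = load_wine(return_X_y=True)

# Statistical descriptors of the data, in the spirit of the paper's
# characterisation: size, dimensionality, class structure, attribute shape.
descriptors = {
    "n_examples": X.shape[0],
    "n_attributes": X.shape[1],
    "n_classes": len(np.unique(y)),
    "mean_skewness": float(np.mean(skew(X, axis=0))),
    "mean_kurtosis": float(np.mean(kurtosis(X, axis=0))),
}
print(descriptors)

# One representative per family: symbolic, statistical, neural.
models = {
    "symbolic (decision tree)": DecisionTreeClassifier(random_state=0),
    "statistical (LDA)": LinearDiscriminantAnalysis(),
    "neural (MLP)": make_pipeline(
        StandardScaler(), MLPClassifier(max_iter=2000, random_state=0)
    ),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```

Relating the printed descriptors to the per-family accuracies, across many datasets, is the kind of analysis the paper uses to identify conditions under which symbolic classifiers perform well.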

Copyright information

© 1994 Springer-Verlag New York, Inc.

About this paper

Cite this paper

Feng, C., King, R., Sutherland, A., Muggleton, S., Henery, R. (1994). Symbolic Classifiers: Conditions to Have Good Accuracy Performance. In: Cheeseman, P., Oldford, R.W. (eds) Selecting Models from Data. Lecture Notes in Statistics, vol 89. Springer, New York, NY. https://doi.org/10.1007/978-1-4612-2660-4_38

  • DOI: https://doi.org/10.1007/978-1-4612-2660-4_38

  • Publisher Name: Springer, New York, NY

  • Print ISBN: 978-0-387-94281-0

  • Online ISBN: 978-1-4612-2660-4

  • eBook Packages: Springer Book Archive
