Encyclopedia of Database Systems

2018 Edition
| Editors: Ling Liu, M. Tamer Özsu


  • Ian H. Witten
Reference work entry
DOI: https://doi.org/10.1007/978-1-4614-8265-9_552


Synonyms

Classification learning; Concept learning; Learning with a teacher; Statistical decision techniques; Supervised learning


Definition

In classification learning, an algorithm is presented with a set of classified examples, or "instances," from which it is expected to infer a way of classifying unseen instances into one of several "classes." Instances have a set of features, or "attributes," whose values define that particular instance. Numeric prediction, or "regression," is a variant of classification learning in which the class attribute is numeric rather than categorical. Classification learning is sometimes called supervised learning because the method operates under supervision: it is provided with the actual outcome for each of the training instances. This contrasts with clustering, where the classes are not given, and with association learning, which seeks any association – not just one that predicts the class.
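The idea can be made concrete with a minimal sketch of one of the simplest classification learners, a 1R-style rule learner (in the spirit of Holte's "very simple classification rules" cited under Recommended Reading). The sketch below is illustrative, not a reference implementation: the toy weather data and all function names are invented for this example. Each training instance is a tuple of categorical attribute values with an associated class label; the learner picks the single attribute whose value-to-majority-class rule misclassifies the fewest training instances, and that rule is then used to classify unseen instances.

```python
from collections import Counter, defaultdict

def train_one_r(instances, labels):
    """Train a 1R classifier: for each attribute, map each observed value
    to its majority class, then keep the one attribute whose rule makes
    the fewest errors on the training instances."""
    best_rule, best_errors = None, None
    for a in range(len(instances[0])):
        # Count class frequencies for each value of attribute a.
        counts = defaultdict(Counter)
        for inst, cls in zip(instances, labels):
            counts[inst[a]][cls] += 1
        # Majority class for each attribute value.
        rule = {v: c.most_common(1)[0][0] for v, c in counts.items()}
        errors = sum(cls != rule[inst[a]]
                     for inst, cls in zip(instances, labels))
        if best_errors is None or errors < best_errors:
            best_rule, best_errors = (a, rule), errors
    return best_rule

def classify(rule, instance, default="unknown"):
    """Classify an unseen instance with the learned one-attribute rule."""
    attr, mapping = rule
    return mapping.get(instance[attr], default)

# Toy training set: attributes are (outlook, windy); class is "play?".
train = [("sunny", "false"), ("sunny", "true"), ("overcast", "false"),
         ("rainy", "false"), ("rainy", "true")]
labels = ["no", "no", "yes", "yes", "no"]

rule = train_one_r(train, labels)
print(classify(rule, ("overcast", "true")))  # -> yes
```

Here the learner selects the outlook attribute, since its majority-class rule explains the training labels with the fewest errors; an unseen instance is then classified by looking up its outlook value alone. Real learners (decision trees, naïve Bayes, perceptrons) differ only in how much richer a hypothesis they infer from the same kind of supervised training data.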

Historical Background

Classification learning grew out of two strands of...


Recommended Reading

  1. Breiman L, Friedman JH, Olshen RA, Stone CJ. Classification and regression trees. Pacific Grove: Wadsworth; 1984.
  2. Bush RR, Mosteller F. Stochastic models for learning. New York: Wiley; 1955.
  3. Holte RC. Very simple classification rules perform well on most commonly used datasets. Mach Learn. 1993;11:63–91.
  4. Kononenko I. ID3, sequential Bayes, naïve Bayes and Bayesian neural networks. In: Proceedings of the 4th European Working Session on Learning; 1989. p. 91–8.
  5. Maron ME, Kuhns JL. On relevance, probabilistic indexing and information retrieval. J ACM. 1960;7(3):216–44.
  6. Minsky ML, Papert S. Perceptrons. Cambridge: MIT Press; 1969.
  7. Nilsson NJ. Learning machines. New York: McGraw-Hill; 1965.
  8. Quinlan JR. Induction of decision trees. Mach Learn. 1986;1(1):81–106.
  9. Quinlan JR. C4.5: programs for machine learning. San Francisco: Morgan Kaufmann; 1993.
  10. Rosenblatt F. Principles of neurodynamics. Washington, DC: Spartan; 1961.
  11. Witten IH, Frank E. Data mining: practical machine learning tools and techniques. 2nd ed. San Francisco: Morgan Kaufmann; 2005.

Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2018

Authors and Affiliations

  1. University of Waikato, Hamilton, New Zealand

Section editors and affiliations

  • Kyuseok Shim
  1. School of Elec. Eng. and Computer Science, Seoul National Univ., Seoul, Republic of Korea