
Data Mining, pp. 381–417

Supervised Learning: Decision Trees, Rule Algorithms, and Their Hybrids

  • Krzysztof J. Cios
  • Roman W. Swiniarski
  • Witold Pedrycz
  • Lukasz A. Kurgan

Keywords

Decision Trees, Supervised Learning, Information Gain, Hybrid Algorithms, Decision Tree Algorithms



Copyright information

© Springer Science+Business Media, LLC 2007

Authors and Affiliations

  • Krzysztof J. Cios (1, 2)
  • Roman W. Swiniarski (3)
  • Witold Pedrycz (4)
  • Lukasz A. Kurgan (5)
  1. Computer Science Dept., Virginia Commonwealth University, Richmond
  2. University of Colorado, USA
  3. Computer Science Dept., San Diego State University & Polish Academy of Sciences, San Diego, USA
  4. Electrical and Computer Engineering Dept., University of Alberta, Edmonton, Canada
  5. Electrical and Computer Engineering Dept., University of Alberta, Edmonton, Canada
