Constructing decision trees from examples and their explanation-based generalizations

  • Kai Zercher
General Reasoning
Part of the Lecture Notes in Computer Science book series (LNCS, volume 462)


Two algorithms that learn decision trees from examples and their EBL (explanation-based learning) generated rules are presented. The first, IDG-1, learns correct but incomplete trees. Guided by examples, it transforms a rule set into a decision tree tailored to efficient execution. Tests in an example domain show that these trees can be executed much faster than the corresponding EBL-generated rule sets, even after various methods to optimize rule execution have been applied. Consequently, IDG-1 is one method to ease the utility problem of EBL. The second algorithm, IDG-2, induces complete but no longer entirely correct trees. Compared with trees learned by ID3, the trees induced by IDG-2 showed significantly lower error rates. Since both algorithms construct a tree in a very similar way, this demonstrates that the conditions derived from examples and a domain theory via EBL are better suited for tree induction than the simple conditions ID3 constructs from the example descriptions.

The example application comes from the area of model-based diagnosis of robot operations. The experiments demonstrate that the average execution time — which is crucial in such a domain — can be significantly reduced with the help of the learned decision trees.
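The baseline the paper measures against is ID3, which builds a tree by greedily splitting on the attribute with the highest information gain and recursing on each branch. As a point of reference, here is a minimal sketch of that baseline in Python; the robot-diagnosis attribute names are made up for illustration and do not come from the paper:

```python
from collections import Counter
import math

def entropy(labels):
    """Shannon entropy of a multiset of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def best_attribute(examples, attributes):
    """Pick the attribute with the highest information gain.

    `examples` is a list of (feature-dict, label) pairs.
    """
    labels = [label for _, label in examples]
    base = entropy(labels)

    def gain(attr):
        g = base
        for value in {ex[attr] for ex, _ in examples}:
            subset = [label for ex, label in examples if ex[attr] == value]
            g -= len(subset) / len(examples) * entropy(subset)
        return g

    return max(attributes, key=gain)

def id3(examples, attributes):
    """Recursively build a decision tree.

    Internal nodes are (attribute, {value: subtree}) tuples; leaves are labels.
    """
    labels = [label for _, label in examples]
    if len(set(labels)) == 1:          # pure node
        return labels[0]
    if not attributes:                 # no attributes left: majority label
        return Counter(labels).most_common(1)[0][0]
    attr = best_attribute(examples, attributes)
    branches = {}
    for value in {ex[attr] for ex, _ in examples}:
        subset = [(ex, label) for ex, label in examples if ex[attr] == value]
        branches[value] = id3(subset, [a for a in attributes if a != attr])
    return (attr, branches)

def classify(tree, example):
    """Walk the tree until a leaf label is reached."""
    while isinstance(tree, tuple):
        attr, branches = tree
        tree = branches[example[attr]]
    return tree
```

IDG-1 and IDG-2 build trees in a similar top-down fashion, but their node tests are conditions extracted from EBL-generated rules rather than the raw attribute tests ID3 uses; that difference is what the error-rate comparison in the paper isolates.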






References

  1. [Davis 84] Davis, R., "Diagnostic reasoning based on structure and behavior", Artificial Intelligence 24, p. 347–410, 1984.
  2. [DeJong 86] DeJong, G., Mooney, R., "Explanation-based learning: An alternative view", Machine Learning, Vol. 1, Nr. 2, 1986.
  3. [deKleer 87] de Kleer, J., Williams, B. C., "Diagnosing multiple faults", Artificial Intelligence 32, p. 97–130, 1987.
  4. [Flann 89] Flann, N. S., Dietterich, T. G., "A study of explanation-based methods for inductive learning", Machine Learning, Vol. 3, Nr. 4, 1989.
  5. [Forgy 82] Forgy, C. L., "Rete: A fast algorithm for the many pattern/many object pattern match problem", Artificial Intelligence 19, p. 17–37, 1982.
  6. [Friedrich 89] Friedrich, G., Nejdl, W., "Increasing the information-theoretic content of diagnostic examples using a domain model", 9th International Workshop on Expert Systems and Their Applications, Specialized Conference on Second Generation Expert Systems, Avignon, 1989.
  7. [Keller 87] Keller, R. M., "Defining operationality for explanation-based learning", AAAI 87.
  8. [Matheus 89] Matheus, C. J., "Feature construction: An analytic framework and an application to decision trees", Ph.D. thesis, University of Illinois at Urbana-Champaign, Report No. UIUCDCS-R-89-1559, 1989.
  9. [Minton 88] Minton, S., "Quantitative results concerning the utility of explanation-based learning", AAAI 88.
  10. [Mitchell 86] Mitchell, T. M., Keller, R., Kedar-Cabelli, S., "Explanation-based generalization: A unifying view", Machine Learning, Vol. 1, Nr. 1, 1986.
  11. [Norton 89] Norton, S. W., "Generating better decision trees", IJCAI 89.
  12. [Quinlan 86] Quinlan, J. R., "Induction of decision trees", Machine Learning, Vol. 1, Nr. 1, 1986.
  13. [Resnick 89] Resnick, P., "Generalizing on multiple grounds: Performance learning in model-based troubleshooting", AI-TR 1052, MIT, 1989.
  14. [Utgoff 89] Utgoff, P. E., "Incremental induction of decision trees", Machine Learning, Vol. 4, Nr. 2, 1989.
  15. [Zercher 88a] Zercher, K., "Model-based learning of rules for error diagnosis", in Hoeppner, W. (ed.), Proceedings of the 12th German Workshop on Artificial Intelligence (GWAI 88), Springer, 1988.
  16. [Zercher 88b] Zercher, K., "Modellbasiertes Lernen von Regeln zur Fehlerdiagnose" (Model-based learning of rules for error diagnosis), Diplomarbeit (diploma thesis), Universität Karlsruhe, 1988.

Copyright information

© Springer-Verlag Berlin Heidelberg 1990

Authors and Affiliations

  • Kai Zercher
  1. Siemens AG, ZFE IS INF 32, München 83
  2. TU München, Institut für Informatik, München 80, West Germany
