Constructing decision trees from examples and their explanation-based generalizations
Two algorithms are presented that learn decision trees from examples and from the rules generated for those examples by EBL (explanation-based learning). The first, IDG-1, learns correct but incomplete trees: guided by examples, it transforms a rule set into a decision tree tailored to efficient execution. Experiments in an example domain show that these trees can be executed much faster than the corresponding EBL-generated rule sets, even when various methods for optimizing rule execution have been applied. IDG-1 is therefore one way to ease the utility problem of EBL. The second algorithm, IDG-2, induces complete but no longer entirely correct trees. Compared with trees learned by ID3, the trees induced by IDG-2 showed significantly lower error rates. Since both algorithms construct a tree in a very similar way, this demonstrates that the conditions derived from examples and a domain theory via EBL are better suited for tree induction than the simple conditions ID3 constructs from the example descriptions.
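The abstract does not reproduce the IDG algorithms themselves, but the contrast it draws can be illustrated with a minimal ID3-style tree builder in which candidate node tests are arbitrary predicates (standing in for EBL-derived rule conditions) rather than single-attribute tests. This is a sketch under that assumption; all names (`build_tree`, the toy diagnosis labels) are illustrative and not taken from the paper.

```python
import math

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    counts = {}
    for l in labels:
        counts[l] = counts.get(l, 0) + 1
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def build_tree(examples, labels, tests):
    """ID3-style induction where `tests` is a list of (name, predicate)
    pairs; predicates may be compound conditions, e.g. derived via EBL."""
    if len(set(labels)) == 1:          # pure leaf
        return labels[0]
    if not tests:                      # no tests left: majority leaf
        return max(set(labels), key=labels.count)
    base = entropy(labels)
    best = None
    for name, pred in tests:           # pick the test with highest gain
        yes = [l for e, l in zip(examples, labels) if pred(e)]
        no = [l for e, l in zip(examples, labels) if not pred(e)]
        if not yes or not no:
            continue
        gain = (base
                - len(yes) / len(labels) * entropy(yes)
                - len(no) / len(labels) * entropy(no))
        if best is None or gain > best[0]:
            best = (gain, name, pred)
    if best is None:
        return max(set(labels), key=labels.count)
    _, name, pred = best
    rest = [(n, p) for n, p in tests if n != name]
    yes_split = [(e, l) for e, l in zip(examples, labels) if pred(e)]
    no_split = [(e, l) for e, l in zip(examples, labels) if not pred(e)]
    return (name,
            build_tree([e for e, _ in yes_split], [l for _, l in yes_split], rest),
            build_tree([e for e, _ in no_split], [l for _, l in no_split], rest))

def classify(tree, example, tests_by_name):
    """Follow yes/no branches until a leaf label is reached."""
    while isinstance(tree, tuple):
        name, yes_branch, no_branch = tree
        tree = yes_branch if tests_by_name[name](example) else no_branch
    return tree

# Toy fault-diagnosis data where the fault depends on a XOR of two signals:
# neither single-attribute test has any gain, but a compound condition
# (as an EBL-derived rule precondition might be) splits perfectly.
examples = [({'a': 0, 'b': 0}, 'ok'), ({'a': 0, 'b': 1}, 'fault'),
            ({'a': 1, 'b': 0}, 'fault'), ({'a': 1, 'b': 1}, 'ok')]
tests = [('a', lambda e: e['a'] == 1),
         ('b', lambda e: e['b'] == 1),
         ('a_xor_b', lambda e: e['a'] != e['b'])]
tree = build_tree([e for e, _ in examples], [l for _, l in examples], tests)
```

On the XOR-like data above, the compound condition is chosen at the root and yields a one-test tree, whereas a builder restricted to single-attribute tests cannot gain information at the root; this is the intuition behind using EBL-derived conditions for induction, not the paper's actual experimental setup.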
The example application comes from the area of model-based diagnosis of robot operations. The experiments demonstrate that the average execution time — which is crucial in such a domain — can be significantly reduced with the help of the learned decision trees.
Keywords: decision tree, domain theory, average execution time, efficient execution, learning decision trees
- [Davis 84] Davis, R., "Diagnostic reasoning based on structure and behavior", Artificial Intelligence 24, p. 347–410, 1984.
- [DeJong 86] DeJong, G., Mooney, R., "Explanation-based learning: An alternative view", Machine Learning, Vol. 1, No. 2, 1986.
- [deKleer 87] de Kleer, J., Williams, B.C., "Diagnosing multiple faults", Artificial Intelligence 32, p. 97–130, 1987.
- [Flann 89] Flann, N.S., Dietterich, T.G., "A study of explanation-based methods for inductive learning", Machine Learning, Vol. 3, No. 4, 1989.
- [Forgy 82] Forgy, C.L., "Rete: A fast algorithm for the many pattern/many object pattern match problem", Artificial Intelligence 19, p. 17–37, 1982.
- [Friedrich 89] Friedrich, G., Nejdl, W., "Increasing the information-theoretic content of diagnostic examples using a domain model", 9th International Workshop on Expert Systems and Their Applications, Specialized Conference on Second Generation Expert Systems, Avignon, 1989.
- [Keller 87] Keller, R.M., "Defining operationality for explanation-based learning", AAAI 87.
- [Matheus 89] Matheus, C.J., "Feature construction: An analytic framework and an application to decision trees", Ph.D. thesis, University of Illinois at Urbana-Champaign, Report No. UIUCDCS-R-89-1559, 1989.
- [Minton 88] Minton, S., "Quantitative results concerning the utility of explanation-based learning", AAAI 88.
- [Mitchell 86] Mitchell, T.M., Keller, R., Kedar-Cabelli, S., "Explanation-based generalization: A unifying view", Machine Learning, Vol. 1, No. 1, 1986.
- [Norton 89] Norton, S.W., "Generating better decision trees", IJCAI 89.
- [Quinlan 86] Quinlan, J.R., "Induction of decision trees", Machine Learning, Vol. 1, No. 1, 1986.
- [Resnick 89] Resnick, P., "Generalizing on multiple grounds: Performance learning in model-based troubleshooting", AI-TR 1052, MIT, 1989.
- [Utgoff 89] Utgoff, P.E., "Incremental induction of decision trees", Machine Learning, Vol. 4, No. 2, 1989.
- [Zercher 88a] Zercher, K., "Model-based learning of rules for error diagnosis", in Hoeppner, W. (Ed.), Proceedings of the 12th German Workshop on Artificial Intelligence (GWAI 88), Springer, 1988.
- [Zercher 88b] Zercher, K., "Modellbasiertes Lernen von Regeln zur Fehlerdiagnose" (Model-based learning of rules for error diagnosis), Diplomarbeit, Universität Karlsruhe, 1988.