Construction of Decision Trees by Using Feature Importance Value for Improved Learning Performance
Decision tree algorithms cannot learn accurately from a small training set because they recursively partition the data, leaving very few instances at the lower levels of the tree. Additional domain knowledge has been shown to enhance the performance of learners. We present an algorithm named Importance Aided Decision Tree (IADT) that takes feature importance as additional domain knowledge. A decision tree algorithm seeks the most important attribute at each node, so feature importance scores are naturally useful to decision tree learning. Our algorithm uses a novel approach to incorporate these scores into decision tree induction, which makes the resulting trees more accurate and robust. We present theoretical and empirical performance analyses showing that IADT outperforms standard decision tree learning algorithms.
Keywords: Supervised Learning, Decision Tree, Domain Knowledge
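As a concrete illustration of the idea behind IADT, the sketch below shows one plausible way to bias split selection with externally supplied feature-importance scores: each candidate feature's information gain is multiplied by its importance before the split is chosen. This is a minimal sketch, not the paper's actual IADT procedure (which is not reproduced on this page); the multiplicative weighting rule and the `importance` values are assumptions for illustration.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, feature):
    """Reduction in entropy from partitioning on a categorical feature."""
    partitions = {}
    for row, y in zip(rows, labels):
        partitions.setdefault(row[feature], []).append(y)
    remainder = sum(len(part) / len(labels) * entropy(part)
                    for part in partitions.values())
    return entropy(labels) - remainder

def best_split(rows, labels, importance):
    """Pick the feature maximizing importance-weighted information gain.

    The weighting scheme (importance * gain) is an assumption made for
    this sketch, not the method specified in the paper.
    """
    return max(range(len(rows[0])),
               key=lambda f: importance[f] * information_gain(rows, labels, f))

# Toy usage: feature 1 is the true signal; the hypothetical expert-provided
# importance scores reinforce that choice even with very few training rows.
rows   = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = ['no', 'yes', 'no', 'yes']
importance = [0.2, 0.8]                      # assumed domain knowledge, one score per feature
print(best_split(rows, labels, importance))  # -> 1
```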