Pre-pruning Classification Trees to Reduce Overfitting in Noisy Domains
The automatic induction of classification rules from examples, in the form of a classification tree, is an important technique in data mining. One problem encountered is the overfitting of rules to the training data. In some cases this can produce an excessively large number of rules, many of which have very little predictive value for unseen data. This paper describes a means of reducing overfitting known as J-pruning, based on the J-measure, an information-theoretic means of quantifying the information content of a rule. It is demonstrated that using J-pruning generally leads to a substantial reduction in the number of rules generated and an increase in predictive accuracy. The advantage gained becomes more pronounced as the proportion of noise increases.
Keywords: Predictive Accuracy, Classification Tree, Classification Rule, Categorical Attribute, Unseen Data
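The J-measure on which J-pruning is based can be sketched as follows. This is a minimal illustration of Smyth and Goodman's definition for a rule of the form "if Y = y then X = x"; the function name and argument names here are our own, not the paper's notation:

```python
import math

def j_measure(p_y, p_x_given_y, p_x):
    """J-measure of the rule 'if Y = y then X = x'.

    p_y          -- probability that the rule fires, p(y)
    p_x_given_y  -- accuracy of the rule, p(x|y)
    p_x          -- prior probability of the class, p(x)
    """
    def term(p, q):
        # Contribution p * log2(p / q), with the convention 0 * log(0) = 0.
        return 0.0 if p == 0.0 else p * math.log2(p / q)

    # j(X; Y=y): how much the rule shifts the class distribution from its prior.
    j = term(p_x_given_y, p_x) + term(1.0 - p_x_given_y, 1.0 - p_x)
    # Weight by how often the rule applies.
    return p_y * j
```

A rule whose accuracy merely matches the class prior carries no information (J = 0), while a rule that fires half the time and is always correct against a 0.5 prior scores J = 0.5; pruning can then stop expanding a branch once further splits no longer increase this value.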