Classification and Trees
Breiman, Friedman, Olshen, and Stone recognized that tree classifiers would be valuable to practicing statisticians, and their CART algorithm became very popular indeed. Designing tree-based classifiers, however, has its pitfalls: it is easy to make them either too simple or too complicated, compromising Bayes risk consistency. In this talk, we explore the relationship between the algorithmic complexity of tree-based methods and their performance.
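The too-simple/too-complicated trade-off can be made concrete with a toy sketch. The following is not the speaker's method, just a minimal axis-aligned classification tree on 1-D data with a hypothetical `max_depth` knob: a depth-1 tree cannot represent a label pattern that alternates over four intervals, while a deeper tree recovers it exactly.

```python
# Minimal sketch of an axis-aligned binary classification tree (illustrative
# only; real CART chooses splits by minimizing an impurity criterion).

def build_tree(xs, ys, depth, max_depth):
    # Majority-vote leaf when the depth budget is spent or the node is pure.
    if depth == max_depth or len(set(ys)) <= 1:
        return ("leaf", max(set(ys), key=ys.count))
    # Crude split at the midpoint of the node's range.
    t = (min(xs) + max(xs)) / 2.0
    left = [(x, y) for x, y in zip(xs, ys) if x <= t]
    right = [(x, y) for x, y in zip(xs, ys) if x > t]
    if not left or not right:
        return ("leaf", max(set(ys), key=ys.count))
    lx, ly = zip(*left)
    rx, ry = zip(*right)
    return ("node", t,
            build_tree(list(lx), list(ly), depth + 1, max_depth),
            build_tree(list(rx), list(ry), depth + 1, max_depth))

def predict(tree, x):
    if tree[0] == "leaf":
        return tree[1]
    _, t, left, right = tree
    return predict(left, x) if x <= t else predict(right, x)

# Labels alternate over four unit intervals of [0, 4): class = floor(x) mod 2.
xs = [i / 10.0 for i in range(40)]
ys = [int(x) % 2 for x in xs]

shallow = build_tree(xs, ys, 0, 1)   # a single split: too simple
deep = build_tree(xs, ys, 0, 4)      # enough splits to recover the pattern

def err(tree):
    return sum(predict(tree, x) != y for x, y in zip(xs, ys)) / len(xs)

print(err(shallow))  # substantial training error: one split cannot fit
print(err(deep))     # zero training error on this sample
```

The same knob turned the other way (a very deep tree on noisy labels) overfits, which is the other half of the pitfall the abstract alludes to.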