Computationally Efficient Linear Regression Trees
This paper describes a computationally efficient method for inducing regression trees with linear regression models in the leaves, making the approach feasible on large data sets. The work focuses on deriving a set of formulae that allows efficient evaluation of all candidate split tests considered during tree growth.
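The paper's actual formulae are not reproduced here, but the general idea behind efficient split evaluation with linear leaf models can be sketched: when the candidate threshold on a feature advances past one sample, the normal-equation statistics (the Gram matrix X^T X, the moment vector X^T y, and y^T y) of the left and right partitions can be updated with a rank-one transfer in O(d^2), instead of refitting both linear models from scratch. The sketch below is an illustrative assumption about how such an update loop might look; the function name, the ridge stabilizer, and the minimum-sample checks are not taken from the paper.

```python
import numpy as np

def best_split_sse(X, y, feature):
    """Illustrative sketch (not the paper's code): scan all thresholds on one
    feature, maintaining normal-equation statistics incrementally so each
    candidate split is evaluated in O(d^2) rather than by refitting."""
    order = np.argsort(X[:, feature])
    Xs, ys = X[order], y[order]
    n, d = Xs.shape
    Z = np.hstack([np.ones((n, 1)), Xs])  # design matrix with intercept

    # Right partition starts with every sample; left partition starts empty.
    XtX_r = Z.T @ Z
    Xty_r = Z.T @ ys
    yty_r = ys @ ys
    XtX_l = np.zeros((d + 1, d + 1))
    Xty_l = np.zeros(d + 1)
    yty_l = 0.0

    def sse(XtX, Xty, yty, ridge=1e-8):
        # Residual sum of squares of the least-squares fit in closed form:
        # SSE = y^T y - b^T X^T y, with b from the (ridge-stabilized)
        # normal equations.
        b = np.linalg.solve(XtX + ridge * np.eye(len(Xty)), Xty)
        return yty - b @ Xty

    best_sse, best_thr = np.inf, None
    for i in range(n - 1):
        z, yi = Z[i], ys[i]
        # Rank-one transfer of sample i from the right to the left partition.
        XtX_l += np.outer(z, z); Xty_l += yi * z; yty_l += yi * yi
        XtX_r -= np.outer(z, z); Xty_r -= yi * z; yty_r -= yi * yi
        if Xs[i, feature] == Xs[i + 1, feature]:
            continue  # tied feature values do not yield a valid threshold
        if i + 1 <= d or n - i - 1 <= d:
            continue  # too few samples on one side to fit a linear model
        total = sse(XtX_l, Xty_l, yty_l) + sse(XtX_r, Xty_r, yty_r)
        if total < best_sse:
            best_sse = total
            best_thr = (Xs[i, feature] + Xs[i + 1, feature]) / 2
    return best_sse, best_thr
```

On data that is exactly piecewise linear in one feature, the scan recovers the breakpoint with near-zero residual error, while the per-threshold cost stays quadratic in the number of attributes rather than scaling with the partition sizes.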