Abstract
We describe a lightweight learning method that induces an ensemble of decision-rule solutions for regression problems. Instead of predicting the continuous output variable directly, the method discretizes it by k-means clustering and solves the resulting classification problem. Predictions on new examples are made by averaging the mean values of the classes whose vote counts are close to that of the most likely class. We provide experimental evidence that this indirect approach can yield strong results across many applications, generally outperforming direct approaches such as regression trees and rivaling bagged regression trees.
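The prediction scheme in the abstract can be sketched in a few lines. This is an illustrative reconstruction, not the authors' lightweight rule-induction algorithm: the one-dimensional k-means discretizer below uses a simple deterministic initialization, and the vote-closeness threshold `tau` is an assumed parameter standing in for whatever "close in number" criterion the paper uses.

```python
def kmeans_1d(values, k, iters=50):
    """Discretize a continuous target by 1-D k-means.

    Returns (centroids, labels). Initialization by evenly spaced order
    statistics is an illustrative choice, not from the paper.
    """
    srt = sorted(values)
    centroids = [srt[i * len(srt) // k] for i in range(k)]
    for _ in range(iters):
        # Assign each value to its nearest centroid.
        labels = [min(range(k), key=lambda j: (v - centroids[j]) ** 2)
                  for v in values]
        # Move each centroid to the mean of its assigned values.
        for j in range(k):
            members = [v for v, lab in zip(values, labels) if lab == j]
            if members:
                centroids[j] = sum(members) / len(members)
    return centroids, labels


def predict(votes, class_means, tau=1):
    """Average the mean target value of every class whose ensemble vote
    count is within `tau` votes of the most-voted class (the
    "close in number to the most likely class" rule)."""
    top = max(votes.values())
    close = [c for c, n in votes.items() if top - n <= tau]
    return sum(class_means[c] for c in close) / len(close)
```

For example, if an ensemble casts 10, 9, and 2 votes for three classes with mean target values 1.0, 5.0, and 9.0, then with `tau=1` the first two classes are averaged and the prediction is 3.0, rather than the winning class's mean of 1.0.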
© 2001 Springer-Verlag Berlin Heidelberg
Indurkhya, N., Weiss, S.M. (2001). Rule-Based Ensemble Solutions for Regression. In: Perner, P. (ed.) Machine Learning and Data Mining in Pattern Recognition. MLDM 2001. Lecture Notes in Computer Science, vol 2123. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-44596-X_6
Print ISBN: 978-3-540-42359-1
Online ISBN: 978-3-540-44596-8