Abstract
Most boosting regression algorithms use the weighted average of base regressors as their final regressor. In this paper we analyze the choice of the weighted median instead. We propose a general boosting algorithm based on this approach, prove boosting-type convergence of the algorithm, and give clear conditions for the convergence of the robust training error. The algorithm recovers \(\textsc{AdaBoost}\) and \(\textsc{AdaBoost}_\varrho\) as special cases. For boosting confidence-rated predictions, it leads to a new approach that outputs a different decision and interprets robustness differently from the approach based on the weighted average. For the general, non-binary case we suggest practical strategies based on an analysis of the algorithm and on experiments.
This research was supported in part by the Natural Sciences and Engineering Research Council (NSERC) of Canada.
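To make the central idea concrete: where conventional boosting regressors predict with the weighted average \(\sum_t \alpha_t h_t(x) / \sum_t \alpha_t\) of the base predictions \(h_t(x)\), the approach analyzed here predicts with their weighted median under the weights \(\alpha_t\). The Python sketch below illustrates the difference on made-up numbers; it is a minimal illustration of the weighted-median combination rule only, not the paper's algorithm, and all names and values in it are hypothetical.

```python
import numpy as np

def weighted_median(values, weights):
    """Return the weighted median: the smallest value v such that the
    total weight of all values <= v reaches half of the overall weight."""
    order = np.argsort(values)
    values = np.asarray(values, dtype=float)[order]
    weights = np.asarray(weights, dtype=float)[order]
    cum = np.cumsum(weights)
    # first index at which the cumulative weight reaches half the total
    idx = np.searchsorted(cum, 0.5 * cum[-1])
    return values[idx]

# Hypothetical ensemble: base regressor predictions h_t(x) at one query
# point x, with non-negative confidence weights alpha_t from boosting.
predictions = [1.0, 1.2, 3.5, 0.9, 1.1]   # h_t(x), one outlier at 3.5
alphas      = [0.5, 0.8, 0.1, 0.4, 0.7]   # boosting weights alpha_t

avg = np.average(predictions, weights=alphas)   # weighted average
med = weighted_median(predictions, alphas)      # weighted median

print(f"weighted average: {avg:.3f}")   # 1.176, pulled toward the outlier
print(f"weighted median:  {med:.3f}")   # 1.100, robust to the outlier
```

Even with the outlier's small weight, the average shifts toward it, while the weighted median ignores it entirely; this robustness to wild base predictions is what motivates the median as a combination rule.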
© 2003 Springer-Verlag Berlin Heidelberg
Cite this paper
Kégl, B. (2003). Robust Regression by Boosting the Median. In: Schölkopf, B., Warmuth, M.K. (eds) Learning Theory and Kernel Machines. Lecture Notes in Computer Science, vol. 2777. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-45167-9_20
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-40720-1
Online ISBN: 978-3-540-45167-9