Robust Regression by Boosting the Median

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 2777)

Abstract

Most boosting regression algorithms use the weighted average of base regressors as their final regressor. In this paper we analyze the choice of the weighted median. We propose a general boosting algorithm based on this approach. We prove boosting-type convergence of the algorithm and give clear conditions for the convergence of the robust training error. The algorithm recovers \(\textsc{AdaBoost}\) and \(\textsc{AdaBoost}_\varrho\) as special cases. For boosting confidence-rated predictions, it leads to a new approach that outputs a different decision and interprets robustness in a different manner than the approach based on the weighted average. In the general, non-binary case we suggest practical strategies based on the analysis of the algorithm and experiments.
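The combination rule at the heart of the paper, predicting with the weighted median of the base regressors rather than their weighted average, can be sketched as follows. This is an illustrative NumPy sketch of the generic weighted-median combination, not the paper's exact algorithm; the function names and the tie-breaking convention are ours.

```python
import numpy as np

def weighted_median(values, weights):
    """Weighted median: the smallest value v such that the weights of all
    values <= v sum to at least half the total weight."""
    values = np.asarray(values, dtype=float)
    weights = np.asarray(weights, dtype=float)
    order = np.argsort(values)
    values, weights = values[order], weights[order]
    cum = np.cumsum(weights)
    # first index where the cumulative weight reaches half of the total
    idx = np.searchsorted(cum, 0.5 * cum[-1])
    return values[idx]

def median_boost_predict(x, base_regressors, alphas):
    """Combine base regressors by the weighted median of their predictions
    (instead of the usual weighted average)."""
    preds = np.array([h(x) for h in base_regressors])
    return weighted_median(preds, alphas)
```

Unlike the weighted average, this combined prediction is always one of the base predictions, and it is insensitive to arbitrarily large errors in a minority (by weight) of the base regressors, which is the sense of robustness analyzed in the paper.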

This research was supported in part by the Natural Sciences and Engineering Research Council (NSERC) of Canada.


Copyright information

© 2003 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Kégl, B. (2003). Robust Regression by Boosting the Median. In: Schölkopf, B., Warmuth, M.K. (eds) Learning Theory and Kernel Machines. Lecture Notes in Computer Science, vol 2777. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-45167-9_20

  • DOI: https://doi.org/10.1007/978-3-540-45167-9_20

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-40720-1

  • Online ISBN: 978-3-540-45167-9
