
Bagging Can Stabilize without Reducing Variance

Conference paper in Artificial Neural Networks — ICANN 2001 (ICANN 2001)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 2130)


Abstract

Bagging is a procedure that averages estimators trained on bootstrap samples. Numerous experiments have shown that bagged estimates almost always yield better results than the original predictor. It is thus important to understand the reasons for this success, and for the occasional failures. Several arguments have been proposed to explain the effectiveness of bagging, among which the original one, that "bagging reduces variance by averaging," is the most widely accepted. This paper provides experimental evidence supporting another explanation, based on the stabilization obtained by spreading the influence of individual examples. From this viewpoint, bagging is interpreted as a case-weight perturbation technique, and its behavior can be explained where the other arguments fail.
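To make the two readings of bagging concrete, here is a minimal sketch in Python with NumPy (the helper names `bagged_predict` and `ridge_fit_predict` are illustrative, not from the paper). Drawing a bootstrap sample of size n is equivalent to assigning each training example an integer case weight drawn from a Multinomial(n, 1/n) distribution, so bagging can be implemented either by resampling rows or, as below, by perturbing case weights:

```python
import numpy as np

def bagged_predict(X, y, X_test, fit_predict, n_bags=50, seed=0):
    """Average the predictions of estimators trained on bootstrap samples.

    Each bootstrap sample of size n is encoded by integer case weights
    drawn from Multinomial(n, 1/n): bagging as case-weight perturbation.
    """
    rng = np.random.default_rng(seed)
    n = len(y)
    preds = []
    for _ in range(n_bags):
        # w[i] = number of times example i appears in this bootstrap sample.
        w = rng.multinomial(n, np.full(n, 1.0 / n))
        preds.append(fit_predict(X, y, w, X_test))
    return np.mean(preds, axis=0)

def ridge_fit_predict(X, y, w, X_test, lam=1e-3):
    """Case-weighted ridge regression as a stand-in base learner."""
    # Examples with w[i] = 0 drop out of this bag; examples drawn
    # several times count several times, exactly as under resampling.
    XtW = X.T * w                      # shape (d, n), column i scaled by w[i]
    beta = np.linalg.solve(XtW @ X + lam * np.eye(X.shape[1]), XtW @ y)
    return X_test @ beta

# Toy usage: bag the ridge learner on noisy linear data.
rng = np.random.default_rng(1)
X = rng.normal(size=(40, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=40)
print(bagged_predict(X, y, X[:5], ridge_fit_predict))
```

Averaging the per-bag predictions is the usual variance-reduction reading; the stabilization reading argued for in the paper focuses instead on how the random weights `w` spread, and thereby dampen, the influence of individual examples on the fitted estimator.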




Copyright information

© 2001 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Grandvalet, Y. (2001). Bagging Can Stabilize without Reducing Variance. In: Dorffner, G., Bischof, H., Hornik, K. (eds) Artificial Neural Networks — ICANN 2001. ICANN 2001. Lecture Notes in Computer Science, vol 2130. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-44668-0_8


  • DOI: https://doi.org/10.1007/3-540-44668-0_8


  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-42486-4

  • Online ISBN: 978-3-540-44668-2

  • eBook Packages: Springer Book Archive
