Ensemble of Neural Networks

Effective Statistical Learning Methods for Actuaries III

Part of the book series: Springer Actuarial (SPACLN)

Abstract

The most frequent approach to data-driven modeling consists in estimating a single strong predictive model. A different strategy is to build a bucket, or ensemble, of models for a particular learning task. One can build a set of weak or relatively weak learners, such as small neural networks, and combine their predictions to obtain a reliable forecast. The most prominent examples of such machine-learning ensemble techniques are random forests (Breiman 2001) and neural network ensembles (Hansen and Salamon 1990), which have found many successful applications in a variety of domains. Liu et al. (2004) use this approach to predict earthquakes, and Shu and Burn (2004) forecast flood frequencies with an ensemble of networks. We start this chapter by describing the bias-variance decomposition of the prediction error. Next, we discuss how aggregated models and randomized models reduce the prediction error by decreasing the variance term of this decomposition. The theoretical developments are inspired by the PhD thesis of Louppe (2014) on random forests.
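
For reference, the decomposition mentioned above takes the following standard form in the squared-error case (a sketch in our own notation, not necessarily the chapter's: \(Y = f(x) + \varepsilon\) with \(\mathbb{E}[\varepsilon] = 0\), \(\mathrm{Var}(\varepsilon) = \sigma^2\), and \(\hat{f}(x; D)\) a model fitted on a random training sample \(D\)):

\[
\mathbb{E}\left[\left(Y - \hat{f}(x; D)\right)^2\right]
= \underbrace{\sigma^2}_{\text{noise}}
+ \underbrace{\left(f(x) - \mathbb{E}_D\left[\hat{f}(x; D)\right]\right)^2}_{\text{squared bias}}
+ \underbrace{\mathrm{Var}_D\left(\hat{f}(x; D)\right)}_{\text{variance}}.
\]

Averaging \(B\) models fitted on perturbed samples leaves the bias essentially unchanged, while for members with common variance \(\sigma_m^2\) and pairwise correlation \(\rho\) the variance of the average is \(\rho\,\sigma_m^2 + \frac{1-\rho}{B}\,\sigma_m^2\): this is why decorrelated (randomized) members help.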

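To make the aggregation strategy concrete, the following minimal sketch (ours, not the authors' code) bags small scikit-learn networks on simulated data; the signal, network size, and ensemble size are purely illustrative:

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)

    # Illustrative data: a noisy nonlinear signal (not from the book).
    X = rng.uniform(-3.0, 3.0, size=(500, 1))
    y = np.sin(X[:, 0]) + rng.normal(scale=0.3, size=500)

    B = 25  # ensemble size (illustrative)
    members = []
    for b in range(B):
        # Bootstrap sample: n draws with replacement from the n observations.
        idx = rng.integers(0, len(X), size=len(X))
        net = MLPRegressor(hidden_layer_sizes=(5,),  # deliberately small, weak learner
                           max_iter=2000, random_state=b)
        members.append(net.fit(X[idx], y[idx]))

    # Ensemble prediction: average of the member predictions.
    X_new = np.linspace(-3.0, 3.0, 200).reshape(-1, 1)
    y_hat = np.mean([m.predict(X_new) for m in members], axis=0)

Averaging the members mainly attacks the variance term of the decomposition above, which is why even unstable individual networks can yield a stable aggregate.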

Notes

  1. Each contract \((y_k, x_k, \nu_k)\), \(k = 1, \ldots, n\), is selected with probability \(\frac{1}{n}\) at each draw when the bootstrapped data sample is built; see the numerical check after these notes.

  2. E.g., if we approximate \(\ln(\cdot)\) in the Poisson deviance by a first-order Taylor expansion, we recover the expression of the deviance for a normal distribution; see the derivation after these notes.
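
A quick numerical check of Note 1 (our illustration, not from the book): since every draw selects a given contract with probability \(\frac{1}{n}\), a contract appears at least once in a bootstrap sample of size \(n\) with probability \(1 - (1 - \frac{1}{n})^n \approx 1 - e^{-1} \approx 0.632\):

    import numpy as np

    rng = np.random.default_rng(1)
    n, n_boot = 1000, 2000  # portfolio size, number of bootstrap samples (illustrative)

    # Fraction of distinct contracts appearing in each bootstrap sample.
    fractions = [len(np.unique(rng.integers(0, n, size=n))) / n
                 for _ in range(n_boot)]

    print(np.mean(fractions))     # empirical inclusion rate, close to 0.632
    print(1 - (1 - 1 / n) ** n)   # theoretical value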

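Note 2 can also be made explicit (a short derivation in our notation, for one observation \(y\) with fitted mean \(\mu\)): the unit Poisson deviance is \(d(y, \mu) = 2\left[y \ln\frac{y}{\mu} - (y - \mu)\right]\), and replacing the logarithm by its first-order expansion around \(y = \mu\) gives

\[
\ln\frac{y}{\mu} = \ln\left(1 + \frac{y - \mu}{\mu}\right) \approx \frac{y - \mu}{\mu},
\quad\text{so}\quad
d(y, \mu) \approx 2\left[y\,\frac{y - \mu}{\mu} - (y - \mu)\right] = \frac{2\,(y - \mu)^2}{\mu},
\]

i.e., up to the weight \(1/\mu\), the squared error that defines the deviance of a normal distribution.
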
References

  • Amit Y, Geman D (1997) Shape quantization and recognition with randomized trees. Neural Comput 9(7):1545–1588

  • Breiman L (1996) Bagging predictors. Mach Learn 24(2):123–140

  • Breiman L (2001) Random forests. Mach Learn 45:5–32

  • Geman S, Bienenstock E, Doursat R (1992) Neural networks and the bias/variance dilemma. Neural Comput 4:1–58

  • Hansen L, Salamon P (1990) Neural network ensembles. IEEE Trans Pattern Anal Mach Intell 12:993–1001

  • Ho TK (1995) Random decision forests. In: Proceedings of the 3rd international conference on document analysis and recognition. IEEE, Piscataway, pp 278–282

  • Liu Y, Wang Y, Li Y, Zhang B, Wu G (2004) Earthquake prediction by RBF neural network ensemble. In: Yin F-L, Wang J, Guo C (eds) Advances in neural networks – ISNN 2004. Springer, Berlin, pp 962–969

  • Louppe G (2014) Understanding random forests: from theory to practice. PhD dissertation, Faculty of Applied Sciences, Liège University

  • Luedtke AR, van der Laan MJ (2016) Super-learning of an optimal dynamic treatment rule. Int J Biostat 12(1):305–332

  • Pirracchio R, Petersen ML, Carone M, Rigon MR, Chevret S, van der Laan MJ (2015) Mortality prediction in intensive care units with the super ICU learner algorithm (SICULA): a population-based study. Lancet Respir Med 3(1):42–52

  • Rokach L (2010) Ensemble-based classifiers. Artif Intell Rev 33:1–39

  • Shimshoni Y, Intrator N (1998) Classification of seismic signals by integrating ensembles of neural networks. IEEE Trans Signal Process 46(5):1194–1201

  • Shu C, Burn DH (2004) Artificial neural network ensembles and their application in pooled flood frequency analysis. Water Resour Res 40:1–10

  • Trufin J, Denuit M, Hainaut D (2019) Effective statistical learning methods for actuaries. From CART to GBM. Springer, Berlin

  • Wolpert DH (1992) Stacked generalization. Neural Netw 5:241–259


Copyright information

© 2019 Springer Nature Switzerland AG

About this chapter


Cite this chapter

Denuit, M., Hainaut, D., Trufin, J. (2019). Ensemble of Neural Networks. In: Effective Statistical Learning Methods for Actuaries III. Springer Actuarial. Springer, Cham. https://doi.org/10.1007/978-3-030-25827-6_6
