Abstract
We study how the error of an ensemble regression estimator can be decomposed into two components: one accounting for the individual errors and the other accounting for the correlations within the ensemble. This is the well-known Ambiguity decomposition; we present an alternative decomposition of the error and show how both have been exploited in a learning scheme. Using a scaling parameter in the decomposition, we can blend the gradient (and therefore the learning process) smoothly between two extremes: from concentrating on individual accuracies while ignoring diversity, up to a full non-linear optimization of all parameters, treating the ensemble as a single learning unit. We demonstrate how this also applies to ensembles using a soft combination of posterior probability estimates, and so can be utilised for classifier ensembles.
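The gradient blending described above can be illustrated with a minimal sketch in the style of Negative Correlation Learning (the learning scheme the abstract alludes to). The function below is an assumption about the form of the blended per-member gradient, not the authors' exact formulation: each member's penalised error is taken as ½(fᵢ − d)² − λ·½(fᵢ − f̄)², whose gradient with respect to fᵢ interpolates between purely individual training (λ = 0) and a gradient proportional to that of the ensemble mean treated as a single unit (λ = 1).

```python
import numpy as np

def ncl_gradients(preds, target, lam):
    """Blended per-member gradients for a regression ensemble (NCL-style sketch).

    preds  : array of individual member predictions f_i for one example
    target : scalar target d
    lam    : scaling parameter in [0, 1]

    lam = 0.0 -> each member independently minimises its own squared error
    lam = 1.0 -> every member receives the same gradient (fbar - d), i.e.
                 proportional to the gradient of the ensemble-mean error,
                 treating the ensemble as a single learning unit.
    """
    fbar = preds.mean()
    # d/df_i of 0.5*(f_i - d)^2 - lam * 0.5*(f_i - fbar)^2,
    # holding fbar fixed as in the standard NCL derivation
    return (preds - target) - lam * (preds - fbar)
```

At λ = 1 the expression collapses to f̄ − d for every member, which (up to a factor of 1/M, where M is the ensemble size) is the gradient obtained when differentiating the ensemble mean's squared error directly; intermediate λ values interpolate between the two extremes.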
© 2005 Springer-Verlag Berlin Heidelberg
Brown, G., Wyatt, J., Sun, P. (2005). Between Two Extremes: Examining Decompositions of the Ensemble Objective Function. In: Oza, N.C., Polikar, R., Kittler, J., Roli, F. (eds) Multiple Classifier Systems. MCS 2005. Lecture Notes in Computer Science, vol 3541. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11494683_30
Print ISBN: 978-3-540-26306-7
Online ISBN: 978-3-540-31578-0