Bias Estimation for Neural Network Predictions
This paper addresses the problem of performance assessment when neural networks are used for classification tasks. It is well known that the predictions obtained from a trained neural network are subject to error, both in terms of bias and variance in the estimated error rate.
In order to estimate these measures, it is customary to reserve some data as a test set. This is reasonable if data are plentiful, but when the data set is small, it is likely to reduce the accuracy of the network estimates, simply because there are not enough data left for adequate training. An alternative approach, which allows the use of all the data, is to employ the bootstrap method.
Here we give a brief introduction to the bootstrap, and then report on some computational experiments on artificial data sets in order to investigate the potential of this approach for the estimation of error bias.
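As a rough illustration of the kind of procedure investigated here, the following sketch applies Efron's bootstrap optimism estimate to a classifier's apparent error rate on an artificial two-class data set. The nearest-centroid classifier, the data-generating parameters, and the number of resamples `B` are all illustrative assumptions standing in for the trained neural network and experimental settings of the paper, not the authors' actual procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_centroids(X, y):
    # Nearest-centroid classifier: a cheap stand-in for a trained network.
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(model, X):
    classes = sorted(model)
    # Distance from each point to each class centroid; pick the nearest.
    d = np.stack([np.linalg.norm(X - model[c], axis=1) for c in classes], axis=1)
    return np.array(classes)[d.argmin(axis=1)]

def error_rate(model, X, y):
    return float(np.mean(predict(model, X) != y))

def bootstrap_bias(X, y, B=200):
    # Efron's optimism estimate: train on each bootstrap resample, then
    # average the gap between its error on the full sample and its
    # apparent (resubstitution) error on the resample itself.
    n = len(y)
    optimism = []
    for _ in range(B):
        idx = rng.integers(0, n, size=n)  # sample n indices with replacement
        model_b = fit_centroids(X[idx], y[idx])
        optimism.append(error_rate(model_b, X, y)
                        - error_rate(model_b, X[idx], y[idx]))
    return float(np.mean(optimism))

# Artificial two-class Gaussian data (illustrative parameters only).
X = np.vstack([rng.normal(0.0, 1.0, (50, 2)), rng.normal(1.5, 1.0, (50, 2))])
y = np.repeat([0, 1], 50)

model = fit_centroids(X, y)
apparent = error_rate(model, X, y)                # optimistic estimate
corrected = apparent + bootstrap_bias(X, y)      # bias-corrected estimate
```

The apparent error rate is computed on the same data used for training and so tends to be optimistically biased; adding the bootstrap estimate of that optimism yields a bias-corrected error estimate without setting aside a separate test set.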
Keywords: Bootstrap Method, Bootstrap Procedure, Trained Neural Network, Network Estimate, Neural Network Prediction