Comparison of Pruning Algorithms in Neural Networks
Many pruning algorithms have been proposed for selecting a right-sized network. A natural question is which pruning algorithm yields the lowest generalization error in the resulting artificial neural network classifiers. In this paper, we compare the performance of four pruning algorithms in small-training-sample-size situations. A comparative study on artificial and real data suggests that the weight-elimination method proposed by Weigend et al. performs best.
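For concreteness, weight-elimination prunes by adding a complexity penalty to the training error so that small weights decay toward zero and can be removed. Below is a minimal sketch of the penalty term from Weigend et al. (1991); the regularization strength lam and weight scale w0 are illustrative assumptions, tuned per problem in practice.

```python
import numpy as np

def weight_elimination_penalty(weights, w0=1.0):
    """Complexity term from Weigend et al. (1991):
    sum_i (w_i^2 / w0^2) / (1 + w_i^2 / w0^2).
    Weights much smaller than the scale w0 contribute roughly
    w_i^2 / w0^2 and are driven toward zero (prunable), while
    weights much larger than w0 saturate near a cost of 1.
    """
    r = (weights / w0) ** 2
    return np.sum(r / (1.0 + r))

def total_cost(sum_squared_error, weights, lam=1e-4, w0=1.0):
    # Training objective: data misfit plus lam times the penalty.
    # lam and w0 here are hypothetical defaults, not values from the paper.
    return sum_squared_error + lam * weight_elimination_penalty(weights, w0)
```

After training with this objective, weights whose magnitude stays well below w0 carry little of the fit and are the natural candidates for elimination.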
Keywords: Hidden Layer, Training Sample, Gabor Filter, Generalization Error, Gabor Feature
- Gabor, D. (1946): Theory of communication, J. Inst. Elect. Engr., 93, 429–459.
- Hamamoto, Y. (1996b): Recognition of handwritten numerals using Gabor features, In Proc. of 13th Int. Conf. on Pattern Recognition, Vienna, in press.
- Jain, A. K. and Chandrasekaran, B. (1982): Dimensionality and sample size considerations in pattern recognition practice, In Handbook of Statistics, Vol. 2, P. R. Krishnaiah and L. N. Kanal, Eds., North-Holland, 835–855.
- Kruschke, J. K. (1988): Creating local and distributed bottlenecks in hidden layers of back-propagation networks, In Proc. 1988 Connectionist Models Summer School, 120–126.
- Le Cun, Y. (1990): Optimal brain damage, In Advances in Neural Information Processing (2), Denver, 598–605.
- Rumelhart, D. E. et al. (1986): Learning internal representations by error propagation, In D. E. Rumelhart and J. L. McClelland (Eds.), Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 1: Foundations, MIT Press.
- Weigend, A. S. et al. (1991): Generalization by weight-elimination with application to forecasting, In Advances in Neural Information Processing (3), 875–882.