The Evolution of a Feedforward Neural Network trained under Backpropagation
This paper presents a theoretical and empirical analysis of the evolution of a feedforward neural network (FFNN) trained using backpropagation (BP). The results of two sets of experiments are presented which illustrate the nature of BP's search through weight space as the network learns to classify the training data. The search is shown to be driven by the initial values of the weights in the output layer of neurons.
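The setting described above can be sketched in code. The following is a minimal illustration, not the paper's experimental setup: a one-hidden-layer FFNN trained by backpropagation on XOR, in which the hidden-layer initialisation is held fixed while the output-layer weights are drawn from a seeded random initialisation, so different seeds give BP different starting points in weight space. The network size, learning rate, and seeds are all illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_xor(output_seed, epochs=5000, lr=1.0):
    """Train a 2-4-1 FFNN on XOR by backpropagation.

    Returns the per-epoch mean squared error. `output_seed` controls
    only the random initialisation of the output-layer weights
    (illustrative parameter, not from the paper).
    """
    X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
    y = np.array([[0.], [1.], [1.], [0.]])
    rng_hidden = np.random.default_rng(0)            # hidden-layer init held fixed
    rng_output = np.random.default_rng(output_seed)  # output-layer init varied
    W1 = rng_hidden.uniform(-1, 1, (2, 4)); b1 = np.zeros(4)
    W2 = rng_output.uniform(-1, 1, (4, 1)); b2 = np.zeros(1)
    losses = []
    for _ in range(epochs):
        h = sigmoid(X @ W1 + b1)                 # forward pass: hidden layer
        out = sigmoid(h @ W2 + b2)               # forward pass: output layer
        losses.append(float(np.mean((out - y) ** 2)))
        d_out = (out - y) * out * (1 - out)      # output-layer delta
        d_h = (d_out @ W2.T) * h * (1 - h)       # hidden-layer delta (backprop)
        W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
        W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0)
    return losses

losses = train_xor(output_seed=1)
```

Re-running `train_xor` with different values of `output_seed` (hidden-layer initialisation unchanged) is one way to probe how the output-layer starting point shapes BP's trajectory through weight space.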
Keywords: Output Layer; Feedforward Neural Network; Weight Space; Random Weight Initialisation; Random Neural Network