Abstract
While gradient learning with error back-propagation is a practical method of choosing the synaptic weights and thresholds of neurons, it provides no insight into how to choose a network architecture appropriate for a given problem. How many hidden layers are needed, and how many neurons should each layer contain? If the number of hidden neurons is too small, no choice of synaptic weights may yield an accurate mapping between input and output, and the network will fail in the learning stage. If the number is too large, many different solutions exist, most of which do not generalize correctly to new input data, and the network will usually fail in the operational stage. Instead of learning the salient features of the underlying input-output relationship, the network merely learns to distinguish in some arbitrary way between the various input patterns of the training set and to associate them with the correct outputs.
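The sketch below is a minimal illustration of this trade-off, not taken from the chapter: a one-hidden-layer network is trained by gradient descent with error back-propagation on a noisy toy regression task, and training versus held-out error is compared for several hidden-layer widths. The task, the function name fit_mlp, and all parameter values are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Toy input-output relationship: y = sin(2*pi*x) plus noise (an assumed example).
x_train = rng.uniform(-1, 1, size=(30, 1))
y_train = np.sin(2 * np.pi * x_train) + 0.1 * rng.standard_normal((30, 1))
x_test = rng.uniform(-1, 1, size=(200, 1))
y_test = np.sin(2 * np.pi * x_test)

def fit_mlp(n_hidden, epochs=20000, lr=0.05):
    """Train a 1-n_hidden-1 network with tanh hidden units by plain
    gradient descent on the squared error (error back-propagation)."""
    w1 = rng.standard_normal((1, n_hidden)) * 0.5   # input -> hidden weights
    b1 = np.zeros(n_hidden)                          # hidden thresholds
    w2 = rng.standard_normal((n_hidden, 1)) * 0.5    # hidden -> output weights
    b2 = np.zeros(1)                                 # output threshold
    for _ in range(epochs):
        h = np.tanh(x_train @ w1 + b1)               # forward pass
        y_hat = h @ w2 + b2
        err = y_hat - y_train                        # output-layer error
        # Back-propagate: gradients of the mean squared error.
        grad_w2 = h.T @ err / len(x_train)
        grad_b2 = err.mean(0)
        delta = (err @ w2.T) * (1 - h ** 2)          # error at the hidden layer
        grad_w1 = x_train.T @ delta / len(x_train)
        grad_b1 = delta.mean(0)
        w1 -= lr * grad_w1
        b1 -= lr * grad_b1
        w2 -= lr * grad_w2
        b2 -= lr * grad_b2
    def predict(x):
        return np.tanh(x @ w1 + b1) @ w2 + b2
    return predict

for n_hidden in (1, 5, 50):
    predict = fit_mlp(n_hidden)
    train_mse = np.mean((predict(x_train) - y_train) ** 2)
    test_mse = np.mean((predict(x_test) - y_test) ** 2)
    print(f"{n_hidden:3d} hidden units: train MSE {train_mse:.4f}, "
          f"test MSE {test_mse:.4f}")

A too-small hidden layer tends to leave the training error high (failure in the learning stage), while a much larger one can drive the training error down yet track the noise in the training set, which shows up as a larger gap to the test error (failure in the operational stage).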
Copyright information
© 1990 Springer-Verlag Berlin Heidelberg
Cite this chapter
Müller, B., Reinhardt, J. (1990). Network Architecture and Generalization. In: Neural Networks. Physics of Neural Networks. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-97239-3_8
Print ISBN: 978-3-642-97241-6
Online ISBN: 978-3-642-97239-3