Elimination of Overtraining by a Mutual Information Network
The presented learning paradigm uses supervised back-propagation and introduces an extra penalty term into the cost function, which controls the complexity of the internal representation of the hidden neurons in an unsupervised fashion. This term is the mutual information, which penalizes the learning of noise. The learning algorithm was applied to predict German interest rates using real-world historical data, and excellent results were obtained. The effect of overtraining was eliminated, allowing an implementation that finds the solution automatically, without interactive strategies such as stopped training and pruning.
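The abstract does not give the exact form of the penalty term, so the following is only a minimal sketch of the general idea: a back-propagation network whose cost is the squared error plus a weighted, unsupervised information measure on the hidden activations. Here the mutual-information term is approximated by the mean binary entropy of the sigmoid hidden units, a simple stand-in for the information capacity of the hidden layer; the data, network sizes, and the weighting factor `lam` are all illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data, a stand-in for the interest-rate time series.
X = rng.normal(size=(200, 5))
y = (np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)).reshape(-1, 1)

n_hidden, lam, lr, eps = 8, 0.01, 0.1, 1e-9
W1 = rng.normal(scale=0.3, size=(5, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.3, size=(n_hidden, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(500):
    # Forward pass: one sigmoid hidden layer, linear output.
    h = sigmoid(X @ W1 + b1)          # activations in (0, 1)
    out = h @ W2 + b2
    err = out - y

    # Cost = MSE + lam * entropy penalty on hidden activations.
    mse = np.mean(err ** 2)
    ent = -np.mean(h * np.log(h + eps) + (1 - h) * np.log(1 - h + eps))

    # Backward pass for the MSE term.
    d_out = 2 * err / len(X)
    dW2 = h.T @ d_out; db2 = d_out.sum(0)
    d_h = d_out @ W2.T

    # Gradient of the entropy penalty w.r.t. pre-activations:
    # dH/dh = log((1-h)/h) / h.size, and dh/dz = h(1-h).
    d_pre = d_h * h * (1 - h) \
        + lam * h * (1 - h) * np.log((1 - h + eps) / (h + eps)) / h.size
    dW1 = X.T @ d_pre; db1 = d_pre.sum(0)

    # Plain gradient descent step.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

final_mse = np.mean((sigmoid(X @ W1 + b1) @ W2 + b2 - y) ** 2)
```

The penalty pushes hidden units toward saturation, limiting how much (noise) information the hidden layer can encode, which is the intuition behind regularizing with an information measure rather than stopping training early.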
Keywords: Hidden Layer, Mutual Information, Hidden Neuron, Penalty Term, Real World Data