Abstract
The presented learning paradigm uses supervised back-propagation and introduces an extra penalty term into the cost function that controls the complexity of the internal representation of the hidden neurons in an unsupervised way. This term is the mutual information, which penalizes the learning of noise. The learning algorithm was applied to predict German interest rates using real-world historical data, and excellent results were obtained. The effect of overtraining was eliminated, allowing an implementation that finds the solution automatically, without interactive strategies such as stopped training or pruning.
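The penalized cost described above can be sketched as follows. This is a minimal illustration, not the authors' actual formulation: it assumes a one-hidden-layer network and a Gaussian-channel approximation in which the mutual information conveyed by each hidden unit grows with the variance of its activations relative to an assumed noise level `sigma2`; the weighting `lam` and all function names are hypothetical.

```python
import numpy as np

def penalized_cost(W1, W2, X, Y, lam=0.1, sigma2=1.0):
    """Supervised MSE plus an unsupervised mutual-information penalty.

    Assumes a one-hidden-layer tanh network. The penalty uses the
    Gaussian-channel approximation 0.5 * log(1 + var(h_i) / sigma2)
    per hidden unit, so high-variance (noise-fitting) hidden
    representations are punished.
    """
    H = np.tanh(X @ W1)          # hidden activations
    pred = H @ W2                # network output
    mse = np.mean((pred - Y) ** 2)
    mi = 0.5 * np.sum(np.log1p(H.var(axis=0) / sigma2))
    return mse + lam * mi

# Usage: random regression data, two weight matrices.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 4))
Y = rng.standard_normal((50, 1))
W1 = 0.1 * rng.standard_normal((4, 8))
W2 = 0.1 * rng.standard_normal((8, 1))
cost = penalized_cost(W1, W2, X, Y)
```

Because the penalty term is non-negative, setting `lam = 0` recovers the plain supervised cost, and any `lam > 0` can only increase it; during training, the gradient of the full cost pushes hidden units toward low-variance, low-complexity representations.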
Copyright information
© 1993 Springer-Verlag London Limited
Deco, G., Finnoff, W., Zimmermann, H.G. (1993). Elimination of Overtraining by a Mutual Information Network. In: Gielen, S., Kappen, B. (eds) ICANN ’93. ICANN 1993. Springer, London. https://doi.org/10.1007/978-1-4471-2063-6_208
Publisher Name: Springer, London
Print ISBN: 978-3-540-19839-0
Online ISBN: 978-1-4471-2063-6