ICANN ’93, pp. 744–749

Elimination of Overtraining by a Mutual Information Network

  • G. Deco
  • W. Finnoff
  • H. G. Zimmermann

Abstract

The presented learning paradigm uses supervised back-propagation and introduces an extra penalty term in the cost function that controls the complexity of the internal representation of the hidden neurons in an unsupervised fashion. This term is the mutual information, which penalizes the learning of noise. The algorithm was applied to the prediction of German interest rates using real-world historical data, and excellent results were obtained. The effect of overtraining was eliminated, allowing an implementation that finds the solution automatically, without interactive strategies such as stopped training and pruning.
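The mechanism described in the abstract can be sketched as follows. This is a minimal, hypothetical illustration, not the authors' formulation: since the paper's exact penalty is not reproduced here, a Gaussian total-correlation estimate of the mutual information among the hidden activations, I(h) ≈ -½ log det R with R the hidden-unit correlation matrix, is used as a stand-in, and the names total_correlation_penalty, penalized_cost, and lam are illustrative.

```python
# Hypothetical sketch -- not the authors' exact formulation.  A two-layer
# regression network is scored with a cost that adds a mutual-information
# penalty over the hidden activations to the usual squared error.  As a
# stand-in for the paper's penalty we use the Gaussian total correlation
# I(h) = -1/2 * log det R, where R is the correlation matrix of the hidden
# units; it is zero when the units are uncorrelated and grows as they
# become redundant, so minimizing it discourages the learning of noise.
import numpy as np

def total_correlation_penalty(H, eps=1e-6):
    """Gaussian estimate of the mutual information among hidden units.

    H: (n_samples, n_hidden) array of hidden activations.
    Returns a non-negative scalar that is zero iff the units are uncorrelated.
    """
    Hc = H - H.mean(axis=0)
    cov = (Hc.T @ Hc) / (len(H) - 1) + eps * np.eye(H.shape[1])
    d = np.sqrt(np.diag(cov))
    R = cov / np.outer(d, d)                 # correlation matrix
    return -0.5 * np.linalg.slogdet(R)[1]    # -1/2 log det R >= 0

def penalized_cost(params, X, y, lam=0.1):
    W1, b1, W2, b2 = params
    H = np.tanh(X @ W1 + b1)                 # hidden layer
    out = H @ W2 + b2                        # linear output
    mse = np.mean((out - y) ** 2)            # supervised error term
    return mse + lam * total_correlation_penalty(H)

# Example: evaluate the penalized cost for a random network on toy data.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(200, 5)), rng.normal(size=(200, 1))
params = (rng.normal(size=(5, 8)), np.zeros(8),
          rng.normal(size=(8, 1)), np.zeros(1))
print(penalized_cost(params, X, y))
```

In training, the gradient of the penalty would be back-propagated together with the supervised error, with the weight lam trading prediction accuracy against the complexity of the internal representation.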

Keywords

Hidden Layer, Mutual Information, Hidden Neuron, Penalty Term, Real World Data

Copyright information

© Springer-Verlag London Limited 1993

Authors and Affiliations

  • G. Deco (1)
  • W. Finnoff (1)
  • H. G. Zimmermann (1)

  1. Corporate Research and Development, ZFE ST SN 41, Siemens AG, Munich 83, Germany
