Learning and Evolution by Minimization of Mutual Information

  • Yong Liu
  • Xin Yao
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2439)

Abstract

Based on negative correlation learning [1] and evolutionary learning, evolutionary ensembles with negative correlation learning (EENCL) was proposed for learning and designing neural network ensembles [2]. The idea of EENCL is to regard the population of neural networks as an ensemble, and the evolutionary process as the design of the ensemble. EENCL used a fitness sharing scheme based on the covering set, which does not measure similarity within the population accurately. In this paper, a fitness sharing scheme based on mutual information is introduced into EENCL to evolve a diverse and cooperative population. The effectiveness of this evolutionary learning approach was tested on two real-world problems. The paper also analyses negative correlation learning in terms of mutual information on a regression task under different noise conditions.
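The fitness sharing idea can be illustrated with a small sketch. Under a Gaussian assumption, the mutual information between two networks' outputs can be estimated from their correlation coefficient, I(X; Y) = -0.5 ln(1 - rho^2), and a network's raw fitness can then be discounted by its total mutual information with the rest of the population so that selection favours diverse individuals. The function names and the concrete sharing rule below (dividing by 1 + penalty) are illustrative assumptions for this sketch, not the paper's exact formulation.

    import numpy as np

    def pairwise_mutual_information(outputs):
        """Estimate mutual information between each pair of network outputs.

        outputs: array of shape (n_networks, n_samples).
        Under a Gaussian assumption, I(X; Y) = -0.5 * ln(1 - rho^2),
        where rho is the correlation between the two output streams.
        """
        corr = np.corrcoef(outputs)                  # n x n correlation matrix
        rho2 = np.clip(corr ** 2, 0.0, 1.0 - 1e-12)  # guard against log(0)
        mi = -0.5 * np.log(1.0 - rho2)
        np.fill_diagonal(mi, 0.0)                    # ignore self-information
        return mi

    def shared_fitness(raw_fitness, outputs):
        """Hypothetical sharing rule: penalise networks whose outputs
        carry high mutual information with the rest of the population."""
        penalty = pairwise_mutual_information(outputs).sum(axis=1)
        return raw_fitness / (1.0 + penalty)

In an EENCL-style loop, a shared fitness of this kind would replace the covering-set-based sharing when selecting parents for the next generation.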

Keywords

Neural network, Mutual information, Hidden node, Individual network, Evolutionary learning


References

  1. Y. Liu and X. Yao. Simultaneous training of negatively correlated neural networks in an ensemble. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 29(6):716–725, 1999.
  2. Y. Liu, X. Yao, and T. Higuchi. Evolutionary ensembles with negative correlation learning. IEEE Transactions on Evolutionary Computation, 4(4):380–387, 2000.
  3. Y. Liu and X. Yao. Towards designing neural network ensembles by evolution. In Parallel Problem Solving from Nature—PPSN V: Proceedings of the Fifth International Conference on Parallel Problem Solving from Nature, volume 1498 of Lecture Notes in Computer Science, pages 623–632. Springer-Verlag, Berlin, 1998.
  4. D. E. Goldberg. Genetic Algorithms in Search, Optimization, and Machine Learning. Addison-Wesley, Reading, MA, 1989.
  5. J. C. A. van der Lubbe. Information Theory. Prentice-Hall International, Inc., 2nd edition, 1999.
  6. R. T. Clemen and R. L. Winkler. Limits for the precision and value of information from dependent sources. Operations Research, 33:427–442, 1985.
  7. D. E. Rumelhart, G. E. Hinton, and R. J. Williams. Learning internal representations by error propagation. In D. E. Rumelhart and J. L. McClelland, editors, Parallel Distributed Processing: Explorations in the Microstructures of Cognition, Vol. I, pages 318–362. MIT Press, Cambridge, MA, 1986.
  8. R. A. Jacobs. Bias/variance analyses of mixture-of-experts architectures. Neural Computation, 9:369–383, 1997.
  9. D. B. Fogel. Evolutionary Computation: Towards a New Philosophy of Machine Intelligence. IEEE Press, New York, NY, 1995.
  10. D. Michie, D. J. Spiegelhalter, and C. C. Taylor. Machine Learning, Neural and Statistical Classification. Ellis Horwood Limited, London, 1994.
  11. M. Stone. Cross-validatory choice and assessment of statistical predictions. Journal of the Royal Statistical Society, 36:111–147, 1974.

Copyright information

© Springer-Verlag Berlin Heidelberg 2002

Authors and Affiliations

  • Yong Liu (1)
  • Xin Yao (2)
  1. The University of Aizu, Fukushima, Japan
  2. School of Computer Science, The University of Birmingham, Edgbaston, Birmingham, UK
