Learning and Evolution by Minimization of Mutual Information

  • Conference paper
Parallel Problem Solving from Nature — PPSN VII (PPSN 2002)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 2439)

Abstract

Based on negative correlation learning [1] and evolutionary learning, evolutionary ensembles with negative correlation learning (EENCL) were proposed for learning and designing neural network ensembles [2]. The idea of EENCL is to regard the population of neural networks as an ensemble, and the evolutionary process as the design of the neural network ensemble. EENCL used a fitness sharing scheme based on the covering set, which did not measure similarity within the population accurately. In this paper, a fitness sharing scheme based on mutual information is introduced into EENCL to evolve a diverse and cooperative population. The effectiveness of this evolutionary learning approach was tested on two real-world problems. The paper also analyzes negative correlation learning in terms of mutual information on a regression task under different noise conditions.
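The paper gives the exact formulations of negative correlation learning and of the mutual-information-based fitness sharing; the short Python sketch below is an illustration only of how such a sharing scheme could be computed. It assumes the ensemble members' outputs are roughly jointly Gaussian, so that the mutual information between two members can be estimated from their output correlation rho as I = -0.5 ln(1 - rho^2); the function names, the alpha weighting of the niche count, and the toy ensemble are invented for this sketch and are not taken from the paper.

    import numpy as np

    def pairwise_mutual_information(outputs):
        # outputs has shape (n_networks, n_samples): each row is one member's
        # outputs on a common data set.  Under a joint-Gaussian assumption,
        # I(f_i; f_j) = -0.5 * ln(1 - rho_ij^2), where rho_ij is the correlation
        # coefficient between the two output sequences.
        rho = np.corrcoef(outputs)                  # (n, n) correlation matrix
        rho = np.clip(rho, -0.999999, 0.999999)     # keep the logarithm finite
        mi = -0.5 * np.log(1.0 - rho ** 2)
        np.fill_diagonal(mi, 0.0)                   # ignore self-information
        return mi

    def shared_fitness(raw_fitness, outputs, alpha=1.0):
        # Members whose outputs share much mutual information with the rest of
        # the population get a larger niche count and hence a lower shared
        # fitness; alpha (a hypothetical knob) scales the penalty.
        mi = pairwise_mutual_information(outputs)
        niche_count = 1.0 + alpha * mi.sum(axis=1)
        return raw_fitness / niche_count

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        n_networks, n_samples = 5, 200
        target = np.sin(np.linspace(0.0, 2.0 * np.pi, n_samples))

        # Toy "ensemble": noisy copies of the target.  Members 0 and 1 carry
        # little noise and are nearly interchangeable, so sharing penalizes
        # them more heavily (larger niche count) than the noisier members.
        outputs = np.stack([
            target + (0.05 if i < 2 else 0.30) * rng.standard_normal(n_samples)
            for i in range(n_networks)
        ])

        raw = 1.0 / (np.mean((outputs - target) ** 2, axis=1) + 1e-8)
        print("raw fitness:   ", np.round(raw, 2))
        print("shared fitness:", np.round(shared_fitness(raw, outputs), 2))

In an EENCL-style loop, such shared fitness values would then drive selection, so that accurate but mutually redundant networks are less likely to dominate the next generation.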

References

  1. Y. Liu and X. Yao. Simultaneous training of negatively correlated neural networks in an ensemble. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 29(6):716–725, 1999.

  2. Y. Liu, X. Yao, and T. Higuchi. Evolutionary ensembles with negative correlation learning. IEEE Transactions on Evolutionary Computation, 4(4):380–387, 2000.

  3. Y. Liu and X. Yao. Towards designing neural network ensembles by evolution. In Parallel Problem Solving from Nature — PPSN V: Proceedings of the Fifth International Conference on Parallel Problem Solving from Nature, volume 1498 of Lecture Notes in Computer Science, pages 623–632. Springer-Verlag, Berlin, 1998.

  4. D. E. Goldberg. Genetic Algorithms in Search, Optimization, and Machine Learning. Addison-Wesley, Reading, MA, 1989.

  5. J. C. A. van der Lubbe. Information Theory. Prentice-Hall International, Inc., 2nd edition, 1999.

  6. R. T. Clemen and R. L. Winkler. Limits for the precision and value of information from dependent sources. Operations Research, 33:427–442, 1985.

  7. D. E. Rumelhart, G. E. Hinton, and R. J. Williams. Learning internal representations by error propagation. In D. E. Rumelhart and J. L. McClelland, editors, Parallel Distributed Processing: Explorations in the Microstructures of Cognition, Vol. I, pages 318–362. MIT Press, Cambridge, MA, 1986.

  8. R. A. Jacobs. Bias/variance analyses of mixture-of-experts architectures. Neural Computation, 9:369–383, 1997.

  9. D. B. Fogel. Evolutionary Computation: Towards a New Philosophy of Machine Intelligence. IEEE Press, New York, NY, 1995.

  10. D. Michie, D. J. Spiegelhalter, and C. C. Taylor. Machine Learning, Neural and Statistical Classification. Ellis Horwood Limited, London, 1994.

  11. M. Stone. Cross-validatory choice and assessment of statistical predictions. Journal of the Royal Statistical Society, 36:111–147, 1974.

Copyright information

© 2002 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Liu, Y., Yao, X. (2002). Learning and Evolution by Minimization of Mutual Information. In: Guervós, J.J.M., Adamidis, P., Beyer, H.-G., Schwefel, H.-P., Fernández-Villacañas, J.-L. (eds) Parallel Problem Solving from Nature — PPSN VII. PPSN 2002. Lecture Notes in Computer Science, vol 2439. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-45712-7_48

  • DOI: https://doi.org/10.1007/3-540-45712-7_48

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-44139-7

  • Online ISBN: 978-3-540-45712-1

  • eBook Packages: Springer Book Archive
