Limiting the Number of Fitness Cases in Genetic Programming Using Statistics

  • Mario Giacobini
  • Marco Tomassini
  • Leonardo Vanneschi
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2439)

Abstract

Fitness evaluation is often a time-consuming activity in genetic programming applications, and it is thus of interest to find criteria that help reduce this time without compromising the quality of the results. We use well-known results from statistics and information theory to limit the number of fitness cases needed for reliable function reconstruction in genetic programming. Two numerical examples show that the experimental results agree with our theoretical predictions. Since our approach is problem-independent, it can be used together with techniques for choosing an efficient set of fitness cases.
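
To make the abstract's idea concrete, here is a minimal, hypothetical sketch (in Python) of the kind of well-known statistical result the abstract alludes to; it is not the authors' own derivation. It uses Hoeffding's inequality to bound how many independently drawn fitness cases are needed so that a program's sampled error rate stays within a tolerance epsilon of its true error rate with probability at least 1 − delta. The function name and parameter choices are illustrative assumptions.

```python
# Illustrative sketch only: a Hoeffding-style sample-size bound, assuming each
# per-case error is a value in [0, 1]. This is NOT the paper's exact criterion,
# just the kind of standard statistical bound such an approach can build on.
import math

def fitness_cases_needed(epsilon: float, delta: float) -> int:
    """Smallest n such that, over n independently drawn fitness cases, the
    sampled error rate deviates from the true error rate by more than
    `epsilon` with probability at most `delta` (Hoeffding's inequality)."""
    return math.ceil(math.log(2.0 / delta) / (2.0 * epsilon ** 2))

# Example: estimate a program's error rate to within 0.05 with 99% confidence.
print(fitness_cases_needed(epsilon=0.05, delta=0.01))  # -> 1060
```

Under this hedged reading, the required number of cases depends only on the chosen accuracy and confidence, not on the particular target function, which is consistent with the abstract's claim that the approach is problem-independent.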

Keywords

Genetic Programming, Boolean Function, Target Function, Function Entropy, Discrete Random Variable

Copyright information

© Springer-Verlag Berlin Heidelberg 2002

Authors and Affiliations

  • Mario Giacobini (1)
  • Marco Tomassini (1)
  • Leonardo Vanneschi (1)
  1. Computer Science Institute, University of Lausanne, Lausanne, Switzerland
