Inductive learning of mutation step-size in evolutionary parameter optimization

  • Enhanced Evolutionary Operators
  • Conference paper
Evolutionary Programming VI (EP 1997)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 1213)

Abstract

The problem of setting the mutation step-size in real-coded evolutionary algorithms has received different answers: exogenous rules such as the 1/5 rule, or endogenous mechanisms such as the self-adaptation of the step-size in the Gaussian mutation of modern Evolution Strategies. On the other hand, in the bitstring framework, controlling both crossover and mutation by means of Inductive Learning has proven beneficial to evolution, mostly by recognizing and forbidding past errors, i.e. crossovers or mutations leading to offspring that will not survive the next selection step. This Inductive Learning-based control is here transposed to the control of the mutation step-size in evolutionary parameter optimization, and the resulting algorithm is experimentally compared with the self-adaptive step-size of Evolution Strategies.
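To make the two baseline step-size control schemes concrete, the Python sketch below illustrates an exogenous rule (Rechenberg's 1/5 success rule in a (1+1)-ES) and the endogenous log-normal self-adaptation of sigma in a (mu, lambda) Evolution Strategy. The objective function, population sizes and constants are illustrative assumptions chosen only to keep the example self-contained; the sketch shows the baselines the paper compares against, not the Inductive Learning-based control it proposes.

# Minimal sketch (not the paper's algorithm) of the two step-size control
# schemes mentioned in the abstract: Rechenberg's 1/5 success rule
# (exogenous) and log-normal self-adaptation of sigma (endogenous, as in
# Evolution Strategies). Problem and constants are illustrative assumptions.
import math
import random

def sphere(x):
    """Toy objective: minimize the sum of squares."""
    return sum(xi * xi for xi in x)

def one_fifth_rule(n=10, sigma=1.0, generations=200, c=0.85):
    """(1+1)-ES with the exogenous 1/5 success rule for sigma."""
    parent = [random.uniform(-5, 5) for _ in range(n)]
    f_parent, successes = sphere(parent), 0
    for g in range(1, generations + 1):
        child = [xi + sigma * random.gauss(0, 1) for xi in parent]
        f_child = sphere(child)
        if f_child <= f_parent:            # successful mutation
            parent, f_parent = child, f_child
            successes += 1
        if g % 10 == 0:                    # adapt sigma every 10 generations
            rate = successes / 10
            sigma = sigma / c if rate > 0.2 else sigma * c
            successes = 0
    return f_parent

def self_adaptive_es(n=10, mu=5, lam=35, generations=200):
    """(mu, lambda)-ES: each individual carries its own sigma, which is
    mutated log-normally before it mutates the object variables."""
    tau = 1.0 / math.sqrt(n)
    pop = [([random.uniform(-5, 5) for _ in range(n)], 1.0) for _ in range(mu)]
    for _ in range(generations):
        offspring = []
        for _ in range(lam):
            x, sigma = random.choice(pop)
            new_sigma = sigma * math.exp(tau * random.gauss(0, 1))
            new_x = [xi + new_sigma * random.gauss(0, 1) for xi in x]
            offspring.append((new_x, new_sigma))
        offspring.sort(key=lambda ind: sphere(ind[0]))
        pop = offspring[:mu]               # non-elitist (comma) selection
    return sphere(pop[0][0])

if __name__ == "__main__":
    random.seed(0)
    print("1/5 rule       :", one_fifth_rule())
    print("self-adaptation:", self_adaptive_es())

Both schemes adjust sigma without an external schedule: the 1/5 rule monitors the recent success rate, while self-adaptation lets selection act on the step-sizes carried by the individuals themselves.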

Author information

Authors: M. Sebag, M. Schoenauer, C. Ravisé

Editor information

Peter J. Angeline, Robert G. Reynolds, John R. McDonnell, Russ Eberhart

Copyright information

© 1997 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Sebag, M., Schoenauer, M., Ravisé, C. (1997). Inductive learning of mutation step-size in evolutionary parameter optimization. In: Angeline, P.J., Reynolds, R.G., McDonnell, J.R., Eberhart, R. (eds) Evolutionary Programming VI. EP 1997. Lecture Notes in Computer Science, vol 1213. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0014816

  • DOI: https://doi.org/10.1007/BFb0014816

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-62788-3

  • Online ISBN: 978-3-540-68518-0

  • eBook Packages: Springer Book Archive
