On the Analysis of Dynamic Restart Strategies for Evolutionary Algorithms

  • Thomas Jansen
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2439)


Since evolutionary algorithms make heavy use of randomness, they typically succeed only with some probability. In case of failure, the algorithm is often restarted. Ideally, the point in time at which the current run is declared a failure, so that the algorithm is stopped and restarted, is determined by the algorithm itself rather than by the user. Here, very simple non-adaptive restart strategies are compared on a number of examples with different properties. Circumstances under which specific types of dynamic restart strategies should be applied are described, and the potential loss incurred by choosing an inadequate restart strategy is estimated.
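To make the setting concrete, the following is a minimal sketch (not taken from the paper) of a non-adaptive restart strategy of the kind analyzed here: each run of the underlying algorithm is aborted after a fixed evaluation budget and restarted from a fresh random point. A (1+1) EA on the OneMax function serves as the example algorithm; all function names and parameter values are illustrative assumptions.

```python
import random

def one_plus_one_ea(fitness, n, budget, rng):
    """Run a (1+1) EA with standard bit-flip mutation (flip each bit with
    probability 1/n) for at most `budget` evaluations.
    Returns (search point, evaluations used, success flag)."""
    x = [rng.randint(0, 1) for _ in range(n)]
    fx = fitness(x)
    for t in range(1, budget + 1):
        # Offspring: flip each bit independently with probability 1/n.
        y = [b ^ (rng.random() < 1.0 / n) for b in x]
        fy = fitness(y)
        if fy >= fx:
            x, fx = y, fy
        if fx == n:  # global optimum of OneMax reached
            return x, t, True
    return x, budget, False

def restart_ea(fitness, n, run_budget, max_restarts, rng):
    """Non-adaptive restart strategy: abort each run after a fixed budget
    of `run_budget` evaluations, restarting up to `max_restarts` times."""
    total = 0
    for _ in range(max_restarts):
        x, used, ok = one_plus_one_ea(fitness, n, run_budget, rng)
        total += used
        if ok:
            return x, total
    return None, total

if __name__ == "__main__":
    rng = random.Random(0)
    # OneMax: fitness of a bit string is its number of one-bits.
    x, evals = restart_ea(sum, 20, run_budget=2000, max_restarts=10, rng=rng)
    print(x is not None, evals)
```

The fixed per-run budget is the simplest, static instance of a restart schedule; the paper's point is precisely that how this cut-off is chosen, relative to the success probability of a single run, determines how much performance can be lost.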





Copyright information

© Springer-Verlag Berlin Heidelberg 2002

Authors and Affiliations

  • Thomas Jansen
  1. George Mason University, Fairfax, USA
