On the Analysis of Dynamic Restart Strategies for Evolutionary Algorithms
Since evolutionary algorithms make heavy use of randomness, they typically succeed only with some probability. In case of failure, the algorithm is often restarted. It is clearly desirable that the point in time at which the current run is considered a failure, so that the algorithm is stopped and restarted, be determined by the algorithm itself rather than by the user. Here, very simple non-adaptive restart strategies are compared on a number of examples with different properties. Circumstances under which specific types of dynamic restart strategies should be applied are described, and the potential loss incurred by choosing an inadequate restart strategy is estimated.
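To make the idea concrete, a non-adaptive restart strategy of the simple kind described above might be sketched as follows. This is purely illustrative and not the paper's own scheme: the choice of a (1+1) evolutionary algorithm, the OneMax test function, and the fixed per-run cutoff are all assumptions made for the example.

```python
import random

def one_plus_one_ea(fitness, n, budget, rng):
    """Run a (1+1) EA with standard bit mutation (flip probability 1/n)
    for at most `budget` steps. Returns (current_x, steps_used)."""
    x = [rng.randint(0, 1) for _ in range(n)]
    fx = fitness(x)
    for step in range(1, budget + 1):
        # Mutate each bit independently with probability 1/n.
        y = [1 - b if rng.random() < 1.0 / n else b for b in x]
        fy = fitness(y)
        if fy >= fx:
            x, fx = y, fy
        if fx == n:  # assumes the optimum has fitness n (true for OneMax)
            return x, step
    return x, budget

def restart_ea(fitness, n, cutoff, max_restarts, rng):
    """Non-adaptive restart strategy: abandon the current run after a fixed
    `cutoff` of steps and start afresh, up to `max_restarts` independent runs.
    Returns (solution_or_None, total_steps_spent)."""
    total = 0
    for _ in range(max_restarts):
        x, used = one_plus_one_ea(fitness, n, cutoff, rng)
        total += used
        if fitness(x) == n:
            return x, total
    return None, total

if __name__ == "__main__":
    rng = random.Random(0)
    # OneMax: fitness is the number of ones in the bit string.
    best, steps = restart_ea(sum, 20, cutoff=2000, max_restarts=10, rng=rng)
    print(best is not None, steps)
```

The cutoff here is static: it does not depend on the progress of the current run, which is exactly what distinguishes such non-adaptive strategies from ones where the algorithm itself decides when a run has failed.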