Abstract
The aim of this paper is to extend selection learning, initially designed for the optimization of real functions over fixed-length binary strings, to fixed-length strings over an arbitrary finite alphabet. We derive selection learning algorithms from clear principles. First, we search over product probability measures on d-ary strings, or equivalently, random variables whose components are statistically independent. Second, these distributions are evaluated through the expectation of the fitness function; more precisely, we consider the logarithm of the expectation in order to introduce fitness-proportional and Boltzmann selections. Third, we define two kinds of gradient systems to maximize the expectation: the first drives unbounded parameters, whereas the second directly drives probabilities, à la PBIL. We also introduce composite selection, that is, algorithms which take into account positively as well as negatively selected strings. We propose stochastic approximations for the gradient systems and, finally, apply three of the resulting algorithms to two test functions, One Max and Big Jump, and draw some conclusions on their relative strengths and weaknesses.
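To make the setting concrete, the following is a minimal, illustrative sketch of a PBIL-style algorithm over d-ary strings: one categorical distribution per position (a product measure) is sampled, and the marginals are moved toward the best strings. All names, parameter values, and the update rule here are illustrative assumptions, not the paper's exact algorithms; One Max is taken, for this sketch, as the number of positions holding a designated symbol.

```python
import random

def one_max(x):
    """One Max fitness: number of positions equal to symbol 0 (illustrative choice)."""
    return sum(1 for s in x if s == 0)

def pbil_dary(fitness, length=20, d=4, pop=50, mu=5, lr=0.05, gens=200, seed=0):
    """Hypothetical PBIL-style sketch on d-ary strings.

    Maintains one categorical distribution per position (a product
    probability measure), samples a population, and shifts each marginal
    toward the empirical frequencies of the mu best samples.
    """
    rng = random.Random(seed)
    # p[i][a] = probability of symbol a at position i; start uniform.
    p = [[1.0 / d] * d for _ in range(length)]
    best, best_f = None, float("-inf")
    for _ in range(gens):
        samples = [[rng.choices(range(d), weights=p[i])[0] for i in range(length)]
                   for _ in range(pop)]
        samples.sort(key=fitness, reverse=True)
        if fitness(samples[0]) > best_f:
            best, best_f = samples[0], fitness(samples[0])
        for i in range(length):
            # Frequency of each symbol at position i among the mu selected strings.
            freq = [0.0] * d
            for x in samples[:mu]:
                freq[x[i]] += 1.0 / mu
            # Convex combination keeps each marginal a valid distribution.
            p[i] = [(1 - lr) * p[i][a] + lr * freq[a] for a in range(d)]
    return best, best_f

best, best_f = pbil_dary(one_max)
```

Note that this sketch uses only positive (truncation) selection; the composite selection studied in the paper would additionally push probabilities away from negatively selected strings.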
References
S. Baluja and R. Caruana. Removing the genetics from the standard genetic algorithm. In A. Prieditis and S. Russell, editors, Proceedings of the 12th International Conference on Machine Learning, pages 38–46. Morgan Kaufmann, 1995.
A. Berny. Statistical machine learning and combinatorial optimization. In [7].
A. Berny. An adaptive scheme for real function optimization acting as a selection operator. In X. Yao and D.B. Fogel, editors, First IEEE Symposium on Combinations of Evolutionary Computation and Neural Networks, pages 140–149, San Antonio, May 2000.
A. Berny. Apprentissage et optimisation statistiques, application à la radiotéléphonie mobile. PhD thesis, Université de Nantes, 2000. In French.
A. Berny. Selection and reinforcement learning for combinatorial optimization. In M. Schoenauer et al., editors, Parallel Problem Solving from Nature VI, Lecture Notes in Computer Science, pages 601–610, Paris, September 2000. Springer-Verlag.
A. Johnson and J. Shapiro. The importance of selection mechanisms in distribution estimation algorithms. In Artificial Evolution, Le Creusot, France, October 2001.
L. Kallel, B. Naudts, and A. Rogers, editors. Theoretical Aspects of Evolutionary Computing. Natural Computing Series. Springer-Verlag, 2001.
C.-C. Lo and C.-C. Hsu. An annealing framework with learning memory. IEEE Trans. on Systems, Man, and Cybernetics part A, 28(5):648–661, September 1998.
N. Meuleau and M. Dorigo. Ant colony optimization and stochastic gradient descent. Technical report, IRIDIA, December 2000.
H. Mühlenbein. Evolutionary algorithms: from recombination to search distributions. In [7].
A. Ratle and M. Sebag. Avoiding the bloat with stochastic grammar-based genetic programming. In Artificial Evolution, Le Creusot, France, October 2001.
D. Robilliard and C. Fonlupt. A shepherd and a sheepdog to guide evolutionary computation? In C. Fonlupt, J.-K. Hao, E. Lutton, E. Ronald, and M. Schoenauer, editors, Artificial Evolution, Lecture Notes in Computer Science, pages 277–291. Springer-Verlag, 1999.
M. Sebag and M. Schoenauer. A society of hill-climbers. In Proc. IEEE Int. Conf. on Evolutionary Computation, pages 319–324, Indianapolis, April 1997.
M. P. Servais, G. de Jager, and J. R. Greene. Function optimization using multiple-base population based incremental learning. In Proc. Eighth South African Workshop on Pattern Recognition, 1997.
G. Yin, G. Rudolph, and H.-P. Schwefel. Analyzing (1, λ) Evolution Strategy via stochastic approximation methods. Informatica, 3(4):473–489, 1995.
G. Yin, G. Rudolph, and H.-P. Schwefel. Establishing connections between evolutionary algorithms and stochastic approximation. Informatica, 6(1):93–116, 1995.
© 2002 Springer-Verlag Berlin Heidelberg
Berny, A. (2002). Extending Selection Learning toward Fixed-Length d-Ary Strings. In: Collet, P., Fonlupt, C., Hao, J.-K., Lutton, E., Schoenauer, M. (eds) Artificial Evolution. EA 2001. Lecture Notes in Computer Science, vol 2310. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-46033-0_5
Print ISBN: 978-3-540-43544-0
Online ISBN: 978-3-540-46033-6