
Self-organizing Neural Network for Adaptive Operator Selection in Evolutionary Search

  • Teck-Hou Teng
  • Stephanus Daniel Handoko
  • Hoong Chuin Lau
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10079)

Abstract

The Evolutionary Algorithm is a well-known meta-heuristic paradigm capable of providing high-quality solutions to computationally hard problems. As with other meta-heuristics, its performance is often attributed to appropriate design choices, such as the choice of crossover operators and other parameters. In this chapter, we propose a continuous-state Markov Decision Process model that selects crossover operators based on the states encountered during evolutionary search. We propose to learn the operator selection policy efficiently using a self-organizing neural network, which is trained offline on randomly selected training samples. The trained neural network is then verified on test instances not used to generate the training samples. We evaluate the efficacy and robustness of our proposed approach on benchmark instances of the Quadratic Assignment Problem.
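The sketch below illustrates the general idea described in the abstract: treating crossover-operator selection as a decision problem over a continuous search state, with a policy learned offline from randomly collected samples. It is a minimal illustration, not the authors' method; a simple linear value model stands in for the self-organizing neural network, and the state features, reward, and operator names are assumptions introduced for the example.

```python
# Hypothetical sketch of adaptive crossover-operator selection.
# A linear value model approximates the expected reward of applying an
# operator in a given (continuous) search state; epsilon-greedy selection
# picks the operator online, and the model is trained offline on randomly
# generated (state, operator, reward) samples.
import random

OPERATORS = ["cycle_crossover", "order_crossover", "partially_mapped_crossover"]
N_FEATURES = 3  # e.g. population diversity, stagnation count, normalized best cost

class OperatorSelector:
    def __init__(self, lr=0.1, epsilon=0.1):
        self.lr = lr
        self.epsilon = epsilon
        # One weight vector per operator: value(state, op) = w_op . state
        self.weights = {op: [0.0] * N_FEATURES for op in OPERATORS}

    def value(self, state, op):
        return sum(w * s for w, s in zip(self.weights[op], state))

    def select(self, state):
        # Epsilon-greedy choice over estimated operator values.
        if random.random() < self.epsilon:
            return random.choice(OPERATORS)
        return max(OPERATORS, key=lambda op: self.value(state, op))

    def update(self, state, op, reward):
        # Move the estimate toward the observed reward (e.g. fitness improvement).
        error = reward - self.value(state, op)
        self.weights[op] = [w + self.lr * error * s
                            for w, s in zip(self.weights[op], state)]

def train_offline(selector, samples):
    # Offline training on randomly collected (state, operator, reward) samples,
    # before the selector is used during an actual evolutionary run.
    for state, op, reward in samples:
        selector.update(state, op, reward)

if __name__ == "__main__":
    random.seed(0)
    # Random samples stand in for logged search data.
    samples = [([random.random() for _ in range(N_FEATURES)],
                random.choice(OPERATORS),
                random.random()) for _ in range(500)]
    selector = OperatorSelector()
    train_offline(selector, samples)
    print(selector.select([0.5, 0.2, 0.8]))  # operator chosen for a new search state
```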


Copyright information

© Springer International Publishing AG 2016

Authors and Affiliations

  • Teck-Hou Teng
  • Stephanus Daniel Handoko
  • Hoong Chuin Lau

  1. School of Information Systems, Singapore Management University, Singapore
