Data-Driven Adaptation in Memetic Algorithms

  • Abhishek Gupta
  • Yew-Soon Ong
Chapter
Part of the Adaptation, Learning, and Optimization book series (ALO, volume 21)

Abstract

As was empirically demonstrated in the previous chapter, a carelessly configured combination of memetics and a base evolutionary algorithm (EA) can lead to below-par optimization performance. The typical issues that must be addressed in the design of such memetic algorithms (MAs), often requiring domain expertise or extensive human intervention to tune control parameters, include: (i) selecting the subset of solutions on which local refinement is to be carried out, (ii) determining the local search intensity (i.e., the computational budget allocated to the lifetime learning of individuals in the MA), and (iii) choosing the lifetime learning method, i.e., the meme, to be used for the problem at hand, given a catalogue of multiple memes (multi-memes) to choose from. In this chapter, we offer a data-driven alternative for tackling some of these issues.
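
To make these three design decisions concrete, the sketch below outlines a bare-bones memetic loop in Python. The sphere objective, the two hill-climbing memes (differing only in step size), the fixed refinement fraction and local budget, and the reward-proportional meme-selection rule are all illustrative assumptions for this sketch, written in the spirit of meta-Lamarckian meme selection rather than as the chapter's exact method.

    import random

    # A minimal sketch of a memetic algorithm (MA) in which the choice of meme
    # is adapted online from observed search data. The objective, the memes,
    # and all constants below are assumptions made for illustration only.

    DIM, POP_SIZE, GENS = 10, 30, 50

    def fitness(x):
        return sum(v * v for v in x)  # sphere function (to be minimized)

    def hill_climb(x, step, budget):
        """Meme: stochastic hill climbing with a fixed Gaussian step size."""
        best, best_f = list(x), fitness(x)
        for _ in range(budget):
            cand = [v + random.gauss(0.0, step) for v in best]
            f = fitness(cand)
            if f < best_f:
                best, best_f = cand, f
        return best, best_f

    # Multi-meme catalogue: the same operator with different step sizes.
    MEMES = {"coarse": 1.0, "fine": 0.05}
    reward = {m: 1.0 for m in MEMES}  # running credit earned by each meme

    def pick_meme():
        # Data-driven choice (iii): sample memes in proportion to the fitness
        # improvements they have delivered so far (roulette-wheel selection).
        total = sum(reward.values())
        r, acc = random.uniform(0.0, total), 0.0
        for m, w in reward.items():
            acc += w
            if r <= acc:
                return m
        return m  # numerical fallback

    pop = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(POP_SIZE)]
    for gen in range(GENS):
        # Base EA step (reduced to Gaussian mutation for brevity).
        pop = [[v + random.gauss(0.0, 0.3) for v in x] for x in pop]
        pop.sort(key=fitness)
        for i in range(POP_SIZE // 5):      # choice (i): refine the top 20%
            meme = pick_meme()
            budget = 20                     # choice (ii): fixed local budget
            before = fitness(pop[i])
            pop[i], after = hill_climb(pop[i], MEMES[meme], budget)
            reward[meme] += max(0.0, before - after)  # credit the meme used

    print("best fitness:", fitness(pop[0]), "| meme rewards:", reward)

Crediting each meme with the fitness improvement it produces, and then sampling memes in proportion to that accumulated credit, is one simple way in which the run's own data can take over a decision that would otherwise fall to hand-tuning.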

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. School of Computer Science and Engineering, Nanyang Technological University, Singapore
