
The Memetic Automaton

  • Abhishek Gupta
  • Yew-Soon Ong
Chapter
Part of the Adaptation, Learning, and Optimization book series (ALO, volume 21)

Abstract

Real-world problems of interest seldom exist in isolation. Thus, we humans routinely exploit pre-existing ideas, whether our own or gleaned from others, whenever we face a never-before-seen challenge or task. It is these building blocks of knowledge, residing in our brains, that Richard Dawkins first referred to as “memes” in his 1976 book The Selfish Gene. Incidentally, in the present day, a perennial source of rich and diverse memes, infiltrating all aspects of human and industrial activity, happens to be the internet. Despite the growing ubiquity of this technology, and its known association with the memetics concept (as evidenced by the spread of so-called “internet memes”), it is striking that most computational systems, including optimization engines, continue to adhere to a tabula rasa-style approach of tackling problems from scratch. In contrast to humans, their capabilities do not grow with experience. This holds true even for the (admittedly limited) algorithmic realizations of memetics in earlier chapters of this book, where the discussion focused on hybrid optimizers in which memes merely served a complementary role in the “lifetime learning” phase of an evolutionary cycle. What is more, even the simultaneous problem learning and optimization strategies of Chap. 3 offered only a partial glimpse of what comprehensive memetic computation (MC) can achieve in practice, as the learning was restricted to datasets originating from a single problem at a time, with little scope for information transfer across distinct optimization exercises. Thus, in order to bring MC closer to human-like problem-solving prowess, in this chapter we put forward the novel concept of memetic automatons.

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. School of Computer Science and Engineering, Nanyang Technological University, Singapore
  2. School of Computer Science and Engineering, Nanyang Technological University, Singapore
