
Future Direction: Compressed Meme Space Evolutions

  • Abhishek Gupta
  • Yew-Soon Ong
Chapter
Part of the Adaptation, Learning, and Optimization book series (ALO, volume 21)

Abstract

So far in the book, we have demonstrated how the notion of problem learning can be incorporated into the design of search and optimization algorithms. It is the learned knowledge, expressed in arbitrary computational representations, that we refer to as memes. Thus, by augmenting a base optimizer with a memetics module (i.e., learning), custom search behaviors can be tailored on the fly. Building on this, Part II of the book showed that the impact of learned memes need not be restricted to a single task, and presented theories and methods for their adaptive transmission across problems and machines. Notably, the practical realization of such a system aligns well with modern-day technologies such as the cloud and the Internet of Things (IoT), which offer large-scale data storage and seamless communication facilities. With the above in mind, the goal of this (final) chapter is to highlight a different implication of the aforementioned technologies that remains to be fully explored in the context of memetic computation. It is deemed that, in addition to influencing the course of algorithm development, the widespread inter-linking of physical devices (driven by the IoT) will affect the nature of problems themselves. In particular, the combined space of possible solution configurations for inter-connected problems will naturally give rise to large-scale optimization scenarios that push the limits of existing optimizers. We contend that in such settings it makes sense to dissolve the existing distinction between the memetics module and the base optimizer, such that evolutionary processes can be carried out directly in a compressed meme space, in the spirit of universal Darwinism.
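
To make the idea of evolving directly in a compressed meme space concrete, the sketch below is a purely illustrative toy, not the method developed in this book. It evolves short real-valued meme vectors for a synthetic large-scale 0/1 knapsack-style instance; a fixed random linear map is assumed as the decoder from the compressed space to full solution configurations, and a plain (mu + lambda) strategy acts only on the compressed vectors. The instance data, the decoder, the penalty weight, and all parameter values are hypothetical placeholders.

```python
# Minimal sketch (illustrative only): evolution carried out directly in a
# compressed meme space, with a fixed decoder expanding each short meme
# vector into a full large-scale solution configuration.
import numpy as np

rng = np.random.default_rng(0)

# Illustrative large-scale 0/1 knapsack instance (all numbers are arbitrary).
n_items = 10_000
values = rng.uniform(1.0, 10.0, n_items)
weights = rng.uniform(1.0, 10.0, n_items)
capacity = 0.3 * weights.sum()

# Assumed decoder: a fixed random linear map from a d-dimensional meme
# vector (d << n_items) to a full binary solution.
d = 32
decoder = rng.standard_normal((n_items, d))

def decode(meme):
    """Expand a d-dimensional meme into an n_items-dimensional 0/1 solution."""
    return (decoder @ meme > 0.0).astype(int)

def fitness(meme):
    """Penalized knapsack value of the decoded solution (higher is better)."""
    x = decode(meme)
    overload = max(0.0, float(weights @ x) - capacity)
    return float(values @ x) - 10.0 * overload  # simple infeasibility penalty

# A plain (mu + lambda) evolution acting only on the d-dimensional meme
# vectors; the raw n_items-dimensional space is reached solely via decode().
mu, lam, sigma, generations = 20, 40, 0.3, 50
population = [rng.standard_normal(d) for _ in range(mu)]

for _ in range(generations):
    parents = [population[i] for i in rng.integers(0, mu, lam)]
    offspring = [p + sigma * rng.standard_normal(d) for p in parents]
    population = sorted(population + offspring, key=fitness, reverse=True)[:mu]

best = population[0]
print("Best penalized value found:", fitness(best))
print("Items selected:", int(decode(best).sum()), "of", n_items)
```

The design choice being illustrated is that the search operators never see the full solution space: variation and selection are confined to the compressed representation, so the cost per generation scales with the meme dimensionality rather than with the size of the inter-connected problem.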


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. School of Computer Science and Engineering, Nanyang Technological University, Singapore
  2. School of Computer Science and Engineering, Nanyang Technological University, Singapore
