
Automatically Modeling Hybrid Evolutionary Algorithms from Past Executions

  • Santiago Muelas
  • José-María Peña
  • Antonio LaTorre
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6024)

Abstract

The selection of the most appropriate Evolutionary Algorithm for a given optimization problem is a difficult task. Hybrid Evolutionary Algorithms are a promising alternative for dealing with this problem. By combining different heuristic optimization approaches, it is possible to profit from the strengths of the best approach while avoiding the limitations of the others. Nowadays, there is active research on the design of dynamic or adaptive hybrid algorithms. However, little research has been done on automatically learning the best hybridization strategy. This paper proposes a mechanism to learn such a strategy based on the analysis of the results of past executions. The proposed algorithm has been evaluated on a well-known benchmark on continuous optimization. The obtained results suggest that the proposed approach is able to learn very promising hybridization strategies.
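The core idea described above, i.e. using records of past executions to decide which heuristic to favor, can be illustrated with a minimal sketch. This is not the paper's actual learning mechanism; the function and heuristic names are illustrative assumptions, and the "strategy" here is reduced to picking the heuristic with the best average observed improvement:

```python
# Hypothetical sketch (not the paper's algorithm): learn a simple
# hybridization preference from past-execution records, where each
# record pairs a heuristic name with the fitness improvement it achieved.

def learn_strategy(history):
    """Return the heuristic with the highest mean improvement in `history`.

    history: list of (heuristic_name, improvement) tuples collected
    from previous runs of the hybrid algorithm.
    """
    totals, counts = {}, {}
    for name, improvement in history:
        totals[name] = totals.get(name, 0.0) + improvement
        counts[name] = counts.get(name, 0) + 1
    # Pick the heuristic with the best average past performance.
    return max(totals, key=lambda n: totals[n] / counts[n])

# Example usage with fabricated records:
history = [
    ("differential_evolution", 0.8),
    ("genetic_algorithm", 0.5),
    ("differential_evolution", 0.6),
    ("local_search", 0.9),
]
print(learn_strategy(history))  # prints "local_search"
```

A real adaptive hybrid would refine this in many ways (per-problem features, credit assignment over time, exploration of under-tested heuristics), but the sketch captures the basic loop of mining past results to bias future hybridization choices.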



Copyright information

© Springer-Verlag Berlin Heidelberg 2010

Authors and Affiliations

  • Santiago Muelas (1)
  • José-María Peña (1)
  • Antonio LaTorre (1)

  1. DATSI, Facultad de Informática, Universidad Politécnica de Madrid, Spain
