
Not All Parents Are Equal for MO-CMA-ES

  • Ilya Loshchilov
  • Marc Schoenauer
  • Michèle Sebag
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6576)

Abstract

The steady-state variants of the Multi-Objective Covariance Matrix Adaptation Evolution Strategy (SS-MO-CMA-ES) generate one offspring from a uniformly selected parent. This paper investigates alternative parent selection operators for SS-MO-CMA-ES. These operators rely on multi-objective rewards that estimate the expected survival of the offspring and its hypervolume contribution. Two selection modes, one based on tournaments and one inspired by the Multi-Armed Bandit framework, are built on top of these rewards. An extensive experimental validation demonstrates the merits of these new selection operators on unimodal multi-objective problems.
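The two selection modes sketched in the abstract can be illustrated as follows. This is a minimal sketch, not the authors' implementation: it assumes each parent carries a cumulative reward and a selection count (the names `rewards`, `counts`, and the UCB1 exploration constant `c` are illustrative), and it shows a standard UCB1 rule alongside a k-tournament on mean rewards.

```python
import math
import random

def ucb_select(rewards, counts, total, c=1.0):
    """Multi-Armed-Bandit-style parent selection (UCB1 rule).

    rewards[i]: cumulative reward collected by parent i
    counts[i]:  number of times parent i was selected
    total:      total number of selections so far
    Returns the index maximizing mean reward + exploration bonus.
    """
    best, best_score = None, -math.inf
    for i, (r, n) in enumerate(zip(rewards, counts)):
        if n == 0:
            return i  # try every parent at least once
        score = r / n + c * math.sqrt(2.0 * math.log(total) / n)
        if score > best_score:
            best, best_score = i, score
    return best

def tournament_select(rewards, k=2):
    """k-tournament: draw k distinct parents uniformly, keep the
    one with the highest reward."""
    contenders = random.sample(range(len(rewards)), k)
    return max(contenders, key=lambda i: rewards[i])
```

Both operators replace the uniform draw of the original steady-state scheme: the tournament exploits the current reward estimates greedily among a small random subset, while the bandit rule trades off exploiting high-reward parents against re-sampling rarely tried ones.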

Keywords

Pareto Front · Multiobjective Optimization · Premature Convergence · Parent Selection · Decision Space



Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Ilya Loshchilov 1, 2
  • Marc Schoenauer 1, 2
  • Michèle Sebag 1, 2
  1. TAO Project-team, INRIA Saclay - Île-de-France, France
  2. Laboratoire de Recherche en Informatique (UMR CNRS 8623), Université Paris-Sud, Orsay Cedex, France
