
Bayesian Optimization Algorithms for Multi-objective Optimization

  • Marco Laumanns
  • Jiri Ocenasek
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2439)

Abstract

In recent years, several researchers have concentrated on using probabilistic models in evolutionary algorithms. These Estimation of Distribution Algorithms (EDAs) incorporate methods for automatically learning correlations between the variables of the encoded solutions. Sampling new individuals from such a probabilistic model respects these mutual dependencies, so that, in contrast to classical recombination operators, disruption of important building blocks is avoided. The goal of this paper is to investigate the usefulness of this concept in multi-objective optimization, where the aim is to approximate the set of Pareto-optimal solutions. We integrate the model-building and sampling techniques of a special EDA, the Bayesian Optimization Algorithm based on binary decision trees, into an evolutionary multi-objective optimizer with a special selection scheme. The behavior of the resulting Bayesian Multi-objective Optimization Algorithm (BMOA) is empirically investigated on the multi-objective knapsack problem.
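To make the loop the abstract describes concrete, the following minimal Python sketch shows the general EDA cycle on a bi-objective 0/1 knapsack: select promising individuals, fit a probabilistic model to them, and sample offspring from that model instead of applying crossover and mutation. This is an illustration under stated assumptions, not the paper's implementation: per-bit marginal frequencies stand in for the decision-tree Bayesian model, plain Pareto nondominated selection stands in for the paper's special selection scheme, and all knapsack data below are invented.

    # Minimal EDA-style multi-objective loop (a sketch, not the paper's BMOA).
    # Assumptions: a univariate marginal model replaces the decision-tree
    # Bayesian model, and plain nondominated selection replaces the paper's
    # special selection scheme. Problem data are illustrative only.
    import random

    N_ITEMS, POP, GEN = 20, 60, 50
    random.seed(1)
    weights = [random.randint(1, 10) for _ in range(N_ITEMS)]
    profit1 = [random.randint(1, 10) for _ in range(N_ITEMS)]
    profit2 = [random.randint(1, 10) for _ in range(N_ITEMS)]
    CAPACITY = sum(weights) // 2

    def evaluate(x):
        """Two knapsack profits to maximise; infeasible solutions score zero."""
        if sum(w for w, b in zip(weights, x) if b) > CAPACITY:
            return (0, 0)
        return (sum(p for p, b in zip(profit1, x) if b),
                sum(p for p, b in zip(profit2, x) if b))

    def dominates(a, b):
        """Pareto dominance (maximisation): a >= b everywhere, > somewhere."""
        return (all(u >= v for u, v in zip(a, b))
                and any(u > v for u, v in zip(a, b)))

    def nondominated(pop):
        """Keep individuals whose objective vector no other one dominates."""
        fits = [evaluate(x) for x in pop]
        return [x for x, f in zip(pop, fits)
                if not any(dominates(g, f) for g in fits)]

    def build_model(parents):
        """Stand-in model: per-bit marginal frequencies. The actual BMOA
        learns binary decision trees that capture dependencies between bits."""
        n = len(parents)
        return [sum(x[i] for x in parents) / n for i in range(N_ITEMS)]

    def sample(model):
        """Draw one offspring bit string from the fitted model."""
        return [1 if random.random() < p else 0 for p in model]

    pop = [[random.randint(0, 1) for _ in range(N_ITEMS)] for _ in range(POP)]
    for _ in range(GEN):
        parents = nondominated(pop)            # selected set drives the model
        model = build_model(parents)
        offspring = [sample(model) for _ in range(POP)]
        pop = nondominated(parents + offspring)[:POP]

    print("approximated Pareto front:", sorted({evaluate(x) for x in pop}))

The essential design choice the sketch shows is that sampling from a fitted model replaces recombination entirely; the paper's contribution lies in the two parts simplified away here, namely the decision-tree Bayesian model and the selection scheme that maintains both convergence and diversity.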

Keywords

Bayesian Network, Multiobjective Optimization, Knapsack Problem, Dependency Graph, Split Node



Copyright information

© Springer-Verlag Berlin Heidelberg 2002

Authors and Affiliations

  • Marco Laumanns (1)
  • Jiri Ocenasek (2)
  1. Computer Engineering and Networks Laboratory, ETH Zürich, Zürich
  2. Faculty of Information Technology, VUT Brno, Brno
