Bounded Pareto Archiving: Theory and Practice

  • Joshua Knowles
  • David Corne
Conference paper
Part of the Lecture Notes in Economics and Mathematical Systems book series (LNE, volume 535)

Abstract

We consider algorithms for the sequential storage, or 'archiving', of solutions discovered during a Pareto optimization procedure. We focus particularly on the case when an a priori hard bound, N, is placed on the capacity of the archive of solutions. We show that for general sequences of points, no algorithm that respects the bound can maintain an ideal minimum number of points in the archive. One consequence of this, for example, is that a strictly bounded archive cannot be expected in general to contain the Pareto front of the points generated so far (even where this set is smaller than the archive bound). Using the notion of an ideal ε-approximation set, namely the subset (of size ≤ N) of a whole sequence of points which minimizes ε, we also show that no archiving algorithm can attain this ideal for general sequences. This means that in general no archiving algorithm can be expected to maintain an 'optimal' representation of the Pareto front when the size of that set is larger than the archive bound. Furthermore, if the ranges of the Pareto front of the sequence are not known a priori, no algorithm that certifies (using its own internal epsilon parameter ε_arc) that it maintains an ε-approximate set of the sequence can maintain ε_arc within any fixed multiple of the minimal (ideal) ε value. In a case study we go on to demonstrate several scenarios where the ε-based archiving algorithms proposed by Laumanns et al., which perform well when the archive's capacity is not a priori bounded, perform poorly when ε is adapted 'on the fly' in order to respect a capacity constraint. For each scenario we demonstrate that an adaptive grid archiving (AGA) algorithm (which does not assure a formally guaranteed approximation level) performs comparatively well in practice.
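
To make the archiving setting concrete, the following is a minimal Python sketch of a single ε-Pareto archive update in the spirit of the scheme of Laumanns et al. [8, 9], assuming multiplicative ε-dominance on strictly positive objectives to be minimized. The function names and the dict-based box store are illustrative choices of ours, not code from the paper. Note that this scheme bounds the archive size as a function of ε and the objective ranges; it does not by itself enforce a hard capacity N, which is exactly why ε must be adapted 'on the fly' in the bounded setting the paper studies.

```python
import math

def box_index(f, eps):
    # Box index vector under multiplicative epsilon-dominance
    # (minimization; objective values assumed strictly positive).
    return tuple(math.floor(math.log(v) / math.log(1.0 + eps)) for v in f)

def dominates(a, b):
    # True if vector a Pareto-dominates vector b (minimization).
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update(archive, f, eps):
    # One sequential update of an epsilon-Pareto archive.
    # archive: dict mapping box index vectors to one representative point.
    # Returns True if f was accepted into the archive.
    b = box_index(f, eps)
    # Reject f if some occupied box dominates f's box.
    if any(b2 != b and all(x <= y for x, y in zip(b2, b)) for b2 in archive):
        return False
    # Within an occupied box, keep the incumbent unless f dominates it.
    if b in archive and not dominates(f, archive[b]):
        return False
    # Accept f: discard boxes that b now dominates, then store f.
    for b2 in [b2 for b2 in archive
               if b2 != b and all(x <= y for x, y in zip(b, b2))]:
        del archive[b2]
    archive[b] = f
    return True

# Example: feed a short sequence of 2-objective points through the archive.
archive = {}
for point in [(1.0, 4.0), (2.0, 2.0), (1.05, 3.9), (0.5, 5.0)]:
    update(archive, point, eps=0.1)
print(sorted(archive.values()))
```

In this sketch the point (1.05, 3.9) is rejected because it falls into the same box as (1.0, 4.0) without dominating it: the box resolution set by ε, rather than a hard count N, is what limits the archive.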

Keywords

Pareto Front, Input Sequence, Pareto Optimal Solution, Objective Space, Pareto Optimal Point


References

  1. David W. Corne, Nick R. Jerram, Joshua D. Knowles, and Martin J. Oates. PESA-II: Region-based selection in evolutionary multiobjective optimization. In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO 2001), pages 283–290, San Francisco, California, 2001. Morgan Kaufmann Publishers.
  2. Mark Fleischer. The measure of Pareto optima: Applications to multiobjective metaheuristics. In Carlos M. Fonseca et al., editors, Evolutionary Multi-Criterion Optimization, Second International Conference, EMO 2003, number 2632 in LNCS, pages 519–533. Springer, 2003.
  3. Michael Pilegaard Hansen and Andrzej Jaszkiewicz. Evaluating the quality of approximations to the non-dominated set. Technical Report IMM-REP-1998-7, Technical University of Denmark, March 1998.
  4. Joshua Knowles. Pareto archiving using the Lebesgue measure: Empirical observations. Technical report, IRIDIA, Université Libre de Bruxelles, Belgium, May 2003.
  5. Joshua Knowles and David Corne. Properties of an adaptive archiving algorithm for storing nondominated vectors. IEEE Transactions on Evolutionary Computation, 7(2):100–116, April 2003.
  6. Joshua D. Knowles. Local-Search and Hybrid Evolutionary Algorithms for Pareto Optimization. PhD thesis, The University of Reading, Department of Computer Science, Reading, UK, January 2002.
  7. Marco Laumanns, Lothar Thiele, Kalyanmoy Deb, and Eckart Zitzler. On the convergence and diversity-preservation properties of multi-objective evolutionary algorithms. Technical Report 108, Computer Engineering and Networks Laboratory (TIK), Swiss Federal Institute of Technology (ETH) Zurich, Gloriastrasse 35, CH-8092 Zurich, Switzerland, May 2001.
  8. Marco Laumanns, Lothar Thiele, Kalyanmoy Deb, and Eckart Zitzler. Combining convergence and diversity in evolutionary multiobjective optimization. Evolutionary Computation, 10(3):263–282, Fall 2002.
  9. Marco Laumanns, Lothar Thiele, Eckart Zitzler, and Kalyanmoy Deb. Archiving with guaranteed convergence and diversity in multi-objective optimization. In W. B. Langdon et al., editors, Proceedings of the Genetic and Evolutionary Computation Conference (GECCO 2002), pages 439–447, San Francisco, California, July 2002. Morgan Kaufmann Publishers.
  10. Günter Rudolph and Alexandru Agapie. Convergence properties of some multi-objective evolutionary algorithms. In Proceedings of the 2000 Congress on Evolutionary Computation, volume 2, pages 1010–1016, Piscataway, New Jersey, July 2000. IEEE Press.
  11. Eckart Zitzler. Evolutionary Algorithms for Multiobjective Optimization: Methods and Applications. PhD thesis, Swiss Federal Institute of Technology (ETH), Zurich, Switzerland, November 1999.
  12. Eckart Zitzler, Marco Laumanns, Lothar Thiele, Carlos M. Fonseca, and Viviane Grunert da Fonseca. Why quality assessment of multiobjective optimizers is difficult. In W. B. Langdon et al., editors, Proceedings of the Genetic and Evolutionary Computation Conference (GECCO 2002), pages 666–673, San Francisco, California, July 2002. Morgan Kaufmann Publishers.

Copyright information

© Springer-Verlag Berlin Heidelberg 2004

Authors and Affiliations

  • Joshua Knowles (1)
  • David Corne (2)
  1. IRIDIA, CP 194/6, Université Libre de Bruxelles, Brussels, Belgium
  2. School of Systems Engineering, University of Reading, UK