Permutation Optimization by Iterated Estimation of Random Keys Marginal Product Factorizations

  • Peter A. N. Bosman
  • Dirk Thierens
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2439)


In IDEAs (Iterated Density-Estimation Evolutionary Algorithms), the probability distribution of a selection of solutions is estimated each generation. From this probability distribution, new solutions are drawn. Through the probability distribution, various relations between the problem variables can be exploited to achieve efficient optimization. For permutation optimization, real-valued probability distributions have so far only been applied to a real-valued encoding of permutations. In this paper, we present two approaches to estimating marginal product factorized probability distributions directly in the space of permutations. The estimated probability distribution is used to identify crossover positions in a real-valued encoding of permutations. On deceptive permutation problems of a bounded order of difficulty, the resulting evolutionary algorithm (EA) optimizes more efficiently and scales better than when real-valued probability distributions are used.
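The two ingredients named above, the random keys encoding and a marginal product factorization guiding crossover, can be illustrated with a minimal sketch. This is not the authors' implementation; the function names and the fixed 50/50 block inheritance are hypothetical simplifications. A random-keys genotype is a real-valued vector whose sort order encodes a permutation (Bean's encoding), and a marginal product model partitions the positions into disjoint clusters that are inherited as indivisible blocks:

```python
import random

def decode_random_keys(keys):
    # A random-keys genotype is a vector of reals; sorting the index
    # set by key value yields the encoded permutation.
    return sorted(range(len(keys)), key=lambda i: keys[i])

def mpm_crossover(parent_a, parent_b, clusters):
    # A marginal product model partitions the variable indices into
    # disjoint clusters; each cluster is copied as a whole block from
    # one parent, so linked positions are never split up by crossover.
    child = list(parent_a)
    for cluster in clusters:
        if random.random() < 0.5:
            for i in cluster:
                child[i] = parent_b[i]
    return child

# Example: 5 keys; an MPM linking positions {0, 1} and {2, 3, 4}.
keys = [0.42, 0.17, 0.88, 0.05, 0.63]
perm = decode_random_keys(keys)  # -> [3, 1, 0, 4, 2]
```

Because the key vector over-specifies the permutation (many real vectors decode to the same ordering), learning the factorization over the permutation space directly, rather than over the keys, is what the paper's two estimation approaches address.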


Keywords: Crossover Operator, Iterate Estimation, Index Cluster, Swap Operation, Binary Random Variable





Copyright information

© Springer-Verlag Berlin Heidelberg 2002

Authors and Affiliations

  • Peter A. N. Bosman (1)
  • Dirk Thierens (1)
  1. Institute of Information and Computing Sciences, Utrecht University, Utrecht, The Netherlands
