Multi-objective Optimization with Joint Probabilistic Modeling of Objectives and Variables

  • Hossein Karshenas
  • Roberto Santana
  • Concha Bielza
  • Pedro Larrañaga
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6576)


Information about objective values can be incorporated into evolutionary algorithms based on probabilistic modeling in order to capture the relationships between objectives and variables. This paper investigates the effect of jointly modeling objective and variable information on the performance of an estimation of distribution algorithm for multi-objective optimization. A joint Gaussian Bayesian network over objectives and variables is learnt and then sampled, using the best objective values obtained so far as evidence. Experimental results on a set of multi-objective functions, in comparison with two other competitive algorithms, are presented and discussed.


Keywords: Multi-objective optimization · Estimation of distribution algorithms · Joint probabilistic modeling
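The learn-then-sample loop described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: a single multivariate Gaussian over the concatenated (objectives, variables) vector stands in for the learnt Gaussian Bayesian network, the bi-objective test function and the sum-of-objectives selection rule are illustrative assumptions, and all parameter values are arbitrary. The key step the paper describes is kept: the joint model is conditioned on the best objective values found so far, and new candidate solutions are sampled from the resulting conditional distribution over the variables.

```python
import numpy as np

rng = np.random.default_rng(0)

def evaluate(X):
    # Hypothetical bi-objective problem (two shifted sphere functions);
    # stands in for the benchmark functions used in the paper.
    f1 = np.sum(X ** 2, axis=1)
    f2 = np.sum((X - 2.0) ** 2, axis=1)
    return np.column_stack([f1, f2])

def condition_gaussian(mean, cov, evid_idx, evid_val, free_idx):
    # Standard Gaussian conditioning: distribution of the "free" block
    # (variables) given observed values for the "evidence" block (objectives).
    m_e, m_f = mean[evid_idx], mean[free_idx]
    S_ee = cov[np.ix_(evid_idx, evid_idx)]
    S_fe = cov[np.ix_(free_idx, evid_idx)]
    S_ff = cov[np.ix_(free_idx, free_idx)]
    K = S_fe @ np.linalg.pinv(S_ee)
    cond_mean = m_f + K @ (evid_val - m_e)
    cond_cov = S_ff - K @ S_fe.T
    return cond_mean, cond_cov

n_vars, n_obj, pop = 5, 2, 200
X = rng.uniform(-5.0, 5.0, size=(pop, n_vars))
for gen in range(30):
    F = evaluate(X)
    joint = np.column_stack([F, X])            # objectives first, then variables
    # Keep the better half by the sum of objectives (a crude stand-in for
    # the Pareto-based selection a multi-objective EDA would use).
    sel = joint[np.argsort(F.sum(axis=1))[: pop // 2]]
    mean = sel.mean(axis=0)
    cov = np.cov(sel, rowvar=False) + 1e-6 * np.eye(n_obj + n_vars)  # regularize
    # Use the best objective values seen in this generation as evidence
    # and sample new variable vectors from the conditional Gaussian.
    evidence = F.min(axis=0)
    m, C = condition_gaussian(mean, cov,
                              evid_idx=np.arange(n_obj),
                              evid_val=evidence,
                              free_idx=np.arange(n_obj, n_obj + n_vars))
    C = (C + C.T) / 2.0                        # symmetrize against round-off
    X = rng.multivariate_normal(m, C, size=pop)

print(evaluate(X).min(axis=0))  # best value found for each objective
```

Conditioning on both per-objective minima simultaneously pulls the sampled population toward the region of the front between the two single-objective optima; a full Gaussian Bayesian network would additionally exploit the sparsity of the learnt structure.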


References


  1. Abraham, A., Jain, L., Goldberg, R. (eds.): Evolutionary Multiobjective Optimization: Theoretical Advances and Applications. Advanced Information and Knowledge Processing. Springer, Berlin (2005)
  2. Bielza, C., Li, G., Larrañaga, P.: Multi-dimensional classification with Bayesian networks. Technical Report UPM-FI/DIA/2010-1, Artificial Intelligence Department, Technical University of Madrid, Madrid, Spain (2010)
  3. Brockhoff, D., Zitzler, E.: Dimensionality reduction in multiobjective optimization: The minimum objective subset problem. In: Waldmann, K.-H., Stocker, U.M. (eds.) Operations Research Proceedings 2006, pp. 423–429. Springer, Berlin (2007)
  4. Brockhoff, D., Zitzler, E.: Objective reduction in evolutionary multiobjective optimization: Theory and applications. Evolutionary Computation 17(2), 135–166 (2009)
  5. Buntine, W.: Theory refinement on Bayesian networks. In: Proceedings of the Seventh Conference on Uncertainty in Artificial Intelligence, pp. 52–60. Morgan Kaufmann, San Francisco (1991)
  6. Coello, C.: An updated survey of evolutionary multiobjective optimization techniques: State of the art and future trends. In: Proceedings of the IEEE Congress on Evolutionary Computation (CEC 1999), vol. 1, pp. 3–13 (1999)
  7. Deb, K., Pratap, A., Agarwal, S., Meyarivan, T.: A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation 6(2), 182–197 (2002)
  8. Deb, K., Thiele, L., Laumanns, M., Zitzler, E.: Scalable multi-objective optimization test problems. In: Proceedings of the IEEE Congress on Evolutionary Computation (CEC 2002), vol. 1, pp. 825–830 (2002)
  9. Deb, K., Thiele, L., Laumanns, M., Zitzler, E.: Scalable test problems for evolutionary multiobjective optimization. In: Coello Coello, C.A., Hernández Aguirre, A., Zitzler, E. (eds.) EMO 2005. LNCS, vol. 3410, pp. 105–145. Springer, Heidelberg (2005)
  10. Garza-Fabre, M., Pulido, G., Coello, C.: Ranking methods for many-objective optimization. In: Aguirre, A.H., Borja, R.M., García, C.A.R. (eds.) MICAI 2009. LNCS, vol. 5845, pp. 633–645. Springer, Heidelberg (2009)
  11. Huband, S., Barone, L., While, L., Hingston, P.: A scalable multi-objective test problem toolkit. In: Coello Coello, C.A., Hernández Aguirre, A., Zitzler, E. (eds.) EMO 2005. LNCS, vol. 3410, pp. 280–295. Springer, Heidelberg (2005)
  12. Huband, S., Hingston, P., Barone, L., While, L.: A review of multiobjective test problems and a scalable test problem toolkit. IEEE Transactions on Evolutionary Computation 10(5), 477–506 (2006)
  13. Ishibuchi, H., Tsukamoto, N., Hitotsuyanagi, Y., Nojima, Y.: Effectiveness of scalability improvement attempts on the performance of NSGA-II for many-objective problems. In: Proceedings of the 10th Annual Conference on Genetic and Evolutionary Computation (GECCO 2008), pp. 649–656. ACM, New York (2008)
  14. Koller, D., Friedman, N.: Probabilistic Graphical Models: Principles and Techniques. Adaptive Computation and Machine Learning. The MIT Press, Cambridge (2009)
  15. Larrañaga, P., Etxeberria, R., Lozano, J., Peña, J.: Optimization in continuous domains by learning and simulation of Gaussian networks. In: Wu, A. (ed.) Proceedings of the 2000 Genetic and Evolutionary Computation Conference (GECCO 2000) Workshop Program, pp. 201–204. Morgan Kaufmann, San Francisco (2000)
  16. Larrañaga, P., Lozano, J. (eds.): Estimation of Distribution Algorithms: A New Tool for Evolutionary Computation. Kluwer Academic Publishers, Dordrecht (2001)
  17. Lauritzen, S.L.: Propagation of probabilities, means, and variances in mixed graphical association models. Journal of the American Statistical Association 87(420), 1098–1108 (1992)
  18. López Jaimes, A., Coello Coello, C.A., Chakraborty, D.: Objective reduction using a feature selection technique. In: Proceedings of the 10th Annual Conference on Genetic and Evolutionary Computation (GECCO 2008), pp. 673–680. ACM, New York (2008)
  19. Lozano, J., Larrañaga, P., Inza, I., Bengoetxea, E. (eds.): Towards a New Evolutionary Computation: Advances on Estimation of Distribution Algorithms. Studies in Fuzziness and Soft Computing, vol. 192. Springer, Heidelberg (2006)
  20. Martí, L., García, J., Antonio, B., Coello, C.A., Molina, J.: On current model-building methods for multi-objective estimation of distribution algorithms: Shortcomings and directions for improvement. Technical Report GIAA2010E001, Department of Informatics, Universidad Carlos III de Madrid, Madrid, Spain (2010)
  21. Miquélez, T., Bengoetxea, E., Larrañaga, P.: Evolutionary computation based on Bayesian classifiers. International Journal of Applied Mathematics and Computer Science 14(3), 335–350 (2004)
  22. Miquélez, T., Bengoetxea, E., Larrañaga, P.: Evolutionary Bayesian classifier-based optimization in continuous domains. In: Wang, T.-D., Li, X., Chen, S.-H., Wang, X., Abbass, H.A., Iba, H., Chen, G., Yao, X. (eds.) SEAL 2006. LNCS, vol. 4247, pp. 529–536. Springer, Heidelberg (2006)
  23. Mühlenbein, H., Paaß, G.: From recombination of genes to the estimation of distributions I. Binary parameters. In: Ebeling, W., Rechenberg, I., Voigt, H.-M., Schwefel, H.-P. (eds.) PPSN 1996. LNCS, vol. 1141, pp. 178–187. Springer, Heidelberg (1996)
  24. Okabe, T., Jin, Y., Olhofer, M., Sendhoff, B.: On test functions for evolutionary multi-objective optimization. In: Yao, X., Burke, E., Lozano, J.A., Smith, J., Merelo-Guervós, J.J., Bullinaria, J.A., Rowe, J., Tino, P., Kabán, A., Schwefel, H.-P. (eds.) PPSN 2004. LNCS, vol. 3242, pp. 792–802. Springer, Heidelberg (2004)
  25. Pelikan, M., Sastry, K., Cantú-Paz, E. (eds.): Scalable Optimization via Probabilistic Modeling: From Algorithms to Applications. SCI. Springer, Heidelberg (2006)
  26. Pelikan, M., Sastry, K., Goldberg, D.: Multiobjective estimation of distribution algorithms. In: Pelikan et al. (eds.) [25], pp. 223–248
  27. Rasmussen, C.E., Williams, C.K.I.: Gaussian Processes for Machine Learning. Adaptive Computation and Machine Learning. The MIT Press, Cambridge (2005)
  28. Santana, R., Bielza, C., Larrañaga, P., Lozano, J.A., Echegoyen, C., Mendiburu, A., Armañanzas, R., Shakya, S.: Mateda-2.0: Estimation of distribution algorithms in MATLAB. Journal of Statistical Software 35(7), 1–30 (2010)
  29. Sbalzarini, I., Mueller, S., Koumoutsakos, P.: Multiobjective optimization using evolutionary algorithms. In: Proceedings of the 2000 Summer Program of Studying Turbulence Using Numerical Simulation Databases–VIII, vol. 1, pp. 63–74. Center for Turbulence Research (2000)
  30. Schäfer, J., Strimmer, K.: A shrinkage approach to large-scale covariance matrix estimation and implications for functional genomics. Statistical Applications in Genetics and Molecular Biology 4(1) (2005)
  31. Schmidt, M., Niculescu-Mizil, A., Murphy, K.: Learning graphical model structure using L1-regularization paths. In: Proceedings of the 22nd National Conference on Artificial Intelligence (AAAI 2007), vol. 2, pp. 1278–1283. AAAI Press, Menlo Park (2007)
  32. Thierens, D., Bosman, P.: Multi-objective mixture-based iterated density estimation evolutionary algorithms. In: Spector, L., Goodman, E.D., Wu, A., Langdon, W.B., Voigt, H.-M., Gen, M., Sen, S., Dorigo, M., Pezeshk, S., Garzon, M.H., Burke, E. (eds.) Proceedings of the Genetic and Evolutionary Computation Conference (GECCO 2001), pp. 663–670. Morgan Kaufmann, San Francisco (2001)
  33. Zhang, Q., Zhou, A., Jin, Y.: RM-MEDA: A regularity model-based multiobjective estimation of distribution algorithm. IEEE Transactions on Evolutionary Computation 12(1), 41–63 (2008)
  34. Zitzler, E., Deb, K., Thiele, L.: Comparison of multiobjective evolutionary algorithms: Empirical results. Evolutionary Computation 8(2), 173–195 (2000)

Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Hossein Karshenas (1)
  • Roberto Santana (1)
  • Concha Bielza (1)
  • Pedro Larrañaga (1)

  1. Computational Intelligence Group, School of Computer Science, Technical University of Madrid, Boadilla del Monte, Madrid, Spain