
Journal of Computer Science and Technology, Volume 27, Issue 5, pp. 1077–1090

Exploiting Bivariate Dependencies to Speedup Structure Learning in Bayesian Optimization Algorithm

  • Amin Nikanjam
  • Adel Rahmani
Regular Paper

Abstract

Bayesian optimization algorithm (BOA) is one of the successful and widely used estimation of distribution algorithms (EDAs), which have been employed to solve different optimization problems. In EDAs, a model that encodes interactions among problem variables is learned from the selected population. New individuals are generated by sampling the model and are incorporated into the population. Different probabilistic models have been used in EDAs to capture these interactions; the Bayesian network (BN), a well-known graphical model, is the one used in BOA. Learning a proper model in EDAs, and particularly in BOA, is known to be a computationally expensive task, and different methods have been proposed in the literature to reduce the complexity of model building. This paper employs bivariate dependencies to learn accurate BNs in BOA efficiently. The proposed approach extracts the bivariate dependencies using an appropriate pairwise interaction-detection metric. Because the structure of the underlying problem is static, these dependencies can be reused in each generation of BOA to learn an accurate network, reducing the computational cost of model building dramatically. The algorithm is evaluated on a variety of optimization problems. The experimental results show that the proposed approach efficiently finds the optimum of problems with different types of interactions, and significant speedups are observed in the model building procedure.
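To make the mechanism concrete: the abstract does not specify which pairwise interaction-detection metric the authors use, so the following is a minimal illustrative sketch only, assuming binary variables and pairwise mutual information as the metric. All function names are hypothetical, not the authors' implementation; the point is that a dependency matrix computed once can restrict each variable's candidate parent set, so the per-generation greedy BN search examines far fewer edges.

    import numpy as np
    from itertools import combinations

    def pairwise_mutual_information(pop):
        """Estimate mutual information between every pair of binary variables
        in the selected population (rows = individuals, cols = variables)."""
        n, d = pop.shape
        mi = np.zeros((d, d))
        for i, j in combinations(range(d), 2):
            # Empirical joint distribution of variables i and j.
            joint = np.zeros((2, 2))
            for a in range(2):
                for b in range(2):
                    joint[a, b] = np.mean((pop[:, i] == a) & (pop[:, j] == b))
            px = joint.sum(axis=1)
            py = joint.sum(axis=0)
            val = 0.0
            for a in range(2):
                for b in range(2):
                    if joint[a, b] > 0:
                        val += joint[a, b] * np.log(joint[a, b] / (px[a] * py[b]))
            mi[i, j] = mi[j, i] = val
        return mi

    def candidate_parents(mi, threshold):
        """Restrict each variable's candidate parents to variables whose
        pairwise dependency exceeds a threshold; a greedy BN search that
        only considers edges inside these sets is where the speedup
        would come from."""
        d = mi.shape[0]
        return {i: [j for j in range(d) if j != i and mi[i, j] > threshold]
                for i in range(d)}

    # Example usage on a random selected population (50 individuals, 8 variables).
    rng = np.random.default_rng(0)
    pop = rng.integers(0, 2, size=(50, 8))
    parents = candidate_parents(pairwise_mutual_information(pop), threshold=0.05)

Because the problem structure is static, such a dependency matrix would only need to be computed once (or refined occasionally) rather than rebuilt from scratch in every generation.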

Keywords

evolutionary computation; Bayesian optimization algorithm; Bayesian network; model building; bivariate interaction


Supplementary material

11390_2012_1285_MOESM1_ESM.docx (15.7 kB)


Copyright information

© Springer Science+Business Media New York & Science Press, China 2012

Authors and Affiliations

  1. School of Computer Engineering, Iran University of Science and Technology, Tehran, Iran
