Efficiently Learning Bayesian Network Structures Based on the B&B Strategy: A Theoretical Analysis

  • Conference paper

Advanced Methodologies for Bayesian Networks (AMBN 2015)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 9505)

Abstract

This paper addresses the problem of efficiently finding an optimal Bayesian network structure with respect to maximizing the posterior probability and minimizing the description length. In particular, we focus on the branch and bound (B&B) strategy to save computational effort. To obtain an efficient search, a larger (tighter) lower bound on the score is required when we seek its minimum. We generalize an existing lower bound for the BDeu (Bayesian Dirichlet equivalent uniform) score (de Campos and Ji, 2011) to one for the general BD (Bayesian Dirichlet) score, and mathematically prove that the number of variables in each parent set cannot be bounded when maximizing the posterior probability.
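To make the role of the lower bound concrete, the sketch below illustrates generic branch-and-bound pruning over candidate parent sets of a single variable when a score is minimized: a subtree is skipped whenever a lower bound on every score reachable from it is at least the best score found so far, so a larger (tighter) bound prunes more of the search tree. This is only an illustration of the general strategy, not the paper's algorithm; the `score` and `lower_bound` callables are hypothetical placeholders standing in for a decomposable score (e.g., description length) and for a bound such as the one derived in this paper for the BD score.

```python
# Minimal sketch of branch-and-bound pruning over candidate parent sets for a
# single variable, assuming the score is MINIMIZED (e.g., description length).
# `score` and `lower_bound` are placeholders, not the paper's BD/BDeu bounds:
# lower_bound(ps, rest) must not exceed the score of `ps` or of any superset
# of `ps` formed by adding candidates from `rest`.

from typing import Callable, FrozenSet, List, Tuple

def bb_best_parent_set(
    candidates: List[str],
    score: Callable[[FrozenSet[str]], float],
    lower_bound: Callable[[FrozenSet[str], List[str]], float],
) -> Tuple[FrozenSet[str], float]:
    best_set: FrozenSet[str] = frozenset()
    best_score = score(best_set)

    def search(chosen: FrozenSet[str], rest: List[str]) -> None:
        nonlocal best_set, best_score
        s = score(chosen)
        if s < best_score:
            best_set, best_score = chosen, s
        if not rest:
            return
        # Prune: if even the most optimistic completion of this branch cannot
        # beat the incumbent, skip the whole subtree.  A larger (tighter)
        # lower bound triggers this test more often.
        if lower_bound(chosen, rest) >= best_score:
            return
        head, tail = rest[0], rest[1:]
        search(chosen | {head}, tail)   # branch: include `head` as a parent
        search(chosen, tail)            # branch: exclude `head`

    search(frozenset(), list(candidates))
    return best_set, best_score


if __name__ == "__main__":
    # Toy score, purely illustrative: each parent costs 1.0, and each of the
    # two "useful" parents gives a fit gain of 1.5.
    useful = {"A", "B"}

    def score(ps: FrozenSet[str]) -> float:
        return len(ps) - 1.5 * len(ps & useful)

    def lower_bound(ps: FrozenSet[str], rest: List[str]) -> float:
        # Optimistic: assume every remaining candidate could still be useful.
        return score(ps) - 0.5 * len(rest)

    print(bb_best_parent_set(["A", "B", "C", "D"], score, lower_bound))
```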

Notes

  1. We denote \(X\perp \!\!\!\perp Y|Z\) if X and Y are independent given Z.

  2. The idea of using dynamic programming was introduced by A. P. Singh and A. W. Moore (2005) [10].

  3. At the same conference (Uncertainty in Artificial Intelligence 1993), Lam and Bacchus [20] presented another approach to MDL-based Bayesian network learning.

References

  1. Akaike, H.: Information theory and an extension of the maximum likelihood principle. In: 2nd International Symposium on Information Theory, Budapest, Hungary (1973)

  2. Buntine, W.: Theory refinement on Bayesian networks. In: Uncertainty in Artificial Intelligence, Los Angeles, CA, pp. 52–60 (1991)

  3. Cussens, J., Bartlett, M.: GOBNILP 1.6.2 User/Developer Manual, University of York (2015)

  4. Chickering, D.M., Meek, C., Heckerman, D.: Large-sample learning of Bayesian networks is NP-hard. In: Uncertainty in Artificial Intelligence, Acapulco, Mexico, pp. 124–133 (2003)

  5. Cooper, G.F., Herskovits, E.: A Bayesian method for the induction of probabilistic networks from data. Mach. Learn. 9(4), 309–347 (1992)

  6. de Campos, C.P., Ji, Q.: Efficient structure learning of Bayesian networks using constraints. J. Mach. Learn. Res. 12, 663–689 (2011)

  7. Krichevsky, R.E., Trofimov, V.K.: The performance of universal encoding. IEEE Trans. Inf. Theory IT-27(2), 199–207 (1981)

  8. Rissanen, J.: Modeling by shortest data description. Automatica 14, 465–471 (1978)

  9. Silander, T., Myllymaki, P.: A simple approach for finding the globally optimal Bayesian network structure. In: Uncertainty in Artificial Intelligence (2006)

  10. Singh, A.P., Moore, A.W.: Finding optimal Bayesian networks by dynamic programming. Research Showcase@CMU (2005)

  11. Spirtes, P., Glymour, C., Scheines, R.: Causation, Prediction and Search. Springer, Berlin (1993)

  12. Pearl, J.: Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference (Representation and Reasoning), 2nd edn. Morgan Kaufmann, San Mateo (1988)

  13. Suzuki, J.: Learning Bayesian belief networks based on the minimum description length principle: an efficient algorithm using the B&B technique. In: International Conference on Machine Learning, pp. 462–470 (1996)

  14. Suzuki, J.: A construction of Bayesian networks from databases based on an MDL principle. In: The Ninth Conference on Uncertainty in Artificial Intelligence, Washington, D.C., pp. 266–273 (1993)

  15. Suzuki, J.: The Bayesian Chow-Liu algorithm. In: Proceedings of the Sixth European Workshop on Probabilistic Graphical Models, Granada, Spain (2012)

  16. Suzuki, J.: Consistency of learning Bayesian network structures with continuous variables: an information theoretic approach. Entropy 17(8), 5752–5770 (2015)

  17. Tian, J.: A branch-and-bound algorithm for MDL learning Bayesian networks. In: Uncertainty in Artificial Intelligence, Palo Alto, pp. 580–588 (2000)

  18. Ueno, M.: Robust learning Bayesian networks for prior belief. In: Uncertainty in Artificial Intelligence, Corvallis, Oregon, pp. 698–707 (2011)

  19. Yuan, C., Malone, B.: Learning optimal Bayesian networks: a shortest path perspective. J. Artif. Intell. Res. 48, 23–65 (2013)

  20. Lam, W., Bacchus, F.: Using causal information and local measures to learn Bayesian networks. In: The Ninth Conference on Uncertainty in Artificial Intelligence, Washington, D.C., pp. 243–250 (1993)

Author information

Correspondence to Joe Suzuki.

Copyright information

© 2015 Springer International Publishing Switzerland

About this paper

Cite this paper

Suzuki, J. (2015). Efficiently Learning Bayesian Network Structures Based on the B&B Strategy: A Theoretical Analysis. In: Suzuki, J., Ueno, M. (eds) Advanced Methodologies for Bayesian Networks. AMBN 2015. Lecture Notes in Computer Science, vol 9505. Springer, Cham. https://doi.org/10.1007/978-3-319-28379-1_1

  • DOI: https://doi.org/10.1007/978-3-319-28379-1_1

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-28378-4

  • Online ISBN: 978-3-319-28379-1

  • eBook Packages: Computer Science (R0)
