
Learning Bayesian Networks Structure with Continuous Variables

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 4093)

Abstract

In this paper, a new method for learning the structure of Bayesian networks with continuous variables is proposed. The continuous variables are discretized by hybrid data clustering: the discrete values of a continuous variable are obtained using its parent-node structure and Gibbs sampling. The optimal cardinality of each discretized variable is found by applying the MDL principle to its Markov blanket, and the dependency relationships are refined by iteratively optimizing the Bayesian network structure.
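As a rough illustration of the cardinality-selection step described above, the sketch below scores candidate discretizations of a single continuous variable with an MDL-style criterion (negative log-likelihood of the histogram density plus a parameter penalty). This is a simplified stand-in, not the paper's method: the paper scores the Markov blanket and obtains discretizations via clustering and Gibbs sampling, whereas equal-width binning is assumed here, and `mdl_score` and `best_cardinality` are illustrative names.

```python
import numpy as np

def mdl_score(counts, widths, n):
    """MDL-style score for a k-bin histogram density: negative
    log-likelihood plus a (k - 1)/2 * log(n) parameter penalty."""
    k = len(counts)
    mask = counts > 0
    # Histogram density in bin i is counts[i] / (n * widths[i]).
    ll = np.sum(counts[mask] * np.log(counts[mask] / (n * widths[mask])))
    penalty = 0.5 * (k - 1) * np.log(n)
    return -ll + penalty  # lower is better

def best_cardinality(x, max_bins=8):
    """Pick the discretization cardinality (number of bins) with the
    lowest MDL score, scanning k = 2 .. max_bins."""
    n = len(x)
    best_k, best = 2, np.inf
    for k in range(2, max_bins + 1):
        counts, edges = np.histogram(x, bins=k)
        score = mdl_score(counts.astype(float), np.diff(edges), n)
        if score < best:
            best_k, best = k, score
    return best_k

rng = np.random.default_rng(0)
# Bimodal sample: a coarse 2-bin split fits it poorly, so the MDL
# trade-off should weigh extra bins against the parameter penalty.
x = np.concatenate([rng.normal(-2.0, 0.5, 500), rng.normal(2.0, 0.5, 500)])
print(best_cardinality(x))
```

In the paper's setting the same trade-off is evaluated over the variable's Markov blanket rather than its marginal distribution, so the chosen cardinality also reflects how well the discretization preserves dependencies with neighbouring nodes.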




Copyright information

© 2006 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Wang, SC., Li, XL., Tang, HY. (2006). Learning Bayesian Networks Structure with Continuous Variables. In: Li, X., Zaïane, O.R., Li, Z. (eds) Advanced Data Mining and Applications. ADMA 2006. Lecture Notes in Computer Science, vol 4093. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11811305_49


  • DOI: https://doi.org/10.1007/11811305_49

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-37025-3

  • Online ISBN: 978-3-540-37026-0

