Conditioning Graphs: Practical Structures for Inference in Bayesian Networks

  • Kevin Grant
  • Michael C. Horsch
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3809)

Abstract

Programmers who employ inference in Bayesian networks typically embed both the model and an inference engine in their application. Sophisticated inference engines require non-trivial amounts of space and are difficult to implement, which limits their use in applications that would otherwise benefit from probabilistic inference. This paper presents a system that minimizes the space requirement of the model. The inference engine is simple enough to avoid these space limitations and can be implemented easily in almost any environment. We show a fast, compact indexing structure that is linear in the size of the network; the additional space required to compute over the model is linear in the number of variables in the network.
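To make the abstract's tree-structured view of inference concrete, here is a minimal sketch (our illustration, not the authors' data structure) of conditioning-style evaluation: internal nodes condition on a variable, summing over its values and multiplying the results of their children, while leaf nodes look up conditional probability table entries. The class names and the toy two-variable network are assumptions for illustration only.

```python
class Leaf:
    """A CPT leaf: returns P(var | parents) under the current assignment."""
    def __init__(self, var, parents, table):
        self.var, self.parents, self.table = var, parents, table

    def query(self, asg):
        key = tuple(asg[p] for p in self.parents) + (asg[self.var],)
        return self.table[key]

class Internal:
    """An internal node: conditions on `var`, summing over its values
    unless the query already fixes that variable."""
    def __init__(self, var, values, children):
        self.var, self.values, self.children = var, values, children

    def query(self, asg):
        fixed = self.var in asg
        values = [asg[self.var]] if fixed else self.values
        total = 0.0
        for v in values:
            asg[self.var] = v          # condition on var = v
            prod = 1.0
            for child in self.children:
                prod *= child.query(asg)
            total += prod
        if not fixed:
            del asg[self.var]          # restore the assignment
        return total

# Toy network A -> B, with hypothetical probabilities.
pA = Leaf('A', [], {(0,): 0.6, (1,): 0.4})
pB = Leaf('B', ['A'], {(0, 0): 0.7, (0, 1): 0.3, (1, 0): 0.1, (1, 1): 0.9})
root = Internal('A', [0, 1], [pA, pB])

print(root.query({'B': 1}))  # P(B=1) = 0.6*0.3 + 0.4*0.9 = 0.54
```

The only working storage beyond the tree itself is the current assignment, one entry per conditioned variable, which reflects the abstract's claim that computation requires space linear in the number of variables.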

Keywords

Bayesian network, leaf node, internal node, inference engine, space requirement

Copyright information

© Springer-Verlag Berlin Heidelberg 2005

Authors and Affiliations

  • Kevin Grant (1)
  • Michael C. Horsch (1)

  1. Dept. of Computer Science, University of Saskatchewan, Saskatoon