
Classical Bayesian Theory and Networks

  • Walker H. Land Jr.
  • J. David Schaffer
Chapter

Abstract

By their very nature, Bayesian networks (BN) represent cause-effect relationships through their parent-child structure. One can supply observations of some events and then execute the network to obtain estimated probabilities of other events. Another significant advantage is that they make very good estimates in the presence of missing information: they produce the most accurate estimate possible from whatever information (or knowledge) is available, and they do so in a computationally efficient manner.
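
To make this concrete, the following minimal sketch (illustrative only, not taken from the chapter; the network, variable names, and probabilities are hypothetical) shows how the chain rule of conditional probabilities lets a tiny two-node network answer a query whether or not the parent variable has been observed:

    from itertools import product

    # Hypothetical two-node network Cloudy -> Rain with made-up probabilities.
    p_cloudy = {True: 0.4, False: 0.6}             # P(Cloudy)
    p_rain_given_cloudy = {True: 0.8, False: 0.1}  # P(Rain=true | Cloudy)

    def joint(c, r):
        """Chain rule: P(Cloudy=c, Rain=r) = P(Cloudy=c) * P(Rain=r | Cloudy=c)."""
        pr = p_rain_given_cloudy[c]
        return p_cloudy[c] * (pr if r else 1.0 - pr)

    def prob_rain(cloudy=None):
        """P(Rain=true | evidence); the evidence on Cloudy may be missing (None)."""
        c_values = [cloudy] if cloudy is not None else [True, False]
        numer = sum(joint(c, True) for c in c_values)
        denom = sum(joint(c, r) for c, r in product(c_values, [True, False]))
        return numer / denom

    print(round(prob_rain(cloudy=True), 2))  # 0.8  (Cloudy observed as evidence)
    print(round(prob_rain(), 2))             # 0.38 (Cloudy missing: marginalized out)

When the parent is unobserved, the query is still answered by summing the joint distribution over the missing variable, which is how a BN copes gracefully with incomplete information.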

This chapter comprises three sections. The first develops the basic probability concepts on which classical Bayes theory is based. The second develops Bayes' theorem and works through several examples of its use. The third addresses classical methods for constructing the Bayesian network structure.
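
As a preview of the kind of worked example the second section develops, here is a small illustration of Bayes' theorem in Python (the prevalence, sensitivity, and specificity figures are hypothetical, not taken from the chapter):

    # Posterior probability of disease given a positive test, via Bayes' theorem:
    # P(D | +) = P(+ | D) P(D) / [ P(+ | D) P(D) + P(+ | not D) P(not D) ]
    prior = 0.01        # P(D): assumed prevalence
    sensitivity = 0.95  # P(+ | D), i.e. the TPR
    specificity = 0.90  # P(- | not D), i.e. the TNR

    evidence = sensitivity * prior + (1.0 - specificity) * (1.0 - prior)
    posterior = sensitivity * prior / evidence
    print(round(posterior, 3))  # 0.088: a positive test raises P(D) from 1% to about 8.8%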

Keywords

Bayes theory · Bayes network · K2 algorithm · Chain rule of conditional probabilities

Abbreviations

AUC – Area under curve
BN – Bayesian network
CH – Cooper–Herskovitz
CI – Conditional independence
DAG – Directed acyclic graph
FN – False negative
FP – False positive
GA – Genetic algorithm
K2 – Metric by Cooper and Herskovitz
MI – Machine intelligence
NPV – Negative predictive value
ORACLE – GRNN oracle
PC – Prediction-causal
PPV – Positive predictive value
ROC – Receiver operating characteristic
SVM – Support vector machine
TN – True negative
TNR – True negative rate
TP – True positive
TPR – True positive rate

References

  1. Bishop CM (2006) Pattern recognition and machine learning. Springer, New York. ISBN-13: 978-0-387-31073-2
  2. Buntine W (1991) Theory refinement in Bayesian networks. In: Proceedings of the seventh conference on uncertainty in artificial intelligence, Los Angeles, pp 52–60
  3. Cooper GF, Herskovitz E (1992) A Bayesian method for the induction of probabilistic networks from data. Mach Learn 9(4):309–347
  4. Keynes JM (1962) The principle of indifference. In: A treatise on probability, chap IV. Harper Torchbooks, New York, pp 41–64
  5. Kjaerulff UB, Madsen AL (2008) Bayesian networks and influence diagrams: a guide to construction and analysis. Springer, New York
  6. Kjaerulff UB, Madsen AL (2013) Bayesian networks and influence diagrams: a guide to construction and analysis, 2nd edn. Springer, New York. ISBN 978-1-4614-5103-7
  7. Neapolitan RE (2005) Learning Bayesian networks. Prentice Hall series in artificial intelligence, Upper Saddle River. ISBN 0-13-012534-2
  8. Singh M, Valtorta M (1995) Construction of Bayesian network structures from data: a brief survey and an efficient algorithm. Int J Approx Reason 12(2):111–131
  9. Spirtes P, Glymour C (1991) An algorithm for fast recovery of sparse causal graphs. Soc Sci Comput Rev 9:62–72
  10. Spirtes P, Glymour C, Scheines R (1993) Causation, prediction, and search. Lecture notes in statistics, vol 81. Springer, New York

Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  • Walker H. Land Jr. (1)
  • J. David Schaffer (2)
  1. Binghamton University, Bowie, USA
  2. Binghamton University, Binghamton, USA
