
Learning the Structure of Bayesian Networks

Part of the Information Science and Statistics book series (ISS)

Abstract

Consider the following situation. Some agent produces samples of cases from a Bayesian network N over the universe \( \mathcal{U} \). The cases are handed over to you, and you are asked to reconstruct the Bayesian network from them. This is the general setting for structural learning of Bayesian networks. In the real world you cannot be sure that the cases are actually sampled from a “true” network, but this we will assume. We will also assume that the sample is fair; that is, the set \( \mathcal{D} \) of cases reflects the distribution \( P_N(\mathcal{U}) \) determined by N. In other words, the distribution \( P_{\mathcal{D}}^{\#}(\mathcal{U}) \) of the cases is very close to \( P_N(\mathcal{U}) \). Furthermore, we assume that all links in N are essential, i.e., if you remove a link, then the resulting network cannot represent \( P_N(\mathcal{U}) \). Mathematically, this can be expressed as follows: if pa(A) are the parents of A, and B is any one of them, then there are two states \( b_1 \) and \( b_2 \) of B and a configuration c of the other parents such that \( P(A \mid b_1, c) \ne P(A \mid b_2, c) \).
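To make the essential-link condition concrete, here is a brute-force check in Python. It is a minimal sketch, not code from the chapter: the representation of the conditional probability table as a dictionary from parent configurations to distributions over A, and the name link_is_essential, are assumptions made purely for illustration.

    from itertools import product

    def link_is_essential(cpt, parent_states, b_index):
        """Test whether the link B -> A is essential.

        `cpt` maps a full parent configuration (a tuple over pa(A)) to the
        distribution P(A | pa(A)); `parent_states[i]` lists the states of
        the i-th parent, and `b_index` is B's position among the parents.
        The link is essential if some states b1, b2 of B and some
        configuration c of the remaining parents give
        P(A | b1, c) != P(A | b2, c).
        """
        other = [s for i, s in enumerate(parent_states) if i != b_index]
        for c in product(*other):
            # Collect P(A | b, c) for every state b of B under this fixed c.
            dists = []
            for b in parent_states[b_index]:
                config = list(c)
                config.insert(b_index, b)
                dists.append(cpt[tuple(config)])
            if any(d != dists[0] for d in dists[1:]):
                return True  # two states of B yield different conditionals
        return False

For example, with parents B and C, where B changes the conditional distribution of A under the configuration c1:

    cpt = {
        ("b1", "c1"): (0.9, 0.1), ("b2", "c1"): (0.2, 0.8),
        ("b1", "c2"): (0.5, 0.5), ("b2", "c2"): (0.5, 0.5),
    }
    link_is_essential(cpt, [["b1", "b2"], ["c1", "c2"]], b_index=0)  # True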

Keywords

Mutual information · Bayesian network · Bayesian information criterion · Score function · Conditional independence


Copyright information

© Springer Science+Business Media, LLC 2007
