Abstract
When using Bayesian networks, practitioners often express constraints among variables by conditioning a common child node to induce the desired distribution. For example, an ‘or’ constraint can be expressed by adding a child node that models the logical ‘or’ of its parents’ values and conditioning that node to true. This has the desired effect that at least one parent must be true. However, conditioning also alters the distributions of further ancestors in the network. In this paper we argue that these side effects are undesirable when constraints are added during model design. We describe a method called shielding that removes these side effects while remaining within the directed language of Bayesian networks. We then compare this method to chain graphs, which allow both undirected and directed edges and which model equivalent distributions. Thus, in addition to solving this common modelling problem, shielded Bayesian networks provide a novel method for implementing chain graphs with existing Bayesian network tools.
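The side effect the abstract describes can be seen in a minimal sketch (not the paper's code): a network A → C ← B, where C is a deterministic logical ‘or’ of its parents and the priors `p_a`, `p_b` are assumed uniform for illustration. Conditioning C = true enforces "at least one of A, B is true", but it also shifts the parents' marginals away from their priors.

```python
# Illustrative sketch only: two binary parents A, B with independent
# priors, and a deterministic child C = A or B. Conditioning on C = true
# is implemented by enumerating parent worlds, discarding those
# inconsistent with the evidence, and renormalizing.
from itertools import product

def posterior_given_or_true(p_a=0.5, p_b=0.5):
    """Posterior P(A=true) and P(B=true) after conditioning the 'or' child to true."""
    joint = {}
    for a, b in product([False, True], repeat=2):
        p = (p_a if a else 1 - p_a) * (p_b if b else 1 - p_b)
        if a or b:               # keep only worlds consistent with C = true
            joint[(a, b)] = p
    z = sum(joint.values())      # normalizing constant, i.e. P(C = true)
    post_a = sum(p for (a, _), p in joint.items() if a) / z
    post_b = sum(p for (_, b), p in joint.items() if b) / z
    return post_a, post_b

post_a, post_b = posterior_given_or_true()
print(post_a, post_b)  # each marginal rises from 0.5 to 2/3
```

With uniform priors, P(C=true) = 0.75, so each parent's posterior becomes 0.5/0.75 = 2/3: the constraint leaks probability mass back into the parents (and, in a larger network, into their ancestors), which is exactly the unwanted effect shielding is designed to remove.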
Copyright information
© 2007 Springer Berlin Heidelberg
Cite this paper
Crowley, M., Boerlage, B., Poole, D. (2007). Adding Local Constraints to Bayesian Networks. In: Kobti, Z., Wu, D. (eds) Advances in Artificial Intelligence. Canadian AI 2007. Lecture Notes in Computer Science(), vol 4509. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-72665-4_30
Print ISBN: 978-3-540-72664-7
Online ISBN: 978-3-540-72665-4