
The Hidden Elegance of Causal Interaction Models

  • Silja Renooij
  • Linda C. van der Gaag
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11940)

Abstract

Causal interaction models, such as the noisy-or model, are used in Bayesian networks to simplify probability acquisition for variables with large numbers of modelled causes. These models essentially prescribe how to complete an exponentially large probability table from a linear number of parameters. Yet, the full probability tables are typically required for inference with Bayesian networks in which such interaction models are used, although inference algorithms tailored to specific types of network exist that can directly exploit the decomposition properties of the interaction models. In this paper we revisit these decomposition properties in view of general inference algorithms and demonstrate that they allow an alternative representation of causal interaction models that is quite concise, even when large numbers of causes are involved. In addition to forestalling the need for tailored algorithms, our alternative representation brings engineering benefits beyond those widely recognised.
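To make the contrast between a linear number of parameters and an exponentially large table concrete, the sketch below (illustrative Python, not taken from the paper; the function name and the optional leak parameter are assumptions) expands the full conditional probability table of a noisy-or effect variable from one link probability per cause: with n causes, n parameters determine all 2^n table entries.

```python
from itertools import product

def noisy_or_cpt(p, leak=0.0):
    """Expand the full CPT of a noisy-or effect variable.

    p    -- list of n link probabilities, p[i] = P(effect | only cause i present)
    leak -- optional leak probability, P(effect | no cause present)

    Returns a dict mapping each of the 2**n cause configurations
    (tuples of 0/1) to P(effect = true | configuration).
    """
    n = len(p)
    cpt = {}
    for config in product((0, 1), repeat=n):
        # The effect is absent only if every active cause is independently
        # inhibited (and the leak does not fire).
        q = 1.0 - leak
        for active, p_i in zip(config, p):
            if active:
                q *= 1.0 - p_i
        cpt[config] = 1.0 - q
    return cpt

# Three causes: 3 parameters suffice to fill a table with 2**3 = 8 rows.
table = noisy_or_cpt([0.8, 0.6, 0.3])
for config, prob in sorted(table.items()):
    print(config, round(prob, 4))
```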

Keywords

Bayesian networks · Causal interaction models · Maintenance robustness


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Department of Information and Computing Sciences, Utrecht University, Utrecht, The Netherlands
  2. Dalle Molle Institute for Artificial Intelligence, Lugano, Switzerland
