
Network Based on the Joint Probability Distribution of Random Variables

Chapter in: Weighted Network Analysis

Abstract

The chapter describes approaches for defining networks based on modeling the joint distribution of a set of random variables. Since it is notoriously difficult to estimate a joint probability distribution, simplifying assumptions (e.g., multivariate normality) are often made. The joint probability distribution can be parameterized using structural equation models, Bayesian network models, or a partitioning function approach. The Kullback–Leibler (KL) divergence, which is closely related to the mutual information, can be used to measure the difference between an observed probability distribution and a model-based probability distribution; minimizing the KL divergence yields parameter estimates. The chapter is rather theoretical and requires some background in calculus and probability theory.
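
The sketch below (not taken from the chapter) illustrates two of the quantities mentioned in the abstract using plain NumPy and discrete toy distributions: the Kullback–Leibler divergence, the identity that mutual information equals the KL divergence between a joint distribution and the product of its marginals, and the fact that minimizing the KL divergence from an observed distribution to a parametric model family recovers the usual parameter estimate. Function names and the toy numbers are illustrative assumptions, not material from the book.

```python
# Minimal sketch, assuming discrete distributions represented as probability vectors.
import numpy as np

def kl_divergence(p, q):
    """D(P || Q) = sum_x p(x) log(p(x)/q(x)) for discrete distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0                       # terms with p(x) = 0 contribute nothing
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def mutual_information(joint):
    """I(X; Y) = D( P(X,Y) || P(X) P(Y) ) for a joint probability table."""
    joint = np.asarray(joint, float)
    px = joint.sum(axis=1, keepdims=True)   # marginal distribution of X
    py = joint.sum(axis=0, keepdims=True)   # marginal distribution of Y
    return kl_divergence(joint.ravel(), (px * py).ravel())

# Toy example: a 2x2 joint distribution of two dependent binary variables.
joint = np.array([[0.40, 0.10],
                  [0.10, 0.40]])
print(mutual_information(joint))            # > 0 because X and Y are dependent

# Minimizing D(P_observed || Q_theta) over a model family Q_theta is equivalent
# to maximizing the expected log-likelihood, since the entropy of P_observed
# does not depend on theta. For a Bernoulli model the minimizer is simply the
# observed frequency:
p_obs = np.array([0.3, 0.7])                # observed distribution of a binary variable
thetas = np.linspace(0.01, 0.99, 99)
kl = [kl_divergence(p_obs, [1 - t, t]) for t in thetas]
print(thetas[int(np.argmin(kl))])           # approximately 0.7
```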

Author information

Correspondence to Steve Horvath.

Copyright information

© 2011 Springer Science+Business Media, LLC

Cite this chapter

Horvath, S. (2011). Network Based on the Joint Probability Distribution of Random Variables. In: Weighted Network Analysis. Springer, New York, NY. https://doi.org/10.1007/978-1-4419-8819-5_15
