Summary
Bayesian networks combine probability theory and graph theory. Graph theory provides a framework for representing complex structures of highly interacting sets of variables; probability theory provides a method for inferring these structures from observations or measurements in the presence of noise and uncertainty. Many problems in computational molecular biology and bioinformatics, such as sequence alignment, molecular evolution, and genetic networks, can be treated as particular instances of the general problem of learning Bayesian networks from data. This chapter provides a brief introduction, in preparation for later chapters of this book.
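To make the summary concrete, the following standard factorization (stated here for orientation, not quoted from the chapter; the notation Pa(X_i) for the parent set is this sketch's own) shows how the graph and the probability model fit together. A Bayesian network consists of a directed acyclic graph over random variables X_1, ..., X_n together with a conditional distribution for each variable given its parents in the graph, so that the joint distribution factorizes as

P(X_1, \dots, X_n) = \prod_{i=1}^{n} P\bigl(X_i \mid \mathrm{Pa}(X_i)\bigr)

Learning a Bayesian network from data then amounts to inferring the graph structure and the associated conditional distributions from observed values of the variables.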
Copyright information
© 2005 Springer-Verlag London Limited
About this chapter
Cite this chapter
Husmeier, D. (2005). Introduction to Learning Bayesian Networks from Data. In: Husmeier, D., Dybowski, R., Roberts, S. (eds) Probabilistic Modeling in Bioinformatics and Medical Informatics. Advanced Information and Knowledge Processing. Springer, London. https://doi.org/10.1007/1-84628-119-9_2
DOI: https://doi.org/10.1007/1-84628-119-9_2
Publisher Name: Springer, London
Print ISBN: 978-1-85233-778-0
Online ISBN: 978-1-84628-119-8