This chapter gives a tutorial introduction to ensemble learning, a recently developed Bayesian method. For many problems it is intractable to perform inference using the true posterior density over the unknown variables. Ensemble learning approximates the true posterior by a simpler distribution for which the required inferences are tractable. When we make a model of a system, we are setting up a tool that can be used to make inferences, predictions and decisions. Each model can be seen as a hypothesis, or explanation, which makes assertions both about quantities that are directly observable and about those that can only be inferred from their effect on observable quantities.
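The core idea, that an intractable posterior can be replaced by a simpler approximating distribution, can be illustrated with a minimal numerical sketch. The example below is not from the chapter: it assumes a toy "true" posterior (a two-component Gaussian mixture standing in for an intractable density) and fits a single Gaussian q(θ) = N(m, s²) by minimizing the Kullback-Leibler divergence KL(q ∥ p) on a grid, which is the criterion ensemble learning uses.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical toy problem: the "true" posterior p(theta) is a
# two-component Gaussian mixture, standing in for a density that would
# be intractable in a real model. We fit a single Gaussian
# q(theta) = N(m, s^2) by minimizing KL(q || p), evaluated on a grid.

theta = np.linspace(-10.0, 10.0, 4001)
dtheta = theta[1] - theta[0]

def gauss(x, m, s):
    """Normal density N(m, s^2) evaluated at x."""
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

# Unnormalized "true" posterior; normalize it numerically on the grid.
p_unnorm = 0.6 * gauss(theta, -1.0, 1.0) + 0.4 * gauss(theta, 2.0, 1.5)
p = p_unnorm / (p_unnorm.sum() * dtheta)

def kl_qp(params):
    """KL(q || p) for q = N(m, exp(log_s)^2), via grid quadrature."""
    m, log_s = params
    q = gauss(theta, m, np.exp(log_s))
    mask = q > 1e-300  # terms with q ~ 0 contribute nothing
    return np.sum(q[mask] * (np.log(q[mask]) - np.log(p[mask]))) * dtheta

# Optimize the variational parameters (mean and log-std of q).
res = minimize(kl_qp, x0=[0.0, 0.0], method="Nelder-Mead")
m_opt, s_opt = res.x[0], np.exp(res.x[1])
print(f"q approximates p with mean {m_opt:.2f}, std {s_opt:.2f}")
```

Because KL(q ∥ p) penalizes q for placing mass where p has little, the fitted Gaussian need not match the moments of the mixture exactly; this mode-seeking behaviour is a characteristic property of the approximation direction used in ensemble learning.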