Introduction

  • Radford M. Neal
Part of the Lecture Notes in Statistics book series (LNS, volume 118)

Abstract

This book develops the Bayesian approach to learning for neural networks by examining the meaning of the prior distributions that are the starting point for Bayesian learning, by showing how the computations required by the Bayesian approach can be performed using Markov chain Monte Carlo methods, and by evaluating the effectiveness of Bayesian methods on several real and synthetic data sets. This work has practical significance for modeling data with neural networks. From a broader perspective, it shows how the Bayesian approach can be successfully applied to complex models, and in particular, challenges the common notion that one must limit the complexity of the model used when the amount of training data is small. I begin here by introducing the Bayesian framework, discussing past work on applying it to neural networks, and reviewing the basic concepts of Markov chain Monte Carlo implementation.
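To make the MCMC idea concrete, here is a minimal sketch of sampling the posterior over the weights of a tiny one-hidden-layer network with a Gaussian prior and Gaussian noise model, using random-walk Metropolis. This is an illustration only: the network size, prior scale, noise level, and data are hypothetical, and random-walk Metropolis is a far simpler sampler than the hybrid Monte Carlo methods the book develops.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic regression data (not from the book).
X = rng.uniform(-1, 1, size=(20, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(20)

H = 3                        # number of hidden units
D = H + H + H + 1            # input-to-hidden, hidden biases, hidden-to-output, output bias

def unpack(w):
    a = w[:H].reshape(1, H)  # input-to-hidden weights
    b = w[H:2 * H]           # hidden-unit biases
    v = w[2 * H:3 * H]       # hidden-to-output weights
    c = w[3 * H]             # output bias
    return a, b, v, c

def predict(w, X):
    a, b, v, c = unpack(w)
    h = np.tanh(X @ a + b)   # hidden-unit activations
    return h @ v + c

def log_posterior(w, sigma=0.1, tau=1.0):
    # Gaussian prior on all weights plus Gaussian likelihood.
    log_prior = -0.5 * np.sum(w ** 2) / tau ** 2
    resid = y - predict(w, X)
    log_lik = -0.5 * np.sum(resid ** 2) / sigma ** 2
    return log_prior + log_lik

# Random-walk Metropolis over the weight vector.
w = np.zeros(D)
lp = log_posterior(w)
samples = []
for t in range(5000):
    w_prop = w + 0.05 * rng.standard_normal(D)
    lp_prop = log_posterior(w_prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject step
        w, lp = w_prop, lp_prop
    if t >= 2500:                              # discard burn-in
        samples.append(w.copy())
samples = np.array(samples)

# Bayesian prediction: average network outputs over posterior samples.
X_test = np.linspace(-1, 1, 5).reshape(-1, 1)
pred_mean = np.mean([predict(s, X_test) for s in samples], axis=0)
```

The final averaging step is the key Bayesian ingredient: predictions come from the whole posterior distribution over weights rather than from a single fitted network.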

Keywords

Posterior Distribution, Hidden Unit, Gaussian Approximation, Predictive Distribution, Markov Chain Monte Carlo Method
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.

Copyright information

© Springer Science+Business Media New York 1996

Authors and Affiliations

  • Radford M. Neal, Department of Statistics and Department of Computer Science, University of Toronto, Toronto, Canada
