
Monte Carlo Implementation

  • Radford M. Neal
Part of the Lecture Notes in Statistics book series (LNS, volume 118)

Abstract

This chapter presents a Markov chain Monte Carlo implementation of Bayesian learning for neural networks in which network parameters are updated using the hybrid Monte Carlo algorithm, a form of the Metropolis algorithm in which candidate states are found by means of dynamical simulation. Hyperparameters are updated separately using Gibbs sampling, allowing their values to be used in choosing good stepsizes for the discretized dynamics. I show that hybrid Monte Carlo performs better than simple Metropolis, due to its avoidance of random walk behaviour. I also discuss variants of hybrid Monte Carlo in which dynamical computations are done using “partial gradients”, in which acceptance is based on a “window” of states, and in which momentum updates incorporate “persistence”.
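
The structure the abstract describes, a leapfrog simulation of Hamiltonian dynamics followed by a Metropolis accept/reject test, can be sketched in a few lines. The Python below is a minimal illustration, not the chapter's implementation: the names `hmc_update`, `log_post`, and `grad_log_post` are placeholders, the stepsize is a fixed constant rather than being chosen from the current hyperparameter values as in the chapter's Gibbs-sampling scheme, and the variants mentioned (partial gradients, windows, persistence) are omitted.

```python
import numpy as np

def hmc_update(q, log_post, grad_log_post, step_size, n_steps, rng):
    """One hybrid Monte Carlo update: leapfrog proposal + Metropolis test.

    q             -- current parameter vector
    log_post      -- log posterior density (up to a constant)
    grad_log_post -- gradient of log_post with respect to q
    """
    p = rng.standard_normal(q.shape)            # fresh momenta ~ N(0, I)
    h_current = -log_post(q) + 0.5 * p @ p      # H(q, p) = U(q) + K(p)

    # Leapfrog discretization of Hamiltonian dynamics.
    q_new = q.copy()
    p_new = p + 0.5 * step_size * grad_log_post(q_new)   # half step for momenta
    for i in range(n_steps):
        q_new = q_new + step_size * p_new                # full step for positions
        if i < n_steps - 1:
            p_new = p_new + step_size * grad_log_post(q_new)
    p_new = p_new + 0.5 * step_size * grad_log_post(q_new)  # final half step

    # Metropolis test corrects for discretization error in the dynamics.
    h_proposed = -log_post(q_new) + 0.5 * p_new @ p_new
    if rng.random() < np.exp(h_current - h_proposed):
        return q_new                                     # accept proposal
    return q                                             # reject: keep old state

# Toy usage: sampling a standard 2-D Gaussian. Here log_post and
# grad_log_post stand in for a network's log posterior and the
# gradient computed by backpropagation.
rng = np.random.default_rng(0)
q = np.zeros(2)
for _ in range(1000):
    q = hmc_update(q, lambda x: -0.5 * x @ x, lambda x: -x,
                   step_size=0.1, n_steps=20, rng=rng)
```

Because each leapfrog trajectory can move many steps through parameter space before a single accept/reject decision, successive accepted states can be far apart; this is the avoidance of random walk behaviour that the abstract credits for hybrid Monte Carlo's advantage over simple Metropolis.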

Keywords

Gibbs Sampling · Hidden Unit · Metropolis Algorithm · Monte Carlo Estimate · Bayesian Learning
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.


Copyright information

© Springer Science+Business Media New York 1996

Authors and Affiliations

  • Radford M. Neal
    Department of Statistics and Department of Computer Science, University of Toronto, Toronto, Canada
