Abstract
This chapter presents a Markov chain Monte Carlo implementation of Bayesian learning for neural networks in which network parameters are updated using the hybrid Monte Carlo algorithm, a form of the Metropolis algorithm in which candidate states are found by means of dynamical simulation. Hyperparameters are updated separately using Gibbs sampling, allowing their values to be used in choosing good stepsizes for the discretized dynamics. I show that hybrid Monte Carlo performs better than simple Metropolis, due to its avoidance of random walk behaviour. I also discuss variants of hybrid Monte Carlo in which dynamical computations are done using “partial gradients”, in which acceptance is based on a “window” of states, and in which momentum updates incorporate “persistence”.
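The hybrid Monte Carlo update described in the abstract can be sketched as follows. This is a minimal illustration, not the chapter's implementation: the target log-density and its gradient stand in for the neural-network posterior (a standard Gaussian is used here for concreteness), and the stepsize and trajectory length are arbitrary choices rather than the hyperparameter-informed stepsizes the chapter develops.

```python
import numpy as np

def log_prob(q):
    # Illustrative target: log-density of a standard Gaussian (up to a constant).
    # In the chapter's setting this would be the log posterior over network parameters.
    return -0.5 * np.dot(q, q)

def grad_log_prob(q):
    # Gradient of the illustrative log-density.
    return -q

def hmc_step(q, rng, step_size=0.1, n_leapfrog=20):
    """One hybrid Monte Carlo update: draw a fresh momentum, simulate
    Hamiltonian dynamics with the leapfrog discretization, then accept
    or reject the endpoint with a Metropolis test on the total energy."""
    p = rng.standard_normal(q.shape)                    # sample momentum
    current_H = -log_prob(q) + 0.5 * np.dot(p, p)       # energy at the start

    q_new, p_new = q.copy(), p.copy()
    p_new += 0.5 * step_size * grad_log_prob(q_new)     # initial half step for momentum
    for _ in range(n_leapfrog - 1):
        q_new += step_size * p_new                      # full step for position
        p_new += step_size * grad_log_prob(q_new)       # full step for momentum
    q_new += step_size * p_new                          # last full position step
    p_new += 0.5 * step_size * grad_log_prob(q_new)     # final half step for momentum

    proposed_H = -log_prob(q_new) + 0.5 * np.dot(p_new, p_new)
    if np.log(rng.uniform()) < current_H - proposed_H:  # Metropolis acceptance test
        return q_new, True
    return q, False
```

Because the proposal follows a long dynamical trajectory rather than a small random perturbation, successive states can be far apart, which is the avoidance of random walk behaviour the abstract refers to.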
Copyright information
© 1996 Springer Science+Business Media New York
Cite this chapter
Neal, R.M. (1996). Monte Carlo Implementation. In: Bayesian Learning for Neural Networks. Lecture Notes in Statistics, vol 118. Springer, New York, NY. https://doi.org/10.1007/978-1-4612-0745-0_3
Print ISBN: 978-0-387-94724-2
Online ISBN: 978-1-4612-0745-0