Training Restricted Boltzmann Machines with Multi-tempering: Harnessing Parallelization
Restricted Boltzmann Machines (RBMs) are unsupervised probabilistic neural networks that can be stacked to form Deep Belief Networks. Given the recent popularity of RBMs and the increasing availability of parallel computing architectures, it becomes interesting to investigate learning algorithms for RBMs that benefit from parallel computation. In this paper, we look at two extensions of the parallel tempering algorithm, which is a Markov Chain Monte Carlo method for approximating the likelihood gradient. The first extension is directed at a more effective exchange of information among the parallel sampling chains. The second extension estimates gradients by averaging over chains from different temperatures. We investigate the efficiency of the proposed methods and demonstrate their usefulness on the MNIST dataset. The weighted averaging, in particular, appears to benefit Maximum Likelihood learning.
Keywords: Markov Chain Monte Carlo · Restricted Boltzmann Machines · Neural Networks · Machine Learning
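The exchange of information among parallel sampling chains mentioned in the abstract builds on the standard adjacent-pair swap of parallel tempering. The following is a minimal NumPy sketch of that swap step, not the paper's multi-tempering extension; the energy parameterization and the chain representation are assumptions chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def energy(v, h, W, b, c):
    # RBM energy: E(v, h) = -v·b - h·c - v·W·h
    return -(v @ b + h @ c + v @ W @ h)

def swap_adjacent(states, betas, W, b, c):
    """Metropolis swap between chains at adjacent inverse temperatures.

    Each chain i samples from p_i(v, h) ∝ exp(-betas[i] * E(v, h)).
    A swap of states (i, i+1) is accepted with probability
    min(1, exp((betas[i] - betas[i+1]) * (E_i - E_{i+1}))).
    """
    for i in range(len(betas) - 1):
        (vi, hi), (vj, hj) = states[i], states[i + 1]
        Ei = energy(vi, hi, W, b, c)
        Ej = energy(vj, hj, W, b, c)
        log_accept = (betas[i] - betas[i + 1]) * (Ei - Ej)
        if np.log(rng.random()) < log_accept:
            states[i], states[i + 1] = states[i + 1], states[i]
    return states
```

In standard parallel tempering only adjacent pairs are proposed for exchange; the paper's first extension concerns making such exchanges more effective across the full set of chains.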