Abstract
The algorithm for stacking RBMs is surprisingly simple. Roughly stated, you train the bottom-most RBM, the one whose input is the training data. Once that is trained, you run the training cases through this model and use its hidden-layer activations as inputs to the next RBM, which you then train. When it is trained, you run the training data through the first and second RBMs and use the second's hidden-layer activations as inputs for the third, which is then trained, and so on.
Copyright information
© 2018 Timothy Masters
Cite this chapter
Masters, T. (2018). Greedy Training. In: Deep Belief Nets in C++ and CUDA C: Volume 1. Apress, Berkeley, CA. https://doi.org/10.1007/978-1-4842-3591-1_4
Publisher Name: Apress, Berkeley, CA
Print ISBN: 978-1-4842-3590-4
Online ISBN: 978-1-4842-3591-1
eBook Packages: Professional and Applied Computing; Apress Access Books; Professional and Applied Computing (R0)