In the previous chapter, supervised learning was used to classify the outputs of a Neural Abstraction Pyramid that had been trained with unsupervised learning. This chapter discusses how supervised learning techniques can be applied within the Neural Abstraction Pyramid itself.
After an introduction, supervised learning in feed-forward neural networks is covered. Attention is paid to the issues of weight sharing and the handling of network borders, which are relevant for the Neural Abstraction Pyramid architecture. Section 6.3 discusses supervised learning for recurrent networks. The difficulty of gradient computation in recurrent networks makes it necessary to employ algorithms that use only the sign of the gradient to update the weights.
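The best-known algorithm of this kind is RPROP (resilient backpropagation), which discards the gradient magnitude and adapts a separate step size per weight from the sign of successive gradients. The following is a minimal sketch of one such update, not the exact variant used in the chapter; the function name and parameter defaults are illustrative assumptions.

```python
import numpy as np

def rprop_step(w, grad, prev_grad, step, eta_plus=1.2, eta_minus=0.5,
               step_min=1e-6, step_max=50.0):
    """One RPROP-style update: only the sign of the gradient is used,
    while per-weight step sizes adapt to sign agreement between the
    current and the previous gradient. (Sketch; defaults are typical
    values from the RPROP literature, not taken from this chapter.)"""
    sign_change = grad * prev_grad
    # Grow the step where the gradient sign is stable ...
    step = np.where(sign_change > 0,
                    np.minimum(step * eta_plus, step_max), step)
    # ... and shrink it where the sign flipped (a minimum was overshot).
    step = np.where(sign_change < 0,
                    np.maximum(step * eta_minus, step_min), step)
    # After a sign flip, suppress this gradient so the next
    # iteration starts with a neutral sign comparison.
    grad = np.where(sign_change < 0, 0.0, grad)
    # The weight moves by the adaptive step in the downhill direction;
    # the gradient magnitude itself never enters the update.
    w = w - np.sign(grad) * step
    return w, grad, step
```

For example, iterating this update on the quadratic error f(w) = w^2 (gradient 2w) drives w toward zero: the step grows while the gradient sign is stable and halves after each overshoot, so the weight settles near the minimum without any learning-rate tuning.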
Keywords: Gradient Descent, Recurrent Neural Network, Hidden Unit, Output Unit, Recurrent Network