Supervised Learning

  • Sven Behnke
Chapter in the Lecture Notes in Computer Science book series (LNCS, volume 2766)

Abstract

In the previous chapter, supervised learning was used to classify the outputs of a Neural Abstraction Pyramid that had been trained with unsupervised learning. This chapter discusses how supervised learning techniques can be applied within the Neural Abstraction Pyramid itself.

After an introduction, supervised learning in feed-forward neural networks is covered, with attention to weight sharing and the handling of network borders, both of which are relevant to the Neural Abstraction Pyramid architecture. Section 6.3 discusses supervised learning for recurrent networks. Because gradient computation in recurrent networks is difficult, it becomes necessary to employ algorithms that use only the sign of the gradient to update the weights.
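
Updates that use only the sign of the gradient are characteristic of resilient propagation (RPROP). The following is a minimal, illustrative sketch of an RPROP-style per-weight step in NumPy; it is not the chapter's implementation. The function name, the hyperparameter defaults (eta_plus, eta_minus, and the step bounds), and the choice of the RPROP- variant are assumptions made here for illustration.

    import numpy as np

    def rprop_step(w, grad, prev_grad, step,
                   eta_plus=1.2, eta_minus=0.5,
                   step_min=1e-6, step_max=1.0):
        """One RPROP- style update; illustrative, not the chapter's code.

        All arguments are arrays of the same shape. Returns the updated
        weights, the gradient to carry into the next iteration, and the
        adapted per-weight step sizes.
        """
        # Compare current and previous gradient signs, element-wise.
        sign_change = grad * prev_grad
        # Same sign: grow the step size; opposite sign: shrink it.
        step = np.where(sign_change > 0,
                        np.minimum(step * eta_plus, step_max), step)
        step = np.where(sign_change < 0,
                        np.maximum(step * eta_minus, step_min), step)
        # RPROP-: after a sign flip, zero the gradient so that weight
        # is neither updated nor re-adapted in the next iteration.
        grad = np.where(sign_change < 0, 0.0, grad)
        # Only the sign of the gradient enters the update, never its
        # magnitude (np.sign(0) is 0, so suppressed weights stay put).
        w = w - np.sign(grad) * step
        return w, grad, step

In use, prev_grad and step would be carried across iterations, with grad supplied by backpropagation through time. Because only the sign of the gradient enters the update, vanishing or exploding gradient magnitudes in a recurrent network do not destabilize the step sizes.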

Keywords

Gradient Descent · Recurrent Neural Network · Hidden Unit · Output Unit · Recurrent Network


Copyright information

© Springer-Verlag Berlin Heidelberg 2003
