Improved Training Scheme Combining the Expectation Maximisation (EM) Algorithm with the RVFL Approach

  • Dirk Husmeier
Part of the Perspectives in Neural Computing book series (PERSPECT.NEURAL)

Abstract

This chapter reviews the Expectation Maximisation (EM) algorithm and points out its principled advantage over a conventional gradient descent scheme. It is shown, however, that its application to the GM network is not directly feasible, since the M-step of this algorithm is intractable with respect to one class of network parameters. A simple simulation demonstrates that, as a consequence of this bottleneck effect, a gain in training speed can only be achieved at the price of a concomitant deterioration in the generalisation performance. It is therefore suggested to combine the EM algorithm with the RVFL concept discussed in the previous chapter. The parameter adaptation rules for the resulting GM-RVFL network are derived, and questions of numerical stability of the algorithm are discussed. The superiority of this scheme over training a GM model with standard gradient descent will be demonstrated in the next chapter.
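To illustrate why fixing the hidden layer makes the M-step tractable, the following is a minimal sketch of one EM cycle for a simplified GM-RVFL-style model: a mixture of Gaussian kernels whose means are linear in a fixed random (RVFL) hidden layer. The function names, symbols, and toy data are illustrative assumptions, not the chapter's notation or derivation; with the hidden weights frozen, the output weights, kernel widths, and priors all admit closed-form M-step updates.

```python
import numpy as np

rng = np.random.default_rng(0)

def rvfl_features(X, A, b):
    """Fixed random hidden layer tanh(X A + b); A and b are never trained (RVFL idea)."""
    return np.tanh(X @ A + b)

def em_step(H, y, W, sigma2, pi):
    """One EM iteration for a mixture of K Gaussian kernels with means H @ W.T.
    H: (N, D) fixed hidden features, y: (N,) targets,
    W: (K, D) output weights, sigma2: (K,) kernel variances, pi: (K,) priors."""
    N, D = H.shape
    K = W.shape[0]

    # E-step: posterior responsibility of each kernel for each data point.
    mu = H @ W.T                                    # (N, K) kernel means
    log_p = -0.5 * ((y[:, None] - mu) ** 2 / sigma2
                    + np.log(2 * np.pi * sigma2)) + np.log(pi)
    log_p -= log_p.max(axis=1, keepdims=True)       # subtract max for numerical stability
    r = np.exp(log_p)
    r /= r.sum(axis=1, keepdims=True)               # (N, K) responsibilities

    # M-step: closed-form updates, tractable because the hidden layer is fixed.
    for k in range(K):
        Rk = r[:, k]
        # Responsibility-weighted least squares for the output weights of kernel k.
        G = H.T @ (Rk[:, None] * H) + 1e-8 * np.eye(D)
        W[k] = np.linalg.solve(G, H.T @ (Rk * y))
        resid = y - H @ W[k]
        sigma2[k] = (Rk @ resid ** 2) / Rk.sum()    # kernel width update
        pi[k] = Rk.mean()                           # prior update
    return W, sigma2, pi

# Toy usage: noisy sinusoid, 2 kernels, 20 fixed random hidden units.
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
A, b = rng.standard_normal((1, 20)), rng.standard_normal(20)
H = rvfl_features(X, A, b)
W = 0.1 * rng.standard_normal((2, 20))
sigma2, pi = np.ones(2), np.full(2, 0.5)
for _ in range(30):
    W, sigma2, pi = em_step(H, y, W, sigma2, pi)
```

In this sketch the weighted least-squares solve replaces the gradient descent steps that would otherwise be needed for the output weights; the intractable part of the full GM network's M-step is avoided precisely because the hidden-layer parameters are held at their random initial values.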

Keywords

Expectation Maximisation · Expectation Maximisation Algorithm · Output Weight · Kernel Width · Conditional Probability Density

Copyright information

© Springer-Verlag London Limited 1999

Authors and Affiliations

  • Dirk Husmeier
    1. Neural Systems Group, Department of Electrical & Electronic Engineering, Imperial College, London, UK