Improved Training Scheme Combining the Expectation Maximisation (EM) Algorithm with the RVFL Approach
This chapter reviews the Expectation Maximisation (EM) algorithm and points out its principled advantage over a conventional gradient descent scheme. It is shown, however, that its application to the GM network is not directly feasible, since the M-step of this algorithm is intractable with respect to one class of network parameters. A simple simulation demonstrates that, as a consequence of this bottleneck effect, a gain in training speed can only be achieved at the price of a concomitant deterioration in the generalisation performance. It is therefore suggested to combine the EM algorithm with the RVFL concept discussed in the previous chapter. The parameter adaptation rules for the resulting GM-RVFL network are derived, and questions of the numerical stability of the algorithm are discussed. The superiority of this scheme over training a GM model with standard gradient descent will be demonstrated in the next chapter.
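To make the contrast concrete, the sketch below shows the E- and M-steps of EM for a simple one-dimensional Gaussian mixture, where the M-step has a closed-form solution. This is an illustrative assumption-laden example, not the chapter's GM-RVFL derivation: the chapter's point is precisely that for the GM network the analogous M-step is intractable for one class of parameters, which is what motivates combining EM with RVFL.

```python
import numpy as np

def em_gaussian_mixture(x, n_components=2, n_iter=50):
    """Illustrative EM for a 1-D Gaussian mixture (not the GM-RVFL scheme).

    Here the M-step is a closed-form re-estimation; in the GM network
    discussed in the chapter, no such closed form exists for one class
    of network parameters, creating the bottleneck described above.
    """
    # Deterministic initialisation: spread the means over the data range.
    pi = np.full(n_components, 1.0 / n_components)
    mu = np.linspace(x.min(), x.max(), n_components)
    var = np.full(n_components, np.var(x))
    for _ in range(n_iter):
        # E-step: posterior responsibilities p(component k | x_n).
        dens = (np.exp(-0.5 * (x[:, None] - mu) ** 2 / var)
                / np.sqrt(2.0 * np.pi * var))
        resp = pi * dens
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: closed-form updates of weights, means and variances.
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return pi, mu, var

# Usage: two well-separated clusters are recovered by the iteration.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-3.0, 1.0, 200), rng.normal(3.0, 1.0, 200)])
pi, mu, var = em_gaussian_mixture(x)
```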
Keywords: Expectation Maximisation, Expectation Maximisation Algorithm, Output Weight, Kernel Width, Conditional Probability Density