A Maximum Likelihood Training Scheme

  • Dirk Husmeier
Part of the Perspectives in Neural Computing book series (PERSPECT.NEURAL)

Abstract

An error function E for a mixture model is derived from a maximum likelihood approach. The derivation of a gradient descent scheme is performed for both the DSM and the GM networks, and leads to a modified form of the backpropagation algorithm. However, a straightforward application of this method is shown to suffer from considerable inherent convergence problems due to large curvature variations of the error surface. A simple rectification scheme based on a curvature-based shape modification of E is presented.
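The chapter derives E specifically for the DSM and GM networks, which are not reproduced here. As a loose illustration of the general idea only, the following is a minimal sketch, under assumed notation, of a maximum-likelihood error function for a plain univariate Gaussian mixture (negative log-likelihood), minimized by naive gradient descent on the component means; all names, values, and the two-component setup are hypothetical, and the curvature-based rectification scheme is not implemented:

```python
import numpy as np

def gaussian(y, mu, sigma):
    """Univariate Gaussian density N(y | mu, sigma^2)."""
    return np.exp(-0.5 * ((y - mu) / sigma) ** 2) / (np.sqrt(2.0 * np.pi) * sigma)

def error_E(y, weights, mus, sigmas):
    """Maximum-likelihood error: E = -sum_n log sum_k w_k N(y_n | mu_k, sigma_k)."""
    p = np.array([w * gaussian(y, m, s) for w, m, s in zip(weights, mus, sigmas)])
    return -np.sum(np.log(p.sum(axis=0)))

def grad_E_mu(y, weights, mus, sigmas):
    """Analytic gradient dE/dmu_k = -sum_n r_nk (y_n - mu_k) / sigma_k^2,
    where r_nk is the posterior responsibility of component k for point n."""
    p = np.array([w * gaussian(y, m, s) for w, m, s in zip(weights, mus, sigmas)])
    r = p / p.sum(axis=0)                     # responsibilities sum to 1 over k
    return np.array([-np.sum(r[k] * (y - mus[k]) / sigmas[k] ** 2)
                     for k in range(len(mus))])

# Hypothetical data: two well-separated clusters; only the means are trained.
rng = np.random.default_rng(0)
y = np.concatenate([rng.normal(-2.0, 0.5, 200), rng.normal(2.0, 0.5, 200)])
w = np.array([0.5, 0.5])
mu = np.array([-1.0, 1.0])                    # deliberately misplaced initial means
sigma = np.array([0.5, 0.5])
eta = 1e-4                                    # illustrative learning rate

E_start = error_E(y, w, mu, sigma)
for _ in range(200):
    mu -= eta * grad_E_mu(y, w, mu, sigma)    # plain gradient descent step
E_end = error_E(y, w, mu, sigma)
```

After training, E has decreased and the means have moved toward the cluster centres near ±2. Even in this toy setting the learning rate must be chosen carefully, which hints at the curvature-related convergence problems the abstract describes.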

Keywords

Learning Rate · Gradient Descent · Training Scheme · Output Weight · Error Surface
These keywords were added by machine and not by the authors. This process is experimental, and the keywords may be updated as the learning algorithm improves.

Copyright information

© Springer-Verlag London Limited 1999

Authors and Affiliations

  • Dirk Husmeier
    1. Neural Systems Group, Department of Electrical & Electronic Engineering, Imperial College, London, UK