
Demonstration: Committees of Networks Trained with Different Regularisation Schemes

  • Dirk Husmeier
Part of the Perspectives in Neural Computing book series (PERSPECT.NEURAL)

Abstract

An ensemble of GM-RVFL networks is applied to the stochastic time series generated from the logistic-kappa map, and the dependence of the generalisation performance on the regularisation method and the weighting scheme is studied. For a single-model predictor, application of the Bayesian evidence scheme is found to lead to superior results. However, when using network committees, under-regularisation can be advantageous, since it leads to a larger model diversity, as a result of which a more substantial decrease of the generalisation ‘error’ can be achieved.
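The mechanism described in the abstract can be made concrete with a small numerical sketch. The Python fragment below is an illustration only: a plain RVFL-style network (random hidden layer, ridge-regularised linear output weights) stands in for the GM-RVFL model of the chapter, a noisy logistic map stands in for the logistic-kappa map, and the network sizes and regularisation strengths are arbitrary choices, not the values used in the experiments. Committee diversity here comes both from the different random hidden layers and from the weak regularisation; the sketch only shows the general effect that averaging over diverse, individually overfitting members can reduce the test error below that of a single, more strongly regularised model.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_series(n, noise=0.05):
    # Noisy logistic map x_{t+1} = 4 x_t (1 - x_t) + noise: an illustrative
    # stand-in for the logistic-kappa map, not the chapter's data.
    x = np.empty(n + 1)
    x[0] = 0.3
    for t in range(n):
        x[t + 1] = np.clip(4.0 * x[t] * (1.0 - x[t])
                           + noise * rng.standard_normal(), 0.0, 1.0)
    return x[:-1, None], x[1:]          # inputs x_t, targets x_{t+1}

class RVFL:
    """Random hidden layer, ridge-regularised linear output weights
    (a plain RVFL network, not the GM-RVFL model of the chapter)."""
    def __init__(self, n_hidden=50, lam=1e-2, seed=0):
        r = np.random.default_rng(seed)
        self.W = 3.0 * r.standard_normal((1, n_hidden))   # fixed random input weights
        self.b = r.standard_normal(n_hidden)              # fixed random biases
        self.lam = lam                                    # ridge penalty
    def _phi(self, X):
        return np.tanh(X @ self.W + self.b)
    def fit(self, X, y):
        H = self._phi(X)
        A = H.T @ H + self.lam * np.eye(H.shape[1])
        self.beta = np.linalg.solve(A, H.T @ y)           # regularised least squares
        return self
    def predict(self, X):
        return self._phi(X) @ self.beta

Xtr, ytr = make_series(50)        # deliberately small training set
Xte, yte = make_series(2000)      # large test set

def mse(y, p):
    return float(np.mean((y - p) ** 2))

# One strongly regularised network versus a committee of ten weakly
# regularised (hence more diverse) networks, combined by uniform averaging.
single = RVFL(lam=1e-1, seed=1).fit(Xtr, ytr)
committee = [RVFL(lam=1e-4, seed=s).fit(Xtr, ytr) for s in range(10)]
pred_committee = np.mean([m.predict(Xte) for m in committee], axis=0)

print("single network MSE:", mse(yte, single.predict(Xte)))
print("committee      MSE:", mse(yte, pred_committee))
```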


Notes

  1. This partitioning of the available data into a small training set and a large cross-validation set is not realistic for practical applications. The small training set size was chosen to test the effects of overfitting. The large cross-validation set was used to obtain a reliable estimate of the weighting scheme (13.33), against which the alternative weighting scheme (13.31) and a uniform weighting scheme are to be compared (a generic weighting sketch follows these notes).
  2. The values that had achieved good results in the simulations of Chapter 16 were simply used again.
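As a complement to the first note, the fragment below sketches how committee weights might be derived from cross-validation errors. The actual expressions (13.31) and (13.33) are not reproduced here; the inverse-error rule is a generic, hypothetical stand-in, used only to illustrate the kind of comparison made against uniform weighting.

```python
import numpy as np

def committee_weights(val_errors, scheme="inverse"):
    """Normalised committee weights from cross-validation errors.
    'inverse' is a generic, hypothetical rule, NOT equation (13.31) or (13.33)."""
    e = np.asarray(val_errors, dtype=float)
    if scheme == "uniform":
        w = np.ones_like(e)
    elif scheme == "inverse":
        w = 1.0 / (e + 1e-12)           # small constant guards against e == 0
    else:
        raise ValueError(f"unknown scheme: {scheme}")
    return w / w.sum()

# Example: three committee members with cross-validation MSEs 0.02, 0.05, 0.10.
print(committee_weights([0.02, 0.05, 0.10]))             # [0.625, 0.25, 0.125]
print(committee_weights([0.02, 0.05, 0.10], "uniform"))  # [1/3, 1/3, 1/3]
```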

Copyright information

© Springer-Verlag London Limited 1999

Authors and Affiliations

  • Dirk Husmeier
    Neural Systems Group, Department of Electrical & Electronic Engineering, Imperial College, London, UK
