Improving Generalisation Using Modular Neural Networks

  • David McLean
  • Zuhair Bandar
  • Jim O’Shea
Conference paper


This paper deals with improving the generalisation performance of feed forward neural networks (FFNNs) on real world data domains by using more complex architectures for modelling. The convention in neural networks is to use as small an architecture as possible, forcing better generalisation by modelling the underlying distribution and ignoring the details [1]. This practice loses information from the training data which, in real world domains, may represent important though poorly represented decision regions. The problem with introducing extra free parameters (more neurons and weights) into a network is that over-fitting can occur, causing the network to model the training data too closely and generalise badly on new data from the same domain. This problem is overcome by combining a number of FFNNs (with small architectures) that have been trained on the same data but generalise differently, producing more complex decision regions and improved generalisation. Committee decision theory is used to produce the combined model and has given promising results in the past [2][3][4].

A real world medical data set consisting of non-discrete attribute values, and FFNNs trained using Back Propagation (BP) [5], were used to test the validity of the concepts presented.
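The committee idea described above can be sketched in a few lines: several small networks, each trained on the same data but generalising differently, each cast a class vote on an input, and the committee outputs the majority class. The sketch below is illustrative only — the `make_unit` stand-ins are simple linear threshold units, not the paper's BP-trained networks, and the weights are invented for the example.

```python
import numpy as np

def make_unit(w, b):
    """A single linear threshold unit standing in for a trained FFNN.
    Returns a predictor mapping an input vector to class 0 or 1."""
    def predict(x):
        return int(np.dot(w, x) + b > 0)
    return predict

def committee_predict(members, x):
    """Committee decision: majority vote over the members' class outputs."""
    votes = sum(m(x) for m in members)
    return int(votes > len(members) / 2)

# Three small "networks" with different decision boundaries
# (hypothetical weights chosen for illustration).
members = [
    make_unit(np.array([1.0, 0.0]), -0.5),  # splits on the first feature
    make_unit(np.array([0.0, 1.0]), -0.5),  # splits on the second feature
    make_unit(np.array([1.0, 1.0]), -0.8),  # a diagonal split
]

print(committee_predict(members, np.array([0.9, 0.1])))  # two of three vote 1
```

Because the members disagree on parts of the input space, the voted decision region is more complex than any single member's linear boundary, which is the effect the paper exploits.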





  [1] Hertz J., Krogh A., Palmer R.: Introduction to the Theory of Neural Computation, Santa Fe Institute, Addison-Wesley, 1991.
  [2] Wolpert D.: Stacked Generalization, Neural Networks, vol. 5, p. 241, 1992.
  [3] LeBlanc M., Tibshirani R.: Combining Estimates in Regression and Classification, Univ. Toronto Statistics Dept., Technical Report, 1993.
  [4] Battiti R., Colla A.: Democracy in Neural Nets: Voting Schemes for Classification, Neural Networks, vol. 7, pp. 691–707, 1994.
  [5] Rumelhart D., Hinton G., Williams R.: Learning Representations by Back-Propagating Errors, Nature, vol. 323, pp. 533–536, 1986.
  [6] McLean D., Bandar Z., O'Shea J.: Improved Interpolation and Extrapolation from Continuous Training Examples Using a New Neuronal Model with an Adaptive Steepness, 2nd Australian and New Zealand Conference on Intelligent Information Systems, IEEE, pp. 125–129, 1994.
  [7] McLean D., Bandar Z., O'Shea J.: An Empirical Comparison of Back Propagation and the RDSE Algorithm on Continuously Valued Real World Data, Neural Networks, vol. 11, pp. 1685–1694, 1998.
  [8] Martin G., Pittman J.: Recognizing Hand-Printed Letters and Digits, Advances in Neural Information Processing Systems II, pp. 405–414, 1990.
  [9] Tesauro G., Sejnowski T.J.: A Parallel Network that Learns to Play Backgammon, Artificial Intelligence, vol. 39, pp. 357–390, 1988.
  [10] Morgan N., Bourlard H.: Generalization and Parameter Estimation in Feed Forward Nets: Some Experiments, Advances in Neural Information Processing Systems II, pp. 630–637, 1990.
  [11] McLean D., Bandar Z., O'Shea J.: A Constructive Decision Boundary Modelling Algorithm, IASTED '98, Mexico, 1998.
  [12] McLean D., Bandar Z., O'Shea J.: The Evolution of a Feed Forward Neural Network Trained under Back Propagation, ICANNGA '97, Springer-Verlag, 1997.
  [13] Michie D., Spiegelhalter D.J., Taylor C.C.: Machine Learning, Neural and Statistical Classification, Ellis Horwood Series in Artificial Intelligence, Ellis Horwood, 1994.

Copyright information

© Springer-Verlag Wien 1999

Authors and Affiliations

  • David McLean (1)
  • Zuhair Bandar (1)
  • Jim O'Shea (1)

  1. The Intelligent Systems Group, The Manchester Metropolitan University, Manchester, UK
