A Modular Neural Network Architecture with Additional Generalization Abilities for Large Input Vectors

  • A. Schmidt
  • Z. Bandar
Conference paper


This paper proposes a two-layer modular neural network. The basic building blocks of the architecture are multilayer perceptrons trained with the backpropagation algorithm. Because of the modular architecture, the network has fewer weight connections than a fully connected multilayer perceptron. The modular network is designed to combine two different approaches to generalization, known from connectionist and from logical neural networks, which enhances the generalization abilities of the network. The architecture introduced here is particularly useful for problems with a large number of input attributes.
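
The abstract does not give the exact module topology, so the following is a minimal sketch of the structural idea only: a large input vector is split across several small input-module MLPs, whose outputs are concatenated and fed to a decision-module MLP. All names and sizes here (ModularNet, the 1024-input example, the module counts) are hypothetical, and the forward pass does not implement the logical-neural-network side of the paper's combined generalization scheme.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class MLP:
    """A single-hidden-layer perceptron (forward pass only, for illustration)."""
    def __init__(self, n_in, n_hidden, n_out, rng):
        self.w1 = rng.standard_normal((n_in, n_hidden)) * 0.1
        self.b1 = np.zeros(n_hidden)
        self.w2 = rng.standard_normal((n_hidden, n_out)) * 0.1
        self.b2 = np.zeros(n_out)

    def forward(self, x):
        return sigmoid(sigmoid(x @ self.w1 + self.b1) @ self.w2 + self.b2)

    def n_weights(self):
        return self.w1.size + self.b1.size + self.w2.size + self.b2.size

class ModularNet:
    """Hypothetical two-layer modular network: each input module sees one
    slice of the input vector; a decision module combines their outputs."""
    def __init__(self, n_in, n_modules, hidden, module_out, n_out, rng):
        assert n_in % n_modules == 0, "input length must divide evenly"
        self.input_modules = [MLP(n_in // n_modules, hidden, module_out, rng)
                              for _ in range(n_modules)]
        self.decision = MLP(n_modules * module_out, hidden, n_out, rng)

    def forward(self, x):
        # Split the large input vector across the input modules.
        parts = np.split(x, len(self.input_modules))
        feats = np.concatenate([m.forward(p)
                                for m, p in zip(self.input_modules, parts)])
        return self.decision.forward(feats)

    def n_weights(self):
        return (sum(m.n_weights() for m in self.input_modules)
                + self.decision.n_weights())

rng = np.random.default_rng(0)
n_in = 1024  # a "large input vector" (illustrative size)
modular = ModularNet(n_in, n_modules=16, hidden=8, module_out=4, n_out=2, rng=rng)
monolith = MLP(n_in, n_hidden=128, n_out=2, rng=rng)
x = rng.standard_normal(n_in)
print(modular.forward(x))
print("modular weights:   ", modular.n_weights())
print("monolithic weights:", monolith.n_weights())
```

With these illustrative sizes the modular network has roughly 9,400 weights against roughly 131,000 for a comparably sized fully connected MLP, which is the kind of reduction in weight connections the abstract claims.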


Keywords: Multilayer Perceptron, Generalization Ability, Decision Module, Input Module, Modular Architecture





Copyright information

© Springer-Verlag Wien 1998

Authors and Affiliations

  • A. Schmidt (1)
  • Z. Bandar (1)

  1. The Intelligent Systems Group, Department of Computing, The Manchester Metropolitan University, UK
