A Modular Neural Network Architecture with Additional Generalization Abilities for Large Input Vectors
This paper proposes a two-layer modular neural system. The basic building blocks of the architecture are multilayer perceptrons trained with the backpropagation algorithm. Because of the modular architecture, the number of weight connections is smaller than in a fully connected multilayer perceptron. The modular network is designed to combine two different approaches to generalization, known from connectionist and logical neural networks, which enhances the generalization abilities of the network. The architecture introduced here is especially useful for problems with a large number of input attributes.
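The weight-saving effect of the two-layer modular design can be illustrated with a minimal sketch: several small input-module MLPs each see only a disjoint subset of the input attributes, and a decision-module MLP combines their outputs. All dimensions, the module counts, and the sigmoid activation below are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_params(sizes):
    """Random weights and zero biases for an MLP with the given layer sizes."""
    return [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def mlp_forward(params, x):
    """Forward pass with sigmoid activations on every layer."""
    for W, b in params:
        x = 1.0 / (1.0 + np.exp(-(x @ W + b)))
    return x

# Hypothetical sizes: 100 input attributes split across 10 input modules.
n_inputs, n_modules, hidden, module_out, n_classes = 100, 10, 8, 4, 3
chunk = n_inputs // n_modules

# First layer: one small MLP per input module, each seeing only its chunk.
input_modules = [mlp_params([chunk, hidden, module_out]) for _ in range(n_modules)]
# Second layer: a decision module combining all module outputs.
decision_module = mlp_params([n_modules * module_out, hidden, n_classes])

def modular_forward(x):
    parts = [mlp_forward(m, x[i * chunk:(i + 1) * chunk])
             for i, m in enumerate(input_modules)]
    return mlp_forward(decision_module, np.concatenate(parts))

y = modular_forward(rng.standard_normal(n_inputs))

def count_weights(params):
    return sum(W.size + b.size for W, b in params)

# Modular network vs. a fully connected MLP with the same total hidden width.
modular = (sum(count_weights(m) for m in input_modules)
           + count_weights(decision_module))
fully_connected = count_weights(mlp_params([n_inputs, n_modules * hidden, n_classes]))
```

Because each input module is connected only to its own attribute subset, the modular weight count grows roughly linearly with the input size, whereas the fully connected first layer grows with the product of input size and hidden width.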
Keywords: Multilayer Perceptron · Generalization Ability · Decision Module · Input Module · Modular Architecture