Are Multilayer Perceptrons Adequate for Pattern Recognition and Verification?

  • Marco Gori
  • Franco Scarselli
Conference paper
Part of the Perspectives in Neural Computing book series (PERSPECT.NEURAL)

Abstract

This paper discusses the ability of multilayer perceptrons (MLPs) to model the probability distributions of the inputs in typical pattern recognition problems. It is shown that multilayer perceptrons may be unable to model patterns distributed in typical clusters, since in most practical cases these networks draw open separation surfaces in the pattern space. Unlike multilayer perceptrons, autoassociators and radial basis function (RBF) networks create closed separation surfaces. This makes them more suitable especially for pattern verification, but also for dealing with large pattern recognition problems with many classes, where modular structures are typically used.
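To make the abstract's geometric argument concrete, the following is a minimal illustrative sketch, not taken from the paper, contrasting the response of a single sigmoidal MLP unit, whose level sets are open half-spaces, with that of a Gaussian RBF unit, whose level sets are closed hyperspheres. The function names and all parameter values (weights, bias, centre, width, probe points) are assumptions chosen purely for illustration.

```python
# Illustrative sketch (not from the paper): open vs. closed decision regions.
import numpy as np

def sigmoid_unit(x, w=np.array([1.0, 1.0]), b=-1.0):
    """Sigmoidal neuron: its level sets are half-spaces (open surfaces)."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def rbf_unit(x, center=np.array([0.0, 0.0]), sigma=1.0):
    """Gaussian RBF neuron: its level sets are hyperspheres (closed surfaces)."""
    return np.exp(-np.sum((x - center) ** 2, axis=-1) / (2.0 * sigma ** 2))

# Probe points far from a hypothetical training cluster around the origin.
far_points = np.array([[10.0, 10.0], [-10.0, 5.0], [8.0, -9.0]])

# The sigmoidal unit still fires strongly on some distant points, so patterns
# far outside the cluster can be accepted (open separation surface).
print(sigmoid_unit(far_points))   # approx. [1.00, 0.0025, 0.12]

# The RBF unit's response vanishes away from its centre, so distant patterns
# are rejected (closed separation surface), the property argued to matter
# for verification tasks.
print(rbf_unit(far_points))       # all approx. 0.0
```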

Keywords

Radial basis function, Hidden neuron, Radial basis function network, Multilayer perceptron, Hidden unit

Copyright information

© Springer-Verlag London Limited 1997

Authors and Affiliations

  • Marco Gori, Facoltà di Ingegneria, Università di Siena, Siena, Italy
  • Franco Scarselli, Dipartimento di Sistemi e Informatica, Università di Firenze, Firenze, Italy
