Neural Nets

  • James J. Buckley
  • Thomas Feuring
Part of the Studies in Fuzziness and Soft Computing book series (STUDFUZZ, volume 25)


There are many different types of neural networks, but in this book we will only be concerned with what are called layered, feedforward neural nets. A simple 2 - 4 - 2 layered, feedforward neural net is shown in Figure 3.1. The notation “2 - 4 - 2” means two input neurons, four neurons in the second layer (also called the hidden layer), and two neurons in the output layer. We will usually use three layers throughout the book, although more than three layers appear in Chapters 5, 9 and 10; for example, a 2 - m - 2 - 1 layered, feedforward neural net has two input neurons, m neurons in the second layer, two neurons in the third layer, and one output neuron. Good general references to neural nets are [12] and [14]. Also, let us abbreviate “layered, feedforward neural network” as simply “neural net”.


Keywords: Output Layer, Training Algorithm, Output Neuron, Input Neuron, Forward Pass



References Chapter 3

  1. E.K. Blum and L.K. Li: Approximation Theory and Feedforward Networks, Neural Networks 4 (1991), pp. 511–515.
  2. G. Cybenko: Approximation by Superpositions of a Sigmoidal Function, Mathematics of Control, Signals, and Systems 2 (1989), pp. 303–314.
  3. S. Haykin: Neural Networks: A Comprehensive Foundation, Macmillan, New York, 1994.
  4. K. Hornik: Approximation Capabilities of Multilayer Feedforward Networks, Neural Networks 4 (1991), pp. 251–257.
  5. Selected freeware packages for training layered, feedforward neural nets: Aspirin/Migraines, Matrix-Backpropagation Software, SNNS (http://www.informatik.uni-stuttgart.de/ipvr/bv/projekte/snns/snns.html).
  6. Selected commercial products include: BrainMaker, Matlab NN Toolbox, NeuralWorks, NeuroShell.
  7. S.C. Huang and Y.F. Huang: Bounds on the Number of Hidden Neurons in Multilayer Perceptrons, IEEE Trans. on Neural Networks 2 (1991), pp. 47–55.
  8. IEEE Transactions on Neural Networks, IEEE Neural Networks Council, IEEE Press.
  9. Y. Ito: Approximation of Continuous Functions on R^d by Linear Combinations of Shifted Rotations of a Sigmoidal Function With and Without Scaling, Neural Networks 5 (1992), pp. 105–115.
  10. V.Y. Kreinovich: Arbitrary Nonlinearity is Sufficient to Represent All Functions by Neural Networks: A Theorem, Neural Networks 4 (1991), pp. 381–383.
  11. M. Leshno, V.Ya. Lin, A. Pinkus and S. Schocken: Multilayer Feedforward Networks with a Nonpolynomial Activation Function Can Approximate Any Function, Neural Networks 6 (1993), pp. 861–867.
  12. R.P. Lippmann: An Introduction to Computing with Neural Nets, IEEE ASSP Magazine, 1987, pp. 4–22.
  13. Neural Networks, International Neural Network Society, Pergamon Press, Elsevier.
  14. D.E. Rumelhart, J.L. McClelland and the PDP Research Group: Parallel Distributed Processing, Vol. 1, MIT Press, Cambridge, MA, 1986.
  15. D.E. Rumelhart, G.E. Hinton and R.J. Williams: Learning Internal Representations by Error Propagation, in: D.E. Rumelhart, J.L. McClelland (eds.), Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 1, MIT Press, Cambridge, MA, pp. 318–362, 1986.
  16. M.A. Sartori and P.J. Antsaklis: A Simple Method to Derive Bounds on the Size and to Train Multilayer Neural Networks, IEEE Trans. on Neural Networks 2 (1991), pp. 467–471.
  17. H. White: Artificial Neural Networks: Approximation and Learning Theory, Blackwell, Cambridge, MA, 1992.

Copyright information

© Springer-Verlag Berlin Heidelberg 1998

Authors and Affiliations

  • James J. Buckley (1)
  • Thomas Feuring (2)
  1. Mathematics Department, University of Alabama at Birmingham, Birmingham, USA
  2. Department of Electrical and Computer Science, University of Siegen, Siegen, Germany
