There are many different types of neural networks, but in this book we will be concerned only with what are called layered, feedforward neural nets. A simple 2-4-2 layered, feedforward neural net is shown in Figure 3.1. The notation "2-4-2" means two input neurons, four neurons in the second layer (also called the hidden layer), and two neurons in the output layer. We will usually use three layers throughout the book. A 2-m-2-1 layered, feedforward neural net means two input neurons, m neurons in the second layer, two neurons in the third layer, and one output neuron. We will be using more than three layers in Chapters 5, 9, and 10. Good general references to neural nets are  and . Also, let us abbreviate "layered, feedforward neural network" as simply "neural net".
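To make the notation concrete, the forward pass of a layered, feedforward net such as the 2-4-2 example can be sketched in a few lines of Python. This is a minimal sketch, not code from the book: the sigmoid activation and the randomly initialized weights are assumptions chosen only to illustrate how activations flow from the two input neurons, through the four hidden neurons, to the two output neurons.

```python
import math
import random

def sigmoid(x):
    # Logistic activation; a common choice for layered feedforward
    # nets, assumed here for illustration.
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, weights, biases):
    # One forward pass through a layered, feedforward net.
    # weights[k][j][i] connects neuron i of layer k to neuron j of
    # layer k+1; each non-input layer also has a bias per neuron.
    activations = inputs
    for W, b in zip(weights, biases):
        activations = [
            sigmoid(sum(w * a for w, a in zip(row, activations)) + bj)
            for row, bj in zip(W, b)
        ]
    return activations

# A 2-4-2 net: a 4x2 weight matrix into the hidden layer and a 2x4
# matrix into the output layer. The values are arbitrary placeholders,
# not trained weights.
random.seed(0)
w_hidden = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(4)]
w_output = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(2)]
b_hidden = [0.0] * 4
b_output = [0.0] * 2

out = forward([0.5, -0.3], [w_hidden, w_output], [b_hidden, b_output])
print(out)  # two output activations, each between 0 and 1
```

Training (e.g., by backpropagation, discussed later in the book) would adjust the weights and biases; the forward pass itself is unchanged regardless of how the net was trained.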
References (Chapter 3)
- 5. Selected freeware packages for training layered feedforward neural nets: Aspirin/Migraines, http://www.psc.edu/general/software/packages/aspirin/aspirin.html; Matrix-Backpropagation, ftp://www.risc6000.dibe.unige.it/pub/; PDP++ Software, http://www.cnbc.cmu.edu/PDP++/PDP++.html; SNNS, http://www.informatik.uni-stuttgart.de/ipvr/bv/projekte/snns/snns.html
- 8. IEEE Transactions on Neural Networks, IEEE Neural Networks Council, IEEE Press.
- 12. R.P. Lippmann: An Introduction to Computing with Neural Nets, IEEE ASSP Magazine, 1987, pp. 4–22.
- 13. Neural Networks, International Neural Network Society, Pergamon Press, Elsevier.
- 14. D.E. Rumelhart, J.L. McClelland and the PDP Research Group: Parallel Distributed Processing, Vol. 1, MIT Press, Cambridge, MA, 1986.
- 15. D.E. Rumelhart, G.E. Hinton and R.J. Williams: Learning Internal Representations by Error Propagation, in: D.E. Rumelhart, J.L. McClelland (eds.), Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 1, MIT Press, Cambridge, MA, pp. 318–362, 1986.
- 17. H. White: Artificial Neural Networks: Approximation and Learning Theory, Blackwell, Cambridge, MA, 1992.