This chapter gives an overview of established ideas, algorithms, and methods of neural computing, and contains the basic definitions necessary for further reading. Beginning with the model that most closely resembles the brain's structure, the single neuron, some simple examples of binary and analog network models are presented, both in their structure and in their operating principles. These examples have been chosen because they are well suited to optical implementation; all subsequent examples of optical neural network architectures will therefore refer back to the basic explanations in the present chapter. Readers who are already familiar with the principal functions of the various network structures may find some of the explanations and algorithms well known; they may skim this chapter to refresh the foundations of the subject, or even skip it on a first reading.
Keywords: Energy Function, Weight Vector, Output Node, Spin Glass, Input Pattern
- 1. D.O. Hebb, The Organization of Behavior, J. Wiley & Sons (1949). The original ideas of Hebb concerning learning by reinforcement, with special emphasis on learning in simple neurons and the one-layer perceptron.
- 3. M.L. Minsky, S.A. Papert, Perceptrons: An Introduction to Computational Geometry, MIT Press (1969), and Perceptrons, expanded edition, MIT Press (1988). Concepts and criticism of single-layer perceptrons, with many mathematical derivations and proofs.
- 4. P. Kanerva, Sparse Distributed Memory, MIT Press, Bradford Books, Cambridge (Mass.) (1986). An overview of models that are derived from the backpropagation paradigm.
- 5. J.S. Albus, Brains, Behavior and Robotics, McGraw-Hill, New York (1981). A summary of models that originate from the multilayer perceptron idea.
- 6. T. Kohonen, Self-Organization and Associative Memory, Springer-Verlag, Berlin (1984). An introduction to concepts of associative memory and a discussion of self-organization principles.
- 8. H. Ritter, T. Martinetz, and K. Schulten, Neural Computation and Self-Organizing Maps, Addison-Wesley, Reading, Massachusetts (1992). Self-organization is described for neural computing, including the formation of feature maps and applications to computational tasks.
- 10. G.E. Hinton and T.J. Sejnowski, Learning and relearning in Boltzmann machines, in J.L. McClelland and D.E. Rumelhart (eds.), Parallel Distributed Processing, Vol. 1, MIT Press (1986), chapter 7. This chapter of the famous Parallel Distributed Processing book gives an excellent overview of the principles of Boltzmann machines.