Product Units with Trainable Exponents and Multi-Layer Networks
This chapter reviews and examines a variant type of computational unit which we have recently proposed for use in multi-layer neural networks. Instead of the output of this unit depending on a weighted sum of the inputs, it depends on a weighted product. In justifying the introduction of a new type of unit, we explore at some length the rationale behind the use of multi-layer neural networks and the properties of the computational units within them. At the end of the chapter we discuss a biological model for a single complex nerve cell with active dendritic membrane that uses the product units.
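The contrast described above can be stated concretely: where a conventional summing unit computes a weighted sum of its inputs, a product unit raises each input to a trainable exponent and multiplies the results. A minimal sketch, assuming real-valued inputs (positive where fractional exponents are used); the function names are illustrative, not from the chapter:

```python
def summing_unit(inputs, weights):
    # Conventional unit: weighted sum of the inputs.
    return sum(x * w for x, w in zip(inputs, weights))

def product_unit(inputs, exponents):
    # Product unit: each input raised to its trainable
    # exponent, and the powered terms multiplied together.
    result = 1.0
    for x, w in zip(inputs, exponents):
        result *= x ** w
    return result

# With all exponents equal to 1, the unit simply multiplies its inputs.
print(product_unit([2.0, 3.0], [1.0, 1.0]))   # → 6.0
# A fractional exponent computes a root of that input.
print(product_unit([4.0], [0.5]))             # → 2.0
```

Because the exponents are trainable parameters, a single product unit can learn multiplicative interactions (ratios, powers, polynomial terms) that a summing unit can only approximate with many hidden units.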
Keywords: NMDA, Sine, Summing