Product Units with Trainable Exponents and Multi-Layer Networks

  • Richard Durbin
  • David E. Rumelhart
Part of the NATO ASI Series (volume 68)

Abstract

This chapter reviews and examines a variant type of computational unit that we have recently proposed for use in multi-layer neural networks [3]. Instead of the output of this unit depending on a weighted sum of the inputs, it depends on a weighted product. In justifying the introduction of a new type of unit, we explore at some length the rationale behind the use of multi-layer neural networks and the properties of the computational units within them. At the end of the chapter we discuss a biological model for a single complex nerve cell with active dendritic membrane that uses the product units.
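As a rough illustration of the distinction drawn above, the sketch below contrasts a conventional summing unit with a product unit whose output is a product of its inputs raised to trainable exponents, as described in [3]. This is not code from the chapter; the function names and the restriction to strictly positive inputs are assumptions made here for simplicity of illustration.

    import numpy as np

    def product_unit(x, w):
        # Product unit: y = prod_i x_i ** w_i, computed as exp(sum_i w_i * log x_i).
        # Assumes strictly positive inputs; handling of signs is omitted here.
        x = np.asarray(x, dtype=float)
        w = np.asarray(w, dtype=float)
        return np.exp(np.sum(w * np.log(x)))

    def sum_unit(x, w, b=0.0):
        # Conventional summing unit for comparison: weighted sum plus bias.
        return np.dot(w, x) + b

    # With integer exponents the product unit computes a monomial of its inputs.
    x = np.array([2.0, 3.0])
    print(product_unit(x, np.array([1.0, 2.0])))  # 2 * 3**2 = 18
    print(sum_unit(x, np.array([1.0, 2.0])))      # 2 + 2*3 = 8

Because the exponents w enter through a log-linear form, they can be trained by gradient descent in the same way as the weights of a summing unit.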

Keywords

NMDA, Sine, Summing

References

  [1] Collingridge, G.L. and Bliss, T.V.P.: NMDA receptors: their role in long-term potentiation. Trends Neurosci. 10, 288–293 (1987)
  [2] Cover, T.: Geometrical and statistical properties of systems of linear inequalities with applications in pattern recognition. IEEE Trans. Elect. Comp. 14, 326–334 (1965)
  [3] Durbin, R. and Rumelhart, D.E.: Product units: a computationally powerful and biologically plausible extension to back propagation networks. Neural Computation 1, 133–142 (1989)
  [4] Hanson, S.J. and Burr, D.J.: Knowledge representation in connectionist networks. Technical Report, Bell Communications Research, Morristown, New Jersey (1987)
  [5] Maxwell, T., Giles, C.L. and Lee, Y.C.: Generalization in neural networks: the contiguity problem. In: Proceedings IEEE First International Conference on Neural Networks, Vol. 2, 41–45 (1987)
  [6] Mitchison, G.J. and Durbin, R.M.: Bounds on the learning capacity of some multilayer networks. Biological Cybernetics 60, 345–356 (1989)
  [7] Poggio, T. and Reichardt, W.: On the representation of multi-input systems: computational properties of polynomial algorithms. Biological Cybernetics 37, 167–186 (1980)
  [8] Rabiner, L.R. and Juang, B.H.: An introduction to hidden Markov models. IEEE ASSP Magazine 3:1, 4–16 (1986)
  [9] Rosenblatt, F.: Principles of Neurodynamics. Spartan, New York (1962)
  [10] Rumelhart, D.E., Hinton, G.E. and McClelland, J.L.: A general framework for parallel distributed processing. In: Parallel Distributed Processing, Vol. 1, pp. 45–76. MIT Press, Cambridge, Mass. and London (1986)
  [11] Rumelhart, D.E., Hinton, G.E. and Williams, R.J.: Learning internal representations by error propagation. In: Parallel Distributed Processing, Vol. 1, pp. 318–362. MIT Press, Cambridge, Mass. and London (1986)
  [12] Sorenson, H.W. (ed.): Kalman Filtering: Theory and Application. IEEE Press, New York (1985)

Copyright information

© Springer-Verlag Berlin Heidelberg 1990

Authors and Affiliations

  • Richard Durbin (1)
  • David E. Rumelhart (1)
  1. Department of Psychology, Stanford University, Stanford, USA
