
Three Conjectures on Neural Network Implementations of Volterra Models (Mappings)

Chapter in: Advanced Methods of Physiological System Modeling

Abstract

Three conjectures are presented that form the methodological link between Volterra models (mappings) and a popular class of artificial neural networks (multi-layer perceptrons). The first conjecture establishes the equivalence between these two types of nonlinear mapping and shows how a network implementation of a Volterra model can be achieved by employing proper linear input transformations and polynomial activation functions in the hidden units, while the output unit(s) may be simple adder(s). The second conjecture outlines the trade-offs between general polynomial activation functions and the fixed sigmoidal activation functions traditionally used in multi-layer perceptrons. The former are more flexible in defining nonlinear mappings, while the latter, being far more restrictive, lead to increased numbers of hidden units and a heavier computational burden during training via back-propagation. In general, an infinite number of sigmoidal hidden units is required to represent exactly a Volterra model (mapping) or a network with a finite number of polynomial hidden units. The third conjecture extends the results from continuous-output models to binary- (or spike-) output models/mappings, often encountered in neural networks. These conjectures collectively point to the potential versatility and efficiency of a class of networks that utilize polynomial activation functions in the hidden units and linear output unit(s) with fixed weights. Practical procedures for the optimal use of these networks are currently being developed.
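The construction behind the first conjecture can be illustrated with a small numerical sketch: a discrete second-order Volterra model whose kernels are built from two basis vectors is reproduced exactly by a network with two hidden units, each applying a quadratic polynomial activation to a linear projection of the input, feeding a fixed-weight adder. The basis vectors, kernel coefficients, and memory length below are hypothetical choices for illustration only; general kernels would first require an eigen-decomposition into such modes.

```python
import numpy as np

rng = np.random.default_rng(0)
M = 8                                          # memory length (assumed)

# Second-order Volterra kernels built from two modes b1, b2
# (hypothetical vectors, chosen only to make the equivalence exact).
b1, b2 = rng.normal(size=M), rng.normal(size=M)
k0 = 0.5
k1 = 2.0 * b1 + 1.0 * b2                       # first-order kernel
k2 = 3.0 * np.outer(b1, b1) - 1.5 * np.outer(b2, b2)  # second-order kernel

def volterra(x):
    """Direct evaluation of the second-order Volterra functional."""
    return k0 + k1 @ x + x @ k2 @ x

def poly_network(x):
    """Equivalent network: linear input transformations (b1, b2),
    quadratic polynomial activations, and a fixed-weight adder output."""
    u1, u2 = b1 @ x, b2 @ x                    # hidden-unit inputs
    z1 = 2.0 * u1 + 3.0 * u1 ** 2              # polynomial activation, unit 1
    z2 = 1.0 * u2 - 1.5 * u2 ** 2              # polynomial activation, unit 2
    return k0 + z1 + z2                        # simple adder output

x = rng.normal(size=M)
print(abs(volterra(x) - poly_network(x)))      # agrees up to round-off
```

Note that the output unit contributes no trainable weights: all of the model's nonlinearity is carried by the polynomial activations, which is the efficiency the abstract attributes to this class of networks.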





Copyright information

© 1994 Springer Science+Business Media New York

About this chapter

Cite this chapter

Marmarelis, V.Z. (1994). Three Conjectures on Neural Network Implementations of Volterra Models (Mappings). In: Marmarelis, V.Z. (ed.) Advanced Methods of Physiological System Modeling. Springer, Boston, MA. https://doi.org/10.1007/978-1-4757-9024-5_15


  • DOI: https://doi.org/10.1007/978-1-4757-9024-5_15

  • Publisher Name: Springer, Boston, MA

  • Print ISBN: 978-1-4757-9026-9

  • Online ISBN: 978-1-4757-9024-5

  • eBook Packages: Springer Book Archive
