
ICANN ’93, pp. 200–203

A Computer Simulation Model of Backwards Feedback Across Synapse Via Arachidonic Acid

  • R. Lahoz-Beltra
  • A. Murciano
  • J. Zamora
  • F. Vico
  • J. M. Jerez
  • S. R. Hameroff
  • J. E. Dayhoff
Conference paper

Abstract

Algorithms for artificial neural networks (ANNs) are usually developed under the assumption that information propagates both forward and backward across the network. Forward propagation is easily modeled in ANNs and is biologically plausible in real neurons; for backward propagation, however, the biological plausibility of the algorithms developed for ANNs seems remote. Based on the presynaptic changes induced by arachidonic acid released from postsynaptic neurons during long-term potentiation (LTP) in the dentate gyrus, we present a computer simulation model in which backward feedback is carried across local synapses by arachidonic acid. Our simulation shows how arachidonic acid could play the role of a retrograde messenger during LTP.
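The mechanism described above can be illustrated with a minimal sketch (not the authors' model): a single synapse in which postsynaptic activity releases a retrograde messenger (arachidonic acid) that diffuses back across the cleft and gates the presynaptic weight change. All parameter names (`release_gain`, `decay`, `learning_rate`) are hypothetical choices for illustration.

```python
def simulate_ltp(pre_spikes, release_gain=0.5, decay=0.8,
                 learning_rate=0.1, w0=0.2):
    """Return the synaptic-weight trace for a train of presynaptic spikes (0/1)."""
    w = w0
    aa = 0.0        # retrograde messenger (arachidonic acid) level in the cleft
    trace = []
    for pre in pre_spikes:
        post = w * pre                         # postsynaptic response
        aa = decay * aa + release_gain * post  # AA released by postsynaptic neuron
        w += learning_rate * pre * aa          # presynaptic change gated by AA
        trace.append(w)
    return trace

trace = simulate_ltp([1, 1, 1, 1, 0, 0])
```

Under this toy dynamics the weight potentiates while pre- and postsynaptic activity coincide (AA is released and presynaptic activity is present), and stops changing once presynaptic input ceases, mimicking the activity-dependent, retrograde character of LTP induction.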

Keywords

Arachidonic Acid, Artificial Neural Network, Synaptic Cleft, Postsynaptic Neuron, Phospholipase Activity



Copyright information

© Springer-Verlag London Limited 1993

Authors and Affiliations

  • R. Lahoz-Beltra (1)
  • A. Murciano (1)
  • J. Zamora (1)
  • F. Vico (2)
  • J. M. Jerez (2)
  • S. R. Hameroff (3)
  • J. E. Dayhoff (4)
  1. Departamento de Matematica Aplicada, Facultad de Biologia, Universidad Complutense, Madrid, Spain
  2. Departamento de Tecnologia Electronica, E.T.S.I. Telecomunicacion, Universidad de Malaga, Malaga, Spain
  3. Advanced Biotechnology Laboratory, Department of Anesthesiology, University of Arizona Health Sciences Center, Tucson, USA
  4. Systems Research Center, University of Maryland, College Park, USA
