
How the Brain Adjusts Synapses—Maybe

  • Hans J. Bremermann
  • Russell W. Anderson
Part of the Automated Reasoning Series (ARSE), volume 1

Abstract

The notion that the synapse is the site of lasting change in memory and learning has had wide acceptance for decades. Hebb [46] postulated that when one neuron repeatedly excites another, the synaptic knobs are strengthened. Verification has taken time, but there is now ample evidence that Hebbian-type long-term potentiation (with some modifications of the original hypothesis) does indeed occur [61].
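
Concretely, Hebb's postulate is commonly formalized as a weight change proportional to the product of presynaptic and postsynaptic activity. The Python sketch below illustrates only that textbook rule; it is not the authors' model, and the learning rate and the toy activity vectors are assumptions chosen for the example.

```python
import numpy as np

def hebbian_update(w, pre, post, eta=0.01):
    """One Hebbian step: strengthen w[i, j] whenever presynaptic unit j
    and postsynaptic unit i are active together (delta_w = eta * outer(post, pre))."""
    return w + eta * np.outer(post, pre)

# Toy example: 3 presynaptic and 2 postsynaptic units (illustrative values only).
rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=(2, 3))   # initial synaptic weights
pre = np.array([1.0, 0.0, 1.0])          # presynaptic firing pattern
post = np.array([1.0, 0.5])              # postsynaptic firing pattern

w = hebbian_update(w, pre, post)
print(w)  # weights between co-active units have grown; the silent input's column is unchanged
```

Note that this pure form can only strengthen weights; practical and biological variants add decay or depression terms (cf. [88]) so that synapses can also weaken.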

Keywords

Weight Space, Synaptic Weight, Lateral Geniculate Nucleus, Hidden Unit, Output Unit


References

  [1] David H. Ackley, Geoffrey E. Hinton, and Terrence J. Sejnowski (1985): A Learning Algorithm for Boltzmann Machines. Cognitive Science 9, 147–169.
  [2] David H. Ackley (1987): A Connectionist Machine for Hillclimbing. Boston: Kluwer Academic Publishers.
  [3] L. B. Almeida (1987): A Learning Rule for Asynchronous Perceptrons with Feedback in a Combinatorial Environment. Proc. IEEE First Int’l. Conf. Neural Networks.
  [4] Wolfgang Alt (1980): Biased Random Walk Models for Chemotaxis and Related Diffusion Approximations. J. Mathem. Biology 9, 147–177.
  [5] Russell W. Anderson and V. Vemuri (1990): Neural Networks Can Be Used For Open-Loop, Dynamic Control. To appear in: International Journal of Neural Networks: Research and Applications.
  [6] Russell W. Anderson (1991): Ph.D. dissertation in progress, U. C. Berkeley.
  [7] Chiye Aoki and Philip Siekevitz (1988): Plasticity in Brain Development. Scientific American 259(6), December, 56–64.
  [8] Scott Austin (1990): Genetic Solutions To XOR Problems. AI Expert, December, 52–57.
  [9] Bill Baird (1990): Ph.D. Thesis, U. C. Berkeley.
  [10] Dana H. Ballard (1987): Modular Learning in Neural Networks. AAAI National Conference on Artificial Intelligence, 279–284.
  [11] Andrew G. Barto, Richard S. Sutton, and Peter S. Brouwer (1981): Associative Search Network: A Reinforcement Learning Associative Memory. J. Biological Cybernetics 40, 201–211.
  [12] Andrew G. Barto and Richard S. Sutton (1983): Neuronlike Adaptive Elements That Can Solve Difficult Learning Control Problems. IEEE Transactions on Systems, Man, and Cybernetics SMC-13(5), 835–846.
  [13] Jacob D. Bekenstein and Marcello Schiffer (1990): Quantum Limitations on the Storage and Transmission of Information. International Journal of Modern Physics C (in press).
  [14] Howard Berg (1975): How Bacteria Swim. Scientific American 233(2), 36–44.
  [15] Howard Berg (1983): Random Walks in Biology. Princeton: Princeton University Press.
  [16] W. W. Bledsoe (1961): The Use of Biological Concepts in the Analytical Study of Systems. Technical Report, Panoramic Research Inc., Palo Alto, CA.
  [17] W. W. Bledsoe (1961): Lethally Dependent Genes Using Instant Selection. Technical Report, Panoramic Research Inc., Palo Alto, CA.
  [18] W. W. Bledsoe (1961): A Quantum-Theoretical Limitation of the Speed of Digital Computers. IRE Trans. Elec. Comp. EC-10(3).
  [19] T. Boseniuk, W. Ebeling, and A. Engel (1987): Boltzmann and Darwin Strategies in Complex Optimization. Physics Letters A 125, 307–310.
  [20] Hans J. Bremermann (1958): The Evolution of Intelligence. ONR Technical Report No. 1, Contract Nonr 477(17), University of Washington, Seattle.
  [21] Hans J. Bremermann (1962): Optimization Through Evolution and Recombination. In: Yovits, Jacobi, Goldstein, eds.: Self-Organizing Systems. Washington, D. C.: Spartan Books.
  [22] Hans J. Bremermann and M. Rogson (1964): An Evolution-Type Search Method for Convex Sets. Technical Report, Contracts Nonr 222(85) and 3656(08), Berkeley, CA.
  [23] Hans J. Bremermann, M. Rogson, and S. Salaff (1966): Global Properties of Evolution Processes. In: H. H. Pattee, E. A. Edelsack, Louis Fein, and A. B. Callahan, eds.: Natural Automata and Useful Simulations. Washington, D. C.: Spartan Books. 3–41.
  [24] Hans J. Bremermann (1970): A Method of Unconstrained Global Optimization. Mathematical Biosciences 9, 1–15.
  [25] Hans J. Bremermann (1974): Chemotaxis and Optimization. J. of the Franklin Institute 297, 397–404. (Special Issue: Mathematical Models of Biological Systems).
  [26] David Ceperley and Berni Alder (1986): Quantum Monte Carlo. Science 231, 555–560, 7 Feb.
  [27] Michael Conrad (1983): Adaptability. Chapter 10, Plenum Press, N.Y.
  [28] Francis Crick (1989): The Recent Excitement about Neural Networks. Nature 337, 129–132, 12 January.
  [29] Adele Cutler (1988): Optimization Methods in Statistics. Ph.D. Thesis, Department of Statistics, University of California, Berkeley, CA.
  [30] G. Cybenko (1989): Approximation by Superpositions of a Sigmoidal Function. Math. Contr., Signal and Sys. 2, 303–14.
  [31] Farid U. Dowla, Steven R. Taylor, and Russell W. Anderson (1990): Seismic Discrimination with Artificial Neural Networks: Preliminary Results with Regional Spectral Data. Bulletin of the Seismological Society of America 80(5), 1346–1373, October.
  [32] Kenji Doya and Shuji Yoshizawa (1989): Memorizing Oscillatory Patterns in the Analog Neuron Network. Internat. Joint Conf. Neural Networks, I-27–32, Washington, D. C.
  [33] W. Ebeling, A. Engel, B. Esser, and R. Feistel (1984): Diffusion and Reaction in Random Media and Models of Evolution Processes. J. Statistical Physics 37(3/4), 369–384.
  [34] G. M. Edelman (1987): Neural Darwinism. New York: Basic Books.
  [35] M. Eigen (1988): Macromolecular Evolution: Dynamical Ordering in Sequence Space. In: D. Pines, ed.: Emerging Synthesis in Science. Redwood City, CA: Addison-Wesley. 21–42.
  [36] M. Eigen, J. McCaskill, and P. Schuster (1991): Dynamics of Darwinian Molecular Systems. J. Phys. Chem. (in press).
  [37] J. L. Elman (1988): Finding Structure in Time. Technical Report 8801, La Jolla: University of California, San Diego, Center for Research in Language.
  [38] J. A. Feldman (1981): A Connectionist Model of Visual Memory. In: G. E. Hinton and J. A. Anderson, eds.: Parallel Models of Associative Memory. Hillsdale, N. J.: Erlbaum. 49–81.
  [39] Walter Fontana, W. Schnabl, and Peter Schuster (1989): Physical Aspects of Evolutionary Optimization and Adaptation. Phys. Rev. A 40, 3301–21.
  [40] Walter J. Freeman (1991): The Physiology of Perception. Scientific American 264(2), 78–85, February.
  [41] S. Geman and D. Geman (1984): Stochastic Relaxation, Gibbs Distribution, and Bayesian Restoration of Images. IEEE Transactions on Pattern Analysis and Machine Intelligence 6, 721–741.
  [42] S. Geman and D. Geman (1988): In: James A. Anderson and Edward Rosenfeld, eds.: Neurocomputing: Foundations of Research. Cambridge, MA: MIT Press. Reprint of [41].
  [43] David L. Glanzman, Eric R. Kandel, and Samuel Schacher (1990): Target-Dependent Structural Changes Accompanying Long-Term Synaptic Facilitation in Aplysia Neurons. Science 249, 799–802, 17 August.
  [44] Stephen Grossberg (1988): In: James A. Anderson and Edward Rosenfeld, eds.: Neurocomputing: Foundations of Research. Cambridge: MIT Press. Paper number 24.
  [45] H. Haken (1988): Neural and Synergetic Computers. Berlin and Heidelberg: Springer-Verlag.
  [46] D. O. Hebb (1949): The Organization of Behavior. New York: Wiley.
  [47] G. E. Hinton and J. L. McClelland (1988): Learning Representations by Recirculation. In: D. Z. Anderson, ed.: Neural Information Processing Systems. New York: American Institute of Physics.
  [48] G. E. Hinton (1989): Connectionist Learning Procedures. Artificial Intelligence 40(1), 143–150.
  [49] John H. Holland (1975): Adaptation in Natural and Artificial Systems. Ann Arbor: University of Michigan Press.
  [50] J. J. Hopfield (1982): Neural Networks and Physical Systems with Emergent Collective Computational Abilities. PNAS (USA) 79, 2554–2558, April.
  [51] John J. Hopfield and David W. Tank (1985): “Neural” Computation of Decisions in Optimization Problems. Biological Cybernetics 52, 141–152.
  [52] John J. Hopfield and David W. Tank (1986): Computing with Neural Circuits: A Model. Science 233, 625–633, 8 Aug.
  [53] Eric M. Johansson, Farid U. Dowla, and D. M. Goodman (1990): Back-propagation Learning for Multi-Layer Feed-Forward Neural Networks Using the Conjugate Gradient Method. Submitted to IEEE Transactions on Neural Networks; Technical Report UCRL-JC-1850, Lawrence Livermore National Laboratory, September 26.
  [54] J. A. Kauer, R. C. Malenka, and R. A. Nicoll (1988): NMDA Application Potentiates Synaptic Transmission in the Hippocampus. Nature 334, 250–252, 21 July.
  [55] Stuart A. Kauffman and S. Levin (1987): Towards a General Theory of Adaptive Walks on Rugged Landscapes. J. Theoret. Biol. 128, 11–45.
  [56] Evelyn Fox Keller and Lee Segel (1970): J. of Theoretical Biology 26, 399.
  [57] Mary B. Kennedy (1988): Synaptic Memory Molecules. Nature 335, 770–772, 27 Oct.
  [58] Daniel Koshland (1980): Bacterial Chemotaxis as a Model Behavioral System. New York: Raven Press.
  [59] S. R. Lehky and Terrence J. Sejnowski (1988): Computing 3-D Curvatures from Images of Surfaces Using a Neural Model. Nature 333, 452.
  [60] S. R. Lehky and Terrence J. Sejnowski (1990): Neuronal Model of Stereoacuity and Depth Interpolation Based on a Distributed Representation of Stereo Disparity. Journal of Neuroscience 10(7), 2281–2299, July.
  [61] Gary Lynch (1986): Synapses, Circuits, and the Beginnings of Memory. Cambridge, MA: Bradford/MIT Press.
  [62] Catherine A. Macken and Alan S. Perelson (1989): Protein Evolution on Rugged Landscapes. PNAS (USA) 86, 6191–5, August.
  [63] Catherine A. Macken, Patrick S. Hagan, and Alan S. Perelson (1991): Evolutionary Walks on Rugged Landscapes. SIAM J. Appl. Math., in press.
  [64] Bartlett W. Mel (1990): Connectionist Robot Motion Planning. Boston, San Diego: Academic Press.
  [65] M. M. Merzenich, G. Recanzone, W. M. Jenkins, T. T. Allard, and R. J. Nudo (1988): Cortical Representational Plasticity. In: [78], 41–67.
  [66] M. M. Merzenich, R. J. Nelson, J. H. Kaas, M. P. Stryker, W. M. Jenkins, J. M. Zook, M. S. Cynader, and A. Schoppman (1987): Variability in Hand Surface Representations in Areas 3b and 1 in Adult Owl and Squirrel Monkeys. J. of Comparative Neurology 258(2), 281–96, April 8.
  [67] N. Metropolis and S. Ulam (1949): The Monte Carlo Method. J. Amer. Statistical Association 44(247), 335–341.
  [68] N. Metropolis, A. Rosenbluth, M. Rosenbluth, A. Teller, and E. Teller (1953): Equation of State Calculations for Fast Computing Machines. J. of Chemical Physics 21, 1087–1092.
  [69] M. Minsky (1961): Steps Toward Artificial Intelligence. Proc. IRE 49, 8–30, Jan.
  [70] M. Minsky and S. Papert (1969): Perceptrons: An Introduction to Computational Geometry. Cambridge, Mass.: M. I. T. Press.
  [71] D. Montana and L. Davis (1989): Training Feedforward Neural Networks Using Genetic Algorithms. Proc. 11th IJCAI.
  [72] Ralph Nossal (1980): Mathematical Theories of Topotaxis. Lecture Notes in Biomathematics 38, Springer-Verlag, 410–439.
  [73] A. Okubo (1980): Diffusion and Ecological Problems: Mathematical Models. Biomathematics 10.
  [74] Alan S. Perelson and Stuart A. Kauffman (1991): Molecular Evolution and Rugged Landscapes: Proteins, RNA and the Immune System, volume IX. Redwood City, CA: Addison-Wesley.
  [75] Fernando J. Pineda (1988): Generalization of Backpropagation to Recurrent and Higher Order Neural Networks. Physical Review Letters, 602–611.
  [76] T. Poggio and F. Girosi (1990): Regularization Algorithms for Learning That Are Equivalent to Multilayer Neural Networks. Science 247, 978–82, 23 February.
  [77] Ning Qian and Terrence J. Sejnowski (1988): Predicting the Secondary Structure of Globular Proteins Using Neural Network Models. J. Molec. Biol. 202, 865–884.
  [78] P. Rakic and W. Singer (1988): Neurobiology of Neocortex. Dahlem Conferences Report No. 42. New York: Wiley-Interscience.
  [79] Anna W. Roe, Sarah L. Pallas, Jong-On Hahm, and Mriganka Sur (1990): A Map of Visual Space Induced in Primary Auditory Cortex. Science 250, 818–20, 9 November.
  [80] Frank Rosenblatt (1962): Principles of Neurodynamics. Washington, D. C.: Spartan Books.
  [81] David E. Rumelhart, Geoffrey E. Hinton, and R. J. Williams (1986): Learning Internal Representations by Error Propagation. In: D. E. Rumelhart and J. L. McClelland, eds.: Parallel Distributed Processing, Vol. 1, 318–362. Cambridge, MA: MIT Press.
  [82] Peter Schuster and K. Sigmund (1985): Dynamics of Evolutionary Optimization. Ber. Bunsenges. Phys. Chem. 89, 668–682.
  [83] Peter Schuster and Jorg Swetina (1988): Stationary Mutant Distributions and Evolutionary Optimization. Bulletin of Mathematical Biology 50(6), 635–660.
  [84] Terrence J. Sejnowski and Charles R. Rosenberg (1987): Parallel Networks that Learn to Pronounce English Text. Complex Systems 1, 145–168.
  [85] Y. A. Shreider (1966): The Monte Carlo Method. Pure and Applied Mathematics 87. Translation from Russian. Oxford: Pergamon Press.
  [86] Christine A. Skarda and Walter J. Freeman (1987): How Brains Make Chaos In Order To Make Sense of the World. Behavioral and Brain Sciences 10(2), 161–195.
  [87] Robert Smalz and Michael Conrad (1990): A Credit Apportionment Algorithm for Evolutionary Learning with Neural Networks. Dept. of Computer Science, Wayne State University, Detroit (preprint).
  [88] Patric K. Stanton and Terrence J. Sejnowski (1989): Associative Long-Term Depression in the Hippocampus Induced by Hebbian Covariance. Nature 339, 215–218, 18 May.
  [89] Charles F. Stevens (1989): Strengthening the Synapses. Nature 338, 460–461, 6 April.
  [90] Lawrence D. Stone (1975): Theory of Optimal Search. New York: Academic Press.
  [91] M. P. Stryker, J. Allman, C. Blakemore, J. M. Greuel, J. H. Kaas, M. M. Merzenich, P. Rakic, W. Singer, G. S. Stent, T. N. Wiesel, and H. van der Loos (1988): Group Report: Principles of Cortical Self-Organization. In: [78], 115–136.
  [92] G. Tesauro and Terrence J. Sejnowski (1989): A Parallel Network that Learns to Play Backgammon. Artificial Intelligence 39(3), 357–390, July.
  [93] Gerald Tesauro and Bob Janssens (1988): Scaling Relationships in Back-propagation Learning. Complex Systems 2, 39–44.
  [94] Fu-Sheng Tsung and Garrison W. Cottrell (1989): A Sequential Adder Using Recurrent Networks. Internat. Joint Conf. Neural Networks, II-133–39, Washington, D. C.
  [95] D. C. Van Essen (1985): In: A. Peters and E. G. Jones, eds.: Cerebral Cortex Vol. 3, 259–324. New York: Plenum Press.
  [96] J. H. Williams, M. L. Errington, M. A. Lynch, and T. V. P. Bliss (1989): Arachidonic Acid Induces a Long-Term Activity-Dependent Enhancement of Synaptic Transmission in the Hippocampus. Nature 341, 739–42, 26 October.
  [97] R. J. Williams and D. A. Zipser (1988): A Learning Algorithm for Continually Running Fully Recurrent Neural Networks. Technical Report ICS-8805, University of California, San Diego.

Copyright information

© Springer Science+Business Media Dordrecht 1991

Authors and Affiliations

  • Hans J. Bremermann (1)
  • Russell W. Anderson (2)
  1. Division of Biophysics, Department of Molecular and Cell Biology, and Department of Mathematics, University of California at Berkeley, USA
  2. Graduate Group in Bioengineering, University of California at Berkeley and San Francisco, USA
