Neural networks: Deterministic and stochastic dynamics

  • Conference paper
Dynamics and Stochastic Processes Theory and Applications

Part of the book series: Lecture Notes in Physics (LNP, volume 355)

Abstract

Problems of combinatorial optimization, beyond their interest in applied research, play a crucial role in fundamental issues of theoretical computer science because of their inherent computational complexity. Here we use them as a test bed on which to gauge the many perspectives and problems offered by neural networks.

The realization that optimization problems for quadratic functions of many Boolean variables, problems which are, in a technical sense to be made precise, as difficult as they can be, are conveniently dealt with by neural networks adds to the interest of such dynamical systems: the parameters controlling their evolution can indeed be assigned in such a way that the function to be minimized is precisely a Lyapunov function of the dynamics. The recognition that such an evolution will, in general, stop in a local minimum of this Lyapunov function, rather than in one of the global minima one is searching for, motivates the idea of endowing the dynamics of a neural network with a stochastic transition rule leading to a stationary distribution strongly peaked around the global minima.

Here we discuss several problems related to the dynamics of both deterministic and stochastic networks, with emphasis on the problem of quantitatively assessing their computational capabilities.
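
The mechanism described above can be made concrete with a small numerical illustration (ours, not the authors'): a quadratic cost over ±1 units serves as the Lyapunov function of a symmetric threshold network, deterministic asynchronous updates drive it downhill until they stop in a local minimum, and a Gibbs-type stochastic rule with a slowly lowered temperature has a Boltzmann stationary distribution that concentrates on the global minima. The random instance, the variable names, and the cooling schedule below are illustrative assumptions, not taken from the paper.

    # A minimal sketch (not from the paper): a quadratic cost over +/-1 units is
    # the Lyapunov function of a symmetric threshold network; deterministic
    # asynchronous updates can only lower it, while a Gibbs-type stochastic rule
    # has exp(-E/T) as stationary distribution, which concentrates on global
    # minima as the temperature T is lowered.
    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative random instance: symmetric couplings W (zero diagonal), thresholds h.
    n = 8
    W = rng.standard_normal((n, n))
    W = (W + W.T) / 2.0
    np.fill_diagonal(W, 0.0)
    h = rng.standard_normal(n)

    def energy(s):
        """Quadratic cost E(s) = -1/2 s.W.s - h.s, used as the Lyapunov function."""
        return -0.5 * s @ W @ s - h @ s

    def deterministic_step(s):
        """Asynchronous threshold update of one randomly chosen unit.
        With symmetric W and zero diagonal, E(s) never increases."""
        i = rng.integers(n)
        field = W[i] @ s + h[i]
        s[i] = 1 if field >= 0 else -1
        return s

    def gibbs_step(s, T):
        """Heat-bath update of one unit at temperature T; the chain's stationary
        distribution is proportional to exp(-E(s)/T)."""
        i = rng.integers(n)
        field = W[i] @ s + h[i]
        x = np.clip(2.0 * field / T, -60.0, 60.0)   # clip for numerical stability
        p_plus = 1.0 / (1.0 + np.exp(-x))
        s[i] = 1 if rng.random() < p_plus else -1
        return s

    s = rng.choice([-1, 1], size=n)
    for _ in range(2000):                    # deterministic descent: stops in a local minimum
        s = deterministic_step(s)
    print("energy after deterministic descent:", energy(s))

    s = rng.choice([-1, 1], size=n)
    for T in np.geomspace(2.0, 0.05, 4000):  # slow cooling biases the chain toward global minima
        s = gibbs_step(s, T)
    print("energy after simulated annealing: ", energy(s))

In this toy run the deterministic dynamics typically halt at the first local minimum they reach, while the annealed run usually, though not always, ends at a lower energy; this gap is precisely what motivates the stochastic transition rule.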

References

  1. Ackley D.H., Hinton G.E., Sejnowski T.J.: "A learning algorithm for Boltzmann machines". Cognitive Science 9, 147–169 (1985)

  2. Apolloni B., Carvalho C., de Falco D.: "Quantum stochastic optimization", to appear in Stochastic Processes and their Applications

  3. Apolloni B., Cesa-Bianchi N., de Falco D.: "Quantum tunnelling in stochastic mechanics and combinatorial optimization", in [Ca89]

  4. Barahona F.: "On the computational complexity of Ising spin glass models". Journal of Physics A 15, 3241–3253 (1982)

  5. Bertoni A., Campadelli P., Morpurgo A.: "Total stabilization in symmetric networks". Proceedings of the International Workshop Neural Networks and their Applications, Nimes (1988)

  6. Bertoni A., Campadelli P.: "Neural networks and non uniform circuits", in [Ca89]

  7. Bounds D.G.: "A statistical mechanical study of Boltzmann machines". Journal of Physics A 20, 2133–2145 (1987)

  8. Bruschi D., Campadelli P.: "Reachability and stabilization in antisymmetric networks", in [Ca89]

  9. Bruck J., Goodman J.W.: "A generalized convergence theorem for neural networks". Stanford preprint (1988)

  10. Caianiello E.R.: "Outline of a theory of thought processes and thinking machines". Journal of Theoretical Biology 1, 204–235 (1961)

  11. [Ca89] Caianiello E.R., ed.: "Parallel architectures and neural networks". World Scientific (1989)

  12. [Cl88] Clark J.W.: "Statistical mechanics of neural networks". Physics Reports 158, 91–157 (1988)

  13. Cook S.A.: "The complexity of theorem proving procedures". Proceedings of the Third ACM Symposium on the Theory of Computing (1971)

  14. Fogelman F., Goles E., Pellegrin D.: "Decreasing energy functions as a tool for studying threshold networks". Discrete Applied Mathematics 12, 261–277 (1985)

  15. Garey M.R., Johnson D.S.: "Computers and Intractability". Freeman (1979)

  16. Gill J.: "Computational complexity of probabilistic Turing machines". SIAM Journal on Computing 6, 675–695 (1977)

  17. Hinton G.E., Sejnowski T.J., Ackley D.H.: "Boltzmann Machines: constraint satisfaction networks that learn". Technical Report CMU-CS-119, Carnegie-Mellon University (1984)

  18. Hopfield J.J.: "Neural networks and physical systems with emergent collective computational abilities". Proceedings of the National Academy of Sciences 79, 2554–2558 (1982)

  19. Hopfield J.J., Tank D.: "Neural computation of decisions in optimization problems". Biological Cybernetics 52, 141–152 (1985)

  20. Hong J.: "On connectionist model". Beijing Computer Institute preprint (1988)

  21. Hu S.T.: "Threshold Logic". University of California Press (1965)

  22. Kullback S.: "Information theory and statistics". Wiley (1959)

  23. Levin L.A.: "Universal sorting problem". Problemy Peredachi Informatsii 9, 115–116 (1973); English translation in: Problems of Information Transmission 9, 255–256 (1973)

  24. McCulloch W.S., Pitts W.A.: "A logical calculus of the ideas immanent in nervous activity". Bulletin of Mathematical Biophysics 5, 115–133 (1943)

  25. Muroga S.: "Threshold logic and its applications". Wiley (1971)

  26. Robert F.: "An introduction to discrete iterations", in: "Automata networks in computer science", Fogelman, Robert, Tchuente eds., Manchester University Press (1987)

  27. Rosenberg I.G.: "Reduction of bivalent maximization to the quadratic case". Cahiers Centre Etudes Rech. Oper. 17, 71–74 (1975)

  28. Schnakenberg J.: "Network theory of microscopic and macroscopic behavior of master equation systems". Reviews of Modern Physics 48, 571–585 (1976)

  29. Stockmeyer L.: "Classifying the computational complexity of problems". The Journal of Symbolic Logic 52, 1–43 (1987)

  30. Valiant L.G.: "A theory of the learnable". Communications of the ACM 27, 1134–1142 (1984)

  31. Valiant L.G.: "Functionality in neural networks". Harvard preprint (1988)

  32. van Laarhoven P.J.M., Aarts E.H.L.: "Simulated annealing". Reidel (1987)

  33. Zachos S.: "Robustness of probabilistic computational complexity classes under definitional perturbations". Information and Control 54, 143–154 (1982)

Editor information

Ricardo Lima, Ludwig Streit, Rui Vilela Mendes

Copyright information

© 1990 Springer-Verlag

About this paper

Cite this paper

Apolloni, B., Bertoni, A., Campadelli, P., de Falco, D. (1990). Neural networks: Deterministic and stochastic dynamics. In: Lima, R., Streit, L., Vilela Mendes, R. (eds) Dynamics and Stochastic Processes Theory and Applications. Lecture Notes in Physics, vol 355. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-52347-2_21

  • DOI: https://doi.org/10.1007/3-540-52347-2_21

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-52347-5

  • Online ISBN: 978-3-540-46969-8
