Complexity issues in discrete neurocomputing

  • Juraj Wiedermann
Part II Invited Lectures
Part of the Lecture Notes in Computer Science book series (LNCS, volume 464)

Abstract

An overview of the basic results in the complexity theory of discrete neural computations is presented. In particular, the computational power and efficiency of single neurons, neural circuits, symmetric neural networks (the Hopfield model), and Boltzmann machines are investigated and characterized. The corresponding intractability results are mentioned as well. Evidence is presented as to why discrete neural networks (including Boltzmann machines) should not be expected to solve intractable problems more efficiently than other conventional models of computing.
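
To make the notions behind the symmetric (Hopfield) model and the keyword "energy function" concrete, the following sketch, which is not taken from the paper, shows the standard discrete Hopfield network: units with states in {-1, +1}, a symmetric weight matrix with zero diagonal, and asynchronous threshold updates that never increase the energy E(s) = -1/2 Σ_ij w_ij s_i s_j + Σ_i θ_i s_i, so the network always settles into a stable configuration. The two-unit example and all names are illustrative assumptions, not material from the text.

    import numpy as np

    def energy(w, theta, s):
        # E(s) = -1/2 * sum_ij w_ij s_i s_j + sum_i theta_i s_i
        return -0.5 * s @ w @ s + theta @ s

    def run_hopfield(w, theta, s, max_sweeps=100):
        # Asynchronous updates: each unit takes the sign of its weighted input.
        # With symmetric w and zero diagonal the energy never increases, so the
        # loop stops at a stable configuration (a local minimum of the energy).
        n = len(s)
        for _ in range(max_sweeps):
            changed = False
            for i in range(n):
                new_si = 1 if w[i] @ s - theta[i] >= 0 else -1
                if new_si != s[i]:
                    s[i] = new_si
                    changed = True
            if not changed:
                break
        return s

    # Hypothetical two-unit example: mutually excitatory units align with each other.
    w = np.array([[0.0, 1.0],
                  [1.0, 0.0]])     # symmetric weights, zero diagonal
    theta = np.zeros(2)
    s = np.array([1, -1])          # initial configuration
    print(energy(w, theta, s))     # 1.0
    s = run_hopfield(w, theta, s)
    print(s, energy(w, theta, s))  # [-1 -1] -1.0

Running the sketch, the initial configuration (+1, -1) has energy 1, and after one asynchronous sweep the network settles into the stable configuration (-1, -1) with energy -1, illustrating the monotone decrease of the energy function.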

Keywords

Energy Function · Boolean Function · Neural Circuit · Initial Configuration · Conjunctive Normal Form

Copyright information

© Springer-Verlag Berlin Heidelberg 1990

Authors and Affiliations

  • Juraj Wiedermann
    1. VUSEI-AR, Bratislava, Czechoslovakia
