Neural networks and complexity theory

  • Invited Lectures
  • Conference paper
Mathematical Foundations of Computer Science 1992 (MFCS 1992)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 629)

Abstract

We survey some of the central results in the complexity theory of neural networks, with pointers to the literature.

References

  1. Aarts, E., Korst, J. Simulated Annealing and Boltzmann Machines. John Wiley & Sons, Chichester, 1989.

  2. Alon, N. Asynchronous threshold networks. Graphs and Combinatorics 1 (1985), 305–310.

  3. Alon, N., Dewdney, A. K., Ott, T. J. Efficient simulation of finite automata by neural nets. J. Assoc. Comp. Mach. 38 (1991), 495–514.

  4. Anderson, J. A., Rosenfeld, E. (eds.) Neurocomputing: Foundations of Research. The MIT Press, Cambridge, MA, 1988.

  5. Anderson, J. A., Pellionisz, A., Rosenfeld, E. (eds.) Neurocomputing 2: Directions for Research. The MIT Press, Cambridge, MA, 1991.

  6. Balcázar, J. L., Díaz, J., Gabarró, J. On characterizations of the class PSPACE/poly. Theoret. Comput. Sci. 52 (1987), 251–267.

  7. Blum, A. L., Rivest, R. L. Training a 3-node neural network is NP-complete. Neural Networks 5 (1992), 117–127.

  8. Bruck, J. On the convergence properties of the Hopfield model. Proc. of the IEEE 78 (1990), 1579–1585.

  9. Cybenko, G. Approximation by superpositions of a sigmoidal function. Math. of Control, Signals, and Systems 2 (1989), 303–314.

  10. Floréen, P., Orponen, P. Attraction radii in Hopfield nets are hard to compute. Manuscript submitted for publication, 7 pp., April 1992.

  11. Floréen, P., Orponen, P. On the computational complexity of analyzing Hopfield nets. Complex Systems 3 (1989), 577–587.

  12. Fogelman, F., Goles, E., Weisbuch, G. Transient length in sequential iterations of threshold functions. Discr. Appl. Math. 6 (1983), 95–98.

  13. Fogelman, F., Robert, Y., Tchuente, M. Automata Networks in Computer Science: Theory and Applications. Manchester University Press, 1987.

  14. Franklin, S., Garzon, M. Global dynamics in neural networks. Complex Systems 3 (1989), 29–36.

  15. Franklin, S., Garzon, M. Neural computability. In: Progress in Neural Networks 1 (ed. O. M. Omidvar). Ablex, Norwood, NJ, 1990. Pp. 128–144.

  16. Funahashi, K.-I. On the approximate realization of continuous mappings by neural networks. Neural Networks 2 (1989), 183–192.

  17. Furst, M., Saxe, J. B., Sipser, M. Parity, circuits, and the polynomial-time hierarchy. Math. Systems Theory 17 (1984), 13–27.

  18. Garzon, M., Franklin, S. Global dynamics in neural nets II. Report 89-9, Memphis State Univ., Dept. of Mathematical Sciences, 1989.

  19. Garzon, M., Franklin, S. Neural computability II. In: Proc. of the 3rd Internat. Joint Conf. on Neural Networks, Vol. 1. IEEE, New York, 1989. Pp. 631–637.

  20. Godbeer, G. H., Lipscomb, J., Luby, M. On the Computational Complexity of Finding Stable State Vectors in Connectionist Models (Hopfield Nets). Technical Report 208/88, Dept. of Computer Science, Univ. of Toronto, March 1988.

  21. Goldmann, M., Håstad, J., Razborov, A. Majority gates vs. general weighted threshold gates. In: Proc. of the 7th Ann. Conf. on Structure in Complexity Theory. IEEE, New York, 1992.

  22. Goles, E., Fogelman, F., Pellegrin, D. Decreasing energy functions as a tool for studying threshold networks. Discr. Appl. Math. 12 (1985), 261–277.

  23. Goles, E., Martínez, S. Exponential transient classes of symmetric neural networks for synchronous and sequential updating. Complex Systems 3 (1989), 589–597.

  24. Goles, E., Martínez, S. Neural and Automata Networks. Kluwer Academic, Dordrecht, 1990.

  25. Goles, E., Olivos, J. The convergence of symmetric threshold automata. Info. and Control 51 (1981), 98–104.

  26. Hajnal, A., Maass, W., Pudlák, P., Szegedy, M., Turán, G. Threshold circuits of bounded depth. In: Proc. of the 28th Ann. IEEE Symp. on Foundations of Computer Science. IEEE, New York, 1987. Pp. 99–110. (Revised version to appear in J. Comp. Syst. Sci.)

  27. Haken, A. Connectionist networks that need exponential time to stabilize. Manuscript, 10 pp., January 1989.

  28. Haken, A., Luby, M. Steepest descent can take exponential time for symmetric connection networks. Complex Systems 2 (1988), 191–196.

  29. Hartley, R., Szu, H. A comparison of the computational power of neural networks. In: Proc. of the 1987 Internat. Conf. on Neural Networks, Vol. 3. IEEE, New York, 1987. Pp. 15–22.

  30. Hertz, J., Krogh, A., Palmer, R. G. Introduction to the Theory of Neural Computation. Addison-Wesley, Redwood City, CA, 1991.

  31. Hinton, G. E., Sejnowski, T. J. Learning and relearning in Boltzmann machines. In [62], pp. 282–317.

  32. Hong, J. On connectionist models. Comm. Pure and Applied Math. 41 (1988), 1039–1050.

  33. Hopfield, J. J. Neural networks and physical systems with emergent collective computational abilities. Proc. Nat. Acad. Sci. USA 79 (1982), 2554–2558.

  34. Hornik, K., Stinchcombe, M., White, H. Multilayer feedforward nets are universal approximators. Neural Networks 2 (1989), 359–366.

  35. Håstad, J. Almost optimal lower bounds for small depth circuits. In: Randomness and Computation. Advances in Computing Research 5 (ed. S. Micali). JAI Press, Greenwich, CT, 1989. Pp. 143–170.

  36. Håstad, J., Goldmann, M. On the power of small-depth threshold circuits. Computational Complexity 1 (1991), 113–129.

  37. Judd, J. S. On the complexity of loading shallow neural networks. J. Complexity 4 (1988), 177–192.

  38. Judd, J. S. Neural Network Design and the Complexity of Learning. The MIT Press, Cambridge, MA, 1990.

  39. Kamp, Y., Hasler, M. Recursive Neural Networks for Associative Memory. John Wiley & Sons, Chichester, 1990.

  40. Kearns, M., Valiant, L. G. Cryptographic limitations on learning Boolean formulae and finite automata. In: Proc. of the 21st Ann. ACM Symp. on Theory of Computing. ACM, New York, 1989. Pp. 433–444.

  41. Kleene, S. C. Representation of events in nerve nets and finite automata. In: Automata Studies (ed. C. E. Shannon and J. McCarthy). Annals of Mathematics Studies No. 34. Princeton Univ. Press, Princeton, NJ, 1956. Pp. 3–41.

  42. Kohonen, T. Self-Organization and Associative Memory. 3rd Ed., Springer-Verlag, Berlin, 1989.

  43. Lin, J.-H., Vitter, J. S. Complexity results on learning by neural nets. Machine Learning 6 (1991), 211–230.

  44. Maass, W., Schnitger, G., Sontag, E. D. On the computational power of sigmoid versus Boolean threshold circuits. In: Proc. of the 32nd Ann. IEEE Symp. on Foundations of Computer Science. IEEE, New York, 1991. Pp. 767–776.

  45. McClelland, J. L., Rumelhart, D. E., et al. (eds.) Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 2. The MIT Press, Cambridge, MA, 1986.

  46. McCulloch, W. S., Pitts, W. A logical calculus of the ideas immanent in nervous activity. Bull. Math. Biophys. 5 (1943), 115–133. Reprinted in [4], pp. 18–27.

  47. Minsky, M. L. Computation: Finite and Infinite Machines. Prentice-Hall, Englewood Cliffs, NJ, 1972.

  48. Minsky, M. L., Papert, S. A. Perceptrons: An Introduction to Computational Geometry. The MIT Press, Cambridge, MA, 1969 (expanded edition 1988).

  49. Muroga, S. Threshold Logic and Its Applications. John Wiley & Sons, New York, 1971.

  50. Muroga, S., Toda, I., Takasu, S. Theory of majority decision elements. J. Franklin Inst. 271 (1961), 376–418.

  51. Myhill, J., Kautz, W. H. On the size of weights required for linear-input switching functions. IRE Trans. Electronic Computers 10 (1961), 288–290.

  52. Obradovic, Z., Parberry, I. Analog neural networks of limited precision I: Computing with multilinear threshold functions (Preliminary version). In: Advances in Neural Information Processing Systems 2 (ed. D. S. Touretzky). Morgan Kaufmann, San Mateo, CA, 1990. Pp. 702–709.

  53. Parberry, I. A primer on the complexity theory of neural networks. In: Formal Techniques in Artificial Intelligence: A Sourcebook (ed. R. B. Banerji). Elsevier-North-Holland, Amsterdam, 1990. Pp. 217–268.

  54. Parberry, I., Schnitger, G. Relating Boltzmann machines to conventional models of computation. Neural Networks 2 (1989), 59–67.

  55. Pollack, J. On Connectionist Models of Natural Language Processing. Ph. D. Thesis, Univ. Illinois, Urbana, 1987.

  56. Porat, S. Stability and looping in connectionist models with asymmetric weights. Biol. Cybern. 60 (1989), 335–344.

  57. Raghavan, P. Learning in threshold networks. In: Proc. of the 1988 Workshop on Computational Learning Theory (ed. D. Haussler, L. Pitt). Morgan Kaufmann, San Mateo, CA, 1988. Pp. 19–27.

  58. Reif, J. On threshold circuits and polynomial computation. In: Proc. of the 2nd Ann. Conf. on Structure in Complexity Theory. IEEE, New York, 1987. Pp. 118–123.

  59. Rosenblatt, F. Principles of Neurodynamics. Spartan Books, New York, 1962.

  60. Roychowdhury, V., Siu, K.-Y., Orlitsky, A., Kailath, T. A geometric approach to threshold circuit complexity. In: Proc. of the 4th Ann. Workshop on Computational Learning Theory (ed. L. G. Valiant, M. K. Warmuth). Morgan Kaufmann, San Mateo, CA, 1991. Pp. 97–111.

  61. Rumelhart, D. E., Hinton, G. E., Williams, R. J. Learning internal representations by error propagation. In [62], pp. 318–362.

  62. Rumelhart, D. E., McClelland, J. L., et al. (eds.) Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 1. The MIT Press, Cambridge, MA, 1986.

  63. Siegelmann, H. T., Sontag, E. D. On the computational power of neural nets. Report SYCON-91-11, Rutgers University, New Brunswick, NJ, Nov. 1991.

  64. Siu, K.-Y., Bruck, J. Neural computation of arithmetic functions. Proc. of the IEEE 78 (1990), 1669–1675.

  65. Siu, K.-Y., Bruck, J. On the power of threshold circuits with small weights. SIAM J. Discr. Math. 4 (1991), 423–435.

  66. Wegener, I. The Complexity of Boolean Functions. John Wiley & Sons, Chichester, and B. G. Teubner, Stuttgart, 1987.

  67. Yao, A. C. Separating the polynomial-time hierarchy by oracles. In: Proc. of the 26th Ann. IEEE Symp. on Foundations of Computer Science. IEEE, New York, 1985. Pp. 1–10.

Editor information

Ivan M. Havel, Václav Koubek

Copyright information

© 1992 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Orponen, P. (1992). Neural networks and complexity theory. In: Havel, I.M., Koubek, V. (eds) Mathematical Foundations of Computer Science 1992. MFCS 1992. Lecture Notes in Computer Science, vol 629. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-55808-X_5

  • DOI: https://doi.org/10.1007/3-540-55808-X_5

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-55808-8

  • Online ISBN: 978-3-540-47291-9

  • eBook Packages: Springer Book Archive
