
Structural Complexity and Neural Networks

  • Conference paper
Neural Nets (WIRN 2002)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 2486)


Abstract

We survey some relationships between computational complexity and neural network theory. Here, only networks of binary threshold neurons are considered.
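As background, a binary threshold neuron outputs 1 exactly when the weighted sum of its Boolean inputs reaches its threshold. A minimal sketch (our notation, not the paper's) shows how the familiar Boolean gates arise as single neurons of this kind:

```python
# A binary threshold neuron: 1 if sum_i w_i * x_i >= theta, else 0.
# (Illustrative sketch; function names are ours, not the paper's.)

def threshold_gate(weights, theta, x):
    """Output of one binary threshold neuron on Boolean inputs x."""
    return int(sum(w * xi for w, xi in zip(weights, x)) >= theta)

# Classic Boolean gates, each realized by a single threshold neuron:
def AND(x):      return threshold_gate([1] * len(x), len(x), x)
def OR(x):       return threshold_gate([1] * len(x), 1, x)
def MAJORITY(x): return threshold_gate([1] * len(x), len(x) // 2 + 1, x)
```

Note that weights may be negative, which is essential for the expressive power of threshold circuits.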

We begin by presenting some contributions of neural networks to structural complexity theory. In parallel complexity, we consider the classes TC0_k of problems solvable by feed-forward networks with k levels and a polynomial number of neurons. Separation results are recalled, and the relation between TC0 = ∪_k TC0_k and NC1 is analyzed. In particular, under the conjecture TC0 ≠ NC1, we characterize the class of regular languages accepted by feed-forward networks with a constant number of levels and a polynomial number of neurons.
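As a concrete instance of these classes, the following sketch realizes PARITY by a two-level feed-forward threshold network with n + 1 neurons, placing it in TC0_2 even though no single threshold neuron computes it for n ≥ 2. This is a standard construction from the circuit-complexity literature, not a result of this paper, and the function names are ours:

```python
# PARITY of n bits via a two-level network of binary threshold neurons
# (standard construction; illustrative, not taken from this paper).

def gate(weights, theta, x):
    return int(sum(w * xi for w, xi in zip(weights, x)) >= theta)

def parity(x):
    n = len(x)
    # Level 1: n "counting" neurons, E_k = 1 iff at least k inputs are on.
    e = [gate([1] * n, k, x) for k in range(1, n + 1)]
    # Level 2: alternating weights +1, -1, +1, ... make the weighted sum
    # equal 1 iff the number of on inputs is odd.
    return gate([(-1) ** k for k in range(n)], 1, e)
```

If s inputs are on, the output neuron sees E_1 - E_2 + E_3 - ... = 1 - 1 + ... (s terms), which is 1 exactly when s is odd.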

We also discuss the use of complexity theory to study computational aspects of learning and combinatorial optimization in the context of neural networks. We consider the PAC model of learning, emphasizing some negative results based on complexity-theoretic assumptions. Finally, we discuss some results in the realm of neural networks related to a probabilistic characterization of NP.
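For orientation, the classical finite-class sample bound from the PAC literature shows how the accuracy and confidence parameters enter; this is standard background, not a result surveyed here, and the helper name is ours:

```python
# Standard PAC bound for a finite hypothesis class H: a learner that
# outputs any hypothesis consistent with
#   m >= (1/eps) * (ln |H| + ln(1/delta))
# examples has error <= eps with probability >= 1 - delta.
# (Illustrative background, not a result of this paper.)

from math import ceil, log

def pac_sample_size(h_size, eps, delta):
    """Sufficient sample size for a consistent learner over |H| hypotheses."""
    return ceil((log(h_size) + log(1.0 / delta)) / eps)
```

For example, threshold neurons on n inputs with integer weights and threshold bounded by W give |H| <= (2W + 1)^(n + 1), so ln |H| — and hence the bound — grows only linearly in n.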

Partially supported by M.I.U.R. COFIN, under the project “Linguaggi formali e automi: teoria e applicazioni”.





Copyright information

© 2002 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Bertoni, A., Palano, B. (2002). Structural Complexity and Neural Networks. In: Marinaro, M., Tagliaferri, R. (eds) Neural Nets. WIRN 2002. Lecture Notes in Computer Science, vol 2486. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-45808-5_21


  • DOI: https://doi.org/10.1007/3-540-45808-5_21


  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-44265-3

  • Online ISBN: 978-3-540-45808-1

  • eBook Packages: Springer Book Archive
