Abstract
We survey some relationships between computational complexity and neural network theory, restricting attention to networks of binary threshold neurons.
We begin by presenting some contributions of neural networks to structural complexity theory. In parallel complexity, we consider the class TC0_k of problems solvable by feed-forward networks with k levels and a polynomial number of neurons. Separation results are recalled, and the relation between TC0 = ∪_k TC0_k and NC1 is analyzed. In particular, under the conjecture TC0 ≠ NC1, we characterize the class of regular languages accepted by feed-forward networks with a constant number of levels and a polynomial number of neurons.
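As a minimal illustration (not taken from the chapter) of the model underlying TC0_k, the following sketch implements a binary threshold neuron and a 2-level feed-forward network of such neurons computing XOR, a function famously not computable by any single threshold neuron; the weights and thresholds are one standard choice among many.

```python
# A binary threshold neuron: outputs 1 iff the weighted sum of its
# binary inputs reaches the threshold.
def neuron(weights, threshold, inputs):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

# A 2-level feed-forward threshold network computing XOR.
def xor_net(x1, x2):
    h_or = neuron([1, 1], 1, [x1, x2])    # level 1: OR gate
    h_and = neuron([1, 1], 2, [x1, x2])   # level 1: AND gate
    # level 2: fires iff OR holds but AND does not
    return neuron([1, -1], 1, [h_or, h_and])

for a in (0, 1):
    for b in (0, 1):
        assert xor_net(a, b) == (a ^ b)
```

A language is in TC0_k when networks of this kind, with k levels and polynomially many neurons in the input length, decide it.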
We also discuss the use of complexity theory to study computational aspects of learning and combinatorial optimization in the context of neural networks. We consider the PAC model of learning, emphasizing some negative results based on complexity-theoretic assumptions. Finally, we discuss some results in the realm of neural networks related to a probabilistic characterization of NP.
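To make the PAC model concrete, the following sketch computes the standard sample-size bound for finite hypothesis classes: any learner that outputs a hypothesis consistent with m ≥ (1/ε)(ln|H| + ln(1/δ)) examples is a PAC learner with error at most ε and confidence at least 1 − δ. This is a textbook fact, not a result specific to this survey, and the numeric parameters below are illustrative.

```python
import math

# Sample size sufficient for PAC learning a finite hypothesis class H
# with a consistent learner: m >= (1/eps) * (ln|H| + ln(1/delta)).
def pac_sample_bound(h_size, eps, delta):
    return math.ceil((math.log(h_size) + math.log(1.0 / delta)) / eps)

# Example: a class of 2**10 hypotheses, error 0.1, confidence 0.95.
m = pac_sample_bound(h_size=2**10, eps=0.1, delta=0.05)
print(m)  # -> 100
```

The negative results discussed in the chapter are of a different kind: they show that even when few samples suffice information-theoretically, finding a good hypothesis can be computationally intractable under standard complexity assumptions.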
Partially supported by M.I.U.R. COFIN, under the project “Linguaggi formali e automi: teoria e applicazioni”.
© 2002 Springer-Verlag Berlin Heidelberg
Cite this paper
Bertoni, A., Palano, B. (2002). Structural Complexity and Neural Networks. In: Marinaro, M., Tagliaferri, R. (eds.), Neural Nets. WIRN 2002. Lecture Notes in Computer Science, vol. 2486. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-45808-5_21
Print ISBN: 978-3-540-44265-3
Online ISBN: 978-3-540-45808-1