Complexity of network training for classes of Neural Networks

Charles C. Pinter
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 997)


It is known that the problem of training certain specific, very simple neural networks is NP-complete. While such results suggest that training is equally hard for larger and differently configured networks, this conclusion is by no means self-evident. The main result of this paper is that it is NP-complete to train any specific architecture or class of architectures. On the other hand, it is also shown that a simple 4-node network (with two hidden and two output units) can be trained in polynomial time if the target network function is assumed to be surjective. This remains true for networks of any size, provided the number of output units is at least equal to the number of units in the first computing layer. Thus, with a mild restriction placed on the class of target functions, training certain large networks may be easier than training smaller ones.
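For concreteness, the 4-node architecture described above (two hidden units feeding two output units, all computing linear threshold functions) can be sketched as follows. This is an illustrative forward pass only; the weights and thresholds are arbitrary placeholders, not values from the paper, and the paper's polynomial-time training algorithm is not reproduced here.

```python
import numpy as np

def threshold_net(x, W_hidden, b_hidden, W_out, b_out):
    """Forward pass of a two-layer network of linear threshold units:
    two hidden units and two output units, as in the abstract."""
    h = (W_hidden @ x + b_hidden > 0).astype(int)  # hidden layer activations
    y = (W_out @ h + b_out > 0).astype(int)        # output layer activations
    return y

# Placeholder parameters for a 2-input, 2-hidden, 2-output net (hypothetical).
W_hidden = np.array([[1.0, -1.0], [-1.0, 1.0]])
b_hidden = np.array([0.5, 0.5])
W_out = np.array([[1.0, 1.0], [1.0, -1.0]])
b_out = np.array([-1.5, 0.0])

print(threshold_net(np.array([1.0, 0.0]), W_hidden, b_hidden, W_out, b_out))
```

The surjectivity assumption in the paper concerns the target function computed by such a network: every output pattern must be attainable for some input, which is the restriction that makes polynomial-time training possible.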





Copyright information

© Springer-Verlag Berlin Heidelberg 1995

Authors and Affiliations

Charles C. Pinter
Bucknell University, Lewisburg, USA
