
Hardness Results for Neural Network Approximation Problems

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 1572)

Abstract

We consider the problem of efficiently learning in two-layer neural networks. We show that it is NP-hard to find a linear threshold network of a fixed size that approximately minimizes the proportion of misclassified examples in a training set, even if there is a network that correctly classifies all of the training examples. In particular, for a training set that is correctly classified by some two-layer linear threshold network with k hidden units, it is NP-hard to find such a network that makes mistakes on a proportion smaller than c/k³ of the examples, for some constant c. We prove a similar result for the problem of approximately minimizing the quadratic loss of a two-layer network with a sigmoid output unit.
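To make the objects in the theorem concrete, the following is a minimal NumPy sketch of a two-layer linear threshold network and the two quantities the hardness results concern: the proportion of misclassified training examples, and the quadratic loss when the output unit is a sigmoid. The function names and the XOR-style toy data are our own illustration, not from the paper, and for simplicity the sigmoid-output variant keeps threshold hidden units.

```python
import numpy as np

def threshold(z):
    # Linear threshold (Heaviside) activation: 1 if z >= 0, else 0.
    return (z >= 0).astype(int)

def two_layer_threshold_net(X, W, b, v, c):
    # Two-layer network: k linear threshold hidden units (rows of W,
    # thresholds b), combined by a linear threshold output unit (v, c).
    hidden = threshold(X @ W.T + b)   # shape (n, k)
    return threshold(hidden @ v + c)  # shape (n,)

def misclassification_proportion(X, y, W, b, v, c):
    # The quantity the first result says is NP-hard to drive below c/k^3,
    # even when some k-hidden-unit network achieves zero error.
    return np.mean(two_layer_threshold_net(X, W, b, v, c) != y)

def quadratic_loss_sigmoid_output(X, y, W, b, v, c):
    # The quantity in the second result: squared error of a two-layer
    # network whose output unit is a sigmoid.
    hidden = threshold(X @ W.T + b)
    out = 1.0 / (1.0 + np.exp(-(hidden @ v + c)))
    return np.mean((out - y) ** 2)

# Toy training set (hypothetical): XOR, realizable with k = 2 hidden units.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0, 1, 1, 0])
W = np.array([[1., 1.], [1., 1.]])  # hidden weight vectors
b = np.array([-0.5, -1.5])          # hidden thresholds
v = np.array([1., -2.])             # output weights
c_out = -0.5                        # output threshold

err = misclassification_proportion(X, y, W, b, v, c_out)    # 0.0 here
loss = quadratic_loss_sigmoid_output(X, y, W, b, v, c_out)
```

Here the chosen weights classify the toy set perfectly (err = 0), illustrating the "realizable" setting of the theorem; the hardness result says that finding such weights, or even weights with error below c/k³, is NP-hard in general.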




Copyright information

© 1999 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Bartlett, P., Ben-David, S. (1999). Hardness Results for Neural Network Approximation Problems. In: Fischer, P., Simon, H.U. (eds) Computational Learning Theory. EuroCOLT 1999. Lecture Notes in Computer Science, vol 1572. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-49097-3_5


  • DOI: https://doi.org/10.1007/3-540-49097-3_5

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-65701-9

  • Online ISBN: 978-3-540-49097-5

  • eBook Packages: Springer Book Archive
