Upper Bounds on the Approximation Rates of Real-valued Boolean Functions by Neural Networks

  • K. Hlaváčková
  • V. Kůrková
  • P. Savický
Conference paper


Real-valued functions of several Boolean variables can be represented exactly by one-hidden-layer Heaviside perceptron networks with an exponential number of hidden units. We derive upper bounds on the error of approximation by networks with a given number n of hidden units. The bounds are of the form \(\frac{c}{\sqrt{n}}\), where c depends on certain norms of the function being approximated and n is the number of hidden units. We give examples of functions for which these norms grow polynomially, and others for which they grow exponentially, with increasing input dimension.
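The exact exponential-size representation mentioned above can be sketched concretely: for each input pattern a ∈ {0,1}^d, a single Heaviside unit with ±1 weights and threshold equal to the number of ones in a fires on exactly that pattern, so 2^d units reproduce any real-valued Boolean function. The following is a minimal illustration (the function names and the sample target function are our own, not from the paper):

```python
import itertools
import numpy as np

def heaviside(t):
    # H(t) = 1 for t >= 0, else 0 (applied elementwise)
    return (t >= 0).astype(float)

def exact_heaviside_network(f_table, d):
    """One hidden Heaviside unit per Boolean vector a; the unit with
    weights 2a-1 and bias -||a||_1 fires iff x == a."""
    patterns = np.array(list(itertools.product([0, 1], repeat=d)), dtype=float)
    W = 2 * patterns - 1                   # +/-1 input weights, one row per unit
    b = -patterns.sum(axis=1)              # threshold ||a||_1
    c = np.array([f_table[tuple(int(v) for v in a)] for a in patterns])
    return W, b, c                         # output weights c_a = f(a)

def network(x, W, b, c):
    return c @ heaviside(W @ x + b)

# hypothetical target: signed parity weighted by the number of ones
d = 4
f_table = {a: (-1) ** sum(a) * sum(a) for a in itertools.product([0, 1], repeat=d)}
W, b, c = exact_heaviside_network(f_table, d)
err = max(abs(network(np.array(a, dtype=float), W, b, c) - f_table[a])
          for a in f_table)
# err == 0.0: the 2^d-unit network reproduces f exactly on {0,1}^d
```

The paper's contribution is the other direction: bounding how well a network with only n ≪ 2^d units can approximate such functions, with error at most c/√n.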


Keywords: Neural Network · Orthonormal Basis · Hidden Unit · Real Vector Space · Fourier Basis
(These keywords were added by machine and not by the authors.)





Copyright information

© Springer-Verlag Wien 1998

Authors and Affiliations

  • K. Hlaváčková (1)
  • V. Kůrková (1)
  • P. Savický (1)
  1. Institute of Computer Science, Academy of Sciences of the Czech Republic, Prague 8, Czech Republic
