Neural Networks and Fuzzy Basis Functions for Functional Approximation

  • Liang Jin
  • Madan M. Gupta
  • Peter N. Nikiforuk
Part of the International Series in Intelligent Technologies book series (ISIT, volume 3)


Universal approximation capabilities of neural networks and fuzzy basis functions are established in this chapter using the Stone-Weierstrass theorem, Kolmogorov's theorem and methods of functional analysis. The study focuses on several commonly used neural network architectures: multilayered feedforward neural networks (MFNNs) with sigmoidal activation functions, trigonometric networks, higher-order neural networks, Gaussian radial basis function networks, and fuzzy basis function networks. The results show that an arbitrary continuous function on a compact set may be approximated to any degree of accuracy by such a neural network or fuzzy system. The accuracy achieved in practice, however, depends strongly on the design of the learning phase for the network parameters. The theory presented in this chapter provides a theoretical foundation for applications in identification, control and pattern recognition.
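The chapter's central claim, that a finite sum of sigmoidal units can approximate any continuous function on a compact set, can be illustrated numerically. The sketch below is not the authors' construction; it is a minimal random-feature variant in which the hidden weights and biases are fixed at random and only the output weights are fitted by linear least squares, assuming a smooth 1-D target on [-1, 1]:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Target: a continuous function on the compact set [-1, 1].
x = np.linspace(-1.0, 1.0, 200)
f = np.sin(np.pi * x)

# Hidden layer: 50 sigmoidal units with fixed random weights and biases
# (only the output layer is "learned" here, via least squares).
W = rng.normal(scale=4.0, size=50)
b = rng.uniform(-4.0, 4.0, size=50)
H = sigmoid(np.outer(x, W) + b)          # (200, 50) hidden activations

# Output weights by linear least squares.
c, *_ = np.linalg.lstsq(H, f, rcond=None)
approx = H @ c

max_err = np.max(np.abs(approx - f))
print(f"max |error| on grid: {max_err:.4f}")
```

Increasing the number of hidden units drives the error down, which is the qualitative content of the universal approximation results; the chapter's point about the learning phase is visible here too, since a poor choice of weight scale for `W` and `b` yields a much worse fit for the same network size.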


Keywords: Fuzzy System, Feedforward Neural Network, Hidden Unit, Approximation Capability, Functional Approximation





Copyright information

© Kluwer Academic Publishers 1995

Authors and Affiliations

  • Liang Jin (1)
  • Madan M. Gupta (1)
  • Peter N. Nikiforuk (1)
  1. Intelligent Systems Research Laboratory, College of Engineering, University of Saskatchewan, Saskatoon, Saskatchewan, Canada
