Quantitative approximation by perturbed Kantorovich–Choquet neural network operators

  • George A. Anastassiou
Original Paper


This paper determines the rate of convergence to the unit operator of perturbed Kantorovich–Choquet univariate and multivariate normalized neural network operators with one hidden layer. The rates are expressed through the univariate and multivariate moduli of continuity of the involved function or of its higher-order derivatives, which appear on the right-hand sides of the associated Jackson-type inequalities. The activation function is very general; in particular, it can derive from any univariate or multivariate sigmoid or bell-shaped function. The right-hand sides of our convergence inequalities do not depend on the activation function. The sample functionals are of Kantorovich–Choquet type. Applications are given for the first derivatives of the involved function.
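To illustrate the kind of operator the abstract describes, the following is a minimal numerical sketch of a univariate normalized Kantorovich-type neural network operator built from a logistic-sigmoid-derived bell function. It is a simplified classical version: the Choquet average of the paper's sample functionals is replaced here by an ordinary mean, and all names (`bell`, `kantorovich_nn`, the truncation width `pad`) are illustrative choices, not the paper's notation.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def bell(x):
    # Bell-shaped density derived from the logistic sigmoid; it is even,
    # positive, and sums to a constant over the integer shifts bell(x - k).
    return sigmoid(x + 1.0) - sigmoid(x - 1.0)

def kantorovich_nn(f, x, n, pad=10, m=20):
    """Normalized Kantorovich-type neural network operator (classical mean
    in the samples, not the paper's Choquet integral).

    F_n(f)(x) = sum_k [n * int_{k/n}^{(k+1)/n} f] * bell(n x - k) / sum_k bell(n x - k),
    with the cell averages approximated by an m-point midpoint rule and the
    sum over k truncated at distance `pad` (bell decays exponentially).
    """
    num = den = 0.0
    for k in range(-pad, n + pad + 1):
        w = bell(n * x - k)
        # Kantorovich sample: the mean of f over the cell [k/n, (k+1)/n].
        avg = sum(f((k + (j + 0.5) / m) / n) for j in range(m)) / m
        num += avg * w
        den += w
    return num / den

# The approximation error at a point shrinks as n grows, as the
# Jackson-type rate O(omega_1(f, 1/n)) suggests for a smooth f.
f = lambda t: t * t
x0 = 0.5
err10 = abs(kantorovich_nn(f, x0, 10) - f(x0))
err80 = abs(kantorovich_nn(f, x0, 80) - f(x0))
```

Running this, `err80` comes out roughly an order of magnitude smaller than `err10`, consistent with a first-order rate in 1/n.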


Univariate and multivariate neural network approximation · Univariate and multivariate perturbation of operators · Modulus of continuity · Jackson type inequality · Choquet integral

Mathematics Subject Classification

41A17 · 41A25 · 41A30 · 41A35
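The Choquet integral named in the abstract and keywords, which replaces the classical average in the Kantorovich sample functionals, reduces in the discrete case to a simple sorted sum against a monotone set function (capacity). A minimal sketch follows; the capacity `sqrt_cap`, a distorted counting measure, is an illustrative submodular choice, not one fixed by the paper.

```python
import math

def choquet(values, capacity):
    """Discrete Choquet integral of nonnegative values w.r.t. a capacity,
    i.e. a monotone set function with capacity(frozenset()) == 0.

    Sort the values in decreasing order and weight each one by the
    increment of the capacity along the chain of its top-level sets.
    """
    idx = sorted(range(len(values)), key=lambda i: values[i], reverse=True)
    total, prev = 0.0, 0.0
    chain = set()
    for i in idx:
        chain.add(i)
        cur = capacity(frozenset(chain))
        total += values[i] * (cur - prev)
        prev = cur
    return total

m = 4
sqrt_cap = lambda A: math.sqrt(len(A) / m)  # distorted (concave) counting measure
add_cap = lambda A: len(A) / m              # additive capacity: plain counting measure

# With an additive capacity the Choquet integral is the ordinary mean:
mean_val = choquet([4.0, 1.0, 2.0, 3.0], add_cap)      # (4+3+2+1)/4 = 2.5
# The concave distortion overweights large values relative to the mean:
skew_val = choquet([4.0, 0.0, 0.0, 0.0], sqrt_cap)     # 4 * sqrt(1/4) = 2.0
# For a constant function it returns the constant times capacity of the whole set:
const_val = choquet([1.0, 1.0, 1.0, 1.0], sqrt_cap)    # 1.0
```

Monotone and submodular capacities of this kind are exactly the setting of the Kantorovich–Choquet operators studied in the paper.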



Copyright information

© Springer-Verlag Italia S.r.l., part of Springer Nature 2018

Authors and Affiliations

  1. Department of Mathematical Sciences, University of Memphis, Memphis, USA
