Abstract
This chapter determines the rate of convergence to the unit operator of perturbed Kantorovich–Choquet univariate and multivariate normalized neural network operators with one hidden layer. The rates are expressed through the univariate or multivariate modulus of continuity of the function involved, or of its higher-order derivatives, and appear on the right-hand side of the associated univariate and multivariate Jackson-type inequalities. The activation function is very general; in particular, it can derive from any univariate or multivariate sigmoid or bell-shaped function. The right-hand sides of our convergence inequalities do not depend on the activation function. This chapter follows Anastassiou, Quantitative Approximation by Perturbed Kantorovich–Choquet Neural Network Operators (2018) [1].
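As a schematic illustration of the kind of estimate the abstract describes (the operator \(K_n\), the constant \(c\), and the rate exponent \(\alpha\) are placeholders here, not the chapter's exact operators or constants), a Jackson-type inequality bounds the approximation error by a modulus of continuity:

```latex
% Schematic Jackson-type inequality; K_n, c, \alpha are placeholders,
% not the chapter's precise operators or constants.
\[
  \left\| K_n(f) - f \right\|_{\infty}
  \;\le\;
  c\,\omega_1\!\left( f, \frac{1}{n^{\alpha}} \right),
  \qquad
  \omega_1(f,\delta) := \sup_{|x-y|\le \delta} \left| f(x)-f(y) \right|,
\]
% Since \omega_1(f,\delta) -> 0 as \delta -> 0 for continuous f,
% K_n(f) -> f uniformly as n -> \infty, with a rate governed by \omega_1
% and, as noted in the abstract, independent of the activation function.
```

For smoother functions, analogous bounds involve the modulus of continuity of higher-order derivatives, which yields faster rates.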
References
G.A. Anastassiou, Quantitative Approximation by Perturbed Kantorovich–Choquet Neural Network Operators (Revista de la Real Academia de Ciencias Exactas, Físicas y Naturales. Serie A. Matemáticas, accepted for publication, 2018)
G.A. Anastassiou, Rate of convergence of some neural network operators to the unit-univariate case. J. Math. Anal. Appl. 212, 237–262 (1997)
G.A. Anastassiou, Rate of convergence of some multivariate neural network operators to the unit. Comput. Math. Appl. 40, 1–19 (2000)
G.A. Anastassiou, Quantitative Approximations (Chapman and Hall/CRC, New York, 2001)
G.A. Anastassiou, Rate of convergence of some neural network operators to the unit-univariate case, revisited. Mat. Vesnik 65(4), 511–518 (2013)
G.A. Anastassiou, Rate of convergence of some multivariate neural network operators to the unit, revisited. J. Comput. Anal. Appl. 15(7), 1300–1309 (2013)
A.R. Barron, Universal approximation bounds for superpositions of a sigmoidal function. IEEE Trans. Inform. Theory 39, 930–945 (1993)
F.L. Cao, T.F. Xie, Z.B. Xu, The estimate for approximation error of neural networks: a constructive approach. Neurocomputing 71, 626–630 (2008)
P. Cardaliaguet, G. Euvrard, Approximation of a function and its derivative with a neural network. Neural Netw. 5, 207–220 (1992)
Z. Chen, F. Cao, The approximation operators with sigmoidal functions. Comput. Math. Appl. 58, 758–765 (2009)
T.P. Chen, H. Chen, Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its applications to a dynamic system. IEEE Trans. Neural Netw. 6, 911–917 (1995)
G. Choquet, Theory of capacities. Ann. Inst. Fourier (Grenoble) 5, 131–295 (1954)
C.K. Chui, X. Li, Approximation by ridge functions and neural networks with one hidden layer. J. Approx. Theory 70, 131–141 (1992)
D. Costarelli, R. Spigler, Approximation results for neural network operators activated by sigmoidal functions. Neural Netw. 44, 101–106 (2013)
D. Costarelli, R. Spigler, Multivariate neural network operators with sigmoidal activation functions. Neural Netw. 48, 72–77 (2013)
G. Cybenko, Approximation by superpositions of a sigmoidal function. Math. Control Signals Syst. 2, 303–314 (1989)
D. Denneberg, Non-additive Measure and Integral (Kluwer, Dordrecht, 1994)
S. Ferrari, R.F. Stengel, Smooth function approximation using neural networks. IEEE Trans. Neural Netw. 16, 24–38 (2005)
K.I. Funahashi, On the approximate realization of continuous mappings by neural networks. Neural Netw. 2, 183–192 (1989)
S. Gal, Uniform and pointwise quantitative approximation by Kantorovich–Choquet type integral operators with respect to monotone and submodular set functions. Mediterr. J. Math. 14(5), Art. 205, 12 pp. (2017)
N. Hahm, B.I. Hong, An approximation by neural networks with a fixed weight. Comput. Math. Appl. 47, 1897–1903 (2004)
S. Haykin, Neural Networks: A Comprehensive Foundation, 2nd edn. (Prentice Hall, New York, 1998)
K. Hornik, M. Stinchombe, H. White, Multilayer feedforward networks are universal approximators. Neural Netw. 2, 359–366 (1989)
K. Hornik, M. Stinchombe, H. White, Universal approximation of an unknown mapping and its derivatives using multilayer feedforward networks. Neural Netw. 3, 551–560 (1990)
M. Leshno, V.Y. Lin, A. Pinkus, S. Schocken, Multilayer feedforward networks with a nonpolynomial activation function can approximate any function. Neural Netw. 6, 861–867 (1993)
V. Maiorov, R.S. Meir, Approximation bounds for smooth functions in \(C\left( R^{d}\right) \) by neural and mixture networks. IEEE Trans. Neural Netw. 9, 969–978 (1998)
Y. Makovoz, Uniform approximation by neural networks. J. Approx. Theory 95, 215–228 (1998)
W. McCulloch, W. Pitts, A logical calculus of the ideas immanent in nervous activity. Bull. Math. Biophys. 7, 115–133 (1943)
H.N. Mhaskar, C.A. Micchelli, Approximation by superposition of a sigmoidal function. Adv. Appl. Math. 13, 350–373 (1992)
H.N. Mhaskar, C.A. Micchelli, Degree of approximation by neural networks with a single hidden layer. Adv. Appl. Math. 16, 151–183 (1995)
T.M. Mitchell, Machine Learning (WCB-McGraw-Hill, New York, 1997)
S. Suzuki, Constructive function approximation by three-layer artificial neural networks. Neural Netw. 11, 1049–1058 (1998)
Z. Wang, G.J. Klir, Generalized Measure Theory (Springer, New York, 2009)
Z.B. Xu, F.L. Cao, The essential order of approximation for neural networks. Sci. China (Ser. F) 47, 97–112 (2004)
Copyright information
© 2019 Springer Nature Switzerland AG
About this chapter
Cite this chapter
Anastassiou, G.A. (2019). Approximation with Rates by Perturbed Kantorovich–Choquet Neural Network Operators. In: Ordinary and Fractional Approximation by Non-additive Integrals: Choquet, Shilkret and Sugeno Integral Approximators. Studies in Systems, Decision and Control, vol 190. Springer, Cham. https://doi.org/10.1007/978-3-030-04287-5_2
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-04286-8
Online ISBN: 978-3-030-04287-5
eBook Packages: Intelligent Technologies and Robotics (R0)