Abstract

This chapter determines the rate of convergence to the unit operator of perturbed Kantorovich–Choquet univariate and multivariate normalized neural network operators with one hidden layer. The rates are expressed through the univariate and multivariate moduli of continuity of the function involved, or of its high-order derivatives, and they appear on the right-hand sides of the associated univariate and multivariate Jackson-type inequalities. The activation function is very general; in particular, it can be derived from any univariate or multivariate sigmoid or bell-shaped function. The right-hand sides of the convergence inequalities do not depend on the activation function. This chapter follows [1].
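For orientation, the two ingredients named above can be made concrete. The display below recalls the standard first modulus of continuity and the generic shape of a Jackson-type estimate; this template is illustrative only, since the exact inequalities, constants, and exponents in the chapter depend on the perturbation and on the Choquet set function.

```latex
% First (uniform) modulus of continuity of f on [a,b]:
\omega_{1}(f,\delta) \;=\; \sup_{\substack{x,y \in [a,b] \\ |x-y| \le \delta}} \bigl| f(x) - f(y) \bigr|, \qquad \delta > 0.

% Generic shape of a Jackson-type estimate for a normalized
% one-hidden-layer operator K_n (illustrative template, 0 < \alpha < 1):
\bigl\| K_{n}(f) - f \bigr\|_{\infty} \;\le\; c \, \omega_{1}\!\left( f, \frac{1}{n^{1-\alpha}} \right).
```

The numerical flavor of such operators can also be sketched. The following is a minimal sketch of the classical normalized (non-Choquet) one-hidden-layer operator with a bell-shaped kernel derived from the logistic sigmoid, in the spirit of [2, 5]; the names normalized_nn_operator and modulus_of_continuity are illustrative, and the chapter's perturbed Kantorovich–Choquet variant would further replace the point samples f(k/n) by Choquet-integral averages with respect to a monotone, submodular set function.

```python
import numpy as np

def sigmoid(x):
    # Numerically stable logistic function: 1/(1 + e^(-x)) = (1 + tanh(x/2))/2.
    return 0.5 * (1.0 + np.tanh(0.5 * x))

def bell(x):
    # Bell-shaped kernel derived from the sigmoid: b(x) = s(x + 1) - s(x - 1).
    return sigmoid(x + 1.0) - sigmoid(x - 1.0)

def normalized_nn_operator(f, x, n, alpha=0.5):
    # Classical normalized one-hidden-layer operator (non-Choquet sketch):
    #   F_n(f)(x) = sum_k f(k/n) b(n^(1-alpha)(x - k/n)) / sum_k b(n^(1-alpha)(x - k/n)),
    # with nodes k/n for k = -n^2, ..., n^2. The normalization is what lets
    # the error bound be stated independently of the kernel.
    nodes = np.arange(-n * n, n * n + 1) / n
    w = bell(n ** (1.0 - alpha) * (x - nodes))
    return np.dot(w, f(nodes)) / np.sum(w)

def modulus_of_continuity(f, delta, a=-1.0, b=1.0, m=2000):
    # Empirical omega_1(f, delta) on [a, b], estimated on a fine grid.
    xs = np.linspace(a, b, m)
    ys = np.clip(xs + delta, a, b)
    return np.max(np.abs(f(xs) - f(ys)))

if __name__ == "__main__":
    f = np.cos
    for n in (10, 50, 200):
        grid = np.linspace(-1.0, 1.0, 101)
        err = max(abs(normalized_nn_operator(f, x, n) - f(x)) for x in grid)
        om = modulus_of_continuity(f, n ** -0.5)
        print(f"n={n:4d}  sup-error={err:.3e}  omega_1(f, n^(-1/2))={om:.3e}")
```

As n grows, the printed sup-error shrinks together with omega_1(f, n^(-1/2)), which is the qualitative behavior the Jackson-type inequalities quantify.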


References

1. G.A. Anastassiou, Quantitative approximation by perturbed Kantorovich–Choquet neural network operators. Revista de la Real Academia de Ciencias Exactas, Físicas y Naturales. Serie A. Matemáticas (accepted for publication, 2018)
2. G.A. Anastassiou, Rate of convergence of some neural network operators to the unit-univariate case. J. Math. Anal. Appl. 212, 237–262 (1997)
3. G.A. Anastassiou, Rate of convergence of some multivariate neural network operators to the unit. Comput. Math. Appl. 40, 1–19 (2000)
4. G.A. Anastassiou, Quantitative Approximations (Chapman and Hall/CRC, New York, 2001)
5. G.A. Anastassiou, Rate of convergence of some neural network operators to the unit-univariate case, revisited. Mat. Vesnik 65(4), 511–518 (2013)
6. G.A. Anastassiou, Rate of convergence of some multivariate neural network operators to the unit, revisited. J. Comput. Anal. Appl. 15(7), 1300–1309 (2013)
7. A.R. Barron, Universal approximation bounds for superpositions of a sigmoidal function. IEEE Trans. Inf. Theory 39, 930–945 (1993)
8. F.L. Cao, T.F. Xie, Z.B. Xu, The estimate for approximation error of neural networks: a constructive approach. Neurocomputing 71, 626–630 (2008)
9. P. Cardaliaguet, G. Euvrard, Approximation of a function and its derivative with a neural network. Neural Netw. 5, 207–220 (1992)
10. Z. Chen, F. Cao, The approximation operators with sigmoidal functions. Comput. Math. Appl. 58, 758–765 (2009)
11. T.P. Chen, H. Chen, Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE Trans. Neural Netw. 6, 911–917 (1995)
12. G. Choquet, Theory of capacities. Ann. Inst. Fourier (Grenoble) 5, 131–295 (1954)
13. C.K. Chui, X. Li, Approximation by ridge functions and neural networks with one hidden layer. J. Approx. Theory 70, 131–141 (1992)
14. D. Costarelli, R. Spigler, Approximation results for neural network operators activated by sigmoidal functions. Neural Netw. 44, 101–106 (2013)
15. D. Costarelli, R. Spigler, Multivariate neural network operators with sigmoidal activation functions. Neural Netw. 48, 72–77 (2013)
16. G. Cybenko, Approximation by superpositions of a sigmoidal function. Math. Control Signals Syst. 2, 303–314 (1989)
17. D. Denneberg, Non-additive Measure and Integral (Kluwer, Dordrecht, 1994)
18. S. Ferrari, R.F. Stengel, Smooth function approximation using neural networks. IEEE Trans. Neural Netw. 16, 24–38 (2005)
19. K.I. Funahashi, On the approximate realization of continuous mappings by neural networks. Neural Netw. 2, 183–192 (1989)
20. S. Gal, Uniform and pointwise quantitative approximation by Kantorovich–Choquet type integral operators with respect to monotone and submodular set functions. Mediterr. J. Math. 14(5), Art. 205, 12 pp. (2017)
21. N. Hahm, B.I. Hong, An approximation by neural networks with a fixed weight. Comput. Math. Appl. 47, 1897–1903 (2004)
22. S. Haykin, Neural Networks: A Comprehensive Foundation, 2nd edn. (Prentice Hall, New York, 1998)
23. K. Hornik, M. Stinchcombe, H. White, Multilayer feedforward networks are universal approximators. Neural Netw. 2, 359–366 (1989)
24. K. Hornik, M. Stinchcombe, H. White, Universal approximation of an unknown mapping and its derivatives using multilayer feedforward networks. Neural Netw. 3, 551–560 (1990)
25. M. Leshno, V.Y. Lin, A. Pinkus, S. Schocken, Multilayer feedforward networks with a nonpolynomial activation function can approximate any function. Neural Netw. 6, 861–867 (1993)
26. V. Maiorov, R.S. Meir, Approximation bounds for smooth functions in \(C(R^{d})\) by neural and mixture networks. IEEE Trans. Neural Netw. 9, 969–978 (1998)
27. Y. Makovoz, Uniform approximation by neural networks. J. Approx. Theory 95, 215–228 (1998)
28. W. McCulloch, W. Pitts, A logical calculus of the ideas immanent in nervous activity. Bull. Math. Biophys. 7, 115–133 (1943)
29. H.N. Mhaskar, C.A. Micchelli, Approximation by superposition of a sigmoidal function. Adv. Appl. Math. 13, 350–373 (1992)
30. H.N. Mhaskar, C.A. Micchelli, Degree of approximation by neural networks with a single hidden layer. Adv. Appl. Math. 16, 151–183 (1995)
31. T.M. Mitchell, Machine Learning (WCB-McGraw-Hill, New York, 1997)
32. S. Suzuki, Constructive function approximation by three-layer artificial neural networks. Neural Netw. 11, 1049–1058 (1998)
33. Z. Wang, G.J. Klir, Generalized Measure Theory (Springer, New York, 2009)
34. Z.B. Xu, F.L. Cao, The essential order of approximation for neural networks. Sci. China Ser. F 47, 97–112 (2004)

Author information

Correspondence to George A. Anastassiou.


Copyright information

© 2019 Springer Nature Switzerland AG

About this chapter

Cite this chapter

Anastassiou, G.A. (2019). Approximation with Rates by Perturbed Kantorovich–Choquet Neural Network Operators. In: Ordinary and Fractional Approximation by Non-additive Integrals: Choquet, Shilkret and Sugeno Integral Approximators. Studies in Systems, Decision and Control, vol 190. Springer, Cham. https://doi.org/10.1007/978-3-030-04287-5_2
