
Convergence in Orlicz spaces by means of the multivariate max-product neural network operators of the Kantorovich type and applications

  • Danilo Costarelli
  • Anna Rita Sambucini
  • Gianluca Vinti
Original Article

Abstract

In this paper, convergence results in a multivariate setting are proved for a family of neural network operators of the max-product type. In particular, the coefficients, expressed by Kantorovich-type means, allow us to treat the theory in the general frame of Orlicz spaces, which include the \(L^p\)-spaces as a particular case. Examples of sigmoidal activation functions are discussed for the above operators in different cases of Orlicz spaces. Finally, concrete real-world applications are presented in both the univariate and multivariate settings. In particular, the reconstruction and enhancement of biomedical (vascular) images is discussed in detail.
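To make the object of study concrete, the following is a minimal univariate sketch of a Kantorovich-type max-product neural network operator on \([0,1]\), in the spirit of the operators discussed above: the usual sum over the network's units is replaced by a maximum, the coefficients are integral means of \(f\) over the cells \([k/n,(k+1)/n]\) (the Kantorovich modification), and the density kernel is generated by the logistic sigmoid. The function names and the midpoint quadrature are illustrative choices, not the authors' code.

```python
import numpy as np

def phi(x):
    """Density kernel generated by the logistic sigmoid:
    phi(x) = (sigma(x + 1) - sigma(x - 1)) / 2."""
    sigma = lambda t: 1.0 / (1.0 + np.exp(-t))
    return 0.5 * (sigma(x + 1.0) - sigma(x - 1.0))

def max_product_kantorovich(f, n, x, samples=64):
    """Kantorovich-type max-product operator on [0, 1]:

        K_n(f)(x) = max_k [ n * int_{k/n}^{(k+1)/n} f(u) du ] * phi(n x - k)
                    -----------------------------------------------------
                              max_k phi(n x - k)

    The integral mean on each cell is approximated at `samples` midpoints."""
    num, den = -np.inf, -np.inf
    for k in range(n):
        # midpoint approximation of the mean of f on [k/n, (k+1)/n]
        u = (k + (np.arange(samples) + 0.5) / samples) / n
        mean_f = f(u).mean()
        w = phi(n * x - k)
        num = max(num, mean_f * w)
        den = max(den, w)
    return num / den
```

For continuous \(f\), \(K_n(f)(x) \to f(x)\) as \(n\) grows; e.g. for \(f(u) = u\) at \(x = 0.5\) with \(n = 30\) the sketch returns a value close to 0.5, and constants are reproduced exactly since numerator and denominator share the same maximizing kernel value.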

Keywords

Sigmoidal function · Multivariate max-product neural network operator · Orlicz space · Modular convergence · Neurocomputing process · Data modeling · Image processing

Mathematics Subject Classification

41A25 · 41A05 · 41A30 · 47A58

Acknowledgements

The authors would like to thank the referees for their useful suggestions which led us to insert the section devoted to real-world applications.

Compliance with ethical standards

Conflict of interest

The authors declare that they have no conflict of interest.

Ethical statement

Ethical approval was waived considering that the CT images analyzed were anonymized and the results did not influence any clinical judgment.


Copyright information

© Springer-Verlag London Ltd., part of Springer Nature 2019

Authors and Affiliations

  1. Department of Mathematics and Computer Sciences, Università degli Studi di Perugia, Perugia, Italy
