
Journal of Fourier Analysis and Applications, Volume 25, Issue 6, pp. 3104–3122

Super-Resolution Meets Machine Learning: Approximation of Measures

  • H. N. Mhaskar

Abstract

The problem of super-resolution, in general terms, is to recover a finitely supported measure \(\mu \) given finitely many of its coefficients \(\hat{\mu }(k)\) with respect to some orthonormal system. The interesting case concerns situations where the number of coefficients required is substantially smaller than a power of the reciprocal of the minimal separation among the points in the support of \(\mu \). In this paper, we consider the more severe problem of recovering \(\mu \) approximately without any assumption on \(\mu \) beyond having a finite total variation. In particular, \(\mu \) may be supported on a continuum, so that the minimal separation among the points in its support is 0. A variant of this problem is also of interest in machine learning, as well as in the inverse problem of de-convolution. We define an appropriate notion of distance between the target measure and its recovered version, give an explicit expression for the recovery operator, and estimate the distance between \(\mu \) and its approximation. We show that these estimates are the best possible in many different ways. We also explain why, for a finitely supported measure, the approximation quality of its recovery is bounded from below if the amount of information is smaller than what the super-resolution problem demands.
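The paper gives the recovery operator in closed form. As a rough illustration only (a minimal sketch, not the paper's construction), the following Python snippet forms a filtered trigonometric sum \(\sigma_n(\mu)(x) = \sum_{|k|\le n} h(|k|/n)\,\hat{\mu }(k)\, e^{ikx}\) from finitely many Fourier coefficients of a measure on the torus; the raised-cosine filter, the function names, and the two-point example measure are all illustrative assumptions.

    import numpy as np

    def lowpass(t):
        """Illustrative raised-cosine filter h: equals 1 on [0, 1/2],
        decays to 0 at t = 1, and vanishes beyond (an assumed choice)."""
        t = np.asarray(t, dtype=float)
        taper = 0.5 * (1.0 + np.cos(np.pi * (2.0 * t - 1.0)))
        return np.where(t <= 0.5, 1.0, np.where(t >= 1.0, 0.0, taper))

    def reconstruct(mu_hat, n, xs):
        """Evaluate the filtered sum over |k| <= n of h(|k|/n) mu_hat(k) e^{ikx}
        at the points xs, where mu_hat(k) = integral of e^{-ikt} d mu(t)."""
        ks = np.arange(-n, n + 1)
        weights = lowpass(np.abs(ks) / n)
        coeffs = np.array([mu_hat(k) for k in ks])
        return np.array([(weights * coeffs * np.exp(1j * ks * x)).sum().real
                         for x in xs])

    # Hypothetical example: mu = 0.7*delta_{1.0} + 0.3*delta_{1.2} on [-pi, pi).
    mu_hat = lambda k: 0.7 * np.exp(-1j * k * 1.0) + 0.3 * np.exp(-1j * k * 1.2)
    xs = np.linspace(-np.pi, np.pi, 1024)
    density = reconstruct(mu_hat, n=64, xs=xs)
    # Peaks of `density` near 1.0 and 1.2 localize supp(mu); no minimal-
    # separation assumption is needed for this smoothing to make sense,
    # which is the regime the abstract describes.

The point of the smooth filter is that the associated kernel is localized, so the filtered sum concentrates near the support of \(\mu \) even when \(\mu \) is not a sparse spike train; the paper's estimates quantify, in a suitable metric, how fast such an approximation can converge.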

Keywords

Super-resolution · Machine learning · De-convolution · Data defined spaces · Widths

Mathematics Subject Classification

94A12 · 68T05 · 41A25 · 65J22


Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2019

Authors and Affiliations

  • Institute of Mathematical Sciences, Claremont Graduate University, Claremont, USA
