The Convergence Rate for a K-Functional in Learning Theory

Open Access
Research Article
Part of the following topical collections:
  1. Inequalities in the A-Harmonic Equations and the Related Topics

Abstract

It is known that in the field of learning theory based on reproducing kernel Hilbert spaces, upper bound estimates for a K-functional are needed. In the present paper, upper bounds for the K-functional on the unit sphere are estimated with spherical harmonics approximation. The results show that the convergence rate of the K-functional depends upon the smoothness of both the approximated function and the reproducing kernels.

Keywords

Spherical harmonics · Tikhonov regularization · Cauchy inequality · Jacobi weight · Mercer kernel

1. Introduction

It is known that the goal of learning theory is to approximate a function (or some function features) from data samples.

Let X be a compact subset of the d-dimensional Euclidean space ℝ^d, and let Y ⊆ ℝ. Then, learning theory aims to find a function f: X → Y relating the input x ∈ X to the output y ∈ Y (see [1, 2, 3]). The function f is determined by a probability distribution ρ on Z = X × Y, where ρ_X is the marginal distribution on X and ρ(y|x) is the conditional probability of y for a given x.

Generally, the distribution ρ is known only through a set of samples z = {(x_i, y_i)}_{i=1}^m drawn independently according to ρ. Given the sample z, the regression problem based on support vector machine (SVM) learning is to find a function f_z: X → Y such that f_z(x) is a good estimate of y when a new input x is provided. The binary classification problem based on SVM learning is to find a function C: X → {−1, 1} which divides X into two parts. Here C is often induced by a real-valued function f in the form C = sgn(f), where sgn(f)(x) = 1 if f(x) ≥ 0 and sgn(f)(x) = −1 otherwise. The functions f_z are often generated from the following Tikhonov regularization scheme (see, e.g., [4, 5, 6, 7, 8, 9]) associated with a reproducing kernel Hilbert space (RKHS) H_K (defined below) and a sample z:

f_z = arg min_{f ∈ H_K} { (1/m) ∑_{i=1}^m V(y_i, f(x_i)) + λ ||f||_K^2 },   (1.1)

where λ is a positive constant called the regularization parameter and V(y, f(x)) = (1 − y f(x))_+^q (q ≥ 1) is called the q-norm SVM loss.
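As a sketch of how such a regularization scheme behaves in practice, the snippet below solves the analogous least-squares problem over an RKHS (an assumption made here for a closed-form solution via the representer theorem; the q-norm SVM loss of the paper would require an iterative solver) with a Gaussian Mercer kernel. All function names and parameter values are illustrative.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    # Gram matrix of the Gaussian (Mercer) kernel K(x, y) = exp(-|x-y|^2 / (2 sigma^2))
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def tikhonov_regression(X, y, lam, sigma=1.0):
    """Minimize (1/m) * sum_i (f(x_i) - y_i)^2 + lam * ||f||_K^2 over f in H_K.
    By the representer theorem, f_z = sum_i alpha_i K(x_i, .), where
    alpha = (K + lam * m * I)^{-1} y."""
    m = len(X)
    K = gaussian_kernel(X, X, sigma)
    alpha = np.linalg.solve(K + lam * m * np.eye(m), y)
    return lambda Xnew: gaussian_kernel(Xnew, X, sigma) @ alpha

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, (50, 1))
y = np.sin(np.pi * X[:, 0]) + 0.05 * rng.standard_normal(50)
f_z = tikhonov_regression(X, y, lam=1e-3)   # small lambda: near-interpolation
err = np.max(np.abs(f_z(X) - y))            # training error stays small
```

Larger values of the regularization parameter trade data fit for a smaller RKHS norm, exactly the balance the K-functional quantifies.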

In addition, the Tikhonov regularization scheme involving an offset b ∈ ℝ (see, e.g., [4, 10, 11]) can be presented in a way similar to (1.1):

We are in a position to define the reproducing kernel Hilbert space. A function K: X × X → ℝ is called a Mercer kernel if it is continuous, symmetric, and positive semidefinite; that is, for any finite set of distinct points {x_1, …, x_m} ⊂ X, the matrix (K(x_i, x_j))_{i,j=1}^m is positive semidefinite.
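A minimal numerical illustration of this definition, assuming the Gaussian kernel as a standard example of a Mercer kernel: the Gram matrix at any finite point set is symmetric, and its smallest eigenvalue is nonnegative up to floating-point round-off.

```python
import numpy as np

def gram(points, kernel):
    # Gram matrix (K(x_i, x_j))_{i,j} at a finite set of points
    m = len(points)
    return np.array([[kernel(points[i], points[j]) for j in range(m)]
                     for i in range(m)])

# the Gaussian kernel is continuous, symmetric, and positive semidefinite
k = lambda x, y: np.exp(-np.sum((x - y) ** 2))

rng = np.random.default_rng(1)
pts = rng.standard_normal((20, 3))      # 20 distinct random points in R^3
G = gram(pts, k)

symmetric = np.allclose(G, G.T)
eigmin = np.linalg.eigvalsh(G).min()    # nonnegative up to round-off
```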

The reproducing kernel Hilbert space (RKHS) H_K (see [12]) associated with the Mercer kernel K is defined to be the closure of the linear span of the set of functions {K_x := K(x, ·) : x ∈ X} with the inner product ⟨·, ·⟩_K satisfying ⟨K_x, K_y⟩_K = K(x, y) and the reproducing property

If Open image in new window , then Open image in new window . Denote by Open image in new window the space of continuous functions on Open image in new window with the norm Open image in new window . Let Open image in new window . Then the reproducing property tells us that

It is easy to see that Open image in new window is a subset of Open image in new window . We say that Open image in new window is a universal kernel if for any compact subset Open image in new window is dense in Open image in new window (see [13, Page 2652]).

Let Open image in new window be a given discrete set of finitely many points. Then, we may define an RKHS Open image in new window by the linear span of the set of functions Open image in new window . It is easy to see that Open image in new window and that for any Open image in new window there holds Open image in new window

Define Open image in new window and Open image in new window where the minimum is taken over all measurable functions. Then, to estimate the explicit learning rate, one needs to estimate the regularization errors (see, e.g., [4, 7, 9, 14])

The convergence rate of (1.5) is controlled by the K-functional (see, e.g., [9])

and (1.6) is controlled by another K-functional (see, e.g., [4])

where Open image in new window with

We notice that, on one hand, the K-functionals (1.7) and (1.8) are modifications of the K-functional of interpolation theory (see [15]) because of the interpolation relation (1.4). On the other hand, they are different from the usual K-functionals (see, e.g., [16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30]) because of the term Open image in new window . However, they do share some similarities. For example, if Open image in new window is a universal kernel, then Open image in new window is dense in Open image in new window (see, e.g., [31]). Moreover, some classical function spaces such as the polynomial spaces (see [2, 32]) and even some Sobolev spaces may be regarded as RKHSs (see, e.g., [33]).

In learning theory we often require Open image in new window and Open image in new window for some Open image in new window (see, e.g., [1, 7, 14]). Many results on this topic have been achieved. Using the weighted Durrmeyer operators, [8, 9] showed this decay by taking Open image in new window to be the algebraic polynomial kernels on Open image in new window or on the simplex in Open image in new window .

However, in the general case, the convergence of the K-functional (1.8) should also be considered, since the offset often influences the solution of the learning algorithms (see, e.g., [6, 11]). Hence, the purpose of this paper is twofold. One is to provide the convergence rates of (1.7) and (1.8) when Open image in new window is a general Mercer kernel on the unit sphere Open image in new window and Open image in new window . The other is how to construct functions of the type of

to obtain the convergence rate of (1.8). The translation networks constructed in [34, 35, 36, 37] have the form of (1.10), and the zonal networks constructed in [38, 39] have the form of (1.10) with Open image in new window . So the methods used in these references may be applied here to estimate the convergence rates of (1.7) and (1.8) if one can bound the term Open image in new window

In the present paper, we shall give the convergence rates of (1.7) and (1.8) for a general kernel defined on the unit sphere Open image in new window and Open image in new window , with Open image in new window being the usual Lebesgue measure on Open image in new window . If there is a distortion between Open image in new window and Open image in new window , the convergence rate of (1.7)-(1.8) in the general case may be obtained following the approach of [1, 8].

The rest of this paper is organized as follows. In Section 2, we restate some notation on spherical harmonics and present the main results. Section 3 collects some useful lemmas: the approximation order of the de la Vallée Poussin means of spherical harmonics, the Gauss integral formula and the Marcinkiewicz–Zygmund inequalities with respect to scattered data obtained by G. Brown and F. Dai, and a result on zonal network approximation provided by H. N. Mhaskar. A weighted norm estimate for Mercer kernel matrices on the unit sphere is given in Lemma 3.8. Our main results are proved in the last section.

Throughout the paper, we shall write Open image in new window if there exists a constant Open image in new window such that Open image in new window . We write Open image in new window if Open image in new window and Open image in new window .

2. Notations and Results

To state the results of this paper, we need some notations and results on spherical harmonics.

2.1. Notations

For integers Open image in new window , Open image in new window , the class of all univariate algebraic polynomials of degree Open image in new window defined on Open image in new window is denoted by Open image in new window , the class of all spherical harmonics of degree Open image in new window will be denoted by Open image in new window , and the class of all spherical harmonics of degree Open image in new window will be denoted by Open image in new window . The dimension of Open image in new window is given by (see [40, Page 65])

and that of Open image in new window is Open image in new window . One has the following well-known addition formula (see [41, Page 10, Theorem Open image in new window ]):

where Open image in new window is the degree- Open image in new window generalized Legendre polynomial. The Legendre polynomials are normalized so that Open image in new window and satisfy the orthogonality relations
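The displayed orthogonality relations did not survive extraction. As a hedged sketch: for the two-sphere the generalized Legendre polynomials reduce to the classical Legendre polynomials P_k with P_k(1) = 1, whose orthogonality ∫_{−1}^{1} P_n(t) P_m(t) dt = 2/(2n + 1) δ_{nm} can be verified numerically with Gauss–Legendre quadrature:

```python
import numpy as np
from numpy.polynomial.legendre import Legendre, leggauss

# Gauss-Legendre rule with 50 nodes: exact for polynomials of degree <= 99
t, w = leggauss(50)

def P(n):
    # values of the classical Legendre polynomial P_n at the quadrature nodes
    return Legendre.basis(n)(t)

def inner(n, m):
    # quadrature evaluation of the L^2(-1, 1) inner product <P_n, P_m>
    return np.sum(w * P(n) * P(m))

off_diag = inner(3, 5)   # vanishes by orthogonality
diag = inner(4, 4)       # equals 2 / (2*4 + 1)
```

For higher-dimensional spheres the analogous check would use the Gegenbauer weight (1 − t²)^{(d−3)/2}, matching the Jacobi weights introduced below.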

Define Open image in new window and Open image in new window by taking Open image in new window to be the usual volume element of Open image in new window and the Jacobi weight functions Open image in new window , Open image in new window , Open image in new window , respectively. For any Open image in new window we have the following relation (see [42, Page 312]):

The orthogonal projections Open image in new window of a function Open image in new window on Open image in new window are defined by (see e.g., [43])

where Open image in new window denotes the inner product of Open image in new window and Open image in new window .
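The dimension formula quoted from [40, page 65] did not survive extraction. An equivalent, standard way to compute it counts homogeneous harmonic polynomials as a difference of binomial coefficients, so that on the two-sphere it yields the classical value 2k + 1; a sketch (the function name is illustrative):

```python
from math import comb

def dim_harmonics(d, k):
    """Dimension of the space of spherical harmonics of degree k >= 0 on the
    unit sphere in R^d (d >= 3): homogeneous degree-k polynomials in d
    variables, minus those of the form |x|^2 * q with q of degree k - 2."""
    return comb(k + d - 1, d - 1) - comb(k + d - 3, d - 1)

dims_s2 = [dim_harmonics(3, k) for k in range(5)]   # 2k + 1 on the two-sphere
```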

2.2. Main Results

Let Open image in new window satisfy Open image in new window and Open image in new window . Define

Then, by [44, Chapter 17] we know that Open image in new window is positive semidefinite on Open image in new window and that the right-hand side of (2.6) converges absolutely and uniformly since Open image in new window . Therefore, Open image in new window is a Mercer kernel on Open image in new window . By [13, Theorem Open image in new window ] we know that Open image in new window is a universal kernel on Open image in new window . We suppose that there is a constant Open image in new window depending only on Open image in new window such that for any Open image in new window

Given a finite set Open image in new window , we denote by Open image in new window the cardinality of Open image in new window . For Open image in new window and Open image in new window we say that a finite subset Open image in new window is an Open image in new window -covering of Open image in new window if

where Open image in new window with Open image in new window being the geodesic distance between Open image in new window and Open image in new window .
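To make the covering definition concrete, here is a hedged Monte Carlo sketch (the sampling scheme and the six-point test set are illustrative choices, not from the paper) that estimates the covering radius of a finite subset of the two-sphere: the set is a δ-covering exactly when this radius is at most δ.

```python
import numpy as np

def geodesic(x, y):
    # geodesic (great-circle) distance between unit vectors x and y
    return np.arccos(np.clip(np.dot(x, y), -1.0, 1.0))

def covering_radius(centers, n_test=2000, seed=0):
    """Monte Carlo estimate of sup_x min_i d(x, t_i) over the sphere;
    the set {t_i} is a delta-covering whenever this value is <= delta."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n_test, 3))
    X /= np.linalg.norm(X, axis=1, keepdims=True)   # uniform points on S^2
    return max(min(geodesic(x, c) for c in centers) for x in X)

centers = np.vstack([np.eye(3), -np.eye(3)])   # the six coordinate poles
r = covering_radius(centers)                   # at most arccos(1/sqrt(3))
```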

Let Open image in new window be an integer and Open image in new window a sequence of real numbers. Define the forward difference operators by Open image in new window , Open image in new window , Open image in new window
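Written out with the sign convention Δa_k = a_{k+1} − a_k (an assumption, since the displayed definition did not survive extraction), the r-th forward difference is the r-fold iterate of Δ; it annihilates every sequence that is polynomial in k of degree below r:

```python
def forward_diff(a, r):
    """r-th forward difference of the sequence a: iterate
    (Delta a)_k = a_{k+1} - a_k a total of r times."""
    for _ in range(r):
        a = [a[k + 1] - a[k] for k in range(len(a) - 1)]
    return a

quad = [k * k for k in range(8)]   # a_k = k^2, a degree-2 sequence
d2 = forward_diff(quad, 2)         # constant sequence: all entries equal 2
d3 = forward_diff(quad, 3)         # identically zero
```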

We say a finite subset Open image in new window is a subset of interpolatory type if for any real numbers Open image in new window there is a Open image in new window such that Open image in new window , Open image in new window . Such subsets may be found in [45, 46].

Let Open image in new window be the set of all sequences Open image in new window for which Open image in new window and Open image in new window the set of all sequences Open image in new window for which Open image in new window

Let Open image in new window be a real number, Open image in new window Then, we say Open image in new window if there is a function Open image in new window such that

We now give the results of this paper.

Theorem 2.1.

Suppose there is a constant Open image in new window depending only on Open image in new window such that Open image in new window is a subset of interpolatory type and a Open image in new window -covering of Open image in new window satisfying Open image in new window , with Open image in new window and Open image in new window being a given positive integer. Let Open image in new window be an integer, let Open image in new window be a real number such that there is Open image in new window and Open image in new window , and let Open image in new window satisfy Open image in new window and Open image in new window . Let Open image in new window be the reproducing kernel space reproduced by Open image in new window and the kernel (2.6), and let Open image in new window . Then there is a constant Open image in new window depending only on Open image in new window and Open image in new window and a function Open image in new window , with Open image in new window and Open image in new window a constant, such that

The functions Open image in new window satisfying the conditions of Theorem 2.1 may be found in [39, Page 357].

Corollary 2.2.

Under the conditions of Theorem 2.1, if Open image in new window , then

Corollary 2.2 shows that the convergence rate of the K-functional (1.8) is controlled by the smoothness of both the reproducing kernel and the approximated function Open image in new window .

Theorem 2.3.

Suppose there is a constant Open image in new window depending only on Open image in new window such that Open image in new window is a subset of interpolatory type and a Open image in new window -covering of Open image in new window satisfying Open image in new window , with Open image in new window and Open image in new window being a given positive integer. Let Open image in new window be the reproducing kernel space reproduced by Open image in new window and the kernel (2.6), with Open image in new window satisfying Open image in new window and Open image in new window . Then, for Open image in new window and Open image in new window there holds

where Open image in new window

3. Some Lemmas

To prove Theorems 2.1 and 2.3, we need some lemmas. The first one concerns the Gauss integral formula and the Marcinkiewicz–Zygmund inequalities.

Lemma 3.1 (see [47, 48, 49, 50]).

There exist constants Open image in new window depending only on Open image in new window such that for any positive integer Open image in new window and any Open image in new window -covering Open image in new window of Open image in new window satisfying Open image in new window , there exists a set of real numbers Open image in new window , Open image in new window such that

where Open image in new window the constants of equivalence depending only on Open image in new window , Open image in new window , Open image in new window , and Open image in new window when Open image in new window is small. Here one employs the slight abuse of notation that Open image in new window

The second lemma we shall use is the Nikolskii inequality for the spherical harmonics.

Lemma 3.2 (see [38, 45, 49, 51, 52]).

If Open image in new window , Open image in new window , then one has the following Nikolskii inequality:

where the constant Open image in new window depends only on Open image in new window .

We now restate the general approximation framework for the Cesàro means and de la Vallée Poussin means provided by Dai and Ditzian (see [53]).

Lemma 3.3.

Let Open image in new window be a positive measure on Open image in new window , and let Open image in new window be a sequence of finite-dimensional spaces satisfying the following:

(I) Open image in new window .

(II) Open image in new window is orthogonal to Open image in new window (in Open image in new window ) when Open image in new window

(III) Open image in new window is dense in Open image in new window for all Open image in new window .

(IV) Open image in new window is the collection of the constants.

The Cesàro means Open image in new window of Open image in new window is given by

and Open image in new window is an orthogonal basis of Open image in new window in Open image in new window . One sets, for a given Open image in new window , Open image in new window and Open image in new window if there exists Open image in new window such that Open image in new window

Let Open image in new window be defined as Open image in new window for Open image in new window and Open image in new window for Open image in new window , and let it be a nonnegative and nonincreasing function. Open image in new window are the de la Vallée Poussin means defined as

Lemma 3.3 yields the following Lemma 3.4.

Lemma 3.4.

Let Open image in new window be the function defined as in Lemma 3.3. Define two kinds of operators, respectively, by

Then, Open image in new window for any Open image in new window and Open image in new window for any Open image in new window . Moreover,

Proof.

By [54, Lemma Open image in new window ] we know that Open image in new window for some Open image in new window ; hence, (3.9) holds by (3.7). By [19, Theorem Open image in new window ] we know that Open image in new window for Open image in new window ; hence, (3.10) holds by (3.7).
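The role of the cutoff function in Lemmas 3.3 and 3.4 can be sketched as a filter on expansion coefficients. The piecewise-linear η below is only one admissible choice (the paper's precise η did not survive extraction): it equals 1 on [0, 1] and 0 from 2 onward, so projections of degree at most n pass through unchanged and the means reproduce such elements exactly.

```python
import numpy as np

def eta(t):
    # nonnegative, nonincreasing cutoff: 1 on [0, 1], 0 on [2, inf)
    return np.clip(2.0 - np.asarray(t, dtype=float), 0.0, 1.0)

def vallee_poussin_filter(coeffs, n):
    """Weight the k-th expansion coefficient by eta(k / n): coefficients
    with k <= n are untouched, those with k >= 2n are annihilated."""
    k = np.arange(len(coeffs))
    return coeffs * eta(k / n)

c = np.ones(10)
filtered = vallee_poussin_filter(c, 4)
```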

Let Open image in new window be a finite set. Then we call Open image in new window an M-Z quadrature measure of order Open image in new window if (3.1) and (3.2) hold for Open image in new window . By this definition, one knows that the finite set Open image in new window in Lemma 3.1 is an M-Z quadrature measure of order Open image in new window .

Define an operator as

Then, we have the following results.

Lemma 3.5 (see [39]).

For a given integer Open image in new window , let Open image in new window be an M-Z quadrature measure of order Open image in new window , Open image in new window , Open image in new window an integer, Open image in new window , Open image in new window , where Open image in new window satisfies Open image in new window and satisfies Open image in new window if Open image in new window and Open image in new window if Open image in new window . Let Open image in new window , defined in Lemma 3.3, be a nonnegative and nonincreasing function, and let Open image in new window satisfy Open image in new window . For Open image in new window , Open image in new window , let Open image in new window consist of the Open image in new window for which the derivative of order Open image in new window , that is, Open image in new window , belongs to Open image in new window . Then, there is an operator Open image in new window such that

(i)(see [39, Proposition Open image in new window , (b)]). Open image in new window for Open image in new window

where Open image in new window

(ii) (see [39, Theorem Open image in new window ]). Moreover, if one adds the assumption that Open image in new window , then there are constants Open image in new window and Open image in new window such that

Lemma 3.6 (see e.g., [29, Page 230]).

The following Lemma 3.7 deals with the orthogonality of the generalized Legendre polynomials Open image in new window

Lemma 3.7.

For the generalized Legendre polynomials Open image in new window one has

Proof.

It follows from (2.2).

Lemma 3.8.

Let Open image in new window satisfy (2.7) for Open image in new window and Open image in new window , and let Open image in new window be a finite set satisfying the conditions of Theorem 2.1. Then, there is a constant Open image in new window depending only on Open image in new window such that

Proof.

By the Parseval equality we have

Let Open image in new window satisfy Open image in new window , Open image in new window . Then, by (3.1)

Define Open image in new window . Then, (3.24), (3.10), the Cauchy inequality, and the fact that Open image in new window give
It follows that

The stated inequality thus holds.

4. Proof of the Main Results

We now show Theorems 2.1 and 2.3, respectively.

Proof of Theorem 2.1.

Lemma Open image in new window in [39] gives the following result.

Let Open image in new window , Open image in new window , Open image in new window be an integer, and Open image in new window a sequence of real numbers such that Open image in new window . Then, there exists Open image in new window such that Open image in new window , Open image in new window

Since Open image in new window and Open image in new window we have a Open image in new window such that Open image in new window Hence, Open image in new window and

On the other hand, since
where for Open image in new window , we have by (4.3)
Hence, the above equation and (3.1)-(3.2) give
Then, we know Open image in new window and by (3.9)

It follows by (3.9) that

On the other hand, by the definition of Open image in new window and (3.14) we have for Open image in new window that

Equation (3.2) and the definition of Open image in new window make
The Hölder inequality, part Open image in new window of Lemma 3.5, and the fact that Open image in new window give Open image in new window . Therefore,

Take Open image in new window ; then

Equations (3.2), (3.17), (3.16), and the Cauchy inequality give

Let Open image in new window be the Gamma function. Then, it is well known that Open image in new window Therefore,

Equations (4.14) and (4.4) give
and hence

Since Open image in new window , we have (2.11) by (4.20). Equation (2.12) follows by (4.3), (4.4), and (3.19).
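The Gamma-function facts invoked in the proof above did not survive extraction; the classical candidates are the functional equation Γ(x + 1) = xΓ(x) and the ratio asymptotic Γ(x + a)/Γ(x) ≈ x^a for large x (both assumptions about which identity is meant). They can be checked numerically:

```python
from math import exp, gamma, lgamma

# functional equation Gamma(x + 1) = x * Gamma(x)
lhs = gamma(5.5)
rhs = 4.5 * gamma(4.5)

# ratio asymptotic Gamma(x + a) / Gamma(x) ~ x^a for large x,
# computed via lgamma to avoid the overflow of gamma at large arguments
x, a = 200.0, 0.3
ratio = exp(lgamma(x + a) - lgamma(x))
approx = x ** a
```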

Proof of Corollary 2.2.

By (2.11)-(2.12) one has

Proof of Theorem 2.3.

Replace Open image in new window in Lemma 3.5 with Open image in new window , and still denote by Open image in new window the operator Open image in new window in Lemma 3.5 with Open image in new window and
Since Open image in new window is a spherical harmonic of order Open image in new window , we know by Open image in new window of Lemma 3.5 that Open image in new window are also spherical harmonics of order Open image in new window . Then, (3.2), Open image in new window of Lemma 3.5, (3.3), and (3.16) give

Hence, (3.19) and the above equation give Open image in new window . Equation (2.14) follows from (3.15).

Acknowledgments

This work was supported by the National Natural Science Foundation of China (Grant no. 10871226). The authors thank the reviewers for their very valuable suggestions.

References

  1. Cucker F, Smale S: On the mathematical foundations of learning. Bulletin of the American Mathematical Society 2002, 39(1):1–49.
  2. Cucker F, Zhou D-X: Learning Theory: An Approximation Theory Viewpoint, Cambridge Monographs on Applied and Computational Mathematics. Cambridge University Press, Cambridge, UK; 2007:xii+224.
  3. Vapnik VN: Statistical Learning Theory, Adaptive and Learning Systems for Signal Processing, Communications, and Control. John Wiley & Sons, New York, NY, USA; 1998:xxvi+736.
  4. Chen DR, Wu Q, Ying YM, Zhou DX: Support vector machine soft margin classifiers: error analysis. Journal of Machine Learning Research 2004, 5:1143–1175.
  5. Evgeniou T, Pontil M, Poggio T: Regularization networks and support vector machines. Advances in Computational Mathematics 2000, 13(1):1–50. 10.1023/A:1018946025316
  6. Li Y, Liu Y, Zhu J: Quantile regression in reproducing kernel Hilbert spaces. Journal of the American Statistical Association 2007, 102(477):255–268. 10.1198/016214506000000979
  7. Tong H, Chen D-R, Peng L: Analysis of support vector machines regression. Foundations of Computational Mathematics 2009, 9(2):243–257. 10.1007/s10208-008-9026-0
  8. Tong H, Chen D-R, Peng L: Learning rates for regularized classifiers using multivariate polynomial kernels. Journal of Complexity 2008, 24(5–6):619–631. 10.1016/j.jco.2008.05.008
  9. Zhou D-X, Jetter K: Approximation with polynomial kernels and SVM classifiers. Advances in Computational Mathematics 2006, 25(1–3):323–344.
  10. Chen D, Xiang D-H: The consistency of multicategory support vector machines. Advances in Computational Mathematics 2006, 24(1–4):155–169.
  11. De Vito E, Rosasco L, Caponnetto A, Piana M, Verri A: Some properties of regularized kernel methods. Journal of Machine Learning Research 2004, 5:1363–1390.
  12. Aronszajn N: Theory of reproducing kernels. Transactions of the American Mathematical Society 1950, 68:337–404. 10.1090/S0002-9947-1950-0051437-7
  13. Micchelli CA, Xu Y, Zhang H: Universal kernels. Journal of Machine Learning Research 2006, 7:2651–2667.
  14. Wu Q, Ying Y, Zhou D-X: Multi-kernel regularized classifiers. Journal of Complexity 2007, 23(1):108–134. 10.1016/j.jco.2006.06.007
  15. Bergh J, Löfström J: Interpolation Spaces. Springer, New York, NY, USA; 1976.
  16. Berens H, Lorentz GG: Inverse theorems for Bernstein polynomials. Indiana University Mathematics Journal 1972, 21(8):693–708. 10.1512/iumj.1972.21.21054
  17. Berens H, Li LQ: The Peetre K-moduli and best approximation on the sphere. Acta Mathematica Sinica 1995, 38(5):589–599.
  18. Berens H, Xu Y: K-moduli, moduli of smoothness, and Bernstein polynomials on a simplex. Indagationes Mathematicae 1991, 2(4):411–421. 10.1016/0019-3577(91)90027-5
  19. Chen W, Ditzian Z: Best approximation and K-functionals. Acta Mathematica Hungarica 1997, 75(3):165–208. 10.1023/A:1006543020828
  20. Chen W, Ditzian Z: Best polynomial and Durrmeyer approximation in L_p(S). Indagationes Mathematicae 1991, 2(4):437–452. 10.1016/0019-3577(91)90029-7
  21. Dai F, Ditzian Z: Jackson inequality for Banach spaces on the sphere. Acta Mathematica Hungarica 2008, 118(1–2):171–195. 10.1007/s10474-007-6206-3
  22. Ditzian Z, Zhou X: Optimal approximation class for multivariate Bernstein operators. Pacific Journal of Mathematics 1993, 158(1):93–120.
  23. Ditzian Z, Runovskii K: Averages and K-functionals related to the Laplacian. Journal of Approximation Theory 1999, 97(1):113–139. 10.1006/jath.1997.3262
  24. Ditzian Z: A measure of smoothness related to the Laplacian. Transactions of the American Mathematical Society 1991, 326(1):407–422. 10.2307/2001870
  25. Ditzian Z, Totik V: Moduli of Smoothness, Springer Series in Computational Mathematics. Volume 9. Springer, New York, NY, USA; 1987:x+227.
  26. Ditzian Z: Approximation on Banach spaces of functions on the sphere. Journal of Approximation Theory 2006, 140(1):31–45. 10.1016/j.jat.2005.11.013
  27. Ditzian Z: Fractional derivatives and best approximation. Acta Mathematica Hungarica 1998, 81(4):323–348. 10.1023/A:1006554907440
  28. Schumaker LL: Spline Functions: Basic Theory, Pure and Applied Mathematics. John Wiley & Sons, New York, NY, USA; 1981:xiv+553.
  29. Wang KY, Li LQ: Harmonic Analysis and Approximation on the Unit Sphere. Science Press, Beijing, China; 2000.
  30. Xu Y: Approximation by means of h-harmonic polynomials on the unit sphere. Advances in Computational Mathematics 2004, 21(1–2):37–58.
  31. Smale S, Zhou D-X: Estimating the approximation error in learning theory. Analysis and Applications 2003, 1(1):17–41. 10.1142/S0219530503000089
  32. Sheng B: Estimates of the norm of the Mercer kernel matrices with discrete orthogonal transforms. Acta Mathematica Hungarica 2009, 122(4):339–355. 10.1007/s10474-008-8037-2
  33. Loustau S: Aggregation of SVM classifiers using Sobolev spaces. Journal of Machine Learning Research 2008, 9:1559–1582.
  34. Mhaskar HN, Micchelli CA: Degree of approximation by neural and translation networks with a single hidden layer. Advances in Applied Mathematics 1995, 16(2):151–183. 10.1006/aama.1995.1008
  35. Sheng BH: Approximation of periodic functions by spherical translation networks. Acta Mathematica Sinica, Chinese Series 2007, 50(1):55–62.
  36. Sheng B: On the degree of approximation by spherical translations. Acta Mathematicae Applicatae Sinica, English Series 2006, 22(4):671–680. 10.1007/s10255-006-0341-4
  37. Sheng B, Wang J, Zhou S: A way of constructing spherical zonal translation network operators with linear bounded operators. Taiwanese Journal of Mathematics 2008, 12(1):77–92.
  38. Mhaskar HN, Narcowich FJ, Ward JD: Approximation properties of zonal function networks using scattered data on the sphere. Advances in Computational Mathematics 1999, 11(2–3):121–137.
  39. Mhaskar HN: Weighted quadrature formulas and approximation by zonal function networks on the sphere. Journal of Complexity 2006, 22(3):348–370. 10.1016/j.jco.2005.10.003
  40. Groemer H: Geometric Applications of Fourier Series and Spherical Harmonics, Encyclopedia of Mathematics and Its Applications. Volume 61. Cambridge University Press, Cambridge, UK; 1996:xii+329.
  41. Müller C: Spherical Harmonics, Lecture Notes in Mathematics. Volume 17. Springer, Berlin, Germany; 1966:iv+45.
  42. Lu SZ, Wang KY: Bochner-Riesz Means. Beijing Normal University Press, Beijing, China; 1988.
  43. Wang Y, Cao F: The direct and converse inequalities for Jackson-type operators on spherical cap. Journal of Inequalities and Applications 2009, 16 pages.
  44. Wendland H: Scattered Data Approximation, Cambridge Monographs on Applied and Computational Mathematics. Volume 17. Cambridge University Press, Cambridge, UK; 2005:x+336.
  45. Mhaskar HN, Narcowich FJ, Sivakumar N, Ward JD: Approximation with interpolatory constraints. Proceedings of the American Mathematical Society 2002, 130(5):1355–1364. 10.1090/S0002-9939-01-06240-2
  46. Narcowich FJ, Ward JD: Scattered data interpolation on spheres: error estimates and locally supported basis functions. SIAM Journal on Mathematical Analysis 2002, 33(6):1393–1410. 10.1137/S0036141001395054
  47. Brown G, Dai F: Approximation of smooth functions on compact two-point homogeneous spaces. Journal of Functional Analysis 2005, 220(2):401–423. 10.1016/j.jfa.2004.10.005
  48. Brown G, Feng D, Sheng SY: Kolmogorov width of classes of smooth functions on the sphere. Journal of Complexity 2002, 18(4):1001–1023. 10.1006/jcom.2002.0656
  49. Dai F: Multivariate polynomial inequalities with respect to doubling weights and A_∞ weights. Journal of Functional Analysis 2006, 235(1):137–170. 10.1016/j.jfa.2005.09.009
  50. Mhaskar HN, Narcowich FJ, Ward JD: Spherical Marcinkiewicz-Zygmund inequalities and positive quadrature. Mathematics of Computation 2001, 70(235):1113–1130.
  51. Belinsky E, Dai F, Ditzian Z: Multivariate approximating averages. Journal of Approximation Theory 2003, 125(1):85–105. 10.1016/j.jat.2003.09.005
  52. Kamzolov AI: Approximation of functions on the sphere. Serdica 1984, 10(1):3–10.
  53. Dai F, Ditzian Z: Cesàro summability and Marchaud inequality. Constructive Approximation 2007, 25(1):73–88. 10.1007/s00365-005-0623-8
  54. Dai F: Some equivalence theorems with K-functionals. Journal of Approximation Theory 2003, 121(1):143–157. 10.1016/S0021-9045(02)00059-X

Copyright information

© B.-H. Sheng and D.-H. Xiang. 2010

This article is published under license to BioMed Central Ltd. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Authors and Affiliations

  1. Department of Mathematics, Shaoxing University, Shaoxing, China
  2. Department of Mathematics, Zhejiang Normal University, Jinhua, China
