# Pointwise and Uniform Convergence of Fourier Extensions


## Abstract

Fourier series approximations of continuous but nonperiodic functions on an interval suffer the Gibbs phenomenon, which means there is a permanent oscillatory overshoot in the neighborhoods of the endpoints. Fourier extensions circumvent this issue by approximating the function using a Fourier series that is periodic on a larger interval. Previous results on the convergence of Fourier extensions have focused on the error in the \(L^2\) norm, but in this paper we analyze pointwise and uniform convergence of Fourier extensions (formulated as the best approximation in the \(L^2\) norm). We show that the pointwise convergence of Fourier extensions is more similar to Legendre series than classical Fourier series. In particular, unlike classical Fourier series, Fourier extensions yield pointwise convergence at the endpoints of the interval. Similar to Legendre series, pointwise convergence at the endpoints is slower by an algebraic order of a half compared to that in the interior. The proof is conducted by an analysis of the associated Lebesgue function, and Jackson- and Bernstein-type theorems for Fourier extensions. Numerical experiments are provided. We conclude the paper with open questions regarding the regularized and oversampled least squares interpolation versions of Fourier extensions.

## Keywords

Fourier extension · Lebesgue function · Legendre polynomials on a circular arc · Constructive approximation

## Mathematics Subject Classification

42A10 · 41A17 · 65T40 · 42C15

## 1 Introduction

The Fourier series of a periodic function converges spectrally fast with respect to the number of terms in the series, that is, with an algebraic order that increases with the number of available derivatives and exponentially fast for analytic functions. Furthermore, the truncated Fourier series can be approximated via the fast Fourier transform (FFT) in a fast and stable manner [40]. As such, it is the go-to approach to approximate a periodic function. However, when the function in question is nonperiodic, the situation is very different. Regardless of how smooth this function is, convergence is slow in the \(L^2\) norm and there is a permanent oscillatory overshoot close to the endpoints due to the Gibbs phenomenon [42].

We call the best approximation to *f* from \({\mathcal {H}}_N = \mathrm {span}\left\{ e^{\frac{i\pi }{T}kx} : k = -n,\ldots ,n\right\} \) in the \(L^2(-1,1)\) norm,

$$\begin{aligned} f_N := \mathop {\mathrm {argmin}}\limits _{g \in {\mathcal {H}}_N} \Vert f - g\Vert _{L^2(-1,1)}, \qquad N = 2n+1, \end{aligned}$$

the *n*th Fourier extension of *f* to the periodic interval \([-T,T]\), where \(T > 1\). For the purposes of this paper, other kinds of Fourier extensions, which might come from a discrete sampling of *f* or regularization, are modifications of this.^{1}

There are many approximation schemes that avoid the Gibbs phenomenon. Chebyshev polynomial interpolants such as those implemented in the Chebfun [13, 36] and ApproxFun [30] software packages are extremely successful, so why consider Fourier extensions? First, discrete collocation versions of Fourier extensions sample the function on equispaced or near-equispaced grids, which in some situations are more natural than Chebyshev grids, whose points cluster near the endpoints [5]. Second, the approach generalizes naturally to higher dimensions. If one has a function on a bounded subset \(\Omega \subset {\mathbb {R}}^d\), then one can use multivariate Fourier series that are periodic on a *d*-dimensional bounding box containing \(\Omega \) [8, 18, 27]. Modifications of Fourier extensions that use discrete samples of a function are particularly relevant in this generalization, because the integrals defining the \(L^2(\Omega )\) norm can be difficult to compute.

Fourier extensions can be computed stably in \({\mathcal {O}}(N\log ^2(N))\) floating point operations, with the following important caveats ([20, 23, 26]). Computation of \(f_N\) is equivalent to inversion of the so-called prolate matrix [37], which is a Toeplitz matrix \(G \in {\mathbb {R}}^{N\times N}\) with entries \(G_{k,j} = \mathrm {sinc}\left( (k-j)\frac{\pi }{T}\right) \), with right-hand-side vector \({\mathbf {b}}\in {\mathbb {C}}^N\) with entries \(b_k = \left( \frac{T}{2}\right) ^{\frac{1}{2}}\int _{-1}^1 e^{-\frac{i\pi }{T}kx}f(x)\,\mathrm {d}x\) [26]. The prolate matrix is exponentially ill-conditioned [34, Eq. 63], so computation of the exact Fourier extension is practically impossible, even for moderately sized *N*. However, a truncated singular value decomposition (SVD) solution is only worse than the exact solution (in the \(L^2(-1,1)\) norm) by a small factor \({\mathcal {O}}(\varepsilon ^{\frac{1}{2}})\) in the limit as \(N \rightarrow \infty \), where \(\varepsilon > 0\) is the truncation parameter [3, 4]. Furthermore, using an oversampled least squares interpolation in equispaced points in \([-1,1]\) can bring this down to \({\mathcal {O}}(\varepsilon )\) for a sufficient oversampling rate [2, 3, 4]. At the heart of these facts is the observation that while the Fourier basis on \([-T,T]\) does not form a Schauder basis for \(L^2(-1,1)\), it satisfies the weaker conditions of a *frame* [3].
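The ill-conditioning of the prolate matrix and the truncated-SVD remedy are easy to reproduce; a minimal sketch (our code, function names ours, with \(T = 2\) assumed):

```python
import numpy as np

def prolate_matrix(N, T=2.0):
    """Prolate matrix G_{k,j} = sinc((k - j) * pi / T) with sinc(t) = sin(t)/t.
    (np.sinc(u) = sin(pi*u)/(pi*u), so we pass (k - j)/T.)"""
    k = np.arange(N)
    return np.sinc((k[:, None] - k[None, :]) / T)

def tsvd_solve(A, b, eps=1e-12):
    """Truncated-SVD least-squares solve: discard singular values below eps * s_max."""
    U, s, Vt = np.linalg.svd(A)
    keep = s > eps * s[0]
    return Vt[keep].T @ ((U[:, keep].T @ b) / s[keep])

G = prolate_matrix(40)
s = np.linalg.svd(G, compute_uv=False)
print(s[0] / s[-1])   # condition number; grows exponentially with N
```

Roughly the first \(N/T\) singular values are order one; the rest plunge exponentially, which is why the exact system is numerically singular even for moderate *N*.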

Fourier extensions which approximate a truncated SVD solution rather than the exact solution are called *regularized* Fourier extensions. An approximate SVD of the prolate matrix can be computed in \({\mathcal {O}}(N\log ^2(N))\) operations using the FFT and exploiting the so-called plunge region in the profile of its singular values [20]. This is a vast improvement on \({\mathcal {O}}(N^3)\) operations for a standard SVD. Fast algorithms for regularized, oversampled least squares interpolation Fourier extensions were developed in [26], building on the work of Lyon [23].

Previous convergence results on Fourier extensions have focused on convergence in the \(L^2\) norm, because the Fourier extension by definition minimizes the error in the \(L^2\) norm over the approximation space. Convergence in \(L^2\) of algebraic order *k* for functions in the Sobolev space \(H^k(-1,1)\) was proved by Adcock and Huybrechs [4, Thm. 2.1]. It follows immediately that convergence is superalgebraic for smooth functions. Exponential convergence in \(L^2\) and \(L^\infty \) norms for analytic functions was proved by Huybrechs for \(T =2\) [17] and by Adcock et al. for general \(T > 1\) [4]. The proofs of exponential convergence appeal to connections between the Fourier extension problem and the sub-range Chebyshev polynomials [4], for which series approximations converge at an exponential rate which depends on analyticity in Bernstein ellipses in the complex plane. Regarding pointwise convergence of Fourier extensions for nonanalytic functions, there are no proofs in the literature. Some numerical exploration of pointwise convergence appears in [9, Sec. 2], but a rigorous theoretical foundation is lacking.

### 1.1 Summary of New Results

Our main new results concern algebraic rates of pointwise and uniform convergence. For a function *f* in the Hölder space \(C^{k,\alpha }([-1,1])\), we prove that

$$\begin{aligned} \sup _{x \in [a,b]} |f(x) - f_N(x)| = {\mathcal {O}}\left( N^{-k-\alpha }\log N\right) \quad \text {for each } [a,b] \subset (-1,1), \qquad \sup _{x \in [-1,1]} |f(x) - f_N(x)| = {\mathcal {O}}\left( N^{\frac{1}{2}-k-\alpha }\right) \end{aligned}$$

(see Theorem 3.2).

This factor of \(N^{-k-\alpha }\) can be pessimistic if *f* is least regular at the boundary; in Sect. 5 we discuss how a weighted form of regularity (as opposed to Hölder regularity taken uniformly over the interval \([-1,1]\)) might yield a more natural correspondence between regularity and convergence rate. This is precisely the case in best polynomial approximation on an interval, where weighted moduli of continuity have a tight correspondence with best approximation errors [11, Ch. 7, Thm. 7.7].

From Eq. (2), it is immediate that if \(f \in C^{\alpha }([-1,1])\) where \(\alpha \in (0,1)\), then \(f_N\) converges to *f* uniformly in any subinterval \([a,b] \subset (-1,1)\), and if \(\alpha > \frac{1}{2}\), then we get uniform convergence over the whole interval \([-1,1]\).

We also prove a local pointwise convergence result, which states that if \(f \in L^2(-1,1)\), but *f* is uniformly Dini–Lipschitz in a subinterval [*a*, *b*], then the Fourier extension converges uniformly in compact subintervals of (*a*, *b*) (see Theorem 3.5). This is done by generalizing a localization theorem of Freud on convergence of orthogonal polynomial expansions in \([-1,1]\) (see Sect. 6).

A key insight of this paper is that the kernel associated with approximation by Fourier extension has an explicit formula that is related to the Christoffel–Darboux kernel of the *Legendre polynomials on a circular arc* (see Lemma 4.3). The asymptotics of these polynomials were derived by Krasovsky using Riemann–Hilbert analysis [10, 21, 22], which we use to derive asymptotics of the kernel. The Lebesgue function for Fourier extensions is estimated using these asymptotics in Theorem 4.1. We find that the Lebesgue function is \({\mathcal {O}}(\log N)\) in the interior of \([-1,1]\) and \({\mathcal {O}}(N^{\frac{1}{2}})\) globally. This is just as for the Lebesgue function of Legendre series, and distinct from classical Fourier series, whose Lebesgue function is \({\mathcal {O}}(\log N)\) over the full periodic interval.

The results of this paper would become more interesting if they could be extended to regularized and oversampled interpolation versions of Fourier extensions, because as discussed above, these are the versions for which stable and efficient algorithms have been developed. The multivariate case is another direction in which this line of inquiry could be extended. We briefly discuss such future research in Sect. 8.

The paper is structured as follows. Section 2 recounts the known results about convergence of Fourier extensions in the \(L^2\) norm. Section 3 gives new pointwise and uniform convergence theorems along with proofs that depend on results proved in the self-contained Sects. 4, 5, and 6. Section 4 is on the Lebesgue function for Fourier extensions. Section 5 is on uniform best approximation for Fourier extensions, in which Jackson- and Bernstein-type theorems are proved. Section 6 is on an analogue of Freud’s localization theorem for Fourier extensions. Section 7 provides the reader with results from numerical experiments, and Sect. 8 provides discussion. The appendix contains a derivation of asymptotics of Legendre polynomials on a circular arc, on the arc itself, from the Riemann–Hilbert analysis of Krasovsky [10, 21, 22].

## 2 Convergence of Fourier Extensions in \(L^2\)

In this section we summarize known results regarding convergence in the \(L^2\) norm.

### 2.1 Exponential Convergence

### Theorem 2.1

If *f* is an analytic function in \(\mathcal {D}(\rho ^\star )\) and continuous on \(\mathcal {D}(\rho ^\star )\) itself, then

$$\begin{aligned} \Vert f - f_N\Vert _{L^2(-1,1)} = {\mathcal {O}}\left( \rho ^{-N}\right) , \qquad \rho = \min \left( \rho ^\star , \cot ^2\left( \frac{\pi }{4T}\right) \right) , \end{aligned}$$

where the constant depends only on *f* and *T*.

Note that there is a *T*-dependent upper limit on the rate of exponential convergence.

### 2.2 Algebraic Convergence

For functions in the Sobolev space \(H^k(-1,1)\) of \(L^2(-1,1)\) functions whose *k*th weak derivatives are in \(L^2(-1,1)\), we have algebraic convergence of order *k*.

### Theorem 2.2

If \(f \in H^k(-1,1)\), then

$$\begin{aligned} \Vert f - f_N\Vert _{L^2(-1,1)} = {\mathcal {O}}\left( N^{-k}\right) \Vert f\Vert _{H^k(-1,1)}, \end{aligned}$$

where the constant depends only on *k* and *T*.

### Corollary 2.3

If *f* is smooth, then \(f_N \rightarrow f\) superalgebraically in the \(L^2(-1,1)\) norm.

### 2.3 Subalgebraic Convergence

This elementary result says that Fourier extensions converge in the \(L^2\) norm for \(L^2\) functions.

### Proposition 2.4

If \(f \in L^2(-1,1)\), then \(\Vert f - f_N \Vert _{L^2(-1,1)} \rightarrow 0\) as \(N \rightarrow \infty \).

### Proof

Let \(g \in L^2(-T,T)\) be the function that is equal to *f* inside \([-1,1]\) and zero in the complement. Let \(g(x) = \sum _{k=-\infty }^\infty c_k e^{\frac{i\pi }{T}kx}\) be its Fourier series, and for all odd integers \(N = 2n+1\), define \(t_N(x) = \sum _{k=-n}^n c_k e^{\frac{i\pi }{T}kx}\). Then following the definitions of \(f_N\), *g* and \(t_N\), we have \(\Vert f - f_N \Vert _{L^2(-1,1)} \le \Vert f - t_N \Vert _{L^2(-1,1)} = \Vert g - t_N \Vert _{L^2(-T,T)} \rightarrow 0\) as \(N\rightarrow \infty \). \(\square \)
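The construction in this proof can be checked numerically. The sketch below (our code, trapezoidal quadrature, \(T = 2\) assumed) builds \(t_N\) from the zero-extension *g* and shows the slow \(L^2(-1,1)\) decay for a function whose zero-extension has jumps at \(\pm 1\):

```python
import numpy as np

def zero_extension_error(f, n, T=2.0, m=40001):
    """L^2(-1,1) error of t_N (N = 2n+1), the truncated Fourier series on
    [-T, T] of the zero-extension g of f; an upper bound for ||f - f_N||."""
    x = np.linspace(-T, T, m)
    w = np.full(m, x[1] - x[0]); w[0] *= 0.5; w[-1] *= 0.5   # trapezoid weights
    g = np.where(np.abs(x) <= 1.0, f(x), 0.0)
    k = np.arange(-n, n + 1)
    phi = np.exp(1j * np.pi * np.outer(x, k) / T)
    c = (phi.conj().T @ (w * g)) / (2.0 * T)                  # Fourier coeffs of g
    tN = (phi @ c).real
    inside = np.abs(x) <= 1.0
    return np.sqrt(np.sum(w[inside] * (f(x[inside]) - tN[inside]) ** 2))

f = lambda x: np.ones_like(x)              # zero-extension g jumps at x = +/-1
for n in (8, 32, 128):
    print(2 * n + 1, zero_extension_error(f, n))
```

For this *f* the coefficients of *g* decay only like \(1/k\), so the error decreases but slowly, consistent with the subalgebraic rate suggested by the proposition.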

## 3 Pointwise and Uniform Convergence

### 3.1 Exponential Convergence

The pointwise convergence result for analytic functions is the same as Theorem 2.1. In fact, Theorem 2.1 is a corollary of the following theorem.

### Theorem 3.1

If *f* is analytic inside the mapped Bernstein ellipse \(\mathcal {D}(\rho ^\star )\) (see Eq. (3)) and continuous on \(\mathcal {D}(\rho ^\star )\) itself, then

$$\begin{aligned} \Vert f - f_N\Vert _{L^\infty (-1,1)} = {\mathcal {O}}\left( \rho ^{-N}\right) , \qquad \rho = \min \left( \rho ^\star , \cot ^2\left( \frac{\pi }{4T}\right) \right) , \end{aligned}$$

where the constant depends only on *f* and *T*.

### 3.2 Algebraic Convergence

Pointwise convergence for Hölder continuous functions is as follows.

### Theorem 3.2

Let \(f \in C^{k,\alpha }([-1,1])\). For each closed interval \([a,b] \subset (-1,1)\),

$$\begin{aligned} \sup _{x \in [a,b]} |f(x) - f_N(x)| = {\mathcal {O}}\left( N^{-k-\alpha }\log N\right) , \end{aligned}$$

where the constant depends only on *a*, *b*, *k*, \(\alpha \), and *T*. Over the whole interval \([-1,1]\), we have

$$\begin{aligned} \sup _{x \in [-1,1]} |f(x) - f_N(x)| = {\mathcal {O}}\left( N^{\frac{1}{2}-k-\alpha }\right) , \end{aligned}$$

where the constant depends only on *k*, \(\alpha \), and *T*.

We lose a half order of algebraic convergence at the endpoints, something that we could not possibly see in classical Fourier series because a periodic interval has no endpoints.

### Corollary 3.3

If *f* is smooth, then \(f_N \rightarrow f\) superalgebraically in \(L^\infty (-1,1)\).

### 3.3 Subalgebraic Convergence

The loss of a half order of algebraic convergence at the endpoints predicted by Theorem 3.2 means that we require at least Hölder continuity with order greater than a half in order to guarantee uniform convergence.

### Theorem 3.4

If \(f \in C^{\alpha }([-1,1])\) for some \(\alpha > \frac{1}{2}\), then \(f_N \rightarrow f\) uniformly in \([-1,1]\).

We say that a function *f* is *uniformly Dini–Lipschitz* in [*a*, *b*] if [42],

$$\begin{aligned} \omega (f;\delta ;[a,b]) \log \delta \rightarrow 0 \quad \text {as } \delta \rightarrow 0^+, \end{aligned}$$

where \(\omega (f;\delta ;[a,b])\) denotes the modulus of continuity of *f* on [*a*, *b*].

### Theorem 3.5

Let \(f \in L^2(-1,1)\) be uniformly Dini–Lipschitz in a subinterval \([a,b] \subseteq [-1,1]\). Then \(f_N \rightarrow f\) uniformly in all compact subintervals \([c,d] \subset (a,b)\).

### Remark 3.6

This theorem is stronger than it might appear at first. It says that even if a function is merely in \(L^2(-1,1)\), and can have, for example, jump discontinuities, we still have pointwise convergence in regions where *f* is Dini–Lipschitz. However, the localization theorem (Theorem 6.1), which we use to prove this result, does not give any indication of the *rate* of convergence.

### 3.4 Proofs of the Results of This Section

The Fourier extension is the orthogonal projection \(P_N\) in \(L^2(-1,1)\) onto \({\mathcal {H}}_N\), so that \(f_N = P_N(f)\) depends only on *f* and \({\mathcal {H}}_N\). Let \(\{ e_k \}_{k = 1}^N\) be *any* orthonormal basis for \({\mathcal {H}}_N \subset L^2(-1,1)\). Then the kernel

$$\begin{aligned} K_N(x,y) = \sum _{k=1}^N e_k(x)\overline{e_k(y)} \end{aligned}$$

is independent of the choice of basis and satisfies \(P_N(f)(x) = \int _{-1}^1 K_N(x,y)f(y)\,\mathrm {d}y\). The *Lebesgue function* for the projection \(P_N\) at a point \(x\in [-1,1]\) is the \(L^1\) norm of the kernel at *x*,

$$\begin{aligned} \Lambda (x;P_N) = \int _{-1}^1 \left| K_N(x,y)\right| \mathrm {d}y, \end{aligned}$$

and the *best approximation error functional* on \({\mathcal {H}}_N\) is defined for all \(f\in C([-1,1])\) by

$$\begin{aligned} E(f;{\mathcal {H}}_N) = \inf _{g\in {\mathcal {H}}_N} \Vert f - g\Vert _{L^\infty (-1,1)}. \end{aligned}$$

These quantities are related by Lebesgue's lemma,

$$\begin{aligned} |f(x) - P_N(f)(x)| \le \left( 1+\Lambda (x;P_N)\right) E(f;{\mathcal {H}}_N). \end{aligned}$$

Now we can proceed to prove the pointwise convergence results stated above. The proofs depend on the content of Sects. 4, 5, and 6, which consist of self-contained results.

### Lemma 3.7

Let \(f \in C([-1,1])\). For each closed interval \([a,b] \subset (-1,1)\),

$$\begin{aligned} \sup _{x \in [a,b]} |f(x) - f_N(x)| = {\mathcal {O}}\left( \log N\right) E(f;{\mathcal {H}}_N), \end{aligned}$$

where the constant depends only on *a*, *b*, and *T*. Over the whole interval \([-1,1]\), we have

$$\begin{aligned} \sup _{x \in [-1,1]} |f(x) - f_N(x)| = {\mathcal {O}}\left( N^{\frac{1}{2}}\right) E(f;{\mathcal {H}}_N), \end{aligned}$$

where the constant depends only on *T*.

### Proof

By Lebesgue’s lemma, given in Eq. (6), it suffices to show that \(\sup _{x \in [a,b]}\Lambda (x;P_N) = {\mathcal {O}}(\log N)\), and \(\sup _{x \in [-1,1]}\Lambda (x; P_N) = {\mathcal {O}}(N^{\frac{1}{2}})\). This is proved in Theorem 4.1. \(\square \)

### Proof of Theorem 3.2

By Lemma 3.7, it suffices to show that for \(f \in C^{k,\alpha }([-1,1])\), we have \(E(f;{\mathcal {H}}_N) = {\mathcal {O}}\left( N^{-k-\alpha }\right) |f^{(k)}|_{C^\alpha ([-1,1])}\). This follows from Lemma 5.1 and Theorem 5.3. \(\square \)

### Proof of Theorem 3.4

This follows from Theorem 3.2 with \(k = 0\), because \(N^{\frac{1}{2} - \alpha } \log N \rightarrow 0\) as \(N\rightarrow \infty \) for all \(\alpha > \frac{1}{2}\). \(\square \)

### Proof of Theorem 3.5

Write \(f = f_1 + f_2\), where \(f_1\) is uniformly Dini–Lipschitz in \([-1,1]\) and agrees with *f* in [*a*, *b*], and \(f_2 = f - f_1\). Since \(f_2\) vanishes in [*a*, *b*] and is in \(L^2(-1,1)\), we have by Theorem 6.1 that \(P_N(f_2) \rightarrow 0\) uniformly in all subintervals \([c,d] \subset (a,b)\). It is clear by the definition of \(f_1\) and the definition of Dini–Lipschitz continuity in Eq. (4) that \(f_1\) is also uniformly Dini–Lipschitz in \([-1,1]\). By Lemma 3.7 and the Jackson-type Theorem 5.3, \(P_N(f_1) \rightarrow f_1\) uniformly in \([c,d]\), which completes the proof. \(\square \)

## 4 The Lebesgue Function of Fourier Extensions

We refer to the kernel \(K_N(x,y)\) for Fourier extensions as the *prolate kernel*, because one particular choice of orthonormal basis is the discrete prolate spheroidal wave functions (DPSWFs). These functions, denoted by \(\{\xi _{k,N} \}_{k=1}^N\), are the *N* eigenfunctions of a time-band-limiting operator; specifically, there exist eigenvalues \(\{\lambda _{k,N} \}_{k=1}^N\) such that

$$\begin{aligned} \lambda _{k,N}\,\xi _{k,N}(x) = \int _{-1}^1 D_N(x-y)\,\xi _{k,N}(y)\,\mathrm {d}y, \end{aligned}$$

where \(D_N\) denotes the Dirichlet kernel for \({\mathcal {H}}_N\) on the periodic interval \([-T,T]\).

The key outcome of this section is a proof of the following theorem.

### Theorem 4.1

- (i) For each closed interval \([a,b] \subset (-1,1)\), the Lebesgue function satisfies
  $$\begin{aligned} \sup _{x \in [a,b]}\Lambda (x;P_N) = {\mathcal {O}}(\log N). \end{aligned}$$
- (ii) Over the whole interval \([-1,1]\), we have
  $$\begin{aligned} \sup _{x \in [-1,1]}\Lambda (x;P_N) = {\mathcal {O}}(N^{\frac{1}{2}}). \end{aligned}$$
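These two growth regimes are the same as for Legendre series, and in that setting they are cheap to observe. The sketch below (our code; it uses the Legendre Christoffel–Darboux kernel as a proxy, since building the prolate kernel would require orthogonalizing an exponentially ill-conditioned basis) shows slow interior growth versus \(N^{\frac{1}{2}}\)-type growth at the endpoint:

```python
import numpy as np
from numpy.polynomial import legendre as leg

def lebesgue_function(x, N, ny=20001):
    """Lebesgue function at x of the L^2(-1,1) projection onto polynomials of
    degree < N, via the kernel K_N(x, y) = sum_{k<N} (k + 1/2) P_k(x) P_k(y)."""
    y = np.linspace(-1.0, 1.0, ny)
    w = np.full(ny, y[1] - y[0]); w[0] *= 0.5; w[-1] *= 0.5   # trapezoid weights
    Px = leg.legvander(np.array([x]), N - 1)[0]               # P_k(x), k < N
    K = leg.legvander(y, N - 1) @ ((np.arange(N) + 0.5) * Px) # K_N(x, .)
    return np.sum(w * np.abs(K))

for N in (40, 160):
    print(N, lebesgue_function(0.0, N), lebesgue_function(1.0, N))
```

The interior value grows slowly with *N*, while the endpoint value roughly doubles when *N* is quadrupled, consistent with \({\mathcal {O}}(\log N)\) and \({\mathcal {O}}(N^{\frac{1}{2}})\) respectively.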

The kernel is independent of the choice of orthonormal basis for the *N*-dimensional space \({\mathcal {H}}_N\), so we may instead orthonormalize \(\{e^{\frac{i\pi }{T}kx}\}_{k=-n}^{n}\), which (after multiplication by \(e^{\frac{i\pi }{T}nx}\)) amounts to orthonormalizing polynomials in \(z = e^{\frac{i\pi }{T}x}\) of degree at most 2*n*. Using this idea we prove the following lemma.

### Lemma 4.2

### Proof

Each proposed basis function is, up to a factor of unit modulus, a polynomial in \(e^{\frac{i\pi }{T}x}\) of degree at most 2*n*. We need only show its orthonormality with respect to the inner product on \({\mathcal {H}}_N\) induced by \(L^2(-1,1)\). Let \(j , k \in \{0,\ldots , 2n\}\). Then, making the change of variables \(\theta = \frac{\pi }{T}x\), we have

*N*) [35, Thm 11.42]. On the unit circle itself, where \(z = e^{i\theta }\), \(\zeta =e^{i\phi }\), this reduces, after some elementary manipulations, to

### Lemma 4.3

### Remark 4.4

Setting \(T= 1\) in this formula returns the Dirichlet kernel of classical Fourier series, because \(\Pi _N(z) = z^N\) for the trivial weight \(f(\theta ) \equiv 1\).

### Proof

Now, to ascertain asymptotics of the prolate kernel, it is sufficient to ascertain asymptotics of the orthogonal polynomials \(\{\Pi _k(z)\}_{k=0}^\infty \). These polynomials have been studied before in the literature, and are known as the Legendre polynomials on a circular arc [25].

### Theorem 4.5

*T* and \(\delta \). The asymptotics for \(x\in [-1,-1+\delta ]\) are found by using the relation \(\Pi _N\left( e^{-\frac{i\pi }{T}x}\right) = \overline{\Pi _N\left( e^{\frac{i\pi }{T}x}\right) }\).

*N*, we have

### Remark 4.6

The asymptotic order of \(\Pi _N\left( e^{\frac{i\pi }{T}x}\right) \) with respect to *N* in Eq. (10) is the same as for the *N*th (normalized) Legendre polynomial in \([-1,1]\) [35, Thm. 8.21.6]. Further discussion of how Legendre series approximations compare to Fourier extensions can be found in Sect. 8.1.

### Proof

This result follows directly from Lemma A.1 in Appendix A, because if we take \(\alpha = \pi - \pi /T\) and \(f_\alpha (\theta ) \equiv 1\), then the polynomials \(\Pi _N(z) = (2T)^{-\frac{1}{2}}\phi _N\left( -z,\alpha \right) \) satisfy the orthonormality conditions that define \(\Pi _N\) as in Lemma 4.2. To obtain the asymptotic formula above, make the change of variables \(\theta = \frac{\pi }{T}x + \pi \) in the asymptotic formulae for \(\phi _N(z,\alpha )\). Be careful to note that the endpoint with explicit formula given above (\(x = 1\)) corresponds to \(\theta = 2\pi - \alpha \), which is not the endpoint with explicit formula given in Lemma A.1 (\(\theta = \alpha \)). This was done to shorten the expressions for the asymptotics at the endpoints.

We now have the required results to prove Theorem 4.1.

### Proof of Theorem 4.1 part (i)

We divide \([-1,1]\) into three subintervals \(I_1\), \(I_2\), and \(I_3\), apply the asymptotics of Theorem 4.5 to bound the kernel for *y* in each of \(I_1\), \(I_2\), and \(I_3\), and then estimate the associated integral over each of \(I_1\), \(I_2\), and \(I_3\).

### Proof of Theorem 4.1 part (ii)

*x*. Now, since \(\Pi _N\left( e^{-\frac{i\pi }{T}x}\right) = \overline{\Pi _N\left( e^{\frac{i\pi }{T}x}\right) }\), it follows that \(K_N(-x,y) = \overline{K_N(x,-y)}\), so that \(\Lambda (-x;P_N) = \Lambda (x;P_N)\). Therefore, to complete the proof we need only show that \(\Lambda (x;P_N) = {\mathcal {O}}(N^{\frac{1}{2}})\) uniformly for \(x \in [1-\delta ,1]\). For such *x*, we divide the interval \([-1,1]\) into the following subsets:

Take the *x* and *y* currently in question, and consider the numerator in the formula for the kernel \(K_N(x,y)\) (Lemma 4.3). An asymptotic formula is as follows:

where *x* and \(\eta \) are replaced by *y* and \(\lambda \).

It is also straightforward to show that \((1-y)\frac{\pi }{2T} \le \sin \left( \frac{\pi }{2T} \right) \lambda ^2\) for \(y \in [0,1]\) and \(\lambda \in \left[ 0,\frac{\pi }{2}\right] \). From this, we have that for \(y \in I_2\), \(\lambda \ge \sqrt{\frac{\pi }{2TN}}\). Combining this with the fact that \(J_\alpha (t) = {\mathcal {O}}\left( t^{-\frac{1}{2}}\right) \) as \(t \rightarrow \infty \) (see [31, Eq. 10.17.3]), we get that \(J_0(N\lambda ) = {\mathcal {O}}\left( N^{-\frac{1}{4}}\right) \).

## 5 Best Uniform Approximation by Fourier Extensions

In this section we bound the best uniform approximation error functional \(E(f;{\mathcal {H}}_N)\) in terms of *N* and the regularity of the functions to be approximated.

For \(f \in C([-1,1])\), the *modulus of continuity* is defined by [11, 28]

$$\begin{aligned} \omega (f;\delta ) = \sup _{\begin{array}{c} x,y\in [-1,1] \\ |x-y|\le \delta \end{array}} |f(x) - f(y)|. \end{aligned}$$

For continuous periodic functions on \([-T,T]\), the periodic modulus of continuity \(\omega _{\mathrm {per}}\) is defined in the same way, with \(|x-y|\) replaced by the distance \(d_T(x,y)\), which treats *x*, *y* as elements of the periodic interval \([-T,T]\). The following results are immediate.

### Lemma 5.1

If *f* is in the Hölder space \(C^\alpha ([-1,1])\) for \(\alpha \in [0,1]\), then \(\omega (f;\delta ) \le \delta ^\alpha |f|_{C^\alpha ([-1,1])}\) for all \(\delta > 0\).
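Lemma 5.1 can be sanity-checked numerically. The following crude grid-based estimate (our code, not from the paper) recovers \(\omega (f;\delta ) \approx \delta ^{1/2}\) for \(f(x) = \sqrt{|x|}\), which lies in \(C^{1/2}([-1,1])\) with \(|f|_{C^{1/2}} = 1\):

```python
import numpy as np

def modulus_estimate(f, delta, m=20001):
    """Grid estimate of omega(f; delta) = sup_{|x-y| <= delta} |f(x) - f(y)|
    over x, y in [-1, 1], taking the max over all grid shifts up to delta."""
    x = np.linspace(-1.0, 1.0, m)
    fx = f(x)
    steps = int(round(delta / (x[1] - x[0])))
    return max(np.max(np.abs(fx[s:] - fx[:-s])) for s in range(1, steps + 1))

f = lambda x: np.sqrt(np.abs(x))
for delta in (0.1, 0.01, 0.001):
    print(delta, modulus_estimate(f, delta))   # close to sqrt(delta)
```

The supremum is attained near the singularity at \(x = 0\), where the Hölder bound \(\omega (f;\delta ) \le \delta ^{1/2}\) is tight.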

### Lemma 5.2

### 5.1 A Jackson-Type Theorem

Jackson's theorem for trigonometric polynomials states that if \(g \in C_{\mathrm {per}}^k([-T,T])\), then the error of best uniform approximation by trigonometric polynomials of degree *n* satisfies

$$\begin{aligned} E(g;{\mathcal {T}}_n) = {\mathcal {O}}\left( n^{-k}\right) \omega _{\mathrm {per}}\left( g^{(k)};n^{-1}\right) , \end{aligned}$$

where the constant depends only on *k* and *T* [19, Thm. 1.IV].

An analogous result holds for best uniform approximation by algebraic polynomials on \([-1,1]\), with a constant depending only on *k* [19, Thm. 1.VIII]. We prove a version of Jackson's theorem for Fourier extensions.

### Theorem 5.3

If \(f \in C^{k}([-1,1])\), then

$$\begin{aligned} E(f;{\mathcal {H}}_N) = {\mathcal {O}}\left( N^{-k}\right) \omega \left( f^{(k)};N^{-1}\right) , \end{aligned}$$

where the constant depends only on *k* and *T*.

### Lemma 5.4

Every \(f \in C^{k}([-1,1])\) can be extended to a function \(g\in C_{\mathrm {per}}^k([-T,T])\) such that

$$\begin{aligned} \omega _{\mathrm {per}}\left( g^{(k)};\delta \right) \le \frac{T}{T-1}\,\omega \left( f^{(k)};\delta \right) \quad \text {for all } \delta > 0. \end{aligned}$$

### Proof

First consider \(k = 0\). Let \(g = f\) in \([-1,1]\), and for \(x \in [-T,T]\backslash [-1,1]\) let *g*(*x*) be the linear function that interpolates *f* at \(\{-1,1\}\). We distinguish between 4 different cases for points \(x,y \in [-T,T]\) such that \(d_T(x,y) \le \delta \): (i) if \(x,y\in [-1,1]\), then

*g* is linear in this region,

*x*; and (iv) if \(x\in [-T,T]\backslash [-1,1],y\in [-T,T]\), the bound is similar to the previous one. Now it remains to bound \(|f(1)-f(-1)|\) in terms of \(\omega (f;\delta )\). For any positive integer *m*, we can use a telescoping sum,

Now let \(k>0\) and choose as extension of *f* the \(2(k+1)\)th degree Hermite interpolant in the points \(x=1\) and \(x=-1\); then \(g^{(k)}(x)\) is the linear interpolation between \(f^{(k)}(1)\) and \(f^{(k)}(-1)\) for \(x\in [-T,T]\backslash [-1,1]\). By the case \(k=0\) proved above, \(\omega _{\mathrm {per}}(g^{(k)};\delta )\le \frac{T}{T-1}\omega (f^{(k)};\delta )\). \(\square \)

### Proof of Theorem 5.3

Extend *f* to \(g \in C_{\mathrm {per}}^k([-T,T])\) as in Lemma 5.4. If \(t_N\) denotes a best uniform approximation on \([-T,T]\) by trigonometric polynomials of degree *n* to *g*, then (trivially) there exists a function \(r_N \in {\mathcal {H}}_N\) such that \(r_N(x) = t_N(x)\) for all \(x \in [-1,1]\). Hence, \(E(f;{\mathcal {H}}_N) \le \Vert f - r_N\Vert _{L^\infty (-1,1)} \le \Vert g - t_N\Vert _{L^\infty (-T,T)}\), and the result follows from Jackson's theorem and Lemma 5.4. \(\square \)

### 5.2 A Bernstein-Type Theorem

While Jackson-type theorems bound the best approximation error functional by powers of *N* and moduli of continuity of derivatives, Bernstein-type theorems attempt to do the opposite.

### Theorem 5.5

The direct analogue of Theorem 5.5 for best uniform approximation by algebraic polynomials in \(C([-1,1])\) is not true. Indeed, consider the function \(h(x) = (1-x^2)^\alpha \), whose modulus of continuity satisfies \(\omega (h;\delta ) = {\mathcal {O}}(\delta ^{\alpha })\) by Lemma 5.1. Define the function \(g(\theta ) = h\left( \cos \left( \theta \right) \right) = \left| \sin \left( \theta \right) \right| ^{2\alpha }\) for \(\theta \in [-\pi ,\pi ]\). If \(\alpha < \frac{1}{2}\), then \(g \in C^{2\alpha }([-\pi ,\pi ])\), so \(E(g;{\mathcal {T}}_N) = {\mathcal {O}}(N^{-2\alpha })\) by Theorem 5.5. Furthermore, the best approximations will be even since *g* is even, so the approximants are in fact polynomials in \(\cos (\theta )\). This implies that the best approximations to *h* are polynomials in *x*, showing that \(E(h;{\mathcal {P}}_N) = {\mathcal {O}}(N^{-2\alpha })\), twice as good as would be expected from Jackson’s theorem for algebraic polynomials (Eq. (17)).
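This doubling of the rate for \(h(x) = (1-x^2)^\alpha \) can be observed numerically. The sketch below (our code) uses Chebyshev interpolation as a near-best proxy for best polynomial approximation, with \(\alpha = \frac{1}{4}\), so the expected decay is roughly \(N^{-\frac{1}{2}}\) rather than the \(N^{-\frac{1}{4}}\) suggested by the unweighted modulus:

```python
import numpy as np
from numpy.polynomial import chebyshev as cheb

alpha = 0.25
h = lambda x: np.abs(1.0 - x**2) ** alpha   # abs guards against tiny rounding

# evaluation grid clustered at the endpoints, where the error peaks
x = np.cos(np.linspace(0.0, np.pi, 200001))

errs = {}
for deg in (64, 256):
    c = cheb.chebinterpolate(h, deg)        # degree-deg Chebyshev interpolant
    errs[deg] = np.max(np.abs(h(x) - cheb.chebval(x, c)))
print(errs[64], errs[256], errs[256] / errs[64])  # ratio ~ (256/64)^{-1/2}, up to log factors
```

Quadrupling the degree roughly halves the uniform error, i.e., the observed rate is the \(N^{-2\alpha }\) predicted by the argument above, not \(N^{-\alpha }\).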

The explanation lies in *weighted* moduli of continuity. The weighted modulus of continuity with weight \(\phi : [-1,1] \rightarrow [0,\infty )\) for a function \(f \in C([-1,1])\) is defined as

$$\begin{aligned} \omega _\phi (f;\delta ) = \sup _{0 < h \le \delta }\ \sup _{x \pm \frac{h}{2}\phi (x) \in [-1,1]} \left| f\left( x+\tfrac{h}{2}\phi (x)\right) - f\left( x-\tfrac{h}{2}\phi (x)\right) \right| . \end{aligned}$$

It turns out that if this weighted modulus of continuity is used with \(\phi (x) = \sqrt{1-x^2}\), then there is a direct analogue of Theorem 5.5 for best uniform approximation by algebraic polynomials.

### Theorem 5.6

*N* to \(N^2\) on the right-hand side; this is then Markov's inequality [11, Ch. 4, Thm. 1.4].

### Theorem 5.7

### Proof

This follows directly from [11, Ch. 6, Thm. 6.2] and [11, Ch. 7, Thm. 5.1(b)], with \(r =1\), \(\mu = 1\), \(X = L^\infty (-1,1)\), \(\Phi _n = {\mathcal {H}}_N\), and \(Y = W_\infty ^1(\phi ) := \{f \in W^{1,1}(-1,1): \phi \cdot f' \in L^\infty (-1,1)\}\), where \(W^{1,1}(-1,1)\) is the Sobolev space of absolutely continuous functions on \((-1,1)\). \(\square \)

From this Bernstein-type theorem for Fourier extensions, we get one half of an equivalence theorem between best approximation errors and weighted moduli of continuity. For the full equivalence, one must prove Conjecture 5.9 below.

### Theorem 5.8

### Proof

For such *f*, one can verify that the functions \(f_\rho (x) = f(\rho x)\) for \(\rho \in (0,1)\) satisfy: \(f_\rho \in W^{1,\infty }(-1,1)\), \(f_\rho \rightarrow f\) in \(L^\infty \), and \(\Vert \phi \cdot f_\rho '\Vert _\infty \le \Vert \phi \cdot f'\Vert _\infty \). For each \(\rho \) and \(\varepsilon > 0\) there exists \(f_{\rho ,\varepsilon } \in C^1([-1,1])\) such that \(\Vert f_{\rho ,\varepsilon } - f_\rho \Vert _{W^{1,\infty }} < \varepsilon \) by density of \(C^1([-1,1])\) in \(W^{1,\infty }(-1,1)\). Therefore there exists \(f_\varepsilon \in C^1([-1,1])\) such that \(\Vert f - f_\varepsilon \Vert _{L^\infty (-1,1)} < \varepsilon \) and \(\Vert \phi \cdot f_\varepsilon '\Vert _\infty \le \Vert \phi \cdot f'\Vert _\infty + \varepsilon \). Hence \(E(f;{\mathcal {H}}_N) \le \Vert f-f_\varepsilon \Vert _{L^\infty (-1,1)} + E(f_\varepsilon ;{\mathcal {H}}_N) \le \left( 1 + \frac{C_T}{n}\right) \varepsilon + \frac{C_T}{n}\Vert \phi \cdot f'\Vert _\infty \). Since \(\varepsilon \) is arbitrary, we have the desired inequality. A similar argument may be found in [11, p. 280]. \(\square \)

### Conjecture 5.9

*n* such that

Notice that we conjecture that only positive powers of *z* are needed to approximate *f*, which means we do not need to utilize all of the functions in \({\mathcal {H}}_N\). This is plausible because, by Mergelyan's theorem [33, Thm. 20.5], polynomials are dense in the space *C*(*A*); it is also unsurprising given the redundant nature of approximation by Fourier extensions.

## 6 A Localization Theorem for Fourier Extensions

The theorem proved in this section is a modification of a theorem of Freud ([15, Thm. IV.5.4]), which is a localization theorem for orthogonal polynomials on an interval. We, however, are working with the orthonormal basis given in Lemma 4.2, and there are some clear differences between the two situations. We show that these differences do not change the statement of the result.

### Theorem 6.1

(Localization theorem) Let \(f\in L^2(-1,1)\) be such that \(f(x) = 0\) for all \(x \in [a,b] \subseteq [-1,1]\). Then \(P_N(f) \rightarrow 0\) uniformly in all subintervals \([c,d] \subset (a,b)\).

### Proof

For *f* as in the statement of the theorem, we have

The function inside the integral vanishes in [*a*, *b*] and is equal to *f* (an \(L^2(-1,1)\) function) multiplied by a bounded function (\(y \mapsto e^{\frac{i\pi }{2T}y} / \sin \left( \frac{\pi }{2T}(\xi -y)\right) \)) outside of [*a*, *b*].

By compactness, [*c*, *d*] will be covered by finitely many of these intervals \(I(\xi )\), which we denote by \(I(\xi _1), I(\xi _2),\dots ,I(\xi _s)\).

In conclusion, since \(\varepsilon \) is arbitrary and the inequality above is valid for all \(N > K_{\varepsilon }\), the integral must converge to zero as \(N \rightarrow \infty \), uniformly with respect to \(x \in [c,d]\), as required. \(\square \)

## 7 Numerical Experiments

In this section we provide numerically computed examples of pointwise and uniform convergence of Fourier extensions for functions with various regularity properties. It was discussed in the introduction that the linear system for computing the Fourier extension is extremely ill-conditioned, making computation of the exact Fourier extension practically impossible in standard precision. To deal with this issue, we used sufficiently high precision floating point arithmetic, and we did not take *N* higher than 129, to ensure that the system could be inverted accurately. The right-hand-side vectors for the computations were computed by quadrature in high precision floating point arithmetic.

In practice, one would compute a fast regularized oversampled interpolation Fourier extension using the algorithm in [26], requiring only \(\mathcal {O}(N\log ^2(N))\) floating point operations. However, we are interested in the exact Fourier extension and want to avoid any artefacts that may be caused by the regularization or discretization of the domain.
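Readers who wish to reproduce a small-scale version of these experiments without high-precision arithmetic can solve the normal equations directly for small *n*, where the prolate system is still invertible in double precision. A sketch (our code, \(T = 2\), with the right-hand side for \(f(x) = x\) computed analytically):

```python
import numpy as np

def fourier_extension_of_x(n, T=2.0):
    """Coefficients of the exact Fourier extension of f(x) = x in the basis
    {e^{i pi k x / T}}, k = -n..n (N = 2n+1), via the normal equations.
    The Gram matrix is the prolate matrix, so keep n small in double precision."""
    k = np.arange(-n, n + 1)
    G = 2.0 * np.sinc((k[:, None] - k[None, :]) / T)   # <phi_j, phi_k> on [-1,1]
    a = np.pi * k / T
    b = np.zeros(2 * n + 1, dtype=complex)
    nz = k != 0
    b[nz] = -2j * (np.sin(a[nz]) - a[nz] * np.cos(a[nz])) / a[nz] ** 2  # <f, phi_k>
    return k, np.linalg.solve(G, b)

t = np.linspace(-1.0, 1.0, 2001)
for n in (2, 4, 6):
    k, c = fourier_extension_of_x(n)
    fN = (np.exp(1j * np.pi * np.outer(t, k) / 2.0) @ c).real
    print(2 * n + 1, np.max(np.abs(fN - t)))   # uniform error; no Gibbs overshoot
```

Even at these small values of *N*, the uniform error decreases rapidly, in contrast to the classical Fourier series of the same function, whose error near \(x = \pm 1\) does not decay at all.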

In some cases, we compare the convergence rate of Fourier extensions to that of Legendre series, because we predict that the qualitative behavior of Legendre series will be similar (see Sect. 8). For the Legendre series approximations we computed the Legendre series coefficients one by one using adaptive quadrature in 64-bit floating point precision. As such, the errors for the Legendre series approximations will stagnate due to numerical error.

### 7.1 Analytic and Entire Functions

For entire functions, the observed rate of exponential convergence saturates at this *T*-dependent upper bound.

### 7.2 Differentiable Functions

We investigate Fourier extension approximation of splines of degree \(d = 3, 9\), and 15 on the interval \(\left[ 0,\frac{1}{2}\right] \), which lie in the Hölder spaces \(C^{2,1}\left( \left[ 0,\frac{1}{2}\right] \right) \), \(C^{8,1}\left( \left[ 0,\frac{1}{2}\right] \right) \), and \(C^{14,1}\left( \left[ 0,\frac{1}{2}\right] \right) \), respectively. By Theorem 3.2, we expect the pointwise errors to be \(\mathcal {O}(N^{-d}\log N)\) in the interior and \(\mathcal {O}(N^{\frac{1}{2} - d})\) uniformly over the whole interval.

The pointwise errors as functions of *N* are plotted in Fig. 2. The rates of convergence predicted by Theorem 3.2 fit reasonably well, sometimes performing slightly better. For comparison, we include the errors for a Legendre series approximation in a dashed line of the same color. See Sect. 8.1 for a full discussion comparing convergence of Legendre series and Fourier extensions.

### 7.3 Nondifferentiable Functions

We investigate the approximation of functions with algebraic singularities, discontinuities, and Dini–Lipschitz continuity.

Functions with an algebraic singularity at the endpoint are studied in Fig. 3. We plot the pointwise errors for Fourier extension and Legendre series approximations to \(f(x) = x^\alpha \) for \(\alpha = \frac{3}{4}, \frac{1}{2}\), and \(\frac{1}{10}\). These functions lie in the Hölder spaces \(C^\alpha \left( \left[ 0,\frac{1}{2}\right] \right) \) for their respective values of \(\alpha \).

In all three cases, we compared the convergence of Fourier extension approximations and Legendre series. While there is sometimes a mismatch between the pessimistic prediction of Theorem 3.2 and Lemma 3.7 for the convergence rates (see Sect. 5), when we compare Fourier extensions and Legendre series, we observe agreement. See Sect. 8.1 for a full discussion comparing convergence of Legendre series and Fourier extensions.

## 8 Discussion

We proved pointwise and uniform convergence results for Fourier extension approximations of functions in Hölder spaces and with local uniform Dini–Lipschitz conditions. This was achieved by proving upper bounds on the associated Lebesgue function and the decay rate of best uniform approximation error for Fourier extensions, then appealing to Lebesgue’s lemma.

### 8.1 Comparison to Legendre Series

Let \(p^{L}_k\) denote the *k*th Legendre polynomial normalized so that \(\frac{1}{2}\int _{-1}^1 p^{L}_k(x)^2 \,\mathrm {d}x = 1\).

Theorem 2.1 on exponential convergence differs from the exponential convergence results for Legendre series in two ways. First, the region in the complex plane that determines the rate of exponential convergence is determined not by Bernstein ellipses for Legendre series, but by mapped Bernstein ellipses for Fourier extensions. Second, there is an upper limit of \(\cot ^2\left( \frac{\pi }{4T}\right) \) for the rate of exponential convergence of Fourier extensions regardless of the region of analyticity, whereas for Legendre series the rate can be arbitrarily fast, and for entire functions the rate of convergence is superexponential [39].

### 8.2 Extensions of This Work

It was mentioned in the introduction that our convergence results will be more applicable if we can extend them to regularized and oversampled interpolation versions of Fourier extensions, because those are the kinds of Fourier extensions for which stable and efficient algorithms have been developed.

Let \(G = U S V^*\) be a singular value decomposition of the prolate matrix, and let \(S^\varepsilon \) denote the matrix equal to *S* but with all entries less than \(\varepsilon \) set to 0. The coefficients \({\mathbf {c}}^\varepsilon \in {\mathbb {C}}^N\) of the regularized Fourier extension of \(f \in L^2(-1,1)\) are given by

$$\begin{aligned} {\mathbf {c}}^\varepsilon = V \left( S^\varepsilon \right) ^\dagger U^* {\mathbf {b}}, \end{aligned}$$

where \(\dagger \) denotes the pseudoinverse and \({\mathbf {b}}\) is the right-hand-side vector defined in the introduction.

The retained directions are the eigenvectors of *G* whose eigenvalues are greater than or equal to \(\varepsilon \). These eigenvectors are the discrete prolate spheroidal sequences (DPSSs), which are the Fourier coefficients of the DPSWFs \(\{\xi _{k,N}\}_{k=1}^N\) discussed in Sect. 4 [34]. The regularized Fourier extension, therefore, finds the best approximation not in \({\mathcal {H}}_N\), but in the linear space \({\mathcal {H}}_{N,\varepsilon } \subset {\mathcal {H}}_N \subset L^2(-1,1)\) spanned by the DPSWFs whose associated eigenvalues are at least \(\varepsilon \).

Generalization of this work to the multivariate case would be extremely interesting, because the shape of the domain \(\Omega \subset {\mathbb {R}}^d\) and regularity of its boundary will likely come into play [27].

## Acknowledgements

We benefited from useful discussions with Ben Adcock, Arno Kuijlaars, Walter Van Assche, and Andrew Gibbs. The first author is grateful to FWO Research Foundation Flanders for a postdoctoral fellowship he enjoyed during the writing of this paper.

## References

1. Adcock, B., Huybrechs, D.: On the resolution power of Fourier extensions for oscillatory functions. J. Comput. Appl. Math. 260, 312–336 (2014)
2. Adcock, B., Huybrechs, D.: Frames and numerical approximation II: generalized sampling (2018). arXiv:1802.01950
3. Adcock, B., Huybrechs, D.: Frames and numerical approximation. SIAM Rev. 61(3), 443–473 (2019)
4. Adcock, B., Huybrechs, D., Martín-Vaquero, J.: On the numerical stability of Fourier extensions. Found. Comput. Math. 14(4), 635–687 (2014)
5. Adcock, B., Ruan, J.: Parameter selection and numerical approximation properties of Fourier extensions from fixed data. J. Comput. Phys. 273, 453–471 (2014)
6. Borwein, P., Erdélyi, T.: Polynomials and Polynomial Inequalities, vol. 161. Springer, Berlin (2012)
7. Boyd, J.P.: A comparison of numerical algorithms for Fourier extension of the first, second, and third kinds. J. Comput. Phys. 178(1), 118–160 (2002)
8. Boyd, J.P.: Fourier embedded domain methods: extending a function defined on an irregular region to a rectangle so that the extension is spatially periodic and \(C^\infty \). Appl. Math. Comput. 161(2), 591–597 (2005)
9. Bruno, O.P., Han, Y., Pohlman, M.M.: Accurate, high-order representation of complex three-dimensional surfaces via Fourier continuation analysis. J. Comput. Phys. 227(2), 1094–1125 (2007)
10. Deift, P.: Orthogonal Polynomials and Random Matrices: A Riemann–Hilbert Approach, vol. 3. American Mathematical Society, Providence (1999)
11. DeVore, R.A., Lorentz, G.G.: Constructive Approximation, vol. 303. Springer, Berlin (1993)
12. Ditzian, Z., Totik, V.: Moduli of Smoothness. Springer, Berlin (1987)
13. Driscoll, T.A., Hale, N., Trefethen, L.N.: Chebfun Guide. Pafnuty Publications, Oxford (2014)
14. Evans, L.C.: Partial Differential Equations. American Mathematical Society, Providence (2010)
15. Freud, G.: Orthogonal Polynomials. Pergamon Press, Oxford (1971)
16. Gronwall, T.H.: Über die Laplacesche Reihe. Math. Ann. 74(2), 213–270 (1913)
17. Huybrechs, D.: On the Fourier extension of nonperiodic functions. SIAM J. Numer. Anal. 47(6), 4326–4355 (2010)
18. Huybrechs, D., Matthysen, R.: Computing with functions on domains with arbitrary shapes. In: International Conference on Approximation Theory, pp. 105–117. Springer, Berlin (2016)
19. Jackson, D.: The Theory of Approximation. American Mathematical Society, Providence (1930)
20. Karnik, S., Zhu, Z., Wakin, M.B., Romberg, J., Davenport, M.A.: The fast Slepian transform. Appl. Comput. Harmon. Anal. 46, 624 (2017)
21. Krasovsky, I.V.: Gap probability in the spectrum of random matrices and asymptotics of polynomials orthogonal on an arc of the unit circle. Int. Math. Res. Not. 2004(25), 1249–1272 (2004)
22. Kuijlaars, A.B.J., McLaughlin, K.T.-R., Van Assche, W., Vanlessen, M.: The Riemann–Hilbert approach to strong asymptotics for orthogonal polynomials on \([-1,1]\). Adv. Math. 188, 337–398 (2004)
23. Lyon, M.: A fast algorithm for Fourier continuation. SIAM J. Sci. Comput. 33(6), 3241–3260 (2011)
24. Lyon, M.: Approximation error in regularized SVD-based Fourier continuations. Appl. Numer. Math. 62(12), 1790–1803 (2012)
25. Magnus, A.P.: Freud equations for Legendre polynomials on a circular arc and solution of the Grünbaum–Delsarte–Janssen–Vries problem. J. Approx. Theory 139(1–2), 75–90 (2006)
26. Matthysen, R., Huybrechs, D.: Fast algorithms for the computation of Fourier extensions of arbitrary length. SIAM J. Sci. Comput. 38(2), A899–A922 (2016)
27. Matthysen, R., Huybrechs, D.: Function approximation on arbitrary domains using Fourier extension frames. SIAM J. Numer. Anal. 56(3), 1360–1385 (2018)
28. Mhaskar, H.N., Pai, D.V.: Fundamentals of Approximation Theory. CRC Press, Boca Raton (2000)
29. Nagy, B., Totik, V.: Bernstein's inequality for algebraic polynomials on circular arcs. Constr. Approx. 37(2), 223–232 (2013)
30. Olver, S., et al.: ApproxFun v0.10.0. https://github.com/JuliaApproximation/ApproxFun.jl (2018). Accessed 22 Nov 2018
31. Olver, F.W.J., Olde Daalhuis, A.B., Lozier, D.W., Schneider, B.I., Boisvert, R.F., Clark, C.W., Miller, B.R., Saunders, B.V. (eds.): NIST Digital Library of Mathematical Functions. http://dlmf.nist.gov/, Release 1.0.20 of 2018-09-15. Accessed 22 Nov 2018
32. Phillips, G.M.: Interpolation and Approximation by Polynomials, vol. 14. Springer, Berlin (2003)
33. Rudin, W.: Real and Complex Analysis, 3rd edn. McGraw-Hill, New York (1987)
34. Slepian, D.: Prolate spheroidal wave functions, Fourier analysis, and uncertainty V: the discrete case. Bell Syst. Tech. J. 57(5), 1371–1430 (1978)
35. Szegő, G.: Orthogonal Polynomials, vol. 23. American Mathematical Society, Providence (1939)
36. Trefethen, L.N.: Approximation Theory and Approximation Practice, vol. 128. SIAM, Philadelphia (2013)
37. Varah, J.: The prolate matrix. Linear Algebra Appl. 187, 269–278 (1993)
38. Videnskii, V.: Extremal estimates for the derivative of a trigonometric polynomial on an interval shorter than its period. Sov. Math. Dokl. 1, 5–8 (1960)
39. Wang, H., Xiang, S.: On the convergence rates of Legendre approximation. Math. Comput. 81(278), 861–877 (2012)
40. Wright, G.B., Javed, M., Montanelli, H., Trefethen, L.N.: Extension of Chebfun to periodic functions. SIAM J. Sci. Comput. 37(5), C554–C573 (2015)
41. Xu, W.Y., Chamzas, C.: On the periodic discrete prolate spheroidal sequences. SIAM J. Appl. Math. 44(6), 1210–1217 (1984)
42. Zygmund, A.: Trigonometric Series, vol. 1. Cambridge University Press, Cambridge (2002)

## Copyright information

**Open Access**This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.