# Efficient high order algorithms for fractional integrals and fractional differential equations


## Abstract

We propose an efficient algorithm for the approximation of fractional integrals by using Runge–Kutta based convolution quadrature. The algorithm is based on a novel integral representation of the convolution weights and a special quadrature for it. The resulting method is easy to implement, allows for high order, relies on rigorous error estimates and its performance in terms of memory and computational cost is among the best to date. Several numerical results illustrate the method and we describe how to apply the new algorithm to solve fractional diffusion equations. For a class of fractional diffusion equations we give the error analysis of the full space-time discretization obtained by coupling the FEM method in space with Runge–Kutta based convolution quadrature in time.

## Mathematics Subject Classification

65R20 · 65L06 · 65M15 · 26A33 · 35R11

## 1 Introduction

To compute up to time \(T = Nh\) using formula (2) requires *O*(*N*) memory and \(O(N^2)\) arithmetic operations. Algorithms based on FFT can reduce the computational complexity to \(O(N \log N)\) [16] or \(O(N \log ^2 N)\) [9], but not the memory requirements; for an overview of FFT algorithms see [7]. Here we develop algorithms that reduce the memory requirement to \(O(|\log \varepsilon |\log N)\) and the computational cost to \(O(|\log \varepsilon | N \log N)\), with \(\varepsilon \) the accuracy in the computation of the convolution weights. Hence, our algorithm has the same complexity as the fast and oblivious quadratures of [19] and [23], but as we will see, a simpler construction.
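As a point of reference, a direct implementation of a discrete convolution of the form (2) is straightforward. The sketch below is our own illustration, not the paper's code; it uses the classical first-order (backward Euler) CQ weights for the fractional integral, which satisfy the recurrence \(\omega_0 = h^\alpha\), \(\omega_j = \omega_{j-1}(j-1+\alpha)/j\) [14], and exhibits the \(O(N)\) memory and \(O(N^2)\) arithmetic cost mentioned above:

```python
import math

def bdf1_cq_weights(alpha, h, N):
    """First-order (BDF1) CQ weights for the fractional integral:
    the Taylor coefficients of h^alpha * (1 - zeta)^(-alpha)."""
    w = [h**alpha]
    for j in range(1, N + 1):
        w.append(w[-1] * (j - 1 + alpha) / j)
    return w

def fractional_integral_direct(f, alpha, h, N):
    """Direct evaluation of the discrete convolution (2):
    O(N) memory for the weights and O(N^2) arithmetic operations."""
    w = bdf1_cq_weights(alpha, h, N)
    fs = [f(n * h) for n in range(N + 1)]
    return [sum(w[n - j] * fs[j] for j in range(n + 1)) for n in range(N + 1)]

alpha, h, N = 0.5, 1e-3, 1000
approx = fractional_integral_direct(lambda t: 1.0, alpha, h, N)
# For f = 1 the fractional integral is known: I^alpha[1](t) = t^alpha / Gamma(1+alpha).
exact = (N * h)**alpha / math.gamma(1 + alpha)
```

The first-order convergence of this scheme is visible by comparing `approx[-1]` with `exact` for decreasing `h`.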

*p* is the order of the underlying RK method; this is intimately related to the construction of an efficient quadrature for the integral representation of the convolution kernel

Even though we eventually only require the quadrature for (3), we begin by developing a quadrature formula for (4), for several reasons: the calculation for (4) is cleaner and easier to follow, such a quadrature allows for efficient algorithms that are not based on CQ, and once it is available the analysis for (3) is much shorter. The quadrature we develop for (4) is closely related to the one developed in [13], the main differences being our treatment of the singularity at \(x=0\) by Gauss–Jacobi quadrature and the restriction of *t* to a finite interval rather than the semi-infinite one used in [13]. Both these decisions allow us to substantially reduce the constants in the above asymptotic estimates of memory and computational costs. The recent references [2, 12, 27] also consider the fast computation of (1), but do not address the approximation of the convolution quadrature weights by exploiting (3). Our main contribution here is the development of an efficient quadrature to approximate (3) and its use in a fast and memory efficient scheme for computing the discrete convolution (2).

To our knowledge, high order ODE solvers underlying the approximation of (1) have been considered only at an experimental level in [2, 3] and in [23], where a fast and oblivious implementation of RK based CQ is developed for more general applications than (1). The fast and oblivious quadratures of [19] and [23] have the same asymptotic complexity as our algorithm, but have a more complicated memory management structure and require the optimization of the shape of the integration contour. Our new algorithm has the advantage of being much easier to implement: it does not require sophisticated memory management, the optimization of the quadrature parameters is much simpler, and only real arithmetic is required. The new method is also much better suited for the extension to variable steps; this will be investigated in a follow-up work. On the other hand, the present algorithm is specially tailored to the application to (1) and related FDEs, whereas the algorithms in [19, 23] allow for a wider range of applications.

The paper is organized as follows. In Sect. 2 we develop and fully analyze a special quadrature for (1), which uses the same nodes and weights for every \(t\in [n_0h,T]\). In Sect. 3, we recall Convolution Quadrature based on Runge–Kutta methods and derive the special representation of the associated weights already stated in (3). In Sect. 4 we derive a special quadrature for (3), which uses the same nodes and weights for every \(n\in [n_0,N]\), with \(T=hN\). In Sect. 5 we explain how to turn our quadrature for the CQ weights into a fast and memory saving algorithm. In Sect. 6 we test our algorithm with a scalar problem and in Sect. 7 we consider the application to a fractional diffusion equation. We provide a complete error analysis of the discretization in space and time of a class of fractional diffusion equations.

## 2 Efficient quadrature for \(t^{\alpha -1}\)

In the following we fix an integer \(n_0 > 0\), time step \(h> 0\), and the final computational time \(T > 0\). Throughout, the parameter \(\alpha \) is restricted to the interval (0, 1). We develop an efficient quadrature for (4) accurate for \(t \in [n_0 h, T]\).
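The three ingredients developed in this section (truncation, Gauss–Jacobi quadrature near the singularity, Gauss quadrature on geometrically growing intervals) can be combined into a short numerical check of the standard representation \(t^{\alpha-1} = \Gamma(1-\alpha)^{-1}\int_0^\infty e^{-tx}x^{-\alpha}\,dx\). The following sketch is our own illustration; the parameter values are assumed for this demonstration, not the optimized ones derived later:

```python
import math
import numpy as np
from scipy.special import roots_jacobi

alpha = 0.5
L0, B, A = 1.0, 3.0, 4096.0   # initial interval, growth factor, truncation point
Q0, Q = 30, 30                # Gauss-Jacobi / Gauss-Legendre nodes per interval

def kernel_quadrature(t):
    """Approximate t^(alpha-1) as Gamma(1-alpha)^(-1) * int_0^A exp(-t*x) x^(-alpha) dx."""
    # Gauss-Jacobi on [0, L0]: the weight (1+u)^(-alpha) absorbs the singularity at x = 0.
    u, w = roots_jacobi(Q0, 0.0, -alpha)
    x = L0 * (1 + u) / 2
    total = (L0 / 2)**(1 - alpha) * np.sum(w * np.exp(-t * x))
    # Gauss-Legendre on the growing intervals [L_j, L_{j+1}] with L_{j+1} = (1+B) L_j.
    v, wl = np.polynomial.legendre.leggauss(Q)
    a = L0
    while a < A:
        b = (1 + B) * a
        x = (b - a) / 2 * v + (b + a) / 2
        total += (b - a) / 2 * np.sum(wl * np.exp(-t * x) * x**(-alpha))
        a = b
    return total / math.gamma(1 - alpha)

for t in (0.05, 0.3, 1.0):
    assert abs(kernel_quadrature(t) - t**(alpha - 1)) < 1e-8 * t**(alpha - 1)
```

With the fixed interval ratio \(1+B = 4\) used here, each Gauss panel converges at a rate independent of the panel, which is what keeps the total number of nodes proportional to \(\log(A/L_0)\).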

### 2.1 Truncation

### Lemma 1

### Proof

### Remark 2

*A*.

### 2.2 Gauss–Jacobi quadrature for the initial interval

Recall the Bernstein ellipse \(\mathcal {E}_{\varrho }\), which is given as the image of the circle of radius \(\varrho > 1\) under the map \(z \mapsto (z+z^{-1})/2\). The largest imaginary part on \(\mathcal {E}_{\varrho }\) is \((\varrho -\varrho ^{-1})/2\) and the largest real part is \((\varrho +\varrho ^{-1})/2\).
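A quick numerical sanity check of these extremal values (our own sketch):

```python
import numpy as np

rho = 1.7
theta = np.linspace(0, 2 * np.pi, 100001)
z = rho * np.exp(1j * theta)       # circle of radius rho
e = (z + 1 / z) / 2                # its image, the Bernstein ellipse E_rho
assert abs(e.imag.max() - (rho - 1 / rho) / 2) < 1e-8
assert abs(e.real.max() - (rho + 1 / rho) / 2) < 1e-8
```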

### Theorem 3

Let *f* be analytic inside the Bernstein ellipse \(\mathcal {E}_{\varrho }\) with \(\varrho > 1\) and bounded there by *M*. Then the error of Gauss quadrature with weight *w*(*x*) is bounded by

### Proof

A proof of this result for \(w(x) \equiv 1\) can be found in [24, Chapter 19]. The same proof works for the weighted Gauss quadrature as well. We give the details next.

We expand *f* in a Chebyshev series

### Theorem 4

### Proof

### 2.3 Gauss quadrature on increasing intervals

We use Gauss quadrature with *Q* nodes and denote the corresponding error by

### Theorem 5

### Proof

### Remark 6

## 3 Runge–Kutta convolution quadrature

We consider an *s*-stage Runge–Kutta method described by the coefficient matrix \(\mathcal{A} = (a_{ij})_{i,j=1}^s \in \mathbb {R}^{s\times s}\), the vector of weights \({\mathbf {b}}= (b_1,\ldots ,b_s)^T \in \mathbb {R}^s\), and the vector of abscissae \({\mathbf {c}}= (c_1,\ldots ,c_s)^T \in [0,1]^s\). We assume that the method is *A*-stable, has classical order \(p\ge 1\) and stage order *q*, and satisfies \(a_{s,j}=b_j\), \(j=1,\dots ,s\) [10]. The corresponding stability function is given by

- 1.
\(c_s=1\).

- 2.
\(r(\infty ) = {\mathbf {b}}^T \mathcal{A}^{-1}\mathbb {1}-1 = 0\).

- 3.
\(r(z) = e^z+O(z^{p+1})\)

- 4.
\(|r(z)| \le 1\) for \(\mathrm{Re}\,z \le 0\).

Here *K*(*z*) denotes the Laplace transform of the convolution kernel *k*(*t*). *K* is assumed to be analytic for \(\mathrm{Re}\,z > 0\) and bounded there as \(|K(z)| \le |z|^{-\mu }\) for some \(\mu > 0\). The operational notation \(K(\partial _t) f\), introduced in [15], is useful in emphasising certain properties of convolutions. Of particular importance is the composition rule: if \(K(s) = K_1(s)K_2(s)\), then \(K(\partial _t)f = K_1(\partial _t) K_2(\partial _t) f\). This will be used when solving fractional differential equations in Sect. 7.2.
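The composition rule also holds exactly at the discrete level: since the CQ weights are the coefficients of \(K(\delta(\zeta)/h)\), the weights of a product of transfer functions are the Cauchy product of the individual weight sequences. A small check with first-order (BDF1) weights for \(K(z)=z^{-\mu}\) (our own sketch):

```python
import numpy as np

def cq_weights(mu, h, N):
    # BDF1 CQ weights of K(z) = z^(-mu): coefficients of (h/(1-zeta))^mu.
    w = np.empty(N + 1)
    w[0] = h**mu
    for j in range(1, N + 1):
        w[j] = w[j - 1] * (j - 1 + mu) / j
    return w

h, N, a, b = 0.1, 64, 0.3, 0.45
# Cauchy product of the weight sequences of z^(-a) and z^(-b) ...
composed = np.convolve(cq_weights(a, h, N), cq_weights(b, h, N))[:N + 1]
# ... equals the weight sequence of z^(-(a+b)), up to round-off.
direct = cq_weights(a + b, h, N)
assert np.max(np.abs(composed - direct)) < 1e-12
```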

with *m* the smallest integer such that \(m > -\mu \).

### Remark 7

*f*. For a sequence \(\mathbf {f}_0, \dots , \mathbf {f}_N \in \mathbb {R}^s\), we use the same notation \(K(\partial _t^h)\mathbf {f}\) to denote

FFT techniques based on (12) can be applied to compute all the required \({\mathbf {W}}_j\), \(j=0,\dots ,N\), with \(N=\lceil T/h\rceil \), at once [16]. The computational cost associated with this method is \(O(N\log N)\), but it requires precomputing and keeping in memory all the weight matrices for the approximation of every \(\mathcal {I}^{\alpha }[f](t_n)\), \(n=1,\dots ,N\); see [4] for details and many experiments.
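As an illustration (our own sketch, not the paper's code) of the FFT-based computation of CQ weights: the weights are the Taylor coefficients of \(K(\delta(\zeta)/h)\) at \(\zeta = 0\), which can be recovered from samples on a circle of radius \(\lambda < 1\) by a scaled FFT [16]. For backward Euler, \(\delta(\zeta) = 1-\zeta\) and \(K(z) = z^{-\alpha}\), so the result can be checked against the weight recurrence:

```python
import numpy as np

alpha, h, M = 0.5, 0.01, 512
lam = 1e-12**(1 / (2 * M))      # radius balancing aliasing against round-off

# Sample K(delta(zeta)/h) = (h/(1-zeta))^alpha on the circle |zeta| = lam.
zeta = lam * np.exp(2j * np.pi * np.arange(M) / M)
samples = (h / (1 - zeta))**alpha

# A scaled FFT recovers the expansion coefficients, i.e. the weights omega_j.
w_fft = (np.fft.fft(samples) / M).real / lam**np.arange(M)

# Reference: the same weights from the recurrence omega_j = omega_{j-1}(j-1+alpha)/j.
w_ref = np.empty(M)
w_ref[0] = h**alpha
for j in range(1, M):
    w_ref[j] = w_ref[j - 1] * (j - 1 + alpha) / j

assert np.max(np.abs(w_fft - w_ref)) < 1e-6
```

All *N* weights are obtained in \(O(N\log N)\) operations, but, as noted above, they must all be kept in memory.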

The following error estimate for the approximation of (1) by (14) is given by [18, Theorem 2.2]. Notice that we allow *K*(*z*) to be a map between two Banach spaces with appropriate norms denoted by \(\Vert \cdot \Vert \) in the following. This will be needed in Sect. 7.

### Theorem 8

Assume *K*(*z*) is analytic in a sector \(|\arg (z-c)|<\pi -\delta \) and satisfies there the bound \(\Vert K(z)\Vert \le M|z|^{-\alpha }\). Then if \(f\in C^{p}[0,T]\), there exist \(h_0>0\) and \(C>0\) such that for \(h\le h_0\) it holds

### 3.1 Real integral representation of the CQ weights

### Lemma 9

where *r* is the stability function of the method and \(\mathbf{q}(z)=\mathbf{b}^T(I-z\mathcal{A})^{-1}\).

### Proof

The following properties will be used later in Sect. 4.

### Lemma 10

### Proof

All the poles of *r*(*z*) (and hence of \(\mathbf{q}(z)\)) belong to \(\mathrm{Re}\,z > b\). Define now

*r*(*z*) to see that \(\sup _{\mathrm{Re}\,z = 0} \frac{1}{\mathrm{Re}\,z}\log |r(z)| = 1\).

Recall that \({\mathbf {q}}(z) = {\mathbf {b}}^T(I-z\mathcal{O}\!\iota )^{-1}\). As all the singularities of \({\mathbf {q}}\) are in the half-plane \(\mathrm{Re}\,z > b\) and \(\Vert {\mathbf {q}}(z)\Vert \rightarrow 0\) as \(|z| \rightarrow \infty \), we have that \(\Vert {\mathbf {q}}(z)\Vert \) is bounded in the region \(\mathrm{Re}\,z \le b\). \(\square \)

### Remark 11

- (a)
Note that for BDF1 we can choose \(b \in (0,1)\). Hence, \(\gamma = b^{-1}\log \frac{1}{1-b}\) and since \({\mathbf {q}}(z) = r(z)\) for BDF1, we can set \(C_{{\mathbf {q}}} = e^{\gamma b}\).

- (b)
For the 2-stage Radau IIA method we have
$$\begin{aligned} r(z) = \frac{2z+6}{z^2-4z+6}, \quad {\mathbf {q}}(z) = \frac{1}{2(z^2-4z+6)} \begin{bmatrix} 9&3-2z \end{bmatrix}. \end{aligned}$$
As the poles of *r* and \({\mathbf {q}}\) are at \(z = 2\pm \sqrt{2}\mathrm {i}\), we can choose any \(b\in (0,2)\) and obtain the optimal \(\gamma \) numerically using (21). For example, for \(b = 1\) we can choose \(\gamma \approx 1.0735\). Similarly we can compute \(C_{{\mathbf {q}}}\) by computing
$$\begin{aligned} C_{{\mathbf {q}}} = \sup _{\mathrm{Re}\,z = 0 \text { or } \mathrm{Re}\,z = b} \Vert {\mathbf {q}}(z)\Vert . \end{aligned}$$
For \(b = 1\) and the Euclidean norm we have \(C_{{\mathbf {q}}} \approx 1.6429\). Using the same procedure, for \(b = 3/2\) we have \(\gamma \approx 1.2617\) and \(C_{{\mathbf {q}}} \approx 3.3183\).

- (c)
For the 3-stage Radau IIA method the poles of *r*(*z*) and \({\mathbf {q}}(z)\) belong to \(\mathrm{Re}\,z \ge \frac{9^{2/3}}{6}-\frac{9^{1/3}}{2}+3 \approx 2.681\). Choosing \(b = 1\) gives \(\gamma \approx 1.0117\) and \(C_{{\mathbf {q}}} \approx 1.1803\), whereas for \(b = 1.5\) we obtain \(\gamma \approx 1.0521\) and \(C_{\mathbf {q}}\approx 1.7954\).
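The constants for the 2-stage method can be reproduced by a brute-force search over the boundary lines. This sketch is ours and assumes, consistently with the values quoted above, that (21) amounts to \(\gamma = \sup_{\mathrm{Re}\,z = b} \log|r(z)|/\mathrm{Re}\,z\) and that \(C_{\mathbf q}\) is the maximum of the Euclidean norm \(\Vert{\mathbf q}(z)\Vert_2\) over \(\mathrm{Re}\,z = 0\) and \(\mathrm{Re}\,z = b\):

```python
import numpy as np

b = 1.0
y = np.linspace(-10, 10, 200_001)   # imaginary parts sampled on the boundary lines

def r(z):
    # Stability function of the 2-stage Radau IIA method.
    return (2 * z + 6) / (z**2 - 4 * z + 6)

def q_norm(z):
    # Euclidean norm of q(z) = [9, 3-2z] / (2 (z^2 - 4z + 6)).
    return np.sqrt(81 + np.abs(3 - 2 * z)**2) / (2 * np.abs(z**2 - 4 * z + 6))

gamma = np.max(np.log(np.abs(r(b + 1j * y)))) / b
C_q = max(np.max(q_norm(1j * y)), np.max(q_norm(b + 1j * y)))
assert abs(gamma - 1.0735) < 1e-3
assert abs(C_q - 1.6429) < 1e-3
```

Both maxima are attained near \(y \approx \pm 1.1\), well inside the sampled range.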

### Lemma 12

### Proof

All the singularities of *r*(*z*), \({\mathbf {q}}(z)\), and \((I-z\mathcal{A})^{-1}\) lie in \(\mathrm{Re}\,z \ge \tilde{x}_0 > 0\). There exists a constant *C* such that for all \(\mathrm{Re}\,z < 0\)

### Remark 13

- 1.
For BDF1, \(c = 1\) and \(x_0 = 1\).

- 2.
For 2-stage Radau IIA the constant can be obtained following the proof. Namely, we choose \(\tilde{x}_0 = 2\) and find numerically that
$$\begin{aligned} \max \{|r(z)|, \Vert {\mathbf {q}}(z)\Vert , \Vert (I-z\mathcal{A})^{-1}\Vert \} |\mathrm{Re}\,z-\tilde{x}_0| \le 2, \qquad \mathrm{Re}\,z \le 0. \end{aligned}$$
Hence we can choose \(C = 2\), \(c = 1/2\), and \(x_0 = 2/C = 1\).

- 3.
Similarly, for 3-stage Radau IIA we choose \(\tilde{x}_0 = 2.6811\) and find that
$$\begin{aligned} \max \{|r(z)|, \Vert {\mathbf {q}}(z)\Vert , \Vert (I-z\mathcal{A})^{-1}\Vert \} |\mathrm{Re}\,z-\tilde{x}_0| \le 3.0821, \qquad \mathrm{Re}\,z \le 0. \end{aligned}$$
Hence we can choose \(C = 3.0821\), \(c = 1/C = 0.3245\), and \(x_0 = \tilde{x}_0/C = 0.8699\).

In the rest of the section our goal is to derive a good quadrature for the approximation of \(\varvec{\omega }_n\) and \({\mathbf {W}}_n\). We will perform the same steps as in Sect. 2 for the \(\varvec{\omega }_n\). The same quadrature rules will give essentially the same error estimates for the \({\mathbf {W}}_n\); see Remark 21.

## 4 Efficient quadrature for the CQ weights

Analogously to the continuous case (1), we fix \(n_0\), \(h\), and *T*, and develop an efficient quadrature for the CQ weights representation (17), for \(nh\in [(n_0+1)h, T]\) and \(\alpha \in (0,1)\).

### 4.1 Truncation of the CQ weights integral representation

### Lemma 14

### Proof

### Corollary 15

### Proof

### Remark 16

In practice we find that, instead of using Corollary 15, better results are obtained if a simple numerical search is done to find the optimal *A* such that the right-hand side in (22) with \(n = n_0+1\) is less than \(\mathrm {tol}\). To do this, we start from \(A=0\) and iteratively approximate the integral in (22) for increasing values of *A* (\(A \leftarrow A+0.125\) in our code) until the resulting quantity is below our error tolerance. The approximation of the integrals is done by the MATLAB built-in routine `integral`. Notice that this has to be done only once for each RK-CQ formula and value of \(\alpha \in (0,1)\).
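In Python the same search could look as follows. This is a sketch only: since (22) is not reproduced here, we use the tail of the continuous kernel representation, \(\Gamma(1-\alpha)^{-1}\int_A^\infty e^{-t_{\min}x}x^{-\alpha}\,dx\) with \(t_{\min} = (n_0+1)h\), as a stand-in for the right-hand side of (22), and `scipy.integrate.quad` in place of MATLAB's `integral`:

```python
import math
from scipy.integrate import quad

alpha, h, n0, tol = 0.5, 0.01, 5, 1e-4
t_min = (n0 + 1) * h               # worst case n = n0 + 1

def truncation_bound(A):
    # Stand-in for the right-hand side of (22): the tail of the kernel representation.
    val, _ = quad(lambda x: math.exp(-t_min * x) * x**(-alpha), A, math.inf)
    return val / math.gamma(1 - alpha)

A = 0.125
while truncation_bound(A) > tol:
    A += 0.125                     # same increment as in the paper's search

assert truncation_bound(A) <= tol
```

The search terminates once the exponential decay \(e^{-t_{\min}A}\) pushes the tail below the tolerance, and it is performed only once per choice of method and \(\alpha\).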

### 4.2 Gauss–Jacobi quadrature for the CQ weights

### Theorem 17

*b*and \(\gamma \) from Lemma 10, and

### Proof

*T* we obtain the bound

### Remark 18

In all our numerical experiments, we have found that \(\varrho _{\text {opt}} < \varrho _{\max }\).

### 4.3 Gauss quadrature on increasing intervals for the CQ weights

We use Gauss quadrature with *Q* nodes and denote the corresponding error by \(\varvec{\tau }_{n,j}(Q)\).

### Theorem 19

### Proof

The proof is the same as that of Theorem 5; we only need to combine the facts that \(|r(z)| \le 1\) for \(\mathrm{Re}\,z \le 0\), the bound \(\Vert {\mathbf {q}}(z)\Vert \le C_{{\mathbf {q}}}\) from Lemma 10, and the bound from Lemma 12. \(\square \)

### Remark 20

To obtain a uniform bound for \(t_n \in [t_{n_0+1}, T]\), we replace *n* by \(n_0+1\) in the above bound.

### Remark 21

We have developed the quadrature for the weights \(\varvec{\omega }_n\). However, up to a small difference in constants, the same error estimates hold for the matrix weights \({\mathbf {W}}_n\). Indeed, due to Lemma 12, the truncation estimate is the same. The main estimate used in the proof of Theorem 17 is the bound on the stability function *r*(*z*) and on \({\mathbf {q}}(z)\); the additional terms in \({\mathbf {E}}_n(z)\) would only contribute to the constant. A similar comment holds for Theorem 19.

## 5 Fast summation and computational cost

*L*where Gauss quadrature is used. Then our approximation of \(I^2_n\) has the form
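To make the structure of the fast summation concrete, here is a minimal self-contained sketch. It is our own illustration with assumed parameters, specialized to backward Euler, for which \(r(-hx) = 1/(1+hx)\): the \(n_0+1\) most recent steps are convolved with exact weights, while the older history is carried by one scalar state per quadrature node, updated at every step, so the memory is proportional to the number of nodes rather than to *N*. For Runge–Kutta methods the scalar states become *s*-vectors driven by \(r(-hx_k)\) and \({\mathbf q}(-hx_k)\), but the structure is analogous.

```python
import math
import numpy as np
from scipy.special import roots_jacobi

alpha, h, N, n0 = 0.5, 0.01, 400, 5
c_alpha = math.sin(math.pi * alpha) / math.pi   # 1/(Gamma(alpha)*Gamma(1-alpha))

# Quadrature for int_0^A g(x) x^(-alpha) dx: Gauss-Jacobi on [0, L0], then
# Gauss-Legendre on panels [a, 4a] up to A = 4^7 (illustrative choices).
u, w = roots_jacobi(30, 0.0, -alpha)
L0 = 1.0
xs = [L0 * (1 + u) / 2]
ws = [(L0 / 2)**(1 - alpha) * w]
v, wl = np.polynomial.legendre.leggauss(30)
a = L0
for _ in range(7):
    x_p = (3 * a) / 2 * v + (5 * a) / 2         # panel [a, 4a]
    xs.append(x_p)
    ws.append((3 * a) / 2 * wl * x_p**(-alpha))
    a *= 4
x = np.concatenate(xs)
wq = np.concatenate(ws)

# Exact BDF1 CQ weights, used for the local part (and for the reference below).
om = np.empty(N + 1)
om[0] = h**alpha
for j in range(1, N + 1):
    om[j] = om[j - 1] * (j - 1 + alpha) / j

f = np.cos(np.arange(N + 1) * h)
decay = 1 / (1 + h * x)                         # r(-h x) for backward Euler
kick = decay**(n0 + 2)

# Fast evaluation: exact weights for the recent steps, node states for the rest.
s = np.zeros_like(x)
I_fast = np.empty(N + 1)
for n in range(N + 1):
    local = sum(om[n - j] * f[j] for j in range(max(0, n - n0), n + 1))
    I_fast[n] = local + c_alpha * h * np.dot(wq, s)
    s = decay * s                               # age the stored history
    if n >= n0:
        s += kick * f[n - n0]                   # f_{n-n0} leaves the local window

# Reference: direct O(N^2) convolution with the exact weights.
I_direct = np.array([np.dot(om[:n + 1][::-1], f[:n + 1]) for n in range(N + 1)])
assert np.max(np.abs(I_fast - I_direct)) < 1e-7
```

Per step this costs \(O(\text{number of nodes})\) operations and memory, in line with the \(O(|\log\varepsilon|\log N)\) memory bound of the introduction.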

## 6 Numerical experiments

Given a tolerance \(\mathrm {tol}> 0\), time step \(h> 0\), minimal index \(n_0\), final time \(T > 0\), and the fractional power \(\alpha \in (0,1)\) we use the above estimates to choose the parameters in the quadrature.

We first choose *A* such that the upper bound for the truncation error in (22) is less than \(\mathrm {tol}/3\). We set \(\tilde{B} = 3\), from which we obtain *B*, and set *J* to the smallest integer such that \(L \le L_0(1+B)^J\). Next, we set \(L_j = L_0(1+B)^j\), for \(j = 0,\dots ,J\), and let \(Q_0\) denote the number of quadrature points in the Gauss–Jacobi quadrature on \([0, L_0]\) and \(Q_j\), \(j = 0,\dots , J-1\), the number of Gauss quadrature points in the interval \([L_{j},L_{j+1}]\). We choose the smallest \(Q_0\) so that the bound on \(\Vert \varvec{\tau }_{\mathrm {GJ},n}(Q_0)\Vert \) in Theorem 17 is less than \(\mathrm {tol}/3\); note that in all of the experiments below we had \(\varrho _{\text {opt}} < \varrho _{\text {max}}\). By a simple numerical minimization of the bound in Theorem 19, we find the optimal \(Q_j\) such that the error \(\Vert \varvec{\tau }_{n,j}(Q_j)\Vert < \mathrm {tol}\, J^{-1}/3\). With this choice of parameters each weight \(\varvec{\omega }_j\), \(j > n_0\), is computed to accuracy less than \(\mathrm {tol}\).
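The choice of *J* and of the interval endpoints can be sketched as follows (the values of \(L_0\), *B*, and the truncation point *L* are illustrative assumptions):

```python
import math

L0, B, L = 1.0, 3.0, 150.0
# Smallest integer J with L <= L0 * (1 + B)**J.
J = max(0, math.ceil(math.log(L / L0) / math.log(1 + B)))
assert L0 * (1 + B)**J >= L and (J == 0 or L0 * (1 + B)**(J - 1) < L)
Ls = [L0 * (1 + B)**j for j in range(J + 1)]   # endpoints L_0, ..., L_J
```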

**Table 1** Dependence of the total number of quadrature points on the time step \(h\) and final time *T*. The other parameters are fixed at \(n_0 = 5\), \(B = 3\), \(\mathrm {tol}= 10^{-6}\), \(\alpha = 0.5\). On the left the data is for backward Euler and on the right for the 2-stage Radau IIA CQ

| \(h\big \backslash T\) | 1 | 10 | 100 | 1000 | \(h\big \backslash T\) | 1 | 10 | 100 | 1000 |
|---|---|---|---|---|---|---|---|---|---|
| \(10^{-1}\) | 20 | 30 | 40 | 49 | \(10^{-1}\) | 13 | 25 | 34 | 44 |
| \(10^{-2}\) | 27 | 36 | 44 | 52 | \(10^{-2}\) | 21 | 31 | 39 | 46 |
| \(10^{-3}\) | 31 | 39 | 46 | 50 | \(10^{-3}\) | 28 | 35 | 41 | 46 |
| \(10^{-4}\) | 34 | 40 | 45 | 48 | \(10^{-4}\) | 31 | 37 | 43 | 45 |

We plot the difference between the *n*th weight computed using the new quadrature scheme and \(\varvec{\omega }_n\), an accurate approximation of the weight computed by standard means. We see that the error is bounded by the tolerance and that for the initial weights the error is close to this bound. The error for larger *n* is considerably smaller than the required tolerance. This is expected, as in Corollary 15 we need to use the worst case \(n = n_0+1\) to determine the truncation parameter *A*.

We show the dependence of the total number of quadrature points on \(h\) and *T* in Table 1, and on \(\alpha \) and \(\mathrm {tol}\) in Table 2. We observe only a moderate increase with decreasing \(h\) and \(\mathrm {tol}\) and increasing *T*. The dependence on \(\alpha \) is mild.

**Table 2** Dependence of the total number of quadrature points on the tolerance \(\mathrm {tol}\) and the fractional power \(\alpha \). The other parameters are fixed at \(h= 10^{-2}\), \(T = 50\), \(n_0 = 5\), \(B = 3\). Again the data on the left is for backward Euler and on the right for 2-stage Radau IIA

| \(\mathrm {tol}\big \backslash \alpha \) | 0.1 | 0.3 | 0.5 | 0.7 | 0.9 | \(\mathrm {tol}\big \backslash \alpha \) | 0.1 | 0.3 | 0.5 | 0.7 | 0.9 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| \(10^{-2}\) | 11 | 11 | 10 | 8 | 6 | \(10^{-2}\) | 9 | 9 | 8 | 8 | 6 |
| \(10^{-4}\) | 27 | 27 | 26 | 25 | 21 | \(10^{-4}\) | 23 | 25 | 24 | 23 | 20 |
| \(10^{-6}\) | 45 | 44 | 45 | 43 | 36 | \(10^{-6}\) | 39 | 39 | 39 | 37 | 35 |
| \(10^{-8}\) | 66 | 65 | 64 | 61 | 55 | \(10^{-8}\) | 71 | 68 | 65 | 53 | 51 |
| \(10^{-10}\) | 86 | 87 | 85 | 82 | 74 | \(10^{-10}\) | 96 | 93 | 90 | 86 | 77 |

### 6.1 Fractional integral

The exact solution *u*(*t*) is not known in closed form, so its role is taken by an accurate numerical approximation. In Fig. 2 we show the convergence of the error \(\max _n |u(t_n)-u_n|\) using the standard implementation of CQ. We compare it with the theoretical reference curve \( 10^{-2.5}(h^3+ \left| \log (h)\right| h^{3+\alpha })\), which fits the results better in this pre-asymptotic regime than the dominant term \(h^3\) on its own.

We also plot the error for each *n*, showing that the final perturbation error introduced by our approximation of the CQ weights remains bounded with respect to the target accuracy in our quadrature, cf. [23]. We also compare computational times in Fig. 3. For the implementation of the standard CQ we have used the \(O(N\log N)\) FFT based algorithm from [16]. We see that for larger time steps the FFT method is faster due to a certain overhead in constructing the quadrature points for the new method. For smaller time steps, however, the new method is even marginally faster. The main advantage of the new method is the \(O(\log N)\) amount of memory required, compared with the *O*(*N*) memory required by the standard method. For example, in this computation with the smallest time step, there are \(N = 2048\) time steps and the total number of quadrature points is 37. As each quadrature point carries approximately the same amount of memory as one directly computed time step, the memory requirement is around 50 times smaller with the new method for this example. Such a difference in memory requirements becomes of crucial importance for non-scalar problems coming from discretizations of PDEs; the next section considers this case.

## 7 Application to a fractional diffusion equation

### Remark 22

For simplicity we avoid the integer case \(\beta = 1\), as it is just the standard heat equation and in some places this case would have to be treated slightly differently.

The application of CQ based on BDF2 to integrate (28) in time has been analyzed in [8, Sect. 8]. A related problem with a fractional power of the Laplacian has been studied in [20], but not with a CQ time discretization. Here we apply Runge–Kutta based CQ. The analysis of the application of RK based CQ to (27) is not available in the literature, hence we give the analysis here for sufficiently smooth and compatible right-hand side *f*. We first analyze the error of the spatial discretization.

### 7.1 Space-time discretization of the FPDE: error estimates

### Theorem 23

Denote by *u*(*t*) the solution of (27). Then if \(m > \beta \) we have

### Proof

### Theorem 24

Let an *A*-stable, *s*-stage Runge–Kutta method of order *p* and stage order *q* be given which satisfies the assumptions of Sect. 3, and let *u*(*t*) be the solution of (27) and \({\mathbf {U}}\) the solution of (32). If \({\mathbf {u}}^h\) denotes the solution at full time steps, i.e., \({\mathbf {u}}^h_{n+1} = {\mathbf {U}}_{n,s}\), and if \(f \in C^p([0,T]; L^2(\Omega ))\) with \(f^{(k)}(0) = 0\) for \(k = 0,\dots , \lceil \beta \rceil -1\), then

### Proof

### 7.2 Implementation and numerical experiments

Though all the information needed for the implementation is given in the preceding pages, for the benefit of the reader we give some more detail here. Let *M* denote the number of degrees of freedom in space, i.e., \(M = \dim X_{\Delta x}\), let \({\mathbf {B}}\) and \({\mathbf {A}}\) be the mass and stiffness matrices.

*m*times.

We choose the right-hand side *f* so that the exact solution is

We show the error and the memory requirements for the new method and the standard implementation of CQ

| \(N\) | Error | Memory (MB) | Standard error | Standard mem. (MB) |
|---|---|---|---|---|
| 32 | \(2.94 \times 10^{-1}\) | 39.1 | \(2.94 \times 10^{-1}\) | 59.2 |
| 64 | \(3.07 \times 10^{-2}\) | 40.3 | \(3.07 \times 10^{-2}\) | 98.7 |
| 128 | \(2.61\times 10^{-3}\) | 42.8 | \(2.61\times 10^{-3}\) | 177.6 |
| 256 | \(2.98\times 10^{-4}\) | 44.0 | \(3.01\times 10^{-4}\) | 335.4 |

## References

- 1. Adolfsson, K., Enelund, M., Larsson, S.: Space-time discretization of an integro-differential equation modeling quasi-static fractional-order viscoelasticity. J. Vib. Control **14**(9–10), 1631–1649 (2008)
- 2. Baffet, D.: A Gauss–Jacobi kernel compression scheme for fractional differential equations (2018). arXiv:1801.06095
- 3. Baffet, D., Hesthaven, J.S.: A kernel compression scheme for fractional differential equations. SIAM J. Numer. Anal. **55**(2), 496–520 (2017)
- 4. Banjai, L.: Multistep and multistage convolution quadrature for the wave equation: algorithms and experiments. SIAM J. Sci. Comput. **32**(5), 2964–2994 (2010)
- 5. Banjai, L., Lubich, C.: An error analysis of Runge–Kutta convolution quadrature. BIT **51**(3), 483–496 (2011)
- 6. Banjai, L., Lubich, C., Melenk, J.M.: Runge–Kutta convolution quadrature for operators arising in wave propagation. Numer. Math. **119**(1), 1–20 (2011)
- 7. Banjai, L., Schanz, M.: Wave propagation problems treated with convolution quadrature and BEM. In: Langer, U., Schanz, M., Steinbach, O., Wendland, W.L. (eds.) Fast Boundary Element Methods in Engineering and Industrial Applications, Lecture Notes in Applied and Computational Mechanics, vol. 63, pp. 145–184. Springer, Berlin (2012)
- 8. Cuesta, E., Lubich, C., Palencia, C.: Convolution quadrature time discretization of fractional diffusion-wave equations. Math. Comput. **75**(254), 673–696 (2006)
- 9. Hairer, E., Lubich, C., Schlichte, M.: Fast numerical solution of nonlinear Volterra convolution equations. SIAM J. Sci. Stat. Comput. **6**(3), 532–541 (1985)
- 10. Hairer, E., Wanner, G.: Solving Ordinary Differential Equations II. Springer Series in Computational Mathematics, vol. 14, 2nd edn. Springer, Berlin (1996)
- 11. Henrici, P.: Applied and Computational Complex Analysis, vol. 2. Wiley, New York (1977)
- 12. Jiang, S., Zhang, J., Zhang, Q., Zhang, Z.: Fast evaluation of the Caputo fractional derivative and its applications to fractional diffusion equations (2015). arXiv:1511.03453
- 13. Li, J.-R.: A fast time stepping method for evaluating fractional integrals. SIAM J. Sci. Comput. **31**(6), 4696–4714 (2009/10)
- 14. Lubich, C.: Discretized fractional calculus. SIAM J. Math. Anal. **17**(3), 704–719 (1986)
- 15. Lubich, C.: Convolution quadrature and discretized operational calculus I. Numer. Math. **52**, 129–145 (1988)
- 16. Lubich, C.: Convolution quadrature and discretized operational calculus II. Numer. Math. **52**, 413–425 (1988)
- 17. Lubich, C.: On the multistep time discretization of linear initial-boundary value problems and their boundary integral equations. Numer. Math. **67**, 365–389 (1994)
- 18. Lubich, C., Ostermann, A.: Runge–Kutta methods for parabolic equations and convolution quadrature. Math. Comput. **60**(201), 105–131 (1993)
- 19. Lubich, C., Schädle, A.: Fast convolution for nonreflecting boundary conditions. SIAM J. Sci. Comput. **24**(1), 161–182 (2002)
- 20. Nochetto, R.H., Otárola, E., Salgado, A.J.: A PDE approach to space-time fractional parabolic problems. SIAM J. Numer. Anal. **54**(2), 848–873 (2016)
- 21. Pazy, A.: Semigroups of Linear Operators and Applications to Partial Differential Equations, vol. 44. Springer, New York (1983)
- 22. Sayas, F.-J.: Retarded Potentials and Time Domain Boundary Integral Equations, vol. 50. Springer, Heidelberg (2016)
- 23. Schädle, A., López-Fernández, M., Lubich, C.: Fast and oblivious convolution quadrature. SIAM J. Sci. Comput. **28**(2), 421–438 (2006)
- 24. Trefethen, L.N.: Approximation Theory and Approximation Practice. SIAM, Philadelphia (2013)
- 25. Yu, Y., Perdikaris, P., Karniadakis, G.E.: Fractional modeling of viscoelasticity in 3D cerebral arteries and aneurysms. J. Comput. Phys. **323**, 219–242 (2016)
- 26. Yuste, S.B., Acedo, L., Lindenberg, K.: Reaction front in an \(A+B \rightarrow C\) reaction-subdiffusion process. Phys. Rev. E **69**, 036126 (2004)
- 27. Zeng, F., Turner, I., Burrage, K.: A stable fast time-stepping method for fractional integral and derivative operators (2017). arXiv:1703.05480

## Copyright information

**Open Access**This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.