1 Background

We consider the linear transport equation in one-dimensional slab geometry:

$$\begin{aligned} \varepsilon \partial _tf + v \partial _xf&= \frac{\sigma }{\varepsilon }{{{\mathcal {L}}}} f - \varepsilon \sigma ^a f + \varepsilon S, \quad \sigma (x, z) \ge \sigma _{\text {min}} > 0, \end{aligned}$$
(1)
$$\begin{aligned} {{{\mathcal {L}}}} f(t,x,v,z)&= \frac{1}{2} \int _{-1}^1 f(t, x,v',z) \,\mathrm {d}v' - f(t,x,v,z). \end{aligned}$$
(2)

This equation arises in neutron transport, radiative transfer, etc.; it describes the transport of particles (for example, neutrons) in a background medium (for example, nuclei). Here \(f(t,x,v,z)\) is the density distribution of particles at time \(t\ge 0\) and position \(x \in (0,1)\); \(v=\Omega \cdot e_x = \cos \theta \in [-1, 1]\), where \(\theta \) is the angle between the direction of motion and the x-axis; \(\sigma (x,z)\) and \(\sigma ^a(x,z)\) are the total and absorption cross sections, respectively; and \(S(x,z)\) is the source term. \(\varepsilon \) is the dimensionless Knudsen number, the ratio between the particle mean free path and the characteristic length (such as the length of the domain). The Dirichlet boundary conditions are given in the incoming directions by

$$\begin{aligned} \begin{aligned} f(t,0,v,z)&= f_{\mathrm {L}}(t,v,z), \quad \text{ for } v\ge 0\,, \\ f(t,1,v,z)&= f_{\mathrm {R}}(t,v,z), \quad \text{ for } v\le 0\,, \end{aligned} \end{aligned}$$
(3)

while the initial condition is given by

$$\begin{aligned} f |_{t=0}(x,v,z) = f_0(x,v,z). \end{aligned}$$
(4)
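For illustration only (this sketch is not part of the paper), the velocity average and the collision operator \({{{\mathcal {L}}}}\) in (2) can be discretized by Gauss–Legendre quadrature in v, in the spirit of discrete ordinates; the grid size and the test distribution below are arbitrary choices:

```python
# Illustrative sketch: discrete-ordinates evaluation of the collision
# operator L f = (1/2) \int f dv' - f on a Gauss-Legendre velocity grid.
import numpy as np

v, w = np.polynomial.legendre.leggauss(16)   # nodes/weights on [-1, 1]

def collision(f):
    """Apply L to samples f[j] = f(v_j); the average [f] = (1/2) sum_j w_j f_j."""
    return 0.5 * np.dot(w, f) - f

f = 1.0 + 0.3 * v + 0.2 * v**2               # an arbitrary smooth test distribution
Lf = collision(f)

assert abs(0.5 * np.dot(w, Lf)) < 1e-12      # [L f] = 0
assert np.allclose(collision(np.ones_like(v)), 0.0)  # constants lie in N(L)
```

The two checks mirror the properties of \({{{\mathcal {L}}}}\) listed in Sect. 2: the velocity average of \({{{\mathcal {L}}}} f\) vanishes, and velocity-independent functions lie in the null space.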

We are interested in the problem that contains uncertainty in the collision cross section, source, initial or boundary data. The uncertainty is characterized by the random variable \(z\in {\mathbb {R}}^d\) with probability density function \(\omega (z)\). Thus in our problem \(f, \sigma , \sigma ^a\) and S all depend on z.

In recent years, there has been extensive activity in the study of partial differential equations and engineering problems with uncertainties, and many numerical methods have been introduced. In this article, we are interested in the stochastic Galerkin method based on polynomial chaos (originally introduced in Wiener's work [26]), which has been shown to be competitive in many applications; see [4, 27, 28]. The stochastic Galerkin method has been used for the linear transport equation with uncertain coefficients [25]. Here we are interested in problems that contain both uncertainty and multiple scales. The latter are characterized by the Knudsen number \(\varepsilon \): in the so-called optically thick regime (\(\varepsilon \ll 1\)), the high scattering rate of particles drives the linear transport equation to a diffusion equation, known as the diffusion limit [1, 3, 18]. Over the past decades, the development of asymptotic-preserving (AP) schemes for the (deterministic) linear transport equation with diffusive scaling has been very active; see for example [5, 7–9, 17, 19, 20, 22]. Only recently was an AP scheme for the linear transport equation with both uncertainty and diffusive scaling introduced in [15] (in the framework of the stochastic Galerkin method, coined the s-AP method). See [6, 12, 13] for more recent works along this line. A scheme is s-AP if the stochastic Galerkin method for the linear transport equation, as \(\varepsilon \rightarrow 0\), becomes a stochastic Galerkin method for the limiting diffusion equation. It was realized in [15] that the deterministic AP framework can easily be adopted to study linear transport equations with uncertain coefficients. Moreover, as shown in [6, 12], kinetic equations, linear or nonlinear, can preserve at later times the regularity in the random space of the initial data, which naturally leads to spectral accuracy of the stochastic Galerkin method.

When \(\varepsilon \ll 1\), however, the energy estimates, and consequently the convergence rates, given in [6, 12] depend on the reciprocal of \(\varepsilon \), which implies that the degree of the polynomials used in the stochastic Galerkin method needs to grow as \(\varepsilon \) decreases. In fact, this is typical of standard numerical methods for problems that contain small or multiple scales. While AP schemes can be used with numerical parameters independent of \(\varepsilon \), proving this rigorously is not easy and has been done on only a few occasions [5, 14]. A standard approach to proving uniform convergence is to use the diffusion limit, as was first done in [5] in the deterministic case and then in [11] for the uncertain transport equation; see also the review article [7]. However, such approaches might not give the sharp convergence rate.

In this paper, we provide a sharp error estimate for the stochastic Galerkin method for problem (1). This requires sharp (\(\varepsilon \)-independent) energy estimates on high-order derivatives in the random space of f, as well as of \([f]-f\), where \([f]\) is the velocity average of f defined in (5); these estimates remain bounded even as \(\varepsilon \rightarrow 0\). The uniform-in-\(\varepsilon \) spectral convergence then follows naturally, without using the diffusion limit.

The s-AP scheme in [15] uses the AP framework of [8], which relies on the even- and odd-parity formulation of the transport equation. In this paper, we use the micro–macro decomposition-based approach (see [22]) to develop a fully discrete s-AP method. The advantage of this approach is that it allows us to prove a uniform (in \(\varepsilon \)) stability condition, as was done for the deterministic counterpart in [23]. In fact, we will show that one can easily adapt the proof of [23] to the s-AP scheme.

The paper is organized as follows. In Sect. 2 we summarize the diffusion limit of the linear transport equation. The generalized polynomial chaos-based stochastic Galerkin method for the problem is introduced in Sect. 3 and shown formally to be s-AP. The uniform in \(\varepsilon \) regularity of the stochastic Galerkin scheme is proven in Sect. 4, which leads to a uniform spectral convergence proof. The micro–macro decomposition-based fully discrete scheme is given in Sect. 5, and its uniform stability is established in Sect. 6. Numerical experiments are carried out in Sect. 7. The paper is concluded in Sect. 8.

2 The diffusion limit

Denote

$$\begin{aligned}{}[ \phi ]=\frac{1}{2}\int _{-1}^1\phi (v)\,\mathrm {d}v, \end{aligned}$$
(5)

as the average of a velocity-dependent function \(\phi \). For each random realization z, there exists a positive function \(\phi (v)>0\), the so-called absolute equilibrium state, which satisfies \([ \phi ]=1\) and \([ v\phi (v)]=0\) (by the Perron–Frobenius theorem, cf. [1]).

Define in the Hilbert space \(L^2\big ((-1,1);\;\phi ^{-1}\,\mathrm {d}v\big )\) the inner product and norm

$$\begin{aligned} \langle f, g \rangle _{\phi } = \int _{-1}^1 f(v) g(v) \phi ^{-1}\,\mathrm {d}v, \quad \Vert f \Vert _{\phi }^2 = \langle f, f \rangle _{\phi }. \end{aligned}$$
(6)

The linear operator \({{{\mathcal {L}}}}\) satisfies the following properties [1]:

  • \([ {{{\mathcal {L}}}} f] =0\), for every \(f\in L^2([-1,1])\);

  • The null space of \({{{\mathcal {L}}}}\) is \({{{\mathcal {N}}}} ({{{\mathcal {L}}}}) = \text{ Span } \{\,\phi \mid \phi =[ \phi ]\,\}\);

  • The range of \({{{\mathcal {L}}}}\) is \({{{\mathcal {R}}}} ({{{\mathcal {L}}}}) = {{{\mathcal {N}}}} ({{{\mathcal {L}}}})^{\bot }=\{\,f\mid [f]=0\, \} ;\)

  • \({{{\mathcal {L}}}}\) is nonpositive and self-adjoint in \(L^2((-1,1); \phi ^{-1}\,\mathrm {d}v)\), i.e., there is a positive constant \(s_m\) such that

    $$\begin{aligned} \langle f, {{{\mathcal {L}}}} f\rangle _{\phi } \le - 2 s_m \Vert f\Vert ^2_{\phi },\quad \forall \, f\in {{{\mathcal {N}}}} ({{{\mathcal {L}}}})^{\bot }; \end{aligned}$$
    (7)
  • \({{{\mathcal {L}}}}\) admits a pseudo-inverse, denoted by \({{{\mathcal {L}}}}^{-1}\), from \({{{\mathcal {R}}}} ({{{\mathcal {L}}}})\) to \({{{\mathcal {R}}}} ({{{\mathcal {L}}}})\).

Let \(\rho = [ f]\). For each fixed z, the classical diffusion limit theory of the linear transport equation [1, 3, 18] gives that, as \(\varepsilon \rightarrow 0\), \(\rho \) converges to the solution of the following random diffusion equation:

$$\begin{aligned} \partial _t\rho = \partial _{x} (\kappa (z) \partial _{x}\rho )-\sigma ^a(z) \rho +S, \end{aligned}$$
(8)

where the diffusion coefficient is

$$\begin{aligned} \kappa (z) =\frac{1}{3} \sigma (z)^{-1}\,. \end{aligned}$$
(9)

The micro–macro decomposition, a useful tool for the study of the Boltzmann equation and its fluid dynamics limit [24], and for the design of asymptotic-preserving numerical schemes for kinetic equations [2, 16, 22], takes the form

$$\begin{aligned} f(t,x,v,z) = \rho (t,x,z) + \varepsilon g(t,x,v,z), \end{aligned}$$
(10)

where \([g]=0\). Substituting (10) into (1), one obtains its micro–macro form:

$$\begin{aligned}&\partial _t\rho + \partial _x[ vg] = -\sigma ^a \rho + S , \end{aligned}$$
(11a)
$$\begin{aligned}&\partial _tg + \frac{1}{\varepsilon } (I-[ \cdot ]) (v\partial _xg) = -\frac{\sigma (z)}{\varepsilon ^2}g - \sigma ^a g - \frac{1}{\varepsilon ^2}v \partial _x\rho . \end{aligned}$$
(11b)

The diffusion limit (8) can now be seen easily. As \(\varepsilon \rightarrow 0\), (11b) gives

$$\begin{aligned} g = - \frac{v}{\sigma (z)} \partial _x \rho \end{aligned}$$

which, when plugged into (11a), gives diffusion equation  (8)–(9).
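The closure step above can be verified numerically. The following sketch (illustrative only; the values of \(\sigma \) and \(\partial _x\rho \) are arbitrary samples) checks that, with \(g = -(v/\sigma )\partial _x\rho \) and \([v^2]=1/3\), the flux \([vg]\) in (11a) reduces to \(-\kappa (z)\partial _x\rho \) with \(\kappa = 1/(3\sigma )\):

```python
# Illustrative check of the closure [v g] = -(1/(3σ)) ∂x ρ behind (8)-(9).
import numpy as np

v, w = np.polynomial.legendre.leggauss(16)
avg = lambda h: 0.5 * np.dot(w, h)           # [·], the velocity average (5)

sigma, dx_rho = 2.0, -0.7                    # assumed sample values
g = -(v / sigma) * dx_rho                    # the limiting micro variable
flux = avg(v * g)                            # [v g]

assert abs(avg(v * v) - 1.0 / 3.0) < 1e-12   # [v²] = 1/3
assert abs(flux + dx_rho / (3.0 * sigma)) < 1e-12  # flux = -κ ∂x ρ
```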

3 The gPC-stochastic Galerkin approximation

We assume that \(\{\phi _i(z),\, i=0,1, \ldots \}\) is a complete orthonormal polynomial basis of the Hilbert space \(H({\mathbb {R}}^d; \omega (z)\,\mathrm {d}z)\) corresponding to the weight \(\omega (z)\), where \(\phi _i(z)\) is a polynomial of degree i and satisfies

$$\begin{aligned} \langle \phi _i, \phi _j \rangle _\omega =\int \phi _i(z)\phi _j(z)\omega (z)\,\mathrm {d}z=\delta _{ij}. \end{aligned}$$

Here \(\phi _0(z)=1\), and \(\delta _{ij}\) is the Kronecker delta. The inner product and norm in this space are, respectively,

$$\begin{aligned} \langle f, g \rangle _\omega = \int _{{{\mathbb {R}}}^d} fg\,\omega (z)\,\mathrm {d}z,\quad \Vert f \Vert _{\omega }^2 = \langle f, f \rangle _{\omega }\,. \end{aligned}$$
(12)
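As a concrete instance (an assumption for illustration: z uniformly distributed on \([-1,1]\), so that \(\omega (z)=1/2\) and the \(\phi _i\) are normalized Legendre polynomials), the orthonormality relation above can be checked by quadrature:

```python
# Illustrative sketch: orthonormal gPC basis for uniform z on [-1, 1].
import numpy as np
from numpy.polynomial import legendre

z, w = legendre.leggauss(32)                 # Gauss-Legendre nodes on [-1, 1]
w = 0.5 * w                                  # absorb the density ω(z) = 1/2

def phi(i, z):
    """Orthonormal basis φ_i = sqrt(2i+1) P_i(z), so <φ_i, φ_j>_ω = δ_ij."""
    c = np.zeros(i + 1); c[i] = 1.0
    return np.sqrt(2 * i + 1) * legendre.legval(z, c)

# Gram matrix of the first five basis polynomials under <·,·>_ω.
G = np.array([[np.dot(w, phi(i, z) * phi(j, z)) for j in range(5)]
              for i in range(5)])
assert np.allclose(G, np.eye(5), atol=1e-12)
```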

Since the solution \(f(t,\cdot ,\cdot ,\cdot )\) belongs to \(L^2\big ( (0,1)\times (-1,1)\times {\mathbb {R}}^d; \omega (z)\,\mathrm {d}x\,\mathrm {d}v\,\mathrm {d}z\big )\), it admits the generalized polynomial chaos (gPC) expansion

$$\begin{aligned} f(t,x,v,z) = \sum _{i=0}^{\infty } f_i(t,x,v) \, \phi _i(z), \quad {\hat{f}} = \big (\, f_i\, \big )_{i=0}^\infty :=\big ({\bar{f}}, {\hat{f}}_1\big ). \end{aligned}$$

The mean and variance of f can be obtained from the expansion coefficients as

$$\begin{aligned} {\bar{f}}= E (f) = \int _{{\mathbb {R}}^d} f \omega (z)\,\mathrm {d}z = f_0, \quad \text{ var } (f) = |{\hat{f}}_1|^2\,. \end{aligned}$$

The idea of the stochastic Galerkin (SG) approximation [4, 28] is to truncate the above infinite series by

$$\begin{aligned} f_M = \sum _{i=0}^{M} f_i \, \phi _i, \quad {\hat{f}}^M = \big (\, f_i \, \big )_{i=0}^M := \big ({\bar{f}}, {\hat{f}}_1^M\big ), \end{aligned}$$
(13)

and the mean and variance of \(f_M\) are obtained from the expansion coefficients as

$$\begin{aligned} E (f_M) = {\bar{f}}, \quad \text{ var } (f_M) = |{\hat{f}}_1^M|^2 \le \text{ var } (f)\,. \end{aligned}$$
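The extraction of the mean and variance from the truncated coefficients can be sketched as follows (same illustrative setting as above: uniform z on \([-1,1]\), normalized Legendre basis, and an arbitrary smooth test function \(f(z)=\mathrm {e}^z\)):

```python
# Illustrative sketch: E(f_M) = f_0 and var(f_M) = |f̂_1^M|² ≤ var(f).
import numpy as np
from numpy.polynomial import legendre

z, w = legendre.leggauss(64)
w = 0.5 * w                                  # density ω(z) = 1/2 on [-1, 1]

def phi(i, z):
    c = np.zeros(i + 1); c[i] = 1.0
    return np.sqrt(2 * i + 1) * legendre.legval(z, c)

f = np.exp(z)                                # arbitrary smooth test function
coeff = np.array([np.dot(w, f * phi(i, z)) for i in range(9)])  # f_i, M = 8

mean_M = coeff[0]                            # E(f_M) = \bar f
var_M = np.sum(coeff[1:] ** 2)               # var(f_M) = |f̂_1^M|²

mean = np.dot(w, f)                          # reference mean, = sinh(1)
var = np.dot(w, (f - mean) ** 2)             # reference variance

assert abs(mean_M - mean) < 1e-12
assert var_M <= var + 1e-12                  # var(f_M) ≤ var(f)
```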

Furthermore, we define

$$\begin{aligned} \sigma _{ij}= & {} \big \langle \phi _i,\, \sigma \phi _j \big \rangle _\omega , \quad \Sigma = \big ( \,\sigma _{ij} \, \big )_{M+1, M+1}, \\ \sigma ^a_{ij}= & {} \big \langle \phi _i, \, \sigma ^a \phi _j \big \rangle _\omega , \quad \Sigma ^a = \big (\, \sigma ^a_{ij} \,\big )_{M+1, M+1}, \end{aligned}$$

for \(0\le i,j \le M\). Let \( \text{ Id } \) be the \((M+1)\times (M+1)\) identity matrix. \(\Sigma , \Sigma ^a\) are symmetric positive-definite matrices satisfying [27]

$$\begin{aligned} \Sigma \ge \sigma _{\text {min}} \text{ Id } . \end{aligned}$$
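The assembly of \(\Sigma \) and the lower bound \(\Sigma \ge \sigma _{\text {min}} \text{ Id } \) can be sketched as follows (illustrative: the cross section \(\sigma (z)=3+z\ge 2\) and the normalized Legendre basis on uniform \(z\in [-1,1]\) are assumptions made for the example):

```python
# Illustrative sketch: Σ_ij = <φ_i, σ φ_j>_ω is symmetric and Σ ⪰ σ_min Id.
import numpy as np
from numpy.polynomial import legendre

M = 4
z, w = legendre.leggauss(64); w = 0.5 * w    # density ω(z) = 1/2 on [-1, 1]

def phi(i, z):
    c = np.zeros(i + 1); c[i] = 1.0
    return np.sqrt(2 * i + 1) * legendre.legval(z, c)

sigma_min = 2.0
sigma = 3.0 + z                              # assumed cross section, σ(z) ≥ 2

Phi = np.array([phi(i, z) for i in range(M + 1)])   # (M+1) x nodes
Sigma = Phi @ np.diag(w * sigma) @ Phi.T     # Σ_ij = <φ_i, σ φ_j>_ω

assert np.allclose(Sigma, Sigma.T)
assert np.linalg.eigvalsh(Sigma).min() >= sigma_min - 1e-10
```

The eigenvalue bound follows since \(x^T\Sigma x = \int \sigma (z)\big (\sum _i x_i\phi _i\big )^2\omega \,\mathrm {d}z \ge \sigma _{\text {min}}|x|^2\).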

If one applies the gPC ansatz (13) to transport equation (1) and conducts the Galerkin projection, one obtains [15, 25]:

$$\begin{aligned} \varepsilon \partial _t{\hat{f}} + v \partial _x{\hat{f}} = - \frac{1}{\varepsilon }(I-[ \cdot ]) \Sigma {\hat{f}} -\varepsilon \Sigma ^a {\hat{f}} + \varepsilon {\hat{S}} \end{aligned}$$
(14)

where \({\hat{S}}\) is defined analogously to (13).

We now use the micro–macro decomposition

$$\begin{aligned} {\hat{f}}(t,x,v) = {\hat{\rho }}(t,x) + \varepsilon \hat{g}(t,x,v), \end{aligned}$$
(15)

where \({{\hat{\rho }}}=[{\hat{f}}]\) and \([{\hat{g}}]=0\), in (14) to get

$$\begin{aligned} \partial _t{{\hat{\rho }}} + \partial _x[ v {\hat{g}}]&= -\Sigma ^a {{\hat{\rho }}} + {\hat{S}} , \end{aligned}$$
(16a)
$$\begin{aligned} \partial _t{\hat{g}} + \frac{1}{\varepsilon } (I-[ \cdot ]) (v\partial _x{\hat{g}})&= -\frac{1}{\varepsilon ^2} \Sigma {\hat{g}} - \Sigma ^a {\hat{g}} - \frac{1}{\varepsilon ^2}v \partial _x{{\hat{\rho }}}, \end{aligned}$$
(16b)

with initial data

$$\begin{aligned} {{\hat{\rho }}}(0,x) = {{\hat{\rho }}}_0(x), \quad {\hat{g}}(0,x,v) = {\hat{g}}_0(x,v)\,, \end{aligned}$$

that satisfy

$$\begin{aligned} \frac{1}{2} \int _{-1}^{1} ({{\hat{\rho }}}(0,x) + \varepsilon {\hat{g}}(0,x,v))^2 \,\mathrm {d}v = {{\hat{\rho }}}(0,x)^2 + \frac{\varepsilon ^2}{2} \int _{-1}^{1} {\hat{g}}(0,x,v)^2 \,\mathrm {d}v \le C\,. \end{aligned}$$

It is easy to see that system (16) formally has the diffusion limit as \(\varepsilon \rightarrow 0\):

$$\begin{aligned} \partial _t{{\hat{\rho }}} = \partial _{x} (K \partial _{x}{{\hat{\rho }}}) - \Sigma ^a {{\hat{\rho }}} + {\hat{S}}\,, \end{aligned}$$
(17)

where

$$\begin{aligned} K =\frac{1}{3} \Sigma ^{-1}\,. \end{aligned}$$

Thus the gPC approximation is s-AP in the sense of [15].

One can easily derive the following energy estimate for system (16):

$$\begin{aligned}&\int _0^1{{\hat{\rho }}}(t,x)^2\,\mathrm {d}x + \frac{\varepsilon ^2}{2} \int _0^1 \int _{-1}^{1} {\hat{g}}(t,x,v)^2 \,\mathrm {d}v \,\mathrm {d}x \\&\quad \le \int _0^1 {{\hat{\rho }}}(0,x)^2 \,\mathrm {d}x+ \frac{\varepsilon ^2}{2} \int _0^1 \int _{-1}^{1} {\hat{g}}(0,x,v)^2 \,\mathrm {d}v \,\mathrm {d}x\,. \end{aligned}$$

On the other hand, the direct gPC approximation of the random diffusion equation (8)–(9) is:

$$\begin{aligned} \partial _t{{\hat{\rho }}} = \partial _{x} (K_d \partial _{x}{{\hat{\rho }}})- \Sigma ^a {{\hat{\rho }}} + {\hat{S}}, \end{aligned}$$
(18)

where \(K_d = (\kappa _{ij})\), with \(\kappa _{ij}= \langle \phi _i, \kappa \phi _j \rangle _\omega \).
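Note that for finite M the two diffusion matrices differ in general: K in (17) inverts the truncated multiplication operator \(\Sigma \), while \(K_d\) projects \(\kappa (z)=1/(3\sigma (z))\) directly; both approximate multiplication by \(\kappa \) as \(M\rightarrow \infty \). A numerical sketch (illustrative; \(\sigma (z)=3+z\) and the normalized Legendre basis on uniform \(z\in [-1,1]\) are assumed for the example):

```python
# Illustrative comparison of K = Σ^{-1}/3 (from (17)) and K_d = (<φ_i, κ φ_j>)
# (from (18)); their leading entries both approximate E[κ] = <1/(3σ)>.
import numpy as np
from numpy.polynomial import legendre

z, w = legendre.leggauss(128); w = 0.5 * w   # density ω(z) = 1/2 on [-1, 1]
sigma = 3.0 + z                              # assumed σ(z), bounded below by 2

def phi_mat(M):
    P = []
    for i in range(M + 1):
        c = np.zeros(i + 1); c[i] = 1.0
        P.append(np.sqrt(2 * i + 1) * legendre.legval(z, c))
    return np.array(P)

def K_pair(M):
    Phi = phi_mat(M)
    Sigma = Phi @ np.diag(w * sigma) @ Phi.T          # <φ_i, σ φ_j>_ω
    K = np.linalg.inv(Sigma) / 3.0                    # K of (17)
    Kd = Phi @ np.diag(w / (3.0 * sigma)) @ Phi.T     # K_d of (18)
    return K, Kd

kappa_mean = np.dot(w, 1.0 / (3.0 * sigma))  # reference value of E[κ]
K2, Kd2 = K_pair(2)
K8, Kd8 = K_pair(8)
err2 = abs(K2[0, 0] - kappa_mean)
err8 = abs(K8[0, 0] - kappa_mean)

assert abs(Kd8[0, 0] - kappa_mean) < 1e-12   # the K_d entry is the projection
assert err8 < err2                           # the K entry converges as M grows
```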

4 The regularity in the random space and a uniform spectral convergence analysis of gPC-SG method

In this section, we assume \(\sigma ^a=S=0\) for clarity, and impose the periodic boundary condition

$$\begin{aligned} f(t, 0, v, z) = f(t, 1, v, z) \end{aligned}$$
(19)

for simplicity. We prove that, under suitable assumptions on \(\sigma (z)\), the solution to the linear transport equation with random inputs preserves the regularity in the random space of the initial data uniformly in \(\varepsilon \). Based on this regularity result, we then conduct a spectral convergence analysis and error estimates for the gPC-SG method, obtaining error bounds that are uniform in \(\varepsilon \).

4.1 Notations

We first recall the Hilbert space of the random variable introduced in Sect. 3,

$$\begin{aligned} H({{\mathbb {R}}}^d;\;\omega \,\mathrm {d}z) = \left\{ \,f:{{\mathbb {R}}}^d \rightarrow {{\mathbb {R}}}\;\Big |\; \int _{{{\mathbb {R}}}^d} f^2(z)\omega (z)\,\mathrm {d}z < +\infty \,\right\} , \end{aligned}$$
(20)

equipped with the inner product and norm defined in (12). We also define the kth-order differential operator with respect to z as

$$\begin{aligned} D^k f(t,x,v,z) := \partial ^k_z f(t,x,v,z), \end{aligned}$$
(21)

and the Sobolev norm in H as

$$\begin{aligned} \Vert f(t,x,v,\cdot )\Vert _{H^k}^2 := \sum _{\alpha \le k} \Vert D^\alpha f(t,x,v,\cdot )\Vert _\omega ^2. \end{aligned}$$
(22)

Finally, we introduce norms in space and velocity as follows:

$$\begin{aligned} \Vert f(t,\cdot ,\cdot ,\cdot )\Vert _{\Gamma }^2&:=\int _Q \Vert f(t,x,v,\cdot )\Vert _{\omega }^2\,\,\mathrm {d}x\,\mathrm {d}v,\quad t\ge 0, \end{aligned}$$
(23)
$$\begin{aligned} \Vert f(t,\cdot ,\cdot ,\cdot )\Vert _{\Gamma ^k}^2&:=\int _Q \Vert f(t,x,v,\cdot )\Vert _{H^k}^2\,\,\mathrm {d}x\,\mathrm {d}v,\quad t\ge 0, \end{aligned}$$
(24)

where \(Q=[0,1]\times [-1,1]\) denotes the domain in the phase space. For simplicity, we will suppress the dependence on t and simply write \(\Vert f\Vert _\Gamma , \Vert f\Vert _{\Gamma ^k}\) in the following proofs.

4.2 Regularity in the random space

We will study the regularity of f with respect to the random variable z. To this aim, we first prove the following lemma. For simplicity, we state and prove it only in the one-dimensional case; the proof in the high-dimensional case is identical except for a change of coefficient.

Lemma 4.1

Assume \(\sigma (z)\ge \sigma _{\mathrm {min}}>0\). Then for any integer \(k\ge 0\) and any \(\sigma \in W^{k,\infty }\), \(g\in H^k\), we have

$$\begin{aligned} -\langle D^k (\sigma g), D^k g\rangle _\omega \le -\frac{\sigma _{\mathrm {min}}}{2}\Vert D^k g\Vert ^2_\omega + \frac{4^k}{2\sigma _{\mathrm {min}}}\left( \max \limits _{0\le j\le k}\Vert D^j \sigma \Vert ^2_{L^\infty }\right) \Vert g\Vert _{H^{k-1}}^2\,. \end{aligned}$$
(25)

Proof

Since

$$\begin{aligned} D^k (\sigma g) = \sum _{j = 0}^k \left( {\begin{array}{c}k\\ j\end{array}}\right) (D^{k-j} \sigma ) (D^j g) = \sigma D^k g + \sum _{j = 0}^{k-1} \left( {\begin{array}{c}k\\ j\end{array}}\right) (D^{k-j} \sigma ) (D^j g)\,, \end{aligned}$$
(26)

we have

$$\begin{aligned} -\langle D^k(\sigma g), D^k g\rangle _\omega= & {} -\langle \sigma D^k g, D^k g \rangle _\omega -\left\langle \sum _{j=0}^{k-1} \left( {\begin{array}{c}k\\ j\end{array}}\right) (D^{k-j} \sigma ) (D^j g), D^k g \right\rangle _\omega \,\nonumber \\\le & {} -\sigma _{\mathrm {min}}\Vert D^k g\Vert ^2_\omega -\left\langle \sum _{j=0}^{k-1} \left( {\begin{array}{c}k\\ j\end{array}}\right) (D^{k-j} \sigma ) (D^j g), D^k g \right\rangle _\omega \,. \end{aligned}$$
(27)

By Young’s inequality

$$\begin{aligned} -\left\langle \sum _{j=0}^{k-1} \left( {\begin{array}{c}k\\ j\end{array}}\right) (D^{k-j} \sigma ) (D^j g), D^k g \right\rangle _\omega\le & {} {} \frac{\sigma _{\mathrm {min}}}{2}\Vert D^k g\Vert ^2_\omega \nonumber \\&{} + \,\frac{1}{2\sigma _{\mathrm {min}}}\left\| \sum _{j=0}^{k-1} \left( {\begin{array}{c}k\\ j\end{array}}\right) (D^{k-j} \sigma ) (D^j g)\right\| ^2_\omega \,, \end{aligned}$$
(28)

and Cauchy–Schwarz inequality

$$\begin{aligned} \left\| \sum _{j=0}^{k-1} \left( {\begin{array}{c}k\\ j\end{array}}\right) (D^{k-j} \sigma ) (D^j g)\right\| ^2_\omega\le & {} \left( \sum _{j=0}^{k-1} \left( {\begin{array}{c}k\\ j\end{array}}\right) ^2\Vert D^{k-j} \sigma \Vert ^2_{L^\infty }\right) \left( \sum _{j=0}^{k-1}\Vert D^{j} g\Vert _\omega ^2\right) \nonumber \\\le & {} \left\{ \sum _{j=0}^{k} \left( {\begin{array}{c}k\\ j\end{array}}\right) ^2\right\} \max \limits _{0\le j\le k}\Vert D^{j} \sigma \Vert ^2_{L^\infty }\Vert g\Vert _{H^{k-1}}^2 \nonumber \\\le & {} 4^k\left( \max \limits _{0\le j\le k}\Vert D^{j} \sigma \Vert ^2_{L^\infty }\right) \Vert g\Vert _{H^{k-1}}^2\,. \end{aligned}$$
(29)

Combining (27), (28) and (29), one obtains

$$\begin{aligned} -\langle D^k(\sigma g), D^k g\rangle _\omega \le -\frac{\sigma _{\mathrm {min}}}{2}\Vert D^k g\Vert ^2_\omega + \frac{4^k}{2\sigma _{\mathrm {min}}}\left( \max \limits _{0\le j\le k}\Vert D^{j} \sigma \Vert ^2_{L^\infty }\right) \Vert g\Vert _{H^{k-1}}^2\,. \end{aligned}$$
(30)

This completes the proof of Lemma 4.1. \(\square \)
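The combinatorial bound used in (29) is the Vandermonde identity \(\sum _{j=0}^k \left( {\begin{array}{c}k\\ j\end{array}}\right) ^2 = \left( {\begin{array}{c}2k\\ k\end{array}}\right) \le 4^k\), which can be checked directly:

```python
# Direct check of the binomial bound behind (29):
# sum_j C(k,j)^2 = C(2k,k) <= 4^k  (Vandermonde identity).
from math import comb

for k in range(12):
    s = sum(comb(k, j) ** 2 for j in range(k + 1))
    assert s == comb(2 * k, k)               # Vandermonde identity
    assert s <= 4 ** k                       # since sum_j C(2k,j) = 4^k
```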

Now we are ready to prove the following regularity result.

Theorem 4.1

(Uniform regularity) Assume

$$\begin{aligned} \sigma (z)\ge \sigma _{\mathrm {min}}>0\,. \end{aligned}$$

If for some integer \(m\ge 0\),

$$\begin{aligned} \Vert D^k\sigma (z)\Vert _{L^\infty }\le C_\sigma , \quad \Vert D^k f_{0}\Vert _{\Gamma }\le C_0,\quad k=0,\dots ,m, \end{aligned}$$
(31)

then the solution f to linear transport equation (1)–(2), with \(\sigma ^a = S = 0\) and periodic boundary condition (19), satisfies,

$$\begin{aligned} \Vert D^k f\Vert _{\Gamma } \le C, \quad k=0,\ldots ,m, \quad \forall t>0, \end{aligned}$$
(32)

where \(C_\sigma , C_0\) and C are constants independent of \(\varepsilon \).

Proof

For \(\sigma ^a = S = 0\), taking the kth-order (\(0\le k\le m\)) formal derivative of (1) with respect to z gives

$$\begin{aligned} \varepsilon ^2 \partial _t(D^k f) + \varepsilon v \partial _x(D^k f) = D^k\big (\sigma (z)([ f]-f)\big ), \end{aligned}$$
(33)

where \([ \cdot ]\) is the average operator defined in (5). Multiplying both sides of (33) by \(D^k f\) and integrating over \(Q=[0,1]\times [-1,1]\), one gets

$$\begin{aligned}&\frac{\varepsilon ^2}{2}\partial _t\Vert D^k f\Vert ^2_{\Gamma } + \varepsilon \int _Q v\langle D^k f, \partial _x(D^k f)\rangle _\omega \,\mathrm {d}x\,\mathrm {d}v \nonumber \\&\quad = \int _Q \langle D^k(\sigma (z)([ f]-f)),D^k f \rangle _{\omega }\,\mathrm {d}x\,\mathrm {d}v\,. \end{aligned}$$
(34)

Integration by parts yields

$$\begin{aligned} \varepsilon \int _{Q}v\langle D^k f, \partial _x(D^k f)\rangle _\omega \,\mathrm {d}x\,\mathrm {d}v = \frac{\varepsilon }{2}\int _{Q\times {{\mathbb {R}}}^d}v \partial _x(D^k f)^2 \omega \,\mathrm {d}z\,\mathrm {d}x\,\mathrm {d}v=0, \end{aligned}$$
(35)

where we have used the periodic boundary condition (19). Notice that

$$\begin{aligned} \int _Q \langle D^k\big (\sigma (z)([ f]-f)\big ),[ D^k f] \rangle _{\omega }\,\mathrm {d}x\,\mathrm {d}v = 0\,, \end{aligned}$$
(36)

which, combined with (34), gives

$$\begin{aligned} \frac{\varepsilon ^2}{2}\partial _t\Vert D^k f\Vert ^2_{\Gamma } = -\int _Q \langle D^k\big (\sigma (z)([ f]-f)\big ),D^k ([ f]-f) \rangle _{\omega }\,\mathrm {d}x\,\mathrm {d}v. \end{aligned}$$
(37)

Energy estimate. We will establish the following energy estimate by mathematical induction on k: for any \(k\ge 0\), there exist constants \(c_{kj}>0,\, j=0,\dots , k-1\), such that

$$\begin{aligned} \varepsilon ^2\partial _t\left( \Vert D^k f\Vert ^2_{\Gamma } + \sum _{j=0}^{k-1} c_{kj}\Vert D^j f\Vert ^2_{\Gamma }\right) \le {\left\{ \begin{array}{ll} -2\sigma _{\mathrm {min}}\big \Vert [ f] - f\big \Vert ^2_{\Gamma }, \; &{} k=0, \\ \\ -\sigma _{\mathrm {min}}\big \Vert D^k([ f] - f)\big \Vert ^2_{\Gamma }, \; &{} k \ge 1. \end{array}\right. } \end{aligned}$$
(38)

When \(k=0\), (37) becomes

$$\begin{aligned} \begin{aligned} \varepsilon ^2\partial _t\Vert f\Vert ^2_{\Gamma }&= -2\int _Q \langle \sigma (z)([f]-f), ([f]-f) \rangle _{\omega }\,\mathrm {d}x\,\mathrm {d}v \\&\le -2\sigma _{\mathrm {min}} \Vert [f] - f\Vert ^2_{\Gamma }\,, \end{aligned} \end{aligned}$$
(39)

which satisfies (38).

Assume that (38) holds for all \(k\le p\), where \(p\in {{\mathbb {N}}}\). Adding these inequalities together, we get

$$\begin{aligned} \varepsilon ^2\partial _t\left( \frac{1}{2}\Vert f\Vert ^2_{\Gamma } + \sum _{i=1}^{p}\Vert D^i f\Vert ^2_{\Gamma } + \sum _{i=1}^{p}\sum _{j=0}^{i-1}c_{ij}\Vert D^j f\Vert ^2_{\Gamma }\right) \le -\sigma _{\mathrm {min}}\big \Vert [f]-f\big \Vert ^2_{\Gamma ^{p}}\,, \end{aligned}$$
(40)

which is equivalent to

$$\begin{aligned} \varepsilon ^2\partial _t\left( \sum _{j=0}^{p}c'_{p+1, j}\Vert D^j f\Vert ^2_{\Gamma }\right) \le -\sigma _{\mathrm {min}}\big \Vert [f]-f\big \Vert ^2_{\Gamma ^{p}}\,, \end{aligned}$$
(41)

where

$$\begin{aligned} c'_{p+1, j} = {\left\{ \begin{array}{ll} \dfrac{1}{2} + \sum \limits _{i=1}^p c_{i 0}, \quad &{} j=0, \\ 1 + \sum \limits _{i=1}^p c_{i j}, \quad &{} 1\le j\le p-1, \\ 1, \quad &{} j=p. \end{array}\right. } \end{aligned}$$
(42)

When \(k=p+1\), (37) reads

$$\begin{aligned} \varepsilon ^2\partial _t\Vert D^{p+1} f\Vert ^2_{\Gamma } = -2\int _Q \langle D^{p+1}\big (\sigma (z)([ f]-f)\big ),D^{p+1} ([f]-f) \rangle _{\omega }\,\mathrm {d}x\,\mathrm {d}v\,. \end{aligned}$$
(43)

According to Lemma 4.1, applied with \(g = [f] - f\) and \(k = p+1\), together with the assumption \(\Vert D^k\sigma (z)\Vert _{L^\infty }\le C_\sigma \), the right-hand side satisfies the estimate

$$\begin{aligned} \mathrm {RHS}\le & {} {} -\sigma _{\mathrm {min}}\int _Q \big \Vert D^{p+1}([f] - f)\big \Vert ^2_\omega \,\mathrm {d}x\,\mathrm {d}v \nonumber \\ {}&+ \frac{4^{p+1}}{\sigma _{\mathrm {min}}}\left( \max \limits _{0\le j\le p+1}\Vert D^{j} \sigma \Vert ^2_{L^\infty }\right) \int _Q\big \Vert [f] - f\big \Vert _{H^{p}}^2\,\mathrm {d}x\,\mathrm {d}v\nonumber \\\le & {} {} -\sigma _{\mathrm {min}}\big \Vert D^{p+1}([f] - f)\big \Vert ^2_{\Gamma } + \frac{C_\sigma ^2 C'_{p+1}}{\sigma _{\mathrm {min}}}\big \Vert [f] - f\big \Vert ^2_{\Gamma ^{p}}\,. \end{aligned}$$
(44)

where \(C'_{p+1} = (p+1)4^{p+1}\). Now we have the estimate

$$\begin{aligned} \varepsilon ^2\partial _t\Vert D^{p+1} f\Vert ^2_{\Gamma } \le -\sigma _{\mathrm {min}}\big \Vert D^{p+1}([f] - f)\big \Vert ^2_{\Gamma } + \frac{C_\sigma ^2 C'_{p+1}}{\sigma _{\mathrm {min}}}\big \Vert [f] - f\big \Vert ^2_{\Gamma ^{p}}. \end{aligned}$$
(45)

Adding (45) to (41) multiplied by \(C_\sigma ^2 C'_{p+1}/ \sigma _{\mathrm {min}}^2\) gives

$$\begin{aligned} \varepsilon ^2\partial _t\left( \Vert D^{p+1} f\Vert ^2_{\Gamma } + \sum _{j=0}^{p} c_{p+1, j}\Vert D^j f\Vert ^2_{\Gamma }\right) \le -\sigma _{\mathrm {min}}\big \Vert D^{p+1}([f] - f)\big \Vert ^2_{\Gamma }\,, \end{aligned}$$
(46)

where

$$\begin{aligned} c_{p+1, j} = \frac{C_\sigma ^2 C'_{p+1}}{\sigma _{\mathrm {min}}^2}c'_{p+1, j}\,. \end{aligned}$$
(47)

This shows that (38) still holds for \(k= p+1\). By mathematical induction, (38) holds for all \(k\in {{\mathbb {N}}}\).

Finally, according to (38), we have

$$\begin{aligned} \partial _t\left( \Vert D^k f\Vert ^2_{\Gamma } + \sum _{j=0}^{k-1} c_{kj}\Vert D^j f\Vert ^2_{\Gamma }\right) \le 0, \quad c_{kj}>0,\;k\in {{\mathbb {N}}}, \end{aligned}$$
(48)

which yields

$$\begin{aligned} \Vert D^k f\Vert ^2_{\Gamma }\le & {} \Vert D^k f\Vert ^2_{\Gamma } + \sum _{j=0}^{k-1} c_{kj}\Vert D^j f\Vert ^2_{\Gamma } \nonumber \\\le & {} \Vert D^k f_0\Vert ^2_{\Gamma } + \sum _{j=0}^{k-1} c_{kj}\Vert D^j f_0\Vert ^2_{\Gamma } \nonumber \\\le & {} C_0^2\left( 1 + \sum _{j=0}^{k-1} c_{kj}\right) := C^2, \end{aligned}$$
(49)

where C is clearly independent of \(\varepsilon \). This completes the proof of the theorem. \(\square \)

Theorem 4.1 shows that the derivatives of the solution with respect to z can be bounded by those of the initial data. In particular, the bound on \(\Vert D^k f\Vert _{\Gamma }\) is independent of \(\varepsilon \)! This is crucial for our later proof that our scheme is s-AP. However, this estimate alone is not sufficient to guarantee that the whole gPC-SG method has a spectral convergence uniform in \(\varepsilon \): since there is an \(O(1/\varepsilon ^2)\) coefficient in front of the projection error, we need an \(O(\varepsilon ^2)\) estimate on \([ f] - f\) to cancel it. To this aim, we first provide the following lemma.

Lemma 4.2

Assume for some integer \(m \ge 0\),

$$\begin{aligned} \Vert D^k(\partial _xf_0)\Vert _{\Gamma } \le C_x, \quad k=0,\dots ,m. \end{aligned}$$
(50)

Then the following holds, with \(C_1\) a constant independent of \(\varepsilon \):

$$\begin{aligned} \int _Q \varepsilon \langle vD^k(\partial _xf), D^k([ f]-f) \rangle _\omega \,\mathrm {d}x\,\mathrm {d}v\le \frac{\sigma _{\mathrm {min}}}{4}\Vert D^k([ f]-f)\Vert ^2_{\Gamma } + \frac{C_1\varepsilon ^2}{\sigma _{\mathrm {min}}}. \end{aligned}$$
(51)

Proof

First note that \(\partial _xf\) satisfies the same equation as f itself,

$$\begin{aligned} \varepsilon ^2\partial _t(\partial _xf) + \varepsilon v\partial _x(\partial _xf) = \sigma (z)([ \partial _xf]-\partial _xf). \end{aligned}$$
(52)

Thus according to Theorem 4.1 and our assumption (50),

$$\begin{aligned} \Vert D^k(\partial _xf)\Vert _{\Gamma } \le C, \quad t>0, \end{aligned}$$
(53)

with C independent of \(\varepsilon \). Then by Young’s inequality,

$$\begin{aligned}&\int _Q\varepsilon \langle vD^k(\partial _xf), D^k([ f]-f) \rangle _\omega \,\mathrm {d}x\,\mathrm {d}v \nonumber \\&\quad \le \frac{\sigma _{\mathrm {min}}}{4}\Vert D^k([ f]-f)\Vert ^2_{\Gamma } + \frac{\varepsilon ^2}{\sigma _{\mathrm {min}}}\Vert vD^k(\partial _xf)\Vert _{\Gamma }^2 \nonumber \\&\quad \le \frac{\sigma _{\mathrm {min}}}{4}\Vert D^k([ f]-f)\Vert ^2_{\Gamma } + \frac{\varepsilon ^2}{\sigma _{\mathrm {min}}}\Vert D^k(\partial _xf)\Vert _{\Gamma }^2 \nonumber \\&\quad \le \frac{\sigma _{\mathrm {min}}}{4}\Vert D^k([ f]-f)\Vert ^2_{\Gamma } + \frac{C_1\varepsilon ^2}{\sigma _{\mathrm {min}}}, \end{aligned}$$
(54)

where \(C_1 = C^2\) is a constant. This completes the proof. \(\square \)

Now we are ready to prove the following theorem.

Theorem 4.2

(\(\varepsilon ^2\)-estimate on \([ \varvec{f}]-\varvec{f}\)) Under all the assumptions of Theorem 4.1 and Lemma 4.2, for a given time \(T>0\), the following regularity result for \([ f] - f\) holds:

$$\begin{aligned} \begin{aligned}&\Vert D^k([ f]-f)\Vert ^2_{\Gamma } \\&\quad \le \mathrm {e}^{-\sigma _{\mathrm {min}}t/2\varepsilon ^2}\Vert D^k([ f_0]-f_0)\Vert ^2_{\Gamma } + C'\varepsilon ^2 \\&\quad \le C\varepsilon ^2, \end{aligned} \end{aligned}$$
(55)

for any \(t\in (0,T]\) and \(0\le k\le m\), where \(C'\) and C are constants independent of \(\varepsilon \).

Proof

First notice that \([ f]\) satisfies

$$\begin{aligned} \varepsilon ^2\partial _t[ f] + \varepsilon \partial _x[ v f] = 0, \end{aligned}$$
(56)

so \([ f]-f\) satisfies the following equation:

$$\begin{aligned} \varepsilon ^2\partial _t([ f]-f) + \varepsilon \partial _x([ v f] - vf) = -\sigma (z)([ f]-f). \end{aligned}$$
(57)

As in the proof of Theorem 4.1, differentiating this equation k times with respect to z, multiplying by \(D^k([ f]-f)\) and integrating over Q, one obtains

$$\begin{aligned} \varepsilon ^2\partial _t\big \Vert D^k([ f]-f)\big \Vert ^2_{\Gamma }= & {} {} -2\int _Q\varepsilon \langle D^k(\partial _x[ v f] - v\partial _xf), D^k([ f]-f)\rangle _\omega \,\mathrm {d}x\,\mathrm {d}v \nonumber \\ {}&\,-2\int _Q \langle D^k\big (\sigma (z)([ f]-f)\big ),D^k ([ f]-f) \rangle _{\omega }\,\mathrm {d}x\,\mathrm {d}v \nonumber \\:= & {} {} I + II. \end{aligned}$$
(58)

Notice that

$$\begin{aligned} \int _Q\varepsilon \langle D^k(\partial _x[ v f]), D^k([ f]-f)\rangle _\omega \,\mathrm {d}x\,\mathrm {d}v=0, \end{aligned}$$
(59)

and using Lemma 4.2, we have

$$\begin{aligned} I \le \frac{\sigma _{\mathrm {min}}}{2}\Vert D^k([ f]-f)\Vert ^2_{\Gamma } + \frac{2C_1\varepsilon ^2}{\sigma _{\mathrm {min}}}. \end{aligned}$$
(60)

For the second part by Lemma 4.1,

$$\begin{aligned} II\le -\sigma _{\mathrm {min}}\big \Vert D^{k}([f] - f)\big \Vert ^2_{\Gamma } + \frac{C_\sigma ^2 4^{k}}{\sigma _{\mathrm {min}}}\big \Vert [f] - f\big \Vert ^2_{\Gamma ^{k-1}}. \end{aligned}$$
(61)

So we get the following estimate,

$$\begin{aligned} \varepsilon ^2\partial _t\big \Vert D^k([ f]-f)\big \Vert ^2_{\Gamma }\le & {} {} -\frac{\sigma _{\mathrm {min}}}{2}\big \Vert D^{k}([f] - f)\big \Vert ^2_{\Gamma } + \frac{2C_1\varepsilon ^2}{\sigma _{\mathrm {min}}} \nonumber \\ {}&+ \frac{C_\sigma ^2 4^{k}}{\sigma _{\mathrm {min}}}\big \Vert [f] - f\big \Vert ^2_{\Gamma ^{k-1}}. \end{aligned}$$
(62)

To prove the theorem we use mathematical induction. When \(k=0\), (62) becomes

$$\begin{aligned} \varepsilon ^2\partial _t\big \Vert [ f]-f\big \Vert ^2_{\Gamma } \le -\frac{\sigma _{\mathrm {min}}}{2}\big \Vert [f] - f\big \Vert ^2_{\Gamma } + \frac{2C_1\varepsilon ^2}{\sigma _{\mathrm {min}}}. \end{aligned}$$
(63)

By Grönwall’s inequality,

$$\begin{aligned} \big \Vert [ f]-f\big \Vert ^2_{\Gamma }\le & {} \mathrm {e}^{-\sigma _{\mathrm {min}}t/2\varepsilon ^2}\big \Vert [f_0] - f_0\big \Vert ^2_{\Gamma } + \frac{4C_1}{\sigma ^2_{\mathrm {min}}}\varepsilon ^2 \nonumber \\\le & {} C_0\varepsilon ^2, \quad \text{ for } t>0, \end{aligned}$$
(64)

which satisfies (55).

Assume that (55) holds for all \(k\le p\), where \(p\in {{\mathbb {N}}}\). This implies

$$\begin{aligned} \big \Vert [f] - f\big \Vert ^2_{\Gamma ^{p}} \le C_p\varepsilon ^2. \end{aligned}$$
(65)

So when \(k=p+1\), by (62),

$$\begin{aligned} \varepsilon ^2\partial _t\big \Vert D^{p+1}([ f]-f)\big \Vert ^2_{\Gamma }\le & {} -\frac{\sigma _{\mathrm {min}}}{2}\big \Vert D^{p+1}([f] - f)\big \Vert ^2_{\Gamma } + \frac{2C_1\varepsilon ^2}{\sigma _{\mathrm {min}}} \nonumber \\ {}&+ \frac{C_\sigma ^2 C'_{p+1}}{\sigma _{\mathrm {min}}}C_p\varepsilon ^2, \end{aligned}$$
(66)

which means

$$\begin{aligned} \partial _t\big \Vert D^{p+1}([ f]-f)\big \Vert ^2_{\Gamma } \le -\frac{\sigma _{\mathrm {min}}}{2\varepsilon ^2}\big \Vert D^{p+1}([f] - f)\big \Vert ^2_{\Gamma } + C''_{p+1}. \end{aligned}$$
(67)

Again, Grönwall’s inequality yields

$$\begin{aligned} \big \Vert D^{p+1}([ f]-f)\big \Vert ^2_{\Gamma }\le & {} \mathrm {e}^{-\sigma _{\mathrm {min}}t/2\varepsilon ^2}\big \Vert D^{p+1}([f_0] - f_0)\big \Vert ^2_{\Gamma } + C''_{p+1}\varepsilon ^2 \nonumber \\\le & {} C_{p+1}\varepsilon ^2, \quad \text{ for } t>0, \end{aligned}$$
(68)

where \(C_{p+1}\) is a constant independent of \(\varepsilon \). So by mathematical induction, we complete the proof of the theorem. \(\square \)

Remark 4.1

We remark that all of the above lemmas and theorems are proved for \(z\in {{\mathbb {R}}}\) and \(\sigma \) depending only on z. However, our conclusions and techniques are not limited to these cases. For \(z\in {{\mathbb {R}}}^d\), the proofs extend straightforwardly; for \(\sigma (x,z)\) depending also on x, we only need to modify the proof of Lemma 4.2, using the same technique as in the proof of Theorem 4.1.

4.3 A spectral convergence uniformly in \(\varvec{\varepsilon }\)

Let f be the solution to linear transport equation (1)–(2). We define the Mth-order projection operator

$$\begin{aligned} {\mathcal {P}}_M f = \sum _{i=0}^M \langle f,\phi _i \rangle _\omega \phi _i. \end{aligned}$$
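
For concreteness, when z is uniform on \((-1,1)\) (the setting of the examples in Sect. 7) the projection can be realized with normalized Legendre polynomials. The following sketch is ours, not part of any reference implementation; the function names are illustrative.

```python
import numpy as np

def gpc_project(f, M, nq=64):
    """gPC coefficients <f, phi_i>_omega of f(z), z uniform on (-1,1),
    against the first M+1 normalized Legendre polynomials."""
    # Gauss-Legendre nodes/weights; the density is omega(z) = 1/2 on (-1,1)
    zq, wq = np.polynomial.legendre.leggauss(nq)
    coeffs = []
    for i in range(M + 1):
        # normalized Legendre polynomial: phi_i = sqrt(2i+1) * P_i
        ci = np.zeros(i + 1); ci[i] = 1.0
        phi = np.sqrt(2 * i + 1) * np.polynomial.legendre.legval(zq, ci)
        coeffs.append(0.5 * np.sum(wq * f(zq) * phi))
    return np.array(coeffs)

def gpc_eval(coeffs, z):
    """Evaluate P_M f(z) = sum_i c_i phi_i(z)."""
    out = np.zeros_like(np.asarray(z, dtype=float))
    for i, c in enumerate(coeffs):
        ci = np.zeros(i + 1); ci[i] = 1.0
        out += c * np.sqrt(2 * i + 1) * np.polynomial.legendre.legval(z, ci)
    return out
```

Since the basis is polynomial, the projection reproduces any polynomial of degree at most M exactly, e.g. \(\sigma (z)=2+z\) from Example 1 is recovered already at \(M=1\).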

The error arising from the gPC-SG approximation can be split into two parts, \(r_M\) and \(e_M\):

$$\begin{aligned} f - f_M = f - {\mathcal {P}}_M f + {\mathcal {P}}_M f - f_M:= r_M + e_M, \end{aligned}$$
(69)

where \(r_M = f - {\mathcal {P}}_M f\) is the truncation error, and \(e_M={\mathcal {P}}_M f - f_M\) is the projection error.

For the truncation error \(r_M\), we have the following lemma.

Lemma 4.3

(Truncation error). Under all the assumptions in Theorems 4.1 and 4.2, we have for \(t\in (0,T]\) and any integer \(k=0,\ldots ,m\),

$$\begin{aligned} \Vert r_M\Vert _{\Gamma } \le \frac{C_1}{M^k}. \end{aligned}$$
(70)

Moreover,

$$\begin{aligned} \big \Vert [ r_M]-r_M\big \Vert _{\Gamma }\le \frac{C_2}{M^k}\varepsilon , \end{aligned}$$
(71)

where \(C_1\) and \(C_2\) are independent of \(\varepsilon \).

Proof

By the standard error estimate for orthogonal polynomial approximations and Theorem 4.1, for \(0\le t\le T\),

$$\begin{aligned} \Vert r_M\Vert _{\Gamma } \le C M^{-k}\Vert D^k f\Vert _{\Gamma } \le \frac{C_1}{M^k}, \end{aligned}$$
(72)

with C independent of M.

In the same way, according to Theorem 4.2,

$$\begin{aligned} \big \Vert [ r_M]-r_M\Vert _{\Gamma }= & {} \big \Vert ([ f]-f)-([ {\mathcal {P}}_M f]- {\mathcal {P}}_M f)\big \Vert _{\Gamma } \nonumber \\\le & {} C M^{-k}\Vert D^k([ f] - f)\Vert _{\Gamma } \nonumber \\\le & {} \frac{C_2}{M^k}\varepsilon , \end{aligned}$$
(73)

which completes the proof. \(\square \)

It remains to estimate \(e_M\). To this end, we first notice that \(f_M\) satisfies

$$\begin{aligned} \varepsilon ^2 \partial _tf_M + \varepsilon v \partial _xf_M = {\mathcal {P}}_M\big \{\sigma (z)([ f_M]-f_M)\big \}. \end{aligned}$$
(74)

On the other hand, applying the Mth-order projection directly to the original linear transport equation gives

$$\begin{aligned} \varepsilon ^2 \partial _t({\mathcal {P}}_M f) + \varepsilon v \partial _x({\mathcal {P}}_M f) = {\mathcal {P}}_M\big \{\sigma (z)([ f]-f)\big \}. \end{aligned}$$
(75)

Subtracting (74) from (75) gives

$$\begin{aligned} \varepsilon ^2 \partial _te_M + \varepsilon v \partial _xe_M= & {} {\mathcal {P}}_M\Big \{\sigma (z)\big \{{} [ f]-f - ([ f_M]-f_M)\big \}\Big \} \nonumber \\= & {} {\mathcal {P}}_M\Big \{\sigma (z)\big \{{} [ f]-f - ([ {\mathcal {P}}_M f]-{\mathcal {P}}_M f) \nonumber \\ {}&+ \,([ {\mathcal {P}}_M f]-{\mathcal {P}}_M f) - ([ f_M]-f_M)\big \}\Big \} \nonumber \\= & {} {\mathcal {P}}_M\Big \{\sigma (z)\big ({} [ r_M]-r_M \big )\Big \} + {\mathcal {P}}_M\Big \{\sigma (z)\big ([ e_M]-e_M\big )\Big \}. \end{aligned}$$
(76)

We can now give the following estimate of the projection error \(e_M\).

Lemma 4.4

Under all the assumptions in Theorems 4.1 and 4.2, we have for \(t\in (0,T]\) and any integer \(k=0,\ldots ,m\),

$$\begin{aligned} \Vert e_M\Vert _{\Gamma } \le \frac{C(T)}{M^k}, \end{aligned}$$
(77)

where C(T) is a constant independent of \(\varepsilon \).

Proof

We use essentially the same energy estimate as before: multiply (76) by \(e_M\), integrate over Q, and notice that

$$\begin{aligned} \int _Q \langle {\mathcal {P}}_M\big \{\sigma (z)\big ([ r_M]-r_M \big )\big \}, [ e_M]\rangle _\omega \,\mathrm {d}x\,\mathrm {d}v&= 0, \end{aligned}$$
(78)
$$\begin{aligned} \int _Q \langle {\mathcal {P}}_M\big \{\sigma (z)\big ([ e_M]-e_M \big )\big \}, [ e_M]\rangle _\omega \,\mathrm {d}x\,\mathrm {d}v&= 0, \end{aligned}$$
(79)

then one gets

$$\begin{aligned} \varepsilon ^2\partial _t\Vert e_M\Vert ^2_{\Gamma }= & {} {} -\int _Q \langle {\mathcal {P}}_M\big \{\sigma (z)\big ([ e_M]-e_M \big )\big \}, [ e_M] - e_M\rangle _\omega \,\mathrm {d}x\,\mathrm {d}v \nonumber \\ {}&\quad -\int _Q \langle {\mathcal {P}}_M\big \{\sigma (z)\big ([ r_M]-r_M \big )\big \}, [ e_M]-e_M\rangle _\omega \,\mathrm {d}x\,\mathrm {d}v. \end{aligned}$$
(80)

Notice that the projection operator \({\mathcal {P}}_M\) is self-adjoint,

$$\begin{aligned} \langle {\mathcal {P}}_M f, g\rangle _\omega = \langle f, {\mathcal {P}}_M g\rangle _\omega , \end{aligned}$$

and

$$\begin{aligned} {\mathcal {P}}_M e_M = e_M, \end{aligned}$$

thus

$$\begin{aligned} \varepsilon ^2\partial _t\Vert e_M\Vert ^2_{\Gamma }= & {} {}-\int _Q \langle \sigma (z)\big ([ e_M]-e_M \big ), [ e_M] - e_M\rangle _\omega \,\mathrm {d}x\,\mathrm {d}v \nonumber \\&\quad {} -\int _Q \langle \sigma (z)\big ([ r_M]-r_M \big ), [ e_M]-e_M\rangle _\omega \,\mathrm {d}x\,\mathrm {d}v \nonumber \\\le & {} {} -\sigma _{\mathrm {min}}\big \Vert [ e_M]-e_M\Vert ^2_{\Gamma } + \frac{\sigma _{\mathrm {min}}}{2}\big \Vert [ e_M]-e_M\Vert ^2_{\Gamma } \nonumber \\&\quad {}+ \frac{C_\sigma }{2\sigma _{\mathrm {min}}}\big \Vert [ r_M]-r_M\Vert ^2_{\Gamma } \nonumber \\\le & {} {} -\frac{\sigma _{\mathrm {min}}}{2}\big \Vert [ e_M]-e_M\Vert ^2_{\Gamma } + \frac{C_\sigma }{2\sigma _{\mathrm {min}}}\Big (\frac{C'}{M^k}\Big )^2\varepsilon ^2 \nonumber \\\le & {} {} \Big (\frac{C}{M^k}\Big )^2\varepsilon ^2, \end{aligned}$$
(81)

where for the last two inequalities we have used Young’s inequality and Lemma 4.3. Integrating over t then gives

$$\begin{aligned} \Vert e_M\Vert ^2_{\Gamma } \le \Vert e^0_M\Vert ^2_{\Gamma } + \Big (\frac{C(T)}{M^k}\Big )^2, \end{aligned}$$
(82)

Since \(e^0_M = {\mathcal {P}}_M f_0 - f^0_M = 0\), this completes the proof of the lemma. \(\square \)

Finally, we are now ready to state the main convergence theorem:

Theorem 4.3

(Uniform convergence in \(\varvec{\varepsilon }\)) Assume

$$\begin{aligned} \sigma (z)\ge \sigma _{\mathrm {min}}>0\,. \end{aligned}$$

If for some integer \(m\ge 0\),

$$\begin{aligned} \Vert \sigma (z)\Vert _{H^k}\le C_\sigma , \quad \Vert D^k f_{0}\Vert _{\Gamma }\le C_0, \quad \Vert D^k(\partial _xf_0)\Vert _{\Gamma } \le C_x, \quad k=0,\dots ,m, \end{aligned}$$
(83)

then the error of the whole gPC-SG method satisfies

$$\begin{aligned} \Vert f - f_M\Vert _{\Gamma }\le \frac{C(T)}{M^k}, \end{aligned}$$
(84)

where C(T) is a constant independent of \(\varepsilon \).

Proof

From Lemmas 4.3 and 4.4, one has

$$\begin{aligned} \Vert f - f_M\Vert _{\Gamma } \le \Vert r_M\Vert _{\Gamma } + \Vert e_M\Vert _{\Gamma }\le \frac{C(T)}{M^k}, \end{aligned}$$

which completes the proof. \(\square \)

Remark 4.2

Theorem 4.3 gives a spectral convergence rate that is uniform in \(\varepsilon \); thus, one can choose M independently of \(\varepsilon \), a very strong s-AP property. If the scattering is anisotropic, namely \(\sigma \) depends on \(v\), then one usually obtains a convergence rate that requires \(M\gg \varepsilon \) (see for example [12]). In such cases the proof of the s-AP property is much harder, and one usually needs to use the diffusion limit; see [5] for the deterministic case and [11] for the random case.

5 The Full discretization

As pointed out in [15], by using the gPC-SG formulation, one obtains a vector version of the original deterministic transport equation. This enables one to use the deterministic AP scheme. In this paper, we adopt the AP scheme developed in [22] for gPC-SG system (16).

One of the most important and challenging problems for the linear transport equation is the treatment of boundary conditions; here, we refer to the early work by Jin and Levermore [10] and the more recent work by Lemou and Méhats [21] for their study of the AP property and the numerical treatment of physical boundary conditions.

We take a uniform grid \(x_i = ih, i = 0, 1, \ldots, N\), where \(h=1/N\) is the grid size, and time steps \(t^n=n \Delta t\). \(\rho ^n_{i}\) is the approximation of \(\rho \) at the grid point \((x_i, t^n)\), while \(g^{n+1}_{i+\frac{1}{2}}\) is defined on the staggered grid \(x_{i+1/2} = (i+1/2)h, i = 0, \ldots, N-1\).

The fully discrete scheme for gPC system (11) is

$$\begin{aligned}&\frac{{\hat{\rho }}^{n+1}_{i}-{\hat{\rho }}^n_{i}}{\Delta t} + \left[ {v\frac{\hat{g}^{n+1}_{i+\frac{1}{2}}-\hat{g}^{n+1}_{i-\frac{1}{2}}}{\Delta x}}\right] = -\Sigma ^a_i {{\hat{\rho }}}^{n+1}_i + {\hat{S}}_i, \end{aligned}$$
(85a)
$$\begin{aligned}&\frac{\hat{g}^{n+1}_{i+\frac{1}{2}}-\hat{g}^n_{i+\frac{1}{2}}}{\Delta t} + \frac{1}{\varepsilon \Delta x} (I-[ .]) \left( v^+\left( \hat{g}^n_{i+\frac{1}{2}}-\hat{g}^n_{i-\frac{1}{2}}\right) +v^-\left( \hat{g}^n_{i+\frac{3}{2}}-\hat{g}^n_{i+\frac{1}{2}}\right) \right) \nonumber \\&\quad = - \frac{1}{\varepsilon ^2}\Sigma _i \hat{g}^{n+1}_{i+\frac{1}{2}}- \Sigma ^a \hat{g}^{n+1}_{i+\frac{1}{2}}- \frac{1}{\varepsilon ^2}v\frac{{\hat{\rho }}^n_{i+1}-{\hat{\rho }}^n_{i}}{\Delta x}. \end{aligned}$$
(85b)
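
To make the update concrete, here is a minimal sketch of one step of scheme (85) for a scalar (deterministic) unknown, with \(\sigma ^a = S = 0\), a constant \(\sigma \), and periodic boundaries purely for illustration; all names and simplifications are ours, not part of the scheme as stated.

```python
import numpy as np

def transport_step(rho, g, dt, dx, eps, sigma, vq, wq):
    """One step of scheme (85), scalar (deterministic) version, with
    sigma^a = S = 0 and periodic boundaries; an illustrative sketch only.
    rho: (N,); g: (N, Nv) at the staggered points; vq, wq: velocity quadrature."""
    def avg(h):                      # [h] = (1/2) * int_{-1}^1 h dv
        return 0.5 * (h @ wq)
    vp, vm = np.maximum(vq, 0.0), np.minimum(vq, 0.0)
    # explicit upwind transport of g, projected by (I - [.])
    T = (vp * (g - np.roll(g, 1, axis=0))
         + vm * (np.roll(g, -1, axis=0) - g)) / dx
    T = T - avg(T)[:, None]
    drho = (np.roll(rho, -1) - rho) / dx         # delta^0 rho at i+1/2
    # (85b): the stiff collision term is implicit, solvable pointwise
    g_new = (g / dt - T / eps - vq[None, :] * drho[:, None] / eps**2) \
            / (1.0 / dt + sigma / eps**2)
    # (85a): macro update with the new micro flux
    rho_new = rho - dt * avg(vq[None, :] * (g_new - np.roll(g_new, 1, axis=0))) / dx
    return rho_new, g_new
```

Because the stiff collision term is implicit while the transport of g is explicit, each step costs only a pointwise division, and with periodic boundaries the total mass \(\sum _i \rho _i\) is conserved exactly by the telescoping flux sum.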

As can be easily checked, the scheme has the formal diffusion limit, as \(\varepsilon \rightarrow 0\),

$$\begin{aligned} \frac{{\hat{\rho }}^{n+1}_{i}-{\hat{\rho }}^n_{i}}{\Delta t} - K \, \frac{{\hat{\rho }}^n_{i+1}-2{\hat{\rho }}^n_{i}+{\hat{\rho }}^n_{i-1}}{\Delta x^2}= -\Sigma ^a_i {{\hat{\rho }}}^{n+1}_i + {\hat{S}}_i, \end{aligned}$$
(86)

where \(K =\frac{1}{3} \Sigma ^{-1}\). This is the fully discrete scheme for (17). Thus the scheme is stochastically AP as defined in [15].

We also note that \([ \hat{g}^n_{i+\frac{1}{2}}]=0\) for every n, a fact that will be used later.

6 The uniform stability

One important property for an AP scheme is to have a stability condition independent of \(\varepsilon \), so one can take \(\Delta t \gg O(\varepsilon )\) when \(\varepsilon \) becomes small. In this section we prove such a result. The proof basically follows that of [23] for the deterministic problem.

For clarity, in this section we assume \(\sigma ^a =S=0\). The main theoretical result about stability is the following theorem:

Theorem 6.1

Denote

$$\begin{aligned} \sigma _{ij}= \langle \phi _i, \sigma \phi _j \rangle _\omega , \quad \Sigma = ( \sigma _{ij}), \quad \Sigma \ge \sigma _{\mathrm {min}} \text{ Id } \,. \end{aligned}$$

If \(\Delta t\) satisfies the following CFL condition

$$\begin{aligned} \Delta t\le \frac{\sigma _{\mathrm {min}}}{3}\Delta x^2 + \frac{2 \varepsilon }{3}\Delta x, \end{aligned}$$
(87)

then the sequences \({\hat{\rho }}^n\) and \(\hat{g}^n\) defined by scheme (85) satisfy the energy estimate

$$\begin{aligned} \Delta x\sum _{i=0}^{N-1} \left( \left( {\hat{\rho }}^n_{i}\right) ^2 + \frac{\varepsilon ^2}{2} \int _{-1}^{1} \left( \hat{g}^n_{i+\frac{1}{2}}\right) ^2\,\mathrm {d}v \right) \le \Delta x\sum _{i=0}^{N-1} \left( \left( {{\hat{\rho }}}^0_i\right) ^2 + \frac{\varepsilon ^2}{2} \int _{-1}^{1}\left( {\hat{g}}^0_{i+\frac{1}{2}}\right) ^2 \,\mathrm {d}v \right) \end{aligned}$$

for every n, and hence scheme (85) is stable.

Remark 6.1

Since the right-hand side of (87) has a lower bound as \(\varepsilon \rightarrow 0\) (the lower bound being the stability condition of discrete diffusion equation (86)), the scheme is asymptotically stable and \(\Delta t\) remains finite even as \(\varepsilon \rightarrow 0\).
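
As a quick numerical illustration of this remark (the helper name is ours), condition (87) gives a maximal time step that stays bounded away from zero as \(\varepsilon \rightarrow 0\):

```python
def dt_max(sigma_min, eps, dx):
    """Largest time step allowed by the CFL condition (87)."""
    return sigma_min * dx**2 / 3.0 + 2.0 * eps * dx / 3.0
```

For \(\Delta x = 0.1\) and \(\sigma _{\text {min}} = 1\), the bound decreases from \(\approx 0.070\) at \(\varepsilon = 1\) to the diffusive lower bound \(\sigma _{\text {min}}\Delta x^2/3 \approx 0.0033\) as \(\varepsilon \rightarrow 0\).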

6.1 Notations and useful lemma

We give some useful notations for norms and inner products that are used in our analysis. For every grid function \(\mu =(\mu _i)_{i=0}^{N-1}\) define:

$$\begin{aligned} \Vert \mu \Vert ^2=\Delta x\sum _{i=0}^{N-1} \mu _i^2\,. \end{aligned}$$
(88)

For every velocity-dependent grid function \(v\in [-1,1]\mapsto \phi (v)=(\phi _{i+\frac{1}{2}}(v))_{i=0}^{N-1}\), define:

$$\begin{aligned} \Vert |\phi \Vert |=\Delta x\sum _{i=0}^{N-1} \left[ {\phi _{i+\frac{1}{2}}^2}\right] . \end{aligned}$$
(89)

If \(\phi \) and \(\psi \) are two velocity-dependent grid functions, their inner product is defined as:

$$\begin{aligned} \left\langle \phi \, , \, \psi \right\rangle = \Delta x\sum _{i=0}^{N-1} \left[ {\phi _{i+\frac{1}{2}}\psi _{i+\frac{1}{2}}}\right] . \end{aligned}$$
(90)

Now we give some notations for the finite difference operators that are used in scheme (85). For every grid function \(\phi =(\phi _{i+\frac{1}{2}})_{i\in {{\mathbb {Z}}}}\), we define the following one-sided difference operators:

$$\begin{aligned} D^-\phi _{i+\frac{1}{2}} = \frac{\phi _{i+\frac{1}{2}}-\phi _{i-\frac{1}{2}}}{\Delta x} \quad \text { and }\quad D^+\phi _{i+\frac{1}{2}} = \frac{\phi _{i+\frac{3}{2}}-\phi _{i+\frac{1}{2}}}{\Delta x} \end{aligned}$$
(91)

We also define the following centered difference operators:

$$\begin{aligned} D^c\phi _{i+\frac{1}{2}} = \frac{\phi _{i+\frac{3}{2}}-\phi _{i-\frac{1}{2}}}{2\Delta x} \quad \text { and }\quad D^0\phi _i=\frac{\phi _{i+\frac{1}{2}}-\phi _{i-\frac{1}{2}}}{\Delta x} \left( =D^-\phi _{i+\frac{1}{2}}\right) . \end{aligned}$$
(92)

Finally, for every grid function \(\mu =(\mu _{i})_{i\in {{\mathbb {Z}}}}\), define the following centered operator:

$$\begin{aligned} \delta ^0\mu _{i+\frac{1}{2}}=\frac{\mu _{i+1}-\mu _{i}}{\Delta x}. \end{aligned}$$
(93)
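
These difference operators, and identity (94) below, can be checked directly; the following sketch (ours, with periodic wrap-around purely for illustration) implements them with `np.roll`:

```python
import numpy as np

dx = 0.1

def Dminus(phi):   # D^- phi_{i+1/2}, periodic wrap for illustration
    return (phi - np.roll(phi, 1)) / dx

def Dplus(phi):    # D^+ phi_{i+1/2}
    return (np.roll(phi, -1) - phi) / dx

def Dcent(phi):    # D^c phi_{i+1/2}, centered difference
    return (np.roll(phi, -1) - np.roll(phi, 1)) / (2 * dx)

# check identity (94): (v^+ D^- + v^- D^+) phi = v D^c phi - (dx/2)|v| D^- D^+ phi
rng = np.random.default_rng(0)
phi = rng.standard_normal(32)
for v in (0.7, -0.7):
    vp, vm = max(v, 0.0), min(v, 0.0)
    lhs = vp * Dminus(phi) + vm * Dplus(phi)
    rhs = v * Dcent(phi) - 0.5 * dx * abs(v) * Dminus(Dplus(phi))
    assert np.allclose(lhs, rhs)
```

The identity follows from \(v^{\pm } = (v \pm |v|)/2\) together with \(D^- + D^+ = 2D^c\) and \(D^+ - D^- = \Delta x\, D^-D^+\).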

We first recall some basic facts. For all grid functions \(\phi =(\phi _{i+\frac{1}{2}})_{i=0}^{N-1}, \psi =(\psi _{i+\frac{1}{2}})_{i=0}^{N-1}\), and \(\mu =(\mu _{i})_{i=0}^{N-1}\), one has (see [23]):

$$\begin{aligned}&\left( v^+D^-+ v^-D^+\right) \phi _{i+\frac{1}{2}}=vD^c\phi _{i+\frac{1}{2}} -\frac{\Delta x}{2}|v|D^-D^+\phi _{i+\frac{1}{2}};\end{aligned}$$
(94)
$$\begin{aligned}&\Delta x\sum _{i\in {{\mathbb {Z}}}}{\left( D^+\phi _{i+\frac{1}{2}}\right) ^2} \le \frac{4}{\Delta x^2} \Delta x\sum _i\phi _{i+\frac{1}{2}}^2; \end{aligned}$$
(95)
$$\begin{aligned}&\left| \left\langle \left( v^+D^++ v^-D^-\right) \psi \, , \, \phi \right\rangle \right| \le \alpha \Vert |\phi \Vert |^2 + \frac{1}{4\alpha }\Vert ||v|D^+\psi \Vert |^2, \forall \alpha >0; \end{aligned}$$
(96)
$$\begin{aligned}&\Delta x\sum _{i\in {{\mathbb {Z}}}}\mu _iD^0\phi _i = - \Delta x\sum _{i\in {{\mathbb {Z}}}}\bigl (\delta ^0\mu _{i+\frac{1}{2}}\bigr )\phi _{i+\frac{1}{2}}; \end{aligned}$$
(97)
$$\begin{aligned}&\Delta x\sum _{i\in {{\mathbb {Z}}}}\psi _{i+\frac{1}{2}}D^-\phi _{i+\frac{1}{2}}= -\Delta x\sum _{i\in {{\mathbb {Z}}}}\bigl (D^+\psi _{i+\frac{1}{2}}\bigr )\phi _{i+\frac{1}{2}}; \end{aligned}$$
(98)
$$\begin{aligned}&\Delta x\sum _{i\in {{\mathbb {Z}}}}\phi _{i+\frac{1}{2}}D^c\phi _{i+\frac{1}{2}} = 0;\end{aligned}$$
(99)
$$\begin{aligned}&{\text{ If }} \quad g\in L^2([-1,1]), \quad {\text{ then }} \; [ vg]^2\le \frac{1}{2}[ |v|g^2]. \end{aligned}$$
(100)

6.2 Energy estimates

Now we provide the details of the energy estimate. The proof is similar to that for the deterministic problem in [23].

First, multiply (85a) and (85b) by \({{\hat{\rho }}}^{n+1}\) and \(\varepsilon ^2 {\hat{g}}^{n+1}\), respectively. With the assumption that \(\sigma _i^a=0, {\hat{S}}_i=0\), and using the fact that \(\Sigma \ge \sigma _{\text {min}} \text{ Id }\), one has

$$\begin{aligned} \begin{aligned}&\frac{1}{2\Delta t} \left( \Vert {\hat{\rho }}^{n+1}\Vert ^2-\Vert {\hat{\rho }}^n\Vert ^2+\Vert {\hat{\rho }}^{n+1}-{\hat{\rho }}^n\Vert ^2\right) + \Delta x\sum _{i=0}^{N-1} {\hat{\rho }}^{n+1}_{i}D^0\left[ {v\hat{g}^{n+1}_i}\right] \\&\qquad + \frac{\varepsilon ^2}{2\Delta t} \left( \Vert |\hat{g}^{n+1}\Vert |^2-\Vert |\hat{g}^n\Vert |^2 + \Vert |\hat{g}^{n+1}-\hat{g}^n\Vert |^2 \right) \\&\qquad + \varepsilon \left\langle \hat{g}^{n+1} \, , \, \left( v^+D^-+v^-D^+\right) \hat{g}^n\right\rangle \\&\quad \le -\sigma _\text {{min}} \Vert |\hat{g}^{n+1}\Vert |^2 + \Delta x\sum _{i=0}^{N-1} \left[ {vD^0\hat{g}^{n+1}_i}\right] {\hat{\rho }}^n_{i}. \end{aligned} \end{aligned}$$

Combining the second term on the left hand side and the last term on the right-hand side, one gets

$$\begin{aligned} \begin{aligned}&\frac{1}{2\Delta t} \left( \Vert {\hat{\rho }}^{n+1}\Vert ^2-\Vert {\hat{\rho }}^n\Vert ^2+\Vert {\hat{\rho }}^{n+1}-{\hat{\rho }}^n\Vert ^2\right) \\&\qquad + \frac{\varepsilon ^2}{2\Delta t} \left( \Vert |\hat{g}^{n+1}\Vert |^2-\Vert |\hat{g}^n\Vert |^2 + \Vert |\hat{g}^{n+1}-\hat{g}^n\Vert |^2 \right) \\&\qquad + \varepsilon \left\langle \hat{g}^{n+1} \, , \, \left( v^+D^-+v^-D^+\right) \hat{g}^n\right\rangle \\&\quad \le -\sigma _{\text {min}} \Vert |\hat{g}^{n+1}\Vert |^2 + \Delta x\sum _{i=0}^{N-1} \left[ {vD^0\hat{g}^{n+1}_i}\right] ({\hat{\rho }}^n_{i}- {\hat{\rho }}^{n+1}_{i}). \end{aligned} \end{aligned}$$

Using Young’s inequality,

$$\begin{aligned} \Delta x\sum _{i=0}^{N-1} \left[ {vD^0\hat{g}^{n+1}_i}\right] ({\hat{\rho }}^n_{i}-{\hat{\rho }}^{n+1}_{i}) \le \frac{1}{2\Delta t} \Vert {\hat{\rho }}^{n+1}-{\hat{\rho }}^n\Vert ^2 +\frac{\Delta t}{2}\Delta x\sum _{i=0}^{N-1} \left[ {vD^0\hat{g}^{n+1}_i}\right] ^2. \end{aligned}$$

This gives

$$\begin{aligned} \begin{aligned}&\frac{1}{2\Delta t} \left( \Vert {\hat{\rho }}^{n+1}\Vert ^2-\Vert {\hat{\rho }}^n\Vert ^2\right) + \frac{\varepsilon ^2}{2\Delta t} \left( \Vert |\hat{g}^{n+1}\Vert |^2-\Vert |\hat{g}^n\Vert |^2 + \Vert |\hat{g}^{n+1}-\hat{g}^n\Vert |^2 \right) \\&\qquad + \varepsilon \left\langle \hat{g}^{n+1} \, , \, \left( v^+D^-+v^-D^+\right) \hat{g}^n\right\rangle \\&\quad \le -\sigma _{\text {min}} \Vert |\hat{g}^{n+1}\Vert |^2 +\frac{\Delta t}{2} \Delta x\sum _{i=0}^{N-1} \left[ {vD^0\hat{g}^{n+1}_{i+\frac{1}{2}}}\right] ^2. \end{aligned} \end{aligned}$$

We take the following decomposition

$$\begin{aligned} \begin{aligned}&\left\langle \hat{g}^{n+1} \, , \, \left( v^+D^-+v^-D^+\right) \hat{g}^n\right\rangle = \left\langle \hat{g}^{n+1} \, , \, \left( v^+D^-+v^-D^+\right) \hat{g}^{n+1}\right\rangle \\&\quad + \left\langle \hat{g}^{n+1} \, , \, \left( v^+D^-+v^-D^+\right) (\hat{g}^n-\hat{g}^{n+1})\right\rangle =: A+B, \end{aligned} \end{aligned}$$

where

$$\begin{aligned} A= & {} \frac{\Delta x}{2} \Delta x\sum _{i=0}^{N-1} \left[ {|v|\left( D^+\hat{g}^{n+1}_{i+\frac{1}{2}}\right) ^2}\right] ,\\ B= & {} -\left\langle \left( v^+D^++v^-D^-\right) \hat{g}^{n+1} \, , \, \hat{g}^n-\hat{g}^{n+1}\right\rangle . \end{aligned}$$

Using Young’s inequality,

$$\begin{aligned} |B|\le \frac{\varepsilon }{2\Delta t} \Vert |\hat{g}^{n+1}-\hat{g}^n\Vert |^2+\frac{\Delta t}{2\varepsilon }\Vert ||v|D^+\hat{g}^{n+1}\Vert |^2. \end{aligned}$$

This leads to

$$\begin{aligned} \begin{aligned}&\frac{1}{2\Delta t} \left( \Vert {\hat{\rho }}^{n+1}\Vert ^2-\Vert {\hat{\rho }}^n\Vert ^2\right) + \frac{\varepsilon ^2}{2\Delta t} \left( \Vert |\hat{g}^{n+1}\Vert |^2-\Vert |\hat{g}^n\Vert |^2 \right) \\&\qquad + \varepsilon \frac{\Delta x}{2} \sum _{i=0}^{N-1} \left[ {|v|\left( D^+\hat{g}^{n+1}_{i+\frac{1}{2}}\right) ^2}\right] \Delta x- \frac{\Delta t}{2}\Vert ||v|D^+\hat{g}^{n+1}\Vert |^2 \\&\quad \le - \sigma _{\text {min}} \Vert |\hat{g}^{n+1}\Vert |^2 +\frac{\Delta t}{2}\Delta x\sum _{i=0}^{N-1} \left[ {vD^0\hat{g}^{n+1}_{i+\frac{1}{2}}}\right] ^2. \end{aligned} \end{aligned}$$

Since \(|v| \le 1\),

$$\begin{aligned} \frac{\Delta t}{2}\Vert ||v|D^+\hat{g}^{n+1}\Vert |^2\le & {} \frac{\Delta t}{2}\Delta x\sum _{i=0}^{N-1} \left[ {|v|\left( D^+\hat{g}^{n+1}_{i+\frac{1}{2}}\right) ^2}\right] ,\\ \frac{\Delta t}{2}\Delta x\sum _{i\in {{\mathbb {Z}}}}\left[ {vD^0\hat{g}^{n+1}_{i+\frac{1}{2}}}\right] ^2\le & {} \frac{\Delta t}{4}\Delta x\sum _{i=0}^{N-1} \left[ {|v|\left( D^+\hat{g}^{n+1}_{i+\frac{1}{2}}\right) ^2}\right] . \end{aligned}$$

These imply

$$\begin{aligned} \begin{aligned}&\frac{1}{2\Delta t} \left( \Vert {\hat{\rho }}^{n+1}\Vert ^2-\Vert {\hat{\rho }}^n\Vert ^2\right) + \frac{\varepsilon ^2}{2\Delta t} \left( \Vert |\hat{g}^{n+1}\Vert |^2-\Vert |\hat{g}^n\Vert |^2 \right) \\&\quad \le - \sigma _{\text {min}} \Vert |\hat{g}^{n+1}\Vert |^2 + \left( \frac{3\Delta t}{4}- \varepsilon \frac{\Delta x}{2}\right) \Delta x\sum _{i=0}^{N-1} \left[ {|v|\left( D^+\hat{g}^{n+1}_{i+\frac{1}{2}}\right) ^2}\right] . \end{aligned} \end{aligned}$$

Note

$$\begin{aligned} \begin{aligned}&\left( \frac{3\Delta t}{4}- \varepsilon \frac{\Delta x}{2}\right) \Delta x\sum _{i=0}^{N-1} {\left[ |v|\left( D^+\hat{g}^{n+1}_{i+\frac{1}{2}}\right) ^2\right] } \\&\quad \le \left( \frac{3\Delta t}{4}- \varepsilon \frac{\Delta x}{2}\right) _{+} \Delta x\sum _{i=0}^{N-1}\left[ {\left( D^+\hat{g}^{n+1}_{i+\frac{1}{2}}\right) ^2}\right] \\&\quad \le \left( \frac{3\Delta t}{4}- \varepsilon \frac{\Delta x}{2}\right) _{+}\frac{4}{\Delta x^2} \Vert |\hat{g}^{n+1}\Vert |^2, \end{aligned} \end{aligned}$$

where \((a)_+=\max (0, a)\) denotes the positive part of a. Applying this to the preceding estimate then gives

$$\begin{aligned}&\frac{1}{2\Delta t} \left( \Vert {\hat{\rho }}^{n+1}\Vert ^2-\Vert {\hat{\rho }}^n\Vert ^2\right) + \frac{\varepsilon ^2}{2\Delta t} \left( \Vert |\hat{g}^{n+1}\Vert |^2-\Vert |\hat{g}^n\Vert |^2 \right) \\&\quad \le \left( \left( \frac{3\Delta t}{4}- \varepsilon \frac{\Delta x}{2}\right) _{+} \frac{4}{\Delta x^2}-\sigma _{\text {min}}\right) \Vert |\hat{g}^{n+1}\Vert |^2. \end{aligned}$$

This means that we have the final energy estimate

$$\begin{aligned} \Vert {\hat{\rho }}^{n+1}\Vert ^2+ \varepsilon ^2\Vert |\hat{g}^{n+1}\Vert |^2 \le \Vert {\hat{\rho }}^n\Vert ^2+ \varepsilon ^2\Vert |\hat{g}^n\Vert |^2 \end{aligned}$$

if \(\Delta t\) is such that

$$\begin{aligned} \left( \frac{3\Delta t}{4}- \varepsilon \frac{\Delta x}{2}\right) _+ \frac{4}{\Delta x^2}\le \sigma _{\text {min}}. \end{aligned}$$

Since \(\sigma _{\text {min}} > 0\), an equivalent condition is \(\left( \dfrac{3\Delta t}{4}- \varepsilon \dfrac{\Delta x}{2}\right) \dfrac{4}{\Delta x^2}\le \sigma _{\text {min}}\), which gives the sufficient condition

$$\begin{aligned} \Delta t\le \frac{\Delta x^2\sigma _{\text {min}}}{3} + \frac{2}{3}\varepsilon \Delta x\,. \end{aligned}$$

This completes the proof of Theorem 6.1.

7 Numerical examples

In this section, we present several numerical examples to illustrate the effectiveness of our method.

We consider the linear transport equation with random coefficient \(\sigma (z)\):

$$\begin{aligned} \varepsilon \partial _tf + v \partial _xf = \frac{\sigma (z)}{\varepsilon }([ f]-f), \quad 0< x < 1\,, \end{aligned}$$
(101)

with initial condition:

$$\begin{aligned} f(0,x,v,z) = 0\,, \end{aligned}$$

and the boundary conditions are:

$$\begin{aligned} f(t, 0, v, z) =1, \quad v\ge 0; \qquad f(t, 1, v, z) = 0, \quad v\le 0. \qquad \end{aligned}$$

7.1 Example 1

First we consider a random coefficient with one-dimensional random parameter:

$$\begin{aligned} \sigma (z) = 2 + z, \quad z \text{ is } \text{ uniformly } \text{ distributed } \text{ in } (-1, 1). \end{aligned}$$

The limiting random diffusion equation for kinetic equation (101) is

$$\begin{aligned} \partial _t\rho = \frac{1}{3\sigma (z)} \partial _{xx}\rho \,, \end{aligned}$$
(102)

with initial condition and boundary conditions:

$$\begin{aligned} \rho (t,0,z) =1, \quad \rho (t,1,z) =0, \quad \rho (0,x,z) =0. \end{aligned}$$

The analytical solution for (102) with the given initial and boundary conditions is

$$\begin{aligned} \rho (t,x,z) = 1 - \text{ erf } \left( \frac{x}{\sqrt{\dfrac{4}{3\sigma (z)} t}} \right) . \end{aligned}$$
(103)

When \(\varepsilon \) is small, we use this as the reference solution, as it is accurate with an error of \(O(\varepsilon ^2)\). Hereafter we set \(\varepsilon =10^{-8}\). For large \(\varepsilon \), or when we cannot obtain an analytic solution, we will use the collocation method (see [27]), with the same time and spatial discretization of micro–macro system (11), as a comparison in the following examples. For more details about the collocation method, especially its s-AP property, we refer to the discussion in the work of Jin and Liu [12]. In addition, the standard 30-point Gauss–Legendre quadrature set is used in the velocity space to compute \(\rho \) in the following examples.

To examine the accuracy, we use two error norms: the differences in the mean solutions and in the corresponding standard deviation, with \(\ell ^2\) norm in x:

$$\begin{aligned} e_{mean}(t)= & {} \big \Vert {\mathbb {E}}[u^h]-{\mathbb {E}}[u]\big \Vert _{\ell ^2}\,,\\ e_{std}(t)= & {} \big \Vert \sigma [u^h]-\sigma [u]\big \Vert _{\ell ^2}\,, \end{aligned}$$

where \(u^h,u\) are the numerical solutions and the reference solutions, respectively.
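
With an orthonormal gPC basis, the mean and standard deviation follow directly from the coefficients (\({\mathbb {E}}[u] = {\hat{u}}_0\), \(\mathrm{Var}[u] = \sum _{i\ge 1}{\hat{u}}_i^2\)), so the two error norms can be computed as in this sketch (function names are ours):

```python
import numpy as np

def gpc_mean_std(coeffs):
    """Mean and std of a gPC expansion with an orthonormal basis;
    coeffs may have shape (M+1,) or (Nx, M+1)."""
    c = np.asarray(coeffs)
    return c[..., 0], np.sqrt(np.sum(c[..., 1:]**2, axis=-1))

def l2_errors(coeffs_h, coeffs_ref, dx):
    """e_mean(t) and e_std(t) in the discrete l2 norm in x."""
    m_h, s_h = gpc_mean_std(coeffs_h)
    m_r, s_r = gpc_mean_std(coeffs_ref)
    return (np.sqrt(dx * np.sum((m_h - m_r)**2)),
            np.sqrt(dx * np.sum((s_h - s_r)**2)))
```

For instance, \(u = 2 + z\) with z uniform on \((-1,1)\) has mean 2 and standard deviation \(1/\sqrt{3}\), which is recovered from its two coefficients in the normalized Legendre basis.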

Fig. 1
figure 1

Example 1. Errors of the mean (solid line) and standard deviation (dash line) of \(\rho \) with respect to the gPC order at \(\varepsilon = 10^{-8}\): \(\Delta x = 0.04\) (squares), \(\Delta x = 0.02\) (circles), \(\Delta x = 0.01\) (stars)

In Fig. 1, we plot the errors in the mean and standard deviation of the gPC numerical solutions at \(t = 0.01\) for different gPC orders. Three sets of results are included: solutions with \(\Delta x = 0.04\) (squares), \(\Delta x = 0.02\) (circles), and \(\Delta x = 0.01\) (stars). We always use \(\Delta t = 0.0002/3\). One observes that the errors become smaller with finer mesh. One can also see that the errors decay rapidly in the gPC order M and then saturate where the spatial discretization error dominates. It is then clear that the errors due to the gPC expansion can be neglected at order \(M = 4\), even for \(\varepsilon = 10^{-8}\). The solution profiles of the mean and standard deviation are shown on the left and right of Fig. 2, respectively.

We also plot the profiles of the mean and standard deviation of the flux vf in Fig. 3. Here we observe good agreement among the gPC-Galerkin method, stochastic collocation method with 20 Gauss–Legendre quadrature points and analytical solution (103).

Fig. 2
figure 2

Example 1. The mean (left) and standard deviation (right) of \(\rho \) at \(\varepsilon =10^{-8}\), obtained by the gPC Galerkin at order \(M=4\) (circles), the stochastic collocation method (crosses) and limiting analytical solution (103)

Fig. 3
figure 3

Example 1. The mean (left) and standard deviation (right) obtained by gPC-Galerkin (circle) and collocation method (cross) at time \(t=0.01\)

Fig. 4
figure 4

Example 1. Differences in the mean (solid line) and standard deviation (dash line) of \(\rho \) with respect to \(\varepsilon ^2\), between limiting analytical solution (103) and the 4th-order gPC solution with \(\Delta x = 0.04\) (squares), \(\Delta x = 0.02\) (circles) and \(\Delta x = 0.01\) (stars)

In Fig. 4, we examine the difference between the solution at \(t = 0.01\) obtained by the 4th-order gPC method with \(\Delta x = 0.01, \Delta t = \Delta x^2/12\) and limiting analytical solution (103). As expected, we observe that the differences decrease quadratically as \(\varepsilon \) decreases, before the numerical errors become dominant.

7.2 Example 2: mixing regime

In this test, we still set \(\sigma = 2 + z\). We consider \(\varepsilon >0\) depending on the space variable in a wide range of mixing scales:

$$\begin{aligned} \varepsilon (x)=10^{-3}+\frac{1}{2}[\tanh (6.5-11x)+\tanh (11x-4.5)] \end{aligned}$$
(104)

which varies smoothly from \(10^{-3}\) to O(1) as shown in Fig. 5. This tests the ability of the scheme for problems with mixing regimes or its uniform convergence in \(\varepsilon \).
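
The profile (104) is easy to tabulate; a small sketch (the function name is ours):

```python
import numpy as np

def eps_mix(x):
    """Space-dependent Knudsen number (104): close to 1e-3 near the
    boundaries of (0,1) and O(1) in the middle of the domain."""
    return 1e-3 + 0.5 * (np.tanh(6.5 - 11.0 * x) + np.tanh(11.0 * x - 4.5))
```

Evaluating at the boundaries and midpoint confirms the two regimes: the two tanh terms nearly cancel near \(x=0\) and \(x=1\), leaving \(\varepsilon \approx 10^{-3}\), while both are close to 1 near \(x=0.5\).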

Fig. 5
figure 5

\(\varepsilon (x)\)

In order to keep the conservation of the mass, the proper linear transport equation should be in the following form,

$$\begin{aligned} \partial _tf + v \partial _x\Big (\frac{1}{\varepsilon (x)} f\Big ) = \frac{\sigma }{\varepsilon ^2(x)}{{{\mathcal {L}}}} f - \sigma ^a f + S, \quad \sigma (x, z) \ge \sigma _{\text {min}} > 0, \end{aligned}$$
(105)

then micro–macro decomposition (11) changes to

$$\begin{aligned} \partial _t\rho + \partial _x[ vg]&= -\sigma ^a \rho + S , \end{aligned}$$
(106a)
$$\begin{aligned} \partial _tg + \frac{1}{\varepsilon (x)} (I-[ .]) (v\partial _xg)&= -\frac{\sigma (z)}{\varepsilon ^2(x)}g - \sigma ^a g - \frac{1}{\varepsilon (x)}v \partial _x\left( \frac{1}{\varepsilon (x)}\rho \right) . \end{aligned}$$
(106b)

Note that only the last term has changed. Limiting equation (8) also needs to be changed to

$$\begin{aligned} \partial _t\rho = \partial _{x} (\kappa (z) \partial _{x}\rho ) - \partial _x(\kappa (z)a(x)\rho )-\sigma ^a(z) \rho +S, \end{aligned}$$
(107)

where we assume that

$$\begin{aligned} a(x) =\lim \limits _{\varepsilon \rightarrow 0}\frac{\varepsilon '(x)}{\varepsilon (x)}, \end{aligned}$$
(108)

exists. For the corresponding numerical scheme we only need to replace the last term

$$\begin{aligned} -\frac{1}{\varepsilon ^2}v\frac{{\hat{\rho }}^n_{i+1}-{\hat{\rho }}^n_{i}}{\Delta x} \end{aligned}$$
(109)

by

$$\begin{aligned} -\frac{1}{\varepsilon (x_{i+1/2})}v\Bigg (\frac{{\hat{\rho }}^n_{i+1}}{\varepsilon (x_{i+1})}-\frac{{\hat{\rho }}^n_{i}}{\varepsilon (x_i)}\Bigg )\frac{1}{\Delta x} \end{aligned}$$
(110)

in (85b).

The initial data are

$$\begin{aligned} f_{\text {in}}(x,v,z)=\frac{\rho _0}{2}\left[ \exp \left( -\left( \frac{v-0.75}{T_0}\right) ^2\right) +\exp \left( -\left( \frac{v+0.75}{T_0}\right) ^2\right) \right] \end{aligned}$$
(111)

with

$$\begin{aligned} \rho _0(x)=\frac{2+\sin (2\pi x)}{2},\quad T_0(x)=\frac{5+2\cos (2\pi x)}{20}. \end{aligned}$$
(112)
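
The initial data (111)–(112) are a double Gaussian in v; a direct transcription (ours):

```python
import numpy as np

def f_init(x, v):
    """Initial data (111)-(112): a double Gaussian in v, with
    space-dependent density rho_0(x) and width T_0(x)."""
    rho0 = (2.0 + np.sin(2.0 * np.pi * x)) / 2.0
    T0 = (5.0 + 2.0 * np.cos(2.0 * np.pi * x)) / 20.0
    return 0.5 * rho0 * (np.exp(-((v - 0.75) / T0)**2)
                         + np.exp(-((v + 0.75) / T0)**2))
```

By construction the data are even in v, with two beams centered at \(v = \pm 0.75\).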

The reference solution is obtained using the collocation method with 30 points. The parameters are set as follows: the spatial mesh size is \(\Delta x = 0.01\), and the corresponding time step is \(\Delta t = \Delta x^2/3\). We use the 5th-order gPC-Galerkin method to evolve the equation to the times \(t=0.005, 0.01, 0.05, 0.1\). For the v integral, we use a 30-point Gauss–Legendre quadrature.

Figure 6 shows the \(\ell ^2\) error of the mean and standard deviation with respect to the gPC order. We also see the fast (spectral) convergence of the method.

Fig. 6
figure 6

Example 2 with initial data (111)–(112). The \(\ell ^2\) error of mean and standard deviation (dash line) with respect to gPC order

7.3 Example 3: random initial data

We then add randomness to the initial data (with \(\sigma =2+z\) still random):

$$\begin{aligned} f(0,x,v,z) = f(0,x,v)+0.2z \end{aligned}$$
(113)

where \(f(0,x,v)\) is the same as in (111). This time we set \(\Delta x=0.01\), \(\Delta t = \Delta x^2/12\), and final time \(T=0.01\). First we test the fluid limit regime \(\varepsilon =10^{-8}\), as shown in Fig. 7.

Fig. 7
figure 7

Example 3. The mean (left) and standard deviation (right) obtained by gPC-Galerkin (circle) and collocation method (cross) at time \(t=0.1, \varepsilon =10^{-8}\)

Then we test \(\varepsilon =1\), as shown in Fig. 8.

Fig. 8
figure 8

Example 3. The mean (left) and standard deviation (right) obtained by gPC-Galerkin (circle) and collocation method (cross) at time \(t=0.1, \varepsilon =1\)

One can see a good agreement between the gPC-SG solutions and the solutions by the collocation method.

7.4 Example 4: random boundary data

In the next example, we add randomness to the boundary data:

$$\begin{aligned} f_L(t,v,z) = 2 + z,\quad f_R(t,v,z) = 1 + z. \end{aligned}$$
(114)

We also test \(\varepsilon =10^{-8}\) and \(\varepsilon =10\), as shown in Figs. 9 and 10. Again, good agreement is observed between the gPC-SG solutions and the solutions obtained by the collocation method.
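For the collocation reference solutions, with \(z\) uniformly distributed on \((-1,1)\) the Gauss–Legendre weights (divided by 2) serve as probability weights. A sketch of the mean/standard-deviation computation, where the `solve` callback is a placeholder for the deterministic solver at a fixed value of \(z\):

```python
import numpy as np

def collocation_mean_std(solve, n=30):
    """Stochastic collocation for z ~ Uniform(-1,1): run the
    deterministic solver at Gauss-Legendre nodes and average
    with the probability weights w/2."""
    z, w = np.polynomial.legendre.leggauss(n)
    sols = np.array([solve(zk) for zk in z])     # (n, ...) samples
    pw = (w / 2.0)[:, None] if sols.ndim > 1 else w / 2.0
    mean = np.sum(pw * sols, axis=0)
    var = np.sum(pw * (sols - mean) ** 2, axis=0)
    return mean, np.sqrt(np.maximum(var, 0.0))
```

Since an \(n\)-point Gauss–Legendre rule integrates polynomials up to degree \(2n-1\) exactly, the statistics of solutions that depend smoothly on \(z\) are computed to high accuracy.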

Fig. 9

Example 4. The mean (left) and standard deviation (right) obtained by gPC-Galerkin (circle) and collocation method (cross) at time \(t=0.1, \varepsilon =10^{-8}\)

Fig. 10

Example 4. The mean (left) and standard deviation (right) obtained by gPC-Galerkin (circle) and collocation method (cross) at time \(t=0.1, \varepsilon =10\)

7.5 Example 5: 2D random space

Finally, we model the random input as a random field of the following form:

$$\begin{aligned} \sigma (x,z_1,z_2) = 1 - \frac{\sigma z_1}{\pi ^2}\cos (2\pi x) - \frac{\sigma z_2}{4\pi ^2}\cos (4\pi x) \end{aligned}$$
(115)

where we set the amplitude \(\sigma =4\), and \(z_1, z_2\) are both uniformly distributed on \((-1,1)\). The mean and standard deviation of the solution \(\rho \) at \(t=0.01\), obtained by the 5th-order gPC-Galerkin method with \(\Delta x = 0.025\), \(\Delta t = 0.0002/3\), are shown in Fig. 11. We then use the high-order stochastic collocation method on a 40\(\times \)40 Gauss–Legendre quadrature grid to compute the reference mean and standard deviation of the solutions. In Fig. 12, we show the errors of the mean (solid lines) and standard deviation (dashed lines) of \(\rho \) with respect to the order of the gPC expansion. The fast spectral convergence of the errors can be clearly seen.
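The random field (115) and the tensorized collocation grid for \((z_1,z_2)\) uniform on \((-1,1)^2\) can be set up as follows (a sketch; `sigma0` denotes the amplitude written as \(\sigma \) in (115)):

```python
import numpy as np

sigma0 = 4.0  # amplitude constant, denoted sigma in (115)

def sigma_field(x, z1, z2):
    """Random scattering coefficient (115); with sigma0 = 4 it
    stays bounded below by 1 - sigma0/pi^2 - sigma0/(4 pi^2) > 0,
    so the positivity requirement sigma >= sigma_min > 0 holds."""
    return (1.0
            - sigma0 * z1 / np.pi ** 2 * np.cos(2 * np.pi * x)
            - sigma0 * z2 / (4 * np.pi ** 2) * np.cos(4 * np.pi * x))

# tensorized 40x40 Gauss-Legendre grid in (z1, z2)
z, w = np.polynomial.legendre.leggauss(40)
Z1, Z2 = np.meshgrid(z, z, indexing="ij")
W = np.outer(w, w) / 4.0   # probability weights on (-1,1)^2, sum to 1
```

The reference statistics are then weighted sums over this \(40\times 40\) grid, exactly as in the one-dimensional collocation computation.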

Fig. 11

The mean (left) and standard deviation (right) of \(\rho \) at \(\varepsilon = 10^{-8}\), obtained by 5th-order gPC Galerkin (circles) and the stochastic collocation method (crosses). The random input has dimension \(d=2\)

Fig. 12

Errors of the mean (solid line) and standard deviation (dashed line) of \(\rho \) with respect to the gPC order, with the \(d=2\)-dimensional random input

8 Conclusions

In this paper we establish spectral accuracy, uniform in the Knudsen number, of the stochastic Galerkin method for the linear transport equation with random scattering coefficients, which in turn justifies its stochastic asymptotic-preserving property. For the fully discrete scheme based on the micro–macro decomposition, we also prove a uniform stability result. These are the first uniform accuracy and stability results for the underlying problem.

It is expected that our uniform stability proof is useful for more general kinetic or transport equations, which is the subject of our future study.

Acknowledgements

Research was supported by NSF Grants DMS-1522184, DMS-1514826 and the NSF RNMS: KI-Net (DMS-1107291 and DMS-1107444). Shi Jin was also supported by NSFC Grant No. 91330203 and by the Office of the Vice Chancellor for Research and Graduate Education at the University of Wisconsin-Madison with funding from the Wisconsin Alumni Research Foundation.