
1 Introduction

In this paper, we consider the Generalized Multiscale Finite Element Method (GMsFEM) for solving nonlinear parabolic equations. The main objectives of the paper are the following: (1) to demonstrate the main concepts of the GMsFEM and briefly review the related techniques; (2) to compare various online enrichment techniques; (3) to discuss the use of the Discrete Empirical Interpolation Method (DEIM) and present its performance in reducing complexity. The GMsFEM is a flexible general framework that generalizes the Multiscale Finite Element Method (MsFEM) by systematically enriching the coarse spaces. The main idea of this enrichment is to add the extra basis functions that are needed to reduce the error substantially. Once the offline space is derived, it stays fixed and unchanged in the online stage. In [3, 4], it is shown that a good approximation from the reduced model can be expected only if the offline information is a good representation of the problem. For time-dependent problems, online enrichment is necessary. We compare two kinds of online enrichment methods: uniform and adaptive enrichment, where the latter focuses on where to add online basis functions; we discuss this in more detail in the numerical results. When a general nonlinearity is present, the cost of evaluating the projected nonlinear function still depends on the dimension of the original system, resulting in simulation times that can hardly improve over those of the original system. One approach to reducing the computational cost is the POD-Galerkin method [5,6,7,8], which has been applied in many settings, for example, in [9,10,11,12,13]. DEIM approximates each nonlinear function so that a certain coefficient matrix can be precomputed and, as a result, the complexity of evaluating the nonlinear term becomes proportional to the small number of selected spatial indices. In this paper, we will compare various approximations of the DEIM projection.
We will illustrate these concepts by applying our proposed method to the Allen-Cahn equation. The remainder of the paper is organized as follows. In Sect. 2, we present the problem setting and the main ingredients of the GMsFEM. In Sect. 3, we consider the methods to solve the Allen-Cahn equation.

2 Multiscale Model Reduction Using the GMsFEM

In this section, we will give the construction of our GMsFEM for nonlinear parabolic equations. First, we present some basic notations and the coarse grid formulation in Sect. 2.1. Then, we present the construction of the multiscale snapshot functions and basis functions in Sect. 2.2. The online enrichment process is introduced in Sect. 2.3.

2.1 Preliminaries

Consider the following parabolic equation in the domain \(\varOmega \subset \mathbb {R}^d\)

$$\begin{aligned} \begin{aligned} \dfrac{\partial u}{\partial t} - \text {div}( \kappa \nabla u ) =\,&f&\quad \text {in } \varOmega \times [0,T], \\ u(x,0) =\,&g(x)&\quad \text {in } \varOmega ,\\ u(x,t) =\,&0&\quad \text {on } \partial \varOmega \times [0,T]. \end{aligned} \end{aligned}$$
(1)

Here, we denote the exact solution of (1) by u, \(\kappa (x)\) is a high-contrast and heterogeneous permeability field, \(f = f(x,u)\) is the nonlinear source function depending on the u variable, g(x) is a given function and \(T>0\) is the final time. We denote the solution and the source term at \(t=t_n\) by \(u(\cdot , t_n)\) and \(f(u(\cdot ,t_n))\) respectively. The variational formulation for the problem (1) is: find \(u(\cdot ,t) \in H^1_0(\varOmega )\) such that

$$\begin{aligned} \begin{aligned} \left\langle \dfrac{\partial u}{\partial t}, v\right\rangle +\mathcal {A}(u,v)&=\left\langle f, v\right\rangle \quad \text {in } \varOmega \times [0,T], \quad \forall v \in H_0^1(\varOmega ),\\ u(x,0)&= g(x) \quad \text {in } \varOmega ,\\ u(x,t)&= 0 \quad \text {on } \partial \varOmega \times [0,T]. \end{aligned} \end{aligned}$$
(2)

where \(\mathcal {A}(u,v)=\int _{\varOmega } \kappa \nabla u\cdot \nabla v \;dx\).

In order to discretize (2) in time, we need to apply some time differencing methods. For simplicity, we first apply the implicit Euler scheme with time step \(\varDelta t>0\) and in Sect. 3, we will consider the exponential time differencing method (ETD). We obtain the following discretization for each time \(t_n=n\varDelta t,n=1,2,\cdots , N\) (\(T=N\varDelta t\)),

$$\begin{aligned} \frac{u(\cdot ,t_n)-u(\cdot ,t_{n-1})}{\varDelta t}=\text {div}(\kappa \nabla u(\cdot ,t_n))+f(u(\cdot ,t_n)). \end{aligned}$$

Let \(T^{h}\) be a partition of the domain \(\varOmega \) into fine finite elements, where \(h>0\) is the fine-grid mesh size. The coarse partition \(T^{H}\) of the domain \(\varOmega \) is formed such that each element in \(T^{H}\) is a connected union of fine-grid blocks. More precisely, \(\forall K_{j} \in T^{H}\), \( K_{j}=\bigcup _{F\in I_{j} }F\) for some \(I_{j}\subset T^{h}\). The quantity \(H>0\) is the coarse mesh size. We consider rectangular coarse elements, but the methodology can be used with general coarse elements. An illustration of the mesh notations is shown in Fig. 1. We denote the interior nodes of \(T^{H}\) by \(x_i,i=1,\cdots ,N_{\text {in}}\), where \(N_\text {in}\) is the number of interior nodes. The coarse elements of \(T^{H}\) are denoted by \(K_j,j=1,2,\cdots ,N_e\), where \(N_e\) is the number of coarse elements. We define the coarse neighborhood of the node \(x_i\) by \(D_i:=\cup \{K_j\in T^{H}:x_i\in \overline{K_j}\}\).
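As a concrete illustration of the neighborhood notation (a sketch only, assuming a uniform \(N_c\times N_c\) rectangular coarse mesh; the function name and indexing are ours, not part of the method):

```python
def coarse_neighborhood(i, j, Nc):
    """Indices of the coarse elements K forming the neighborhood D_i of the
    coarse node (i, j) on a uniform Nc x Nc rectangular coarse mesh.
    A coarse element is identified by its lower-left node index (ei, ej);
    the neighborhood collects the (up to four) elements sharing the node."""
    elems = []
    for ei in (i - 1, i):
        for ej in (j - 1, j):
            if 0 <= ei < Nc and 0 <= ej < Nc:
                elems.append((ei, ej))
    return elems

# an interior node of a 4 x 4 coarse mesh touches four coarse elements
print(coarse_neighborhood(1, 1, 4))  # [(0, 0), (0, 1), (1, 0), (1, 1)]
```

For an interior node the neighborhood always consists of four rectangular coarse elements; the bounds check merely keeps the same helper valid near the boundary.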

2.2 The GMsFEM and the Multiscale Basis Functions

In this paper, we will apply the GMsFEM to solve nonlinear parabolic equations. The method is motivated by the finite element framework. First, a variational formulation is defined. Then we construct some multiscale basis functions. Once the fine grid is given, we can compute the fine-grid solution. Let \(\gamma _1,\cdots ,\gamma _n\) be the standard finite element basis, and define \(V_f=\text {span}\{\gamma _1,\cdots ,\gamma _n\}\) to be the fine space. We obtain the fine-grid solution, denoted by \(u_f^n\), at \(t=t_n\) by solving

$$\begin{aligned} \begin{aligned} \frac{1}{\varDelta t}\left\langle u_f^{n}, v\right\rangle +\mathcal {A}\left( u_f^{n}, v\right)&=\left\langle \frac{1}{\varDelta t} u_f^{n-1}+f(u_f^{n}), v\right\rangle , \quad \forall v \in V_f,\\ u_f^0&=g_h, \end{aligned} \end{aligned}$$
(3)

where \(g_h\) is the approximation of g in \(V_f\). The construction of the multiscale basis functions follows two general steps. First, we construct snapshot basis functions in order to build a set of possible modes of the solution. In the second step, we construct multiscale basis functions from a suitable spectral problem defined in the snapshot space. We take the first few eigenfunctions (those associated with the smallest eigenvalues) as basis functions. Using the multiscale basis functions, we obtain a reduced model.
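Scheme (3) is implicit in the nonlinear term \(f(u_f^n)\); the paper does not state how this implicitness is resolved, so the sketch below handles it with a simple Picard (fixed-point) iteration on dense NumPy matrices (the function name and the fixed iteration count are illustrative assumptions):

```python
import numpy as np

def implicit_euler_step(M, A, f, u_prev, dt, n_picard=5):
    """One step of scheme (3): (M/dt + A) u = M (u_prev/dt + f(u)),
    where M and A are the (dense, for simplicity) mass and stiffness
    matrices and f acts componentwise on the nodal values.  The implicit
    nonlinearity is resolved by a simple Picard iteration."""
    lhs = M / dt + A
    u = u_prev.copy()
    for _ in range(n_picard):
        rhs = M @ (u_prev / dt + f(u))
        u = np.linalg.solve(lhs, rhs)
    return u
```

For a stiff nonlinearity such as \(f=\frac{1}{\epsilon ^2}(u^3-u)\) with small \(\epsilon \), a Newton iteration or the ETD scheme of Sect. 3 would be the more robust choice; the Picard loop is shown only to make the structure of (3) concrete.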

Fig. 1.

Left: an illustration of fine and coarse grids. Right: an illustration of a coarse neighborhood, coarse element, and oversampled domain

More specifically, once the coarse and fine grids are given, one may construct the multiscale basis functions to approximate the solution of (2). To obtain the multiscale basis functions, we first define the snapshot space. For each coarse neighborhood \( D_{i}\), define \(J_h( D_{i})\) as the set of fine nodes of \(T^{h}\) lying on \(\partial D_{i}\) and denote its cardinality by \(L_i \in \mathbb {N}^{+}\). For each fine-grid node \(x_j \in J_h( D_{i})\), we define a fine-grid function \(\delta _{j}^{h}\) on \(J_h( D_{i})\) by \(\delta _{j}^{h}(x_k)=\delta _{j,k}\), where \(\delta _{j,k}=1\) if \(j=k\) and \(\delta _{j,k}=0\) if \(j\ne k\). For each \(j=1,\cdots , L_i\), we define the snapshot basis function \(\psi _{j}^{(i)}\) as the solution of the following system

$$\begin{aligned} \begin{aligned} -\text {div}\left( \kappa \nabla \psi _{j}^{(i)}\right)&=0 \quad \text{ in } D_{i} \\ \psi _{j}^{(i)}&=\delta _{j}^{h} \quad \text{ on } \partial D_{i}. \end{aligned} \end{aligned}$$
(4)

The local snapshot space \(V_{\text{ snap }}^{(i)}\) corresponding to the coarse neighborhood \( D_{i}\) is defined as \(V_{\text{snap}}^{(i)}:=\text{span}\{\psi _{j}^{(i)}:j=1,\cdots ,L_{i}\}\), and the snapshot space reads \(V_{\text{ snap }} :=\bigoplus _{i=1}^{N_{\text{ in }}} V_{\text{ snap }}^{(i)}\). In the second step, a dimension reduction is performed on \(V_{\text{ snap }}\). For each \(i=1,\cdots , N_{\text{ in }}\), we solve the following spectral problem:

$$\begin{aligned} \int _{D_{i}} \kappa \nabla \phi _{j}^{(i)} \cdot \nabla v=\lambda _{j}^{(i)} \int _{D_{i}} \hat{\kappa } \phi _{j}^{(i)} v \quad \forall v \in V_{\text{ snap }}^{(i)}, \quad j=1, \ldots , L_{i} \end{aligned}$$
(5)

where \(\hat{\kappa } :=\kappa \sum _{i=1}^{N_{\text{in}}} H^{2}\left| \nabla \chi _{i}\right| ^{2}\) and \(\{\chi _{i}\}_{i=1}^{N_{\text{in}}}\) is a partition of unity that solves the following system:

$$\begin{aligned} \begin{array} {rlrl}{-\nabla \cdot \left( \kappa \nabla \chi _{i}\right) } &{} {=0} &{} {} &{} { \text{ in } K \subset D_{i}} \\ {\chi _{i}} &{} {=p_{i}} &{} {} &{} { \text{ on } \text{ each } \partial K \text{ with } K \subset D_{i}} \\ {\chi _{i}} &{} {=0} &{} {} &{} { \text{ on } \partial D_{i}} \end{array} \end{aligned}$$

where \(p_i\) is a polynomial function; for simplicity, we choose linear functions. Assume that the eigenvalues obtained from (5) are arranged in ascending order. We use the first \(l_i \le L_{i}\) (with \(l_{i} \in \mathbb {N}^{+}\)) eigenfunctions (related to the smallest \(l_i\) eigenvalues) to form the local multiscale space \(V_{\text {off}}^{(i)}:=\text{span}\{\chi _{i}\phi _{j}^{(i)}:j=1,\cdots ,l_{i}\}\). The multiscale space \(V_{\text{ off }}\) is the direct sum of the local multiscale spaces, namely \(V_{\text{ off }} :=\bigoplus _{i=1}^{N_{\text{ in }}} V_{\text{ off }}^{(i)}\). Once the multiscale space \(V_{\text{ off }}\) is constructed, we can find the GMsFEM solution \(u_{\text {off}}^n\) at \(t=t_n\) by solving the following equation

$$\begin{aligned} \begin{aligned} \frac{1}{\varDelta t}\left\langle u_{\mathrm {off}}^{n}, v\right\rangle + \mathcal {A}\left( u_{\mathrm {off}}^{n}, v\right)&=\left\langle \frac{1}{\varDelta t} u_{\mathrm {off}}^{n-1}+f(u_{\mathrm {off}}^{n}), v\right\rangle , \quad \\ \langle u_{\mathrm {off}}^{0},v\rangle&=\langle g,v\rangle , \quad \forall v \in V_{\mathrm {off}}. \end{aligned} \end{aligned}$$
(6)
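In matrix form, the local spectral problem (5) is a small generalized eigenvalue problem. A minimal sketch, assuming the snapshot-space stiffness matrix \(A_i\) (weight \(\kappa \)) and the weighted mass matrix \(S_i\) (weight \(\hat{\kappa }\)) on \(D_i\) have already been assembled:

```python
import numpy as np
from scipy.linalg import eigh

def local_offline_basis(A_i, S_i, l_i):
    """Solve the local spectral problem (5) in the snapshot space,
    A_i phi = lambda * S_i phi, and return the l_i eigenpairs
    associated with the smallest eigenvalues (eigh returns the
    eigenvalues in ascending order)."""
    lam, phi = eigh(A_i, S_i)
    return lam[:l_i], phi[:, :l_i]
```

The local multiscale basis functions are then obtained by multiplying these eigenfunctions (expressed in the snapshot basis) by the partition-of-unity function \(\chi _i\).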

2.3 Online Enrichment

We will present the constructions of online basis functions [1] in this section.

Online Adaptive Algorithm. In this subsection, we introduce the method of online enrichment. After obtaining the multiscale space \(V_{\text{ off }}\), one may add online basis functions based on local residuals. Let \(u_{\text{ off }}^n \in V_{\text{ off }}\) be the solution obtained in (6) at time \(t=t_n\). Given a coarse neighborhood \(D_i\), we define \(V_i:=H_0^1(D_i)\cap V_{\text{ snap }}\) equipped with the norm \(\Vert v\Vert _{V_i}^{2}:=\int _{D_i}\kappa |\nabla {v}|^2\). We also define the local residual operator \(\mathcal {R}_i^n: V_i\rightarrow \mathbb {R}\) by

$$\begin{aligned} \mathcal {R}_{i}^{n}\left( v ; u_{\text {off}}^{n}\right) :=\int _{D_{i}}\left( \frac{1}{\varDelta t} u_{\text {off}}^{n-1}+f(u_{\text {off}}^{n})\right) v-\int _{D_{i}}\left( \kappa \nabla u_{\mathrm {off}}^{n} \cdot \nabla v+\frac{1}{\varDelta t} u_{\mathrm {off}}^{n} v\right) , \quad \forall v \in V_{i}. \end{aligned}$$
(7)

The operator norm of \(\mathcal {R}_i^n\), denoted by \(\Vert \mathcal {R}_i^n\Vert _{V_{i}^{*}}\), gives a measure of the size of the local residual. The online basis functions are computed during the time-marching process at a given fixed time \(t=t_n\), in contrast to the offline basis functions, which are pre-computed.

Suppose one needs to add one new online basis function \(\phi \) to the space \(V_i\). The analysis in [1] suggests that the required online basis function \(\phi \in V_i\) is the solution to the following equation

$$\begin{aligned} \mathcal {A}(\phi , v)=\mathcal {R}_{i}^{n}\left( v ; u_{\text{ off }}^{n, \tau }\right) \quad \forall v \in V_{i}. \end{aligned}$$
(8)

We refer to \(\tau \in \mathbb {N}\) as the level of the enrichment and denote the solution of (6) by \(u_{\text {off}}^{n,\tau }\). Note that \(V_{\text{ off }}^{n,0}:=V_{\text {off}}\) for every time level \(n\in \mathbb {N}\). Let \(\mathcal {I} \subset \left\{ 1,2, \ldots , N_{\text{in}}\right\} \) be an index set over some non-overlapping coarse neighborhoods. For each \(i\in \mathcal {I}\), we obtain an online basis function \(\phi _i\in V_i\) by solving (8) and define \(V_{\text{ off }}^{n, \tau +1}=V_{\text{ off }}^{n, \tau } \oplus {\text {span}}\left\{ \phi _{i} : i \in \mathcal {I}\right\} \). After that, we solve (6) in \(V_{\text{ off }}^{n, \tau +1}\).
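The enrichment loop at one time level can be sketched schematically as follows; the helper callables and the rule for selecting \(\mathcal {I}\) (enrich the neighborhoods whose residuals are within a factor of the largest) are illustrative assumptions, not the precise criterion of [1]:

```python
def online_enrichment(V_off, solve_coarse, local_residuals, add_basis,
                      tol, max_iter=10):
    """Online adaptive loop for one time level t_n (schematic).
    solve_coarse(V) returns u_off in the current space V;
    local_residuals(u) returns the dual norms ||R_i||_{V_i*} per
    coarse neighborhood; add_basis(V, I, u) solves (8) on the selected
    neighborhoods I and returns the enriched space."""
    u = solve_coarse(V_off)
    for tau in range(max_iter):
        res = local_residuals(u)
        if max(res) < tol:
            break
        # adaptive choice: enrich only where the residual is largest
        I = [i for i, r in enumerate(res) if r >= 0.5 * max(res)]
        V_off = add_basis(V_off, I, u)
        u = solve_coarse(V_off)
    return u, V_off
```

Uniform enrichment corresponds to taking \(\mathcal {I}\) to be all neighborhoods regardless of the residual distribution.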

Two Online Adaptive Methods. In this section, we compare two ways of obtaining online basis functions, denoted online adaptive method 1 and online adaptive method 2, respectively. Online adaptive method 1 adds online basis functions starting from the offline space at each time step; that is, basis functions obtained at the previous time step are not reused at the current time step. Online adaptive method 2 keeps the online basis functions from previous time steps. With this accumulation strategy, we can skip online enrichment after a certain time period, once the residual defined in (7) falls below a given tolerance. We present the results of these two methods in Fig. 3 and Fig. 4, respectively.

Numerical Results. In this section, we present some numerical examples to demonstrate the efficiency of our proposed method. The computational domain is \(\varOmega =(0,1)^2\subset \mathbb {R}^2\) and \(T=1\). The permeability fields \(\kappa _1\) and \(\kappa _2\) are shown in Fig. 2; the contrasts are \(10^4\) and \(10^5\) for \(\kappa _1\) and \(\kappa _2\), respectively. Unless otherwise specified, we use \(\kappa _1\).

For each function to be approximated, we define the following quantities \(e_a^n\) and \(e_2^n\) at \(t=t_n\) to measure the energy error and the \(L^2\) error, respectively:

$$e_a^n=\frac{\left\| u_{\mathrm {f}}^{n}-u_{\mathrm {off}}^{n}\right\| _{V(\varOmega )}}{\left\| u_{\mathrm {f}}^{n}\right\| _{V(\varOmega )}}\quad e_{2}^{n}=\frac{\left\| u_{\mathrm {f}}^{n}-u_{\mathrm {off}}^{n}\right\| _{L^{2}(\varOmega )}}{\left\| u_{\mathrm {f}}^{n}\right\| _{L^{2}(\varOmega )}}$$

where \(u_{\mathrm {f}}^{n}\) is the fine-grid solution (reference solution) and \(u_{\mathrm {off}}^{n}\) is the approximation obtained by the GMsFEM method. We define the energy norm and \(L^2\) norm of u by

$$\Vert u\Vert _{V(\varOmega )}^2=\int _{\varOmega }\kappa |\nabla {u}|^2 \quad \Vert u\Vert _{L^2(\varOmega )}^2=\int _{\varOmega }|u|^2 .$$
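Given the fine-grid stiffness matrix (with weight \(\kappa \)) and mass matrix, both error quantities reduce to weighted vector norms of the fine-grid coefficient vectors; a minimal sketch:

```python
import numpy as np

def relative_errors(u_f, u_off, A, M):
    """Relative energy and L2 errors at one time level.  A and M are the
    fine-grid stiffness matrix (entries int kappa grad g_i . grad g_j)
    and mass matrix; u_f and u_off are fine-grid coefficient vectors
    (u_off expanded back into the fine basis)."""
    e = u_f - u_off
    e_a = np.sqrt(e @ A @ e) / np.sqrt(u_f @ A @ u_f)
    e_2 = np.sqrt(e @ M @ e) / np.sqrt(u_f @ M @ u_f)
    return e_a, e_2
```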

Example 2.1

In this example, we compare the errors of adaptive online method 1 and uniform enrichment under different numbers of initial basis functions. We set the mesh sizes to \(H=1/16\) and \(h=1/256\). The time step is \(\varDelta t=10^{-3}\) and the final time is \(T=1\). The initial condition is \(u(x,y,t)|_{t=0}=4(0.5-x)(0.5-y)\). We set the permeability to \(\kappa _1\) and the source term to \(f=\frac{1}{\epsilon ^2}(u^3-u)\), where \(\epsilon =0.01\). We present the numerical results for the GMsFEM at time t = 0.1 in Tables 1, 2, and 3. For comparison, we present the results without online enrichment in Table 4. We observe that the adaptive online enrichment converges faster. Furthermore, comparing Table 4 and Table 1, we note that the online enrichment does not improve the error if we have only one offline basis function per neighborhood: because the first eigenvalue is small, the error reduction per online iteration is small, and each iteration decreases the error only slightly. As we increase the number of initial offline basis functions, the convergence is very fast and one online iteration is sufficient to reduce the error significantly.

Example 2.2

We compare online methods 1 and 2 under different tolerances. We keep H, h and the initial condition the same as in Example 2.1. We choose the initial number of basis functions to be 450, that is, two initial basis functions per neighborhood. We keep the source term \(f=\frac{1}{\epsilon ^2}(u^3-u)\). When \(\epsilon =0.01\), we choose the time step \(\varDelta t\) to be \(10^{-4}\). We plot the error and DOF from online method 1 in Fig. 3 and compare with the results from online method 2 in Fig. 4. From Fig. 3 and 4, we see that the error and DOF stabilize by \(t=0.01\). In Fig. 4, the DOF keeps increasing before turning steady, while the error remains at a relatively low level without adding online basis functions after some time. As a trade-off, online method 2 yields larger errors than method 1 for the same tolerance. We also apply online adaptive method 2 with permeability \(\kappa _2\) in Fig. 5. The errors are relatively low for both permeability fields.

Fig. 2.

Permeability field

Table 1. The errors for online enrichment when number of initial basis = 1. Left: Adaptive enrichment. Right: Uniform enrichment.
Table 2. The errors for online enrichment when number of initial basis = 2. Left: Adaptive enrichment. Right: Uniform enrichment
Table 3. The errors for online enrichment when number of initial basis = 3. Left: Adaptive enrichment. Right: Uniform enrichment
Table 4. The errors for different \(\epsilon \) in source term without online enrichment. Up: Energy error. Down: \(L^2\) error
Fig. 3.

Error and DOF obtained by online method 1 in Example 2.2

Fig. 4.

Error and DOF obtained by online method 2 in Example 2.2

Fig. 5.

Error and DOF obtained by online method 2 with permeability \(\kappa _2\) in Example 2.2

3 Application to the Allen-Cahn Equation

In this section, we apply our proposed method to the Allen-Cahn equation. We use exponential time differencing (ETD) for the time discretization. To deal with the nonlinear term, DEIM is applied. We present the two methods in the following subsections.

3.1 Derivation of Exponential Time Differencing

Let \(\tau \) be the time step. Using ETD, \(u_{\text {off}}^{n}\) is the solution to (9):

$$\begin{aligned} \begin{aligned} \left\langle u_{\text {off}}^n,v\right\rangle +\tau \mathcal {A}(u_{\text {off}}^{n},v)&=\langle \text {exp}(-\dfrac{\tau }{\epsilon ^2}\frac{f(u_{\mathrm {off}}^{n-1})}{u_{\mathrm {off}}^{n-1}})u_{\text {off}}^{n-1},v \rangle \\ \langle u_{\mathrm {off}}^{0},v\rangle&=\langle g,v\rangle \quad \forall v \in V_{\mathrm {off}} \end{aligned} \end{aligned}$$
(9)

Next, we will derive this equation. We have

$$\begin{aligned} u_t-\text {div}(\kappa \nabla u)+\frac{1}{\epsilon ^2}f(u)=0 \end{aligned}$$

Multiplying the equation by the integrating factor \(e^{p(u)}\), we have

$$\begin{aligned} e^{p(u)}u_t+e^{p(u)}\frac{1}{\epsilon ^2}f(u)=e^{p(u)}\text {div}(\kappa \nabla u) \end{aligned}$$

We require the above to become

$$\begin{aligned} \dfrac{d(e^{p(u)}u)}{dt}=e^{p(u)}\text {div}(\kappa \nabla u) \end{aligned}$$
(10)

Expanding the left-hand side gives

$$\dfrac{d(e^{p(u)}u)}{dt}=e^{p(u)}u_t+e^{p(u)}\left( \dfrac{d}{dt}p(u)\right) u,$$

so (10) holds provided that \(\frac{d}{dt}p(u)=\frac{1}{\epsilon ^2}\frac{f(u)}{u}\), that is,

$$p(u(t_n,\cdot ))-p(u(0,\cdot ))=\int _{0}^{t_n}\frac{1}{\epsilon ^2}\frac{f(u)}{u}\,dt.$$

Using the backward Euler method in (10), we have

$$\begin{aligned} u_n-\tau \text {div}(\kappa \nabla u_n)=e^{-p(u)_{n}}u_{n-1} \end{aligned}$$
(11)

where \(p(u)_n=p(u(t_n))-p(u(t_{n-1}))\). To solve (11), we approximate the right-hand side of (11) as follows:

$$\begin{aligned} e^{-p(u)_{n}}u_{n-1}\approx e^{-\frac{\tau }{\epsilon ^2}\frac{f(u(t_{n-1}))}{u(t_{n-1})}}u(t_{n-1}). \end{aligned}$$
(12)

Using the above approximation, we have

$$\begin{aligned} u_{\text {off}}^n-\tau \text {div}(\kappa \nabla u_{\text {off}}^n)=\text {exp}(-\dfrac{\tau }{\epsilon ^2}\frac{f(u_{\mathrm {off}}^{n-1})}{u_{\mathrm {off}}^{n-1}}) u_{\mathrm {off}}^{n-1}. \end{aligned}$$
(13)
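One step of (13) then amounts to a nodewise exponential damping followed by a linear solve. A sketch in matrix form, assuming assembled mass and stiffness matrices and passing the quotient \(f(u)/u\) as a callable (for the Allen-Cahn nonlinearity, \(f(u)/u=u^2-1\), so the quotient is well defined even where u vanishes):

```python
import numpy as np

def etd_step(M, A, u_prev, tau, eps, f_over_u):
    """One ETD step of the form (13): damp the previous solution nodewise
    by exp(-(tau/eps^2) * f(u)/u), then solve the implicit diffusion
    problem (M + tau*A) u = M @ damped.  f_over_u evaluates the quotient
    f(u)/u; for Allen-Cahn, f_over_u = lambda u: u**2 - 1."""
    damped = np.exp(-(tau / eps**2) * f_over_u(u_prev)) * u_prev
    return np.linalg.solve(M + tau * A, M @ damped)
```

Note that the stiff reaction term is handled exactly (via the exponential) while only the nonstiff diffusion part is treated implicitly, which is what permits larger time steps than the plain implicit Euler scheme of Sect. 2.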

3.2 DEIM Method

When we evaluate the nonlinear term, the complexity is \( O(\alpha (n)+c\cdot n)\), where \(\alpha \) is some function and c is a constant. To reduce the complexity, we approximate local and global nonlinear functions with the Discrete Empirical Interpolation Method (DEIM) [2]. DEIM is based on approximating a nonlinear function by means of an interpolatory projection of a few selected snapshots of the function. The idea is to represent a function over the domain using empirical snapshots and information at a few selected locations (or components). The key to the complexity reduction is to replace the orthogonal projection of POD with the interpolatory projection of DEIM in the same POD basis.

We briefly review DEIM. Let \(f(\tau )\) be the nonlinear function. We wish to find an approximation of \(f(\tau )\) at a reduced cost. To obtain a reduced-order approximation of \(f(\tau )\), we first define a reduced-dimensional space for it. We would like to find m basis vectors (where m is much smaller than n), \(\phi _1,\cdots ,\phi _m\), such that we can write

$$f(\tau )\approx \varPhi d(\tau ),$$

where \(\varPhi =(\phi _1,\cdots ,\phi _m)\). We employ POD to obtain \(\varPhi \) and use DEIM (see Table 5) to compute \(d(\tau )\) as follows. In particular, we solve for \(d(\tau )\) by using only m rows of \(\varPhi \). This can be formalized using the matrix P,

$$\mathrm {P}=\left[ e_{\wp _{1}}, \ldots , e_{\wp _{m}}\right] \in \mathbb {R}^{n \times m},$$

where \(e_{\wp _{i}}=[0,\cdots ,0,1,0,\cdots ,0]^T\in \mathbb {R}^{n}\) is the \(\wp _i^{th}\) column of the identity matrix \(I_n \in \mathbb {R}^{n \times n}\) for \(i=1,\cdots ,m\). Using \(P^Tf(\tau )=P^T\varPhi d(\tau )\), we get the approximation of \(f(\tau )\):

$$f(\tau ) \approx \tilde{f}(\tau )=\varPhi d(\tau )=\varPhi \left( \mathrm {P}^{T} \varPhi \right) ^{-1} \mathrm {P}^{T} f(\tau )$$
Table 5. DEIM algorithm
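The greedy index selection summarized in Table 5 and the resulting interpolatory approximation can be sketched as follows (a schematic implementation; the variable names are ours):

```python
import numpy as np

def deim_points(Phi):
    """Greedy DEIM index selection: the first index maximizes |phi_1|;
    each subsequent index maximizes the residual of interpolating the
    next basis vector at the indices chosen so far."""
    n, m = Phi.shape
    p = [int(np.argmax(np.abs(Phi[:, 0])))]
    for l in range(1, m):
        U = Phi[:, :l]
        # interpolate column l at the current points, then take the
        # location of the largest interpolation residual
        c = np.linalg.solve(U[p, :], Phi[p, l])
        r = Phi[:, l] - U @ c
        p.append(int(np.argmax(np.abs(r))))
    return np.array(p)

def deim_approx(Phi, p, f_vals):
    """DEIM approximation f ~ Phi (P^T Phi)^{-1} P^T f, which needs f
    only at the m selected indices p."""
    return Phi @ np.linalg.solve(Phi[p, :], f_vals[p])
```

In practice `deim_points` is run once offline on the POD basis \(\varPhi \); in the online stage the nonlinear function is evaluated only at the m selected indices, which is the source of the complexity reduction.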

3.3 Numerical Results

Example 3.3

In this example, we apply DEIM under the same setting as in Example 2.2, without the online enrichment procedure. We compare the results in Fig. 8. To test DEIM, we first consider the solution using DEIM where the snapshots are obtained from the same equation. We set \(\epsilon =0.01\), solve the equation once to obtain the snapshot matrix \(\varPhi \), and then use DEIM to solve the equation again. The two results are presented in Fig. 6: the first plot shows the errors when DEIM is not used, and the second the errors when it is. The errors in these two cases differ only slightly, since the snapshots are obtained from the same equation. We then consider cases where the snapshots are obtained from:

  1. Different right-hand-side functions.
  2. Different initial conditions.
  3. Different permeability fields.
  4. Different time steps.

Different Right-Hand Side. Since the solutions for different \(\epsilon \) share some similarities, we can use the solution from one value of \(\epsilon \) to solve for another; this is attractive because solving the case with smaller \(\epsilon \) is more time-consuming. We use the f(u) obtained with \(\epsilon =0.09\) to compute the solution for \(\epsilon =0.1\), since the solutions for these two cases differ only slightly. We show the results in Fig. 7.

Different Initial Conditions. In this section, we consider using snapshots from a different initial condition and record the results in Fig. 9. Comparing Fig. 9 and Fig. 6, we see that a different initial condition has little impact on the final solution, since the solution is close to the one obtained with snapshots from the same equation.

Different Permeability Field. In this section, we consider using snapshots from a different permeability field and record the results in Fig. 10. For reference, the first two figures plot the fine solution and the multiscale solution without using DEIM. We then construct the snapshots from the other permeability field \(\kappa _1\) and apply them to compute the solution for \(\kappa _2\). The last figure shows that the error of using DEIM is relatively small.

Different Time Steps. In this section, we construct the snapshots using the nonlinear function obtained at previous time steps, for example for \(t<0.05\). Then we apply DEIM to solve the equation for \(0.05<t<0.06\). We use this approach to solve the equation with permeability \(\kappa _1\) and \(\kappa _2\), respectively, and plot the results in Fig. 11 and 12. From these figures, we see that DEIM has different effects for different permeability fields. With \(\kappa _1\), the error increases significantly when DEIM is applied, but with \(\kappa _2\), the error decreases to a lower level when we use DEIM.

Fig. 6.

Error for same \(\epsilon \)

Fig. 7.

Error for different \(\epsilon \)

Fig. 8.

Comparing fine and multiscale solutions.

Fig. 9.

Using DEIM for different initial conditions

Fig. 10.

Using DEIM for different permeability field

Fig. 11.

Using DEIM under different time steps for \(\kappa _1\)

Fig. 12.

Using DEIM under different time steps for \(\kappa _2\)