Reaction–diffusion approximation of nonlocal interactions using Jacobi polynomials

Abstract

Nonlocal interactions, which have attracted attention in various fields, arise from the integration of microscopic information such as transition probabilities, molecular events, and the signaling networks of living organisms. Nonlocal interactions can reproduce a variety of patterns corresponding to such detailed microscopic information. However, because this information is compressed into the interaction kernel, the approach is inconvenient for identifying the specific mechanisms behind the target phenomena. Therefore, we previously proposed a method capable of approximating any nonlocal interaction by a reaction–diffusion system with auxiliary factors (Ninomiya et al., J Math Biol 75:1203–1233, 2017). In this paper, we provide an explicit method for determining the parameters of the reaction–diffusion system for a given kernel shape by using Jacobi polynomials under appropriate assumptions. We additionally introduce a numerical method, based on Tikhonov regularization, to specify the parameters of the reaction–diffusion system with general diffusion coefficients.
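As background for the last step above, Tikhonov regularization stabilizes an ill-conditioned least-squares problem by adding a penalty term. A minimal sketch follows (our illustration only, with hypothetical names; the paper's actual discretization of the parameter-fitting problem is developed in the main text and not reproduced here):

```python
# Minimal sketch of Tikhonov-regularized least squares (illustrative; not the
# paper's discretization). Minimizes ||A x - b||^2 + lam * ||x||^2.
import numpy as np

def tikhonov_solve(A, b, lam):
    """Solve the regularized normal equations (A^T A + lam I) x = A^T b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# Example: a nearly rank-deficient design matrix, where plain least squares
# would amplify the noise in the last two coefficients.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 5)) @ np.diag([1.0, 1.0, 1.0, 1e-6, 1e-6])
b = A @ np.ones(5) + 1e-3 * rng.standard_normal(100)
print(tikhonov_solve(A, b, lam=1e-8))
```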

References

  1. Amari, S.: Dynamics of pattern formation in lateral-inhibition type neural fields. Biol. Cybernet. 27, 77–87 (1977)

  2. Bates, P.W., Fife, P.C., Ren, X., Wang, X.: Traveling waves in a convolution model for phase transitions. Arch. Ration. Mech. Anal. 138, 105–136 (1997)

  3. Bates, P.W., Zhao, G.: Existence, uniqueness and stability of the stationary solution to a nonlocal evolution equation arising in population dispersal. J. Math. Anal. Appl. 332, 428–440 (2007)

  4. Berestycki, H., Nadin, G., Perthame, B., Ryzhik, L.: The non-local Fisher-KPP equation: traveling waves and steady states. Nonlinearity 22, 2813–2844 (2009)

  5. Chihara, T.S.: An Introduction to Orthogonal Polynomials. Gordon and Breach, New York (1978)

  6. Coville, J., Dávila, J., Martínez, S.: Nonlocal anisotropic dispersal with monostable nonlinearity. J. Differ. Equ. 244, 3080–3118 (2008)

  7. Furter, J., Grinfeld, M.: Local vs. non-local interactions in population dynamics. J. Math. Biol. 27, 65–80 (1989)

  8. Hutson, V., Martinez, S., Mischaikow, K., Vickers, G.T.: The evolution of dispersal. J. Math. Biol. 47, 483–517 (2003)

  9. Kondo, S.: An updated kernel-based Turing model for studying the mechanisms of biological pattern formation. J. Theor. Biol. 414, 120–127 (2017)

  10. Kuffler, S.W.: Discharge patterns and functional organization of mammalian retina. J. Neurophysiol. 16, 37–68 (1953)

  11. Laing, C.R., Troy, W.C.: Two-bump solutions of Amari-type models of neuronal pattern formation. Phys. D 178, 190–218 (2003)

  12. Laing, C.R., Troy, W.: PDE methods for nonlocal models. SIAM J. Appl. Dyn. Syst. 2, 487–516 (2003)

  13. Lefever, R., Lejeune, O.: On the origin of tiger bush. Bull. Math. Biol. 59, 263–294 (1997)

  14. Murray, J.D.: Mathematical Biology. I. An Introduction, 3rd edn. Interdisciplinary Applied Mathematics, vol. 17. Springer, Berlin (2002)

  15. Murray, J.D.: Mathematical Biology. II. Spatial Models and Biomedical Applications, 3rd edn. Interdisciplinary Applied Mathematics, vol. 18. Springer, Berlin (2003)

  16. Nakamasu, A., Takahashi, G., Kanbe, A., Kondo, S.: Interactions between zebrafish pigment cells responsible for the generation of Turing patterns. Proc. Natl. Acad. Sci. USA 106, 8429–8434 (2009)

  17. Nakamura, G., Potthast, R.: Inverse Modeling. IOP Publishing, Bristol (2015)

  18. Ninomiya, H., Tanaka, Y., Yamamoto, H.: Reaction, diffusion and non-local interaction. J. Math. Biol. 75, 1203–1233 (2017)

  19. Tanaka, Y., Yamamoto, H., Ninomiya, H.: Mathematical approach to nonlocal interactions using a reaction–diffusion system. Dev. Growth Differ. 59, 388–395 (2017)

Acknowledgements

The authors would like to thank Professor Yoshitsugu Kabeya of Osaka Prefecture University for his valuable comments and Professor Gen Nakamura of Hokkaido University for his fruitful comments on Sect. 6. The authors are particularly grateful to the referees for their careful reading and valuable comments. The first author was partially supported by JSPS KAKENHI Grant Numbers 26287024, 15K04963, 16K13778, 16KT0022. The second author was partially supported by KAKENHI Grant Number 17K14228, and JST CREST Grant Number JPMJCR14D3.

Author information

Correspondence to Yoshitaro Tanaka.

Additional information

Dedicated to Professor Masayasu Mimura on his 75th birthday.

Appendices

A: Existence and boundedness of a solution of the problem (P)

Proposition 5

(Local existence and uniqueness of the solution) There exist a constant \(\tau >0\) and a unique solution \(u\in C([0,\tau ];BC(\mathbb {R}))\) of the problem (P) with an initial datum \(u_0\in BC(\mathbb {R})\).

This proposition is proved by a standard argument, based on the fixed point theorem applied to the integral equation associated with the heat kernel.
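For the reader's convenience, we record the integral (mild) form on which the fixed point argument acts; this display is our addition and is standard. Writing \(G(x,t)=(4\pi d_u t)^{-1/2}e^{-x^2/(4 d_u t)}\) for the heat kernel,

$$\begin{aligned} u(x,t)=\int _{\mathbb {R}} G(x-y,t)\,u_0(y)\,dy +\int _0^t\int _{\mathbb {R}} G(x-y,t-s)\, g\bigl (u(y,s),(J*u)(y,s)\bigr )\,dy\,ds, \end{aligned}$$

and the Banach fixed point theorem applies to the right-hand side as a map on \(C([0,\tau ];BC(\mathbb {R}))\) for sufficiently small \(\tau \), assuming, as the standing hypotheses on g allow, that g is locally Lipschitz on bounded sets.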

In order to prove Theorem 1, we discuss the maximum principle as follows.

Lemma 2

(Global bounds for the solution of (P)) For a solution u of (P), it holds that

$$\begin{aligned} \sup _{0\le t< \infty } \Vert u(\cdot ,t)\Vert _{BC(\mathbb {R})} < \infty . \end{aligned}$$
(25)

Proof

For a contradiction, we assume that there exists a constant \(T>0\) such that \(\sup _{0\le t<T} \Vert u(\cdot ,t)\Vert _{BC(\mathbb {R})}=\infty \). Then, we can take \(\{T_n\}_{n\in \mathbb {N}}\) satisfying \(0<T_n<T\), \(T_n\rightarrow T\) as \(n\rightarrow \infty \), and

$$\begin{aligned} R_n:= \sup _{0\le t\le T_n} \Vert u(\cdot ,t)\Vert _{BC(\mathbb {R})} \rightarrow \infty \quad \text { as } n\rightarrow \infty . \end{aligned}$$

Hence, for all \(R>0\) there exists \(N\in \mathbb {N}\) such that

$$\begin{aligned} R_n>R \qquad \text { for all } \; n\ge N. \end{aligned}$$
(26)

We may suppose that \(\sup _{0\le t\le T_n,x\in \mathbb {R}}u(x,t) =\sup _{0\le t\le T_n,x\in \mathbb {R}}|u(x,t)|\) (by replacing u(x, t) with \(-u(x,t)\) if necessary).

Case 1: \(u(x_n,t_n)=R_n\) for some \((x_n,t_n)\in \mathbb {R}\times [0,T_n]\).

Since \((x_n,t_n)\) is a maximum point of u on \(\mathbb {R}\times [0,T_n]\), we see that

$$\begin{aligned} u_x(x_n,t_n)=0, \quad u_t(x_n,t_n)\ge 0, \quad u_{xx}(x_n,t_n)\le 0. \end{aligned}$$

For \(r_0>0\) large enough, it holds that

$$\begin{aligned} -g_0 r^p +g_1\Vert J\Vert _{L^1(\mathbb {R})} r^2 +g_2\Vert J\Vert _{L^1(\mathbb {R})} r +g_3 r < -3 \quad \text { for all } r\ge r_0. \end{aligned}$$
(27)

Put \(R=r_0\) and \(r=u(x_n,t_n)\). By (26), we have \(u(x_n,t_n)=R_n>R=r_0\) for all \(n\ge N\), and hence (27) yields

$$\begin{aligned} g\left( u(x_n,t_n),J*u(x_n,t_n) \right) <0. \end{aligned}$$

Substituting \((x_n,t_n)\) into the equation of (P), we obtain

$$\begin{aligned} 0\le u_t(x_n,t_n) = d_u u_{xx}(x_n,t_n) +g\left( u(x_n,t_n),J*u(x_n,t_n) \right) <0. \end{aligned}$$

This yields a contradiction.

Case 2: \(u(x,t)<R_n\) for all \((x,t)\in \mathbb {R}\times [0,T_n]\).

For any \(n\in \mathbb {N}\), there exists a maximum point \(t_n\in (0,T_n]\) of \(\Vert u(\cdot ,t)\Vert _{BC(\mathbb {R})}\) such that \(\Vert u(\cdot , t_n)\Vert _{BC(\mathbb {R})} =\max _{0\le t \le T_n} \Vert u(\cdot , t)\Vert _{BC(\mathbb {R})}=R_n\). If \(s\in (0,t_n)\), then we see that

$$\begin{aligned} \Vert u(\cdot , t_n-s)\Vert _{BC(\mathbb {R})} \le \max _{0\le t \le T_n}\Vert u(\cdot , t)\Vert _{BC(\mathbb {R})}=R_n \end{aligned}$$
(28)

because \(0<t_n-s\le T_n\). Here, since \(\Vert u(\cdot , t_n-s)\Vert _{BC(\mathbb {R})}-s < \Vert u(\cdot ,t_n)\Vert _{BC(\mathbb {R})} =\sup _{x\in \mathbb {R}} |u(x,t_n)|\), there exists a point \(x^{(n,s)}\in \mathbb {R}\) such that

$$\begin{aligned}&\Vert u(\cdot , t_n-s)\Vert _{BC(\mathbb {R})}-s < u(x^{(n,s)}, t_n) \quad \text { and } \end{aligned}$$
(29)
$$\begin{aligned}&u_{xx}(x^{(n,s)},t_n)\le 0. \end{aligned}$$
(30)

Let \(n\in \mathbb {N}\) be sufficiently large. Since \(\Vert u(\cdot , t_n-s)\Vert _{BC(\mathbb {R})}\) is then sufficiently large, we have \(u(x^{(n,s)},t_n)>r_0\) by (29). Hence, by (27) it follows that

$$\begin{aligned} g(u(x^{(n,s)}, t_n), J*u(x^{(n,s)}, t_n))<-3. \end{aligned}$$
(31)

By (30), (31) and the equation of (P), it holds that \(u_t(x^{(n,s)},t_n)\le -3\). Hence, there is a sufficiently small constant \(\eta _0>0\) such that for any \(0<\eta <\eta _0\),

$$\begin{aligned} u(x^{(n,s)}, t_n) -u(x^{(n,s)}, t_n-\eta )<-2\eta . \end{aligned}$$
(32)

On the other hand, by (28) and (29), we obtain that for \(s\in (0,t_n)\),

$$\begin{aligned}&u(x^{(n,s)},t_n) -u(x^{(n,s)},t_n-s)\\&\quad > \Vert u(\cdot ,t_n-s)\Vert _{BC(\mathbb {R})}-s - \Vert u(\cdot ,t_n-s)\Vert _{BC(\mathbb {R})} = -s. \end{aligned}$$

Choosing \(0< s < \min \{\eta _0, t_n\}\) and taking \(\eta =s\) in (32), we see that

$$\begin{aligned} -s< u(x^{(n,s)},t_n) -u(x^{(n,s)},t_n-s) < -2s. \end{aligned}$$

This is a contradiction, since \(-2s<-s\) for \(s>0\). Thus, both Case 1 and Case 2 lead to a contradiction, and (25) follows. \(\square \)

Proof of Theorem 1

Proposition 5 and Lemma 2 immediately imply Theorem 1. \(\square \)

B: Boundedness of the solution of the problems (P) and (\(\hbox {RD}_{\varepsilon }\))

Here, we show several propositions.

Proof of Proposition 1

Multiplying the equation of (P) by u and integrating it over \(\mathbb {R}\), we have

$$\begin{aligned} \frac{1}{2}\frac{d}{dt}\Vert u\Vert _{L^2(\mathbb {R})}^2 =-d_u \Vert u_x\Vert _{L^2(\mathbb {R})}^2 +\int _{\mathbb {R}} g(u,J*u)u \, dx. \end{aligned}$$

From (H1), the Schwarz inequality and the Young inequality for convolutions, we have

$$\begin{aligned} \frac{1}{2}\frac{d}{dt}\Vert u\Vert _{L^2(\mathbb {R})}^2&\le -d_u \Vert u_x\Vert _{L^2(\mathbb {R})}^2 -g_0\Vert u\Vert _{L^{p+1}(\mathbb {R})}^{p+1} +g_1 \Vert J\Vert _{L^1(\mathbb {R})} \Vert u\Vert _{L^2(\mathbb {R})} \Vert u\Vert _{L^4(\mathbb {R})}^2 \\&\quad +\left( g_2 \Vert J\Vert _{L^1(\mathbb {R})} +g_3 \right) \Vert u\Vert _{L^2(\mathbb {R})}^2. \end{aligned}$$

Moreover, using (H4), the interpolation inequality (a consequence of the Hölder inequality) and the Young inequality, it holds that

$$\begin{aligned} g_1 \Vert J\Vert _{L^1(\mathbb {R})} \Vert u\Vert _{L^2(\mathbb {R})} \Vert u\Vert _{L^4(\mathbb {R})}^2&\le g_1 \Vert u\Vert _{L^{p+1}(\mathbb {R})}^{(p+1)/(p-1)} \Vert J\Vert _{L^1(\mathbb {R})} \Vert u\Vert _{L^2(\mathbb {R})}^{2(p-2)/(p-1)} \\&\le g_0 \Vert u\Vert _{L^{p+1}(\mathbb {R})}^{p+1} +C_9 \, \Vert J\Vert _{L^1(\mathbb {R})}^{(p-1)/(p-2)} \Vert u\Vert _{L^2(\mathbb {R})}^2. \end{aligned}$$

Hence we obtain

$$\begin{aligned} \frac{1}{2}\frac{d}{dt}\Vert u\Vert _{L^2(\mathbb {R})}^2\le & {} -d_u \Vert u_x\Vert _{L^2(\mathbb {R})}^2 +C_9 \, \Vert J\Vert _{L^1(\mathbb {R})}^{(p-1)/(p-2)} \Vert u\Vert _{L^2(\mathbb {R})}^2 \\&+\left( g_2 \Vert J\Vert _{L^1(\mathbb {R})} +g_3 \right) \Vert u\Vert _{L^2(\mathbb {R})}^2. \end{aligned}$$

We therefore have

$$\begin{aligned} \frac{d}{dt}\Vert u\Vert _{L^2(\mathbb {R})}^2 \le C_{10} \Vert u\Vert _{L^2(\mathbb {R})}^2. \end{aligned}$$
(33)

Next, integrating the equation of (P) multiplied by \(u_{xx}\) over \(\mathbb {R}\), we have

$$\begin{aligned} -\frac{1}{2}\frac{d}{dt} \Vert u_x\Vert _{L^2(\mathbb {R})}^2= & {} d_u \Vert u_{xx}\Vert _{L^2(\mathbb {R})}^2 -\int _{\mathbb {R}} \left\{ g_u(u,J*u) |u_x|^2\right. \\&\left. +g_v(u,J*u)(J*u_x)u_x \right\} \, dx. \end{aligned}$$

From (H2), (H3), the Schwarz inequality and the Young inequality for convolutions, we can estimate the derivative of \(\Vert u_x\Vert _{L^2}^2\) with respect to t as follows:

$$\begin{aligned} \frac{1}{2}\frac{d}{dt} \Vert u_x\Vert _{L^2(\mathbb {R})}^2&\le -d_u \Vert u_{xx}\Vert _{L^2(\mathbb {R})}^2 -g_0 p \int _{\mathbb {R}} |u|^{p-1} |u_x|^2 \, dx\\&\quad +g_4 \Vert J\Vert _{L^1(\mathbb {R})} \Vert u\Vert _{BC(\mathbb {R})} \Vert u_x\Vert _{L^2(\mathbb {R})}^2 \\&\quad +g_5 \Vert u_x\Vert _{L^2(\mathbb {R})}^2 +g_6 \int _{\mathbb {R}} |u| |J*u_x| |u_x| \, dx + g_7 \int _{\mathbb {R}} |J*u_x| |u_x| \, dx\\&\le g_4 \Vert J\Vert _{L^1(\mathbb {R})} \Vert u\Vert _{BC(\mathbb {R})} \Vert u_x\Vert _{L^2(\mathbb {R})}^2 -g_0 p \int _{\mathbb {R}} |u|^{p-1} |u_x|^2 \, dx \\&\quad +g_6 \int _{\mathbb {R}} |u| |J*u_x| |u_x| \, dx + g_5 \Vert u_x\Vert _{L^2(\mathbb {R})}^2 \\&\quad + g_7 \Vert J \Vert _{L^1(\mathbb {R})}\Vert u_x \Vert ^2_{L^2(\mathbb {R})}. \end{aligned}$$

Since the Hölder and the Young inequalities give

$$\begin{aligned} g_6 \int _{\mathbb {R}} |u| |J*u_x| |u_x| \, dx \le g_0 p \int _{\mathbb {R}} |u|^{p-1} |u_x|^2 \, dx +C_{11} \, \Vert J\Vert _{L^1(\mathbb {R})}^{(p-1)/(p-2)} \Vert u_x\Vert _{L^2(\mathbb {R})}^2, \end{aligned}$$

we obtain

$$\begin{aligned} \frac{1}{2}\frac{d}{dt} \Vert u_x\Vert _{L^2(\mathbb {R})}^2&\le \left\{ \left( g_4 \Vert u\Vert _{BC(\mathbb {R})}+g_7 \right) \Vert J\Vert _{L^1(\mathbb {R})} + g_5 \right\} \Vert u_x \Vert ^2_{L^2(\mathbb {R})} \nonumber \\&\quad +C_{11} \, \Vert J\Vert _{L^1(\mathbb {R})}^{(p-1)/(p-2)} \Vert u_x\Vert _{L^2(\mathbb {R})}^2, \end{aligned}$$
(34)

where \(C_{11}\) is a positive constant; if \(2\le p<3\), then \(g_6=0\), which implies \(C_{11}=0\) by (H4). By Lemma 2, \(\Vert u(\cdot ,t)\Vert _{BC(\mathbb {R})}\) is bounded in t, so we see that

$$\begin{aligned} \frac{d}{dt} \Vert u_x\Vert _{L^2(\mathbb {R})}^2 \le C_{12} \Vert u_x\Vert _{L^2(\mathbb {R})}^2. \end{aligned}$$
(35)

Put \(X(t):=\Vert u(\cdot ,t)\Vert _{L^2(\mathbb {R})}^2\) and \(Y(t):=\Vert u_x(\cdot ,t)\Vert _{L^2(\mathbb {R})}^2\). By (33) and (35), it follows that

$$\begin{aligned} \left\{ \begin{aligned}&X(t)\le X(0)e^{C_{10} T}, \\&Y(t)\le Y(0)e^{C_{12} T} \end{aligned} \right. \quad \text { for all } \; 0\le t\le T. \end{aligned}$$

Consequently, we have

$$\begin{aligned} \sup _{0\le t\le T}\Vert u(\cdot ,t)\Vert _{H^1(\mathbb {R})} \le \Vert u_0\Vert _{H^1(\mathbb {R})}e^{k_0T}, \end{aligned}$$

where \(2k_0=\max \{C_{10},C_{12}\}\). \(\square \)

Next we give the proof of Proposition 2.

Proof of Proposition 2

First, we show that \(u^{\varepsilon }\) and \(v_j^{\varepsilon }\) are bounded in \(L^2(\mathbb {R})\) by an argument similar to that in the proof of Proposition 1. Multiplying the principal equation of (\(\hbox {RD}_{\varepsilon }\)) by \(u^{\varepsilon }\) and integrating it over \(\mathbb {R}\), we have

$$\begin{aligned} \frac{d}{dt} \Vert u^{\varepsilon }\Vert _{L^2(\mathbb {R})}^2 \le C_{13} \left( \Vert u^{\varepsilon } \Vert _{L^2(\mathbb {R})}^2 + \sum _{j=1}^{M}\Vert \alpha _j v_j^{\varepsilon } \Vert _{L^2(\mathbb {R})}^2 \right) . \end{aligned}$$
(36)

Also, multiplying the second equation of (\(\hbox {RD}_{\varepsilon }\)) by \(v_j^{\varepsilon }\) and integrating it over \(\mathbb {R}\), we see that

$$\begin{aligned} \frac{d}{dt} \Vert v_j ^{\varepsilon } \Vert _{L^2(\mathbb {R})}^2 \le \frac{1}{\varepsilon } \left( \Vert u^{\varepsilon }\Vert _{L^2(\mathbb {R})}^2 -\Vert v_j ^{\varepsilon } \Vert _{L^2(\mathbb {R})}^2 \right) . \end{aligned}$$

Multiplying the above inequality by \(\alpha _j^2\) and summing over \(j=1,\ldots ,M\), we obtain

$$\begin{aligned} \frac{d}{dt} \sum _{j=1}^{M} \Vert \alpha _j v_j^{\varepsilon } \Vert _{L^2(\mathbb {R})}^2 \le \frac{1}{\varepsilon } \left( C_{14} \Vert u^{\varepsilon }\Vert _{L^2(\mathbb {R})}^2 - \sum _{j=1}^{M} \Vert \alpha _j v_j^{\varepsilon } \Vert _{L^2(\mathbb {R})}^2 \right) , \end{aligned}$$
(37)

where \(C_{14}=\sum _{j=1}^{M}\alpha _j^2\). Here, put \(X(t):=\Vert u^{\varepsilon }(\cdot ,t)\Vert _{L^2(\mathbb {R})}^2\) and \(Y(t):=\sum _{j=1}^{M} \Vert \alpha _j v_j^{\varepsilon } (\cdot ,t) \Vert _{L^2(\mathbb {R})}^2\). Then (36) and (37) can be rewritten as follows:

$$\begin{aligned} \frac{dX}{dt} \le C_{13} (X + Y), \qquad \frac{dY}{dt} \le \frac{1}{\varepsilon } ( C_{14} X -Y ). \end{aligned}$$
(38)

Adding \(C_{13}\varepsilon \) times the second inequality of (38) to the first and noticing that X(t), \(Y(t)\ge 0\), we have

$$\begin{aligned} \frac{d}{dt}\left( X+C_{13}\varepsilon Y \right) \le C_{13}(1+C_{14}) ( X +C_{13}\varepsilon Y ), \end{aligned}$$

which implies

$$\begin{aligned} X(t) +C_{13}\varepsilon Y(t) \le \left\{ X(0) +C_{13}\varepsilon Y(0) \right\} e^{C_{13}(1+C_{14}) T} \qquad \text { for all } \; 0\le t\le T. \end{aligned}$$

Note that \(\Vert v_j^{\varepsilon }(\cdot ,0)\Vert _{L^2(\mathbb {R})}=\Vert k^{d_j}*u_0\Vert _{L^2(\mathbb {R})} \le \Vert u_0\Vert _{L^2(\mathbb {R})}\) by the Young inequality for convolutions, since \(\Vert k^d\Vert _{L^1(\mathbb {R})}=1\). Recalling \(X(t)=\Vert u^{\varepsilon }(\cdot ,t)\Vert _{L^2(\mathbb {R})}^2\) and \(0<\varepsilon <1\), we see that

$$\begin{aligned} \Vert u^{\varepsilon }(\cdot ,t)\Vert _{L^2(\mathbb {R})}^2 \le \left( 1+C_{13} \sum _{j=1}^{M} \alpha _j^2 \right) \Vert u_0\Vert _{L^2(\mathbb {R})}^2 \, e^{C_{13}(1+C_{14}) T} \qquad \text { for all } \; 0\le t\le T. \end{aligned}$$

Using (38) again yields

$$\begin{aligned} Y(t)\le & {} \max \left\{ Y(0),\ C_{14}\max _{0\le t\le T} X(t) \right\} \\\le & {} \max \left\{ \sum _{j=1}^{M} \alpha _j^2 ,\ C_{14}\left( 1+ C_{13} \sum _{j=1}^{M} \alpha _j^2 \right) e^{C_{13}(1+C_{14})T} \right\} \Vert u_0\Vert _{L^2(\mathbb {R})}^2. \end{aligned}$$

Hence, it is shown that

$$\begin{aligned} \sum _{j=1}^{M}\Vert \alpha _j v_j^{\varepsilon } (\cdot ,t) \Vert _{L^2(\mathbb {R})}^2 \le C_{15} \Vert u_0\Vert _{L^2(\mathbb {R})}^2 \left( 1 +e^{ C_{13}(1+C_{14})T} \right) \quad \text {for all}\quad 0\le t\le T. \nonumber \\ \end{aligned}$$
(39)

Therefore, the components \(u^{\varepsilon }\) and \(v_j^{\varepsilon }\) of the solution are bounded in \(L^2(\mathbb {R})\) by the preceding estimate and (39).

Next, let us show the boundedness of \(u_x^{\varepsilon }\) and \(v_{j,x}^{\varepsilon }\) in \(L^2(\mathbb {R})\); the \(L^2\)-boundedness of \(u^{\varepsilon }\) and \(v_j^{\varepsilon }\) is used in the proof. Multiplying the principal equation of (\(\hbox {RD}_{\varepsilon }\)) by \(u_{xx}^{\varepsilon }\) and integrating it over \(\mathbb {R}\), similarly to the proof of Proposition 1, we see that

$$\begin{aligned} \begin{aligned} \frac{1}{2}\frac{d}{dt} \Vert u_x^{\varepsilon }\Vert _{L^2(\mathbb {R})}^2&\le -d_u\Vert u_{xx}^{\varepsilon }\Vert _{L^2(\mathbb {R})}^2 +g_4\left\| {\sum _{j=1}^{M}}\alpha _j v_j^{\varepsilon } \right\| _{L^2(\mathbb {R})} \Vert u_x^{\varepsilon } \Vert _{L^4}^2 \\&\quad +C_{11} \Vert u_x^{\varepsilon }\Vert _{L^2}^{(p-3)/(p-2)} \left\| {\sum _{j=1}^{M}}\alpha _j v_{j,x}^{\varepsilon } \right\| _{L^2(\mathbb {R})}^{(p-1)/(p-2)} \\&\quad +g_5 \Vert u_x^{\varepsilon } \Vert _{L^2(\mathbb {R})}^2 +g_7 \left( \Vert u_x^{\varepsilon } \Vert _{L^2(\mathbb {R})}^2 +\left\| {\sum _{j=1}^{M}}\alpha _j v_{j,x}^{\varepsilon } \right\| _{L^2(\mathbb {R})}^2 \right) . \end{aligned} \end{aligned}$$
(40)

Here, \(C_{11}\) is the same constant as in the inequality (34), and by (H4), \(C_{11}=0\) if \(2\le p<3\). By the Gagliardo–Nirenberg–Sobolev inequality, there is a positive constant \(C_S\) satisfying

$$\begin{aligned} \Vert u_x^{\varepsilon }\Vert _{L^4(\mathbb {R})} \le C_S \Vert u_x^{\varepsilon }\Vert _{L^2(\mathbb {R})}^{3/4}\Vert u_{xx}^{\varepsilon }\Vert _{L^2(\mathbb {R})}^{1/4}. \end{aligned}$$

Applying this to (40) yields

$$\begin{aligned}&\frac{1}{2}\frac{d}{dt} \Vert u_x^{\varepsilon }\Vert _{L^2(\mathbb {R})}^2\\&\le -d_u\Vert u_{xx}^{\varepsilon } \Vert _{L^2(\mathbb {R})}^2 +g_4\left\| {\sum _{j=1}^{M}}\alpha _j v_j^{\varepsilon } \right\| _{L^2(\mathbb {R})} C_S^2 \Vert u_x^{\varepsilon } \Vert _{L^2(\mathbb {R})}^{3/2}\Vert u_{xx}^{\varepsilon }\Vert _{L^2(\mathbb {R})}^{1/2} \\&\quad +C_{11} \Vert u_x^{\varepsilon }\Vert _{L^2}^{(p-3)/(p-2)} \left\| \sum _{j=1}^{M}\alpha _j v_{j,x}^{\varepsilon } \right\| _{L^2(\mathbb {R})}^{(p-1)/(p-2)} +g_5 \Vert u_x^{\varepsilon } \Vert _{L^2(\mathbb {R})}^2\\&\qquad +g_7 \left( \Vert u_x^{\varepsilon } \Vert _{L^2(\mathbb {R})}^2 +\left\| {\sum _{j=1}^{M}}\alpha _j v_{j,x}^{\varepsilon } \right\| _{L^2(\mathbb {R})}^2 \right) . \end{aligned}$$

By using the Young inequality, we have

$$\begin{aligned} \frac{1}{2}\frac{d}{dt} \Vert u_x^{\varepsilon }\Vert _{L^2(\mathbb {R})}^2\le & {} \frac{C_{16}}{2} \left[ \left\{ \Vert u_0\Vert _{L^2(\mathbb {R})}^{2/3} \left( 1+e^{2 C_{13}(1+C_{14}) T/3} \right) +1 \right\} \Vert u_x^{\varepsilon }\Vert _{L^2}^2 \right. \\&\left. + \sum _{j=1}^{M} \Vert \alpha _j v_{j,x}^{\varepsilon } \Vert _{L^2(\mathbb {R})}^2 \right] . \end{aligned}$$

Hence, we get the following inequality:

$$\begin{aligned} \frac{d}{dt} \Vert u_x^{\varepsilon }\Vert _{L^2(\mathbb {R})}^2 \le C_{17} \left\{ \Vert u_x^{\varepsilon }\Vert _{L^2}^2 + \sum _{j=1}^{M} \Vert \alpha _j v_{j,x}^{\varepsilon } \Vert _{L^2(\mathbb {R})}^2 \right\} . \end{aligned}$$
(41)

Also, multiplying the second equation of (\(\hbox {RD}_{\varepsilon }\)) by \(v_{j,xx}^{\varepsilon }\) and integrating over \(\mathbb {R}\), it follows from the Young inequality that

$$\begin{aligned} \frac{1}{2}\frac{d}{dt} \Vert v_{j,x}^{\varepsilon }\Vert _{L^2(\mathbb {R})}^2&=\frac{1}{\varepsilon } \left\{ -d_j \Vert v_{j,xx}^{\varepsilon }\Vert _{L^2(\mathbb {R})}^2 +\int _{\mathbb {R}} u_x^{\varepsilon } v_{j,x}^{\varepsilon } \, dx -\Vert v_{j,x}^{\varepsilon }\Vert _{L^2(\mathbb {R})}^2 \right\} \\&\le \frac{1}{\varepsilon } \left( \frac{1}{2}\Vert u_x^{\varepsilon }\Vert _{L^2(\mathbb {R})}^2 -\frac{1}{2}\Vert v_{j,x}^{\varepsilon }\Vert _{L^2(\mathbb {R})}^2 \right) . \end{aligned}$$

Hence, multiplying this by \(\alpha _j^2\) and summing over \(j=1,\ldots ,M\) yield the following:

$$\begin{aligned} \frac{d}{dt} \sum _{j=1}^{M} \Vert \alpha _jv_{j,x}^{\varepsilon } \Vert _{L^2(\mathbb {R})}^2 \le \frac{ 1 }{\varepsilon } \left( C_{18} \Vert u_x^{\varepsilon }\Vert _{L^2(\mathbb {R})}^2 - \sum _{j=1}^{M} \Vert \alpha _jv_{j,x}^{\varepsilon } \Vert _{L^2(\mathbb {R})}^2 \right) , \end{aligned}$$
(42)

where \(C_{18}=\sum _{j=1}^M \alpha _j^2\). Similarly to (36) and (37), (41) and (42) can be rewritten as follows:

$$\begin{aligned} \frac{dX}{dt} \le C_{17} (X + Y), \qquad \frac{dY}{dt} \le \frac{ 1 }{\varepsilon } ( C_{18}X -Y ), \end{aligned}$$

where \(X(t)=\Vert u_x^{\varepsilon }(\cdot ,t)\Vert _{L^2(\mathbb {R})}^2\) and \(Y(t)=\sum _{j=1}^{M}\Vert \alpha _jv_{j,x}^{\varepsilon }(\cdot ,t)\Vert _{L^2(\mathbb {R})}^2\). Therefore, it follows that for any \(0\le t\le T\),

$$\begin{aligned}&\Vert u_x^{\varepsilon }(\cdot ,t)\Vert _{L^2(\mathbb {R})}^2 \le \left( 1 +C_{17} \sum _{j=1}^{M} \alpha _j^2 \right) \Vert u_{0,x}\Vert _{L^2(\mathbb {R})}^2 \, e^{C_{17}(1+C_{18}) T} \quad \text { and } \\&\sum _{j=1}^{M}\Vert \alpha _j v_{j,x}^{\varepsilon }(\cdot ,t) \Vert _{L^2(\mathbb {R})}^2 \le C_{19} \Vert u_{0,x}\Vert _{L^2(\mathbb {R})}^2 \left( 1 +e^{C_{17}(1+C_{18}) T} \right) . \end{aligned}$$

\(\square \)

Proof of Proposition 3

Let \((u^{\varepsilon },v_j^{\varepsilon })\) be a solution of (\(\hbox {RD}_{\varepsilon }\)). For any \(\delta >0\) and \(k\in \mathbb {N}\), multiplying the first equation of (\(\hbox {RD}_{\varepsilon }\)) by \(u^{\varepsilon } /\sqrt{\delta +(u^{\varepsilon })^2}\) and integrating it with respect to \(x\in [-k,k]\), we have that

$$\begin{aligned} \int _{-k}^{k} u_t^{\varepsilon } \frac{u^{\varepsilon }}{\sqrt{\delta +(u^{\varepsilon })^2}} \,dx= & {} d_u \int _{-k}^{k} u_{xx}^{\varepsilon } \frac{u^{\varepsilon }}{\sqrt{\delta +(u^{\varepsilon })^2}} \,dx\nonumber \\&+\int _{-k}^{k} g\left( u^{\varepsilon }, \sum _{j=1}^{M} \alpha _j v_j^{\varepsilon } \right) \frac{u^{\varepsilon }}{\sqrt{\delta +(u^{\varepsilon })^2}} \, dx. \end{aligned}$$
(43)

For the left-hand side of (43), it holds that

$$\begin{aligned} \int _{-k}^{k} u_t^{\varepsilon } \frac{u^{\varepsilon }}{\sqrt{\delta +(u^{\varepsilon })^2}} \, dx =\frac{d}{dt} \int _{-k}^{k} \sqrt{\delta +(u^{\varepsilon })^2} \, dx \ \rightarrow \ \frac{d}{dt} \int _{-k}^{k} |u^{\varepsilon } | \, dx \quad \text { as } \delta \rightarrow 0 \end{aligned}$$

by using the dominated convergence theorem. Moreover, we calculate the first term of the right-hand side of (43) as follows:

$$\begin{aligned} d_u \int _{-k}^{k} u_{xx}^{\varepsilon } \frac{u^{\varepsilon }}{\sqrt{\delta +(u^{\varepsilon })^2}} \, dx&=-d_u \int _{-k}^{k} u_x^{\varepsilon } \left( \frac{u_x^{\varepsilon }}{\sqrt{\delta +(u^{\varepsilon })^2}} -\frac{(u^{\varepsilon })^2u_x^{\varepsilon }}{(\delta +(u^{\varepsilon })^2)^{3/2}} \right) \,dx \\&\quad +d_u\left[ u_x^{\varepsilon } \frac{u^{\varepsilon }}{\sqrt{\delta +(u^{\varepsilon })^2}} \right] _{x=-k}^{x=k} \\&\le -d_u\delta \int _{-k}^{k} \frac{(u_x^{\varepsilon })^2}{(\delta +(u^{\varepsilon })^2)^{3/2}} \, dx +d_u \left( |u_x^{\varepsilon } (k)| + |u_x^{\varepsilon }(-k)| \right) \\&\le d_u \left( |u_x^{\varepsilon }(k) | +|u_x^{\varepsilon }(-k) | \right) , \end{aligned}$$

where \(|u^{\varepsilon }|/\sqrt{\delta +(u^{\varepsilon })^2}\le 1\) is used to bound the boundary term.

By (43), as \(\delta \rightarrow 0\), it holds that for any \(k\in \mathbb {N}\)

$$\begin{aligned} \frac{d}{dt} \int _{-k}^{k} |u^{\varepsilon }| \, dx \le d_u \left( |u_x^{\varepsilon } (k)| +|u_x^{\varepsilon } (-k)| \right) +\int _{-k}^{k} g\left( u^{\varepsilon }, \sum _{j=1}^{M} \alpha _j v_j^{\varepsilon } \right) \frac{u^{\varepsilon }}{|u^{\varepsilon }|} \, dx. \end{aligned}$$

Similarly to the proof of Proposition 1,

$$\begin{aligned} \frac{d}{dt} \int _{-k}^{k} |u^{\varepsilon } | \, dx\le & {} d_u \left( |u_x^{\varepsilon } (k)| +|u_x^{\varepsilon } (-k)| \right) \nonumber \\&+C_{20}\left( \int _{-k}^{k} |u^{\varepsilon } | \, dx + \sum _{j=1}^{M} \int _{-k}^{k} |\alpha _j v_j^{\varepsilon } | \, dx \right) , \end{aligned}$$
(44)

where \(C_{20}\) is a positive constant depending on \(g_1\), \(g_2\), \(g_3\) and on the bound for \(\Vert u^{\varepsilon }\Vert _{BC(\mathbb {R})}\) from Proposition 2. By a similar argument for \(v_j^{\varepsilon }\), we estimate

$$\begin{aligned} \frac{d}{dt} \int _{-k}^{k} |v_j^{\varepsilon }| \, dx \le \frac{1}{\varepsilon } \left( d_j \left( |v_{j,x}^{\varepsilon }(k)|+|v_{j,x}^{\varepsilon }(-k)| \right) +\int _{-k}^{k} |u^{\varepsilon }| \, dx -\int _{-k}^{k} |v_j^{\varepsilon }| \, dx \right) . \end{aligned}$$

Hence, we obtain that

$$\begin{aligned} \begin{aligned}&\frac{d}{dt} \sum _{j=1}^{M} \int _{-k}^{k} |\alpha _j v_j^{\varepsilon } | \, dx \\&\quad \le \frac{1}{\varepsilon } \left\{ \sum _{j=1}^{M} d_j |\alpha _j| \left( |v_{j,x}^{\varepsilon } (k)|+|v_{j,x}^{\varepsilon } (-k)| \right) + \left( \sum _{j=1}^{M} |\alpha _j|\right) \int _{-k}^{k} |u^{\varepsilon }| \, dx \right. \\&\qquad \left. - \sum _{j=1}^{M}\int _{-k}^{k} |\alpha _j v_j^{\varepsilon } | \, dx \right\} . \end{aligned} \end{aligned}$$
(45)

Adding \(C_{20}\varepsilon \) times (45) to (44), we see that

$$\begin{aligned}&\frac{d}{dt} \int _{-k}^{k} \left( |u^{\varepsilon } |+C_{20}\varepsilon \sum _{j=1}^{M}|\alpha _j v_j^{\varepsilon } | \right) \, dx \le c_k(t) \nonumber \\&\quad + C_{21}\int _{-k}^{k} \left( |u^{\varepsilon } | +C_{20}\varepsilon \sum _{j=1}^{M} |\alpha _j v_j^{\varepsilon } | \right) \, dx, \end{aligned}$$
(46)

where \(c_k(t)=d_u ( |u_x^{\varepsilon } (k,t)|+|u_x^{\varepsilon } (-k,t)| ) +C_{20}\sum _{j=1}^{M} d_j |\alpha _j| ( |v_{j,x}^{\varepsilon } (k,t)|+|v_{j,x}^{\varepsilon } (-k,t)| )\) depends on t, and \(C_{21}= C_{20}(1 + \sum _{j=1}^M | \alpha _j | )\). Here, from Proposition 2, \(u_x^{\varepsilon } (\cdot ,t)\), \(v_{j,x}^{\varepsilon }(\cdot ,t) \in L^2(\mathbb {R})\) for each fixed \(0\le t\le T\). Hence, there is a sequence \(\{k_m\}_{m\in \mathbb {N}}\) with \(k_m\rightarrow \infty \) as \(m\rightarrow \infty \) such that \(u_x^{\varepsilon } (k_m,t)\), \(u_x^{\varepsilon } (-k_m,t)\), \(v_{j,x}^{\varepsilon }(k_m,t)\), \(v_{j,x}^{\varepsilon }(-k_m,t)\rightarrow 0\) as \(m\rightarrow \infty \), and therefore \(c_{k_m}(t)\rightarrow 0\) as \(m\rightarrow \infty \). Note that \(k_m\) depends on the time t. Taking the limit of (46) along \(k=k_m\) as \(m\rightarrow \infty \), we have the following inequality:

$$\begin{aligned}&\frac{d}{dt} \left( \Vert u^{\varepsilon } \Vert _{L^1(\mathbb {R})}+ C_{20} \varepsilon \sum _{j=1}^{M} \Vert \alpha _j v_j^{\varepsilon } \Vert _{L^1(\mathbb {R})} \right) \\&\quad \le C_{21} \left( \Vert u^{\varepsilon } \Vert _{L^1(\mathbb {R})}+ C_{20}\varepsilon \sum _{j=1}^{M} \Vert \alpha _j v_j^{\varepsilon } \Vert _{L^1(\mathbb {R})} \right) . \end{aligned}$$

Using the classical Gronwall Lemma, we have

$$\begin{aligned} \Vert u^{\varepsilon } \Vert _{L^1(\mathbb {R})} + C_{20}\varepsilon \sum _{j=1}^{M} \Vert \alpha _j v_j^{\varepsilon } \Vert _{L^1(\mathbb {R})} \le \left( 1+ C_{20} \varepsilon \sum _{j=1}^{M} |\alpha _j| \right) \Vert u_0 \Vert _{L^1(\mathbb {R})} \, e^{C_{21}T}. \end{aligned}$$
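(Here the Gronwall lemma is used in its elementary differential form: if \(z'(t)\le C z(t)\) on [0, T], then \(z(t)\le z(0)e^{Ct}\).)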

Therefore, we see that

$$\begin{aligned} \Vert u^{\varepsilon } \Vert _{L^1(\mathbb {R})}+ C_{20}\varepsilon \sum _{j=1}^{M} \Vert \alpha _j v_j^{\varepsilon } \Vert _{L^1(\mathbb {R})} \le C_{22}, \end{aligned}$$

and it is shown that \(\sup _{0\le t\le T}\Vert u^{\varepsilon }(\cdot ,t)\Vert _{L^1(\mathbb {R})}\) is bounded. Furthermore, since (45) holds and \(c_{k_m}(t)\rightarrow 0\) as \(m\rightarrow \infty \), we obtain

$$\begin{aligned} \frac{d}{dt} \sum _{j=1}^{M} \int _{\mathbb {R}} |\alpha _j v_j^{\varepsilon }| \, dx \le \frac{1}{\varepsilon } \left( C_{23} -\sum _{j=1}^{M} \int _{\mathbb {R}} |\alpha _j v_j^{\varepsilon }| \, dx \right) , \end{aligned}$$

where \(C_{23}= C_{22} \sum _{j=1}^{M} |\alpha _j|\). Hence, noting that \(\Vert v_j^{\varepsilon }(\cdot ,0)\Vert _{L^1(\mathbb {R})}\le \Vert u_0\Vert _{L^1(\mathbb {R})}\), we have

$$\begin{aligned} \sum _{j=1}^{M} \int _{\mathbb {R}} |\alpha _j v_j^{\varepsilon }| \, dx\le & {} \max \left\{ C_{23}, \sum _{j=1}^{M} |\alpha _j| \Vert v_j^{\varepsilon }(\cdot ,0)\Vert _{L^1(\mathbb {R})} \right\} \\\le & {} \max \left\{ C_{23}, \sum _{j=1}^{M} |\alpha _j| \Vert u_0\Vert _{L^1(\mathbb {R})} \right\} . \end{aligned}$$

Consequently, \(\sup _{0\le t\le T} \sum _{j=1}^{M} \Vert \alpha _j v_j^{\varepsilon }(\cdot ,t) \Vert _{L^1(\mathbb {R})}\) is bounded, and Proposition 3 is proved. \(\square \)

C: Proof of Lemma 1

Proof

Put \(V_j:=v_j^{\varepsilon }-k^{d_j}*u^{\varepsilon }\). Note that \(k^{d_j}*u^{\varepsilon }\) is a solution of \(d_j(k^{d_j}*u^{\varepsilon })_{xx}-k^{d_j}*u^{\varepsilon } +u^{\varepsilon }=0\). Since \(u^{\varepsilon }\) is the first component of the solution to (\(\hbox {RD}_{\varepsilon }\)), we can calculate as follows:

$$\begin{aligned} k^{d_j}*u_t^{\varepsilon } =\frac{d_u}{d_j} \left( k^{d_j}*u^{\varepsilon }-u^{\varepsilon } \right) +k^{d_j}* \left( g\left( u^{\varepsilon }, \sum _{j=1}^{M}\alpha _jv_j^{\varepsilon } \right) \right) . \end{aligned}$$

Recalling that \(\Vert k^{d_j}\Vert _{L^1(\mathbb {R})}=1\) and that both \(\Vert u^{\varepsilon }\Vert _{L^2(\mathbb {R})}\) and \(\Vert v_j^{\varepsilon }\Vert _{L^2(\mathbb {R})}\) are bounded uniformly in \(\varepsilon \) by Proposition 2, we see that the right-hand side of the previous identity is bounded in \(L^2(\mathbb {R})\). Hence, there exists a positive constant \(C_{24}\) independent of \(\varepsilon \) such that

$$\begin{aligned} \Vert k^{d_j}*u_t^{\varepsilon }\Vert _{L^2(\mathbb {R})}\le C_{24}. \end{aligned}$$

Recalling that \(\varepsilon v_{j,t}^{\varepsilon }=d_jv_{j,xx}^{\varepsilon }+u^{\varepsilon }-v_j^{\varepsilon }\), the equation for \(V_j\) becomes

$$\begin{aligned} V_{j,t} =\frac{1}{\varepsilon }\left( d_j V_{j,xx} - V_j \right) -k^{d_j}*u_t^{\varepsilon }. \end{aligned}$$

Multiplying this equation by \(V_j\) and integrating it over \(\mathbb {R}\) yield

$$\begin{aligned} \frac{d}{dt}\frac{1}{2} \Vert V_j\Vert _{L^2(\mathbb {R})}^2&\le -\frac{d_j}{\varepsilon }\Vert V_{j,x}\Vert _{L^2(\mathbb {R})}^2 -\frac{1}{\varepsilon } \Vert V_j\Vert _{L^2(\mathbb {R})}^2 -\int _{\mathbb {R}} \left( k^{d_j}*u_t^{\varepsilon } \right) V_j \, dx \\&\le -\frac{1}{2\varepsilon } \Vert V_j\Vert _{L^2(\mathbb {R})}^2 +\frac{\varepsilon }{2} \left\| k^{d_j}*u_t^{\varepsilon } \right\| _{L^2(\mathbb {R})}^2 \le -\frac{1}{2\varepsilon } \Vert V_j\Vert _{L^2(\mathbb {R})}^2 +\frac{\varepsilon }{2}C^2_{24}. \end{aligned}$$

Since

$$\begin{aligned} \frac{d}{dt} \Vert V_j\Vert _{L^2(\mathbb {R})}^2 \le -\frac{1}{\varepsilon } \Vert V_j\Vert _{L^2(\mathbb {R})}^2 +\varepsilon C^2_{24}, \end{aligned}$$

we get \(\Vert V_j\Vert _{L^2(\mathbb {R})}^2\le \max \{\Vert V_j(\cdot ,0)\Vert _{L^2(\mathbb {R})}^2,\, (C_{24}\varepsilon )^2\}\). Noting that \(V_j(\cdot ,0)=0\), we obtain that

$$\begin{aligned} \Vert V_j\Vert _{L^2(\mathbb {R})}\le C_{24}\varepsilon . \end{aligned}$$

Therefore, (9) is proved. \(\square \)
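As a side remark, the identity \(d_j(k^{d_j}*u)_{xx}-k^{d_j}*u+u=0\) used above is easy to check numerically. The sketch below is our illustration, not part of the paper: it assumes the standard facts that \(k^d\) is the Green's function of \(1-d\,\partial _x^2\) with Fourier transform \(1/(1+d\xi ^2)\), and it approximates \(\mathbb {R}\) by a large periodic grid.

```python
# Numerical sanity check (our illustration) that w = k^d * u satisfies
# d w_xx - w + u = 0, computing the convolution spectrally via the
# Fourier symbol 1/(1 + d xi^2) of the kernel k^d.
import numpy as np

L, N, d = 40.0, 2048, 0.5
x = np.linspace(-L/2, L/2, N, endpoint=False)
xi = 2 * np.pi * np.fft.fftfreq(N, d=L/N)            # angular frequencies

u = np.exp(-x**2)                                    # a sample profile u(x)
w = np.fft.ifft(np.fft.fft(u) / (1 + d * xi**2)).real    # w = k^d * u

w_xx = np.fft.ifft(-(xi**2) * np.fft.fft(w)).real
print(np.abs(d * w_xx - w + u).max())                # ~1e-13: identity holds
```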

D: Polynomial approximation

Proof of Proposition 4

Let \(\phi \in BC([0,\infty ])\). We change the variable x into y as follows:

$$\begin{aligned} y=e^{-x}, \qquad \psi (y):=\phi (x). \end{aligned}$$
(47)

Since y is decreasing in x and ranges over [0, 1] as x ranges over \([0,\infty ]\), the change of variables is invertible, with inverse \(x=-\log y\). Also, since \(\phi (x)\) has a limit at infinity by \(\phi \in BC([0,+\infty ])\), we have \(\psi \in C([0,1])\). Hence, applying the Stone–Weierstrass theorem to \(\psi \), for any \(\varepsilon >0\) there exists a polynomial \(p(y)=\sum _{j=0}^M\beta _j {y}^j\) such that

$$\begin{aligned} \left| \psi (y) - \sum _{j=0}^{M} \beta _j {y}^j\right| < \varepsilon \quad \text { for all } y\in [0,1]. \end{aligned}$$

Substituting \(y=e^{-x}\) into the previous inequality, it follows that for all \(x\in [0,+\infty ]\)

$$\begin{aligned} \left| \phi (x) - \sum _{j=0}^{M} \beta _j e^{-jx}\right| < \varepsilon \end{aligned}$$

due to (47). \(\square \)
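To see Proposition 4 in action, the following sketch is our illustration (the test function \(\phi \), the degree M, and the least-squares fit are our choices, not the paper's): it fits a polynomial in \(y=e^{-x}\) and evaluates the resulting combination \(\sum _{j=0}^{M}\beta _j e^{-jx}\).

```python
# Illustrative sketch (our example, not the paper's) of Proposition 4:
# approximate phi on [0, infinity) by sum_j beta_j e^{-jx} via a polynomial
# fit in the variable y = e^{-x}.
import numpy as np

phi = lambda x: 1.0 / (1.0 + x)       # bounded, continuous, limit 0 at infinity
y = np.linspace(1e-6, 1.0, 2000)      # samples of y = e^{-x} in (0, 1]
psi = phi(-np.log(y))                 # psi(y) = phi(x)

M = 8
beta = np.polynomial.polynomial.polyfit(y, psi, M)   # psi(y) ~ sum_j beta_j y^j

x = np.linspace(0.0, 20.0, 2000)
approx = sum(b * np.exp(-j * x) for j, b in enumerate(beta))
print(np.abs(phi(x) - approx).max())  # sup-norm error of the approximation
```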

E: Examples of calculated parameters

We provide examples of the values of \(\alpha _1,\ldots , \alpha _5\) calculated explicitly by using (8). We consider the cases of \(J_1\) and \(J_2\).

In the case of \(J_1(x)\), \(\alpha _1,\ldots ,\alpha _5\) are calculated by

$$\begin{aligned} \begin{array}{l l l} \alpha _1&{}=&{} 30 \left( -140 e \text {erfc}(1)-504 e^4 \text {erfc}(2)+15 e^{\frac{1}{4}} \text {erfc}\left( \frac{1}{2}\right) +420 e^{\frac{9}{4}} \text {erfc}\left( \frac{3}{2}\right) \right. \\ &{}&{}\left. +210 e^{\frac{25}{4}} \text {erfc}\left( \frac{5}{2}\right) \right) \approx -0.3166,\\ \alpha _2 &{}=&{}-210 \left( -105 e \text {erfc}(1)-420 e^4 \text {erfc}(2)+10e^{\frac{1}{4}} \text {erfc}\left( \frac{1}{2}\right) +336 e^{\frac{9}{4}} \text {erfc}\left( \frac{3}{2}\right) \right. \\ &{}&{}\left. +180 e^{\frac{25}{4}} \text {erfc}\left( \frac{5}{2}\right) \right) \approx 1.619,\\ \alpha _3 &{}=&{}280 \left( -168 e \text {erfc}(1)-720 e^4 \text {erfc}(2)+15 e^{\frac{1}{4}} \text {erfc}\left( \frac{1}{2}\right) +560 e^{\frac{9}{4}} \text {erfc}\left( \frac{3}{2}\right) \right. \\ &{}&{}\left. +315 e^{\frac{25}{4}} \text {erfc}\left( \frac{5}{2}\right) \right) \approx 2.314, \\ \alpha _4 &{}=&{}-630 \left( -70 e \text {erfc}(1)-315 e^4 \text {erfc}(2)+6 e^{\frac{1}{4}} \text {erfc}\left( \frac{1}{2}\right) +240 e^{\frac{9}{4}} \text {erfc}\left( \frac{3}{2}\right) \right. \\ &{}&{}\left. +140e^{\frac{25}{4}} \text {erfc}\left( \frac{5}{2}\right) \right) \approx -4.438, \\ \alpha _5 &{}=&{}252 \left( -60 e \text {erfc}(1)-280 e^4 \text {erfc}(2)+5 e^{\frac{1}{4}} \text {erfc}\left( \frac{1}{2}\right) +210 e^{\frac{9}{4}} \text {erfc}\left( \frac{3}{2}\right) \right. \\ &{}&{}\left. +126 e^{\frac{25}{4}} \text {erfc}\left( \frac{5}{2}\right) \right) \approx 1.811, \\ \end{array} \end{aligned}$$

where

$$\begin{aligned} \text { erfc}(x):=1-\frac{2}{\sqrt{\pi }}\int _{0}^{x}e^{-t^2}dt. \end{aligned}$$
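These closed-form values can be checked numerically. The snippet below is our transcription of the five expressions, grouping the common factors \(t_k=e^{k^2/4}\,\text {erfc}(k/2)\), \(k=1,\ldots ,5\), and using SciPy's erfc, which agrees with the definition above.

```python
# Numerical check (our addition) of alpha_1,...,alpha_5 for J_1, transcribing
# the closed-form expressions above with t_k = e^{k^2/4} erfc(k/2).
import numpy as np
from scipy.special import erfc

k = np.arange(1, 6)
t = np.exp(k**2 / 4.0) * erfc(k / 2.0)

prefactors = np.array([30, -210, 280, -630, 252])
coefficients = np.array([
    [15, -140, 420, -504, 210],   # alpha_1
    [10, -105, 336, -420, 180],   # alpha_2
    [15, -168, 560, -720, 315],   # alpha_3
    [ 6,  -70, 240, -315, 140],   # alpha_4
    [ 5,  -60, 210, -280, 126],   # alpha_5
])
alpha = prefactors * (coefficients @ t)
print(alpha)   # approximately [-0.3166, 1.619, 2.314, -4.438, 1.811]
```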

For the case of \(J_2(x)\), \(\alpha _1,\ldots ,\alpha _5\) are given by

$$\begin{aligned} \alpha _1&=\dfrac{3772961672081048906951}{251195073357821392800}\approx 15.02, \\ \alpha _2&=-\dfrac{1058305332396960720827}{17942505239844385200}\approx -58.98,\\ \alpha _3&=\dfrac{15614015192211958306819}{161482547158599466800}\approx 96.69, \\ \alpha _4&=-\dfrac{2167590862829621235761}{29904175399740642000}\approx -72.48, \\ \alpha _5&=\dfrac{4571014947001979131879}{209329227798184494000}\approx 21.84. \end{aligned}$$
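The rational values above can likewise be checked against the quoted decimals, e.g. with Python's exact rational arithmetic (our addition):

```python
# Check (our addition) that the exact rationals above match the quoted decimals.
from fractions import Fraction

values = [
    Fraction(3772961672081048906951, 251195073357821392800),
    Fraction(-1058305332396960720827, 17942505239844385200),
    Fraction(15614015192211958306819, 161482547158599466800),
    Fraction(-2167590862829621235761, 29904175399740642000),
    Fraction(4571014947001979131879, 209329227798184494000),
]
print([round(float(v), 2) for v in values])  # [15.02, -58.98, 96.69, -72.48, 21.84]
```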

About this article

Cite this article

Ninomiya, H., Tanaka, Y. & Yamamoto, H. Reaction–diffusion approximation of nonlocal interactions using Jacobi polynomials. Japan J. Indust. Appl. Math. 35, 613–651 (2018). https://doi.org/10.1007/s13160-017-0299-z
