
1 Introduction

The goal of this paper is to present a rigorous derivation of a macroscopic traffic flow model by homogenization of a follow-the-leader model, see [8, 10]. The idea is to rescale the microscopic model, which describes the dynamics of each vehicle individually, in order to obtain a macroscopic model which describes the dynamics of the density of vehicles. Several studies have been devoted to the connection between microscopic and macroscopic traffic flow models. This type of connection is important since it allows us to deduce macroscopic models rigorously and without using strong assumptions. We refer for example to [1,2,3] where the authors rescaled the empirical measure and obtained a scalar conservation law (the LWR (Lighthill-Whitham-Richards) model). More recently, another kind of macroscopic model has appeared. These models rely on the Moskowitz function and lead to a Hamilton-Jacobi equation. This is the setting of our work, which is a generalization of [6]. Indeed, the authors in [6] considered a single road with one velocity throughout and a local perturbation at the origin, while we consider two different velocities and a transition zone which can be seen as a local perturbation that slows down the vehicles. At the macroscopic scale, we get a Hamilton-Jacobi equation with a junction condition at zero and an effective flux limiter. To obtain our homogenization result, we construct correctors. The main new technical difficulty comes from the construction of these correctors; in particular, the gradient estimates are more delicate than those in [6] because the gradients on the left and on the right of the origin may differ.

2 The Microscopic Model

In this paper, we consider a “follow the leader” model of the following form

$$ \dot{U}_j(t)= V_{1}(U_{j+1}(t) - U_j(t))\varphi \left( U_{j}(t)\right) + V_{2}(U_{j+1}(t) - U_j(t))\left( 1-\varphi \left( U_{j}(t)\right) \right) , $$

where \(U_{j}\) denotes the position of the j-th vehicle and \(\dot{U}_{j}\) its velocity. The function \(\varphi \) simulates the presence of a local perturbation around the origin which allows us to pass from the optimal velocity function \(V_1\) (on the left of the origin) to \(V_2\) (on the right). We make the following assumptions on \(V_{1}\), \(V_{2}\) and \(\varphi \).

Assumption (A).

  • (A1) \(V_{1},V_{2}:\mathbb {R}\rightarrow \mathbb {R}^+\) are Lipschitz continuous, non-negative and non-decreasing.

  • (A2) For \(i=1,2\), there exists a \(h^{i}_0\in (0,+\infty )\) such that

    $$ V_{i}(h)=0 \text { for all } h\le h^i_0. $$
  • (A3) For \(i=1,2\), there exists a \(h^{i}_{max}\in (0,+\infty )\) such that

    $$ V_{i}(h)=V_{imax} \text { for all }h\ge h^{i}_{max}. $$
  • (A4) For \(i=1,2\), there exists a real \(p^i_0\in [-1/h^i_0,0)\) such that the function \(p\mapsto pV_{i}(-1/p)\) is decreasing on \([-1/h_0^i,p^i_0)\) and increasing on \([p^i_0,0)\).

  • (A5) The function \(\varphi :\mathbb {R}\rightarrow [0,1]\) is Lipschitz continuous and

    $$\begin{aligned} \varphi (x)= {\left\{ \begin{array}{ll} 1 &{} \text {if} \, x\le -r \\ 0 &{} \text {if} \, x>r. \end{array}\right. } \end{aligned}$$
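The dynamics above can be simulated directly. The following sketch uses hypothetical piecewise-linear velocities \(V_1\), \(V_2\) and a linear transition \(\varphi \) (illustrative choices satisfying (A1)-(A3) and (A5), not taken from the paper), an explicit Euler scheme, and an ad hoc rule for the leading vehicle.

```python
# Hypothetical optimal velocities satisfying (A1)-(A3): V(h) = 0 for
# h <= h0, linear on [h0, hmax], constant vmax beyond (illustration only).
def make_V(h0, hmax, vmax):
    def V(h):
        return max(0.0, min(vmax, vmax * (h - h0) / (hmax - h0)))
    return V

V1 = make_V(1.0, 2.0, 2.0)   # optimal velocity on the left of the origin
V2 = make_V(1.0, 2.0, 1.0)   # slower optimal velocity on the right

def phi(x, r=1.0):
    # Lipschitz transition as in (A5): 1 for x <= -r, 0 for x >= r.
    return min(1.0, max(0.0, (r - x) / (2.0 * r)))

def euler_step(U, dt):
    # One explicit Euler step of the follow-the-leader dynamics
    # U'_j = V1(U_{j+1}-U_j) phi(U_j) + V2(U_{j+1}-U_j) (1 - phi(U_j)).
    Unew = U[:]
    for j in range(len(U) - 1):
        h = U[j + 1] - U[j]
        Unew[j] = U[j] + dt * (V1(h) * phi(U[j]) + V2(h) * (1.0 - phi(U[j])))
    Unew[-1] = U[-1] + dt * 1.0   # hypothetical rule: the leader drives at speed 1
    return Unew

U = [-10.0 + 2.0 * j for j in range(10)]   # initial headway 2
for _ in range(2000):                      # simulate up to time t = 20
    U = euler_step(U, dt=0.01)

# The ordering of the vehicles is preserved (no collisions): since
# V_i vanishes for headways below h0, a follower stops before reaching
# the vehicle ahead.
assert all(U[j] < U[j + 1] for j in range(len(U) - 1))
```

Under these illustrative choices, vehicles crossing the transition zone slow down from the \(V_1\) regime to the \(V_2\) regime, as described above.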

3 The Homogenization Result

We introduce the “cumulative distribution function” of the vehicles:

$$\rho (t,y) = -\left( \sum _{i\ge 0 } H\left( y -U_i(t)\right) + \sum _{i<0}\left( -1 + H\left( y-U_i(t)\right) \right) \right) $$

and we make the following rescaling

$$\rho ^{\varepsilon }(t,y)=\varepsilon \rho \left( t/\varepsilon ,y/\varepsilon \right) \!.$$
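To illustrate the definition, the following sketch evaluates \(\rho \) at a fixed time from finitely many vehicle positions (a hypothetical truncation of the two infinite sums, for illustration only); H denotes the Heaviside function. The result is a nonincreasing step function that drops by 1 across each vehicle, which is what the rescaling \(\rho ^{\varepsilon }\) turns into a density-type quantity.

```python
def H(x):
    # Heaviside function
    return 1.0 if x >= 0.0 else 0.0

def rho(y, U_nonneg, U_neg):
    # rho(t, .) evaluated from the vehicle positions at a fixed time,
    # truncated to finitely many indices (hypothetical truncation):
    # rho = -( sum_{i>=0} H(y - U_i) + sum_{i<0} (-1 + H(y - U_i)) ).
    s = sum(H(y - u) for u in U_nonneg)
    s += sum(-1.0 + H(y - u) for u in U_neg)
    return -s

U_nonneg = [0.0, 2.0, 4.0]   # positions U_0, U_1, U_2
U_neg = [-2.0, -4.0]         # positions U_{-1}, U_{-2}

# rho drops by 1 across each vehicle, so rho(a) - rho(b) counts the
# vehicles lying in (a, b].
assert rho(-5.0, U_nonneg, U_neg) - rho(5.0, U_nonneg, U_neg) == 5.0
assert rho(-3.0, U_nonneg, U_neg) == 1.0
assert rho(0.0, U_nonneg, U_neg) == -1.0

# The rescaled function of the text would then be
# rho_eps(t, y) = eps * rho(t / eps, y / eps).
```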

The function \(\rho ^{\varepsilon }\) is a discontinuous solution of the following equation: for \((t,x)\in (0,+\infty )\times \mathbb {R}\),

$$\begin{aligned} \left\{ \begin{array}{l l} u_t^\varepsilon + \left( M_{1}^\varepsilon \left[ \dfrac{u^\varepsilon (t,\cdot )}{\varepsilon } \right] (x)\varphi \left( \dfrac{x}{\varepsilon } \right) +M_{2}^\varepsilon \left[ \dfrac{u^\varepsilon (t,\cdot )}{\varepsilon } \right] (x)\left( 1-\varphi \left( \dfrac{x}{\varepsilon } \right) \right) \right) \cdot |u_x^\varepsilon | =0 \\ \\ u^\varepsilon (0,x)= u_0(x) \end{array} \right. \end{aligned}$$
(3.1)

where the non-local operators \(M_{1}^\varepsilon \) and \(M_{2}^\varepsilon \) are defined by

$$\begin{aligned} M_{i}^{\varepsilon } [U](x)=\int _{-\infty }^{+\infty } J_{i}(z) E\left( U(x+\varepsilon z) - U(x) \right) dz - \dfrac{3}{2}V_{imax} \end{aligned}$$
(3.2)

with

$$\begin{aligned} E(z)= \left\{ \begin{array}{l} 0 \quad \text{ if } z \ge 0,\\ 1/2\quad \text{ if } -1 \le z< 0,\\ 3/2 \quad \text{ if } z < -1, \end{array} \right. \quad J_{1}=V_{1}' \, \text {and} \, J_{2}=V'_{2} \, \hbox {on} \, \mathbb {R}. \end{aligned}$$
(3.3)
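On linear profiles the non-local operators have a simple closed form, which is used later in the construction of correctors: for \(U(y)=py\) with \(-1/h^{i}_{0}\le p<0\), one gets \(M_{i}[U]=-V_{i}(-1/p)\). The sketch below checks this numerically for a hypothetical piecewise-linear \(V\) (an illustrative choice, not from the paper), approximating the integral in (3.2) (with \(\varepsilon =1\)) by a midpoint Riemann sum.

```python
# E from (3.3).
def E(z):
    if z >= 0.0:
        return 0.0
    if z >= -1.0:
        return 0.5
    return 1.5

# Hypothetical piecewise-linear V with h0 = 1, hmax = 2, vmax = 2, so
# that J = V' equals 2 on (1, 2) and 0 elsewhere (illustration only).
h0, hmax, vmax = 1.0, 2.0, 2.0

def V(h):
    return max(0.0, min(vmax, vmax * (h - h0) / (hmax - h0)))

def M_linear(p, n=100000):
    # M[U] for the linear profile U(y) = p*y: the integral of
    # J(z) E(p z) dz over the support (h0, hmax) of J, approximated by
    # a midpoint Riemann sum, minus (3/2) vmax.
    dz = (hmax - h0) / n
    slope = vmax / (hmax - h0)   # value of J on (h0, hmax)
    integral = sum(slope * E(p * (h0 + (k + 0.5) * dz)) * dz for k in range(n))
    return integral - 1.5 * vmax

# On linear profiles with -1/h0 <= p < 0, the operator reduces to minus
# the optimal velocity: M[p x] = -V(-1/p).
for p in (-1.0, -2.0 / 3.0, -0.5):
    assert abs(M_linear(p) - (-V(-1.0 / p))) < 1e-3
```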

We also assume that the initial condition satisfies the following assumption.

(A0) (Gradient Bound). Let \(k_{0}=\max \left( k_{0}^{1},k_{0}^{2}\right) \) with \(k^{i}_{0}=1/h^{i}_{0}\). The function \(u_{0}\) is Lipschitz continuous and satisfies

$$\begin{aligned} -k_{0}\le (u_{0})_{x}\le 0. \end{aligned}$$

We have the following theorem (see [6]).

Theorem 1

Assume (A0) and (A). Then, there exists a unique viscosity solution \(u^{\varepsilon }\) of (3.1). Moreover, the function \(u^{\varepsilon }\) is continuous and there exists a constant K such that

$$\begin{aligned} u_0(x) \le u^{\varepsilon }(t,x) \le u_0(x) +K t. \end{aligned}$$

We now introduce the macroscopic model, which is a Hamilton-Jacobi equation on a junction. The Hamiltonians \(\overline{H}_{1}\) and \(\overline{H}_{2}\) are called effective Hamiltonians (see Proposition 2.9 in [6]) and are defined as follows: for \(i=1,2\)

$$\begin{aligned} \overline{H}_{i}(p)= \left\{ \begin{array}{l l} -p - k^{i}_0 &{} \text{ for } p<-k^{i}_0,\\ -V_{i}\left( \dfrac{-1}{p}\right) \cdot |p|&{} \text{ for } -k^{i}_0\le p \le 0,\\ p &{} \text{ for } p>0, \end{array} \right. \end{aligned}$$
(3.4)

with

$$\begin{aligned} H^{i}_{0}=\min \limits _{p\in \mathbb {R}}\overline{H}_{i}(p)\quad \text {and} \, H_{0}=\max \left( H_{0}^{1},H_{0}^{2}\right) \!. \end{aligned}$$
(3.5)
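As a sanity check, \(\overline{H}_{i}\) and \(H^{i}_{0}\) can be evaluated numerically. The sketch below uses the same hypothetical piecewise-linear \(V\) as before (with \(h_{0}=1\), so \(k_{0}=1\), \(h_{max}=2\), \(V_{max}=2\); an illustration, not from the paper). For this choice the minimum is attained at \(p_{0}=-1/h_{max}\), with value \(-V_{max}/h_{max}\), consistently with assumption (A4).

```python
# Effective Hamiltonian (3.4) for a hypothetical piecewise-linear V
# with h0 = 1 (hence k0 = 1/h0 = 1), hmax = 2, vmax = 2 (illustration).
h0, hmax, vmax = 1.0, 2.0, 2.0
k0 = 1.0 / h0

def V(h):
    return max(0.0, min(vmax, vmax * (h - h0) / (hmax - h0)))

def H_bar(p):
    if p < -k0:
        return -p - k0
    if p < 0.0:
        return -V(-1.0 / p) * abs(p)
    return p   # in particular H_bar(0) = 0

# H0 = min_p H_bar(p), located by a brute-force scan of [-2, 2].
ps = [-2.0 + 0.001 * k for k in range(4001)]
H0 = min(H_bar(p) for p in ps)
p_star = min(ps, key=H_bar)

assert abs(H0 - (-vmax / hmax)) < 1e-3     # here H0 = -1 ...
assert abs(p_star - (-1.0 / hmax)) < 1e-3  # ... attained at p0 = -1/hmax
```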

Now we can define the limit problem. We refer to [9] for more details about existence and uniqueness of solution for this type of equation.

$$\begin{aligned} {\left\{ \begin{array}{ll} u^{0}_{t}+\overline{H}_{1}(u_x^{0})=0 &{} \text {for}\quad (t,x)\in (0,+\infty )\times (-\infty ,0)\\ u^{0}_{t}+\overline{H}_{2}(u_x^{0})=0 &{} \text {for}\quad (t,x)\in (0,+\infty )\times (0,+\infty )\\ u^{0}_{t}+F_{\overline{A}}(u_{x}^{0}(t,0^{-}),u_{x}^{0}(t,0^{+}))=0 &{} \text {for}\quad (t,x)\in (0,+\infty )\times \{0\} \\ u^{0}(0,x)=u_{0}(x) &{} \text {for}\quad x\in \mathbb {R}. \end{array}\right. } \end{aligned}$$
(3.6)

where \(\overline{A}\) has to be determined and \(F_{\overline{A}}\) is defined by

$$\begin{aligned} F_{\overline{A}}(p_{-},p_{+})=\max \left( \overline{A},\overline{H}_{1}^{+}(p_{-}),\overline{H}_{2}^{-}(p_{+})\right) \!; \end{aligned}$$

\(\overline{H}_{1}^{+}\) and \(\overline{H}_{2}^{-}\) represent respectively the increasing and the decreasing part of \(\overline{H}_{1}\) and \(\overline{H}_{2}\). The following theorem is our main result in this paper.

Theorem 2

There exists \(\overline{A} \in \left[ H^{1}_0,0\right] \) such that the function \(u^\varepsilon \) defined in Theorem 1 converges locally uniformly towards the unique solution \(u^0\) of (3.6).

Remark 1

Formally, if we differentiate (3.6), we obtain a scalar conservation law with discontinuous flux, on which the literature is very rich, see for example [4]. However, the passage from microscopic to macroscopic models is more difficult in this setting, in particular on networks. By contrast, the approach proposed in this paper can be extended to models posed on networks (see [5]).

4 Correctors for the Junction

The key ingredient to prove the convergence result is to construct correctors for the junction. Given \(\overline{A} \in \mathbb {R}\), if \(\overline{A}>H_{0}\), we introduce two real numbers \(\overline{p}_{1}, \overline{p}_{2}\in \mathbb {R}\) such that

$$\begin{aligned} \overline{H}_{2}\left( \overline{p}_{2} \right) = \overline{H}_{2}^+ \left( \overline{p}_{2} \right) = \overline{H}_{1}\left( \overline{p}_{1}\right) = \overline{H}_{1}^-\left( \overline{p}_{1} \right) = \overline{A}. \end{aligned}$$
(4.1)

If \(\overline{A}\le H_{0}\), we then define \(\overline{p}_{1}, \overline{p}_{2}\in \mathbb {R}\) as the two real numbers satisfying

$$\begin{aligned} \overline{H}_{2}\left( \overline{p}_{2} \right) = \overline{H}_{2}^+ \left( \overline{p}_{2} \right) = \overline{H}_{1}\left( \overline{p}_{1}\right) = \overline{H}_{1}^-\left( \overline{p}_{1} \right) = H_{0}. \end{aligned}$$
(4.2)

Due to the form of \(\overline{H}_{1}\) and \(\overline{H}_{2}\), these two real numbers exist and are unique. We now consider the following problem: find \(\lambda \in \mathbb {R}\) such that there exists a solution w of the following global-in-time Hamilton-Jacobi equation

$$\begin{aligned} \left( M_{1}[w](x)\cdot \varphi (x)+M_{2}[w](x)\cdot \left( 1-\varphi (x)\right) \right) \cdot \left| w_{x}\right| =\lambda \quad \text {for} \, x\in \mathbb {R} \end{aligned}$$
(4.3)

with

$$\begin{aligned} M_{i}[U](x)=\int _{-\infty }^{+\infty } J_{i}(z) E\left( U(x+z) - U(x) \right) dz - \dfrac{3}{2}V_{imax} \end{aligned}$$
(4.4)

Theorem 3 (Existence of a global corrector for the junction)

Assume (A).

  1. (i)

    (General properties) There exists a constant \(\overline{A}\in [H^{1}_0,0]\) such that there exist a solution w of (4.3) with \(\lambda =\overline{A}\), a constant \(C>0\) and a globally Lipschitz continuous function m such that for all \(x\in \mathbb {R}\),

    $$\begin{aligned} |w(x)- m(x)| \le C. \end{aligned}$$
    (4.5)
  2. (ii)

    (Bound from below at infinity) If \(\overline{A}>H^{1}_0\), then there exists \(\gamma _0>0\) such that for every \(\gamma \in (0,\gamma _0)\), we have

    $$\begin{aligned} \left\{ \begin{array}{l l} w(x - h) - w(x) \ge ( -\overline{p}_{1} - \gamma )h - C &{} \text{ for } x\le -r \text{ and } h\ge 0,\\ w(x+h) - w(x) \ge (\overline{p}_{2} - \gamma )h - C &{} \text{ for } x\ge r \text{ and } h\ge 0.\\ \end{array} \right. \end{aligned}$$
    (4.6)
  3. (iii)

    (Rescaling w) For \(\varepsilon >0\), we set

    $$\begin{aligned} w^\varepsilon (x)= \varepsilon w\left( \dfrac{x}{\varepsilon }\right) , \end{aligned}$$

then (along a subsequence \(\varepsilon _n \rightarrow 0\)) we have that \(w^\varepsilon \) converges locally uniformly towards a function \(W=W(x)\) which satisfies

$$\begin{aligned} \left\{ \begin{array}{l l} |W(x) - W(y)| \le C|x-y| &{} \text{ for } \text{ all } x,y\in \mathbb {R},\\ \overline{H}_{1}( W_x)= \overline{A} &{} \text{ for } \text{ all } x<0,\\ \overline{H}_{2}( W_x)= \overline{A} &{} \text{ for } \text{ all } x>0. \end{array} \right. \end{aligned}$$
(4.7)

In particular, we have (with \(W(0)=0\))

$$\begin{aligned} W(x)=\overline{p}_{1} x 1_{\{x<0\}} + \overline{p}_{2} x 1_{\{x>0 \}} . \end{aligned}$$
(4.8)

5 Proof of Theorem 3

This section contains the proof of Theorem 3. To this end, we construct correctors on truncated domains and then pass to the limit as the size of the domain goes to infinity. For \(l\in (r,+\infty )\) with \(r\ll l\) and \(r\le R\ll l\), we want to find \(\lambda _{l,R}\) such that there exists a solution \(w^{l,R}\) of

$$\begin{aligned} \left\{ \begin{array}{l l } Q_R\left( x,[w^{l,R}],w_x^{l,R}\right) = \lambda _{l,R} &{} \text{ if } x\in (-l,l) \\ \overline{H}_{1}^{-}( w_x^{l,R}) = \lambda _{l,R} &{} \text{ if } x\in \{-l\}\\ \overline{H}_{2}^+(w_x^{l,R}) = \lambda _{l,R} &{} \text{ if } x\in \{l\}, \end{array} \right. \end{aligned}$$
(5.1)

with

$$\begin{aligned} Q_R(x,[U],q)&= \psi _R(x)\cdot M_{2}[U](x)\cdot \left( 1-\varphi (x) \right) \cdot |q|+ (1- \psi _R(x))\cdot \overline{H}_{2}(q)\end{aligned}$$
(5.2)
$$\begin{aligned}&+ \varPhi _R(x)\cdot M_{1}[U](x)\cdot \varphi (x)\cdot |q|+ (1- \varPhi _R(x))\cdot \overline{H}_{1}(q) \end{aligned}$$
(5.3)

and \(\psi _R,\varPhi _{R} \in C^\infty \), \(\psi _R,\varPhi _{R}: \mathbb {R} \rightarrow [0,1]\), with

$$\begin{aligned} \psi _R \equiv \left\{ \begin{array}{l} 1 \quad x\le R\\ 0 \quad x>R+1 \end{array} \right. \quad \text{ and }\quad \varPhi _R \equiv \left\{ \begin{array}{l} 1 \quad x\ge -R\\ 0 \quad x<-R-1. \end{array} \right. \end{aligned}$$
(5.4)

Proposition 1 (Existence of correctors on truncated domains)

There exists a unique \(\lambda _{l,R} \in \mathbb {R}\) such that there exists a solution \(w^{l,R}\) of (5.1). Moreover, there exist a constant C (depending only on \(k_0\)) and a Lipschitz continuous function \(m^{l,R}\) such that

$$\begin{aligned} \left\{ \begin{array}{l l} H^{1}_0 \le \lambda _{l,R}\le 0,\\ |m^{l,R}(x) - m^{l,R}(y)| \le C|x-y| &{} \text{ for } x,y\in [-l,l],\\ |w^{l,R}(x) - m^{l,R}(x)| \le C &{} \text{ for } x \in [-l,l]. \end{array} \right. \end{aligned}$$
(5.5)

Proof

We only give the main steps of the proof. Classically, we consider the following approximate problem, depending on a parameter \(\delta >0\), and then let \(\delta \) go to 0.

$$\begin{aligned} \left\{ \begin{array}{l l} \delta v^\delta + Q_{R}(x,[v^{\delta }],v^{\delta }_{x})=0 &{} \text{ for } x\in (-l,l)\\ \delta v^\delta + \overline{H}_{1}^-(v^\delta _x) =0 &{} \text{ for } x\in \{-l\}\\ \delta v^\delta + \overline{H}_{2}^+(v^\delta _x) =0 &{} \text{ for } x\in \{l\}\\ \end{array} \right. \end{aligned}$$
(5.6)
  • Step 1: construction of barriers. Using Perron’s method with the barriers 0 and \(\delta ^{-1}|H^{1}_{0}|\), we deduce that there exists a continuous viscosity solution \(v^\delta \) of (5.6) which satisfies

    $$\begin{aligned} 0 \le v^\delta \le \dfrac{|H_{0}^{1}|}{\delta }. \end{aligned}$$
    (5.7)
  • Step 2: control of the space oscillations of \(v^\delta \). The function \(v^\delta \) satisfies for all \(x,y\in [-l,l]\), \(x\ge y\),

    $$\begin{aligned} -k_{0}(x-y) -1 \le v^\delta (x) - v^\delta (y) \le 0, \end{aligned}$$

    with \(k_{0}=\max (k_{0}^{1},k_{0}^{2})\) (see [6, Lemma 6.5]).

  • Step 3: construction of a Lipschitz estimate. As in [6, Lemma 6.6], we can construct a Lipschitz continuous function \(m^\delta \) such that there exists a constant C (independent of l, R and \(\delta \)) such that

    $$\begin{aligned} \left\{ \begin{array}{ll} |m^\delta (x) - m^\delta (y)| \le C|x-y| &{} \text{ for } \text{ all } x,y\in [-l,l],\\ |v^\delta (x) - m^\delta (x) |\le C &{} \text{ for } \text{ all } x \in [-l,l]. \end{array} \right. \end{aligned}$$
    (5.8)
  • Step 4: passing to the limit as \(\delta \) goes to 0. Classically, letting \(\delta \) go to zero, we get \(\lambda _{l,R}\), \(w^{l,R}\) and \(m^{l,R}\) satisfying (5.5). The uniqueness of \(\lambda _{l,R}\) is classical so we skip it. This ends the proof of Proposition 1.    \(\square \)
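The barrier check in Step 1 can be written out explicitly; the following computation is a sketch, using that \(\overline{H}_{i}(0)=0\) and that, when testing with a constant function c, the non-local terms in \(Q_{R}\) are multiplied by \(|q|=0\), so that \(Q_{R}(x,[c],0)=(1-\psi _{R}(x))\overline{H}_{2}(0)+(1-\varPhi _{R}(x))\overline{H}_{1}(0)=0\). Hence

$$\begin{aligned} \delta \cdot 0 + Q_{R}(x,[0],0)= 0 \le 0 \quad \text{ and }\quad \delta \cdot \dfrac{|H^{1}_{0}|}{\delta } + Q_{R}\left( x,\left[ \delta ^{-1}|H^{1}_{0}|\right] ,0\right) = |H^{1}_{0}| \ge 0, \end{aligned}$$

while at \(x=\pm l\) one uses \(\overline{H}_{1}^{-}(0)=H^{1}_{0}\le 0\) and \(\overline{H}_{2}^{+}(0)=0\). This is what makes 0 a sub-solution and \(\delta ^{-1}|H^{1}_{0}|\) a super-solution of (5.6).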

Proposition 2

The following limits exist (up to a subsequence)

$$ \overline{A}_R= \lim \limits _{l\rightarrow +\infty }\lambda _{l,R}, \quad \mathrm{and}\quad \overline{A}= \lim \limits _{R\rightarrow +\infty } \overline{A}_R. $$

Moreover, we have

$$\begin{aligned} H^{1}_0 \le \overline{A}_R,\overline{A} \le 0. \end{aligned}$$

Proposition 3 (Control of the slopes on a truncated domain)

Assume that l and R are large enough. Let \(w^{l,R}\) be the solution of (5.1) given by Proposition 1. We also assume that, up to a subsequence, \(\overline{A}=\lim \limits _{R\rightarrow +\infty } \lim \limits _{l\rightarrow +\infty }\lambda _{l,R}>H^{1}_0\). Then there exists \(\gamma _0>0\) such that for all \(\gamma \in (0,\gamma _0)\), there exists a constant C (independent of l and R) such that for all \(x\le -r\) and \(h\ge 0\),

$$\begin{aligned} w^{l,R}(x-h) - w^{l,R}(x) \ge (-\overline{p}_{1} - \gamma )h - C. \end{aligned}$$
(5.9)

Similarly, for all \(x\ge r\) and \(h\ge 0\),

$$\begin{aligned} w^{l,R}(x+h)- w^{l,R}(x) \ge (\overline{p}_{2} - \gamma ) h - C. \end{aligned}$$
(5.10)

Proof

We only prove (5.9) since the proof for (5.10) is similar. For \(\sigma >0\) small enough, we denote by \(p_-^\sigma \) the real number such that

$$ \overline{H}_{1}(p_-^\sigma )= \overline{H}_{1}^-(p_-^\sigma )= \lambda _{l,R} - \sigma . $$

Let us now consider the function \(w^-= p_-^\sigma x\) that satisfies

$$ \begin{array}{l} \overline{H}_{1}(w_x^-)= \lambda _{l,R} - \sigma \quad \text{ for } x\in \mathbb {R}. \end{array} $$

We also have

$$\begin{aligned} M_{1}[w^-](x)=-V_{1}\left( \dfrac{-1}{p^{\sigma }_{-}}\right) . \end{aligned}$$

For all \(x\in (-l,-r)\), using that \(\varphi (x)=1\) and \(\psi _{R}(x)=1\), we deduce that \(w^{-}\) satisfies

$$\begin{aligned} {\left\{ \begin{array}{ll} Q_{R}\left( x,[w^{-}],w^{-}_{x}\right) = \lambda _{l,R}- \sigma &{} \text{ for } x\in (-l,-r) \\ \overline{H}_{1}^-(w_x^-) = \lambda _{l,R} - \sigma &{} \text{ for } x\in \{-l\}. \end{array}\right. } \end{aligned}$$

Using the comparison principle, we deduce that for all \(h\ge 0\) and all \(x\in (-l,-r)\), we have that

$$\begin{aligned} w^{l,R}(x-h)-w^{l,R}(x)\ge -p^{\sigma }_{-}h-2C. \end{aligned}$$

Finally, for \(\gamma _{0}\) and \(\sigma \) small enough, we can set \(p^{\sigma }_{-}=\overline{p}_{1}+\gamma \).    \(\square \)

Proof of Theorem 3

The proof is performed in two steps.

Step 1: proof of (i) and (ii). The goal is to pass to the limit as \(l\rightarrow +\infty \) and then as \(R\rightarrow +\infty \). There exists \(l_n \rightarrow +\infty \), such that

$$\begin{aligned} m^{l_n,R} - m^{l_n,R}(0) \rightarrow m^R \quad \text{ as } n\rightarrow +\infty , \end{aligned}$$

the convergence being locally uniform. We also define

$$\begin{aligned} \begin{array}{l} \overline{w}^R(x)= {\limsup \limits _{l_n\rightarrow +\infty }}^* \left( w^{l_n,R}- w^{l_n,R}(0) \right) \!, \\ \underline{w}^R(x) = {\liminf \limits _{l_n\rightarrow +\infty }}_* \left( w^{l_n,R}- w^{l_n,R}(0) \right) \!.\\ \end{array} \end{aligned}$$

Thanks to (5.5), we know that \(\overline{w}^R\) and \(\underline{w}^R\) are finite and satisfy

$$\begin{aligned} m^R-C \le \underline{w}^R \le \overline{w}^R \le m^R +C. \end{aligned}$$

By stability of viscosity solutions, \(\overline{w}^R-2C\) and \(\underline{w}^R\) are respectively a sub-solution and a super-solution of

$$\begin{aligned} \begin{array}{l} Q_R(x,[w^R],w_x^R) = \overline{A}_R \quad \text{ for } x\in \mathbb {R}\\ \end{array} \end{aligned}$$
(5.11)

Therefore, using Perron’s method, we can construct a solution \(w^R\) of (5.11), with \(m^R\), \(\overline{A}_R\) and \(w^R\) satisfying

$$\begin{aligned} \left\{ \begin{array}{l l} |m^R(x)- m^R(y)| \le C |x-y| &{} \text{ for } \text{ all } x,y\in \mathbb {R},\\ |w^R(x) - m^R(x) | \le C &{} \text{ for } x\in \mathbb {R},\\ H^{1}_0\le \overline{A}_R \le 0. \end{array} \right. \end{aligned}$$
(5.12)

Using Proposition 3, if \(\overline{A}>H^{1}_0\), we know that there exist \(\gamma _0>0\) and \(C>0\) such that for all \(\gamma \in (0,\gamma _0)\),

$$\begin{aligned} \left\{ \begin{array}{l l} w^R(x-h)- w^R(x) \ge (-\overline{p}_1 -\gamma )h - C &{} \text{ for } \text{ all } x \le -r, \ h\ge 0, \\ w^R(x+h)- w^R(x) \ge (\overline{p}_2 - \gamma )h - C &{} \text{ for } \text{ all } x\ge r, \ h\ge 0. \end{array} \right. \end{aligned}$$
(5.13)

Passing to the limit as \(R\rightarrow +\infty \) and proceeding as above, we obtain (i) and (ii).

Step 2: proof of (iii). Using (4.5), we have that

$$\begin{aligned} w^\varepsilon (x)= \varepsilon m\left( \dfrac{x}{\varepsilon } \right) + O(\varepsilon ). \end{aligned}$$

Therefore, we can find a sequence \(\varepsilon _n \rightarrow 0\), such that

$$\begin{aligned} w^{\varepsilon _n} \rightarrow W \quad \text{ locally } \text{ uniformly } \text{ as } n\rightarrow +\infty , \end{aligned}$$

with \(W(0)=0\). As in [7, Appendix A.1], we have that

$$ \overline{H}_{1}(W_x)= \overline{A} \quad \text{ for } x<0 \quad \mathrm{and}\quad \overline{H}_{2}(W_x)= \overline{A} \quad \text{ for } x>0. $$

For all \(\gamma \in (0,\gamma _0)\), we have that if \(\overline{A}>H^{1}_0\) and \(x>0\),

$$\begin{aligned} W_x \ge \overline{p}_2 - \gamma , \end{aligned}$$

where we have used (4.6). Therefore we get

$$\begin{aligned} W_x= \overline{p}_2 \quad \text{ for } x>0. \end{aligned}$$

Similarly, we get \( W_x = \overline{p}_1\) for \(x<0\). This ends the proof of Theorem 3.    \(\square \)

6 Proof of Convergence

In this section, we prove our homogenization result. Classically, the proof relies on the existence of correctors. We only prove the convergence result at the junction point since, at any other point, the proof is classical using that \(v=0\) is a corrector, see [6].

Proof of Theorem 2

We introduce

$$\begin{aligned} \overline{u}(t,x)= {\limsup _{\varepsilon \rightarrow 0}}^* u^\varepsilon \quad \text{ and } \quad \underline{u}(t,x)= {\liminf _{\varepsilon \rightarrow 0}}_* u^\varepsilon . \end{aligned}$$
(6.1)

Let us prove that \(\overline{u}\) is a sub-solution of (3.6) at the point 0 (the proof for \(\underline{u}\) is similar and we skip it). The definition of viscosity solutions for Hamilton-Jacobi equations is presented in Sect. 2 of [9]. We argue by contradiction and assume that there exists a test function \(\varPsi \in \mathcal {C}^1(J_\infty )\) such that

$$\begin{aligned} \left\{ \begin{array}{l l l} \overline{u}(\bar{t},0)= \varPsi (\bar{t},0) \\ \overline{u} \le \varPsi &{} \text{ on } \mathcal {Q}_{\bar{r},\bar{r}}(\bar{t},0) &{} \text{ with } \bar{r}>0\\ \overline{u} \le \varPsi - 2 \eta &{} \text{ outside } \mathcal {Q}_{\bar{r},\bar{r}}(\bar{t},0) &{} \text{ with } \eta>0\\ \varPsi _t (\bar{t},0) + F_{\overline{A}}(\varPsi _x (\bar{t},0^{-}),\varPsi _x (\bar{t},0^{+}))= \theta &{} \text{ with } \theta >0. \end{array} \right. \end{aligned}$$
(6.2)

According to [9], we may assume that the test function has the following form

$$\begin{aligned} \varPsi (t,x)= g(t) + \overline{p}_{1} x 1_{\{x<0\}} + \overline{p}_{2} x 1_{\{x>0\}} \quad \text{ on } \mathcal {Q}_{\bar{r},2\bar{r}}(\bar{t},0). \end{aligned}$$
(6.3)

The last line in condition (6.2) becomes

$$\begin{aligned} g'(\bar{t}) + F_{\overline{A}}(\overline{p}_{1}, \overline{p}_{2}) = g'(\bar{t}) + \overline{A}=\theta . \end{aligned}$$
(6.4)

Let us consider w the solution of (4.3) provided by Theorem 3, and let us denote

$$\begin{aligned} \varPsi ^\varepsilon (t,x)= \left\{ \begin{array}{l l} g(t) + w^\varepsilon (x) &{} \text{ on } \mathcal {Q}_{\bar{r},2\bar{r}}(\bar{t},0),\\ \varPsi (t,x) &{} \text{ outside } \mathcal {Q}_{\bar{r},2\bar{r}}(\bar{t},0). \end{array} \right. \end{aligned}$$
(6.5)

We claim that \(\varPsi ^{\varepsilon }\) is a viscosity super-solution on \(\mathcal {Q}_{\bar{r},\bar{r}}(\bar{t},0)\) of the following problem:

$$\begin{aligned} \varPsi _{t}^\varepsilon + \left( \tilde{M}_{1}^\varepsilon \left[ \dfrac{\varPsi ^\varepsilon }{\varepsilon }(t,\cdot ) \right] (x)\varphi \left( \dfrac{x}{\varepsilon }\right) +\tilde{M}_{2}^\varepsilon \left[ \dfrac{\varPsi ^\varepsilon }{\varepsilon }(t,\cdot ) \right] (x)\left( 1-\varphi \left( \dfrac{x}{\varepsilon }\right) \right) \right) \cdot |\varPsi ^\varepsilon _x|\ge \dfrac{\theta }{2}. \end{aligned}$$

Indeed, let h be a test function touching \(\varPsi ^\varepsilon \) from below at \((t_1,x_1)\in \mathcal {Q}_{\bar{r},\bar{r}}(\bar{t},0)\). Then the function \(\chi (y)=\dfrac{1}{\varepsilon }\left( h(t_{1},\varepsilon y)-g(t_{1})\right) \) touches w from below at \(\dfrac{x_{1}}{\varepsilon }\), which implies that

$$\begin{aligned} \left( \tilde{M}_{1}\left[ w \right] \left( \dfrac{x_1}{\varepsilon } \right) \varphi \left( \dfrac{x_1}{\varepsilon } \right) +\tilde{M}_{2}\left[ w \right] \left( \dfrac{x_1}{\varepsilon } \right) \left( 1-\varphi \left( \dfrac{x_1}{\varepsilon }\right) \right) \right) \cdot |h_x(t_1,x_1)| \ge \overline{A}. \end{aligned}$$
(6.6)

Using (6.4), the fact that \(h_{t}(t_{1},x_{1})=g^{\prime }(t_{1})\) and (6.6), we get the desired result.

Getting the Contradiction. We have that for \(\varepsilon \) small enough

$$\begin{aligned} u^\varepsilon + \eta \le \varPsi = g(t) + \overline{p}_{1} x 1_{\{x<0\}} + \overline{p}_{2} x 1_{\{x>0\}} \quad \text{ on } \mathcal {Q}_{\bar{r},2\bar{r}}(\bar{t},0)\backslash \mathcal {Q}_{\bar{r},\bar{r}}(\bar{t},0). \end{aligned}$$

Using the fact that \(w^\varepsilon \rightarrow W\), and using (4.8), we have for \(\varepsilon \) small enough

$$\begin{aligned} u^\varepsilon + \dfrac{\eta }{2} \le \varPsi ^\varepsilon \quad \text{ on } \mathcal {Q}_{\bar{r},2\bar{r}}(\bar{t},0)\backslash \mathcal {Q}_{\bar{r},\bar{r}}(\bar{t},0). \end{aligned}$$

Combining this with (6.5), we get that

$$\begin{aligned} u^\varepsilon + \dfrac{\eta }{2} \le \varPsi ^\varepsilon \quad \text{ outside } \mathcal {Q}_{\bar{r},\bar{r}}(\bar{t},0). \end{aligned}$$

By the comparison principle on bounded subsets, the previous inequality also holds in \(\mathcal {Q}_{\bar{r},\bar{r}}(\bar{t},0)\). Passing to the limit as \(\varepsilon \rightarrow 0\) and evaluating the inequality at \((\bar{t},0)\), we obtain the following contradiction:

$$\begin{aligned} \overline{u}(\bar{t},0) + \dfrac{\eta }{2} \le \varPsi (\bar{t},0) = \overline{u}(\bar{t},0). \end{aligned}$$

   \(\square \)