# Stochastic homogenization of viscous superquadratic Hamilton–Jacobi equations in dynamic random environment

• Wenjia Jing
• Panagiotis E. Souganidis
• Hung V. Tran
Part of the following topical collections:
1. Frontiers in Applied and Computational Mathematics: Special Collection in Honor of Bjorn Engquist on the Occasion of His 70th Birthday

## Abstract

We study the qualitative homogenization of second-order Hamilton–Jacobi equations in space-time stationary ergodic random environments. Assuming that the Hamiltonian is convex and superquadratic in the momentum variable (gradient), we establish a homogenization result and characterize the effective Hamiltonian for arbitrary (possibly degenerate) elliptic diffusion matrices. The result extends previous work that required uniform ellipticity and space-time homogeneity for the diffusion.

### Keywords

Stochastic homogenization · Hamilton–Jacobi equations · Viscosity solutions · Dynamic random environment · Time-dependent Hamiltonian · Convex analysis

### Mathematics Subject Classification

35B27 · 70H20 · 49L25

## 1 Background

We study the homogenized behavior of the solution $$u^\varepsilon = u^\varepsilon (x,t,\omega )$$ to the second-order (viscous) Hamilton–Jacobi equation
\begin{aligned} \left\{ \begin{aligned}&u^\varepsilon _t - \varepsilon \mathrm{tr}\,\left( A\left( \frac{x}{\varepsilon },\frac{t}{\varepsilon }, \omega \right) D^2 u^\varepsilon \right) + H\left( Du^\varepsilon , \frac{x}{\varepsilon },\frac{t}{\varepsilon },\omega \right) =0&\quad&\text {in} \ {\mathbb {R}}^n \times (0,+\infty ),\\&u^\varepsilon =u_0&\quad&\text {on} \ {\mathbb {R}}^n \times \{0\}, \end{aligned} \right. \end{aligned}
(1)
where $$u_0 \in \mathrm{BUC}({\mathbb {R}}^n)$$, the space of bounded uniformly continuous functions in $${\mathbb {R}}^n$$, and, for each element $$\omega$$ of the underlying probability space $$(\varOmega , {\mathscr {F}}, {\mathbb {P}})$$, the diffusion matrix $$A = (a_{ij}(x,t,\omega ))$$ is elliptic, possibly degenerate, and, for all $$x, t$$ and $$\omega$$, the Hamiltonian $$H=H(p,x,t,\omega )$$ is convex and has superquadratic growth in p. Moreover, $$A(\cdot ,\cdot ,\omega )$$ and $$H(p,\cdot ,\cdot ,\omega )$$ are stationary ergodic random fields on $$(\varOmega ,{\mathscr {F}},{\mathbb {P}})$$. The precise assumptions are detailed in Sect. 2.
The standard viscosity solution theory yields that, for each $$\omega \in \varOmega$$, (1) is well posed. The homogenization result is that there exists an effective Hamiltonian $$\overline{H}: {\mathbb {R}}^n \rightarrow {\mathbb {R}}$$ such that, if $$\overline{u}$$ is the unique solution to the homogenized Hamilton–Jacobi equation
\begin{aligned} \left\{ \begin{aligned}&\overline{u}_t + {{\overline{H}}}(D{{\overline{u}}}) = 0&\quad&\text {in} \ {\mathbb {R}}^n \times (0,\infty ),\\&{{\overline{u}}} = u_0&\quad&\text {on} \ {\mathbb {R}}^n \times \{0\}, \end{aligned} \right. \end{aligned}
(2)
then, for almost every $$\omega \in \varOmega$$, the solution $$u^\varepsilon$$ to (1) converges locally uniformly to $$\overline{u}$$, that is there exists an event $${{\widetilde{\varOmega }}} \in {\mathscr {F}}$$ with full measure such that, for every $$\omega \in {{\widetilde{\varOmega }}}, R>0$$ and $$T > 0$$,
\begin{aligned} \lim _{\varepsilon \rightarrow 0} \sup _{|x| \le R, t \in [0,T]} \, \left| u^\varepsilon (x,t,\omega ) - \overline{u}(x,t) \right| = 0. \end{aligned}
(3)
The “viscous” Hamilton–Jacobi equation (1) arises naturally in the study of large deviations of diffusion processes in spatiotemporal random media, in which case H is quadratic in the gradient. It also finds applications in stochastic optimal control theory; we refer to Fleming and Soner [15] for more details. The homogenization result above serves as a model reduction in the setting where the environment is highly oscillatory but, nevertheless, satisfies certain self-averaging properties. In particular, when the diffusion matrix in the underlying stochastic differential equation depends on time, the coefficient A in (1) will be time dependent as well. As far as we know and as argued below, the homogenization problem in this setting has been open.

The periodic homogenization of coercive Hamilton–Jacobi equations was first studied by Lions, Papanicolaou, and Varadhan [20] and, later, Evans [12, 13] and Majda and Souganidis [23]. Ishii proved in [16] homogenization in almost periodic settings. The stochastic homogenization of first-order Hamilton–Jacobi equations was established independently by Souganidis [27] and Rezakhanlou and Tarver [24]. Later, Lions and Souganidis [22] and Kosygina, Rezakhanlou, and Varadhan [18] proved independently stochastic homogenization for viscous Hamilton–Jacobi equations using different methods and complementary assumptions. In [21], Lions and Souganidis gave a simpler proof for homogenization in probability using weak convergence techniques. This program was extended by Armstrong and Souganidis in [5, 6], where the metric-problem approach was introduced. Some of the results of [5, 6] were revisited by Armstrong and Tran in [7]. All of the aforementioned works in random homogenization required the Hamiltonian H to be convex. The homogenization of general non-convex Hamiltonians in random environments remains to date an open problem. A first extension to level-set convex Hamiltonians was shown by Armstrong and Souganidis in [6]. Later, Armstrong, Tran, and Yu [3, 4] proved stochastic homogenization for separated Hamiltonians of the form $$H=h(p)-V(x,\omega )$$ with general non-convex h and random potential $$V(x,\omega )$$ in one dimension. Their methods also established homogenization of some special non-convex Hamiltonians in all dimensions. Armstrong and Cardaliaguet [2] studied the homogenization of positively homogeneous non-convex Hamilton–Jacobi equations in strongly mixing environments. More recently, Feldman and Souganidis [14] established homogenization of strictly star-shaped Hamiltonians in similar environments. Ziliotto [28] constructed an example of a non-convex separated Hamiltonian in two dimensions that does not homogenize.
Feldman and Souganidis [14] extended the construction to any separated H that has a strict saddle point. In addition, [14] also yields non-convex Hamiltonians with space-time random potentials for which the Hamilton–Jacobi equation does not homogenize.

The aforementioned PDE approaches for stochastic homogenization, that is the weak convergence technique and the metric-problem approach, were developed for random environments that are time independent. In this setting, one has uniform in $$\varepsilon$$ and $$\omega$$ Lipschitz estimates for $$u^\varepsilon (\cdot ,\omega )$$, which, however, are not available if A and H depend on t. Nevertheless, Kosygina and Varadhan [19] established homogenization results for viscous Hamilton–Jacobi equations with constant diffusion coefficients, more precisely A being the identity matrix, using the stochastic control formula and invariant measures. For first-order equations with superlinear Hamiltonians, homogenization results were proved by Schwab [26]. Recently, the authors [17] established homogenization for linearly growing Hamiltonians that are periodic in space and stationary ergodic in time.

In this paper, we extend and combine the methodologies of [22] and [5, 6, 7] and obtain stochastic homogenization for general viscous Hamilton–Jacobi equations in dynamic random environments. The results of [22] were based on the analysis of a special solution to (1) that we loosely call the fundamental solution. This is a subadditive, stationary function which, in view of the subadditive ergodic theorem, has a homogenized limit that identifies the convex conjugate of the effective Hamiltonian $$\overline{H}$$. At the $$\varepsilon >0$$ level, however, the fundamental solution gives rise only to supersolutions $$v^\varepsilon$$ to (2). One of the key steps in [22] was to show that the difference between $$u^\varepsilon$$ and $$v^\varepsilon$$ tends to 0 as $$\varepsilon \rightarrow 0$$. This made very strong use of the uniform Lipschitz estimates on $$u^\varepsilon$$, which were also proved there and are not available for time-dependent problems. The methodology of [5, 6] was based on the analysis of the solution to the metric problem, which, loosely speaking, is the “minimal cost” to connect two points. The metric solution is a subadditive stationary function and has a homogenized limit, which, at each level, is the support function of the level set of the effective $$\overline{H}$$. The homogenization result was then proved in [5, 6] by developing a reversed perturbed test function argument. In the dynamic random setting, however, the “metric” between two points in space must depend on a starting time and hence is not suitable for such environments.

Here we use the fundamental solution approach of [22] to find the effective Hamiltonian and the reverse perturbed test function method of [5, 6] to establish the homogenization. The main contribution of the paper is to “go away” from the need to have uniform in $$\varepsilon$$ Lipschitz bounds. Indeed, the uniform convergence of the fundamental solution to its homogenized limit uses only a uniform (in $$\varepsilon$$) modulus of continuity in its first pair of variables, which is available for superquadratic Hamilton–Jacobi equations [9, 10]. Similarly, the reverse perturbed test function argument is adapted to work without the need of Lipschitz bounds.

We summarize next the main results of this paper. For each fixed $$\omega \in \varOmega$$, let $${\mathscr {L}}(x,t,y,s,\omega )$$ denote the fundamental solution of (1); see (7) below. The first result is that $${\mathscr {L}}(x,t,y,s,\omega )$$ has a long-time average, that is, there exists a convex function $$\overline{L} : {\mathbb {R}}^n \rightarrow {\mathbb {R}}$$, known as the effective Lagrangian, such that, for a.s. $$\omega \in \varOmega$$ and locally uniformly in $$(x,t)$$ for $$t>0$$,
\begin{aligned} \lim _{\rho \rightarrow \infty } \frac{1}{\rho } {\mathscr {L}}(\rho x, \rho t, 0,0,\omega ) = t\,\overline{L}\left( \frac{x}{t}\right) . \end{aligned}
(4)
We note that although the pointwise convergence for fixed $$(x,t)$$ is a direct consequence of the subadditive ergodic theorem, the locally uniform convergence requires some uniform (in $$\omega$$ and $$\rho$$) continuity of the scaled function $$\rho ^{-1} {\mathscr {L}}(\rho \,\cdot , \rho \,\cdot , 0,0,\omega )$$. This is where the superquadratic growth of H is used. Indeed, under this assumption, Cannarsa and Cardaliaguet [9] and Cardaliaguet and Silvestre [10] obtained space-time $$C^{0,\alpha }$$-estimates for bounded solutions, which depend on the growth condition of H but neither on the ellipticity of the diffusion matrix A nor on the smoothness of H or A. Here we obtain the desired continuity by applying these regularity results to the scaled fundamental solutions.
The effective Hamiltonian is then defined by
\begin{aligned} \overline{H}(p) = \sup _{v \in {\mathbb {R}}^n} \left( p\cdot v - \overline{L}(v) \right) , \end{aligned}
(5)
and the homogenized equation is precisely (2). Then we show that $$\overline{H}$$ is also the limit of the solutions to the approximate cell problem, a fact which yields the homogenization for the general equation.
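Definition (5) is a Legendre (convex-conjugate) transform, which is straightforward to evaluate numerically. As an illustration (a minimal sketch of ours, not from the paper), for the spatially homogeneous Lagrangian $$\overline{L}(v) = |v|^{\gamma '}/\gamma '$$ with conjugate exponent $$\gamma ' = \gamma /(\gamma -1)$$, the transform (5) recovers the dual power $$\overline{H}(p) = |p|^{\gamma }/\gamma$$:

```python
import numpy as np

# numerical Legendre transform H(p) = sup_v (p*v - L(v)), cf. (5);
# illustrative sketch: gamma and the grid bounds are our own choices
def legendre_transform(L, v_grid):
    def H(p):
        return float(np.max(p * v_grid - L(v_grid)))
    return H

gamma = 3.0                                # superquadratic exponent, gamma > 2
gamma_p = gamma / (gamma - 1.0)            # conjugate exponent gamma' = 1.5
L_bar = lambda v: np.abs(v) ** gamma_p / gamma_p

v_grid = np.linspace(-10.0, 10.0, 400001)
H_bar = legendre_transform(L_bar, v_grid)

# the convex dual of |v|^{gamma'}/gamma' is |p|^{gamma}/gamma
for p in (0.0, 0.5, 1.0, 2.0):
    assert abs(H_bar(p) - abs(p) ** gamma / gamma) < 1e-4
```

The grid-based supremum is accurate here because $$\overline{L}$$ is convex and superlinear, so the maximizing v lies in a compact set covered by the grid.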

The rest of the paper is organized as follows. In the remaining part of the introduction, we review most of the standard notation used in the paper. In the next section, we introduce the precise assumptions and state the main results. In Sect. 3 we prove (4). In Sect. 4, we show that the effective $$\overline{H}$$ defined in (5) agrees with the uniform limit of the solution to the approximate cell problem. The homogenization result for the Cauchy problem (1) follows from this fact. In Sect. 5 we show that, as a consequence of the homogenization result proved in this paper, the effective Hamiltonian is given by formulae similar to the ones established in [19, 22].

### 1.1 Notations

We work in the n-dimensional Euclidean space $${\mathbb {R}}^n$$. The subset of points with rational coordinates is denoted by $${\mathbb {Q}}^n$$. The open ball in $${\mathbb {R}}^n$$ centered at x with radius $$r > 0$$ is denoted by $$B_r(x)$$, and this notation is further simplified to $$B_r$$ if the center is the origin. The volume of a measurable set $$A \subseteq {\mathbb {R}}^n$$ is denoted by $$\mathrm {Vol}(A)$$. The $$(n+1)$$-dimensional space-time is denoted by $${\mathbb {R}}^n \times {\mathbb {R}}$$ or simply by $${\mathbb {R}}^{n+1}$$. The space-time cylinder of horizontal radius $$R>0$$ and vertical interval $$(r,\rho )$$ centered at a space-time point (xt) is denoted by $$Q_{R,r,\rho }(x,t)$$, that is $$Q_{R,r,\rho }(x,t) = \{(y,s) \,:\, y \in B_R(x), s \in (t+r,t+\rho )\}$$; to further simplify notation, we omit the reference point (xt) when it is (0, 0). Moreover, $$Q_R$$ is a short-hand notation for the cylinder $$Q_{R,-R,R}$$. For two vectors $$u, v \in {\mathbb {R}}^n, \langle u, v\rangle$$ denotes the inner product between u and v, and $${\mathbb {M}}^{n\times m}$$ denotes the set of n by m matrices with real entries, and $${\mathbb {M}}^n$$ is a short-hand notation of $${\mathbb {M}}^{n \times n}$$. The identity matrix is denoted by Id. Finally, $${\mathscr {B}}(\varXi )$$ denotes the Borel $$\sigma$$-algebra of the metric space $$\varXi$$.

## 2 Assumptions, the fundamental solution, and the main results

### 2.1 The general setting and assumptions

We consider a probability space $$(\varOmega , {\mathscr {F}}, {\mathbb {P}})$$ endowed with an ergodic group of measure preserving transformations $$\{\tau _{(x,t)} \,:\, (x,t)\in {\mathbb {R}}^{n+1}\}$$, that is, a family of maps $$\tau _{(x,t)}:\varOmega \rightarrow \varOmega$$ satisfying, for all $$(x,t), (x',t')\in {\mathbb {R}}^{n+1}$$ and all $$E \in {\mathscr {F}}$$,
1. (P1)

$$\tau _{(x+x',t+t')}=\tau _{(x,t)}\circ \tau _{(x',t')} \, \hbox { and }\; {\mathbb {P}}[\tau _{(x,t)} E] = {\mathbb {P}}[E]$$,

and
1. (P2)

$$\tau _{(y,s)}(E)=E \ \text {for all} \ (y,s) \in {\mathbb {R}}^{n+1} \hbox { implies }{\mathbb {P}}[E] \in \{0,1\}$$.

The diffusion matrix $$A = (a_{ij}(x,t,\omega )) \in {\mathbb {M}}^n$$ is given by
\begin{aligned} A = \sigma \sigma ^T \end{aligned}
where $$\sigma = \sigma (x,t,\omega )$$ is an $${\mathbb {M}}^{n\times m}$$-valued random process.
As far as $$H:{\mathbb {R}}^n \times {\mathbb {R}}^n \times {\mathbb {R}}\times \varOmega \rightarrow {\mathbb {R}}$$ and $$\sigma : {\mathbb {R}}^n\times {\mathbb {R}}\times \varOmega \rightarrow {\mathbb {M}}^{n\times m}$$ are concerned, we assume henceforth that
1. (A1)

H and $$\sigma$$ are $${\mathscr {B}}({\mathbb {R}}^{n}\times {\mathbb {R}}^n \times {\mathbb {R}})\times {\mathscr {F}}$$ and $${\mathscr {B}}({\mathbb {R}}^n \times {\mathbb {R}}) \times {\mathscr {F}}$$ measurable, respectively,

2. (A2)
for any fixed $$p \in {\mathbb {R}}^n, \sigma$$ and H are stationary in x and t, that is, for every $$(x,t)\in {\mathbb {R}}^{n+1}, (y,s)\in {\mathbb {R}}^{n+1}$$ and $$\omega \in \varOmega$$,
\begin{aligned} \sigma (x+y,t+s,\omega ) = \sigma (x,t,\tau _{(y,s)}\omega ) \ \text {and} \ H(p, x+y,t+s,\omega ) = H(p,x,t,\tau _{(y,s)}\omega ), \end{aligned}

3. (A3)

for each $$p \in {\mathbb {R}}^n$$ and $$\omega \in \varOmega , \sigma (\cdot ,\cdot ,\omega )$$ and $$H(p,\cdot ,\cdot ,\omega )$$ are Lipschitz continuous in x and t,

4. (A4)
there exist $$\gamma > 2$$ and $$C \ge 1$$ such that, for all $$(x,t) \in {\mathbb {R}}^{n+1}, \omega \in \varOmega$$ and $$p \in {\mathbb {R}}^n$$,
\begin{aligned} \frac{1}{C} |p|^\gamma - C \le H(p,x,t,\omega ) \le C(|p|^\gamma + 1), \end{aligned}
(6)

and, finally,
1. (A5)

the mapping $$p\mapsto H(p,x,t,\omega )$$ is convex for all $$(x,t,\omega ) \in {\mathbb {R}}^{n+1} \times \varOmega$$.

Since throughout the paper we use all the above assumptions, we summarize them as
1. (A)

assumptions (P1), (P2), (A1), (A2), (A3), (A4), and (A5) hold.
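To fix ideas, here is a minimal example of data $$(\sigma , H)$$ satisfying (A). This is our own illustration, not taken from the paper: the probability space is the torus $$[0,2\pi )^2$$ with Lebesgue measure, $$\tau _{(y,s)}$$ acts by translation (a periodic, hence trivially ergodic, special case), and the names `tau`, `sigma`, and `H` below are illustrative.

```python
import numpy as np

# Minimal illustrative environment satisfying (A) -- our own example, not the
# paper's: Omega = the torus [0, 2*pi)^2 with Lebesgue measure, and tau acts
# by translation (periodic, hence a trivially ergodic special case).
TWO_PI = 2.0 * np.pi

def tau(y, s, omega):
    # measure-preserving translation tau_{(y,s)} on Omega, cf. (P1)
    return ((omega[0] + y) % TWO_PI, (omega[1] + s) % TWO_PI)

def sigma(x, t, omega):
    # Lipschitz, stationary, possibly degenerate 1x1 diffusion "matrix"
    return abs(np.sin(x + omega[0]))

def H(p, x, t, omega):
    gamma = 3.0                                  # superquadratic: (A4) holds
    V = 1.0 + 0.5 * np.cos(x + omega[0]) * np.cos(t + omega[1])  # V in [0.5, 1.5]
    return V * abs(p) ** gamma                   # convex in p: (A5) holds

rng = np.random.default_rng(0)
omega = (rng.uniform(0.0, TWO_PI), rng.uniform(0.0, TWO_PI))
p, x, t, y, s = 1.3, 0.7, -0.2, 2.1, 0.9

# stationarity (A2): shifting the point equals shifting the environment
assert np.isclose(H(p, x + y, t + s, omega), H(p, x, t, tau(y, s, omega)))
assert np.isclose(sigma(x + y, t + s, omega), sigma(x, t, tau(y, s, omega)))
```

Note that this $$\sigma$$ vanishes on a set of lines, so $$A = \sigma \sigma ^T$$ is degenerate there, which the assumptions explicitly allow.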

### 2.2 The fundamental solution

For each $$\omega \in \varOmega$$ and $$(y,s) \in {\mathbb {R}}^n \times {\mathbb {R}}$$, we define the fundamental solution $${\mathscr {L}}:= {\mathscr {L}}(\cdot ,\cdot ,y,s,\omega ) : {\mathbb {R}}^n \times (s,\infty ) \rightarrow {\mathbb {R}}$$ to be the unique viscosity solution to
\begin{aligned} \left\{ \begin{aligned}&{\mathscr {L}}_t - \mathrm{tr}\,(A(\cdot ,\cdot ,\omega )D^2 {\mathscr {L}}) + H(D {\mathscr {L}},\cdot ,\cdot ,\omega ) = 0&\quad&\text{ in } \ {\mathbb {R}}^n \times (s,\infty ),\\&{\mathscr {L}}(\cdot ,s,y,s,\omega ) = \delta (\cdot ,y)&\quad&\text{ in } \ {\mathbb {R}}^n, \end{aligned} \right. \end{aligned}
(7)
where $$\delta (x,y)=0$$ if $$x=y$$ and $$\delta (x,y)=\infty$$ in $${\mathbb {R}}^n \setminus \{y\}.$$ As in Crandall, Lions, and Souganidis [11], this initial condition is understood in the sense that $${\mathscr {L}}(\cdot ,t,y,s,\omega )$$ converges, as t decreases to s, locally uniformly on $${\mathbb {R}}^n$$ to the function $$\delta (\cdot ,y)$$. The existence and uniqueness of $${\mathscr {L}}$$ follows from an almost straightforward modification of the results of [11]. In view of the stochastic control representation of Hamilton–Jacobi equations, $${\mathscr {L}}(x,t,y,s,\omega )$$ is the “minimal cost” for a controlled diffusion process in the random environment determined by $$(\sigma ,H)$$ to reach the vertex $$(y,s)$$ from $$(x,t)$$.
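In the first-order case $$A \equiv 0$$, this control interpretation takes the following explicit variational form, which we record as an aid to the reader (a standard formula under assumption (A), with $$L(v,x,t,\omega ) := \sup _{p \in {\mathbb {R}}^n} \left( p\cdot v - H(p,x,t,\omega ) \right)$$ the running Lagrangian):
\begin{aligned} {\mathscr {L}}(x,t,y,s,\omega ) = \inf \left\{ \int _s^t L({\dot{\xi }}(r),\xi (r),r,\omega )\, dr \,:\, \xi \ \text {absolutely continuous}, \ \xi (s) = y, \ \xi (t) = x \right\} . \end{aligned}
When $$A \not \equiv 0$$, the infimum is taken instead over controlled diffusions driven by $$\sigma$$; this is the stochastic control representation referred to above.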

### 2.3 Main theorems

The first result is about the long-time behavior of the fundamental solution which yields the effective Lagrangian $$\overline{L}$$. The proof is given at the end of Sect. 3.

### Theorem 1

Assume (A). There exist $${{\widetilde{\varOmega }}} \in {\mathscr {F}}$$ with $${\mathbb {P}}({{\widetilde{\varOmega }}}) = 1$$ and a convex function $$\overline{L} : {\mathbb {R}}^n \rightarrow {\mathbb {R}}$$ such that, for all $$\omega \in {{\widetilde{\varOmega }}}, r>0$$ and $$R>r$$,
\begin{aligned} \lim _{\rho \rightarrow \infty } \sup _{(y,s)\in Q_R} \, \sup _{(x,t) \in Q_{R,r,R}((y,s))} \left|\frac{1}{\rho } {\mathscr {L}}(\rho x, \rho t, \rho y, \rho s, \omega ) - (t-s) \overline{L}\left( \frac{x-y}{t-s}\right) \right|= 0. \end{aligned}
(8)

Let $${{\overline{u}}}$$ be the solution to (2), where the effective Hamiltonian $$\overline{H}$$ is defined by (5) and is, hence, the Legendre transform of the effective Lagrangian $$\overline{L}$$.

The homogenization result is stated next.

### Theorem 2

Assume (A) and let $${{\widetilde{\varOmega }}}$$ be as in Theorem 1. For each $$\omega \in {{\widetilde{\varOmega }}}$$, the solution $$u^\varepsilon$$ of (1) converges, as $$\varepsilon \rightarrow 0$$ and locally uniformly in $${\mathbb {R}}^n \times [0,\infty )$$, to $${{\overline{u}}}$$.

It is well known that Theorem 2 follows from variations of the perturbed test function method [12] if, for each $$p \in {\mathbb {R}}^n$$, the solution $$w^\varepsilon$$ of the approximate auxiliary (cell) problem
\begin{aligned} \varepsilon w^\varepsilon (x,t) + w^\varepsilon _t(x,t) - \mathrm{tr}\,(A(x,t,\omega )D^2 w^\varepsilon (x,t)) + H(p+Dw^\varepsilon ,x,t) = 0 \quad \text { in } {\mathbb {R}}^n \times {\mathbb {R}}\nonumber \\ \end{aligned}
(9)
is such that $$\varepsilon w^\varepsilon$$ converges uniformly to $$-\overline{H}(p)$$ in cylinders of radius $${\sim }1/\varepsilon$$ as $$\varepsilon \rightarrow 0$$. In the classical periodic setting the convergence is uniform. The need to consider large sets varying with $$\varepsilon$$ was first identified in [27]. Because this is standard, we omit the proof and refer, for example, to [5, section 7.3] for the complete argument.
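A useful sanity check, which we include as a remark of our own: if A and H are independent of $$(x,t,\omega )$$, then the constant function $$w^\varepsilon \equiv -H(p)/\varepsilon$$ solves (9), since
\begin{aligned} \varepsilon \left( -\frac{H(p)}{\varepsilon }\right) + 0 - \mathrm{tr}\,(A \cdot 0) + H(p+0) = 0, \end{aligned}
so that $$\varepsilon w^\varepsilon \equiv -H(p) = -\overline{H}(p)$$ and (10) holds trivially. The content of Theorem 3 is that the same limit persists in genuinely heterogeneous environments.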

For all $$\varepsilon >0$$ and $$\omega \in \varOmega$$, the approximate cell problem (9) is well posed. Recall that $$Q_R \subseteq {\mathbb {R}}^{n+1}, R>0$$, is the cylinder centered at (0, 0) with radius R. The precise statement about the convergence of $$\varepsilon w^\varepsilon$$ to $$-\overline{H}(p)$$ is given in the next Theorem.

### Theorem 3

Assume (A) and let $${{\widetilde{\varOmega }}}$$ be as in Theorem 1. Then, for all $$\omega \in {{\widetilde{\varOmega }}}, p\in {\mathbb {R}}^n$$ and $$R>0$$,
\begin{aligned} \lim _{\varepsilon \rightarrow 0} \, \sup _{Q_{R/\varepsilon }} \, \left| \varepsilon w^\varepsilon (\cdot ,\cdot ,\omega ,p) + \overline{H}(p)\right| = 0. \end{aligned}
(10)

The proof of Theorem 3, which is given in Sect. 4, is based on the reversed perturbed test function argument of [5, 8]. The differences here are the lack of Lipschitz bounds and the need to apply the method to scaled versions of the fundamental solution instead of the metric solution.

## 3 The long-time behavior of the fundamental solution

We investigate the long-time average of the fundamental solution $${\mathscr {L}}$$, as $$\rho \rightarrow \infty$$. The averaged function is given by the subadditive ergodic theorem, which is a natural tool for the study of $${\mathscr {L}}$$ in view of the following Lemma.

### Lemma 1

Assume (A). Then, for all $$\omega \in \varOmega$$ and $$x,y,z\in {\mathbb {R}}^n$$,
1. (i)
if $$t,s,\rho \in {\mathbb {R}}$$ and $$t \ge s$$, then
\begin{aligned} {\mathscr {L}}(x+z,t+\rho ,y+z,s+\rho ,\omega ) = {\mathscr {L}}(x,t,y,s,\tau _{(z,\rho )}\omega ), \end{aligned}
(11)
and

2. (ii)
if $$t, s, r \in {\mathbb {R}}$$ satisfy $$s \le r \le t$$, then
\begin{aligned} {\mathscr {L}}(x,t,y,s,\omega ) \le {\mathscr {L}}(x,t,z,r,\omega ) + {\mathscr {L}}(z,r,y,s,\omega ). \end{aligned}
(12)

The stationarity of $${\mathscr {L}}$$ is an immediate consequence of the uniqueness of (7) and the stationarity of the environment. The subadditivity of $${\mathscr {L}}$$ follows from the comparison principle for (7) and the singular initial conditions of the fundamental solutions. Since the proof of Lemma 1 is standard, we omit it.

Next we recall a result of [22, Proposition 6.9] that concerns bounds on the unscaled function $${\mathscr {L}}$$. Although [22] considered time homogeneous environments, the proof of the following result does not depend on that fact.

### Lemma 2

Assume (A) and let $$\gamma ' := \gamma /(\gamma -1)$$. There exists a constant $$C>0$$ such that, for all $$\omega \in \varOmega , x,y \in {\mathbb {R}}^n$$ and $$t, s \in {\mathbb {R}}$$ with $$t > s$$,
\begin{aligned} -C(t-s) \le {\mathscr {L}}(x,t,y,s,\omega ) \le C\left( \frac{|x-y|^{\gamma '}}{(t-s)^{\gamma '-1}} + (t-s)^{1-\frac{\gamma '}{2}} + (t-s)\right) . \end{aligned}
(13)
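Before rescaling, it is instructive to see the bound of Lemma 2 realized in the simplest deterministic case (an illustration of ours, not from [22]): for $$A \equiv 0$$ and $$H(p) = |p|^\gamma /\gamma$$, the Hopf–Lax formula gives $${\mathscr {L}}(x,t,y,s) = (t-s)\overline{L}((x-y)/(t-s))$$ with $$\overline{L}(v) = |v|^{\gamma '}/\gamma '$$, and both an upper bound of the form (13) and the subadditivity (12) can be checked directly:

```python
import numpy as np

# deterministic sanity check (our illustration): A = 0 and
# H(p) = |p|**gamma / gamma give, via the Hopf-Lax formula,
#   L(x,t,y,s) = (t-s) * Lbar((x-y)/(t-s)),  Lbar(v) = |v|**gp / gp
gamma = 3.0
gp = gamma / (gamma - 1.0)                 # gamma' = 1.5, note gp in (1, 2)

def fundamental(x, t, y, s):
    return np.abs(x - y) ** gp / (gp * (t - s) ** (gp - 1.0))

# an upper bound of the form (13) (here with C = 1 and no middle term)
for (x, t, y, s) in [(3.0, 2.0, 0.0, 0.0), (-1.0, 0.5, 2.0, 0.1)]:
    bound = np.abs(x - y) ** gp / (t - s) ** (gp - 1.0) + (t - s)
    assert fundamental(x, t, y, s) <= bound

# subadditivity (12): inserting an intermediate vertex can only increase cost
x, t, y, s, z, r = 3.0, 2.0, 0.0, 0.0, 1.0, 1.0
assert fundamental(x, t, y, s) <= fundamental(x, t, z, r) + fundamental(z, r, y, s)
```

Subadditivity here reflects the convexity of $$\overline{L}$$: the straight path from (y, s) to (x, t) is never more expensive than a broken one.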
To study the long-time average of $${\mathscr {L}}$$, we define, for $$\varepsilon > 0$$, the rescaled function
\begin{aligned} {\mathscr {L}}^\varepsilon (x,t,y,s,\omega ) := \varepsilon {\mathscr {L}}\left( \frac{x}{\varepsilon },\frac{t}{\varepsilon },\frac{y}{\varepsilon },\frac{s}{\varepsilon },\omega \right) . \end{aligned}
(14)
It is immediate that, for each fixed $$(y,s) \in {\mathbb {R}}^{n+1}, {\mathscr {L}}^\varepsilon (\cdot ,\cdot ,y,s,\omega )$$ solves
\begin{aligned} \left\{ \begin{aligned}&{\mathscr {L}}^\varepsilon _t - \varepsilon \mathrm{tr}\,\left( A\left( \frac{\cdot }{\varepsilon },\frac{\cdot }{\varepsilon },\omega \right) D^2{\mathscr {L}}^\varepsilon \right) + H\left( D {\mathscr {L}}^\varepsilon , \frac{\cdot }{\varepsilon },\frac{\cdot }{\varepsilon },\omega \right) = 0&\quad&\text { in } {\mathbb {R}}^n \times (s,\infty ),\\&{\mathscr {L}}^\varepsilon (\cdot ,s,y,s,\omega ) = \varepsilon \delta \left( \frac{\cdot }{\varepsilon },\frac{y}{\varepsilon }\right) = \delta (\cdot ,y)&\quad&\text { on } {\mathbb {R}}^n \times \{s\}. \end{aligned} \right. \end{aligned}
(15)
It now follows from Lemma 2, after the rescaling, that, for all $$t > s$$,
\begin{aligned} -C(t-s) \le {\mathscr {L}}^\varepsilon (x,t,y,s,\omega ) \le C\left( \frac{|x-y|^{\gamma '}}{(t-s)^{\gamma '-1}} + \varepsilon ^{\frac{\gamma '}{2}}(t-s)^{1-\frac{\gamma '}{2}} + (t-s)\right) . \end{aligned}
Note that $$\gamma ' \in (1,2)$$. As a result, for all $$0< \varepsilon < 1, R \ge 1, r \in (0,1), x,y \in B_R$$, and $$t,s \in {\mathbb {R}}$$ with $$t-s \in (r, R)$$, we have
\begin{aligned} |{\mathscr {L}}^\varepsilon (x,t,y,s,\omega )| \le CR\left( \frac{R}{r} \right) ^{\gamma ' - 1} + CR, \end{aligned}
(16)
and, hence, $${\mathscr {L}}^\varepsilon$$ is uniformly bounded on the set $$\{(x,y,t,s) \,:\, x,y \in B_R, \, r \le t-s \le R\}$$. This and the superquadratic growth of H allow us to apply the Hölder regularity results in [9, 10] to get the following uniform in $$\varepsilon$$ estimates for $${\mathscr {L}}^\varepsilon$$.
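For the reader's convenience, we spell out how (16) follows from the rescaled bound: for $$x, y \in B_R$$, $$t-s \in (r,R)$$, $$0< \varepsilon < 1$$, $$R \ge 1$$ and $$r \in (0,1)$$, since $$\gamma ' \in (1,2)$$,
\begin{aligned} \frac{|x-y|^{\gamma '}}{(t-s)^{\gamma '-1}} \le \frac{(2R)^{\gamma '}}{r^{\gamma '-1}} \le 4R\left( \frac{R}{r}\right) ^{\gamma '-1}, \qquad \varepsilon ^{\frac{\gamma '}{2}}(t-s)^{1-\frac{\gamma '}{2}} \le R^{1-\frac{\gamma '}{2}} \le R, \qquad t-s \le R, \end{aligned}
while the lower bound $$-C(t-s) \ge -CR$$ is immediate.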

### Proposition 1

Assume (A). Then there exists $$\alpha \in (0,1)$$ such that, for all $$R \ge 1, r \in (0,1)$$, and $$(y,s) \in {\mathbb {R}}^{n+1}$$, the function $${\mathscr {L}}^\varepsilon (\cdot ,\cdot ,y,s,\omega )$$ is $$\alpha$$-Hölder continuous on the set $$Q_{R,r,R}(y,s)$$, uniformly with respect to $$\varepsilon , \omega$$, and $$(y,s)$$.

We omit the proof of Proposition 1, which, in view of (16), is an immediate consequence of Theorem 6.7 of [9] (see also Theorem 1.3 of [10]).

### 3.1 Long-time average of $${\mathscr {L}}$$

The stationarity of $${\mathscr {L}}$$ in (11) and the scaling in the definition of $${\mathscr {L}}^\varepsilon$$ suggest that the limit, as $$\varepsilon \rightarrow 0$$, of $${\mathscr {L}}^\varepsilon (x,t,y,s,\omega )$$ depends only on $$(x-y)/(t-s)$$. To get the limit, it suffices to set $$(y,s) = (0,0), t = 1 > s$$, and study the limit of the function $$\rho ^{-1}{\mathscr {L}}(\rho x,\rho ,0,0,\omega )$$ as $$\rho \rightarrow \infty$$. This is possible using the subadditive ergodic theorem, which yields a random variable $$\overline{L}(x,\omega )$$.

### Theorem 4

Assume (A). For any $$x \in {\mathbb {R}}^n$$, there exists a random variable $$\overline{L}(x,\omega )$$ and $$\varOmega _{x} \in {\mathscr {F}}$$ of full measure such that, for all $$\omega \in \varOmega _{x}$$,
\begin{aligned} \lim _{\rho \rightarrow \infty } \frac{1}{\rho } {\mathscr {L}}(\rho x, \rho , 0, 0, \omega ) = \overline{L}(x,\omega ). \end{aligned}
(17)
Moreover, $$\overline{L}(x,\cdot )$$ is almost surely the constant $${\mathbb {E}}\overline{L}(x,\cdot ).$$

That $$\overline{L}(x,\cdot )$$ is deterministic is important for the final homogenization result. This is usually proved by showing that $$\overline{L}(x,\cdot )$$ is invariant with respect to the translations $$\{\tau _{(y,s)}\}_{(y,s)\in {\mathbb {R}}^{n+1}}$$. In the time homogeneous setting [22] or for first-order equations [26], the translation invariance of $$\overline{L}(x,\cdot )$$ is a consequence of the uniform in $$\varepsilon$$ continuity of $${\mathscr {L}}^\varepsilon (x,t,y,s,\omega )$$ in all of its variables. For the problem at hand, Proposition 1 gives that $${\mathscr {L}}^\varepsilon$$ is uniformly continuous with respect to its first pair of variables. The uniform continuity with respect to the second pair of variables, that is the vertex, is more subtle and has been unknown up to now.

We prove next that $$\overline{L}(x,\cdot )$$ is translation invariant without using uniform continuity of $${\mathscr {L}}^\varepsilon$$ with respect to $$(y,s)$$. The argument is based on two observations. Firstly, $$\overline{L}(x,\cdot )$$ is invariant when the vertex varies along the line $$l_x := \{(tx,t)\,:\, t \in {\mathbb {R}}\}$$. Secondly, the subadditive property (12) and the uniform bounds (16) yield one-sided bounds for $${\mathscr {L}}$$. Indeed, to bound $${\mathscr {L}}(\cdot ,\cdot ,y,s,\omega )$$ from above, we compare it with $${\mathscr {L}}(\cdot ,\cdot ,z,r,\omega )$$ at a vertex $$(z,r)$$ such that $$r>s$$, with $$|r-s|$$ and $$|z-y|$$ bounded. Similarly, for a lower bound, we compare with a vertex that has $$r<s$$.

The proof of Theorem 4 is divided into three steps. In the first step we identify $$\overline{L}(x,\omega )$$ by applying the subadditive ergodic theorem to $$\rho ^{-1} {\mathscr {L}}(\rho x, \rho , kx,k,\omega )$$ with vertex $$(kx,k) \in l_x$$. Then, we establish the invariance of $$\overline{L}(x,\omega )$$ with respect to vertices in $$l_x$$. Finally, in the third step, we show that $$\overline{L}(x,\cdot )$$ is invariant with respect to $$\{\tau _{(y,s)}\}$$ and, hence, deterministic.

### Proof (Proof of Theorem 4)

Step 1: The convergence of $${\mathscr {L}}$$ with vertex (0, 0). This is a straightforward application of the classical subadditive ergodic theorem (see, for instance, Theorem 2.5 of Akcoglu and Krengel [1]). For the sake of the reader we briefly recall the argument next.

Fix $$x \in {\mathbb {R}}^n$$, let $${\mathscr {I}}$$ be the set of intervals of the form $$[a,b) \subset [0,\infty )$$, and consider the map $$F: {\mathscr {I}}\times \varOmega \rightarrow {\mathbb {R}}$$
\begin{aligned} F([a,b),\omega ) := {\mathscr {L}}(bx,b,ax,a,\omega ). \end{aligned}
Lemma 1 yields that $$F(\cdot ,\omega )$$ is a stationary subadditive family with respect to the measure preserving semigroup $$(\theta _{c})_{c \in {\mathbb {R}}_+}$$ given by $$\theta _c\, \omega = \tau _{(cx,c)}\omega$$. Moreover, it follows from (13), that the family $$\{F([a,b),\cdot )\,:\, [a,b) \subseteq (0,1)\}$$ is uniformly integrable in $$\varOmega$$.
Then the subadditive ergodic theorem implies the existence of a random variable $$\overline{L}(x,\omega ; 0)$$, which is invariant with respect to $$\{\theta _c\}_{c\in {\mathbb {R}}_+}$$, and an event $$\varOmega _{x,0}$$ with full measure, such that, for all $$\omega \in \varOmega _{x,0},$$
\begin{aligned} \lim _{\rho \rightarrow \infty } \frac{1}{\rho } F([0,\rho ),\omega ) = \lim _{\rho \rightarrow \infty } \frac{1}{\rho } {\mathscr {L}}(\rho x, \rho , 0, 0, \omega ) = \overline{L}(x,\omega ; 0). \end{aligned}
Here, the parameter 0 in $$\overline{L}(x,\omega ;0)$$ and $$\varOmega _{x,0}$$ indicates that the vertex of $${\mathscr {L}}$$ is $$(0\cdot x, 0) = (0,0)$$.
By the same argument, for each $$k\in {\mathbb {Z}}$$, there exist a random variable $$\overline{L}(x,\cdot ;k)$$, which is invariant with respect to $$\{\theta _c\}_{c\in {\mathbb {R}}_+}$$, and events $$\varOmega _{x,k}$$ of full measure such that, for all $$\omega \in \varOmega _{x,k}$$,
\begin{aligned} \lim _{\rho \rightarrow \infty } \frac{1}{\rho } {\mathscr {L}}(\rho x, \rho , kx, k, \omega ) = \overline{L}(x,\omega ;k). \end{aligned}
(18)
We note that, for all $$c \in {\mathbb {R}}_+$$ and $$k\in {\mathbb {Z}}$$, $$\overline{L}(x, \theta _c \omega ;k) = \overline{L}(x,\omega ;k)$$. Even so, $$\overline{L}(x,\cdot ;k)$$ is not necessarily deterministic, because the semigroup $$(\theta _c)_{c \in {\mathbb {R}}_+}$$ may not be ergodic.
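The mechanism behind Step 1 is Kingman's subadditive ergodic theorem. A toy illustration (unrelated to the specific family F above, purely to show the almost sure linear growth of a stationary subadditive family): for a random walk $$S_n$$ with i.i.d. steps, $$X_{m,n} = |S_n - S_m|$$ is stationary and subadditive by the triangle inequality, and $$X_{0,n}/n$$ converges a.s. to the constant $$|{\mathbb {E}}\,[\text {step}]|$$:

```python
import numpy as np

# toy illustration of the subadditive ergodic theorem (not the paper's F):
# X[m, n] = |S_n - S_m| is stationary and subadditive by the triangle
# inequality, and X[0, n] / n converges a.s. to |mean step| = 0.6
rng = np.random.default_rng(1)
steps = rng.normal(loc=0.6, scale=1.0, size=200_000)
S = np.concatenate([[0.0], np.cumsum(steps)])

n, m, k = 200_000, 50_000, 120_000
# subadditivity: X[m, n] <= X[m, k] + X[k, n]
assert abs(S[n] - S[m]) <= abs(S[k] - S[m]) + abs(S[n] - S[k])
# almost sure linear growth with a deterministic rate
assert abs(abs(S[n]) / n - 0.6) < 0.02
```

In the proof, the role of $$X_{m,n}$$ is played by $$F([a,b),\omega )$$, whose subadditivity is (12) and whose integrability comes from (13).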
Step 2: The invariance of $$\overline{L}(x,\cdot ;k)$$ with respect to $$k \in {\mathbb {Z}}$$. Let $$\varOmega _x = \bigcap _{k\in {\mathbb {Z}}} \varOmega _{x,k}$$. Then
\begin{aligned} \overline{L}(x,\cdot \,;k) = \overline{L}(x,\cdot \,;0) \quad \text { on } \varOmega _x \ \text { for all } k \in {\mathbb {Z}}. \end{aligned}
(19)
The $$\{\theta _c\}$$ invariance of $$\overline{L}(x,\cdot \,;k)$$ and (11) imply, for all $$\omega \in \varOmega _x$$ and $$k \in {\mathbb {Z}}$$,
\begin{aligned} \overline{L}(x,\omega ;k) = \overline{L}(x,\tau _{(x,1)}\omega ;k) = \lim _{\rho \rightarrow \infty } \frac{1}{\rho } {\mathscr {L}}(\rho x + x, \rho +1, (k+1)x, k+1,\omega ). \end{aligned}
The uniform Hölder continuity of $$\frac{1}{\rho } {\mathscr {L}}(\rho \cdot , \rho \cdot , (k+1)x,k+1,\omega )$$ in Proposition 1 shows
\begin{aligned} \lim _{\rho \rightarrow \infty } \frac{1}{\rho } \left| {\mathscr {L}}(\rho x + x, \rho +1, (k+1)x, k+1,\omega ) - {\mathscr {L}}(\rho x, \rho , (k+1)x, k+1,\omega )\right| = 0. \end{aligned}
Combining the last two observations, we find that
\begin{aligned} \overline{L}(x,\cdot \,;k) = \lim _{\rho \rightarrow \infty } \frac{1}{\rho } {\mathscr {L}}(\rho x, \rho , (k+1)x, k+1,\omega ) = \overline{L}(x,\cdot \,;k+1). \end{aligned}
We henceforth denote $$\overline{L}(x,\cdot ;k)$$ by $$\overline{L}(x,\cdot )$$ and conclude that the rescaled function $$\rho ^{-1}{\mathscr {L}}(\rho x, \rho , kx, k, \omega )$$ converges to $$\overline{L}(x,\cdot )$$ for all $$k\in {\mathbb {Z}}$$ and $$\omega \in \varOmega _x$$.

Step 3: $$\overline{L}(x,\cdot )$$ is deterministic. We show that $$\overline{L}(x,\cdot )$$ is translation invariant with respect to $$\{\tau _{(y,s)}\}, (y,s) \in {\mathbb {R}}^{n+1}$$. The conclusion then follows from ergodicity of $$\{\tau _{(y,s)}\}$$.

Fix $$\omega \in \varOmega _x$$ and $$(y,s) \in {\mathbb {R}}^{n+1}$$ and choose $$k_1 \in {\mathbb {Z}}$$ so that $$k_1 \in [s,s+1)$$. It follows from (11) and (12) that
\begin{aligned} {\mathscr {L}}(\rho x, \rho , 0,0,\tau _{(y,s)}\omega )&= {\mathscr {L}}(\rho x + y, \rho + s, y,s,\omega )\nonumber \\&\le {\mathscr {L}}(\rho x + y, \rho +s, k_1x, k_1,\omega ) + {\mathscr {L}}(k_1 x, k_1, y,s,\omega ). \end{aligned}
(20)
Using (13), $$k_1 - s \in [0,1), \gamma ' \in (1,2)$$ and that $$k_1 x$$ and y are bounded, we observe
\begin{aligned} \lim _{\rho \rightarrow \infty } \frac{1}{\rho } \left| {\mathscr {L}}(k_1 x, k_1,y,s,\omega ) \right| \le \lim _{\rho \rightarrow \infty } \frac{C}{\rho } \left( \left| k_1 x - y\right| ^{\gamma '} + |k_1 - s|^{1-\frac{\gamma '}{2}} + |k_1 - s| \right) = 0. \end{aligned}
For the other term in the right-hand side of (20), we have
\begin{aligned} \begin{aligned}&\lim _{\rho \rightarrow \infty } \frac{1}{\rho } {\mathscr {L}}(\rho x + y, \rho + s, k_1 x,k_1,\omega ) = \lim _{\rho \rightarrow \infty } \frac{1}{\rho } {\mathscr {L}}(\rho x, \rho , k_1 x,k_1,\omega )\\&\quad + \lim _{\rho \rightarrow \infty } \left[ \frac{1}{\rho } {\mathscr {L}}(\rho x + y, \rho + s, k_1 x,k_1,\omega ) - \frac{1}{\rho } {\mathscr {L}}(\rho x, \rho , k_1 x,k_1,\omega )\right] . \end{aligned} \end{aligned}
As in Step 2, the second term on the right-hand side above converges to zero in view of Proposition 1, while the limit of the first term is precisely $$\overline{L}(x,\omega )$$. It follows that
\begin{aligned} \limsup _{\rho \rightarrow \infty } \frac{1}{\rho } {\mathscr {L}}(\rho x, \rho , 0,0,\tau _{(y,s)}\omega ) \le \overline{L}(x,\omega ). \end{aligned}
(21)
Similarly, given $$(y,s) \in {\mathbb {R}}^{n+1}$$, we choose $$k_2 \in {\mathbb {Z}}$$ such that $$k_2 \in (s-1,s]$$ and argue as above to find
\begin{aligned} \liminf _{\rho \rightarrow \infty } \frac{1}{\rho } {\mathscr {L}}(\rho x, \rho , 0,0,\tau _{(y,s)}\omega ) \ge \overline{L}(x,\omega ). \end{aligned}
(22)
Since $$(y,s)$$ is arbitrary and $$\varOmega _x$$ has full measure, we conclude that $$\overline{L}(x,\cdot )$$ is translation invariant. $$\square$$

Next, we show that the limit $$\overline{L}$$ is locally uniformly continuous and that the convergence in (18) holds locally uniformly in x, again with the vertex fixed.

### Lemma 3

Assume (A). The map $$\overline{L}: {\mathbb {R}}^n \rightarrow {\mathbb {R}}$$ is locally uniformly continuous, and there exists an event $$\varOmega _1$$ with $${\mathbb {P}}(\varOmega _1) = 1$$ such that, for all $$R>0$$ and $$\omega \in \varOmega _1$$,
\begin{aligned} \lim _{\rho \rightarrow \infty } \sup _{x\in B_R} \left| \frac{1}{\rho } {\mathscr {L}}(\rho x, \rho , 0, 0,\omega ) - \overline{L}(x)\right| = 0. \end{aligned}
(23)

### Proof

Fix $$R > 0$$. For all $$x,y \in B_R$$, in view of Theorem 4 and Proposition 1, there exists $$\omega \in \varOmega _x \cap \varOmega _y$$ such that
\begin{aligned} \overline{L}(x) - \overline{L}(y) = \lim _{\varepsilon \rightarrow 0} \left( {\mathscr {L}}^\varepsilon (x,1,0,0,\omega ) - {\mathscr {L}}^\varepsilon (y,1,0,0,\omega )\right) \le C|x-y|^\alpha , \end{aligned}
where the Hölder exponent $$\alpha$$ and the constant C depend only on R and the parameters in (A). Since the estimate above still holds with x and y interchanged, it follows that $$\overline{L}$$ is locally uniformly continuous.

For each $$z \in {\mathbb {Q}}^n$$, let $$\varOmega _z$$ be the event of full measure defined in Theorem 4. Let $$\varOmega _1 := \bigcap _{z \in {\mathbb {Q}}^n} \varOmega _z \in {\mathscr {F}}$$ and observe that $${\mathbb {P}}(\varOmega _1) = 1$$.

Fix $$R > 0$$. For any $$x \in B_R$$, there exists a sequence $$\{x_k\} \subset {\mathbb {Q}}^n \cap B_{2R}$$ such that $$x_k \rightarrow x$$ as $$k \rightarrow \infty$$. Note that, for all $$\omega \in \varOmega _1$$,
\begin{aligned} \begin{aligned} \left| {\mathscr {L}}^\varepsilon (x,1,0,0,\omega ) - \overline{L}(x)\right| \le&\left| {\mathscr {L}}^\varepsilon (x,1,0,0,\omega ) - {\mathscr {L}}^\varepsilon (x_k,1,0,0,\omega )\right| \\&+ \left| {\mathscr {L}}^\varepsilon (x_k,1,0,0,\omega ) - \overline{L}(x_k)\right| + \left| \overline{L}(x_k)- \overline{L}(x)\right| . \end{aligned} \end{aligned}
Proposition 1, the fact that $$\{x_k\}_{k\in {\mathbb {N}}} \cup \{x\} \subseteq B_{2R}$$, and the local uniform continuity of $$\overline{L}$$ yield that, for all $$\omega \in \varOmega _1, \lim _{\varepsilon \rightarrow 0} {\mathscr {L}}^\varepsilon (x,1,0,0,\omega ) = \overline{L}(x)$$. It also follows from these facts that $$\{{\mathscr {L}}^\varepsilon (\cdot ,1,0,0,\omega )\}_{\varepsilon \in (0,1)}$$ and $$\overline{L}$$ are equicontinuous on $$B_{2R}$$, and, hence, (23) holds. $$\square$$

Next, we prove Theorem 1. The argument follows, as in [6, 8], from a combination of Egoroff's and Birkhoff's ergodic theorems. We need, however, to extend the method to the setting of space-time random environments and, in particular, to modify the reasoning so that it does not rely on uniform continuity with respect to the vertex.

### Proof (Proof of Theorem 1)

Step 1. We first upgrade (23) to the following: for all $$0< r < R$$ with $$R \ge 1$$,
\begin{aligned} {\mathbb {P}}\left[ \lim _{\rho \rightarrow \infty } \sup _{(x,t) \in Q_{R,r,R}} \left| \frac{1}{\rho } {\mathscr {L}}(\rho x, \rho t, 0,0,\omega ) - t\overline{L}\left( \frac{x}{t}\right) \right| = 0\right] = 1. \end{aligned}
(24)
Fix an $$\omega \in \varOmega _1$$ and observe that
\begin{aligned} \frac{1}{\rho } {\mathscr {L}}(\rho x, \rho t, 0,0,\omega ) - t\overline{L}\left( \frac{x}{t}\right) = t\left[ \frac{1}{\rho t} {\mathscr {L}}\left( \frac{\rho t x}{t}, \rho t, 0,0,\omega \right) - \overline{L}\left( \frac{x}{t}\right) \right] . \end{aligned}
Since $$r \le t \le R$$ and $$|x| \le R$$, we have $$|x/t| \le R/r$$, and
\begin{aligned} \sup _{(x,t) \in Q_{R,r,R}} \left| \frac{1}{\rho t} {\mathscr {L}}(\rho x, \rho t, 0,0,\omega ) - \overline{L}\left( \frac{x}{t}\right) \right| \le \sup _{\begin{array}{c} r \le t \le R\\ y \in B_{R/r} \end{array}} \left| \frac{1}{\rho t} {\mathscr {L}}(\rho t y, \rho t, 0,0,\omega ) - \overline{L}(y)\right| . \end{aligned}
In view of (23), for any $$\delta > 0$$, there exists $$\rho _\delta = \rho _\delta (r,R,\omega )> 0$$ such that, if $$\rho ' > \rho _\delta$$, then
\begin{aligned} \sup _{y \in B_{R/r}} \left| \frac{1}{\rho '} {\mathscr {L}}(\rho ' y, \rho ', 0,0,\omega ) - \overline{L}(y)\right| < \delta . \end{aligned}
It follows that, if $$\rho > r^{-1} \rho _\delta$$, then $$\rho t > \rho _\delta$$ for all $$t \in [r,R]$$ and, as a consequence,
\begin{aligned} \sup _{r \le t \le R} \, \sup _{y \in B_{R/r}} \left| \frac{1}{\rho t} {\mathscr {L}}(\rho t y, \rho t, 0,0,\omega ) - \overline{L}(y)\right| < \delta . \end{aligned}
Combining the estimates above yields (24).
Step 2. We show that, for all $$R> r > 0$$ with $$R \ge 1$$,
\begin{aligned} {\mathbb {P}}\left[ \lim _{\rho \rightarrow \infty } \sup _{(y,s) \in Q_R} \sup _{(x,t)\in Q_{R,r,R}(y,s)} \left| \frac{1}{\rho } {\mathscr {L}}(\rho x, \rho t, \rho y, \rho s,\omega ) - (t-s)\overline{L}\left( \frac{x-y}{t-s}\right) \right| = 0\right] = 1.\nonumber \\ \end{aligned}
(25)
Note that, by choosing sequences $$R_k \uparrow \infty$$ and $$r_k \downarrow 0$$ and intersecting the corresponding events of full measure, the above statement is equivalent to that of Theorem 1. Hence, we only need to prove that
\begin{aligned} {\mathbb {P}}\left[ \limsup _{\rho \rightarrow \infty } \sup _{(y,s) \in Q_R} \sup _{(x,t)\in Q_{R,r,R}(y,s)} \frac{1}{\rho } {\mathscr {L}}(\rho x, \rho t, \rho y, \rho s,\omega ) - (t-s)\overline{L}\left( \frac{x-y}{t-s}\right) \le 0 \right] = 1,\nonumber \\ \end{aligned}
(26)
and
\begin{aligned} {\mathbb {P}}\left[ \liminf _{\rho \rightarrow \infty } \inf _{(y,s) \in Q_R} \inf _{(x,t)\in Q_{R,r,R}(y,s)} \frac{1}{\rho } {\mathscr {L}}(\rho x, \rho t, \rho y, \rho s,\omega ) - (t-s)\overline{L}\left( \frac{x-y}{t-s}\right) \ge 0\right] = 1.\nonumber \\ \end{aligned}
(27)
Observe that, in view of (24), as $$\rho \rightarrow \infty$$ and for all $$\omega \in \varOmega _1$$,
\begin{aligned} X_\rho (\omega ) := \sup _{(x,t) \in Q_{R,r,R}} \left| \frac{1}{\rho } {\mathscr {L}}(\rho x, \rho t, 0,0,\omega ) - t\overline{L}\left( \frac{x}{t}\right) \right| \rightarrow 0. \end{aligned}
Then Egoroff’s theorem yields, for any $$0< \varepsilon <1$$, an event $$\varOmega _\varepsilon \subset \varOmega _1$$ such that $${\mathbb {P}}(\varOmega _\varepsilon ) \ge 1- \varepsilon ^{n+1}/8$$ and
\begin{aligned} \lim _{\rho \rightarrow \infty } \sup _{\omega \in \varOmega _\varepsilon } X_\rho (\omega ) = 0. \end{aligned}
In particular, there exists $$T_\varepsilon > 0$$ such that, for all $$\rho > T_\varepsilon$$,
\begin{aligned} \sup _{\omega \in \varOmega _\varepsilon } X_\rho (\omega ) < \frac{\varepsilon }{2}. \end{aligned}
(28)
The ergodic theorem gives an event $${{\widetilde{\varOmega }}}_\varepsilon$$ such that $${\mathbb {P}}({{\widetilde{\varOmega }}}_\varepsilon ) = 1$$ and for all $$\omega \in {{\widetilde{\varOmega }}}_\varepsilon$$,
\begin{aligned} \lim _{K\rightarrow \infty } \frac{1}{\mathrm {Vol}(Q_K)} \int _{B_K} \int _{-K}^K \chi _{\varOmega _\varepsilon } \left( \tau _{(y,s)} \omega \right) \hbox {d}s \hbox {d}y = {\mathbb {P}}(\varOmega _\varepsilon ) \ge 1- \frac{1}{8}\varepsilon ^{n+1}. \end{aligned}
It follows that, for every $$\omega \in {{\widetilde{\varOmega }}}_\varepsilon$$, there exists $$K_\varepsilon (\omega )$$ such that if $$K > K_\varepsilon (\omega )$$,
\begin{aligned} \mathrm {Vol } \left\{ (y,s) \in Q_K \,:\, \tau _{(y,s)}\omega \in \varOmega _\varepsilon \right\} \ge \left( 1-\frac{1}{4}\varepsilon ^{n+1}\right) \mathrm {Vol}(Q_K). \end{aligned}
Let $${{\widetilde{\varOmega }}}_1$$ be $$\varOmega _1$$, for each $$k \in {\mathbb {N}}, k \ge 2$$, let $${{\widetilde{\varOmega }}}_{\frac{1}{k}}$$ be defined as $${{\widetilde{\varOmega }}}_{\varepsilon }$$ with $$\varepsilon = {\frac{1}{k}}$$, set $${{\widetilde{\varOmega }}}: = \cap _{k=1}^\infty {{\widetilde{\varOmega }}}_{\frac{1}{k}},$$ and note $${{\widetilde{\varOmega }}} \in {\mathscr {F}}$$ and $${\mathbb {P}}({{\widetilde{\varOmega }}}) = 1$$.

Fix now an $$\omega \in {{\widetilde{\varOmega }}}$$. For any $$\varepsilon > 0$$ small, choose k large such that $${\frac{1}{k}} < \frac{\varepsilon }{2}$$, and, for $$R \ge 1$$ given, set $$\rho _\varepsilon (\omega ) = R^{-1} \max \{T_{1/k}, K_{1/k}(\omega )\}$$, and observe that if $$\rho > \rho _\varepsilon$$, then $$\rho R > \max \{T_{1/k},K_{1/k}\}$$.

For each $$(y,s) \in Q_R$$, let $$C^+_{\rho \varepsilon R}(y,s)$$ (respectively, $$C^-_{\rho \varepsilon R}(y,s)$$) be the region bounded between the cylinder $$Q_{\rho \varepsilon R}(y,s)$$ and the cone at $$(y,s)$$ with unit upward (respectively, downward) opening, that is,
\begin{aligned} \begin{aligned} C^+_{\rho \varepsilon R}(y,s) := Q_{\rho \varepsilon R}(y,s) \cap \{(x,t) \,:\, t > s, \, |x - y|/(t-s) \le 1\},\\ C^-_{\rho \varepsilon R}(y,s) := Q_{\rho \varepsilon R}(y,s) \cap \{(x,t) \,:\, t < s, \, |x - y|/(s-t) \le 1\}, \end{aligned} \end{aligned}
and note that, for $$\varepsilon$$ small,
\begin{aligned} \mathrm {Vol} \left( Q_{\rho R} \cap C^\pm _{\varepsilon \rho R} \right) \, \ge \, \frac{1}{8}\varepsilon ^{n+1} \, \mathrm {Vol}(Q_{\rho R}). \end{aligned}
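The power $$\varepsilon ^{n+1}$$ can be read off from a direct computation; the following sketch suppresses dimensional constants (so the precise factor $$\frac{1}{8}$$ is not reproduced) and assumes the normalization $$Q_R = B_R \times (-R,R)$$, with $$c_n$$ the volume of the unit ball in $${\mathbb {R}}^n$$. Slicing the cone at height $$h$$ above the vertex,

```latex
\mathrm{Vol}\big(C^{+}_{\rho\varepsilon R}(y,s)\big)
 \;=\; \int_0^{\rho\varepsilon R} c_n\, h^{n}\,\mathrm{d}h
 \;=\; \frac{c_n}{n+1}\,(\rho\varepsilon R)^{n+1},
\qquad
\mathrm{Vol}(Q_{\rho R}) \;=\; 2\,c_n\,(\rho R)^{n+1},
```

so the two volumes have ratio $$\varepsilon ^{n+1}/(2(n+1))$$, a fixed multiple of $$\varepsilon ^{n+1}$$; intersecting with $$Q_{\rho R}$$ costs at most a further dimensional constant.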
It follows that, for every $$(y,s) \in Q_{R}$$, there exists $$({{\hat{y}}}, {{\hat{s}}}) \in Q_{R}$$ such that $$(\rho {\hat{y}}, \rho {\hat{s}}) \in C^+_{\rho \varepsilon R}(\rho y, \rho s)$$ and $$\tau _{(\rho {{\hat{y}}}, \rho {{\hat{s}}})}\omega \in \varOmega _{1/k}$$.
We observe that
\begin{aligned} \frac{1}{\rho } {\mathscr {L}}(\rho x, \rho t, 0, 0, \tau _{(\rho y, \rho s)}\omega ) - t\overline{L}\left( \frac{x}{t}\right) = \frac{1}{\rho } {\mathscr {L}}(\rho x, \rho t, 0, 0, \tau _{(\rho {\hat{y}},\rho {\hat{s}})}\omega ) - t\overline{L}\left( \frac{x}{t}\right) + E_\rho ,\nonumber \\ \end{aligned}
(29)
with
\begin{aligned} E_\rho :=\, \frac{1}{\rho } {\mathscr {L}}(\rho (x+y), \rho (t+s), \rho y, \rho s,\omega ) - \frac{1}{\rho } {\mathscr {L}}(\rho (x+{\hat{y}}), \rho (t+{\hat{s}}), \rho {\hat{y}}, \rho {\hat{s}},\omega ), \end{aligned}
which is the error term resulting from the change of vertices. Because $$\tau _{(\rho {{\hat{y}}}, \rho {{\hat{s}}})}\omega \in \varOmega _{1/k}$$, the difference of the first two terms on the right-hand side of (29) is bounded from above by $$\frac{\varepsilon }{2}$$.
In view of (12), the error $$|E_\rho |$$ can be bounded by
\begin{aligned} |E_\rho | \le |E_\rho ^1|+ |E_\rho ^2|, \end{aligned}
where
\begin{aligned} E_\rho ^1:= \frac{1}{\rho } {\mathscr {L}}(\rho (x+y), \rho (t+s), \rho y, \rho s,\omega ) - \frac{1}{\rho } {\mathscr {L}}(\rho (x+{\hat{y}}), \rho (t+{\hat{s}}), \rho y, \rho s,\omega ), \end{aligned}
and
\begin{aligned} E_\rho ^2:=\frac{1}{\rho } {\mathscr {L}}(\rho {\hat{y}}, \rho {\hat{s}}, \rho y, \rho s,\omega ). \end{aligned}
Proposition 1 yields that $$|E_\rho ^1|=O(\varepsilon ^\alpha )$$ for some exponent $$\alpha$$ depending on R, while (13) gives
\begin{aligned} |E_\rho ^2| \le C\left( |s-{\hat{s}}| + \rho ^{-\frac{\gamma '}{2}} |s-{\hat{s}}|^{1-\frac{\gamma '}{2}}\right) \le CR \varepsilon , \end{aligned}
provided that $$|y - {\hat{y}}|/|s-{\hat{s}}| \le 1$$ and $$|s - {\hat{s}}| \le \varepsilon R$$.
In conclusion, we have, uniformly for all $$(y,s) \in Q_R$$ and $$(x,t) \in Q_{R,r,R}$$,
\begin{aligned} \frac{1}{\rho } {\mathscr {L}}(\rho x, \rho t, 0, 0, \tau _{(\rho y, \rho s)}\omega ) - t\overline{L}\left( \frac{x}{t}\right) \le \frac{\varepsilon }{2} + O(\varepsilon ^\alpha ) + CR \varepsilon , \end{aligned}
and, therefore, for all $$\omega \in {{\widetilde{\varOmega }}}$$,
\begin{aligned} \sup _{(y,s) \in Q_R} \, \sup _{(x,t) \in Q_{R,r,R}} \, \frac{1}{\rho } {\mathscr {L}}(\rho x, \rho t, 0, 0, \tau _{(\rho y, \rho s)}\omega ) - t\overline{L}\left( \frac{x}{t}\right) \le \frac{\varepsilon }{2} + \varrho (\varepsilon R) + CR \varepsilon . \end{aligned}
Sending $$\varepsilon \rightarrow 0$$, we obtain that
\begin{aligned} {\mathbb {P}}\left[ \limsup _{\rho \rightarrow \infty } \sup _{(y,s) \in Q_R} \sup _{(x,t) \in Q_{R,r,R}} \frac{1}{\rho } {\mathscr {L}}(\rho x, \rho t, 0,0,\tau _{(\rho y,\rho s)}\omega ) - t\overline{L}\left( \frac{x}{t}\right) \le 0\right] = 1. \end{aligned}
In view of (11), the statement above is equivalent to (26).

Similarly, by repeating the argument above, choosing $$(\rho {\hat{y}},\rho {\hat{s}}) \in C^-_{\rho \varepsilon R}(\rho y, \rho s)$$ and $$\tau _{(\rho {\hat{y}}, \rho {\hat{s}})} \omega \in \varOmega _{1/k}$$, we can bound the quantity in (29) from below, and establish (27). $$\square$$

Finally, we note the following fact about $$\overline{L}$$ and $$\overline{H}$$ defined by (5).

### Corollary 1

The functions $$\overline{L} : {\mathbb {R}}^n \rightarrow {\mathbb {R}}$$ and $$\overline{H} : {\mathbb {R}}^n \rightarrow {\mathbb {R}}$$ are convex.

### Proof

The convexity of $$\overline{L}$$ is a straightforward consequence of Theorem 1 and, as the Legendre transform of a convex function, $$\overline{H}$$ is also convex. $$\square$$
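For the convexity of $$\overline{L}$$, the midpoint computation can be made explicit; the following sketch combines the subadditivity property (12) with the convergence of Theorem 1, applied to each of the three terms. For $$x, y \in {\mathbb {R}}^n$$ and $$\omega$$ in the full-measure event of Theorem 1,

```latex
\frac{1}{2\rho}\,\mathscr{L}\big(\rho(x+y),\,2\rho,\,0,\,0,\,\omega\big)
\;\le\;
\frac{1}{2\rho}\,\mathscr{L}\big(\rho(x+y),\,2\rho,\,\rho x,\,\rho,\,\omega\big)
\;+\;
\frac{1}{2\rho}\,\mathscr{L}\big(\rho x,\,\rho,\,0,\,0,\,\omega\big),
```

and letting $$\rho \rightarrow \infty$$ yields $$\overline{L}\big(\frac{x+y}{2}\big) \le \frac{1}{2}\overline{L}(y) + \frac{1}{2}\overline{L}(x)$$. Midpoint convexity together with the continuity of $$\overline{L}$$ established in Lemma 3 gives convexity.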

## 4 The proof of Theorem 3

According to the remarks at the end of Sect. 2, the proof of Theorem 3 given in this section also completes the proof of the homogenization result of Theorem 2.

For each $$p \in {\mathbb {R}}^n$$, let $$w_\varepsilon := \varepsilon w^\varepsilon (\frac{\cdot }{\varepsilon },\frac{\cdot }{\varepsilon };\omega ,p)$$, where $$w^\varepsilon$$ is the solution to the approximate cell problem (9). It follows that
\begin{aligned} w_\varepsilon + \left( w_\varepsilon \right) _t - \varepsilon \mathrm{tr}\,\left( A\left( \frac{\cdot }{\varepsilon },\frac{\cdot }{\varepsilon },\omega \right) D^2 w_\varepsilon \right) + H\left( p+Dw_\varepsilon ,\frac{\cdot }{\varepsilon },\frac{\cdot }{\varepsilon },\omega \right) = 0 \quad \text { in } {\mathbb {R}}^n \times {\mathbb {R}}.\nonumber \\ \end{aligned}
(30)
Then, for any $$R>0$$, (10) is equivalent to
\begin{aligned} \limsup _{\varepsilon \rightarrow 0} \sup _{(x,t) \in Q_R} \left| w_\varepsilon (x,t;p,\omega ) + \overline{H}(p)\right| = 0. \end{aligned}
(31)
For the proof of Theorem 3 we need to recall some notions from convex analysis. We have seen that $$\overline{H}$$ is a convex function defined on $${\mathbb {R}}^n$$. The epigraph of $$\overline{H}$$ is defined by
\begin{aligned} \mathrm{epi}\,(\overline{H}) = \left\{ (p,s)\,:\, p \in {\mathbb {R}}^n \ \text {and} \ s \in [\overline{H}(p),\infty ) \right\} . \end{aligned}
Note that $$\mathrm{epi}\,(\overline{H})$$ is a closed convex subset of $${\mathbb {R}}^{n+1}$$. Given a closed convex subset D of $${\mathbb {R}}^k$$, a point $$p \in D$$ is called an extreme point if, whenever $$p = \lambda x + (1-\lambda )y$$ with $$x,y \in D$$ and $$\lambda \in (0,1)$$, then either $$x = p$$ or $$y=p$$. A point $$p \in D$$ is called an exposed point if there exists a linear functional $$f: {\mathbb {R}}^k \rightarrow {\mathbb {R}}$$ such that $$f(p) > f(p')$$ for all $$p' \in D \setminus \{p\}$$.
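Every exposed point of D is extreme, but the converse fails in general. A standard example, included here for illustration only, is the two-dimensional "stadium"

```latex
D \;=\; \operatorname{conv}\!\big( B_1((-1,0)) \,\cup\, B_1((1,0)) \big) \;\subset\; {\mathbb {R}}^2 .
```

The four corner points $$(\pm 1, \pm 1)$$ are extreme, yet every linear functional attaining its maximum at one of them also attains it along an adjacent flat boundary segment, so none of them is exposed. Straszewicz's theorem [25, Theorem 18.6], used later in the proof of Theorem 3, compensates for this gap: the exposed points are dense in the extreme points.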

We denote by $$\partial \overline{L}(q)$$ the subdifferential of $$\overline{L}$$ at q. If $$\partial \overline{L}(q)$$ contains exactly one element, then $$\overline{L}$$ is differentiable at q and the unique element is $$D\overline{L}(q)$$. The following classification of vectors $$p \in {\mathbb {R}}^n$$ will be useful in the proof of Theorem 3.
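The Legendre duality between $$\overline{L}$$ and $$\overline{H}$$, and in particular the Fenchel equality that holds when $$p \in \partial \overline{L}(q)$$, can be illustrated numerically. The sketch below is a hypothetical one-dimensional model, not taken from the paper: it uses $$L(q) = |q|^{3/2}/(3/2)$$, whose Legendre transform is the superquadratic $$H(p) = |p|^3/3$$, and all names (`H_num`, `qs`, and so on) are ours.

```python
import numpy as np

# Model convex Lagrangian L(q) = |q|^g / g with g = 3/2; its Legendre
# transform is H(p) = |p|^{g*} / g* with 1/g + 1/g* = 1, i.e. g* = 3,
# so H is superquadratic, as required of the Hamiltonians in this paper.
g = 1.5
L = lambda q: np.abs(q) ** g / g
H_exact = lambda p: np.abs(p) ** 3 / 3

# Discrete Legendre transform: H(p) = sup_q (p*q - L(q)) over a fine grid.
qs = np.linspace(-10.0, 10.0, 200001)
def H_num(p):
    return np.max(p * qs - L(qs))

p = 2.0
q = p ** 2                     # chosen so that L'(q) = sign(q)|q|^{1/2} = p

# The discrete transform matches the closed form ...
assert abs(H_num(p) - H_exact(p)) < 1e-6
# ... and Fenchel's equality H(p) + L(q) = p*q holds at p in dL(q),
# the analogue of (33) in this model case.
assert abs(H_exact(p) + L(q) - p * q) < 1e-12
```

For a generic pair (p,q) only the inequality $$H(p)+L(q)\ge p\,q$$ holds; equality characterizes $$p \in \partial L(q)$$, which is how (33) is used in Step 1 of the proof of Theorem 3.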

### Lemma 4

Let $$\overline{L}$$ and $$\overline{H}$$ be defined by Theorem 4 and (5), respectively. Then
1. (i)

for all $$p \in {\mathbb {R}}^n, (p,\overline{H}(p))$$ is on the boundary of $$\mathrm{epi}\,(\overline{H})$$ and $$p \in \partial \overline{L}(q)$$ for some $$q \in {\mathbb {R}}^n$$, and

2. (ii)

if $$(p,\overline{H}(p))$$ is an exposed point of $$\mathrm{epi}\,(\overline{H})$$, then $$p = D\overline{L}(q)$$ for some $$q \in {\mathbb {R}}^n$$.

### Proof

The domain of $$\overline{H}$$ is $${\mathbb {R}}^n$$ and, since $$\overline{H}$$ is continuous and locally bounded, it follows that $$\overline{H}$$ is a closed proper convex function. The first claim of part (i) is obvious. Moreover, since $$\overline{H}$$ is finite and convex on $${\mathbb {R}}^n$$, the subdifferential $$\partial \overline{H}(p)$$ is nonempty; that is, there exists $$q \in {\mathbb {R}}^n$$ so that the function $$x \mapsto x \cdot q - \overline{H}(x)$$ achieves its supremum at p. It follows that $$q \in \partial \overline{H}(p)$$ and, since $$\overline{H}$$ is a closed proper convex function, [25, Corollary 23.5.1] yields that $$p \in \partial \overline{L}(q)$$ also holds. Part (ii) follows directly from [25, Corollary 25.1.2]. $$\square$$

### Proof (Proof of Theorem 3)

Step 1: We prove that for any fixed $$\omega \in {{\widetilde{\varOmega }}}, p \in {\mathbb {R}}^n$$ and $$R \ge 1$$,
\begin{aligned} \limsup _{\varepsilon \rightarrow 0} \sup _{(x,t) \in Q_R} \left( w_\varepsilon (x,t;p) + \overline{H}(p) \right) \le 0. \end{aligned}
(32)
Lemma 4 (i) yields a $$q \in {\mathbb {R}}^n$$ such that $$p \in \partial \overline{L}(q)$$. This implies
\begin{aligned} \overline{H}(p) + \overline{L}(q) - p\cdot q = 0. \end{aligned}
(33)
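Indeed, (33) is the Fenchel equality: since $$\overline{H}$$ is the Legendre transform of $$\overline{L}$$ (see (5)), the subgradient inequality $$\overline{L}(y) \ge \overline{L}(q) + p\cdot (y-q)$$ gives

```latex
\overline{H}(p)
 \;=\; \sup_{y\in{\mathbb {R}}^n}\big(p\cdot y-\overline{L}(y)\big)
 \;\le\; \sup_{y\in{\mathbb {R}}^n}\big(p\cdot y-\overline{L}(q)-p\cdot(y-q)\big)
 \;=\; p\cdot q-\overline{L}(q),
```

while the choice $$y = q$$ in the supremum yields the reverse inequality $$\overline{H}(p) \ge p\cdot q - \overline{L}(q)$$.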
Arguing by contradiction, we assume (32) fails, so there exist $$\delta > 0$$, a subsequence $$\varepsilon _k \rightarrow 0$$, and a sequence $$\{(z_k,s_k)\}_{k\in {\mathbb {N}}} \subseteq Q_R$$ such that
\begin{aligned} w_{\varepsilon _k}(z_k,s_k) + \overline{H}(p) \ge \delta > 0. \end{aligned}
For notational simplicity, the subscript k in $$\varepsilon _k$$ and in $$(z_k,s_k)$$ is henceforth suppressed. Since $$\omega$$ and p are also fixed, any dependence on these parameters is also suppressed.
Next, for some small real number $$c>0$$ and some $$\lambda \in (0,1)$$ close to 1 to be chosen and $$(x,t) \in {\mathbb {R}}^n \times (-\infty ,s)$$, we define
\begin{aligned} W^\varepsilon (x,t) := \lambda \left( w_\varepsilon (x,t) - w_\varepsilon (z,s)\right) - c\delta \psi (x) -c\delta (s - t) , \end{aligned}
where
\begin{aligned} \psi (x):=\left( (1+|x-z|^2)^{\frac{1}{2}} -1\right) ; \end{aligned}
note that
\begin{aligned} |D\psi (x)| < 1 \quad \text {and} \quad (1+|x-z|^2)^{-\frac{3}{2}} Id \le D^2 \psi (x) \le (1+|x-z|^2)^{-\frac{1}{2}} Id. \end{aligned}
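These bounds follow from a direct computation (recall that $$\psi$$ is centered at z, so distances below are measured from z):

```latex
D\psi(x) \;=\; \frac{x-z}{(1+|x-z|^2)^{1/2}},
\qquad
D^2\psi(x) \;=\; \frac{Id}{(1+|x-z|^2)^{1/2}} \;-\; \frac{(x-z)\otimes(x-z)}{(1+|x-z|^2)^{3/2}},
```

so $$|D\psi | < 1$$ everywhere, and the eigenvalues of $$D^2\psi (x)$$ are $$(1+|x-z|^2)^{-3/2}$$ in the radial direction $$x - z$$ and $$(1+|x-z|^2)^{-1/2}$$ on its orthogonal complement.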
Let $$U_\varepsilon := \{(x,t) \in {\mathbb {R}}^n \times {\mathbb {R}}\,:\, W^\varepsilon \ge -\frac{\delta }{4}\} \cap \{(x,t) \in {\mathbb {R}}^n \times {\mathbb {R}}\,:\, t \le s\}$$. It follows that
\begin{aligned} W^\varepsilon _t - \varepsilon \mathrm{tr}\,\left( A\left( \frac{x}{\varepsilon },\frac{t}{\varepsilon }\right) D^2 W^\varepsilon \right) + H\left( p+DW^\varepsilon ,\frac{x}{\varepsilon },\frac{t}{\varepsilon }\right) \le \overline{H}(p) - \frac{\delta }{4} \quad \text {in } U_\varepsilon . \end{aligned}
(34)
Indeed, if $$\varphi \in C^2({\mathbb {R}}^n\times {\mathbb {R}})$$ and if $$W^\varepsilon - \varphi$$ attains a local maximum at $$(x_0,t_0)$$ in $$U_\varepsilon$$, then the mapping
\begin{aligned} (x,t) \mapsto w_\varepsilon (x,t) - \lambda ^{-1} (\varphi (x,t) + c\delta \psi (x) + c\delta (s-t)) \end{aligned}
attains a local maximum at $$(x_0,t_0)$$.
Since $$w_\varepsilon$$ is the viscosity solution of (30), we find
\begin{aligned}&w_\varepsilon (x_0,t_0) + \lambda ^{-1} \left( \varphi _t(x_0,t_0) - c\delta \right) \\&\qquad -\, \lambda ^{-1} \varepsilon \mathrm{tr}\,\left( A\left( \frac{x_0}{\varepsilon },\frac{t_0}{\varepsilon }\right) \left( D^2 \varphi (x_0,t_0) + c\delta D^2 \psi (x_0)\right) \right) \\&\qquad +\, H\left( p + \lambda ^{-1} (D\varphi (x_0,t_0) + c\delta D\psi (x_0)),\frac{x_0}{\varepsilon },\frac{t_0}{\varepsilon }\right) \le 0, \end{aligned}
while the convexity of H in p gives
\begin{aligned} \begin{aligned}&H\left( p+ D\varphi (x_0,t_0),\frac{x_0}{\varepsilon },\frac{t_0}{\varepsilon }\right) \\&\quad = H\Big (\lambda \Big (p+\frac{D\varphi (x_0,t_0) + c\delta D\psi (x_0)}{\lambda }\Big )+(1-\lambda )\Big (p-\frac{c\delta D\psi (x_0)}{1-\lambda }\Big ),\frac{x_0}{\varepsilon },\frac{t_0}{\varepsilon }\Big )\\&\quad \le \lambda H\Big (p+\frac{D\varphi (x_0,t_0) + c\delta D\psi (x_0)}{\lambda },\frac{x_0}{\varepsilon },\frac{t_0}{\varepsilon }\Big ) + (1-\lambda ) H\Big (p-\frac{c\delta D\psi (x_0)}{1-\lambda },\frac{x_0}{\varepsilon },\frac{t_0}{\varepsilon }\Big ). \end{aligned} \end{aligned}
We use the growth assumption (6) to choose $$\lambda = \lambda (p) \in (0,1)$$ with $$1 - \lambda (p)$$ small enough that
\begin{aligned} -\lambda \delta + \lambda \overline{H}(p) + (1-\lambda ) \sup _{p' \in B_1(p)} \sup _{(x,t) \in {\mathbb {R}}^{n} \times {\mathbb {R}}} H(p',x,t) \le \overline{H}(p) - \frac{3\delta }{4}, \end{aligned}
and we fix a small enough $$c>0$$ so that $$c < 1/8$$ and $$c\delta <1-\lambda$$. Then, for all $$(x,t) \in {\mathbb {R}}^n \times {\mathbb {R}}$$,
\begin{aligned} p - c\delta (1-\lambda )^{-1} D\psi (x) \in B_1(p) \quad \text {and}\quad |\mathrm{tr}\,(A(x,t) c\delta D^2 \psi (x))| < \frac{\delta }{16}. \end{aligned}
Combining the estimates above, we get, for $$\varepsilon$$ sufficiently small,
\begin{aligned}&\varphi _t(x_0,t_0) - \varepsilon \mathrm{tr}\,\left( A\left( \frac{x_0}{\varepsilon },\frac{t_0}{\varepsilon }\right) D^2 \varphi (x_0,t_0)\right) + H\left( p+D\varphi (x_0,t_0),\frac{x_0}{\varepsilon },\frac{t_0}{\varepsilon }\right) \nonumber \\&\quad \le -\,\lambda w_\varepsilon (x_0,t_0) + c\delta + \varepsilon c\delta \mathrm{tr}\,\left( A\left( \frac{x_0}{\varepsilon },\frac{t_0}{\varepsilon }\right) D^2\psi (x_0)\right) \nonumber \\&\qquad +\, H\left( p+D\varphi (x_0,t_0),\frac{x_0}{\varepsilon },\frac{t_0}{\varepsilon }\right) \nonumber \\&\qquad -\, \lambda H\left( p+\frac{D\varphi (x_0,t_0) + c\delta D\psi (x_0)}{\lambda },\frac{x_0}{\varepsilon },\frac{t_0}{\varepsilon }\right) \nonumber \\&\quad \le -\,W^\varepsilon (x_0,t_0) - \lambda w_\varepsilon (z,s) + (1-\lambda ) H\left( p-\frac{c\delta D\psi (x_0)}{1-\lambda },\frac{x_0}{\varepsilon },\frac{t_0}{\varepsilon }\right) + \frac{\delta }{4}\nonumber \\&\quad \le -W^\varepsilon (x_0,t_0) - \lambda \delta + \lambda \overline{H}(p) + (1-\lambda ) \sup _{p' \in B_1(p)} \Vert H(p',\cdot ,\cdot )\Vert _{L^\infty } + \frac{\delta }{4}\nonumber \\&\quad \le -W^\varepsilon (x_0,t_0) + \overline{H}(p) - \frac{\delta }{2} \le \, \overline{H}(p) - \frac{\delta }{2}, \end{aligned}
(35)
with the last inequality holding because $$(x_0,t_0) \in U_\varepsilon$$ and, hence, $$-W^\varepsilon (x_0,t_0) \le \frac{\delta }{4}$$. This proves (34).
Next we compare $$W^\varepsilon$$ with $$V^\varepsilon := V^\varepsilon (x,t)$$ which is defined, for some large $$r>0$$ to be chosen, by
\begin{aligned} V^\varepsilon (x,t) := {\mathscr {L}}^\varepsilon (x,t,z-rq,s-r) {-} {\mathscr {L}}^\varepsilon (z,s,z-rq,s-r) - p \cdot (x-z) + \overline{H}(p) (t-s).\nonumber \\ \end{aligned}
(36)
In view of (15), $$V^\varepsilon$$ satisfies
\begin{aligned} V^\varepsilon _t - \varepsilon \mathrm{tr}\,\left( A\left( \frac{x}{\varepsilon },\frac{t}{\varepsilon }\right) D^2 V^\varepsilon \right) + H\left( p+DV^\varepsilon ,\frac{x}{\varepsilon },\frac{t}{\varepsilon }\right) = \overline{H}(p) \quad \text { in } {\mathbb {R}}^n \times (-r+s,\infty ).\quad \end{aligned}
Let $$\partial _s U_\varepsilon := \{t<s\}\cap \partial \{W^\varepsilon \ge - \frac{\delta }{4}\}$$ be the parabolic boundary of the space-time domain $$U_\varepsilon$$ and note that $$W^\varepsilon = - \frac{\delta }{4}$$ on $$\partial _s U_\varepsilon$$.
The comparison principle for (34) yields
\begin{aligned} \sup _{U_\varepsilon } \left( W^\varepsilon - V^\varepsilon \right) = \sup _{\partial _s U_\varepsilon } \left( W^\varepsilon - V^\varepsilon \right) = -\frac{\delta }{4} - \inf _{\partial _s U_\varepsilon } V^\varepsilon , \end{aligned}
(37)
and, since $$(z,s) \in U_\varepsilon \cap \{t = s\}$$ is an interior point of $$U_\varepsilon$$ and $$W^\varepsilon (z,s) = V^\varepsilon (z,s) = 0$$, the left-hand side is non-negative.
In view of the bound $$|w_\varepsilon | \le C$$ and the linear growth of $$\psi (x)+(s-t)$$, we find that $$U_\varepsilon \subset Q_{R'}(z,s)$$ provided $$R' = 2C/c\delta$$. It follows that
\begin{aligned} \begin{aligned}&\inf _{(x,t) \in Q_{R'}(z,s)} \Big ({\mathscr {L}}^\varepsilon (x,t,z-rq,s-r) - {\mathscr {L}}^\varepsilon (z,s,z-rq,s-r) \\&\quad \quad - p\cdot (x-z) + \overline{H}(p)(t-s)\Big ) \le -\frac{\delta }{4}. \end{aligned} \end{aligned}
Send $$\varepsilon \rightarrow 0$$. Since $$\{(z_k,s_k)\} \subset Q_R$$, we may assume that $$(z,s) \rightarrow (z_0,s_0)$$. By Theorem 1, $${\mathscr {L}}^\varepsilon$$ converges locally uniformly, and we get
\begin{aligned} \begin{aligned}&\inf _{(x,t) \in Q_{R'+1}(z_0,s_0)} \Big ( (r-s_0+t)\overline{L}\left( \frac{x-z_0+rq}{r-s_0+t}\right) - r\overline{L}(q) \\&\quad \quad - p\cdot (x-z_0) + \overline{H}(p)(t-s_0) \Big ) \le -\frac{\delta }{4}. \end{aligned} \end{aligned}
The fact $$p \in \partial \overline{L}(q)$$ implies
\begin{aligned} \overline{L}\left( \frac{x-z_0+rq}{r-s_0+t}\right) - \overline{L}(q) \ge p \cdot \left( \frac{x-z_0+rq}{r-s_0+t} - q\right) = p \cdot \frac{(x-z_0) + (s_0-t)q}{r-s_0+t}. \end{aligned}
(38)
As a result, for r sufficiently large, we have
\begin{aligned} \inf _{(x,t) \in Q_{R'+1}(z_0,s_0)} \left( (t-s_0)\left( \overline{H}(p) + \overline{L}(q) - p\cdot q\right) \right) \le -\frac{\delta }{4}, \end{aligned}
which, combined with (33), yields $$0 \le -\delta /4$$. This is a contradiction and, hence, (32) must hold.
Step 2: For any fixed $$\omega \in {{\widetilde{\varOmega }}}, p \in {\mathbb {R}}^n$$ and $$R \ge 1$$,
\begin{aligned} \liminf _{\varepsilon \rightarrow 0} \inf _{(x,t) \in Q_R} \left( w_\varepsilon (x,t;p) + \overline{H}(p)\right) \ge 0. \end{aligned}
(39)
We claim that the proof can be reduced to the case where $$(p,\overline{H}(p))$$ is an exposed point of $$\mathrm{epi}\,(\overline{H})$$.

Indeed, assume that (39) holds whenever $$(p,\overline{H}(p))$$ is exposed. If $$p \in {\mathbb {R}}^n$$ is such that $$(p,\overline{H}(p))$$ is an extreme point of $$\mathrm{epi}\,(\overline{H})$$, then, by Straszewicz’s theorem [25, Theorem 18.6], there exists a sequence $$\{p_j\}$$ converging to p such that the points $$\{(p_j,\overline{H}(p_j))\}$$ are exposed points of $$\mathrm{epi}\,(\overline{H})$$. In view of the continuity of the mapping $$p \mapsto w^\varepsilon (\cdot ,\cdot ,p)$$, (39) then holds for all extreme $$(p,\overline{H}(p))$$.

For any other $$p \in {\mathbb {R}}^n$$, the point $$(p,\overline{H}(p))$$ can be written as a convex combination of extreme points $$\{(p_j,\overline{H}(p_j))\}_{j=1}^{n+2}$$. We have proved that (39) holds for each $$p_j$$. Since the mapping $$p \mapsto w^\varepsilon (\cdot ,\cdot ,p)$$ is concave and p is a convex combination of $$\{p_j\}_{j=1}^{n+2}$$, we conclude that (39) holds for p.

Step 3: If $$p \in {\mathbb {R}}^n$$ and if $$(p,\overline{H}(p))$$ is an exposed point of $$\mathrm{epi}\,(\overline{H})$$, then (39) holds. Although the proof of (39) follows along the lines of Step 1, there is an important difference. The inequality (33), which holds for any $$p \in \partial \overline{L}(q)$$, is useful only to establish the upper bound as seen in Step 1. Here, however, p satisfies the additional condition that $$(p,\overline{H}(p))$$ is exposed, and, hence, in view of Lemma 4, $$p = D\overline{L}(q)$$ for some $$q \in {\mathbb {R}}^n$$. This amounts to
\begin{aligned} \overline{L}(y) - \overline{L}(q) = p \cdot (y-q) + o(|y-q|), \end{aligned}
(40)
which is a stronger fact than (33).
Arguing by contradiction, we assume that (39) fails, so there exist $$\delta > 0$$, a subsequence $$\{\varepsilon _k\}_{k\in {\mathbb {N}}}$$ converging to 0, and a sequence $$\{(z_k,s_k)\}_{k\in {\mathbb {N}}} \subseteq Q_R$$ such that
\begin{aligned} -w_{\varepsilon _k}(z_k,s_k) - \overline{H}(p) \ge \delta > 0; \end{aligned}
as before, the subscript k is suppressed henceforth.
Using (6), we take $$\lambda >1$$ such that
\begin{aligned} \lambda \delta + \lambda \overline{H}(p) + (\lambda -1) \inf _{p' \in B_1(p)} \inf _{(x,t) \in {\mathbb {R}}^{n} \times {\mathbb {R}}} H(p',x,t) \ge \overline{H}(p) + \frac{3\delta }{4}. \end{aligned}
After $$\lambda$$ is fixed, we choose $$0< c < \frac{1}{8}$$ so that $$c\delta < \lambda -1$$, and for $$x \in {\mathbb {R}}^n$$ and $$t \le s$$, we define
\begin{aligned} W^\varepsilon (x,t) := \lambda \left( w_\varepsilon (x,t) - w_\varepsilon (z,s)\right) + c\delta \left( (1+|x-z|^2)^{\frac{1}{2}} -1\right) + c\delta (s - t), \end{aligned}
and set $$U_\varepsilon := \{W^\varepsilon \le \frac{\delta }{4}\} \cap \{t\le s\}$$.
We claim that
\begin{aligned} W^\varepsilon _t - \varepsilon \mathrm{tr}\,\left( A\left( \frac{x}{\varepsilon },\frac{t}{\varepsilon }\right) D^2 W^\varepsilon \right) + H\left( p+DW^\varepsilon ,\frac{x}{\varepsilon },\frac{t}{\varepsilon }\right) \ge \overline{H}(p) + \frac{\delta }{4} \quad \text {in } U_\varepsilon . \end{aligned}
(41)
This can be proved by the same argument that led to (34), provided we replace the convexity inequality used there by
\begin{aligned} \begin{aligned}&H\left( p+D \varphi (x_0,t_0),\frac{x_0}{\varepsilon },\frac{t_0}{\varepsilon }\right) \,-\, \lambda H\left( p+\frac{D\varphi (x_0,t_0)-c\delta D\psi (x_0)}{\lambda },\frac{x_0}{\varepsilon },\frac{t_0}{\varepsilon }\right) \\&\quad \ge \, (\lambda -1) H\left( p- \frac{c\delta D \psi (x_0)}{\lambda -1},\frac{x_0}{\varepsilon },\frac{t_0}{\varepsilon }\right) . \end{aligned} \end{aligned}
Then we compare $$W^\varepsilon$$ with the function $$V^\varepsilon$$ defined by (36) on the domain $$U_\varepsilon$$, and get
\begin{aligned} \sup _{U_\varepsilon } \left( V^\varepsilon - W^\varepsilon \right) = \sup _{\partial _s U_\varepsilon } \left( V^\varepsilon - W^\varepsilon \right) = -\frac{\delta }{4} + \sup _{\partial _s U_\varepsilon } V^\varepsilon . \end{aligned}
The left-hand side is non-negative since $$V^\varepsilon (z,s) = W^\varepsilon (z,s) = 0$$ and (zs) is an interior point of $$U_\varepsilon$$. Moreover, if $$R' = 2\Vert w_\varepsilon \Vert _{L^\infty }/c\delta$$, then $$U_\varepsilon \subset Q_{R'}(z,s)$$, and, hence
\begin{aligned} \begin{aligned}&\sup _{Q_{R'}(z,s)} \Big ({\mathscr {L}}^\varepsilon (x,t,z-rq,s - r) - {\mathscr {L}}^\varepsilon (z,t,z-rq,s-r)\\&\quad \quad - p\cdot (x-z) + \overline{H}(p)(t-s)\Big ) \ge \frac{\delta }{4}. \end{aligned} \end{aligned}
As in Step 1, we may assume $$(z_k,s_k) \rightarrow (z_0,s_0) \in \overline{Q}_R$$. Sending $$\varepsilon _k$$ to 0, we get
\begin{aligned}&\sup _{(x,t) \in Q_{R'+1}(z_0,s_0)} \Big ( (r-s_0+t)\overline{L}\left( \frac{x-z_0+rq}{r-s_0+t}\right) - r\overline{L}(q) \nonumber \\&\quad \quad - p\cdot (x-z_0) + \overline{H}(p)(t-s_0) \Big ) \ge \frac{\delta }{4}. \end{aligned}
(42)
Using that $$p = D\overline{L}(q)$$ and $$\overline{L}(q) + \overline{H}(p) = p\cdot q$$, we have
\begin{aligned} \begin{aligned}&(r-s_0+t)\overline{L}\left( \frac{x-z_0+rq}{r-s_0+t}\right) - r\overline{L}(q) - p\cdot (x-z_0) + \overline{H}(p)(t-s_0)\\&\quad = (r-s_0+t) \left[ \overline{L}\left( \frac{x-z_0+rq}{r-s_0+t}\right) - \overline{L}(q) - p\cdot \frac{x-z_0 +rq -(r - s_0+t)q}{r-s_0+t} \right] \\&\quad = (r-s_0+t) \cdot o\left( \left| \frac{x-z_0+(s_0-t)q}{r-s_0+t}\right| \right) . \end{aligned} \end{aligned}
Since $$|x-z_0 + (s_0-t)q| \le (1+|q|)R$$ is bounded uniformly in r and the estimate (42) holds for all large r, sending $$r \rightarrow \infty$$ yields $$\frac{\delta }{4} \le 0$$, which is a contradiction. $$\square$$
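The convex duality identities $$\overline{L}(q) + \overline{H}(p) = p\cdot q$$ and $$p = D\overline{L}(q)$$ used in the last step can be illustrated numerically. The following sketch uses the hypothetical model Hamiltonian $$H(p) = |p|^3/3$$, which is convex and superquadratic but, unlike the Hamiltonians of this paper, does not depend on $$(x,t,\omega )$$; for this model the Legendre transform is $$L(q) = \frac{2}{3}|q|^{3/2}$$.

```python
def H(p):
    # Model convex, superquadratic Hamiltonian H(p) = |p|^3 / 3.
    # Illustrative choice only, not the (x, t, omega)-dependent H of the paper.
    return abs(p) ** 3 / 3.0

def legendre(q, p_max=5.0, n=100000):
    # Numerical Legendre transform L(q) = sup_p ( p*q - H(p) ), computed on a grid of p.
    best = float("-inf")
    for k in range(n + 1):
        p = -p_max + 2.0 * p_max * k / n
        best = max(best, p * q - H(p))
    return best

for q in (0.5, 1.0, 2.0):
    L_exact = (2.0 / 3.0) * abs(q) ** 1.5           # closed form for this model
    p_star = abs(q) ** 0.5 * (1 if q >= 0 else -1)  # p = DL(q) for this model
    assert abs(legendre(q) - L_exact) < 1e-4
    # duality used in the proof: L(q) + H(p) = p . q  when  p = DL(q)
    assert abs(L_exact + H(p_star) - p_star * q) < 1e-12

print("convex duality L(q) + H(DL(q)) = DL(q) . q verified")
```

The grid maximization recovers the closed-form transform to within discretization error, and the duality identity holds exactly at $$p = DL(q)$$, as the strict convexity of this model guarantees.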

## 5 Some formulae for the effective Hamiltonian

Arguments similar to the ones in [22] yield that, once homogenization theory is established, the effective Hamiltonian $$\overline{H}(p)$$ is given by
\begin{aligned} \overline{H}(p) = \inf _{\psi \in {\mathscr {S}}} \sup _{(x,t) \in {\mathbb {R}}^{n+1}} \left[ \psi _t - \mathrm{tr}\,\left( A(x,t) D^2 \psi (x,t) \right) + H(p+D\psi (x,t),x,t)\right] , \end{aligned}
where the supremum of the differential expression evaluated at $$\psi$$ is to be interpreted in the viscosity sense, and
\begin{aligned} \begin{aligned} {\mathscr {S}}&:= \Big \{ \psi : {\mathbb {R}}^{n+1} \times \varOmega \rightarrow {\mathbb {R}}~:~ \psi (\cdot ,\cdot ,\omega ) \in C({\mathbb {R}}^{n+1}), \\&\quad \quad \lim _{|(x,t)| \rightarrow \infty } \frac{|\psi (x,t,\omega )|}{|(x,t)|} = 0 \text { for a.s. } \omega \in \varOmega ,\\&\quad \quad \psi (x+y,t+s,\omega ) - \psi (x,t,\omega ) \text { is stationary in } (y,s) \text { for all } (x,t) \in {\mathbb {R}}^{n+1}\Big \}. \end{aligned} \end{aligned}
That is, $${\mathscr {S}}$$ is the set of random processes that are sublinear in $$(x,t)$$ and have stationary increments. Note that if $$\psi \in {\mathscr {S}}$$ is also differentiable with respect to $$(x,t)$$, then the stationarity of increments is equivalent to $$\psi _t$$ and $$D\psi$$ being stationary, and the sublinearity is equivalent to $${\mathbb {E}}[\psi _t] = 0$$ and $${\mathbb {E}}[D\psi ] = 0$$.
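When $$\psi \in {\mathscr {S}}$$ is differentiable, this equivalence can be seen, at least formally, from the ergodic theorem applied along rays: for a unit vector $$(e,s) \in {\mathbb {R}}^{n+1}$$,
\begin{aligned} \frac{\psi (Re,Rs,\omega ) - \psi (0,0,\omega )}{R} = \frac{1}{R}\int _0^R \left( D\psi \cdot e + \psi _t \, s \right) (re,rs,\omega ) \, \hbox {d}r \longrightarrow {\mathbb {E}}[D\psi ]\cdot e + {\mathbb {E}}[\psi _t]\, s \end{aligned}
almost surely as $$R \rightarrow \infty$$, so sublinearity in every direction forces both expectations to vanish, and conversely. (This is only a heuristic: the ergodic theorem along a fixed ray requires ergodicity of the corresponding one-parameter subflow.)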

Another formula for the effective Hamiltonian was introduced in [18] for time-homogeneous random environments, and then generalized in [19] to space-time random environments, both under the assumption that the diffusion term is given by the identity matrix. We recall how this formula was obtained, and write it in the form it should take when the diffusion matrix is more general.

Any random variable $${\widetilde{b}}$$ gives rise to a stationary random process $$b(x,t,\omega ) = {\widetilde{b}}(\tau _{(x,t)}\omega )$$. In the reverse direction, any stationary random process $$b(x,t,\omega )$$ can be lifted to the probability space and identified with $${{\widetilde{b}}}(\omega ) := b(0,0,\omega )$$. For notational simplicity, we omit the tilde in $${\widetilde{b}}$$ from now on. The translation group $$\{\tau _{(x,t)}: (x,t) \in {\mathbb {R}}^{n+1}\}$$ acts isometrically on $$L^2(\varOmega )$$. By an abuse of notation, let $$\partial _t, D_i, i = 1,2,\ldots ,n$$, be the corresponding infinitesimal generators, and write $$D = (D_1,\ldots ,D_n)$$.

Let $${\mathbf {B}}:= L^\infty (\varOmega ,{\mathbb {R}}^n)$$ be the space of essentially bounded maps from $$\varOmega$$ to $${\mathbb {R}}^n$$. Given any $$b \in {\mathbf {B}}$$ and $$A = \sigma \sigma ^T$$ satisfying (A1), (A2), and (A3), let $$x(t,\omega )$$ be the diffusion process starting from 0 at time 0 such that
\begin{aligned} \hbox {d}x(t) = b(\tau _{(x(t),-t)}\omega ) \hbox {d}t + \sqrt{2} \sigma (x(t),-t) \hbox {d}B_t \quad \text { for all } t > 0. \end{aligned}
In the above, $$(B_t)_{t\ge 0}$$ is a standard m-dimensional Brownian motion, independent of H and $$\sigma$$. This process can be viewed as a diffusion in the probability space as follows. Pick a starting point $$\omega \in \varOmega$$, and define the walk $$\omega (t) = \tau _{(x(t,\omega ),-t)} \omega , t\ge 0$$. This is a Markov process on $$\varOmega$$ with generator
\begin{aligned} {\mathscr {L}}_{b,\sigma } = -\partial _t + \mathrm{tr}\,(\sigma (\omega )\sigma (\omega )^T D^2) + b(\omega ) \cdot D. \end{aligned}
(43)
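To see, at least formally, why (43) is the generator, apply Itô's formula to $$t \mapsto \varPhi (\omega (t))$$ for a smooth $$\varPhi$$:
\begin{aligned} \hbox {d}\varPhi (\omega (t)) = \left( -\partial _t \varPhi + \mathrm{tr}\,(\sigma \sigma ^T D^2 \varPhi ) + b \cdot D\varPhi \right) (\omega (t)) \, \hbox {d}t + \sqrt{2}\, \sigma ^T D\varPhi (\omega (t)) \cdot \hbox {d}B_t. \end{aligned}
The term $$-\partial _t \varPhi$$ comes from the backward time shift $$-t$$ in the definition of $$\omega (t)$$, and the factor $$\sqrt{2}$$ in the noise is what produces the coefficient 1, rather than $$\frac{1}{2}$$, in front of the second-order term. Taking expectations identifies the drift with $${\mathscr {L}}_{b,\sigma }$$.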
Let $${\mathbf {D}}:= \{\varPhi \in L^\infty (\varOmega ;{\mathbb {R}}) \,:\, {\mathbb {E}}[\varPhi ] = 1, \varPhi > 0 \text { and } (\partial _t \varPhi , D\varPhi ) \in L^\infty \}$$. Finally, let
\begin{aligned} {\mathscr {E}}:= \left\{ (b,\varPhi ) \in {\mathbf {B}}\times {\mathbf {D}}\;:\; \partial _t \varPhi + D^2_{ij}(A_{ij}\varPhi ) - \mathrm{div}\,(b\varPhi ) = 0\right\} , \end{aligned}
(44)
where the equation should be understood in the weak sense, that is, for all $$G \in C^\infty _0({\mathbb {R}}^{n+1},{\mathbb {R}})$$,
\begin{aligned} \begin{aligned}&\int _{{\mathbb {R}}} \int _{{\mathbb {R}}^n} \big [\partial _t G(x,t) + \langle -b + \mathrm{div}\,A, DG(x,t)\rangle \big ] \varPhi (\tau _{(x,t)}\omega ) \\&\quad + \langle A D\varPhi (\tau _{(x,t)}\omega ), DG(x,t)\rangle \ \hbox {d}x \hbox {d}t = 0. \end{aligned} \end{aligned}
Hence, $${\mathscr {E}}$$ consists of all pairs $$(b,\varPhi )$$ such that $$\varPhi$$ is the density of an invariant measure of the Markov process with generator $${\mathscr {L}}_{b,\sigma }$$. We note that, for any $$v \in {\mathbb {R}}^n$$, the pair $$(b,\varPhi )$$, where $$b_j = v_j + D_i A_{ij}$$ and $$\varPhi \equiv 1$$, satisfies the equation above and, hence, $${\mathscr {E}}$$ is non-empty.
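For the reader's convenience, the last assertion can be checked directly: with $$\varPhi \equiv 1$$ and $$b_j = v_j + D_i A_{ij}$$, using that v is a constant vector and that the generators $$D_i$$ commute,
\begin{aligned} \partial _t \varPhi + D^2_{ij}(A_{ij}\varPhi ) - \mathrm{div}\,(b\varPhi ) = D_j D_i A_{ij} - D_j \left( v_j + D_i A_{ij} \right) = 0. \end{aligned}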
Following [18, 19], the effective Hamiltonian, for each $$p \in {\mathbb {R}}^n$$, should be given by
\begin{aligned} {\widetilde{H}}(p) = \sup _{(b,\varPhi )\in {\mathscr {E}}} {\mathbb {E}}\left[ \left( \langle -b, p\rangle - L(-b(\omega ),\omega )\right) \varPhi (\omega ) \right] . \end{aligned}
(45)
Note that in this formula, A does not need to be uniformly elliptic and can be degenerate.

As a corollary of Theorem 2, we can show that the above formula for the effective Hamiltonian holds in the setting of this paper.

### Theorem 5

Assume (A) so that Theorem 2 holds. Then, for all $$p \in {\mathbb {R}}^n, \overline{H}(p) = {{\widetilde{H}}}(p)$$.

We only sketch the proof. Given the homogenization result, Theorem B of [21] provides a method to establish $${\widetilde{H}} \le \overline{H}$$, which applies readily here. Note that, even though [21] concerned only time-homogeneous environments, the proof of Theorem B there does not rely on this fact.

The inequality $${\widetilde{H}}(p) \ge \overline{H}(p)$$ follows from the fact that, for any $$\delta > 0$$, there exists $$\psi _\delta \in {\mathscr {S}}$$, such that $$\psi _\delta$$ is a subsolution to
\begin{aligned} \begin{aligned} \partial _t \psi _\delta - \mathrm{tr}\,(A(x,t,\omega )D^2 \psi _\delta ) + H(p+D\psi _\delta ,x,t,\omega ) \le {\widetilde{H}}(p) + \delta \quad \text { on } \, {\mathbb {R}}^{n+1}. \end{aligned} \end{aligned}
This claim is proved in [19] for $$A \equiv Id$$, but the proof, which utilizes the min-max theorem, extends easily to a general diffusion matrix $$A \in C^{1,\alpha }$$. We emphasize that neither $$A = Id$$ nor the uniform ellipticity of A is needed for this claim.

It is difficult to prove the homogenization result of this paper using the method of [18, 19]. Indeed, in these references, the uniform lower bound $$\liminf _{\varepsilon \rightarrow 0} \inf _{Q_R}(u^\varepsilon - u) \ge 0$$ is established using the ergodic theorem, which requires the uniqueness of the invariant measure for a given drift; for this, the uniform ellipticity of A is crucial. The stronger assumption that H grows superquadratically in p does not seem to help remove the uniform ellipticity requirement on A. In that sense, the fact that (45) provides the formula for the effective Hamiltonian for a possibly degenerate diffusion matrix A, though only under the restrictive superquadratic growth assumption, is new.

### Acknowledgements

WJ is supported in part by the NSF Grant DMS-1515150. PS is supported in part by the NSF Grants DMS-1266383 and DMS-1600129. HT is supported in part by the NSF Grant DMS-1361236.

### References

1. Akcoglu, M.A., Krengel, U.: Ergodic theorems for superadditive processes. J. Reine Angew. Math. 323, 53 (1981)
2. Armstrong, S.N., Cardaliaguet, P.: Stochastic homogenization of quasilinear Hamilton–Jacobi equations and geometric motions. J. Eur. Math. Soc. (accepted)
3. Armstrong, S.N., Tran, H.V., Yu, Y.: Stochastic homogenization of a nonconvex Hamilton–Jacobi equation. Calc. Var. Partial Differ. Equ. 54(2), 1507 (2015)
4. Armstrong, S.N., Tran, H.V., Yu, Y.: Stochastic homogenization of nonconvex Hamilton–Jacobi equations in one space dimension. J. Differ. Equ. 261(5), 2702 (2016)
5. Armstrong, S.N., Souganidis, P.E.: Stochastic homogenization of Hamilton–Jacobi and degenerate Bellman equations in unbounded environments. J. Math. Pures Appl. (9) 97(5), 460 (2012)
6. Armstrong, S.N., Souganidis, P.E.: Stochastic homogenization of level-set convex Hamilton–Jacobi equations. Int. Math. Res. Not. 2013(15), 3420 (2013)
7. Armstrong, S.N., Tran, H.V.: Stochastic homogenization of viscous Hamilton–Jacobi equations and applications. Anal. PDE 7(8), 1969 (2014)
8. Armstrong, S.N., Tran, H.V.: Viscosity solutions of general viscous Hamilton–Jacobi equations. Math. Ann. 361(3–4), 647 (2015)
9. Cannarsa, P., Cardaliaguet, P.: Hölder estimates in space-time for viscosity solutions of Hamilton–Jacobi equations. Comm. Pure Appl. Math. 63(5), 590 (2010)
10. Cardaliaguet, P., Silvestre, L.: Hölder continuity to Hamilton–Jacobi equations with superquadratic growth in the gradient and unbounded right-hand side. Comm. Partial Differ. Equ. 37(9), 1668 (2012)
11. Crandall, M.G., Lions, P.L., Souganidis, P.E.: Maximal solutions and universal bounds for some partial differential equations of evolution. Arch. Rational Mech. Anal. 105(2), 163 (1989)
12. Evans, L.C.: The perturbed test function method for viscosity solutions of nonlinear PDE. Proc. R. Soc. Edinburgh Sect. A 111(3–4), 359 (1989)
13. Evans, L.C.: Periodic homogenisation of certain fully nonlinear partial differential equations. Proc. R. Soc. Edinburgh Sect. A 120(3–4), 245 (1992)
14. Feldman, W., Souganidis, P.E.: Homogenization and non-homogenization of certain non-convex Hamilton–Jacobi equations. Preprint (2016). arXiv:1609.09410
15. Fleming, W.H., Soner, H.M.: Controlled Markov Processes and Viscosity Solutions. Stochastic Modelling and Applied Probability, vol. 25, 2nd edn. Springer, New York (2006)
16. Ishii, H.: Almost periodic homogenization of Hamilton–Jacobi equations. In: International Conference on Differential Equations, vol. 1, 2 (Berlin, 1999), pp. 600–605. World Sci. Publ., River Edge, NJ (2000)
17. Jing, W., Souganidis, P.E., Tran, H.V.: Large time average of reachable sets and applications to homogenization of interfaces moving with oscillatory spatio-temporal velocity. Preprint. arXiv:1408.2013 [math.AP]
18. Kosygina, E., Rezakhanlou, F., Varadhan, S.R.S.: Stochastic homogenization of Hamilton–Jacobi–Bellman equations. Comm. Pure Appl. Math. 59(10), 1489 (2006)
19. Kosygina, E., Varadhan, S.R.S.: Homogenization of Hamilton–Jacobi–Bellman equations with respect to time-space shifts in a stationary ergodic medium. Comm. Pure Appl. Math. 61(6), 816 (2008)
20. Lions, P.L., Papanicolaou, G.C., Varadhan, S.: Homogenization of Hamilton–Jacobi equations. Unpublished preprint (1987)
21. Lions, P.L., Souganidis, P.E.: Stochastic homogenization of Hamilton–Jacobi and "viscous"–Hamilton–Jacobi equations with convex nonlinearities—revisited. Commun. Math. Sci. 8(2), 627 (2010). http://projecteuclid.org/getRecord?id=euclid.cms/1274816896
22. Lions, P.L., Souganidis, P.E.: Homogenization of "viscous" Hamilton–Jacobi equations in stationary ergodic media. Comm. Partial Differ. Equ. 30(1–3), 335 (2005)
23. Majda, A.J., Souganidis, P.E.: Large-scale front dynamics for turbulent reaction-diffusion equations with separated velocity scales. Nonlinearity 7(1), 1 (1994). http://stacks.iop.org/0951-7715/7/1
24. Rezakhanlou, F., Tarver, J.E.: Homogenization for stochastic Hamilton–Jacobi equations. Arch. Ration. Mech. Anal. 151(4), 277 (2000)
25. Rockafellar, R.T.: Convex Analysis. Princeton Mathematical Series, No. 28. Princeton University Press, Princeton, NJ (1970)
26. Schwab, R.W.: Stochastic homogenization of Hamilton–Jacobi equations in stationary ergodic spatio-temporal media. Indiana Univ. Math. J. 58(2), 537 (2009)
27. Souganidis, P.E.: Stochastic homogenization of Hamilton–Jacobi equations and some applications. Asymptot. Anal. 20(1), 1 (1999)
28. Ziliotto, B.: Stochastic homogenization of nonconvex Hamilton–Jacobi equations: a counterexample. Comm. Pure Appl. Math. (2016)