# A game theoretical approach for a nonlinear system driven by elliptic operators

## Abstract

In this paper we find viscosity solutions to an elliptic system governed by two different operators (the Laplacian and the infinity Laplacian) using a probabilistic approach. We analyze a two-player game, with an associated pair of value functions, that combines Tug-of-War with random walks on two different boards. We show that these value functions converge uniformly to a viscosity solution of the elliptic system as the step size goes to zero. In addition, we show uniqueness for the elliptic system using pure PDE techniques.

## Introduction

Our goal in this paper is to describe a probabilistic game whose value functions approximate viscosity solutions to the following elliptic system:

\begin{aligned} \left\{ \begin{array}{ll} - {\frac{1}{2}}\Delta _{\infty }u(x) + u(x) - v(x)=0 \qquad &{} \ x \in \Omega , \\ - \frac{\kappa }{2} \Delta v(x) + v(x) - u(x)=0 \qquad &{} \ x \in \Omega , \\ u(x) = f(x) \qquad &{} \ x \in \partial \Omega , \\ v(x) = g(x) \qquad &{} \ x \in \partial \Omega , \end{array} \right. \end{aligned}
(1)

Here $$\kappa >0$$ is a constant that can be chosen by adjusting the parameters of the game. The domain $$\Omega \subset {{\mathbb {R}}}^N$$ is assumed to be bounded and to satisfy the uniform exterior ball property, that is, there is $$\theta > 0$$ such that for every $$y\in \partial \Omega$$ there exists a closed ball of radius $$\theta$$ that touches $$\overline{\Omega }$$ only at y. This means that, for each $$y\in \partial \Omega$$, there exists $$z_{y}\in {{\mathbb {R}}}^{N}\backslash \Omega$$ such that $$\overline{B_{\theta }(z_{y})}\cap \overline{\Omega }= \{ y \}$$. The boundary data f and g are assumed to be Lipschitz functions.

Notice that this system involves two differential operators, the usual Laplacian

\begin{aligned} \Delta \phi = \sum \limits _{i=1}^{N} \partial _{x_{i}x_{i}}\phi \end{aligned}

and the normalized infinity Laplacian (see )

\begin{aligned} \Delta _{\infty }\phi = \left\langle D^{2}\phi \frac{\nabla \phi }{|\nabla \phi |},\frac{\nabla \phi }{| \nabla \phi |}\right\rangle =\frac{1}{| \nabla \phi |^2}\sum \limits _{i,j=1}^{N} \partial _{x_{i}}\phi \partial _{x_{i}x_{j}}\phi \partial _{x_{j}}\phi , \end{aligned}

which is a 1-homogeneous, second-order, degenerate elliptic operator.

The system (1) is not variational (there is no associated energy). Therefore, one possibility to find solutions is to use monotonicity methods (Perron’s argument). Here we look at the system in a different way: to obtain existence of solutions we find an approximation using game theory. This approach not only gives existence of solutions but also provides us with a description that sheds some light on the behaviour of the solutions. At this point we observe that we will understand solutions to the system in the viscosity sense; this is natural since the infinity Laplacian is not variational (see Sect. 2 for the precise definition).

The fundamental works by Doob, Feller, Hunt, Kakutani, Kolmogorov and many others show the deep connection between classical potential theory and probability theory. See, for example, [13,14,15, 19, 21] and the book . The main idea behind this relation is that harmonic functions and martingales have something in common: the mean value formulas. A well-known fact is that u is harmonic, that is, u verifies the PDE $$\Delta u =0$$, if and only if it verifies the mean value property $$u(x) = \frac{1}{|B_\varepsilon (x) |} \int _{B_\varepsilon (x) } u(y) \, dy$$. In fact, we can relax this condition by requiring that it holds asymptotically, $$u(x) = \frac{1}{|B_\varepsilon (x) |} \int _{B_\varepsilon (x) } u(y) \, dy + o(\varepsilon ^2)$$, as $$\varepsilon \rightarrow 0$$. See . The connection between the Laplacian and Brownian motion, or with the limit of random walks as the step size goes to zero, is also well known, see .

The ideas and techniques used for linear equations have been extended to cover nonlinear cases as well. Concerning nonlinear equations, for a mean value property for the $$p-$$Laplacian (including the infinity Laplacian) we refer to [16, 20, 22, 24]. For a probabilistic approximation of the infinity Laplacian there is a game (called Tug-of-War game in the literature) that was introduced in  and generalized in several directions to cover other equations, like the $$p-$$Laplacian, see [1, 2, 5, 8, 23,24,25,26,27, 30, 31] and the book .

Now let us describe the game that is associated with (1). It is a two-player zero-sum game played on two different boards (two different copies of the set $$\Omega \subset {\mathbb {R}}^N$$). Fix a parameter $${\varepsilon }>0$$ and two final payoff functions $$\overline{f}, \overline{g} :{\mathbb {R}}^N \setminus \Omega \mapsto {\mathbb {R}}$$ (one for each board, $$\overline{f}$$ for the first board and $$\overline{g}$$ for the second one). These payoff functions $$\overline{f}$$ and $$\overline{g}$$ are just two Lipschitz extensions to $${\mathbb {R}}^N \setminus \Omega$$ of the boundary data f and g that appear in (1). The rules of the game are the following: the game starts with a token at an initial position $$x_0 \in \Omega$$, in one of the two boards. In the first board, with probability $$1-{\varepsilon }^2$$, the players play Tug-of-War as described in [24, 29] (this game is associated with the infinity Laplacian): the players toss a fair coin and the winner chooses the new position of the game with the restriction that $$x_1 \in B_{\varepsilon }(x_0)$$. When the token is in the first board, with probability $${\varepsilon }^2$$ the token jumps to the other board (at the same position $$x_0$$). In the second board, with probability $$1-{\varepsilon }^2$$ the token is moved at random (with uniform probability) to some point $$x_1 \in B_{\varepsilon }(x_0)$$, and with probability $${\varepsilon }^2$$ the token jumps back to the first board (without changing the position).
The game continues until the position of the token leaves the domain; at this point $$x_\tau$$ the first player gets $$\overline{f}(x_\tau )$$ and the second player $$- \overline{f}(x_\tau )$$ if they are playing in the first board, while they obtain $$\overline{g}(x_\tau )$$ and $$- \overline{g}(x_\tau )$$ if they are playing in the second board (we can think that Player $$\text {II}$$ pays to Player $$\text {I}$$ the amount given by $$\overline{f}(x_\tau )$$ or by $$\overline{g}(x_\tau )$$ according to the board in which the game ends). This game has an expected value (the best outcome of the game that both players expect to obtain playing their best, see Sect. 3 for a precise definition). In this case the value of the game is given by a pair of functions $$(u^{\varepsilon }, v^{\varepsilon })$$, defined in $$\Omega$$, that depends on the size of the steps, $${\varepsilon }$$. For $$x_0 \in \Omega$$, the value of $$u^{\varepsilon }(x_0)$$ is the expected outcome of the game when it starts at $$x_0$$ in the first board and $$v^{\varepsilon }(x_0)$$ is the expected value starting at $$x_0$$ in the second board.
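To make the rules concrete, the following one-dimensional simulation sketches a single run of the game. This is an illustration only: the interval $$(0,1)$$ as domain, the full-length $$\pm {\varepsilon }$$ pulls, and the particular payoff functions are simplifying assumptions of ours, not part of the paper.

```python
import random

def play_game(x0, board, eps, f_bar, g_bar, rng=None):
    """One run of the two-board game on the interval (0, 1).

    Board 1: Tug-of-War with probability 1 - eps^2, board jump with eps^2.
    Board 2: uniform random step with probability 1 - eps^2, jump with eps^2.
    Returns the amount Player II pays to Player I when the token exits."""
    rng = rng or random.Random()
    x = x0
    while 0.0 < x < 1.0:
        if rng.random() < eps**2:
            board = 3 - board              # jump to the other board, same position
        elif board == 1:
            # fair coin toss; the winner moves the token inside B_eps(x)
            # (here each player always pulls a full step, one to each side)
            x += eps if rng.random() < 0.5 else -eps
        else:
            x += rng.uniform(-eps, eps)    # random walk step inside B_eps(x)
    return f_bar(x) if board == 1 else g_bar(x)

# Averaging many runs gives a Monte Carlo approximation of u^eps
# (start in board 1) or v^eps (start in board 2).
runs = [play_game(0.5, 1, 0.3, lambda x: 1.0 if x >= 1.0 else 0.0,
                  lambda x: 0.5, rng=random.Random(k)) for k in range(2000)]
estimate = sum(runs) / len(runs)
```

Averaging the payoffs over many simulated runs approximates the value functions, which is exactly the pair of quantities characterized by the dynamic programming principle of Theorem 1.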

Our first theorem ensures that this game has a well-defined value and that this pair of functions $$(u^{\varepsilon }, v^{\varepsilon })$$ verifies a system of equations (called the dynamic programming principle (DPP) in the literature). Similar results are proved in [5, 23, 25, 29, 31].

### Theorem 1

The game described above has a value $$(u^{\varepsilon }, v^{\varepsilon })$$ that verifies

\begin{aligned} \left\{ \begin{array}{ll} u^{{\varepsilon }}(x)={\varepsilon }^{2}v^{{\varepsilon }}(x)+(1-{\varepsilon }^{2})\Big \{{\frac{1}{2}}\sup _{y \in B_{{\varepsilon }}(x)}u^{{\varepsilon }}(y) + {\frac{1}{2}}\inf _{y \in B_{{\varepsilon }}(x)}u^{{\varepsilon }}(y) \Big \} \quad &{} \ x \in \Omega , \\ v^{{\varepsilon }}(x)={\varepsilon }^{2}u^{{\varepsilon }}(x)+(1-{\varepsilon }^{2})\fint _{B_{{\varepsilon }}(x)}v^{{\varepsilon }}(y)dy \quad &{} \ x \in \Omega , \\ u^{{\varepsilon }}(x) = \overline{f}(x)& \ x \in {{\mathbb {R}}}^{N} \backslash \Omega , \\ v^{{\varepsilon }}(x) = \overline{g}(x) & \ x \in {{\mathbb {R}}}^{N} \backslash \Omega . \end{array} \right. \end{aligned}
(2)

Moreover, there is a unique solution to (2).

Notice that (2) can be seen as a sort of mean value property (or a discretization at scale $${\varepsilon }$$) for the system (1). Let us see intuitively why the DPP (2) holds. Playing in the first board, at each step, with probability $$\frac{1-{\varepsilon }^2}{2}$$ Player $$\text {I}$$ chooses the next position of the game and aims to obtain $$\inf _{y \in B_{{\varepsilon }}(x)}u^{{\varepsilon }}(y)$$ (recall this player seeks to minimize the expected payoff); with probability $$\frac{1-{\varepsilon }^2}{2}$$ it is Player $$\text {II}$$ who chooses and aims to obtain $$\sup _{y \in B_{{\varepsilon }}(x)}u^{{\varepsilon }}(y)$$; and finally with probability $${\varepsilon }^2$$ the board changes (and therefore $$v^{\varepsilon }(x)$$ comes into play). Playing in the second board, with probability $$1-{\varepsilon }^2$$ the point moves at random (but stays in the second board), and hence the term $$\fint _{B_{{\varepsilon }}(x)}v^{{\varepsilon }}(y)dy$$ appears, but with probability $${\varepsilon }^2$$ the board is changed and hence we have $$u^{\varepsilon }(x)$$ in the second equation. The equations in the DPP follow just by considering all the possibilities. Finally, the final payoff at $$x \not \in \Omega$$ is given by $$\overline{f} (x)$$ in the first board and by $$\overline{g}(x)$$ in the second board, giving the exterior conditions $$u^{{\varepsilon }}(x) = \overline{f}(x)$$ and $$v^{{\varepsilon }}(x) = \overline{g}(x)$$.
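The DPP (2) also suggests a numerical scheme: iterate the two equations as a fixed point on a grid. The sketch below does this in one dimension; the grid, the window approximation of $$B_{\varepsilon }(x)$$, and the constant exterior data $$\overline{f}\equiv 1$$, $$\overline{g}\equiv 0$$ are illustrative assumptions of ours.

```python
import numpy as np

def solve_dpp(eps=0.2, h=0.05, f=1.0, g=0.0, tol=1e-8, max_iter=5000):
    """Fixed-point iteration for the DPP (2) on a grid over (0, 1).

    Balls B_eps(x) are replaced by grid windows of radius eps, and the
    exterior data are the constants f (board 1) and g (board 2)."""
    xs = np.arange(-eps, 1 + eps + h, h)   # grid with an exterior strip
    inside = (xs > 0) & (xs < 1)
    r = int(round(eps / h))                # window radius in grid points
    u = np.where(inside, 0.0, f)
    v = np.where(inside, 0.0, g)
    for _ in range(max_iter):
        u_new, v_new = u.copy(), v.copy()
        for i in np.where(inside)[0]:
            wu = u[i - r:i + r + 1]        # values of u on ~ B_eps(x_i)
            wv = v[i - r:i + r + 1]        # values of v on ~ B_eps(x_i)
            u_new[i] = eps**2 * v[i] + (1 - eps**2) * 0.5 * (wu.max() + wu.min())
            v_new[i] = eps**2 * u[i] + (1 - eps**2) * wv.mean()
        change = max(np.abs(u_new - u).max(), np.abs(v_new - v).max())
        u, v = u_new, v_new
        if change < tol:
            break
    return xs, u, v
```

Every update is a convex combination of current values, so the iterates stay between the extreme values of the exterior data; this is a discrete analogue of the comparison with the constant C used in the Perron construction of Sect. 3.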

Our next goal is to look for the limit as $${\varepsilon }\rightarrow 0$$. Our main result in this paper is to show that, under our regularity conditions on the data ($$\partial \Omega$$ verifies the uniform exterior ball condition, and f and g are Lipschitz), these value functions $$u^{\varepsilon }, v^{\varepsilon }$$ converge uniformly in $${\overline{\Omega }}$$ to a pair of continuous limits uv that are characterized as being the unique viscosity solution to (1).

### Theorem 2

Let $$(u^{\varepsilon }, v^{\varepsilon })$$ be the values of the game. Then, there exists a pair of continuous functions in $${\overline{\Omega }}$$, (u, v), such that

\begin{aligned} u^{\varepsilon }\rightarrow u, \quad \hbox {and} \quad v^{\varepsilon }\rightarrow v, \qquad \hbox { as } {\varepsilon }\rightarrow 0, \end{aligned}

uniformly in $${\overline{\Omega }}$$. Moreover, the limit (u, v) is characterized as the unique viscosity solution to the system (1) (with a constant $$\kappa =\frac{1}{|B_{1}(0)|}\int _{B_{1}(0)}z_{j}^{2}dz$$ that depends only on the dimension).

### Remark 3

The constant $$\kappa =\frac{1}{|B_{1}(0)|}\int _{B_{1}(0)}z_{j}^{2}dz$$ appears from the fact that a simple change of variables gives $$\fint _{B_\varepsilon (x)} |w_j-x_j|^2 dw =\varepsilon ^2 \kappa$$.
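In fact, the constant can be computed explicitly: by symmetry the N coordinates contribute equally to the average of $$|z|^2$$ over the unit ball, so that, using polar coordinates,

```latex
N\kappa \,=\, \frac{1}{|B_{1}(0)|}\int_{B_{1}(0)} |z|^{2}\, dz
\,=\, \frac{\int_{0}^{1} r^{2}\, r^{N-1}\, dr}{\int_{0}^{1} r^{N-1}\, dr}
\,=\, \frac{N}{N+2},
\qquad \hbox{that is,} \qquad \kappa = \frac{1}{N+2}.
```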

If we impose that the probability of moving at random in the second board is $$1-K{\varepsilon }^2$$ (and hence the probability of changing from the second to the first board is $$K{\varepsilon }^2$$), with the same computations we obtain

\begin{aligned} v^{{\varepsilon }}(x)=K {\varepsilon }^{2}u^{{\varepsilon }}(x)+(1- K {\varepsilon }^{2})\fint _{B_{{\varepsilon }}(x)}v^{{\varepsilon }}(y)dy \end{aligned}

as the second equation in the DPP (the first equation and the exterior data remain unchanged). Passing to the limit we get

\begin{aligned} - \frac{\kappa }{2 K } \Delta v(x) + v(x) - u(x)=0 , \end{aligned}

and hence, choosing K, we can obtain any positive constant in front of the Laplacian in (1).
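In more detail (a routine computation following the same steps as for (2)), subtracting $$v^{\varepsilon }(x)$$ and dividing by $${\varepsilon }^2$$ in the modified second equation gives

```latex
0 \,=\, K\big(u^{\varepsilon}(x)-v^{\varepsilon}(x)\big)
+(1-K{\varepsilon}^{2})\fint_{B_{\varepsilon}(x)}
\frac{v^{\varepsilon}(y)-v^{\varepsilon}(x)}{{\varepsilon}^{2}}\, dy,
```

and, since the averaged term converges to $$\frac{\kappa }{2}\Delta v(x)$$, in the limit we obtain $$K(u(x)-v(x))+\frac{\kappa }{2}\Delta v(x)=0$$; dividing by K gives the equation above.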

### Remark 4

Here we assumed that $$\overline{f}$$ and $$\overline{g}$$ are Lipschitz. However, we can assume only that $$\overline{f}$$ and $$\overline{g}$$ have a uniform modulus of continuity $$\omega : [0, \infty ) \mapsto [0, \infty )$$, that is, an increasing function with $$\omega (0) = 0$$ such that

\begin{aligned} |\overline{f}(x) - \overline{f}(y)| \le \omega (|x - y|) \qquad \hbox {and} \qquad |\overline{g}(x) - \overline{g}(y)| \le \omega (|x - y|). \end{aligned}

Under this uniform modulus of continuity assumption the proofs work similarly.

### Remark 5

It is possible to consider the inhomogeneous version of the system (1) for uniformly continuous right-hand sides. For example, we can consider

\begin{aligned} \left\{ \begin{array}{ll} - {\frac{1}{2}}\Delta _{\infty }u(x) + u(x) - v(x) = a(x) \qquad &{} \ x \in \Omega , \\ - \frac{\kappa }{2} \Delta v(x) + v(x) - u(x)= b(x) \qquad &{} \ x \in \Omega , \end{array} \right. \end{aligned}

with boundary conditions. When we add a right-hand side we just add a running payoff to the game, that is, each time the game is played in the first board the first player gets $$\varepsilon ^2 a(x_k)$$ and each time the game is played in the second board the first player gets $$\varepsilon ^2 b(x_k)$$. With this extra running payoff the equations in the DPP read as

\begin{aligned} \left\{ \begin{array}{ll} u^{{\varepsilon }}(x)={\varepsilon }^{2}v^{{\varepsilon }}(x)+(1-{\varepsilon }^{2})\Big \{{\frac{1}{2}}\sup _{y \in B_{{\varepsilon }}(x)}u^{{\varepsilon }}(y) + {\frac{1}{2}}\inf _{y \in B_{{\varepsilon }}(x)}u^{{\varepsilon }}(y) + \varepsilon ^2 a(x) \Big \} \quad &{} \ x \in \Omega , \\ v^{{\varepsilon }}(x)={\varepsilon }^{2}u^{{\varepsilon }}(x)+(1-{\varepsilon }^{2}) \Big \{ \fint _{B_{{\varepsilon }}(x)}v^{{\varepsilon }}(y)dy +\varepsilon ^2 b(x)\Big \} \quad &{} \ x \in \Omega . \end{array} \right. \end{aligned}

Let us comment briefly on the ideas (and difficulties) of the proofs. To prove that the sequences $$\{u^{\varepsilon }, v^{\varepsilon }\}_{\varepsilon }$$ converge we will apply an Arzelà-Ascoli type lemma. To this end we need to show a sort of asymptotic continuity that is based on estimates for both value functions $$(u^{\varepsilon }, v^{\varepsilon })$$ near the boundary (these estimates can be extended to the interior via a probabilistic coupling argument). In fact, to obtain asymptotic continuity close to a boundary point, we show that both players have strategies that force the game to end near a point $$y\in \partial \Omega$$ with high probability if we start close to that point, no matter the strategy chosen by the other player. This allows us to obtain a sort of asymptotic equicontinuity close to the boundary, leading to uniform convergence in the whole $${\overline{\Omega }}$$. Note that, in general, the value functions $$(u^{\varepsilon },v^{\varepsilon })$$ are discontinuous in $$\Omega$$ (this is due to the fact that we make discrete steps) and therefore showing uniform convergence to a continuous limit is a delicate task.

Let us see formally why a uniform limit (u, v) is a solution to Eq. (1). Subtracting $$u^{\varepsilon }(x)$$ and dividing by $${\varepsilon }^2$$ on both sides of the first equation in (2) we get

\begin{aligned} 0=(v^{{\varepsilon }}(x)-u^{{\varepsilon }}(x))+(1-{\varepsilon }^{2}) \left\{ \frac{ {\frac{1}{2}}\sup _{y \in B_{{\varepsilon }}(x)} u^{{\varepsilon }}(y) + {\frac{1}{2}}\inf _{y \in B_{{\varepsilon }}(x)}u^{{\varepsilon }}(y) -u^{{\varepsilon }}(x) }{{\varepsilon }^{2}} \right\} . \end{aligned}

In the limit this approximates the first equation in our system (1) (the term in brackets approximates the second derivative of u in the direction of its gradient). Similarly, the second equation in the DPP can be written as

\begin{aligned} 0 = (u^{{\varepsilon }}(x)-v^{{\varepsilon }}(x))+(1-{\varepsilon }^{2})\fint _{B_{{\varepsilon }}(x)} \frac{(v^{{\varepsilon }}(y)- v^{{\varepsilon }}(x))}{{\varepsilon }^2} dy \end{aligned}

which approximates the second equation in (1).
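These limits rest on two standard Taylor expansions, valid for a smooth $$\phi$$ with $$\nabla \phi (x)\ne 0$$ (the sup and the inf over $$B_{\varepsilon }(x)$$ are attained, up to lower order terms, at $$x\pm {\varepsilon }\frac{\nabla \phi (x)}{|\nabla \phi (x)|}$$):

```latex
\frac{1}{2}\sup_{y \in B_{\varepsilon}(x)}\phi(y)
+\frac{1}{2}\inf_{y \in B_{\varepsilon}(x)}\phi(y)-\phi(x)
=\frac{{\varepsilon}^{2}}{2}\,\Delta_{\infty}\phi(x)+o({\varepsilon}^{2}),
\qquad
\fint_{B_{\varepsilon}(x)}\big(\phi(y)-\phi(x)\big)\, dy
=\frac{\kappa\,{\varepsilon}^{2}}{2}\,\Delta \phi(x)+o({\varepsilon}^{2}),
```

where the second expansion uses that the first-order and mixed second-order terms average to zero on the ball, together with $$\fint _{B_\varepsilon (x)} |w_j-x_j|^2 dw =\varepsilon ^2 \kappa$$ from Remark 3.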

The paper is organized as follows: in Sect. 2 we include the precise definition of what we will understand by a viscosity solution for our system and we state a key preliminary result from probability theory (the Optional Stopping Theorem); Sect. 3 contains a detailed description of the game, also in Sect. 3 we show that there is a value of the game that satisfies the DPP (2) and prove uniqueness for the DPP (we prove Theorem 1); next, in Sect. 4 we analyze the game and show that value functions converge uniformly along subsequences to a pair of continuous functions; in Sect. 5 we prove that the limit is a viscosity solution to our system (up to this point we obtain the first part of Theorem 2) and in Sect. 6 we show uniqueness of viscosity solutions to the system, ending the proof of Theorem 2. Finally, in Sect. 7 we collect some comments on possible extensions of our results.

## Preliminaries

In this section we include the precise definition of what we understand as a viscosity solution for the system (1) and we include the precise statement of the Optional Stopping Theorem that will be needed when dealing with the probabilistic part of our arguments.

### Viscosity solutions

We begin by stating the definition of a viscosity solution to a fully nonlinear second order elliptic PDE. We refer to  for general results on viscosity solutions. Fix a function

\begin{aligned} P:\Omega \times {{\mathbb {R}}}\times {{\mathbb {R}}}^N\times {\mathbb {S}}^N\rightarrow {{\mathbb {R}}}\end{aligned}

where $${\mathbb {S}}^N$$ denotes the set of symmetric $$N\times N$$ matrices. We want to consider the PDE

\begin{aligned} P(x,u (x), Du (x), D^2u (x)) =0, \qquad x \in \Omega . \end{aligned}
(3)

The idea behind viscosity solutions is to use the maximum principle in order to “pass derivatives to smooth test functions”. This idea allows us to consider operators in non-divergence form. We will assume that P is degenerate elliptic, that is, that P satisfies the following monotonicity property with respect to the matrix variable:

\begin{aligned} X\le Y \text { in } {\mathbb {S}}^N \implies P(x,r,p,X)\ge P(x,r,p,Y) \end{aligned}

for all $$(x,r,p)\in \Omega \times {{\mathbb {R}}}\times {{\mathbb {R}}}^N$$.

Here we have an equation that involves the $$\infty$$-Laplacian, which is not well defined when the gradient vanishes. In order to handle this issue, we need to consider the lower semicontinuous, $$P_*$$, and upper semicontinuous, $$P^*$$, envelopes of P. These functions are given by

\begin{aligned} \begin{aligned} P^*(x,r,p,X)&=\limsup _{(y,s,w,Y)\rightarrow (x,r,p,X)}P(y,s,w,Y),\\ P_*(x,r,p,X)&=\liminf _{(y,s,w,Y)\rightarrow (x,r,p,X)}P(y,s,w,Y). \end{aligned} \end{aligned}

These functions coincide with P at every point of continuity of P and are upper and lower semicontinuous, respectively. With these concepts at hand we are ready to state the definition of a viscosity solution to (3).

### Definition 6

A lower semi-continuous function u is a viscosity supersolution of (3) if for every $$\phi \in C^2$$ such that $$\phi$$ touches u at $$x \in \Omega$$ strictly from below (that is, $$u-\phi$$ has a strict minimum at x with $$u(x) = \phi (x)$$), we have

\begin{aligned} P^*(x,\phi (x),D \phi (x),D^2\phi (x))\ge 0. \end{aligned}

An upper semi-continuous function u is a subsolution of (3) if for every $$\psi \in C^2$$ such that $$\psi$$ touches u at $$x \in \Omega$$ strictly from above (that is, $$u-\psi$$ has a strict maximum at x with $$u(x) = \psi (x)$$), we have

\begin{aligned} P_*(x,\psi (x),D \psi (x),D^2\psi (x))\le 0. \end{aligned}

Finally, u is a viscosity solution of (3) if it is both a super- and a subsolution.

In our system (1) we have two equations given by the functions

\begin{aligned} F_1 (x,u,p,X) = - \frac{1}{2} \left\langle X\frac{p}{|p|} , \frac{p}{|p|} \right\rangle + u - v(x) \end{aligned}

and

\begin{aligned} F_2 (x,v,Y) = - \frac{\kappa }{2}\, \mathrm{trace} (Y) + v - u(x). \end{aligned}
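Note that $$F_1$$ is discontinuous precisely where $$p=0$$. There, since v is continuous, a standard computation taking limits over unit directions $$e=p/|p|$$ gives the envelopes

```latex
F_1^{*}(x,u,0,X) = -\frac{1}{2}\,\lambda_{\min}(X) + u - v(x),
\qquad
(F_1)_{*}(x,u,0,X) = -\frac{1}{2}\,\lambda_{\max}(X) + u - v(x),
```

where $$\lambda _{\min }(X)$$ and $$\lambda _{\max }(X)$$ denote the smallest and largest eigenvalues of X; at every point with $$p\ne 0$$ both envelopes coincide with $$F_1$$.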

Then, the definition of a viscosity solution for the system (1) that we will use here is the following.

### Definition 7

A pair of continuous functions $$u, v :{\overline{\Omega }} \mapsto {\mathbb {R}}$$ is a viscosity solution of (1) if

\begin{aligned}&u|_{\partial \Omega } = f, \qquad v|_{\partial \Omega } = g, \\&u \, \hbox {is a viscosity solution to} \, F_1 (x,u,D u, D^2u) = 0\\&{\textit{and}} \\&v \, \hbox {is a viscosity solution to} \, F_2 (x,v, D^2v) = 0 \end{aligned}

in the sense of Definition 6.

### Remark 8

We remark that, according to our definition, in the equation for u, as the other component v is continuous, we have that $$F_1$$ depends on x via v(x) (and similarly $$F_2$$ depends on x via u(x)). That is, we understand a solution to (1) as a pair of functions, continuous up to the boundary, that satisfies the boundary conditions pointwise and such that u solves the first equation in the system in the viscosity sense (with v regarded as a fixed continuous function of x in $$F_1$$) and v solves the second equation in the system (with u regarded as a fixed function of x in $$F_2$$).

Also notice that we have that both u and v are assumed to be continuous in $${\overline{\Omega }}$$ and then the boundary data f and g are taken on $$\partial \Omega$$ with continuity.

### Probability: the optional stopping theorem

We briefly recall (see ) that a sequence of random variables $$\{M_{k}\}_{k\ge 1}$$ is a supermartingale (a submartingale) if

\begin{aligned} {{\mathbb {E}}}[M_{k+1}\arrowvert M_{0},M_{1},\ldots ,M_{k}]\le M_{k} \ \ (\ge ) \end{aligned}

Then, the Optional Stopping Theorem, which we will call (OSTh) in what follows, says the following. Let $$\tau$$ be a stopping time such that one of the following conditions holds:

1. (a)

The stopping time $$\tau$$ is bounded almost surely;

2. (b)

It holds that $${{\mathbb {E}}}[\tau ]<\infty$$ and there exists a constant $$c>0$$ such that

\begin{aligned} {{\mathbb {E}}}[\,|M_{k+1}-M_{k}|\,\arrowvert M_{0},\ldots ,M_{k}]\le c; \end{aligned}
3. (c)

There exists a constant $$c>0$$ such that $$|M_{\min \{\tau ,k\}}|\le c$$ almost surely for every k.

Then

\begin{aligned} {{\mathbb {E}}}[M_{\tau }]\le {{\mathbb {E}}}[M_{0}] \ \ (\ge ) \end{aligned}

if $$\{M_{k}\}_{k\ge 0}$$ is a supermartingale (submartingale). For the proof of this classical result we refer to [11, 32].
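As a quick illustration of the (OSTh), consider a simple symmetric random walk started at 3 and stopped upon first exiting the interval (0, 10). The walk is a martingale and condition (c) holds, since the stopped process is bounded by 10, so $${{\mathbb {E}}}[M_{\tau }]={{\mathbb {E}}}[M_{0}]=3$$. The sketch below checks this by Monte Carlo; the interval, the starting point and the number of runs are our own illustrative choices.

```python
import random

def stopped_walk_mean(start=3, low=0, high=10, runs=20000, seed=1):
    """Monte Carlo estimate of E[M_tau] for a simple symmetric random walk
    (a martingale) stopped when it first exits (low, high).  Condition (c)
    of the (OSTh) holds: |M_{min(tau, k)}| <= high for every k."""
    rng = random.Random(seed)
    total = 0
    for _ in range(runs):
        m = start
        while low < m < high:                 # run until the walk exits
            m += 1 if rng.random() < 0.5 else -1
        total += m                            # m equals M_tau for this run
    return total / runs
```

The estimate comes out close to 3; equivalently, the exit probabilities satisfy $$10\,{\mathbb {P}}(M_{\tau }=10)=3$$, which is the classical gambler's ruin computation.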

## A two-player game

In this section, we describe in detail the two-player zero-sum game presented in the introduction. Let $$\Omega \subset {{\mathbb {R}}}^N$$ be a bounded smooth domain and fix $${\varepsilon }>0$$. The game takes place on two boards (that we will call board 1 and board 2), which are two copies of $${\mathbb {R}}^N$$ with the same domain $$\Omega$$ inside. Fix two Lipschitz functions $$\overline{f}:{{\mathbb {R}}}^{N} \backslash \Omega \rightarrow {{\mathbb {R}}}$$ and $$\overline{g}:{{\mathbb {R}}}^{N} \backslash \Omega \rightarrow {{\mathbb {R}}}$$ that are going to give the final payoff of the game when we exit $$\Omega$$ in boards 1 and 2 respectively.

At the beginning a token is placed at $$x_0\in \Omega$$ in one of the two boards. When we play in the first board, with probability $$1-{\varepsilon }^2$$ we play Tug-of-War, the game introduced in : a fair coin (with probability $$\frac{1}{2}$$ of heads and tails) is tossed and the player who wins the coin toss chooses the next position of the game inside the ball $$B_{\varepsilon }(x_0)$$ in the first board. With probability $${\varepsilon }^2$$ we jump to the other board: the next position of the token is $$x_0$$ but now in board 2. If $$x_0$$ is in the second board, then with probability $$1-{\varepsilon }^2$$ the new position of the game is chosen at random in the ball $$B_{\varepsilon }(x_0)$$ (with uniform probability) and with probability $${\varepsilon }^2$$ the position jumps to the same $$x_0$$ but in the first board. The position of the token will be denoted by (x, j) where $$x \in {\mathbb {R}}^N$$ and $$j=1,2$$ (j encodes the board in which the token is at position x). Then, after one movement, the players continue playing with the same rules from the new position of the token $$x_1$$ in its corresponding board, 1 or 2. The game ends when the position of the token leaves the domain $$\Omega$$. That is, let $$\tau$$ be the stopping time given by the first time at which $$x_{\tau } \in {{\mathbb {R}}}^{N} \backslash \Omega$$. If $$x_{\tau }$$ is in the first board then Player I gets $$\overline{f}(x_{\tau })$$ (and Player II pays that quantity), while if the token leaves $$\Omega$$ in the second board Player I gets $$\overline{g}(x_{\tau })$$ (and Player II pays that amount). The game generates a sequence of states

\begin{aligned} P=\{ (x_{0},j_{0}),(x_{1},j_{1}),\ldots ,(x_{\tau },j_{\tau })\} \end{aligned}

with $$j_{i}\in \{1,2\}$$ and $$x_{i}$$ in the board $$j_{i}$$. The dependence of the position of the token in one of the boards, $$j_i$$, will be made explicit only when needed.

A strategy $$S_\text {I}$$ for Player I is a function defined on the partial histories that gives the next position of the game provided Player $$\text {I}$$ wins the coin toss (and the token is in the first board and stays there):

\begin{aligned} S_\text {I}{\left( (x_0,j_{0}),(x_1,j_{1}),\ldots ,(x_n,1)\right) }= (x_{n+1},1) \qquad \hbox {with } x_{n+1} \in B_{\varepsilon }(x_n). \end{aligned}

Analogously, a strategy $$S_\text {II}$$ for Player II is a function defined on the partial histories that gives the next position of the game provided Player $$\text {II}$$ is who wins the coin toss (and the token stays at the first board).

When the two players fix their strategies $$S_\text {I}$$ and $$S_\text {II}$$ we can compute the expected outcome as follows: given the sequence $$x_0,\ldots ,x_n$$ with $$x_k\in \Omega$$, if $$x_k$$ belongs to the first board, the next game position is distributed according to the probability

\begin{aligned} \pi _{S_\text {I},S_\text {II},1}((x_0,j_0),\ldots ,(x_k,1),{A}, B)&= \frac{1-{\varepsilon }^2}{2} \delta _{S_\text {I}((x_0,j_0),\ldots ,(x_k,1))} (A) \\&+ \frac{1-{\varepsilon }^2}{2} \delta _{S_\text {II}((x_0,j_0),\ldots ,(x_k,1))} (A) + {\varepsilon }^2 \delta _{x_k} (B). \end{aligned}

Here A is a subset in the first board while B is a subset in the second board. If $$x_k$$ belongs to the second board, the next game position is distributed according to the probability

\begin{aligned} \pi _{S_\text {I},S_\text {II},2}((x_0,j_0),\ldots ,(x_k,2),{A}, B)= (1-{\varepsilon }^2) U(B_{\varepsilon }(x_k)) (B) + {\varepsilon }^2 \delta _{x_k} (A). \end{aligned}

By using Kolmogorov’s extension theorem and the one-step transition probabilities, see Theorem 2.1.5 in , we can build a probability measure $${\mathbb {P}}^{x_0}_{S_\text {I},S_\text {II}}$$ on the game sequences (taking into account the two boards). The expected payoff, when starting from $$(x_0,j_0)$$ and using the strategies $$S_\text {I},S_\text {II}$$, is

\begin{aligned} {\mathbb {E}}_{S_{\text {I}},S_\text {II}}^{(x_0,j_0)} [ h (x_\tau ) ]=\int _{H^\infty } h (x_\tau ) \, d {\mathbb {P}}^{x_0}_{S_\text {I},S_\text {II}} \end{aligned}
(4)

(here we use $$h=\overline{f}$$ if $$x_\tau$$ is in the first board and $$h=\overline{g}$$ if $$x_\tau$$ is in the second board; the set $$H^\infty$$ refers to the set of all possible sequences of states of the game and $${\mathbb {P}}^{x_0}_{S_\text {I},S_\text {II}}$$ is the probability measure on $$H^\infty$$ obtained from Kolmogorov’s extension theorem).

The value of the game for Player I is given by

\begin{aligned} u^{\varepsilon }_\text {I}(x_0)=\inf _{S_\text {I}}\sup _{S_{\text {II}}}\, {\mathbb {E}}_{S_{\text {I}},S_\text {II}}^{(x_0,1)}\left[ h (x_\tau ) \right] \end{aligned}

for $$x_0\in \Omega$$ in the first board ($$j_0 =1$$), and by

\begin{aligned} v^{\varepsilon }_\text {I}(x_0)=\inf _{S_\text {I}}\sup _{S_{\text {II}}}\, {\mathbb {E}}_{S_{\text {I}},S_\text {II}}^{(x_0,2)}\left[ h (x_\tau ) \right] \end{aligned}

for $$x_0\in \Omega$$ in the second board ($$j_0 =2$$).

The value of the game for Player II is given by the same formulas, just reversing the $$\inf$$ and the $$\sup$$,

\begin{aligned} u^{\varepsilon }_\text {II}(x_0)=\sup _{S_{\text {II}}}\inf _{S_\text {I}}\, {\mathbb {E}}_{S_{\text {I}},S_\text {II}}^{(x_0,1)}\left[ h (x_\tau ) \right] , \end{aligned}

for $$x_0$$ in the first board and

\begin{aligned} v^{\varepsilon }_\text {II}(x_0)=\sup _{S_{\text {II}}}\inf _{S_\text {I}}\, {\mathbb {E}}_{S_{\text {I}},S_\text {II}}^{(x_0,2)}\left[ h (x_\tau ) \right] , \end{aligned}

for $$x_0$$ in the second board.

Intuitively, the values $$u^{\varepsilon }_\text {I}(x_0)$$ and $$u^{\varepsilon }_\text {II}(x_0)$$ are the best expected outcomes each player can guarantee when the game starts at $$x_0$$ in the first board, while $$v^{\varepsilon }_\text {I}(x_0)$$ and $$v^{\varepsilon }_\text {II}(x_0)$$ are the best expected outcomes for each player in the second board.

If $$u^{\varepsilon }_\text {I}= u^{\varepsilon }_\text {II}$$ and $$v^{\varepsilon }_\text {I}= v^{\varepsilon }_\text {II}$$, we say that the game has a value.

Before proving that the game has a value, let us observe that the game ends almost surely, no matter the strategies used by the players, that is, $${{\mathbb {P}}}(\tau =+\infty ) =0$$, and therefore the expectation (4) is well defined. This fact is due to the random movements that we make in the second board (which, with positive probability, kick the token out of the domain in a finite number of plays without changing boards).

### Proposition 9

We have that

\begin{aligned} {\mathbb {P}} \Big (\hbox {the game ends in a finite number of plays}\Big )=1. \end{aligned}

### Proof

Let us start by showing that the game ends in a finite number of plays if we start with the token in the second board. Let $$\xi \in {{\mathbb {R}}}^{N}$$ with $$| \xi |=1$$ be a fixed direction. Consider the set

\begin{aligned} T_{\xi ,x_{k}}=\Big \{ y\in {{\mathbb {R}}}^{N}:y\in B_{{\varepsilon }}(x_{k})\wedge \langle y-\Big (x_{k}+\frac{{\varepsilon }}{2}\xi \Big ),\xi \rangle \ge 0 \Big \} \end{aligned}

that is the part of the ball composed of the points y whose projection of $$y-x_{k}$$ in the direction $$\xi$$ is at least $$\frac{{\varepsilon }}{2}$$. Then, starting from any point in $$\Omega$$, if in every play we choose a point in $$T_{\xi ,x_{k}}$$ (without changing boards), in at most $$\lceil \frac{4R}{{\varepsilon }}\rceil$$ steps we will be out of $$\Omega$$ (here $$R=\mathrm{diam}(\Omega )$$). As the set $$T_{\xi ,x_{k}}$$ has positive measure, it holds that

\begin{aligned} {\mathbb {P}}(x_{k+1}\in T_{\xi ,x_{k}}\arrowvert x_{k}):=\alpha > 0. \end{aligned}

Therefore, we have a positive probability of ending the game in at most $$\lceil \frac{4R}{{\varepsilon }}\rceil$$ plays,

\begin{aligned} {\mathbb {P}}\left( \hbox {the game ends in} \, \left\lceil \frac{4R}{{\varepsilon }}\right\rceil \, \hbox {plays}\right) \ge [(1-{\varepsilon }^{2})\alpha ]^{\left\lceil \frac{4R}{{\varepsilon }}\right\rceil } :=r>0. \end{aligned}

Hence

\begin{aligned} {\mathbb {P}}\left( \hbox {the game continues after} \, \left\lceil \frac{4R}{{\varepsilon }}\right\rceil \, \hbox {plays}\right) \le 1-r, \end{aligned}

and then

\begin{aligned} {\mathbb {P}}(\hbox {the game does not end in a finite number of plays}) =0. \end{aligned}

Now, if we start in the first board, the probability of not changing boards in n plays is $$(1-{\varepsilon }^{2})^{n}$$, which goes to zero. Therefore, with probability one, in a finite number of plays we either end the game or change to the second board, and then we are in the previous situation.

This implies that the game ends almost surely in a finite number of plays. $$\square$$

To see that the game has a value, we first observe that we have existence of $$(u^{\varepsilon }, v^{\varepsilon })$$, a pair of functions that satisfies the DPP. The existence of such a pair can be obtained by Perron’s method. In fact, let us start by considering the following set (composed of pairs of functions that are subsolutions to our DPP). Let

\begin{aligned} C=\max \Big \{ \Vert \overline{f}\Vert _\infty , \Vert \overline{g}\Vert _\infty \Big \}, \end{aligned}
(5)

and consider the set of functions

\begin{aligned} {A} = \Big \{ (z^{{\varepsilon }},w^{{\varepsilon }}) : \hbox { are bounded above by} \, C \, \hbox {and verify } (\mathbf{e }) \Big \}, \end{aligned}

with

\begin{aligned} \left\{ \begin{array}{ll} z^{{\varepsilon }}(x)\le {\varepsilon }^{2}w^{{\varepsilon }}(x)+(1-{\varepsilon }^{2})\Big \{{\frac{1}{2}}\sup _{y \in B_{{\varepsilon }}(x)}z^{{\varepsilon }}(y) + {\frac{1}{2}}\inf _{y \in B_{{\varepsilon }}(x)}z^{{\varepsilon }}(y)\Big \} \qquad &{} \ x \in \Omega , \\ w^{{\varepsilon }}(x)\le {\varepsilon }^{2}z^{{\varepsilon }}(x)+(1-{\varepsilon }^{2})\fint _{B_{{\varepsilon }}(x)}w^{{\varepsilon }}(y)dy \qquad &{} \ x \in \Omega , \\ z^{{\varepsilon }}(x) \le \overline{f}(x) \qquad &{} \ x \in {{\mathbb {R}}}^{N} \backslash \Omega , \\ w^{{\varepsilon }}(x) \le \overline{g}(x) \qquad &{} \ x \in {{\mathbb {R}}}^{N} \backslash \Omega . \end{array} \right. \qquad \qquad (\mathbf{e }) \end{aligned}

### Remark 10

Notice that we need to impose that $$(z^{{\varepsilon }},w^{{\varepsilon }})$$ are bounded since

\begin{aligned} z^{{\varepsilon }} (x) = \left\{ \begin{array}{ll} +\infty \qquad &{} x \in \Omega \\ \overline{f} \qquad &{} x \not \in \Omega \end{array} \right. \qquad \hbox {and} \qquad w^{{\varepsilon }} (x) = \left\{ \begin{array}{ll} +\infty \qquad &{} x \in \Omega \\ \overline{g} \qquad &{} x \not \in \Omega \end{array} \right. \end{aligned}

satisfy (e).

Observe that $${A} \ne \emptyset$$. To see this fact, we just take $$z^{{\varepsilon }}=-C$$ and $$w^{{\varepsilon }}=-C$$ with C given by (5). Now we let

\begin{aligned} u^{{\varepsilon }}(x)=\sup _{(z^{{\varepsilon }},w^{{\varepsilon }})\in {A}}z^{{\varepsilon }}(x) \qquad \hbox {and} \qquad v^{{\varepsilon }}(x)=\sup _{(z^{{\varepsilon }},w^{{\varepsilon }})\in {A}}w^{{\varepsilon }}(x) . \end{aligned}
(6)

Our goal is to show that in this way we find a solution to the DPP.

### Proposition 11

The pair$$(u^{{\varepsilon }},v^{{\varepsilon }})$$given by (6) is a solution to the DPP (2).

### Proof

First, let us see that $$(u^{{\varepsilon }},v^{{\varepsilon }})$$ belongs to the set A. To this end we first observe that $$u^{{\varepsilon }}$$ and $$v^{{\varepsilon }}$$ are bounded by C and verify $$u^{{\varepsilon }}(x)\le \overline{f}(x)$$ and $$v^{{\varepsilon }}(x)\le \overline{g}(x)$$ for $$x\in {{\mathbb {R}}}^{N}\backslash \Omega$$. Hence we need to check (e) for $$x\in \Omega$$. Take $$(z^{{\varepsilon }},w^{{\varepsilon }})\in {A}$$ and fix $$x\in \Omega$$. Then,

\begin{aligned} z^{{\varepsilon }}(x)\le {\varepsilon }^{2}w^{{\varepsilon }}(x)+(1-{\varepsilon }^{2})\Big \{{\frac{1}{2}}\sup _{y \in B_{{\varepsilon }}(x)}z^{{\varepsilon }}(y) + {\frac{1}{2}}\inf _{y \in B_{{\varepsilon }}(x)}z^{{\varepsilon }}(y)\Big \}. \end{aligned}

As $$z^{{\varepsilon }}\le u^{{\varepsilon }}$$ and $$w^{{\varepsilon }}\le v^{{\varepsilon }}$$ we obtain

\begin{aligned} z^{{\varepsilon }}(x)\le {\varepsilon }^{2}v^{{\varepsilon }}(x)+(1-{\varepsilon }^{2})\Big \{{\frac{1}{2}}\sup _{y \in B_{{\varepsilon }}(x)}u^{{\varepsilon }}(y) + {\frac{1}{2}}\inf _{y \in B_{{\varepsilon }}(x)}u^{{\varepsilon }}(y)\Big \}. \end{aligned}

Taking supremum in the left hand side we obtain

\begin{aligned} u^{{\varepsilon }}(x)\le {\varepsilon }^{2}v^{{\varepsilon }}(x)+(1-{\varepsilon }^{2})\Big \{{\frac{1}{2}}\sup _{y \in B_{{\varepsilon }}(x)}u^{{\varepsilon }}(y) + {\frac{1}{2}}\inf _{y \in B_{{\varepsilon }}(x)}u^{{\varepsilon }}(y)\Big \}. \end{aligned}

In an analogous way we obtain

\begin{aligned} v^{{\varepsilon }}(x)\le {\varepsilon }^{2}u^{{\varepsilon }}(x)+(1-{\varepsilon }^{2})\fint _{B_{{\varepsilon }}(x)}v^{{\varepsilon }}(y)dy, \end{aligned}

and we conclude that $$(u^{{\varepsilon }},v^{{\varepsilon }}) \in {A}$$.

To end the proof we need to see that $$(u^{{\varepsilon }},v^{{\varepsilon }})$$ verifies the equations in condition (e) with equality. We argue by contradiction and assume that there is a point $$x_{0}\in {{\mathbb {R}}}^{N}$$ where an inequality in (e) is strict. First, assume that $$x_{0}\in {{\mathbb {R}}}^{N}\backslash \Omega$$, and that we have $$u^{{\varepsilon }}(x_{0})<\overline{f}(x_{0})$$. Then, take $$u^{{\varepsilon }}_{0}$$ defined by $$u^{{\varepsilon }}_{0}(x)=u^{{\varepsilon }}(x)$$ for $$x\ne x_0$$ and $$u^{{\varepsilon }}_{0}(x_{0})=\overline{f}(x_{0})$$. The pair $$(u^{{\varepsilon }}_{0},v^{{\varepsilon }})$$ belongs to A but $$u^{{\varepsilon }}_{0}(x_{0})>u^{{\varepsilon }}(x_{0})$$, which is a contradiction. We can argue in a similar way if $$v^{{\varepsilon }}(x_{0})<\overline{g}(x_{0})$$. Next, we consider a point $$x_{0}\in \Omega$$ where one of the inequalities in (e) is strict. Assume that

\begin{aligned} u^{{\varepsilon }}(x_{0})<{\varepsilon }^{2}v^{{\varepsilon }}(x_{0})+(1-{\varepsilon }^{2})\Big \{{\frac{1}{2}}\sup _{y \in B_{{\varepsilon }}(x_{0})}u^{{\varepsilon }}(y) + {\frac{1}{2}}\inf _{y \in B_{{\varepsilon }}(x_{0})}u^{{\varepsilon }}(y)\Big \}. \end{aligned}

Let

\begin{aligned} \delta ={\varepsilon }^{2}v^{{\varepsilon }}(x_{0})+(1-{\varepsilon }^{2})\Big \{{\frac{1}{2}}\sup _{y \in B_{{\varepsilon }}(x_{0})}u^{{\varepsilon }}(y) + {\frac{1}{2}}\inf _{y \in B_{{\varepsilon }}(x_{0})}u^{{\varepsilon }}(y)\Big \}-u^{{\varepsilon }}(x_{0})>0, \end{aligned}

and consider the function $$u^{{\varepsilon }}_{0}$$ given by

\begin{aligned} u^{{\varepsilon }}_{0} (x) =\left\{ \begin{array}{ll} u^{{\varepsilon }}(x) &{} \ \ x \ne x_{0}, \\ u^{{\varepsilon }}(x)+\frac{\delta }{2} &{} \ \ x =x_{0} . \\ \end{array} \right. \end{aligned}

Observe that

\begin{aligned} u^{{\varepsilon }}_{0}(x_{0})=u^{{\varepsilon }}(x_{0})+\frac{\delta }{2}< {\varepsilon }^{2}v^{{\varepsilon }}(x_{0})+(1-{\varepsilon }^{2})\Big \{{\frac{1}{2}}\sup _{y \in B_{{\varepsilon }}(x_{0})}u^{{\varepsilon }}(y) + {\frac{1}{2}}\inf _{y \in B_{{\varepsilon }}(x_{0})}u^{{\varepsilon }}(y)\Big \} \end{aligned}

and hence

\begin{aligned} u^{{\varepsilon }}_{0}(x_{0})<{\varepsilon }^{2}v^{{\varepsilon }}(x_{0})+(1-{\varepsilon }^{2})\Big \{{\frac{1}{2}}\sup _{y \in B_{{\varepsilon }}(x_{0})}u^{{\varepsilon }}_{0}(y) + {\frac{1}{2}}\inf _{y \in B_{{\varepsilon }}(x_{0})}u^{{\varepsilon }}_{0}(y)\Big \}. \end{aligned}

Then we have that $$(u^{{\varepsilon }}_{0},v^{{\varepsilon }})\in {A}$$ but $$u^{{\varepsilon }}_{0}(x_{0})>u^{{\varepsilon }}(x_{0})$$, again reaching a contradiction.

In an analogous way we can show that when

\begin{aligned} v^{{\varepsilon }}(x_{0})<{\varepsilon }^{2}u^{{\varepsilon }}(x_{0})+(1-{\varepsilon }^{2})\fint _{B_{{\varepsilon }} (x_{0})}v^{{\varepsilon }}(y)dy, \end{aligned}

we also reach a contradiction. $$\square$$
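For the reader who wants to experiment, the Perron construction can be complemented by a direct fixed-point (value) iteration on the two DPP equalities. The sketch below is only illustrative and rests on assumptions not taken from the text: a one-dimensional $$\Omega =(0,1)$$, constant boundary data $$\overline{f}=1$$ and $$\overline{g}=0$$, and a uniform grid whose mesh size divides $${\varepsilon }$$; the name `dpp_step` is ours.

```python
import numpy as np

h, eps = 0.01, 0.05
r = int(round(eps / h))                      # radius of B_eps in grid points
x = np.arange(-eps, 1.0 + eps + h / 2, h)    # grid covering Omega plus an eps-collar
inside = (x > 0.0) & (x < 1.0)
n = x.size

# index table: window[j, i] = i + j - r, clipped only in the frozen outer collar
window = np.clip(np.arange(n)[None, :] + np.arange(-r, r + 1)[:, None], 0, n - 1)

u = np.where(inside, 0.0, 1.0)               # boundary data f = 1 outside Omega
v = np.zeros_like(x)                         # boundary data g = 0 outside Omega

def dpp_step(u, v):
    """One application of the two DPP equalities at the interior grid points."""
    U, V = u[window], v[window]
    u_new = eps**2 * v + (1 - eps**2) * 0.5 * (U.max(axis=0) + U.min(axis=0))
    v_new = eps**2 * u + (1 - eps**2) * V.mean(axis=0)
    return np.where(inside, u_new, u), np.where(inside, v_new, v)

for _ in range(20000):
    u, v = dpp_step(u, v)

u_next, v_next = dpp_step(u, v)
residual = max(np.abs(u_next - u).max(), np.abs(v_next - v).max())
print(residual)
```

Starting from the subsolution pair $$(0,0)$$ the iteration is monotone nondecreasing, mirroring the supremum in (6), and the residual of the two DPP equalities becomes negligible.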

Now, concerning the value functions of our game, we know that $$u^{\varepsilon }_\text {I}\ge u^{\varepsilon }_\text {II}$$ and $$v^{\varepsilon }_\text {I}\ge v^{\varepsilon }_\text {II}$$ (this is immediate from the definitions). Hence, to obtain uniqueness of solutions of the DPP and existence of value functions for our game, it is enough to show that $$u^{\varepsilon }_\text {II}\ge u^{\varepsilon }\ge u^{\varepsilon }_\text {I}$$ and $$v^{\varepsilon }_\text {II}\ge v^{\varepsilon }\ge v^{\varepsilon }_\text {I}$$. To show this result we will use the OSTh for sub/supermartingales (see Sect. 2).

### Theorem 12

Given$${\varepsilon }>0$$, let$$(u^{{\varepsilon }},v^{{\varepsilon }})$$be a pair of functions that verifies the DPP (2). Then it holds that

\begin{aligned} u^{{\varepsilon }}(x_{0})=\sup _{S_{I}}\inf _{S_{II}}{{\mathbb {E}}}^{(x_{0},1)}_{S_{I},S_{II}}[h( x_{\tau })] \end{aligned}

if$$x_{0} \in \Omega$$is in the first board and

\begin{aligned} v^{{\varepsilon }}(x_{0})=\sup _{S_{I}}\inf _{S_{II}}{{\mathbb {E}}}^{(x_{0},2)}_{S_{I},S_{II}}[h( x_{\tau })] \end{aligned}

if$$x_{0} \in \Omega$$is in the second board.

Moreover, we can interchange$$\inf$$with$$\sup$$in the previous identities, that is, the game has a value. This value can be characterized as the unique solution to the DPP.

### Proof

Given $${\varepsilon }>0$$ we have proved the existence of a solution to the DPP $$(u^{{\varepsilon }},v^{{\varepsilon }})$$. Fix $$\delta >0$$. Assume that we start with $$(x_{0},1)$$, that is, the initial position is at board 1. We choose a strategy for Player I as follows:

\begin{aligned} x_{k+1}^{I}=S_{I}^{*}(x_{0},\ldots ,x_{k}) \qquad \hbox {is such that} \qquad \sup _{y\in B_{{\varepsilon }}(x_{k})}u^{{\varepsilon }}(y) - \frac{\delta }{2^{k}}\le u^{{\varepsilon }}(x_{k+1}^{I}). \end{aligned}

Given this strategy for Player I and any strategy $$S_{II}$$ for Player II we consider the sequence of random variables given by

\begin{aligned} M_{k}=\left\{ \begin{array}{ll} u^{{\varepsilon }}(x_{k})-\frac{\delta }{2^{k}} &{} \ \ \hbox {if } j_{k}=1, \\ v^{{\varepsilon }}(x_{k})-\frac{\delta }{2^{k}} &{} \ \ \hbox {if } j_{k} = 2. \end{array} \right. \end{aligned}

Let us see that $$(M_{k})_{k \ge 0}$$ is a submartingale. To this end we need to estimate

\begin{aligned} {{\mathbb {E}}}^{(x_{0},1)}_{S_{I}^{*},S_{II}}[M_{k+1}\arrowvert M_{k}]={{\mathbb {E}}}^{(x_{0},1)}_{S_{I}^{*},S_{II}}[M_{k+1}\arrowvert (x_{k},j_{k})]. \end{aligned}

We consider two cases:

Case 1: Assume that $$j_{k}=1$$, then

\begin{aligned} {{\mathbb {E}}}^{(x_{0},1)}_{S_{I}^{*},S_{II}}[M_{k+1}\arrowvert (x_{k},1)]&= (1-{\varepsilon }^{2}){{\mathbb {E}}}^{(x_{0},1)}_{S_{I}^{*},S_{II}}[M_{k+1}\arrowvert (x_{k},1)\wedge j_{k+1}=1] \\&+{\varepsilon }^{2}{{\mathbb {E}}}^{(x_{0},1)}_{S_{I}^{*},S_{II}}[M_{k+1}\arrowvert (x_{k},1)\wedge j_{k+1}=2]. \end{aligned}

Here we used that the probability of staying in the same board is $$(1-{\varepsilon }^{2})$$ and the probability of jumping to the other board is $${\varepsilon }^{2}$$. Now, if $$j_{k}=1$$ and $$j_{k+1}=2$$ then $$x_{k+1}=x_{k}$$ (we just changed boards). On the other hand, if we stay in the first board we obtain

\begin{aligned} {{\mathbb {E}}}^{(x_{0},1)}_{S_{I}^{*},S_{II}}[M_{k+1}\arrowvert (x_{k},1)]&= (1-{\varepsilon }^{2})\Big \{{\frac{1}{2}}u^{{\varepsilon }}(x_{k+1}^{I})+{\frac{1}{2}}u^{{\varepsilon }}(x_{k+1}^{II})-\frac{\delta }{2^{k+1}}\Big \} \\&+ {\varepsilon }^{2}\Big (v^{{\varepsilon }}(x_{k})-\frac{\delta }{2^{k+1}}\Big ). \end{aligned}

Since we are using the strategies $$S_{I}^{*}$$ and $$S_{II}$$, it holds that

\begin{aligned} \sup _{y\in B_{{\varepsilon }}(x_{k})}u^{{\varepsilon }}(y) - \frac{\delta }{2^{k}}\le u^{{\varepsilon }}(x_{k+1}^{I}) \qquad \hbox {and}\qquad \inf _{y\in B_{{\varepsilon }}(x_{k})}u^{{\varepsilon }}(y)\le u^{{\varepsilon }}(x_{k+1}^{II}). \end{aligned}

Therefore, we arrive at

\begin{aligned} {{\mathbb {E}}}^{(x_{0},1)}_{S_{I}^{*},S_{II}}[M_{k+1}\arrowvert (x_{k},1)]&\ge (1-{\varepsilon }^{2})\Big \{{\frac{1}{2}}\Big (\sup _{y\in B_{{\varepsilon }}(x_{k})}u^{{\varepsilon }}(y) - \frac{\delta }{2^{k}}\Big )+{\frac{1}{2}}\inf _{y\in B_{{\varepsilon }}(x_{k})}u^{{\varepsilon }}(y)\Big \} \\&+{\varepsilon }^{2}v^{{\varepsilon }}(x_{k})-\frac{\delta }{2^{k+1}}, \end{aligned}

that is,

\begin{aligned} {{\mathbb {E}}}^{(x_{0},1)}_{S_{I}^{*},S_{II}}[M_{k+1}\arrowvert (x_{k},1)]&\ge (1-{\varepsilon }^{2})\Big \{{\frac{1}{2}}\sup _{y\in B_{{\varepsilon }}(x_{k})}u^{{\varepsilon }}(y)+{\frac{1}{2}}\inf _{y\in B_{{\varepsilon }}(x_{k})}u^{{\varepsilon }}(y)\Big \} \\&+{\varepsilon }^{2}v^{{\varepsilon }}(x_{k})-(1-{\varepsilon }^{2}) \frac{\delta }{2^{k+1}}-\frac{\delta }{2^{k+1}}. \end{aligned}

As $$u^{{\varepsilon }}$$ is a solution to the DPP (2) we obtain

\begin{aligned} {{\mathbb {E}}}^{(x_{0},1)}_{S_{I}^{*},S_{II}}[M_{k+1}\arrowvert (x_{k},1)]\ge u^{{\varepsilon }}(x_{k})-\frac{\delta }{2^{k}}=M_{k} \end{aligned}

as we wanted to show.

Case 2: Assume that $$j_{k}=2$$. With the same ideas used before we get

\begin{aligned} {{\mathbb {E}}}^{(x_{0},1)}_{S_{I}^{*},S_{II}}[M_{k+1}\arrowvert (x_{k},2)]&= (1-{\varepsilon }^{2}){{\mathbb {E}}}^{(x_{0},1)}_{S_{I}^{*},S_{II}}[M_{k+1}\arrowvert (x_{k},2)\wedge j_{k+1}=2] \\&+{\varepsilon }^{2}{{\mathbb {E}}}^{(x_{0},1)}_{S_{I}^{*},S_{II}}[M_{k+1}\arrowvert (x_{k},2)\wedge j_{k+1}=1]. \end{aligned}

Remark that when $$j_{k}=j_{k+1}=2$$ (this means that we play in the second board) with $$x_{k}\in \Omega$$, then $$x_{k+1}$$ is chosen with uniform probability in the ball $$B_{{\varepsilon }}(x_{k})$$. Hence,

\begin{aligned} {{\mathbb {E}}}^{(x_{0},1)}_{S_{I}^{*},S_{II}}[M_{k+1}\arrowvert (x_{k},2)\wedge j_{k+1}=2]&= {{\mathbb {E}}}^{(x_{0},1)}_{S_{I}^{*},S_{II}} \left[ v^{{\varepsilon }}(x_{k+1})-\frac{\delta }{2^{k+1}}\arrowvert (x_{k},2)\wedge j_{k+1}=2\right] \\&= \fint _{B_{{\varepsilon }}(x_{k})} v^{{\varepsilon }}(y)dy-\frac{\delta }{2^{k+1}}. \end{aligned}

On the other hand,

\begin{aligned} {{\mathbb {E}}}^{(x_{0},1)}_{S_{I}^{*},S_{II}}[M_{k+1}\arrowvert (x_{k},2)\wedge j_{k+1}=1]=u^{{\varepsilon }}(x_{k})-\frac{\delta }{2^{k+1}}. \end{aligned}

Collecting these estimates we obtain

\begin{aligned} {{\mathbb {E}}}^{(x_{0},1)}_{S_{I}^{*},S_{II}}[M_{k+1}\arrowvert (x_{k},2)]&= (1-{\varepsilon }^{2})\left( \fint _{B_{{\varepsilon }}(x_{k})} v^{{\varepsilon }}(y)dy-\frac{\delta }{2^{k+1}} \right) +{\varepsilon }^{2} \left( u^{{\varepsilon }}(x_{k})-\frac{\delta }{2^{k+1}}\right) \\&\ge (1-{\varepsilon }^{2})\fint _{B_{{\varepsilon }}(x_{k})}v^{{\varepsilon }}(y)dy+ {\varepsilon }^{2}u^{{\varepsilon }}(x_{k})-\frac{\delta }{2^{k}}, \end{aligned}

that is,

\begin{aligned} {{\mathbb {E}}}^{(x_{0},1)}_{S_{I}^{*},S_{II}}[M_{k+1}\arrowvert (x_{k},2)]\ge v^{{\varepsilon }}(x_{k})-\frac{\delta }{2^{k}}=M_{k}. \end{aligned}

Here we used that $$v^{{\varepsilon }}$$ is a solution to the DPP (2). This ends the second case.

Therefore $$(M_{k})_{k\ge 0}$$ is a submartingale. Using the OSTh (recall that we have proved that $$\tau$$ is a.s. finite and that $$M_k$$ is uniformly bounded) we conclude that

\begin{aligned} {{\mathbb {E}}}^{(x_{0},1)}_{S_{I}^{*},S_{II}}[M_{\tau }]\ge M_{0} \end{aligned}

where $$\tau$$ is the first time such that $$x_{\tau }\notin \Omega$$ in either of the two boards. Then,

\begin{aligned} {{\mathbb {E}}}^{(x_{0},1)}_{S_{I}^{*},S_{II}}[\hbox {final payoff}]\ge u^{\varepsilon }(x_{0})-\delta . \end{aligned}

We can compute the infimum in $$S_{II}$$ and then the supremum in $$S_{I}$$ to obtain

\begin{aligned} \sup _{S_{I}}\inf _{S_{II}}{{\mathbb {E}}}^{(x_{0},1)}_{S_{I},S_{II}}[\hbox {final payoff}]\ge u^{{\varepsilon }}(x_{0})-\delta . \end{aligned}

We just observe that if we have started in the second board the previous computations show that

\begin{aligned} \sup _{S_{I}}\inf _{S_{II}}{{\mathbb {E}}}^{(x_{0},2)}_{S_{I},S_{II}}[\hbox {final payoff}]\ge v^{{\varepsilon }}(x_{0})-\delta . \end{aligned}

Now our goal is to prove the reverse inequality (interchanging inf and sup). To this end we define a strategy for Player II with

\begin{aligned} x_{k+1}^{II}=S_{II}^{*}(x_{0},\ldots ,x_{k}) \qquad \hbox {is such that} \qquad \inf _{y\in B_{{\varepsilon }}(x_{k})} u^{{\varepsilon }}(y)+ \frac{\delta }{2^{k}}\ge u^{{\varepsilon }}(x_{k+1}^{II}), \end{aligned}

and consider the sequence of random variables

\begin{aligned} N_{k}=\left\{ \begin{array}{ll} u^{{\varepsilon }}(x_{k})+\frac{\delta }{2^{k}} &{} \ \ \hbox {if } j_{k}=1 \\ v^{{\varepsilon }}(x_{k})+\frac{\delta }{2^{k}} &{} \ \ \hbox {if } j_{k}=2. \end{array} \right. \end{aligned}

Arguing as before we obtain that this sequence is a supermartingale. From the OSTh we get

\begin{aligned} {{\mathbb {E}}}^{(x_{0},1)}_{S_{I},S_{II}^{*}}[N_{\tau }]\le N_{0} \end{aligned}

where $$\tau$$ is the stopping time for the game. Then,

\begin{aligned} {{\mathbb {E}}}^{(x_{0},1)}_{S_{I},S_{II}^{*}}[\hbox {final payoff}]\le u^{\varepsilon }(x_{0})+\delta . \end{aligned}

Taking supremum in $$S_{I}$$ and then infimum in $$S_{II}$$ we obtain

\begin{aligned} \inf _{S_{II}}\sup _{S_{I}}{{\mathbb {E}}}^{(x_{0},1)}_{S_{I},S_{II}}[\hbox {final payoff}]\le u^{{\varepsilon }}(x_{0})+\delta . \end{aligned}

As before, the same ideas starting at $$(x_{0},2)$$ give us

\begin{aligned} \inf _{S_{II}}\sup _{S_{I}}{{\mathbb {E}}}^{(x_{0},2)}_{S_{I},S_{II}}[\hbox {final payoff}]\le v^{{\varepsilon }}(x_{0})+\delta . \end{aligned}

To end the proof we just observe that

\begin{aligned} \sup _{S_{I}}\inf _{S_{II}}{{\mathbb {E}}}_{S_{I},S_{II}}[\hbox {final payoff}]\le \inf _{S_{II}}\sup _{S_{I}}{{\mathbb {E}}}_{S_{I},S_{II}}[\hbox {final payoff}]. \end{aligned}

Therefore,

\begin{aligned} u^{{\varepsilon }}(x_{0})-\delta \le \sup _{S_{I}}\inf _{S_{II}} {{\mathbb {E}}}_{S_{I},S_{II}}^{(x_{0},1)}[\hbox {final payoff}] \le \inf _{S_{II}}\sup _{S_{I}}{{\mathbb {E}}}_{S_{I},S_{II}}^{(x_{0},1)}[\hbox {final payoff}]\le u^{{\varepsilon }}(x_{0})+\delta \end{aligned}

and

\begin{aligned} v^{{\varepsilon }}(x_{0})-\delta \le \sup _{S_{I}}\inf _{S_{II}} {{\mathbb {E}}}_{S_{I},S_{II}}^{(x_{0},2)}[\hbox {final payoff}]\le \inf _{S_{II}}\sup _{S_{I}}{{\mathbb {E}}}_{S_{I},S_{II}}^{(x_{0},2)}[\hbox {final payoff}]\le v^{{\varepsilon }}(x_{0})+\delta . \end{aligned}

As $$\delta >0$$ is arbitrary the proof is finished. $$\square$$

### Remark 13

One can also obtain existence for the DPP by considering

\begin{aligned} (\mathbf{e}* ) : \left\{ \begin{array}{ll} z^{{\varepsilon }}(x)\ge {\varepsilon }^{2}w^{{\varepsilon }}(x)+(1-{\varepsilon }^{2})\Big \{{\frac{1}{2}}\sup _{y \in B_{{\varepsilon }}(x)}z^{{\varepsilon }}(y) + {\frac{1}{2}}\inf _{y \in B_{{\varepsilon }}(x)}z^{{\varepsilon }}(y)\Big \} \qquad &{} \ x \in \Omega , \\ w^{{\varepsilon }}(x)\ge {\varepsilon }^{2}z^{{\varepsilon }}(x)+(1-{\varepsilon }^{2})\fint _{B_{{\varepsilon }}(x)}w^{{\varepsilon }}(y)dy \qquad &{} \ x \in \Omega , \\ z^{{\varepsilon }}(x) \ge \overline{f}(x) \qquad &{} \ x \in {{\mathbb {R}}}^{N} \backslash \Omega , \\ w^{{\varepsilon }}(x) \ge \overline{g}(x) \qquad &{} \ x \in {{\mathbb {R}}}^{N} \backslash \Omega . \end{array} \right. \end{aligned}

and the associated set of functions

\begin{aligned} {B} = \Big \{ (z^{{\varepsilon }},w^{{\varepsilon }}) : \hbox {bounded functions that verify} \ (\mathbf{e}* ) \Big \}. \end{aligned}

Now we take infima,

\begin{aligned} u^{{\varepsilon },*}(x)=\inf _{(z^{{\varepsilon }},w^{{\varepsilon }})\in {B}}z^{{\varepsilon }}(x) \qquad \hbox { and } \qquad v^{{\varepsilon },*}(x)=\inf _{(z^{{\varepsilon }},w^{{\varepsilon }})\in {B}}w^{{\varepsilon }}(x), \end{aligned}

that are solutions to the DPP (this fact can be proved as we did for suprema). Then, by the uniqueness of solutions to the DPP we have

\begin{aligned} u^{{\varepsilon },*} = u^{{\varepsilon }} \qquad \hbox {and} \qquad v^{{\varepsilon },*} = v^{{\varepsilon }}. \end{aligned}

## Uniform convergence

Now our aim is to pass to the limit in the values of the game

\begin{aligned} u^{\varepsilon }\rightarrow u, \ v^{\varepsilon }\rightarrow v \qquad \hbox {as } {\varepsilon }\rightarrow 0 \end{aligned}

and then, in the next section, to obtain that this limit pair $$(u,v)$$ is a viscosity solution to our system (1).

To obtain a convergent subsequence $$u^{\varepsilon }\rightarrow u$$ we will use the following Arzela–Ascoli type lemma. For its proof see Lemma 4.2 from .

### Lemma 14

Let $$\{u^{\varepsilon }: {\overline{\Omega }} \rightarrow {{\mathbb {R}}},\ {\varepsilon }>0\}$$ be a set of functions such that

1. (1)

there exists$$C>0$$such that$$\left| u^{\varepsilon }(x) \right| <C$$for every$${\varepsilon }>0$$and every$$x \in {\overline{\Omega }}$$,

2. (2)

given $$\delta >0$$ there are constants $$r_0$$ and $${\varepsilon }_0$$ such that for every $${\varepsilon }< {\varepsilon }_0$$ and any $$x, y \in {\overline{\Omega }}$$ with $$|x - y | < r_0$$ it holds

\begin{aligned} |u^{\varepsilon }(x) - u^{\varepsilon }(y)| < \delta . \end{aligned}

Then, there exists a uniformly continuous function $$u: {\overline{\Omega }} \rightarrow {{\mathbb {R}}}$$ and a subsequence still denoted by $$\{u^{\varepsilon }\}$$ such that

\begin{aligned} \begin{aligned} u^{{\varepsilon }}\rightarrow u \qquad \text { uniformly in }{\overline{\Omega }}, \hbox { as} \, {\varepsilon }\rightarrow 0. \end{aligned} \end{aligned}

So our task now is to show that $$u^{\varepsilon }$$ and $$v^{\varepsilon }$$ both satisfy the hypotheses of the previous lemma. First, we observe that they are uniformly bounded.

### Lemma 15

There exists a constant$$C>0$$independent of$${\varepsilon }$$such that

\begin{aligned} \left| u^{\varepsilon }(x) \right| \le C, \qquad \left| v^{\varepsilon }(x) \right| \le C, \end{aligned}

for every$${\varepsilon }>0$$and every$$x \in {\overline{\Omega }}$$.

### Proof

It follows from our proof of existence of a solution to the DPP. In fact, we can take

\begin{aligned} C = \max \{ \Vert g\Vert _\infty , \Vert f \Vert _\infty \}, \end{aligned}

since the final payoff in any of the boards is bounded by this C. $$\square$$

To prove the second hypothesis of Lemma 14 we will need some key estimates according to the board in which we are playing.

### Estimates for the tug-of-war game

In this case we are going to assume that we are playing in board 1 (with the Tug-of-War game) all the time (without changing boards).

### Lemma 16

Given$$\eta >0$$and$$a>0$$, there exist$$r_{0}>0$$and$${\varepsilon }_{0}>0$$such that, given$$y\in \partial \Omega$$and$$x_{0}\in \Omega$$with$$| x_{0}-y |<r_{0}$$, any of the two players has a strategy$$S^{*}$$with which we obtain

\begin{aligned} {\mathbb {P}}\Big (| x_{\tau }-y |< a \Big ) \ge 1 - \eta \qquad \hbox {and} \qquad {\mathbb {P}}\Big (\tau \ge \frac{a}{{\varepsilon }^2}\Big )< \eta \end{aligned}

for$${\varepsilon }<{\varepsilon }_{0}$$and$$x_{\tau }\in {{\mathbb {R}}}^{N}\backslash \Omega$$the first position outside$$\Omega$$.

This Lemma says that if we start playing close enough to $$y\in \partial \Omega$$, then with high probability we will finish quickly (in a number of steps smaller than a small constant times $${\varepsilon }^{-2}$$) and at a final position close to $$y\in \partial \Omega$$.

### Proof

We can assume without loss of generality that $$y=0\in \partial \Omega$$. In this case we will define the strategy $$S^{*}$$ (which can be used by either of the two players), “point to $$y=0$$”, as follows

\begin{aligned} x_{k+1}=S^{*}(x_{0},x_{1},\ldots ,x_{k})=x_{k}+ \left( \frac{{\varepsilon }^{3}}{2^{k}}-{\varepsilon }\right) \frac{x_{k}}{ |x_{k} |} \end{aligned}

as long as $$x_{k+1}\in \Omega$$. Now let us consider the random variables

\begin{aligned} N_{k}= | x_{k}|+\frac{{\varepsilon }^{3}}{2^{k}} \end{aligned}

for $$k\ge 0$$ and play assuming that one of the players uses the strategy $$S^{*}$$. The goal is to prove that $$\{N_{k}\}_{k\ge 0}$$ is a supermartingale, i.e.,

\begin{aligned} {{\mathbb {E}}}[N_{k+1}\arrowvert N_{k}]\le N_{k}. \end{aligned}

Note that with probability 1/2 we obtain

\begin{aligned} x_{k+1}=x_{k} + \left( \frac{{\varepsilon }^{3}}{2^{k}}-{\varepsilon }\right) \frac{x_{k}}{ | x_{k} |} \end{aligned}

which is the case when the player who uses the strategy $$S^{*}$$ wins the coin toss. On the other hand, we have

\begin{aligned} | x_{k+1}|\le | x_{k}| + {\varepsilon }, \end{aligned}

when the other player wins. Then, we obtain

\begin{aligned} {{\mathbb {E}}}\Big [| x_{k+1}| \arrowvert x_{k} \Big ]\le {\frac{1}{2}}\Big (| x_{k}| +\Big (\frac{{\varepsilon }^{3}}{2^{k}}-{\varepsilon }\Big )\Big ) + {\frac{1}{2}}(| x_{k} | + {\varepsilon })= | x_{k} | +\frac{{\varepsilon }^{3}}{2^{k+1}}. \end{aligned}

Hence, we get

\begin{aligned} {{\mathbb {E}}}\Big [N_{k+1}\arrowvert N_{k}\Big ]={{\mathbb {E}}}\Big [ | x_{k+1}| + \frac{{\varepsilon }^{3}}{2^{k+1}}\arrowvert | x_{k} | + \frac{{\varepsilon }^{3}}{2^{k}}\Big ] \le |x_{k}| + \frac{{\varepsilon }^{3}}{2^{k+1}}+\frac{{\varepsilon }^{3}}{2^{k+1}}=N_{k}. \end{aligned}
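The one-step computation above can be checked in exact rational arithmetic. The sketch below is only illustrative: it takes the worst case $$|x_{k+1}|=|x_{k}|+{\varepsilon }$$ for the opponent's move, and the helper name `expected_N_next` is ours.

```python
from fractions import Fraction

def expected_N_next(xk, eps, k):
    """E[N_{k+1} | N_k] when one player uses S* and the opponent is worst-case.

    Winning toss (prob 1/2): |x_{k+1}| = |x_k| + eps^3/2^k - eps.
    Losing toss  (prob 1/2): |x_{k+1}| <= |x_k| + eps  (equality, worst case).
    """
    win = xk + eps**3 / 2**k - eps
    lose = xk + eps
    corr = eps**3 / 2**(k + 1)          # the eps^3/2^{k+1} term in N_{k+1}
    return Fraction(1, 2) * (win + corr) + Fraction(1, 2) * (lose + corr)

checks = []
for eps in (Fraction(1, 10), Fraction(1, 100)):
    for k in range(6):
        xk = Fraction(1, 2)             # any |x_k| works: it cancels exactly
        N_k = xk + eps**3 / 2**k
        checks.append(expected_N_next(xk, eps, k) <= N_k)
print(all(checks))   # True: in the worst case one even has exact equality
```

Exact fractions avoid floating-point artifacts, so the inequality is verified with no rounding tolerance.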

We just proved that $$\{N_{k}\}_{k\ge 0}$$ is a supermartingale. Now, let us consider the random variables

\begin{aligned} (N_{k+1}-N_{k})^{2}, \end{aligned}

and the event

\begin{aligned} F_{k}=\{ \hbox {the player who points to } 0\in \partial \Omega \hbox { wins the coin toss} \}. \end{aligned}
(7)

Then we have the following

\begin{aligned} {{\mathbb {E}}}[(N_{k+1}-N_{k})^{2}\arrowvert N_{k}]&= {\frac{1}{2}}{{\mathbb {E}}}[(N_{k+1}-N_{k})^{2}\arrowvert N_{k} \wedge F_{k}]+{\frac{1}{2}}{{\mathbb {E}}}[(N_{k+1}-N_{k})^{2}\arrowvert N_{k} \wedge F_{k}^{c}] \\&\ge {\frac{1}{2}}{{\mathbb {E}}}[(N_{k+1}-N_{k})^{2}\arrowvert N_{k} \wedge F_{k}]. \end{aligned}

Let us observe that

\begin{aligned} {\frac{1}{2}}{{\mathbb {E}}}[(N_{k+1}-N_{k})^{2}\arrowvert N_{k} \wedge F_{k}]&= {\frac{1}{2}}{{\mathbb {E}}}\Big [\Big ( |x_{k}|-{\varepsilon }+\frac{{\varepsilon }^{3}}{2^{k}}+\frac{{\varepsilon }^{3}}{2^{k+1}}- | x_{k}| -\frac{{\varepsilon }^{3}}{2^{k}}\Big )^{2} \Big ] \\&= {\frac{1}{2}}{{\mathbb {E}}}\Big [\Big (-{\varepsilon }+\frac{{\varepsilon }^{3}}{2^{k+1}}\Big )^{2} \Big ]\ge \frac{{\varepsilon }^{2}}{3} \end{aligned}

if $${\varepsilon }<{\varepsilon }_{0}$$ for $${\varepsilon }_{0}$$ small enough. With this estimate in mind we obtain

\begin{aligned} {{\mathbb {E}}}[(N_{k+1}-N_{k})^{2}\arrowvert N_{k}] \ge \frac{{\varepsilon }^{2}}{3}. \end{aligned}
(8)

Now we will analyze $$N_{k}^{2}-N_{k+1}^{2}$$. We have

\begin{aligned} N_{k}^{2}-N_{k+1}^{2}=(N_{k+1}-N_{k})^{2}+2N_{k+1}(N_{k}-N_{k+1}). \end{aligned}
(9)

Let us prove that $${{\mathbb {E}}}[N_{k+1}(N_{k}-N_{k+1})\arrowvert N_{k}]\ge 0$$ using the set $$F_{k}$$ defined by (7). It holds that

\begin{aligned} \begin{aligned}&{{\mathbb {E}}}[N_{k+1}(N_{k}-N_{k+1})\arrowvert N_{k}] \\&\quad ={\frac{1}{2}}{{\mathbb {E}}}[N_{k+1}(N_{k}-N_{k+1})\arrowvert N_{k}\wedge F_{k}]+{\frac{1}{2}}{{\mathbb {E}}}[N_{k+1}(N_{k}-N_{k+1})\arrowvert N_{k}\wedge F_{k}^{c}] \\&\quad = {\frac{1}{2}}\Big [\Big (|x_{k}|-{\varepsilon }+\frac{{\varepsilon }^{3}}{2^{k}}+\frac{{\varepsilon }^{3}}{2^{k+1}}\Big )\Big (|x_{k}|+ \frac{{\varepsilon }^{3}}{2^{k}}-|x_{k}|+{\varepsilon }-\frac{{\varepsilon }^{3}}{2^{k}}- \frac{{\varepsilon }^{3}}{2^{k+1}}\Big )\Big ] \\&\qquad + {\frac{1}{2}}\Big [\Big (|x_{k+1}|+ \frac{{\varepsilon }^{3}}{2^{k+1}}\Big ) \Big (|x_{k}|+\frac{{\varepsilon }^{3}}{2^{k}}- |x_{k+1}|-\frac{{\varepsilon }^{3}}{2^{k+1}}\Big )\Big ] \\&\quad \ge {\frac{1}{2}}\Big (|x_{k}|-{\varepsilon }+\frac{{\varepsilon }^{3}}{2^{k}}+ \frac{{\varepsilon }^{3}}{2^{k+1}}\Big )\Big ({\varepsilon }- \frac{{\varepsilon }^{3}}{2^{k+1}}\Big ) \\&\qquad +{\frac{1}{2}}\Big [\Big (|x_{k}|-{\varepsilon }+ \frac{{\varepsilon }^{3}}{2^{k+1}}\Big )\Big (|x_{k}|+\frac{{\varepsilon }^{3}}{2^{k}}- |x_{k}|-{\varepsilon }-\frac{{\varepsilon }^{3}}{2^{k+1}}\Big )\Big ] \end{aligned} \end{aligned}

where we used that $$|x_{k}|-{\varepsilon }\le |x_{k+1}|\le |x_{k}|+{\varepsilon }$$. Thus

\begin{aligned} {{\mathbb {E}}}[N_{k+1}(N_{k}-N_{k+1})\arrowvert N_{k}]&\ge {\frac{1}{2}}\left( |x_{k}|-{\varepsilon }+ \frac{{\varepsilon }^{3}}{2^{k+1}}+\frac{{\varepsilon }^{3}}{2^{k}}\right) \left( {\varepsilon }- \frac{{\varepsilon }^{3}}{2^{k+1}}\right) \\&+ {\frac{1}{2}}\left( |x_{k}|-{\varepsilon }+ \frac{{\varepsilon }^{3}}{2^{k+1}}\right) \left( -{\varepsilon }+\frac{{\varepsilon }^{3}}{2^{k+1}}\right) , \end{aligned}

and then

\begin{aligned} {{\mathbb {E}}}[N_{k+1}(N_{k}-N_{k+1})\arrowvert N_{k}]\ge {\frac{1}{2}}\Big [\frac{{\varepsilon }^{3}}{2^{k}} \Big ({\varepsilon }-\frac{{\varepsilon }^{3}}{2^{k+1}}\Big )\Big ]\ge 0. \end{aligned}

If we go back to (9) and use (8) together with the estimate we have just obtained, we arrive at

\begin{aligned} {{\mathbb {E}}}[N_{k}^{2}-N_{k+1}^{2}\arrowvert N_{k}]\ge {{\mathbb {E}}}[(N_{k+1}-N_{k})^{2}\arrowvert N_{k}]\ge \frac{{\varepsilon }^{2}}{3}. \end{aligned}

Therefore, for the sequence of random variables

\begin{aligned} {{\mathbb {W}}}_{k}=N^{2}_{k}+\frac{k{\varepsilon }^{2}}{3} \end{aligned}

we have

\begin{aligned} {{\mathbb {E}}}[{{\mathbb {W}}}_{k}-{{\mathbb {W}}}_{k+1}\arrowvert {{\mathbb {W}}}_{k}]= {{\mathbb {E}}}\left[ N_{k}^{2}-N_{k+1}^{2}-\frac{{\varepsilon }^{2}}{3}\arrowvert {{\mathbb {W}}}_{k}\right] \ge 0. \end{aligned}

As $${{\mathbb {E}}}[{{\mathbb {W}}}_{k}\arrowvert {{\mathbb {W}}}_{k}]={{\mathbb {W}}}_{k}$$, we obtain

\begin{aligned} {{\mathbb {E}}}[{{\mathbb {W}}}_{k+1}\arrowvert {{\mathbb {W}}}_{k}]\le {{\mathbb {W}}}_{k}, \end{aligned}

that is, the sequence $$\{{{\mathbb {W}}}_{k}\}_{k\ge 1}$$ is a supermartingale. In order to use the OSTh, given a fixed integer $$m\in {{\mathbb {N}}}$$ we define the stopping time

\begin{aligned} \tau _{m}=\tau \wedge m := \min \{\tau ,m\}. \end{aligned}

This new stopping time verifies $$\tau _{m}\le m$$, which is the first hypothesis of the OSTh. Then, using the OSTh we obtain

\begin{aligned} {{\mathbb {E}}}[{{\mathbb {W}}}_{\tau _{m}}]\le {{\mathbb {W}}}_{0}. \end{aligned}

Observe that $$\lim \limits _{m\rightarrow \infty }\tau \wedge m = \tau$$ almost surely. Then, using Fatou’s Lemma, we arrive at

\begin{aligned} {{\mathbb {E}}}[{{\mathbb {W}}}_{\tau }]={{\mathbb {E}}}[\liminf _{m} {{\mathbb {W}}}_{\tau \wedge m}]\underbrace{\le }_{Fatou} \liminf _{m} {{\mathbb {E}}}[{{\mathbb {W}}}_{\tau \wedge m}]\underbrace{\le }_{OSTh} {{\mathbb {W}}}_{0} \end{aligned}

Thus, we obtain $${{\mathbb {E}}}[{{\mathbb {W}}}_{\tau }]\le {{\mathbb {W}}}_{0}$$, i.e.,

\begin{aligned} {{\mathbb {E}}}\left[ N^{2}_{\tau }+\frac{\tau {\varepsilon }^{2}}{3}\right] \le N_{0}^{2}. \end{aligned}
(10)

Then,

\begin{aligned} {{\mathbb {E}}}[\tau ]\le 3(|x_{0}|+{\varepsilon }^{3})^{2}{\varepsilon }^{-2}\le 4|x_{0} |^{2}{\varepsilon }^{-2} \end{aligned}

if $${\varepsilon }$$ is small enough. On the other hand, if we go back to (10) we have

\begin{aligned} {{\mathbb {E}}}[N_{\tau }^{2}]\le N_{0}^{2}, \end{aligned}

i.e.

\begin{aligned} {{\mathbb {E}}}[|x_{\tau }|^{2}]\le {{\mathbb {E}}}\left[ \left( |x_{\tau }|+ \frac{{\varepsilon }^{3}}{2^{\tau }}\right) ^{2}\right] \le (|x_{0}|+{\varepsilon }^{3})^{2}\le 2 |x_{0}|^{2}. \end{aligned}

What we have so far is that

\begin{aligned} {{\mathbb {E}}}[\tau ]\le 4|x_{0}|^{2}{\varepsilon }^{-2} \qquad \hbox {and} \qquad {{\mathbb {E}}}[|x_{\tau }|^{2}]\le 2 |x_{0}|^{2}. \end{aligned}

We will use these two estimates to prove

\begin{aligned} {{\mathbb {P}}}\Big (\tau \ge \frac{a}{{\varepsilon }^2}\Big )< \eta \qquad \hbox {and} \qquad {{\mathbb {P}}}\Big (|x_{\tau } |\ge a\Big )< \eta . \end{aligned}

Given $$\eta > 0$$ and $$a > 0$$, we take $$x_{0}\in \Omega$$ such that $$|x_{0} |< r_{0}$$ with $$r_{0}$$ to be chosen later (depending on $$\eta$$ and a). We have

\begin{aligned} C r_{0}^{2}{\varepsilon }^{-2}\ge C |x_{0}-y |^{2}{\varepsilon }^{-2} \ge {{\mathbb {E}}}^{x_{0}}[\tau ]\ge {{\mathbb {P}}}\Big (\tau \ge \frac{a}{{\varepsilon }^{2}}\Big )\frac{a}{{\varepsilon }^{2}}. \end{aligned}

Thus

\begin{aligned} {{\mathbb {P}}}\left( \tau \ge \frac{a}{{\varepsilon }^{2}}\right) \le C \frac{r_{0}^{2}}{a}< \eta \end{aligned}

which holds true if $$r_{0}<\sqrt{\frac{\eta a}{C}}$$.

Also we have

\begin{aligned} C r_{0}^{2} \ge C |x_{0} |^{2}\ge {{\mathbb {E}}}^{x_{0}}[|x_{\tau } |^{2}]\ge a^{2}{{\mathbb {P}}}(|x_{\tau } |^{2}\ge a^{2}). \end{aligned}

Then

\begin{aligned} {{\mathbb {P}}}(|x_{\tau } |\ge a)\le C\frac{r_{0}^{2}}{a^{2}}< \eta \end{aligned}

which holds true if $$r_{0}< \sqrt{\frac{\eta a^{2}}{C}}$$. Observe that if we take $$a<1$$ we have $$\sqrt{\frac{\eta a^{2}}{C}}<\sqrt{\frac{\eta a}{C}}$$, then if we choose $$r_{0}<\sqrt{\frac{\eta a^{2}}{C}}$$ both conditions are fulfilled at the same time. $$\square$$

### Estimates for the random walk game

In this case we are going to assume that we are permanently playing on board 2, with the random walk game. The estimates for this game follow the same ideas as before, and are even simpler since there are no strategies of the players involved in this case. We include the details for completeness.

### Lemma 17

Given$$\eta >0$$and$$a>0$$, there exist$$r_{0}>0$$and$${\varepsilon }_{0}>0$$such that, given$$y\in \partial \Omega$$and$$x_{0}\in \Omega$$with$$|x_{0}-y|<r_{0}$$, if we play at random we obtain

\begin{aligned} {\mathbb {P}} \Big (|x_{\tau }-y|< a \Big ) \ge 1 - \eta \qquad \hbox {and} \qquad {\mathbb {P}} \Big (\tau \ge \frac{a}{{\varepsilon }^2} \Big )< \eta \end{aligned}

for$${\varepsilon }<{\varepsilon }_{0}$$and$$x_{\tau }\in {{\mathbb {R}}}^{N}\backslash \Omega$$the first position outside$$\Omega$$.

### Proof

Recall that we assumed that $$\Omega$$ satisfies the uniform exterior ball property for a certain $$\theta _{0}> 0$$.

For $$N \ge 3$$, given $$\theta <\theta _{0}$$ and $$y \in \partial \Omega$$, we may assume (after a translation) that $$z_{y} = 0$$, so that $$\overline{B_{\theta }(0)} \cap \overline{\Omega } = \{ y \}$$. We define the set

\begin{aligned} \Omega _{{\varepsilon }}=\{x\in {{\mathbb {R}}}^{N}:d(x,\Omega )<{\varepsilon }\} \end{aligned}

for $${\varepsilon }$$ small enough. Now, we consider the function $$\mu :\Omega _{{\varepsilon }}\rightarrow {{\mathbb {R}}}$$ given by

\begin{aligned} \mu (x)=\frac{1}{\theta ^{N-2}}-\frac{1}{|x |^{N-2}}. \end{aligned}
(11)

This function is positive in $$\overline{\Omega }\backslash \{ y \}$$, radially increasing and harmonic in $$\Omega$$. Moreover, $$\mu (y) = 0$$. For $$N=2$$ we take $$\mu (x)=\ln (|x |)-\ln (\theta )$$ and we leave the details to the reader.
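The harmonicity of the barrier $$\mu$$ away from the origin can be checked numerically with a centered second-difference approximation of the Laplacian. A small sketch for the case N = 3 (the point and the radius $$\theta$$ below are chosen arbitrarily):

```python
import numpy as np

theta, h = 0.5, 1e-3
mu = lambda x: 1.0 / theta - 1.0 / np.linalg.norm(x)   # N = 3 case of (11)

x = np.array([0.8, -0.4, 0.6])   # any point away from the origin
# finite-difference Laplacian: sum of centered second differences per axis
lap = sum(
    (mu(x + h * e) + mu(x - h * e) - 2.0 * mu(x)) / h**2
    for e in np.eye(3)
)
assert abs(lap) < 1e-3   # Delta mu = 0 away from the origin
```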

We will take the first position of the game, $$x_{0} \in \Omega$$, such that $$|x_{0} -y |<r_{0}$$ with $$r_{0}$$ to be chosen later. Let $$(x_{k})_{k \ge 0}$$ be the sequence of positions of the game playing random walks. Consider the sequence of random variables

\begin{aligned} N_{k}=\mu (x_{k}) \end{aligned}

for $$k \ge 0$$. Let us prove that $$N_{k}$$ is a martingale. Indeed

\begin{aligned} {{\mathbb {E}}}[N_{k + 1} \arrowvert N_{k}] = \fint _{B _{{\varepsilon }} (x_{k})} \mu (w) dw = \mu (x_{k}) = N_{k}. \end{aligned}

Here we have used that $$\mu$$ is harmonic. Since $$\mu$$ is bounded in $$\Omega$$, the third hypothesis of OSTh is fulfilled, hence we obtain

\begin{aligned} {{\mathbb {E}}}[\mu (x_{\tau })] = \mu (x_ {0}). \end{aligned}
(12)
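The identity (12) rests on the mean value property of the harmonic function $$\mu$$: averaging $$\mu$$ over $$B_{{\varepsilon }}(x_{k})$$ returns $$\mu (x_{k})$$, which is exactly the one-step martingale computation above. A Monte Carlo sketch of this step for N = 3 (all numeric values below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
theta, eps = 0.5, 0.1
mu = lambda x: 1.0 / theta - 1.0 / np.linalg.norm(x, axis=-1)

x_k = np.array([0.9, 0.2, -0.3])   # a point with |x_k| > eps + 0, ball avoids origin

# uniform samples in the ball B_eps(x_k): random direction times radius ~ U^{1/3}
n = 400_000
d = rng.normal(size=(n, 3))
d /= np.linalg.norm(d, axis=1, keepdims=True)
r = eps * rng.random(n) ** (1.0 / 3.0)
w = x_k + d * r[:, None]

# mean value property: the ball average of mu equals mu at the center
assert abs(mu(w).mean() - mu(x_k)) < 1e-3
```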

Let us estimate the value $$\mu (x_{0})$$

\begin{aligned} \mu (x_{0})&= \frac{1}{\theta ^{N-2}}-\frac{1}{|x_{0} |^{N-2}}=\frac{|x_{0}|^{N-2}-\theta ^{N-2}}{\theta ^{N-2} |x_{0}|^{N-2}} \nonumber \\&= \frac{(|x_{0}|-\theta )}{\theta ^{N-2}|x_{0} |^{N-2}}\Big (\sum \limits _{j=1}^{N-2}|x_{0}|^{N-2-j}\theta ^{j-1}\Big ). \end{aligned}
(13)

The first factor can be bounded as

\begin{aligned} (|x_{0}|-\theta )=(|x_{0}|-|y |)\le |x_{0}-y|<r_{0}. \end{aligned}

To deal with the second factor we will ask $$\theta <1$$ and use that $$|x_{0}|^{N-2-j}\le R^{N-2}$$, where $$R=\max _{x\in \Omega }\{|x |\}$$ (suppose $$R>1$$). Then, we obtain

\begin{aligned} \sum \limits _{j=1}^{N-2}|x_{0}|^{N-2-j}\theta ^{j-1}\le R^{N-2}(N-2). \end{aligned}

Finally, we will use that $$|x_{0}|>\theta$$. Plugging all these estimates in (13) we obtain

\begin{aligned} \mu (x_{0})\le r_{0} \left( \frac{R^{N-2}(N-2)}{\theta ^{2(N-2)}}\right) . \end{aligned}

If we call $$c(\Omega ,\theta )=\frac{R^{N-2}(N-2)}{\theta ^{2(N-2)}}$$ and come back to (12) we get

\begin{aligned} {{\mathbb {E}}}[\mu (x_{\tau })]<c(\Omega ,\theta )r_{0}. \end{aligned}
(14)

We need to establish a relation between $$\mu (x_{\tau })$$ and $$|x_{\tau } -y |$$. To this end, we take the function $$b: [\theta , + \infty ) \rightarrow {{\mathbb {R}}}$$ given by

\begin{aligned} b(\overline{a})=\frac{1}{\theta ^{N-2}}-\frac{1}{\overline{a}^{N-2}}. \end{aligned}

Note that this function is the radial version of $$\mu$$. It is positive and increasing; hence it has an inverse (also increasing) given by the formula

\begin{aligned} \overline{a}(b)=\frac{\theta }{(1-\theta ^{N-2}b)^{\frac{1}{N-2}}}. \end{aligned}

This function is positive, increasing and convex, since $$\overline{a}''> 0$$. Then for $$b <1$$ we obtain

\begin{aligned} \overline{a}(b)\le \theta + (\overline{a}(1)-\theta )b. \end{aligned}

Let us call $$K(\theta )=(\overline{a}(1)-\theta )>0$$ (this constant depends only on $$\theta$$). Using the relationship between $$\overline{a}$$ and b we obtain the following: given $$\overline{a}> \theta$$ there is $$b> 0$$ such that

\begin{aligned} \hbox {if } \mu (x_{\tau })<b \hbox { then } |x_{\tau } |<\overline{a}. \end{aligned}

Here we are using that the function $$b (\overline{a})$$ is increasing. Now one can check that, for all $$a> 0$$, there are $$\overline{a}> \theta$$ and $${\varepsilon }_{0}> 0$$ such that, if

\begin{aligned} |x_{\tau }|< \overline{a} \qquad \hbox {and} \qquad d(x_{\tau },\Omega )<{\varepsilon }_{0}, \end{aligned}

then

\begin{aligned} |x_{\tau }-y|<a. \end{aligned}

Putting everything together we obtain that, given $$a>0$$, there exist $$\overline{a}>\theta$$, $$b>0$$ and $${\varepsilon }_{0}>0$$ such that

\begin{aligned} \hbox {if } \mu (x_{\tau })<b \hbox { and } d(x_{\tau },\Omega )<{\varepsilon }_{0} \hbox { then } |x_{\tau }-y|<a. \end{aligned}

We also require $$0<b<a$$, a condition that will be used later. Then, we have

\begin{aligned} {{\mathbb {P}}}(\mu (x_{\tau })\ge b)\ge {{\mathbb {P}}}(|x_{\tau }-y|\ge a). \end{aligned}

Coming back to (14) we get

\begin{aligned} c(\Omega ,\theta )r_{0}>{{\mathbb {E}}}[\mu (x_{\tau })]\ge {{\mathbb {P}}}(\mu (x_{\tau })\ge b)b\ge {{\mathbb {P}}}(|x_{\tau }-y|\ge a)b. \end{aligned}
(15)

Using that $$\overline{a}-\theta \le K(\theta )b$$ we obtain

\begin{aligned} c(\Omega ,\theta )r_{0}>{{\mathbb {P}}}(|x_{\tau }-y|\ge a)\frac{\overline{a}-\theta }{K(\theta )}. \end{aligned}

Then

\begin{aligned} {{\mathbb {P}}}(|x_{\tau }-y|\ge a)<\frac{c(\Omega ,\theta )r_{0}K(\theta )}{\overline{a}-\theta }<\eta \end{aligned}

which holds true if

\begin{aligned} r_{0}<\frac{\eta (\overline{a}-\theta )}{c(\Omega ,\theta )K(\theta )}. \end{aligned}

This is one of the inequalities we wanted to prove.

Now let us compute

\begin{aligned} {{\mathbb {E}}}[N_{k+1}^{2}-N_{k}^{2}\arrowvert N_{k}]=\fint _{B_{{\varepsilon }}(x_{k})}(\mu ^{2}(w)-\mu ^{2}(x_{k}))dw. \end{aligned}
(16)

Let us call $$\varphi =\mu ^{2}$$. Making a Taylor expansion of order two we obtain

\begin{aligned} \varphi (w)=\varphi (x_{k})+\langle \nabla \varphi (x_{k}),(w-x_{k})\rangle +{\frac{1}{2}}\langle D^{2}\varphi (x_{k})(w-x_{k}),(w-x_{k})\rangle +O(|w-x_{k}|^{3}). \end{aligned}

Then

\begin{aligned} \begin{aligned} \fint _{B_{{\varepsilon }}(x_{k})}(\varphi (w)-\varphi (x_{k}))dw&=\fint _{B_{{\varepsilon }}(x_{k})}\langle \nabla \varphi (x_{k}),(w-x_{k})\rangle dw \\&\quad +{\frac{1}{2}}\fint _{B_{{\varepsilon }}(x_{k})}\langle D^{2}\varphi (x_{k})(w-x_{k}),(w-x_{k})\rangle dw \\&\quad +\fint _{B_{{\varepsilon }}(x_{k})}O(|w-x_{k}|^{3})dw. \end{aligned} \end{aligned}

Let us analyze these integrals

\begin{aligned} \fint _{B_{{\varepsilon }}(x_{k})}\langle \nabla \varphi (x_{k}),(w-x_{k})\rangle dw=0. \end{aligned}

On the other hand, for $$\langle D^{2}\varphi (x_{k})(w-x_{k}),(w-x_{k})\rangle$$, changing variables as $$w=x_{k}+{\varepsilon }z$$, it holds that

\begin{aligned} \fint _{B_{{\varepsilon }}(x_{k})}\langle D^{2}\varphi (x_{k})(w-x_{k}), (w-x_{k})\rangle dw&= \sum \limits _{j=1}^{N}\partial _{x_{j}x_{j}}^{2} \varphi (x_{k}){\varepsilon }^{2}\fint _{B_{1}(0)}z_{j}^{2}dz \\&= \kappa {\varepsilon }^{2}\sum \limits _{j=1}^{N}\partial _{x_{j}x_{j}}^{2}\varphi (x_{k}). \end{aligned}
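The constant $$\kappa =\fint _{B_{1}(0)}z_{j}^{2}\,dz$$ is explicit: by symmetry it equals $$\frac{1}{N}\fint _{B_{1}(0)}|z|^{2}\,dz=\frac{1}{N+2}$$. A quick Monte Carlo confirmation for N = 3 (so $$\kappa =1/5$$):

```python
import numpy as np

rng = np.random.default_rng(2)
N, n = 3, 500_000

# uniform points in the unit ball of R^N: direction times radius ~ U^{1/N}
d = rng.normal(size=(n, N))
d /= np.linalg.norm(d, axis=1, keepdims=True)
z = d * rng.random(n)[:, None] ** (1.0 / N)

kappa_mc = (z[:, 0] ** 2).mean()             # average of z_1^2 over B_1(0)
assert abs(kappa_mc - 1.0 / (N + 2)) < 5e-3  # exact value 1/(N+2) = 0.2
```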

Here we find the constant $$\kappa$$ that appears in the second equation in (1). Let us compute the second derivatives of $$\varphi$$. As $$\varphi =\mu ^{2}$$,

\begin{aligned} \sum \limits _{j=1}^{N}\partial _{x_{j}x_{j}}^{2}\varphi (x_{k})= 2\sum \limits _{j=1}^{N}(\partial _{x_{j}}\mu (x_{k}))^{2}+2\mu (x_{k}) \sum \limits _{j=1}^{N}\partial _{x_{j}x_{j}}^{2}\mu (x_{k}). \end{aligned}

The second term is zero because $$\mu$$ is harmonic in $$\Omega$$. Hence, we arrive at

\begin{aligned} \sum \limits _{j=1}^{N}\partial _{x_{j}x_{j}}^{2}\varphi (x_{k})= 2\sum \limits _{j=1}^{N}(\partial _{x_{j}}\mu (x_{k}))^{2}. \end{aligned}

Using the definition of $$\mu$$ (11) we get

\begin{aligned} \sum \limits _{j=1}^{N}\partial _{x_{j}x_{j}}^{2}\varphi (x_{k})= \frac{2(N-2)^{2}}{|x_{k} |^{2(N-1)}}. \end{aligned}

Putting everything together

\begin{aligned} \fint _{B_{{\varepsilon }}(x_{k})}(\varphi (w)-\varphi (x_{k}))dw&= {\frac{1}{2}}\kappa {\varepsilon }^{2}\frac{2(N-2)^{2}}{|x_{k} |^{2(N-1)}}+O({\varepsilon }^{3}) \\&\ge {\varepsilon }^{2}\frac{\kappa (N-2)^{2}}{R^{2(N-1)}}-\gamma {\varepsilon }^{3} \ge {\varepsilon }^{2}\frac{\kappa (N-2)^{2}}{2R^{2(N-1)}}, \end{aligned}

if $${\varepsilon }$$ is small enough (here $$R=\max _{x\in \Omega } \{ |x |\}$$). Let us call

\begin{aligned} \sigma (\Omega )=\frac{\kappa (N-2)^{2}}{2R^{2(N-1)}}. \end{aligned}

Then, if we go back to (16) we get

\begin{aligned} {{\mathbb {E}}}[N_{k+1}^{2}-N_{k}^{2}\arrowvert N_{k}]\ge \sigma (\Omega ){\varepsilon }^{2}. \end{aligned}

Let us consider the sequence of random variables $$({{\mathbb {W}}}_{k})_{k\ge 0}$$ given by

\begin{aligned} {{\mathbb {W}}}_{k}=-N_{k}^{2}+\sigma (\Omega )k{\varepsilon }^{2}. \end{aligned}

Then

\begin{aligned} {{\mathbb {E}}}[{{\mathbb {W}}}_{k+1}-{{\mathbb {W}}}_{k}\arrowvert {{\mathbb {W}}}_{k}]={{\mathbb {E}}}[-(N_{k+1}^{2}-N_{k}^{2})+\sigma {\varepsilon }^{2}\arrowvert N_{k}]\le 0. \end{aligned}

That is, $${{\mathbb {W}}}_{k}$$ is a supermartingale. Using the OSTh in the same way as before we get

\begin{aligned} {{\mathbb {E}}}[-\mu ^{2}(x_{\tau })+\sigma \tau {\varepsilon }^{2}]\le -\mu ^{2}(x_{0}). \end{aligned}

Therefore,

\begin{aligned} {{\mathbb {E}}}[\sigma \tau {\varepsilon }^{2}]\le -\mu ^{2}(x_{0})+{{\mathbb {E}}}[\mu ^{2}(x_{\tau })]\le {{\mathbb {E}}}[\mu ^{2}(x_{\tau })]. \end{aligned}
(17)

Hence, we need a bound for $${{\mathbb {E}}}[\mu ^{2}(x_{\tau })]$$. We have

\begin{aligned} {{\mathbb {E}}}[\mu ^{2}(x_{\tau })]={{\mathbb {E}}}[\mu ^{2}(x_{\tau })\arrowvert \mu (x_{\tau })<b] {{\mathbb {P}}}(\mu (x_{\tau })<b)+{{\mathbb {E}}}[\mu ^{2}(x_{\tau })\arrowvert \mu (x_{\tau })\ge b]{{\mathbb {P}}}(\mu (x_{\tau })\ge b). \end{aligned}

It holds that $${{\mathbb {E}}}[\mu ^{2}(x_{\tau })\arrowvert \mu (x_{\tau })<b]\le b^{2}$$ and $${{\mathbb {P}}}(\mu (x_{\tau })<b)\le 1$$. If we call $$M({\varepsilon }_{0})=\max _{x\in \Omega _{{\varepsilon }_{0}}}\arrowvert \mu (x)\arrowvert$$ it holds that $${{\mathbb {E}}}[\mu ^{2}(x_{\tau })\arrowvert \mu (x_{\tau })\ge b]\le M({\varepsilon }_{0})^{2}$$. Finally, using (15) we obtain $${{\mathbb {P}}}(\mu (x_{\tau })\ge b)\le \frac{c(\Omega ,\theta )r_{0}}{b}$$. Thus

\begin{aligned} {{\mathbb {E}}}[\mu ^{2}(x_{\tau })]\le b^{2}+M({\varepsilon }_{0})^{2}\frac{c(\Omega ,\theta )r_{0}}{b}. \end{aligned}

Recall that we imposed $$0<b<a$$. Then

\begin{aligned} {{\mathbb {E}}}[\mu ^{2}(x_{\tau })]\le a^{2}+M({\varepsilon }_{0})^{2}\frac{c(\Omega ,\theta )r_{0}}{b}. \end{aligned}
(18)

On the other hand, we have

\begin{aligned} \sigma {{\mathbb {E}}}[\tau {\varepsilon }^{2}]\ge {{\mathbb {P}}}(\tau {\varepsilon }^{2}\ge a)a\sigma . \end{aligned}

Using (17) and (18) we get

\begin{aligned} {{\mathbb {P}}}\Big (\tau \ge \frac{a}{{\varepsilon }^{2}}\Big )\le \frac{a}{\sigma } +M({\varepsilon }_{0})^{2}\frac{c(\Omega ,\theta )r_{0}}{b\sigma a}. \end{aligned}

If we impose

\begin{aligned} \frac{a}{\sigma } < \frac{\eta }{2} \end{aligned}

we arrive at

\begin{aligned} {{\mathbb {P}}}\Big (\tau \ge \frac{a}{{\varepsilon }^{2}}\Big ) \le \frac{\eta }{2}+M({\varepsilon }_{0})^{2}\frac{c(\Omega ,\theta )r_{0}}{b a\sigma }<\eta \end{aligned}

which is true if we impose that

\begin{aligned} r_{0}< \frac{b\eta a \sigma }{2M({\varepsilon }_{0})^{2}c(\Omega ,\theta )}. \end{aligned}

Thus we achieve the second inequality of the lemma, and the proof is finished. $$\square$$

Now we are ready to prove the second condition in the Arzela–Ascoli type lemma.

### Lemma 18

Given $$\delta >0$$ there are constants $$r_0$$ and $${\varepsilon }_0$$ such that for every $${\varepsilon }< {\varepsilon }_0$$ and any $$x, y \in {\overline{\Omega }}$$ with $$|x - y | < r_0$$ it holds

\begin{aligned} |u^{\varepsilon }(x) - u^{\varepsilon }(y)|< \delta \qquad \hbox {and} \qquad |v^{\varepsilon }(x) - v^{\varepsilon }(y)| < \delta . \end{aligned}

### Proof

We deal with the estimate for $$u^{\varepsilon }$$. Recall that $$u^{{\varepsilon }}$$ is the value of the game playing in the first board (where we play Tug-of-War). The computations for $$v^{\varepsilon }$$ are similar.

First, we start with two close points x and y with $$y\not \in \Omega$$ and $$x\in \Omega$$. We have that $$u^{{\varepsilon }}(y)=\overline{f}(y)$$ for $$y\in \partial \Omega$$. Given $$\eta >0$$ we take a, $$r_{0}$$, $${\varepsilon }_{0}$$ and the strategy $$S^{*}_{I}$$ as in Lemma 16. Let

\begin{aligned} {A}=\Big \{\hbox {the position does not change board in the first } \ \Big \lceil \frac{a}{{\varepsilon }^{2}}\Big \rceil \hbox { plays and } \tau < \Big \lceil \frac{a}{{\varepsilon }^{2}}\Big \rceil \Big \}. \end{aligned}

We consider two cases.

1st case: We are going to show that $$u^{{\varepsilon }}(x_{0})-\overline{f}(y) \ge - {\mathcal {A}}(a,\eta )$$ with $${\mathcal {A}}(a,\eta )\searrow 0$$ as $$a\rightarrow 0$$ and $$\eta \rightarrow 0$$. We have

\begin{aligned} u^{{\varepsilon }}(x_{0})\ge \inf _{S_{II}}{{\mathbb {E}}}^{x_{0}}_{S^{*}_{I},S_{II}}[h(x_{\tau })]. \end{aligned}

Now

\begin{aligned} \begin{aligned} {{\mathbb {E}}}^{x_{0}}_{S^{*}_{I},S_{II}}[ h (x_{\tau })]&= {{\mathbb {E}}}^{x_{0}}_{S^{*}_{I},S_{II}}[h( x_{\tau })\arrowvert A]{{\mathbb {P}}}(A) + {{\mathbb {E}}}^{x_{0}}_{S^{*}_{I},S_{II}}[h( x_{\tau })\arrowvert A^{c}]{{\mathbb {P}}}(A^{c}) \\&\ge {{\mathbb {E}}}^{x_{0}}_{S^{*}_{I},S_{II}}[\overline{f}(x_{\tau })\arrowvert A]{{\mathbb {P}}}(A)-\max \{|\overline{f}|,|\overline{g}|\}{{\mathbb {P}}}(A^{c}). \end{aligned} \end{aligned}

Now we estimate $${{\mathbb {P}}}(A)$$ and $${{\mathbb {P}}}(A^{c})$$. We have that

\begin{aligned} {{\mathbb {P}}}(A^{c})\le {{\mathbb {P}}}\Big (\hbox {the game changes board before }\Big \lceil \frac{a}{{\varepsilon }^{2}}\Big \rceil \hbox { plays}\Big ) +{{\mathbb {P}}}\Big (\tau \ge \Big \lceil \frac{a}{{\varepsilon }^{2}}\Big \rceil \Big ). \end{aligned}

Hence we are left with two bounds. First, we have

\begin{aligned} {{\mathbb {P}}}\Big (\hbox {the game changes board before }\Big \lceil \frac{a}{{\varepsilon }^{2}}\Big \rceil \hbox { plays} \Big )=1-(1-{\varepsilon }^{2})^{\frac{a}{{\varepsilon }^{2}}} \le (1-e^{-a})+\eta \end{aligned}
(19)

for $${\varepsilon }$$ small enough. Here we are using that $$(1-{\varepsilon }^{2})^{\frac{a}{{\varepsilon }^{2}}}\nearrow e^{-a}$$.
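The monotone limit $$(1-{\varepsilon }^{2})^{a/{\varepsilon }^{2}}\nearrow e^{-a}$$ invoked here is easy to check numerically (taking a = 1 as an arbitrary choice):

```python
import math

a = 1.0
# evaluate (1 - eps^2)^(a / eps^2) for decreasing eps
vals = [(1 - e**2) ** (a / e**2) for e in (0.5, 0.3, 0.1, 0.01)]

# the sequence increases toward e^{-a} from below as eps -> 0
assert all(v1 < v2 for v1, v2 in zip(vals, vals[1:]))
assert all(v < math.exp(-a) for v in vals)
assert math.exp(-a) - vals[-1] < 1e-3
```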

Now, we observe that using Lemma 16 we get

\begin{aligned} {{\mathbb {P}}}\Big (\tau \ge \frac{a}{{\varepsilon }^{2}}\Big ) \le {{\mathbb {P}}}\Big (\tau \ge \frac{a}{{\varepsilon }_{0}^{2}}\Big )\le \eta , \end{aligned}
(20)

for $${\varepsilon }< {\varepsilon }_{0}$$. From (19) and (20) we obtain

\begin{aligned} {{\mathbb {P}}}(A^{c})\le (1-e^{-a})+\eta +\eta = (1-e^{-a})+2\eta \end{aligned}

and hence

\begin{aligned} {{\mathbb {P}}}(A) =1-{{\mathbb {P}}}(A^{c}) \ge 1-[(1-e^{-a})+2\eta ] . \end{aligned}

Then we obtain

\begin{aligned} {{\mathbb {E}}}^{x_{0}}_{S^{*}_{I},S_{II}}[h(x_{\tau })]&\ge {{\mathbb {E}}}^{x_{0}}_{S^{*}_{I},S_{II}}[\overline{f}(x_{\tau })\arrowvert A] (1-[(1-e^{-a})+2\eta ]) \nonumber \\&-\max \{|\overline{f}|,|\overline{g}|\}[(1-e^{-a})+2\eta ]. \end{aligned}
(21)

Let us analyze the expected value $${{\mathbb {E}}}^{x_{0}}_{S^{*}_{I},S_{II}}[\overline{f}(x_{\tau })\arrowvert A]$$. Again we need to consider two events,

\begin{aligned} A_{1}=A\cap \{ |x_{\tau }-y|< a \} \qquad \hbox {and} \qquad A_{2}=A\cap \{ |x_{\tau }-y|\ge a\}. \end{aligned}

We have that $$A=A_{1}\cup A_{2}$$. Then

\begin{aligned} {{\mathbb {E}}}^{x_{0}}_{S^{*}_{I},S_{II}}[\overline{f}(x_{\tau })\arrowvert A]={{\mathbb {E}}}^{x_{0}}_{S^{*}_{I},S_{II}}[\overline{f}(x_{\tau })\arrowvert A_{1}]{{\mathbb {P}}}(A_{1})+{{\mathbb {E}}}^{x_{0}}_{S^{*}_{I},S_{II}}[\overline{f}(x_{\tau })\arrowvert A_{2}]{{\mathbb {P}}}(A_{2}). \end{aligned}
(22)

Now we observe that

\begin{aligned} {{\mathbb {P}}}(A_{2})\le {{\mathbb {P}}}( |x_{\tau }-y|\ge a)\le \eta . \end{aligned}
(23)

To get a bound for the other case we observe that $$A_{1}^{c}=A^{c}\cup \{ |x_{\tau }-y|\ge a\}$$. Therefore

\begin{aligned} {{\mathbb {P}}}(A_{1})=1-{{\mathbb {P}}}(A_{1}^{c})\ge 1-[{{\mathbb {P}}}(A^{c})+{{\mathbb {P}}}(|x_{\tau }-y|\ge a)], \end{aligned}

and we arrive to

\begin{aligned} {{\mathbb {P}}}(A_{1})\ge 1-[(1-e^{-a})+2\eta +\eta ]=1-[(1-e^{-a})+3\eta ]. \end{aligned}
(24)

If we go back to (22) and use (24) and (23) we get

\begin{aligned} {{\mathbb {E}}}^{x_{0}}_{S^{*}_{I},S_{II}}[\overline{f}(x_{\tau })\arrowvert A]\ge {{\mathbb {E}}}^{x_{0}}_{S^{*}_{I},S_{II}}[\overline{f}(x_{\tau })\arrowvert A_{1}](1-[(1-e^{-a})+3\eta ])-\max \{|\overline{f}|\}\eta . \end{aligned}

Using that $$\overline{f}$$ is Lipschitz we obtain

\begin{aligned} \overline{f}(x_{\tau })\ge \overline{f}(y)-L|x_{\tau }-y|\ge \overline{f}(y)-La , \end{aligned}

and then we obtain (using that $$(\overline{f}(y)-La)$$ does not depend on the strategies)

\begin{aligned} {{\mathbb {E}}}^{x_{0}}_{S^{*}_{I},S_{II}}[\overline{f}(x_{\tau })\arrowvert A]\ge (\overline{f}(y)-La)(1-[(1-e^{-a})+3\eta ])-\max \{|\overline{f}|\}\eta . \end{aligned}

Here we remark that we can assume only that $$\overline{f}$$ has a uniform modulus of continuity $$\omega : [0, \infty ) \rightarrow [0, \infty )$$, i.e. an increasing function with $$\omega (0) = 0$$ such that

\begin{aligned} |f(x) - f(y)| \le \omega (|x - y|). \end{aligned}

Under this uniform modulus of continuity assumption the proof works similarly.

Recalling (21) we obtain

\begin{aligned} \begin{aligned} {{\mathbb {E}}}^{x_{0}}_{S^{*}_{I},S_{II}}[h ( x_{\tau })]&\ge ((\overline{f}(y)-La)(1-[(1-e^{-a})+3\eta ]) \\&-\max \{|\overline{f}|\}\eta ) (1-[(1-e^{-a})+2\eta ]) \\&-\max \{|\overline{f}|,|\overline{g}|\}[(1-e^{-a})+2\eta ]. \end{aligned} \end{aligned}

Notice that when $$\eta \rightarrow 0$$ and $$a\rightarrow 0$$ the right hand side goes to $$\overline{f}(y)$$, hence we have obtained

\begin{aligned} {{\mathbb {E}}}^{x_{0}}_{S^{*}_{I},S_{II}}[h( x_{\tau })]\ge \overline{f}(y)- {\mathcal {A}}(a,\eta ) \end{aligned}

with $${\mathcal {A}}(a,\eta )\rightarrow 0$$. Taking the infimum over all possible strategies $$S_{II}$$ we get

\begin{aligned} u^{{\varepsilon }}(x_{0})\ge \overline{f}(y)- {\mathcal {A}}(a,\eta ) \end{aligned}

with $${\mathcal {A}}(a,\eta )\rightarrow 0$$ as $$\eta \rightarrow 0$$ and $$a\rightarrow 0$$ as we wanted to show.

2nd case: Now we want to show that $$u^{{\varepsilon }}(x_{0})-\overline{f}(y)\le {\mathcal {B}}(a,\eta )$$ with $${\mathcal {B}}(a,\eta )\searrow 0$$ as $$\eta \rightarrow 0$$ and $$a\rightarrow 0$$. In this case we just use the strategy $$S^*$$ from Lemma 16 as the strategy for the second player $$S^{*}_{II}$$ and we obtain

\begin{aligned} u^{{\varepsilon }}(x_{0})\le \sup _{S_{I}}{{\mathbb {E}}}^{x_{0}}_{S_{I},S^{*}_{II}}[h(x_{\tau })]. \end{aligned}

Using again the set A that we considered in the previous case we obtain

\begin{aligned} {{\mathbb {E}}}^{x_{0}}_{S_{I},S^{*}_{II}}[ h( x_{\tau })]= {{\mathbb {E}}}^{x_{0}}_{S_{I},S^{*}_{II}}[\overline{f}( x_{\tau })\arrowvert A]{{\mathbb {P}}}(A)+ {{\mathbb {E}}}^{x_{0}}_{S_{I},S^{*}_{II}}[h( x_{\tau })\arrowvert A^{c}]{{\mathbb {P}}}(A^{c}). \end{aligned}

We have that $${{\mathbb {P}}}(A) \le 1$$ and $${{\mathbb {P}}}(A^{c})\le (1-e^{-a})+2\eta$$. Hence we get

\begin{aligned} {{\mathbb {E}}}^{x_{0}}_{S_{I},S^{*}_{II}}[h( x_{\tau })]\le {{\mathbb {E}}}^{x_{0}}_{S_{I},S^{*}_{II}}[\overline{f}( x_{\tau })\arrowvert A]+\max \{|\overline{f}|,|\overline{g}|\}[(1-e^{-a})+2\eta ]. \end{aligned}

To bound $${{\mathbb {E}}}^{x_{0}}_{S_{I},S^{*}_{II}}[\overline{f}(x_{\tau })\arrowvert A]$$ we will use again the sets $$A_{1}$$ and $$A_{2}$$ as in the previous case. We have

\begin{aligned} {{\mathbb {E}}}^{x_{0}}_{S_{I},S^{*}_{II}}[\overline{f}(x_{\tau })\arrowvert A]={{\mathbb {E}}}^{x_{0}}_{S_{I},S^{*}_{II}}[\overline{f}(x_{\tau })\arrowvert A_{1}]{{\mathbb {P}}}(A_{1})+{{\mathbb {E}}}^{x_{0}}_{S_{I},S^{*}_{II}}[\overline{f}(x_{\tau })\arrowvert A_{2}]{{\mathbb {P}}}(A_{2}). \end{aligned}

Now we use that $${{\mathbb {P}}}(A_{1}) \le 1$$ and $${{\mathbb {P}}}(A_{2})\le c\eta$$ to obtain

\begin{aligned} {{\mathbb {E}}}^{x_{0}}_{S_{I},S^{*}_{II}}[\overline{f}(x_{\tau })\arrowvert A]\le {{\mathbb {E}}}^{x_{0}}_{S_{I},S^{*}_{II}}[\overline{f}(x_{\tau })\arrowvert A_{1}]+\max \{|\overline{f}|\}\eta . \end{aligned}

Now for $${{\mathbb {E}}}^{x_{0}}_{S_{I},S^{*}_{II}}[\overline{f}(x_{\tau })\arrowvert A_{1}]$$ we use that $$\overline{f}$$ is Lipschitz to obtain

\begin{aligned} {{\mathbb {E}}}^{x_{0}}_{S_{I},S^{*}_{II}}[\overline{f}(x_{\tau })\arrowvert A]\le {{\mathbb {E}}}^{x_{0}}_{S_{I},S^{*}_{II}}[\overline{f}(y)+La\arrowvert A_{1}]+\max \{|\overline{f}|\}\eta . \end{aligned}

As $$(\overline{f}(y)+La)$$ does not depend on the strategies we have

\begin{aligned} {{\mathbb {E}}}^{x_{0}}_{S_{I},S^{*}_{II}}[\overline{f}(x_{\tau })\arrowvert A]\le (\overline{f}(y)+La)+\max \{|\overline{f}|\}\eta , \end{aligned}

and therefore we conclude that

\begin{aligned} {{\mathbb {E}}}^{x_{0}}_{S_{I},S^{*}_{II}}[h( x_{\tau } )]\le \overline{f}(y)+La+\max \{|\overline{f}|\}\eta + \max \{|\overline{f}|,|\overline{g}|\}[(1-e^{-a})+2\eta ]. \end{aligned}

We have proved that

\begin{aligned} {{\mathbb {E}}}^{x_{0}}_{S_{I},S^{*}_{II}}[ h( x_{\tau })]\le \overline{f}(y) + {\mathcal {B}}(a,\eta ) \end{aligned}

with $${\mathcal {B}}(a,\eta )\rightarrow 0$$. Taking supremum over the strategies for Player I we obtain

\begin{aligned} u^{{\varepsilon }}(x_{0})\le \overline{f}(y)+{\mathcal {B}}(a,\eta ) \end{aligned}

with $${\mathcal {B}}(a,\eta )\rightarrow 0$$ as $$\eta \rightarrow 0$$ and $$a\rightarrow 0$$.

Therefore, we conclude that

\begin{aligned} |u^{{\varepsilon }}(x_{0})-\overline{f}(y)|< \max \{{\mathcal {A}}(a,\eta ),{\mathcal {B}}(a,\eta )\}, \end{aligned}

that holds when $$y \not \in \Omega$$ and $$x_0$$ is close to y.

An analogous estimate holds for $$v^{\varepsilon }$$.

Now, given two points $$x_0$$ and $$z_0$$ inside $$\Omega$$ with $$|x_0-z_0|<r_0$$, we couple the game starting at $$x_0$$ with the game starting at $$z_0$$, making the same movements and also changing board simultaneously. This coupling generates two sequences of positions $$(x_i,j_i)$$ and $$(z_i,k_i)$$ such that $$|x_i - z_i|<r_0$$ and $$j_i=k_i$$ (since they change boards at the same time, both games are on the same board at every turn). This continues until one of the games exits the domain (say at $$x_\tau \not \in \Omega$$). At this point, the position $$z_\tau$$ of the game starting at $$z_0$$ is close to the exterior point $$x_\tau \not \in \Omega$$ (since $$|x_\tau - z_\tau |<r_0$$), and hence we can use our previous estimates for points close to the boundary to conclude that

\begin{aligned} |u^{{\varepsilon }}(x_{0})- u^{\varepsilon }(z_0)|< \delta , \qquad \hbox { and } \qquad |v^{{\varepsilon }}(x_{0})- v^{\varepsilon }(z_0)|< \delta . \end{aligned}

This ends the proof. $$\square$$

As a consequence, we have convergence of $$(u^{{\varepsilon }},v^{{\varepsilon }})$$ as $${\varepsilon }\rightarrow 0$$ along subsequences.

### Theorem 19

Let $$(u^{{\varepsilon }},v^{{\varepsilon }})$$ be solutions to the DPP. Then there exist a subsequence $${\varepsilon }_k \rightarrow 0$$ and a pair of functions (u, v), continuous in $${\overline{\Omega }}$$, such that

\begin{aligned} u^{{\varepsilon }_k} \rightarrow u, \qquad \hbox { and } \qquad v^{{\varepsilon }_k} \rightarrow v, \end{aligned}

uniformly in $${\overline{\Omega }}$$.

### Proof

Lemmas 15 and 18 imply that we can use the Arzela–Ascoli type lemma, Lemma 14. $$\square$$

## Existence of viscosity solutions

Now, we prove that any possible uniform limit of $$(u^{\varepsilon },v^{\varepsilon })$$ is a viscosity solution to the limit PDE problem (1).

### Theorem 20

Any uniform limit (u, v) of the values of the game $$(u^{\varepsilon },v^{\varepsilon })$$ is a viscosity solution to

\begin{aligned} \left\{ \begin{array}{ll} - {\frac{1}{2}}\Delta _{\infty }u(x) + u(x) - v(x)=0 \qquad &{} \ x \in \Omega , \\ - \frac{\kappa }{2} \Delta v(x) + v(x) - u(x)=0 \qquad &{} \ x \in \Omega , \\ u(x) = f(x) \qquad &{} \ x \in \partial \Omega , \\ v(x) = g(x) \qquad &{} \ x \in \partial \Omega . \end{array} \right. \end{aligned}

### Proof

Since $$u^{\varepsilon }=\overline{f}$$ and $$v^{\varepsilon }=\overline{g}$$ in $${{\mathbb {R}}}^N \setminus \Omega$$ we have that $$u = f$$ and $$v= g$$ on $$\partial \Omega$$.

Infinity Laplacian. Let us start by showing that u is a viscosity subsolution to

\begin{aligned} - {\frac{1}{2}}\Delta _{\infty }u(x)+u(x)-v(x)=0. \end{aligned}

Let $$x_{0} \in \Omega$$ and $$\phi \in {C}^{2}(\Omega )$$ such that $$u(x_{0})-\phi (x_{0})=0$$ and $$u-\phi$$ has an absolute maximum at $$x_{0}$$. Then, there exists a sequence $$(x_{{\varepsilon }})_{{\varepsilon }>0}$$ with $$x_{{\varepsilon }} \rightarrow x_{0}$$ as $${\varepsilon }\rightarrow 0$$ verifying

\begin{aligned} u^{{\varepsilon }}(y)-\phi (y)\le u^{{\varepsilon }}(x_{{\varepsilon }})-\phi (x_{{\varepsilon }})+ {\varepsilon }^{3}, \end{aligned}

(notice that we choose $${\varepsilon }^{3}$$ but here we can take any positive term as long as it is $$o({\varepsilon }^2)$$). Then we obtain

\begin{aligned} u^{{\varepsilon }}(y)-u^{{\varepsilon }}(x_{{\varepsilon }}) \le \phi (y)-\phi (x_{{\varepsilon }})+{\varepsilon }^{3}. \end{aligned}
(25)

Now, using the DPP, we get

\begin{aligned} u^{{\varepsilon }}(x_{{\varepsilon }})={\varepsilon }^{2}v^{{\varepsilon }}(x_{{\varepsilon }})+(1-{\varepsilon }^{2}) \left\{ {\frac{1}{2}}\sup _{y \in B_{{\varepsilon }}(x_{{\varepsilon }})}u^{{\varepsilon }}(y) + {\frac{1}{2}}\inf _{y \in B_{{\varepsilon }}(x_{{\varepsilon }})}u^{{\varepsilon }}(y)\right\} \end{aligned}

and hence

\begin{aligned} 0&= {\varepsilon }^{2}(v^{{\varepsilon }}(x_{{\varepsilon }})-u^{{\varepsilon }}(x_{{\varepsilon }}))+(1-{\varepsilon }^{2}) \\&\left\{ {\frac{1}{2}}\sup _{y \in B_{{\varepsilon }}(x_{{\varepsilon }})}(u^{{\varepsilon }}(y)-u^{{\varepsilon }}(x_{{\varepsilon }})) + {\frac{1}{2}}\inf _{y \in B_{{\varepsilon }}(x_{{\varepsilon }})}(u^{{\varepsilon }}(y)-u^{{\varepsilon }}(x_{{\varepsilon }}))\right\} . \end{aligned}

Using (25) and that $$\phi$$ is smooth we obtain

\begin{aligned} 0&\le {\varepsilon }^{2}(v^{{\varepsilon }}(x_{{\varepsilon }})-u^{{\varepsilon }}(x_{{\varepsilon }}))+(1-{\varepsilon }^{2}) \nonumber \\&\left\{ {\frac{1}{2}}\max _{y \in \overline{B}_{{\varepsilon }}(x_{{\varepsilon }})}(\phi (y)-\phi (x_{{\varepsilon }})) + {\frac{1}{2}}\min _{y \in \overline{B}_{{\varepsilon }}(x_{{\varepsilon }})}(\phi (y)-\phi (x_{{\varepsilon }}))\right\} +{\varepsilon }^{3}. \end{aligned}
(26)

Now, assume that $$\nabla \phi (x_0) \ne 0$$. Then, by continuity $$\nabla \phi \ne 0$$ in a ball $$B_{r}(x_{0})$$ for r small. In particular, we have $$\nabla \phi (x_{{\varepsilon }}) \ne 0$$. Call $$w_{{\varepsilon }}= \frac{\nabla \phi (x_{{\varepsilon }})}{|\nabla \phi (x_{{\varepsilon }}) |}$$ and let $$z_{{\varepsilon }}$$ with $$| z_{{\varepsilon }}|=1$$ be such that

\begin{aligned} \max _{y\in \partial B_{{\varepsilon }}(x_{{\varepsilon }})}\phi (y)=\phi (x_{{\varepsilon }}+{\varepsilon }z_{{\varepsilon }}). \end{aligned}

We have

\begin{aligned} \phi (x_{{\varepsilon }}+{\varepsilon }z_{{\varepsilon }})-\phi (x_{{\varepsilon }})&= {\varepsilon }\langle \nabla \phi (x_{{\varepsilon }}),z_{{\varepsilon }}\rangle +o({\varepsilon }) \le {\varepsilon }\langle \nabla \phi (x_{{\varepsilon }}),w_{{\varepsilon }}\rangle +o({\varepsilon }) \\&= \phi (x_{{\varepsilon }}+{\varepsilon }w_{{\varepsilon }})-\phi (x_{{\varepsilon }})+o({\varepsilon }). \end{aligned}

On the other hand

\begin{aligned} \phi (x_{{\varepsilon }}+{\varepsilon }w_{{\varepsilon }})-\phi (x_{{\varepsilon }})= {\varepsilon }\langle \nabla \phi (x_{{\varepsilon }}),w_{{\varepsilon }}\rangle +o({\varepsilon })\le \phi (x_{{\varepsilon }}+{\varepsilon }z_{{\varepsilon }})-\phi (x_{{\varepsilon }}). \end{aligned}

Therefore, we get

\begin{aligned} {\varepsilon }\langle \nabla \phi (x_{{\varepsilon }}),w_{{\varepsilon }}\rangle +o({\varepsilon })\le {\varepsilon }\langle \nabla \phi (x_{{\varepsilon }}),z_{{\varepsilon }}\rangle +o({\varepsilon })\le {\varepsilon }\langle \nabla \phi (x_{{\varepsilon }}),w_{{\varepsilon }}\rangle +o({\varepsilon }). \end{aligned}

Multiplying by $${\varepsilon }^{-1}$$ and taking the limit we arrive at

\begin{aligned} \langle \nabla \phi (x_{0}),w_{0}\rangle =\langle \nabla \phi (x_{0}),z_{0}\rangle \end{aligned}

with $$w_{0}=\frac{\nabla \phi (x_{0})}{|\nabla \phi (x_{0}) |}$$ and we conclude that

\begin{aligned} z_{0}=w_{0}=\frac{\nabla \phi (x_{0})}{|\nabla \phi (x_{0}) |}. \end{aligned}

Going back to (26) we obtain

\begin{aligned} 0&\le {\varepsilon }^{2}(v^{{\varepsilon }}(x_{{\varepsilon }})-u^{{\varepsilon }}(x_{{\varepsilon }}))+(1-{\varepsilon }^{2})\nonumber \\&\left\{ {\frac{1}{2}}(\phi (x_{{\varepsilon }}+{\varepsilon }z_{{\varepsilon }})-\phi (x_{{\varepsilon }})) + {\frac{1}{2}}(\phi (x_{{\varepsilon }}-{\varepsilon }z_{{\varepsilon }})-\phi (x_{{\varepsilon }}))\right\} +{\varepsilon }^{3}. \end{aligned}
(27)

Making Taylor expansions we get

\begin{aligned} \left\{ {\frac{1}{2}}(\phi (x_{{\varepsilon }}+{\varepsilon }z_{{\varepsilon }})-\phi (x_{{\varepsilon }})) + {\frac{1}{2}}(\phi (x_{{\varepsilon }}-{\varepsilon }z_{{\varepsilon }})-\phi (x_{{\varepsilon }}))\right\} = {\frac{1}{2}}{\varepsilon }^{2} \langle D^{2}\phi (x_{{\varepsilon }})z_{{\varepsilon }},z_{{\varepsilon }}\rangle + \textit{o}({\varepsilon }^{2}). \end{aligned}
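The cancellation of the first-order terms in this symmetric second difference can be verified numerically with any smooth test function (the function, point, and unit direction below are arbitrary choices):

```python
import numpy as np

phi = lambda x: np.sin(x[0]) + x[0] * x[1]                 # smooth test function
hess = lambda x: np.array([[-np.sin(x[0]), 1.0], [1.0, 0.0]])  # its Hessian

x = np.array([0.3, 0.7])
z = np.array([0.6, 0.8])   # unit direction, |z| = 1
eps = 1e-3

# symmetric second difference: first-order terms cancel exactly
second_diff = 0.5 * (phi(x + eps * z) - phi(x)) + 0.5 * (phi(x - eps * z) - phi(x))
expected = 0.5 * eps**2 * z @ hess(x) @ z
assert abs(second_diff - expected) < eps**3   # remainder is o(eps^2)
```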

Then, from (27),

\begin{aligned} 0 \le v^{{\varepsilon }}(x_{{\varepsilon }})-u^{{\varepsilon }}(x_{{\varepsilon }})+(1-{\varepsilon }^{2}) {\frac{1}{2}}\langle D^{2}\phi (x_{{\varepsilon }})z_{{\varepsilon }},z_{{\varepsilon }}\rangle + \frac{\textit{o}({\varepsilon }^{2})}{{\varepsilon }^{2}}, \end{aligned}

and taking the limit as $${\varepsilon }\rightarrow 0$$ we get

\begin{aligned} 0 \le v(x_{0})-u(x_{0})+ {\frac{1}{2}}\langle D^{2}\phi (x_{0})w_{0},w_{0}\rangle , \end{aligned}

that is,

\begin{aligned} -{\frac{1}{2}}\Delta _{\infty }\phi (x_{0}) + u(x_{0})-v(x_{0}) \le 0. \end{aligned}

Now, if $$\nabla \phi (x_{0}) =0$$ we have to use the upper and lower semicontinuous envelopes of the equation (notice that $$\Delta _\infty u$$ is not well defined when $$\nabla u=0$$). For a symmetric matrix $$M \in {{\mathbb {R}}}^{N\times N}$$ and $$\xi \in {{\mathbb {R}}}^{N}$$, we define

\begin{aligned} F_1 (\xi , M) = \left\{ \begin{array}{ll} -\left\langle M \frac{\xi }{|\xi |} ; \frac{\xi }{|\xi |} \right\rangle \qquad &{} \xi \ne 0 \\ 0 \qquad &{} \xi = 0 \end{array} \right. \end{aligned}

The semicontinuous envelopes of $$F_1$$ are defined as

\begin{aligned} F_1^{*}(\xi , M) = \left\{ \begin{array}{ll} -\Big \langle M \frac{\xi }{|\xi |} ; \frac{\xi }{|\xi |} \Big \rangle \qquad &{} \xi \ne 0 \\ \max \Big \{ \limsup _{\eta \rightarrow 0}-\langle M \frac{\eta }{|\eta |} ; \frac{\eta }{|\eta |} \rangle ; 0 \Big \} \qquad &{} \xi = 0. \end{array} \right. \end{aligned}

and

\begin{aligned} F_{1,*}(\xi , M) = \left\{ \begin{array}{ll} -\Big \langle M \frac{\xi }{ | \xi |} ; \frac{\xi }{|\xi |} \Big \rangle \qquad &{} \xi \ne 0 \\ \min \Big \{ \liminf _{\eta \rightarrow 0}-\langle M \frac{\eta }{|\eta |} ; \frac{\eta }{|\eta |} \rangle ; 0 \Big \} \qquad &{} \xi = 0. \end{array} \right. \end{aligned}

Now, we just remark that

\begin{aligned} -\max _{1 \le i \le N} \{ \lambda _{i} \} \le - \left\langle M\frac{\xi }{|\xi |},\frac{\xi }{|\xi |}\right\rangle \le -\min _{1\le i \le N} \{ \lambda _{i} \} \end{aligned}

and hence we obtain

\begin{aligned} F_1^{*}(\xi , M) = \left\{ \begin{array}{ll} -\Big \langle M \frac{\xi }{ |\xi |} ; \frac{\xi }{ |\xi |} \Big \rangle \qquad &{} \xi \ne 0 \\ \max \Big \{ - \min _{1\le i\le N}\{\lambda _{i}\} ; 0 \Big \} \qquad &{} \xi = 0. \end{array} \right. \end{aligned}

and

\begin{aligned} F_{1,*}(\xi , M) = \left\{ \begin{array}{ll} -\Big \langle M \frac{\xi }{|\xi |} ; \frac{\xi }{|\xi |} \Big \rangle \qquad &{} \xi \ne 0 \\ \min \Big \{ -\max _{1\le i\le N}\{\lambda _{i}\}; 0 \Big \} \qquad &{} \xi = 0. \end{array} \right. \end{aligned}
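The eigenvalue bounds used to compute these envelopes are the standard Rayleigh quotient estimates for a symmetric matrix; a quick numerical sanity check (with an arbitrary symmetric matrix):

```python
import numpy as np

rng = np.random.default_rng(3)
M = rng.normal(size=(4, 4))
M = (M + M.T) / 2              # symmetrize
lam = np.linalg.eigvalsh(M)    # eigenvalues in ascending order

for _ in range(1000):
    v = rng.normal(size=4)
    v /= np.linalg.norm(v)     # unit vector xi / |xi|
    q = -v @ M @ v             # -<M xi/|xi|, xi/|xi|>
    # -lambda_max <= q <= -lambda_min
    assert -lam[-1] - 1e-9 <= q <= -lam[0] + 1e-9
```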

Now, let us go back to the proof and show that

\begin{aligned} {\frac{1}{2}}F_{1,*}(0,D^{2}\phi (x_{0}))+u(x_{0})-v(x_{0}) \le 0. \end{aligned}

As before, we have a sequence $$(x_{{\varepsilon }})_{{\varepsilon }>0}$$ such that $$x_{{\varepsilon }} \rightarrow x_{0}$$ and

\begin{aligned} u^{{\varepsilon }}(y)-u^{{\varepsilon }}(x_{{\varepsilon }}) \le \phi (y)-\phi (x_{{\varepsilon }})+{\varepsilon }^{3} \end{aligned}
(28)

Using the DPP, that $$\phi$$ is smooth and (28) we obtain

\begin{aligned} 0&\le (1-{\varepsilon }^{2}) \Big \{ {\frac{1}{2}}\max _{\overline{B_{{\varepsilon }}(x_{{\varepsilon }})}}(\phi (y)-\phi (x_{{\varepsilon }}))+{\frac{1}{2}}\min _{\overline{B_{{\varepsilon }}(x_{{\varepsilon }})}}(\phi (y)-\phi (x_{{\varepsilon }})) \Big \} \\&+{\varepsilon }^{2}(v^{{\varepsilon }}(x_{{\varepsilon }})-u^{{\varepsilon }}(x_{{\varepsilon }})) +{\varepsilon }^{3} . \end{aligned}

Let $$w_{{\varepsilon }} \in \overline{B_{{\varepsilon }}(x_{{\varepsilon }})}$$ be such that

\begin{aligned} \phi (w_{{\varepsilon }})-\phi (x_{{\varepsilon }})=\max _{\overline{B_{{\varepsilon }}(x_{{\varepsilon }})}}(\phi (y)-\phi (x_{{\varepsilon }})) . \end{aligned}

Let $$\overline{w_{{\varepsilon }}}$$ be the point symmetric to $$w_{{\varepsilon }}$$ with respect to $$x_{{\varepsilon }}$$ in the ball $$B_{{\varepsilon }}(x_{{\varepsilon }})$$, that is, $$\overline{w_{{\varepsilon }}} = 2x_{{\varepsilon }} - w_{{\varepsilon }}$$. Then we obtain

\begin{aligned} 0 \le (1-{\varepsilon }^{2}) \Big \{ {\frac{1}{2}}(\phi (w_{{\varepsilon }})-\phi (x_{{\varepsilon }}))+{\frac{1}{2}}(\phi (\overline{w_{{\varepsilon }}})-\phi (x_{{\varepsilon }})) \Big \} +{\varepsilon }^{2}(v^{{\varepsilon }}(x_{{\varepsilon }})-u^{{\varepsilon }}(x_{{\varepsilon }})) . \end{aligned}

Using again Taylor’s expansions

\begin{aligned} 0 \le (1-{\varepsilon }^{2}) {\frac{1}{2}}\langle D^{2}\phi (x_{{\varepsilon }}) \frac{(w_{{\varepsilon }}-x_{{\varepsilon }})}{{\varepsilon }}, \frac{(w_{{\varepsilon }}-x_{{\varepsilon }})}{{\varepsilon }} \rangle +v^{{\varepsilon }}(x_{{\varepsilon }})-u^{{\varepsilon }}(x_{{\varepsilon }}) + o(1) . \end{aligned}

If for a sequence $${\varepsilon }\rightarrow 0$$ we have

\begin{aligned} \left| \frac{(w_{{\varepsilon }}- x_{{\varepsilon }})}{{\varepsilon }} \right| =1, \end{aligned}

then, extracting a subsequence if necessary, we have $$z\in {{\mathbb {R}}}^{N}$$ with $$|z| =1$$ such that

\begin{aligned} \frac{(w_{{\varepsilon }}- x_{{\varepsilon }})}{{\varepsilon }} \rightarrow z. \end{aligned}

Passing to the limit we get

\begin{aligned} 0 \le {\frac{1}{2}}\langle D^2 \phi (x_{0}) z , z \rangle +v(x_{0})-u(x_{0}) . \end{aligned}

Then, since $$\langle D^2 \phi (x_{0}) z , z \rangle \le \max _{1\le i\le N}\{\lambda _{i}\}$$, where $$\lambda _{1},\ldots ,\lambda _{N}$$ are the eigenvalues of $$D^{2}\phi (x_{0})$$, we get

\begin{aligned} -{\frac{1}{2}}\max _{1\le i\le N}\{\lambda _{i}\}+u(x_{0})-v(x_{0}) \le 0, \end{aligned}

that is, $${\frac{1}{2}}F_{1,*}(0,D^{2}\phi (x_{0}))+u(x_{0})-v(x_{0})\le 0$$.

Now, if we have

\begin{aligned} \left| \frac{(w_{{\varepsilon }}- x_{{\varepsilon }})}{{\varepsilon }} \right| <1 \end{aligned}

for $${\varepsilon }$$ small, we just observe that at those points $$w_{{\varepsilon }}$$ is an interior maximum of $$\phi$$ over the ball, so $$D^2 \phi (w_{{\varepsilon }})$$ is negative semidefinite. Hence, passing to the limit we obtain that $$D^2 \phi (x_0)$$ is also negative semidefinite and then every eigenvalue of $$D^2 \phi (x_0)$$ is less than or equal to 0. We conclude that

\begin{aligned} F_{1,*}(0,D^2 \phi (x_{0}))=\min \{ -\max _{1\le i \le N}\{ \lambda _{i}\};0\}=0. \end{aligned}

Moreover, for $${\varepsilon }$$ small we have that $$\langle D^{2}\phi (x_{{\varepsilon }})\frac{(w_{{\varepsilon }}-x_{{\varepsilon }})}{{\varepsilon }},\frac{(w_{{\varepsilon }}-x_{{\varepsilon }})}{{\varepsilon }}\rangle \le 0$$. Then,

\begin{aligned} 0\le v^{{\varepsilon }}(x_{{\varepsilon }})-u^{{\varepsilon }}(x_{{\varepsilon }}) + o(1). \end{aligned}

Taking the limit as $${\varepsilon }\rightarrow 0$$ we obtain

\begin{aligned} u(x_{0})-v(x_{0})\le 0. \end{aligned}

Therefore we arrive at

\begin{aligned} {\frac{1}{2}}F_{1,*}(0,D^{2}\phi (x_{0}))+u(x_{0})-v(x_{0})\le 0, \end{aligned}

that is what we wanted to show.

The fact that u is a supersolution can be proved in an analogous way. In this case we need to show that

\begin{aligned} {\frac{1}{2}}F_1^{*}(\nabla \phi (x_{0}),D^{2}\phi (x_{0}))+u(x_{0})-v(x_{0})\ge 0, \end{aligned}

for $$x_{0} \in \Omega$$ and $$\phi \in {C}^{2}(\Omega )$$ such that $$u(x_{0})-\phi (x_{0})=0$$ and $$u-\phi$$ has a strict minimum at $$x_{0}$$.

Now let us deal with the equation involving the Laplacian and show that v is a viscosity solution to

\begin{aligned} -\frac{\kappa }{2}\Delta v(x)+v(x)-u(x)=0. \end{aligned}

Let us start by showing that v is a subsolution. Let $$\psi \in C^{2}(\Omega )$$ be such that $$v(x_{0})-\psi (x_{0})=0$$ and $$v-\psi$$ has a maximum at $$x_{0} \in \Omega$$. As before, we have the existence of a sequence $$(x_{{\varepsilon }})_{{\varepsilon }>0}$$ such that $$x_{{\varepsilon }} \rightarrow x_{0}$$ and

\begin{aligned} v^{{\varepsilon }}(y)-v^{{\varepsilon }}(x_{{\varepsilon }}) \le \psi (y)-\psi (x_{{\varepsilon }})+{\varepsilon }^{3}. \end{aligned}

Therefore, from the DPP, we obtain

\begin{aligned} 0\le (u^{{\varepsilon }}(x_{{\varepsilon }})-v^{{\varepsilon }}(x_{{\varepsilon }}))+(1-{\varepsilon }^{2}) \frac{1}{{\varepsilon }^{2}}\fint _{B_{{\varepsilon }}(x_{{\varepsilon }})}(\psi (y)-\psi (x_{{\varepsilon }}))dy. \end{aligned}

From Taylor’s expansions we obtain

\begin{aligned} \frac{1}{{\varepsilon }^{2}}\fint _{B_{{\varepsilon }}(x_{{\varepsilon }})}(\psi (y)-\psi (x_{{\varepsilon }}))dy =\frac{\kappa }{2} \sum \limits _{j=1}^{N} \partial _{x_{j}x_{j}}\psi (x_{{\varepsilon }}) + o(1)=\frac{\kappa }{2}\Delta \psi (x_{{\varepsilon }}) + o(1), \end{aligned}

with $$\kappa = \frac{1}{|B_{1}(0)|}\int _{B_{1}(0)}z_{j}^{2}dz=\frac{1}{N+2}$$ (after the change of variables $$y=x_{{\varepsilon }}+{\varepsilon }z$$ the factors $${\varepsilon }^{N}$$ cancel). Taking limits as $${\varepsilon }\rightarrow 0$$ we get

\begin{aligned} -\frac{\kappa }{2}\Delta \psi (x_{0})+v(x_{0})-u(x_{0}) \le 0. \end{aligned}

The fact that v is a supersolution is similar. $$\square$$
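The constant $$\kappa$$ is the average of $$z_j^2$$ over the unit ball, which equals $$1/(N+2)$$. A quick Monte Carlo sketch confirming this for N = 3 (the dimension, sample size and seed are arbitrary choices):

```python
import numpy as np

# Monte Carlo check that (1/|B_1|) * \int_{B_1} z_j^2 dz = 1/(N+2).
rng = np.random.default_rng(0)
N = 3
pts = rng.uniform(-1.0, 1.0, size=(500_000, N))   # sample the cube [-1,1]^N
pts = pts[np.linalg.norm(pts, axis=1) <= 1.0]     # keep the points in the unit ball

kappa_mc = (pts[:, 0] ** 2).mean()                # empirical average of z_1^2
print(abs(kappa_mc - 1.0 / (N + 2)) < 0.01)       # True: both are about 0.2
```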

## Uniqueness for viscosity solutions

Our goal is to show uniqueness for viscosity solutions to our system (1). To this end we follow ideas from [3, 27] (see also  for uniqueness results concerning the infinity Laplacian). This uniqueness result implies that the whole sequence $$u^{\varepsilon },v^{\varepsilon }$$ converges as $${\varepsilon }\rightarrow 0$$. The main idea behind the proof (as in [3, 27]) is to make a change of variables $$U =\psi (u)$$, $$V = \psi (v)$$ which transforms our system (1) into a system in which both equations are coercive in their respective variables U and V when $$DU \ne 0$$ and $$DV\ne 0$$. Next we use the fact that one can take $$\psi$$ as close to the identity as we want.

First, we state the Hopf Lemma. We only state the result for supersolutions (the result for subsolutions is the same with the obvious changes).

### Lemma 21

Let $${\mathcal {V}}$$ be an open set with $$\overline{{\mathcal {V}}} \subset \Omega$$. Let (u, v) be a viscosity supersolution of (1) and assume that there exists $$x_0 \in \partial {\mathcal {V}}$$ such that

\begin{aligned} u (x_0) = \min \Big \{ \min _\Omega u(x); \min _\Omega v(x) \Big \} \qquad \hbox {and} \qquad u (x_0) < \min _{x \in {\mathcal {V}}} \min \{u(x),v(x)\}. \end{aligned}

Assume further that $${\mathcal {V}}$$ satisfies the interior ball condition at $$x_0$$, namely, there exists an open ball $$B_R \subset {\mathcal {V}}$$ with $$x_0 \in \partial B_R$$. Then,

\begin{aligned} \liminf _{s\rightarrow 0} \frac{ u(x_0 - s \nu (x_0)) - u(x_0)}{s} > 0, \end{aligned}

where $$\nu (x_0)$$ is the outward normal vector to $$\partial B_R$$ at $$x_0$$.

### Remark 22

An analogous statement holds for the second component of the system, v: if we have that

\begin{aligned} v (x_0) = \min \Big \{ \min _\Omega u(x); \min _\Omega v(x) \Big \} \qquad \hbox {and} \qquad v (x_0) < \min _{x \in {\mathcal {V}}} \min \{u(x),v(x)\}, \end{aligned}

then we have

\begin{aligned} \liminf _{s\rightarrow 0} \frac{ v(x_0 - s \nu (x_0)) - v(x_0)}{s} > 0. \end{aligned}

### Proof of Lemma 21

See the Appendix in . In fact one can take

\begin{aligned} w(x):=e^{-\alpha |x|^2} - e^{-\alpha R^2} \end{aligned}

and show that w is a strict subsolution of either of the two equations in (1) in the annulus $$\{x : R/2<|x|<R\}$$. $$\square$$

The Strong Maximum Principle follows from the Hopf Lemma.

### Theorem 23

Let $$(u, v)$$ be a viscosity supersolution of (1). Assume that $$\min _\Omega \min \{ u, v\}$$ is attained at an interior point of $$\Omega$$. Then $$u = v = C$$ for some constant C in the whole of $$\Omega$$.

### Proof

Again we refer to the Appendix in . $$\square$$

Now we can proceed with the proof of the Comparison Principle.

### Theorem 24

Assume that $$(u_1, v_1)$$ and $$(u_2, v_2)$$ are a bounded viscosity subsolution and a bounded viscosity supersolution of (1), respectively, and also assume that $$u_1 \le u_2$$ and $$v_1 \le v_2$$ on $$\partial \Omega$$. Then

\begin{aligned} u_1 \le u_2 \qquad \hbox {and} \qquad v_1 \le v_2, \end{aligned}

in $$\Omega$$.

This comparison result implies the desired uniqueness for (1).

### Corollary 25

There exists a unique viscosity solution to (1).

### Proof of Theorem 24

We argue by contradiction and assume that

\begin{aligned} c := \max \Big \{ \max _\Omega (u_1(x) -u_2(x)) ; \max _\Omega (v_1(x) -v_2(x)) \Big \} > 0. \end{aligned}

We replace $$(u_1, v_1)$$ by $$(u_1 - c/2, v_1- c/2)$$. We may assume further that $$u_1$$ and $$v_1$$ are semi-convex and $$u_2$$ and $$v_2$$ are semi-concave by using sup and inf convolutions and restricting the problem to a slightly smaller domain if necessary (see  for extra details). We now perturb $$u_1$$ and $$v_1$$ as follows. For $$\alpha > 0$$, take $$\Omega _\alpha := \{x \in \Omega : dist(x,\partial \Omega ) > \alpha \}$$ and for |h| sufficiently small, define

\begin{aligned} \begin{aligned} M(h)&:=\max \Big \{ \max _{x \in \Omega } (u_1(x+h)- u_2(x)) ; \max _{x \in \Omega } (v_1(x+h)- v_2(x))\Big \} \\&= w_1(x_h+h)- w_2(x_h) \end{aligned} \end{aligned}

for $$w=u \hbox { or } v$$ (we will call w the component at which the maximum is achieved) and some $$x_h \in \Omega _{|h|}$$. Since $$M(0) > 0$$, for |h| small enough, we have $$M(h)>0$$ and the above maximum is the same if we take it over $$\Omega _\alpha$$ for any $$\alpha >0$$ sufficiently small and fixed. Note that from the equations we get that at $$x_h$$ we have

\begin{aligned} u_1(x_h + h) - u_2(x_h) = v_1(x_h + h) - v_2(x_h). \end{aligned}

Now, we claim that there exists a sequence $$h_n \rightarrow 0$$ such that, at any maximum point $$y \in \Omega _{|h_n|}$$ of

\begin{aligned} \max \Big \{ \max _{x\in \Omega _{|h_n|}} (u_1(x + h_n) - u_2(x)) ; \max _{x\in \Omega _{|h_n|}} (v_1(x + h_n) - v_2(x))\Big \}, \end{aligned}

we have (recall that we called w the component at which the maximum is achieved, $$w=u \hbox { or } v$$)

\begin{aligned} Dw_1(y + h_n) = Dw_2(y) \ne 0 \end{aligned}

for $$n \in {\mathbb {N}}$$. To prove this claim we argue again by contradiction and assume that there exists, for each h with |h| small, $$x_h$$ which is a maximum point so that $$Dw_1 (x_h + h) = Dw_2 (x_h) = 0$$. As $$u_1 - u_2$$ and $$v_1-v_2$$ are semi-convex, M(h) is semi-convex for h small. Now for any k close to h, one has that, thanks to the fact that $$Dw_1 (x_h + h) = 0$$,

\begin{aligned} M(k) \ge w_1 (x_h+k)- w_2(x_h) \ge w_1(x_h+h)-C|h-k|^2-w_2(x_h) = M(h)-C|h-k|^2. \end{aligned}

Thus, $$0 \in \partial M(h)$$ for every |h| small (here we denoted by $$\partial M(h)$$ the sub-differential of M at h). This implies that $$M(h) = M(0)$$ for |h| small.

Now take $$x_0 \in \Omega$$ a maximum point of $$\max \{ \max _{x\in \Omega } (u_1(x) - u_2(x)); \max _{x\in \Omega } (v_1(x) - v_2(x))\}$$. For |h| sufficiently small we have that $$x_0 \in \Omega _{|h|}$$, and, $$w_1(x_0) - w_2(x_0) = M(0) = M(h) \ge w_1(x_0 + h) - w_2(x_0)$$. Hence, $$x_0$$ is a local maximum of $$w_1$$. Now, the strong maximum principle, Theorem 23, implies that $$u_1$$, $$v_1$$ are constant in $$\Omega$$, which gives the desired contradiction and proves the claim.

Now we recall that for a semi-convex function a and a semi-concave function b we have that both a and b are differentiable at any local maximum points of $$b-a$$ and if the function a (or b) is differentiable at $$x_0$$ and $$\{x_n\}$$ is a sequence of differentiable points such that $$x_n \rightarrow x_0$$, then $$Da(x_n)\rightarrow Da(x_0)$$ (or $$Db(x_n)\rightarrow Db(x_0)$$). Then, thanks to these properties and the previous claim, we have the existence of a positive constant $$\delta (n) > 0$$ so that $$|Dw_1(y+h_n)| = |Dw_2(y)| > \delta (n)$$ for all y such that the maximum in the claim is attained.

Now we consider, as in , the functions $$\varphi _{\varepsilon }$$ defined by

\begin{aligned} \varphi _{\varepsilon }' (t) = \exp \left( \int _0^t \exp \Big ( - \frac{1}{{\varepsilon }} (s- \frac{1}{{\varepsilon }}) \Big ) ds\right) . \end{aligned}

These functions $$\varphi _{\varepsilon }$$ are close to the identity: $$\varphi _{\varepsilon }' > 0$$, $$\varphi _{\varepsilon }'$$ converges to 1 as $${\varepsilon }\rightarrow 0$$ and $$\varphi _{\varepsilon }''$$ converges to 0 as $${\varepsilon }\rightarrow 0$$, with $$(\varphi _{\varepsilon }''(s))^2 > \varphi _{\varepsilon }''' (s) \varphi _{\varepsilon }' (s)$$, see .
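The inequality $$(\varphi _{\varepsilon }'')^2 > \varphi _{\varepsilon }''' \varphi _{\varepsilon }'$$ can be checked directly from the definition; a quick symbolic sketch with sympy (the evaluation point $$t=1/2$$, $${\varepsilon }=1$$ is an arbitrary choice):

```python
import sympy as sp

t, s, eps = sp.symbols('t s epsilon', positive=True)

# phi_eps' as defined above
phi_p = sp.exp(sp.integrate(sp.exp(-(s - 1 / eps) / eps), (s, 0, t)))
phi_pp = sp.diff(phi_p, t)
phi_ppp = sp.diff(phi_p, t, 2)

# A direct computation gives
# (phi'')^2 - phi''' phi' = (phi')^2 * exp(-(t - 1/eps)/eps) / eps > 0.
gap = phi_pp**2 - phi_ppp * phi_p
print(float(gap.subs({t: sp.Rational(1, 2), eps: 1})) > 0)   # True
```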

With $$\psi _{\varepsilon }= \varphi _{\varepsilon }^{-1}$$ we perform the changes of variables

\begin{aligned} U_{i}^{\varepsilon }=\psi _{\varepsilon }(u_i), \qquad V_i^{\varepsilon }= \psi _{\varepsilon }(v_i), \qquad i=1,2. \end{aligned}

It is easy to see that $$U^{\varepsilon }_1$$, $$V^{\varepsilon }_1$$ are semi-convex and $$U^{\varepsilon }_2$$, $$V^{\varepsilon }_2$$ are semi-concave. We have that $$\max \{ \max _x (U_1^{\varepsilon }(x+h_n)- U_2^{\varepsilon }(x)) ; \max _x (V_1^{\varepsilon }(x+h_n)- V_2^{\varepsilon }(x)) \}$$ is achieved at some point $$x_{\varepsilon }$$ and, by passing to a subsequence if necessary, $$x_{\varepsilon }\rightarrow x_{h_n}$$ as $${\varepsilon }\rightarrow 0$$. Since we have $$|Dw_1(x_{h_n} + h_n)| = |Dw_2 (x_{h_n})| > \delta (n)$$, we deduce that for $${\varepsilon }$$ sufficiently small, it holds that $$|DW_1^{\varepsilon }(x_{\varepsilon }+ h_n)| = |DW_2^{\varepsilon }(x_{\varepsilon })| \ge \delta (n)/2$$.

Now, omitting the dependence on $${\varepsilon }$$ in what follows, we observe that, after the change of variables

\begin{aligned} u_1 = \varphi (U_1), \qquad v_1 = \varphi (V_1) \end{aligned}

the pair of new unknowns $$(U_1,V_1)$$ verifies the equations (in the viscosity sense)

\begin{aligned} \begin{aligned} 0&=- {\frac{1}{2}}\Delta _{\infty }u_1(x) + u_1(x) - v_1(x) \\&= - \frac{1}{2} \varphi ' (U_1)\Delta _\infty U_1(x) - \frac{1}{2} \varphi ''(U_1)|DU_1|^2 (x) + \varphi (U_1(x)) - \varphi (V_1 (x))\\&= \varphi ' (U_1) \Big ( - \frac{1}{2} \Delta _\infty U_1(x) - \frac{1}{2} \frac{\varphi ''(U_1)}{\varphi ' (U_1)}|DU_1|^2 (x) + \frac{\varphi (U_1(x)) - \varphi (V_1 (x))}{\varphi ' (U_1)}\Big ), \end{aligned} \end{aligned}

and

\begin{aligned} \begin{aligned} 0&= - \frac{\kappa }{2} \Delta v_1(x) + v_1(x) - u_1(x) \\&= - \frac{\kappa }{2} \Big ( \varphi ' (V_1)\Delta V_1(x) + \varphi ''(V_1)|DV_1|^2 (x) \Big ) + \varphi (V_1 (x)) - \varphi (U_1 (x) ) \\&= \varphi ' (V_1) \Big (- \frac{\kappa }{2} \Delta V_1(x) - \frac{\kappa }{2} \frac{\varphi ''(V_1)}{\varphi ' (V_1)} |DV_1|^2 (x) + \frac{\varphi (V_1 (x)) - \varphi (U_1 (x) )}{ \varphi ' (V_1)} \Big ), \end{aligned} \end{aligned}

and similar equations also hold for $$(U_2,V_2)$$.

Since $$|DW_1^{\varepsilon }(x_{\varepsilon }+ h_n)| = |DW_2^{\varepsilon }(x_{\varepsilon })| \ge \delta (n)/2$$, this system is strictly monotone (the first equation is monotone in $$U_1$$ and the second in $$V_1$$). Here we use that $$(\varphi _{\varepsilon }''(s))^2 > \varphi _{\varepsilon }''' (s) \varphi _{\varepsilon }' (s)$$, which implies that $$\varphi _{\varepsilon }'' /\varphi _{\varepsilon }'$$ is decreasing, i.e. $$-(\varphi _{\varepsilon }'' /\varphi _{\varepsilon }')' >0$$. Thus, from the strict monotonicity, we get the desired contradiction. See the proof of , Lemma 3.1, for a more detailed discussion. $$\square$$

## Possible extensions of our results

In this section we gather some comments on more general systems that can be studied using the same techniques.

### Coefficients with spatial dependence

We can look at the case in which the probability of jumping from one board to the other depends on the spatial location, that is, we can take the probability of jumping from board 1 to 2 as $$a(x) {\varepsilon }^2$$ and from 2 to 1 as $$b(x) {\varepsilon }^2$$, for two given nonnegative functions a(x), b(x). In this case the DPP is given by

\begin{aligned} \left\{ \begin{array}{ll} u^{{\varepsilon }}(x)= a(x) {\varepsilon }^{2}v^{{\varepsilon }}(x)+(1- a(x) {\varepsilon }^{2})\Big \{{\frac{1}{2}}\sup _{y \in B_{{\varepsilon }}(x)}u^{{\varepsilon }}(y) + {\frac{1}{2}}\inf _{y \in B_{{\varepsilon }}(x)}u^{{\varepsilon }}(y) \Big \} \qquad &{} \ x \in \Omega , \\ v^{{\varepsilon }}(x)= b(x) {\varepsilon }^{2}u^{{\varepsilon }}(x)+(1- b(x) {\varepsilon }^{2})\fint _{B_{{\varepsilon }}(x)}v^{{\varepsilon }}(y)dy \qquad &{} \ x \in \Omega , \\ u^{{\varepsilon }}(x) = \overline{f}(x) \qquad &{} \ x \in {{\mathbb {R}}}^{N} \backslash \Omega , \\ v^{{\varepsilon }}(x) = \overline{g}(x) \qquad &{} \ x \in {{\mathbb {R}}}^{N} \backslash \Omega . \end{array} \right. \end{aligned}

and the limit system is

\begin{aligned} \left\{ \begin{array}{ll} - {\frac{1}{2}}\Delta _{\infty }u(x) + a(x) u(x) - a(x) v(x)=0 \qquad &{} \ x \in \Omega , \\ - \frac{\kappa }{2} \Delta v(x) + b(x) v(x) - b(x) u(x)=0 \qquad &{} \ x \in \Omega , \\ u(x) = f(x) \qquad &{} \ x \in \partial \Omega , \\ v(x) = g(x) \qquad &{} \ x \in \partial \Omega , \end{array} \right. \end{aligned}
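As an illustration, the DPP above can be iterated numerically as a fixed point; here is a minimal 1D sketch, where the grid step, the jump rates a(x), b(x) and the boundary data are all hypothetical choices, not part of the paper:

```python
import numpy as np

# Fixed-point iteration of the DPP on a 1D grid over (-1, 1).
eps = 0.1
x = np.arange(-1.0, 1.0 + 1e-9, eps / 2)             # grid of step eps/2
r = 2                                                # B_eps(x) spans r nodes to each side
inner = np.where((x > -1 + eps) & (x < 1 - eps))[0]  # keep the full ball inside the grid

a = lambda s: 1.0                                    # hypothetical jump rate, board 1 -> 2
b = lambda s: 2.0                                    # hypothetical jump rate, board 2 -> 1
u = x.copy()                                         # hypothetical boundary data f(x) = x
v = -x.copy()                                        # hypothetical boundary data g(x) = -x

for sweep in range(5000):
    delta = 0.0
    for i in inner:
        ball_u = u[i - r:i + r + 1]
        new_u = a(x[i]) * eps**2 * v[i] \
            + (1 - a(x[i]) * eps**2) * 0.5 * (ball_u.max() + ball_u.min())
        new_v = b(x[i]) * eps**2 * u[i] \
            + (1 - b(x[i]) * eps**2) * v[i - r:i + r + 1].mean()
        delta = max(delta, abs(new_u - u[i]), abs(new_v - v[i]))
        u[i], v[i] = new_u, new_v
    if delta < 1e-12:
        break

# Each update is a convex combination of current values, so both value
# functions stay between the extreme boundary values.
print(u.min() >= -1.0 - 1e-9 and u.max() <= 1.0 + 1e-9)   # True
```

The iteration is monotone and nonexpansive in the sup norm, mirroring the comparison structure used for the value functions in the text.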

### $$n\times n$$ systems

We can deal with a system of n equations with n unknowns, $$u_1,\ldots ,u_n$$, of the form

\begin{aligned} \left\{ \begin{array}{ll} - L_i u_i(x) + b_i u_i(x) - \sum _{j\ne i} a_{ij} u_j(x)=0 \qquad &{} \ x \in \Omega , \\ u_i(x) = f_i(x) \qquad &{} \ x \in \partial \Omega . \end{array} \right. \end{aligned}

Here $$L_i$$ is $$\Delta _\infty$$ or $$\Delta$$, and the coefficients $$b_i$$, $$a_{ij}$$ are nonnegative and verify

\begin{aligned} b_i = \sum _{j\ne i} a_{ij}. \end{aligned}

To handle this case we have to play in n different boards and take the probability of jumping from board i to board j as $$a_{ij} {\varepsilon }^2$$ (notice that then the probability of continuing to play in the same board i is $$1- \sum _{j\ne i} a_{ij} {\varepsilon }^2$$). The associated DPP is given by

\begin{aligned} \left\{ \begin{array}{l} u_i^{{\varepsilon }}(x)= {\varepsilon }^{2} \sum _{j\ne i} a_{ij} u_j^{{\varepsilon }}(x)+(1- b_i {\varepsilon }^{2}) \Big \{{\frac{1}{2}}\sup _{y \in B_{{\varepsilon }}(x)}u_i^{{\varepsilon }}(y) + {\frac{1}{2}}\inf _{y \in B_{{\varepsilon }}(x)}u_i^{{\varepsilon }}(y) \Big \} \\ \hbox { or } \\ u_i^{{\varepsilon }}(x)= {\varepsilon }^{2} \sum _{j\ne i} a_{ij} u_j^{{\varepsilon }}(x)+(1- b_i {\varepsilon }^{2}) \fint _{B_{{\varepsilon }}(x)}u_i^{{\varepsilon }}(y)dy \qquad \qquad \qquad \qquad \ x \in \Omega , \\ u_i^{{\varepsilon }}(x) = \overline{f}(x) \qquad \qquad \ x \in {{\mathbb {R}}}^{N} \backslash \Omega . \end{array} \right. \end{aligned}

### Systems with normalized $$p-$$Laplacians

The normalized $$p-$$Laplacian is given by

\begin{aligned} \Delta _p^N u (x) = \alpha \Delta _\infty u (x) + \beta \Delta u(x), \end{aligned}

with $$\alpha (p)$$, $$\beta (p)$$ verifying $$\alpha + \beta =1$$ (see ). Notice that this operator is $$1-$$homogeneous. With the same ideas used here we can also handle the system

\begin{aligned} \left\{ \begin{array}{ll} - \Delta _{p}^N u(x) + u(x) - v(x)=0 \qquad &{} \ x \in \Omega , \\ - \Delta _{q}^N v(x) + v(x) - u(x)=0 \qquad &{} \ x \in \Omega , \\ u(x) = f(x) \qquad &{} \ x \in \partial \Omega , \\ v(x) = g(x) \qquad &{} \ x \in \partial \Omega . \end{array} \right. \end{aligned}

The associated game runs as follows: in the first board, when the token does not jump, a biased coin is tossed (with probability $$\alpha (p)$$ of heads and $$\beta (p)$$ of tails); if we get heads we play Tug-of-War and if we get tails we move at random. In the second board the rules are the same but we use a biased coin with different probabilities $$\alpha (q)$$ and $$\beta (q)$$, see [25, 26, 30, 31] for a similar game for a scalar equation (playing in only one board). The corresponding DPP is:

\begin{aligned} \left\{ \begin{array}{l} u^{{\varepsilon }}(x)={\varepsilon }^{2}v^{{\varepsilon }}(x)+(1-{\varepsilon }^{2}) \\ \quad \left[ \alpha (p) \Big \{{\frac{1}{2}}\sup _{y \in B_{{\varepsilon }}(x)}u^{{\varepsilon }}(y) + {\frac{1}{2}}\inf _{y \in B_{{\varepsilon }}(x)}u^{{\varepsilon }}(y) \Big \} + \beta (p) \fint _{B_{{\varepsilon }}(x)}u^{{\varepsilon }}(y)dy \right] \qquad \ x \in \Omega , \\ v^{{\varepsilon }}(x)={\varepsilon }^{2}u^{{\varepsilon }}(x)+(1-{\varepsilon }^{2}) \\ \quad \left[ \alpha (q) \Big \{{\frac{1}{2}}\sup _{y \in B_{{\varepsilon }}(x)}v^{{\varepsilon }}(y) + {\frac{1}{2}}\inf _{y \in B_{{\varepsilon }}(x)}v^{{\varepsilon }}(y)\Big \} + \beta (q) \fint _{B_{{\varepsilon }}(x)}v^{{\varepsilon }}(y)dy \right] \qquad \ x \in \Omega ,\\ u^{{\varepsilon }}(x) = \overline{f}(x) \qquad \ x \in {{\mathbb {R}}}^{N} \backslash \Omega , \\ v^{{\varepsilon }}(x) = \overline{g}(x) \qquad \ x \in {{\mathbb {R}}}^{N} \backslash \Omega . \end{array} \right. \end{aligned}
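To make the $$\alpha /\beta$$ mixture concrete, one sweep of the first equation's DPP can be sketched on a 1D grid as follows (the function and its parameters are illustrative, not the authors' code):

```python
import numpy as np

def dpp_step(u, v, alpha, eps, r):
    """One sweep of the first equation's DPP on a 1D grid:
    u(x) = eps^2 v(x) + (1 - eps^2) [ alpha * (sup + inf)/2 + beta * average ]
    over the discrete ball of r grid nodes to each side."""
    beta = 1.0 - alpha
    out = u.copy()
    for i in range(r, len(u) - r):
        ball = u[i - r:i + r + 1]
        tug = 0.5 * (ball.max() + ball.min())   # tug-of-war half-half step
        out[i] = eps**2 * v[i] + (1 - eps**2) * (alpha * tug + beta * ball.mean())
    return out

# A linear function equals both its tug-of-war value and its average over
# symmetric balls, so with v = u it is a fixed point of the update.
x = np.linspace(-1.0, 1.0, 21)
print(np.allclose(dpp_step(x, x, 0.5, 0.1, 2), x))   # True
```

Setting alpha = 1 recovers the pure Tug-of-War update and alpha = 0 the pure random-walk (mean value) update.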

### Systems depending on eigenvalues of the Hessian

Here we refer to second order PDEs associated with eigenvalues of the Hessian, for example, the jth eigenvalue

\begin{aligned} \lambda _j (D^2u (x)) := \inf _{S: dim(S)=j} \sup _{v \in S, |v|=1} \, \langle D^2u(x) v ; v \rangle , \end{aligned}

see  (remark that $$\lambda _j (D^2u)$$ is degenerate elliptic), the truncated Laplacians, also called truncated Pucci operators, that are given by the sum of the k smallest (or largest) eigenvalues of the Hessian

\begin{aligned} P_k^-(D^2u(x)) := \sum _{i=1}^k \lambda _i(D^2u(x)) \qquad \hbox {and} \qquad P_k^+(D^2u (x)) := \sum _{i=N-k+1}^N \lambda _i(D^2u(x)) \end{aligned}

see [4, 5] (these operators are also degenerate elliptic) or Pucci operators,

\begin{aligned} {{\mathcal {P}}}^{\pm }_{a, b}(D^2u (x)):= a \sum _{\lambda _i <0} \lambda _i(D^2u (x))+ b \sum _{\lambda _i >0} \lambda _i(D^2u (x)) \end{aligned}

for $$a,b>0$$, with $$a>b$$ (for $${\mathcal {P}}_{a,b}^-$$) or $$b>a$$ (for $${\mathcal {P}}_{a,b}^+$$) (notice that Pucci operators are uniformly elliptic).
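All of these operators are straightforward to evaluate once the eigenvalues of the Hessian are sorted; by the Courant-Fischer min-max principle the inf-sup defining $$\lambda _j$$ is just the j-th smallest eigenvalue. A small sketch (the matrix H and the function names are our own illustrative choices):

```python
import numpy as np

def lambda_j(H, j):
    """j-th smallest eigenvalue of the symmetric Hessian H (Courant-Fischer)."""
    return np.linalg.eigvalsh(H)[j - 1]   # eigvalsh returns ascending order

def P_minus(H, k):
    """Truncated Laplacian: sum of the k smallest eigenvalues."""
    return np.linalg.eigvalsh(H)[:k].sum()

def P_plus(H, k):
    """Truncated Laplacian: sum of the k largest eigenvalues."""
    return np.linalg.eigvalsh(H)[-k:].sum()

def pucci(H, a, b):
    """a * (sum of negative eigenvalues) + b * (sum of positive eigenvalues)."""
    lam = np.linalg.eigvalsh(H)
    return a * lam[lam < 0].sum() + b * lam[lam > 0].sum()

H = np.diag([-2.0, 1.0, 3.0])
print(lambda_j(H, 2))      # 1.0
print(P_minus(H, 2))       # -2 + 1 = -1.0
print(P_plus(H, 2))        # 1 + 3 = 4.0
print(pucci(H, 2.0, 1.0))  # 2*(-2) + 1*(1 + 3) = 0.0
```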

For instance, for the system

\begin{aligned} \left\{ \begin{array}{ll} - \lambda _j (D^2u(x)) + u(x) - v(x)=0 \qquad &{} \ x \in \Omega , \\ - \lambda _k (D^2v(x)) + v(x) - u(x)=0 \qquad &{} \ x \in \Omega , \\ u(x) = f(x) \qquad &{} \ x \in \partial \Omega , \\ v(x) = g(x) \qquad &{} \ x \in \partial \Omega , \end{array} \right. \end{aligned}

the associated game runs as follows: in the first board, when the token does not jump, the player who wants to minimize chooses a subspace S of dimension j, and then the other player (who wants to maximize) chooses a direction (a unitary vector) $$v \in S$$; then a fair coin is tossed (with probabilities $$1/2-1/2$$), and the new position of the game moves to $$x_0 \pm \varepsilon v$$, see . The rules in the second board are similar but this time the subspace S is of dimension k. The corresponding DPP is:

\begin{aligned} \left\{ \begin{array}{l} u^{{\varepsilon }}(x)={\varepsilon }^{2}v^{{\varepsilon }}(x)+(1-{\varepsilon }^{2}) \\ \quad \left[ \inf _{S: dim(S)=j} \sup _{v \in S, |v|=1} \Big ( \frac{1}{2} u^{\varepsilon }(x+{\varepsilon }v) + \frac{1}{2} u^{\varepsilon }(x - {\varepsilon }v) \Big )\right] \qquad \ x \in \Omega , \\ v^{{\varepsilon }}(x)={\varepsilon }^{2}u^{{\varepsilon }}(x)+(1-{\varepsilon }^{2}) \\ \quad \left[ \inf _{S: dim(S)=k} \sup _{v \in S, |v|=1} \Big ( \frac{1}{2} v^{\varepsilon }(x+{\varepsilon }v) + \frac{1}{2} v^{\varepsilon }(x - {\varepsilon }v) \Big ) \right] \qquad \ x \in \Omega , \\ u^{{\varepsilon }}(x) = \overline{f}(x) \qquad \ x \in {{\mathbb {R}}}^{N} \backslash \Omega , \\ v^{{\varepsilon }}(x) = \overline{g}(x) \qquad \ x \in {{\mathbb {R}}}^{N} \backslash \Omega . \end{array} \right. \end{aligned}

For a game involving Pucci operators, we refer to .

## References

1. Antunovic, T., Peres, Y., Sheffield, S., Somersille, S.: Tug-of-war and infinity Laplace equation with vanishing Neumann boundary condition. Commun. Partial Differ. Equ. 37(10), 1839–1869 (2012)

2. Armstrong, S.N., Smart, C.K.: An easy proof of Jensen's theorem on the uniqueness of infinity harmonic functions. Calc. Var. Partial Differ. Equ. 37(3–4), 381–384 (2010)

3. Barles, G., Busca, J.: Existence and comparison results for fully nonlinear degenerate elliptic equations without zeroth-order term. Commun. Partial Differ. Equ. 26(11–12), 2323–2337 (2001)

4. Birindelli, I., Galise, G., Ishii, H.: A family of degenerate elliptic operators: maximum principle and its consequences. Ann. Inst. H. Poincare Anal. Non Lineaire 35(2), 417–441 (2018)

5. Blanc, P., Rossi, J.D.: Games for eigenvalues of the Hessian and concave/convex envelopes. J. Math. Pures Appl. 127, 192–215 (2019)

6. Blanc, P., Rossi, J.D.: Game Theory and Partial Differential Equations. De Gruyter Series in Nonlinear Analysis and Applications, vol. 31 (2019). ISBN 978-3-11-061925-6, ISBN 978-3-11-062179-2 (eBook)

7. Blanc, P., Manfredi, J.J., Rossi, J.D.: Games for Pucci's maximal operators. J. Dyn. Games 6(4), 277–289 (2019)

8. Charro, F., Garcia Azorero, J., Rossi, J.D.: A mixed problem for the infinity Laplacian via Tug-of-War games. Calc. Var. Partial Differ. Equ. 34(3), 307–320 (2009)

9. Crandall, M.G.: A visit with the $$\infty$$-Laplace equation. In: Calculus of Variations and Nonlinear Partial Differential Equations. Lecture Notes in Mathematics, vol. 1927, pp. 75–122. Springer, Berlin (2008)

10. Crandall, M.G., Ishii, H., Lions, P.L.: User's guide to viscosity solutions of second order partial differential equations. Bull. Am. Math. Soc. 27, 1–67 (1992)

11. Doob, J.L.: What is a martingale? Am. Math. Mon. 78(5), 451–463 (1971)

12. Doob, J.L.: Classical Potential Theory and Its Probabilistic Counterpart. Classics in Mathematics. Springer, Berlin (2001)

13. Doob, J.L.: Semimartingales and subharmonic functions. Trans. Am. Math. Soc. 77, 86–121 (1954)

14. Hunt, G.A.: Markoff processes and potentials I, II, III. Ill. J. Math. 1, 44–93, 316–369 (1957)

15. Hunt, G.A.: Markoff processes and potentials I, II, III. Ill. J. Math. 2, 151–213 (1958)

16. Ishiwata, M., Magnanini, R., Wadade, H.: A natural approach to the asymptotic mean value property for the p-Laplacian. Calc. Var. Partial Differ. Equ. 56(4), Art. 97, 22 pp. (2017)

17. Jensen, R.: Uniqueness of Lipschitz extensions: minimizing the sup norm of the gradient. Arch. Ration. Mech. Anal. 123(1), 51–74 (1993)

18. Kac, M.: Random walk and the theory of Brownian motion. Am. Math. Mon. 54(7), 369–391 (1947)

19. Kakutani, S.: Two-dimensional Brownian motion and harmonic functions. Proc. Imp. Acad. Tokyo 20, 706–714 (1944)

20. Kawohl, B., Manfredi, J.J., Parviainen, M.: Solutions of nonlinear PDEs in the sense of averages. J. Math. Pures Appl. 97(3), 173–188 (2012)

21. Knapp, A.W.: Connection between Brownian motion and potential theory. J. Math. Anal. Appl. 12, 328–349 (1965)

22. Lindqvist, P., Manfredi, J.J.: On the mean value property for the $$p-$$Laplace equation in the plane. Proc. Am. Math. Soc. 144(1), 143–149 (2016)

23. Luiro, H., Parviainen, M., Saksman, E.: Harnack's inequality for p-harmonic functions via stochastic games. Commun. Partial Differ. Equ. 38(11), 1985–2003 (2013)

24. Manfredi, J.J., Parviainen, M., Rossi, J.D.: An asymptotic mean value characterization for p-harmonic functions. Proc. Am. Math. Soc. 138(3), 881–889 (2010)

25. Manfredi, J.J., Parviainen, M., Rossi, J.D.: Dynamic programming principle for tug-of-war games with noise. ESAIM Control Optim. Calc. Var. 18, 81–90 (2012)

26. Manfredi, J.J., Parviainen, M., Rossi, J.D.: On the definition and properties of p-harmonious functions. Ann. Scuola Norm. Sup. Pisa 11, 215–241 (2012)

27. Mitake, H., Tran, H.V.: Weakly coupled systems of the infinity Laplace equations. Trans. Am. Math. Soc. 369, 1773–1795 (2017)

28. Oksendal, B.: Stochastic Differential Equations: An Introduction with Applications, 6th edn. Springer, Berlin (2003)

29. Peres, Y., Schramm, O., Sheffield, S., Wilson, D.: Tug-of-war and the infinity Laplacian. J. Am. Math. Soc. 22, 167–210 (2009)

30. Peres, Y., Sheffield, S.: Tug-of-war with noise: a game theoretic view of the $$p$$-Laplacian. Duke Math. J. 145(1), 91–120 (2008)

31. Rossi, J.D.: Tug-of-war games and PDEs. Proc. R. Soc. Edinb. 141A, 319–369 (2011)

32. Williams, D.: Probability with Martingales. Cambridge University Press, Cambridge (1991)

## Acknowledgements

Partially supported by CONICET Grant PIP GI No. 11220150100036CO (Argentina), by UBACyT Grant 20020160100155BA (Argentina) and by MINECO MTM2015-70227-P (Spain).

## Author information


### Corresponding author

Correspondence to Julio D. Rossi.