Applied Mathematics & Optimization, Volume 77, Issue 3, pp 567–597

# Nonzero-Sum Games of Optimal Stopping for Markov Processes


## Abstract

Two players are observing a right-continuous and quasi-left-continuous strong Markov process X. We study the optimal stopping problem $$V^{1}_{\sigma }(x)=\sup _{\tau } \mathsf {M}_{x}^{1}(\tau ,\sigma )$$ for a given stopping time $$\sigma$$ (resp. $$V^{2}_{\tau }(x)=\sup _{\sigma } \mathsf {M}_{x}^{2}(\tau ,\sigma )$$ for given $$\tau$$) where $$\mathsf {M}_{x}^{1}(\tau ,\sigma ) = \mathsf {E}_{x} [G_{1}(X_{\tau })I(\tau \le \sigma ) + H_{1}(X_{\sigma })I(\sigma < \tau )]$$ with $$G_1,H_1$$ being continuous functions satisfying some mild integrability conditions (resp. $$\mathsf {M}_{x}^{2}(\tau ,\sigma ) = \mathsf {E}_{x} [G_{2}(X_{\sigma })I(\sigma < \tau ) + H_{2}(X_{\tau })I(\tau \le \sigma )]$$ with $$G_2,H_2$$ being continuous functions satisfying some mild integrability conditions). We show that if $$\sigma = \sigma _{D_{2}} = \inf \{t \ge 0: X_t \in D_2\}$$ (resp. $$\tau = \tau _{D_{1}} = \inf \{t \ge 0: X_t \in D_1\}$$) where $$D_{2}$$ (resp. $$D_1$$) has a regular boundary, then $$V^{1}_{\sigma _{D_{2}}}$$ (resp. $$V^{2}_{\tau _{D_{1}}}$$) is finely continuous. If $$D_{2}$$ (resp. $$D_1$$) is also (finely) closed then $$\tau _*^{\sigma _{D_2}} = \inf \{t \ge 0: X_{t} \in D_{1}^{\sigma _{D_{2}}}\}$$ (resp. $$\sigma _{*}^{\tau _{D_1}} = \inf \{t \ge 0: X_{t} \in D_{2}^{\tau _{D_{1}}}\}$$) where $$D_{1}^{\sigma _{D_{2}}} = \{V^{1}_{\sigma _{D_{2}}} = G_{1}\}$$ (resp. $$D_{2}^{\tau _{D_{1}}} = \{V^{2}_{\tau _{D_{1}}} = G_{2}\}$$) is optimal for player one (resp. player two). We then derive a partial superharmonic characterisation for $$V^{1}_{\sigma _{D_2}}$$ (resp. $$V^{2}_{\tau _{D_1}}$$) which can be exploited in examples to construct a pair of first entry times that is a Nash equilibrium.

## Keywords

Nonzero-sum optimal stopping game · Nash equilibrium · Markov process · Double partial superharmonic characterisation · Principle of double smooth fit · Principle of double continuous fit

## Mathematics Subject Classification

Primary 91A15, 60G40; Secondary 60J25, 60G44

## 1 Introduction

Optimal stopping games, often referred to as Dynkin games, are extensions of optimal stopping problems. Since the seminal paper of Dynkin [14], optimal stopping games have been studied extensively. Martingale methods for zero-sum games were studied by Kifer [32], Neveu [44], Stettner [55], Lepeltier and Maingueneau [41] and Ohtsubo [45]. The Markovian framework was initially studied by Frid [22], Gusein-Zade [26], Elbakidze [18] and Bismut [5]. Bensoussan and Friedman [2] and Friedman [20, 21] considered zero-sum optimal stopping games for diffusions and developed an analytic approach by relying on variational and quasi-variational inequalities. Ekström and Peskir [16] proved the existence of a value in two-player zero-sum optimal stopping games for right-continuous strong Markov processes and constructed a Nash equilibrium point under the additional assumption that the underlying process is quasi-left-continuous. Peskir in [51] and [52] extended these results further by deriving a semiharmonic characterisation of the value of the game without assuming that a Nash equilibrium exists a priori. In particular, a necessary and sufficient condition for the existence of a Nash equilibrium is that the value function coincides with the smallest superharmonic and the largest subharmonic function lying between the gain and the loss function which, in the case of absorbed Brownian motion in [0,1], is equivalent to ‘pulling a rope’ between ‘two obstacles’ (that is, finding the shortest path between the graphs of two functions). Connections between zero-sum optimal stopping games and singular stochastic control problems were studied in [23, 31] and [4]. Cvitanic and Karatzas [11] showed that backward stochastic differential equations are connected with the value function of a zero-sum Dynkin game. Advances in this direction can be found in [27]. Various authors have also studied zero-sum optimal stopping games with randomised strategies.
For further details one can refer to [39] and the references therein. Zero-sum optimal stopping games have been used extensively in the pricing of game contingent claims both in complete and incomplete markets (see for example [15, 17, 24, 25, 30, 33, 35, 36, 38] and [19]).

Literature on nonzero-sum optimal stopping games is mainly concerned with the existence of a Nash equilibrium. Initial studies in discrete time date back to Morimoto [42], wherein a fixed point theorem for monotone mappings is used to derive sufficient conditions for the existence of a Nash equilibrium point. Ohtsubo [46] derived equilibrium values via backward induction and in [47] the same author considers nonzero-sum games in which the lower gain process has a monotone structure, and gives sufficient conditions for a Nash equilibrium point to exist. Shmaya and Solan in [54] proved that every two-player nonzero-sum game in discrete time admits an $$\varepsilon$$-equilibrium in randomised stopping times. In continuous time Bensoussan and Friedman [3] showed that, for diffusions, a Nash equilibrium exists if there exists a solution to a system of quasi-variational inequalities. However, the regularity and uniqueness of the solution remain open problems. Nagai [43] studies a nonzero-sum stopping game of symmetric Markov processes. A system of quasi-variational inequalities is introduced in terms of Dirichlet forms and the existence of extremal solutions of a system of quasi-variational inequalities is discussed. Nash equilibrium points of the stopping game are then obtained from these extremal solutions. Cattiaux and Lepeltier [8] study special right-processes, namely Hunt processes in the Ray topology, and they prove existence of a quasi-Markov Nash equilibrium. The authors follow Nagai’s idea but use probabilistic tools rather than the theory of Dirichlet forms. Huang and Li in [29] prove the existence of a Nash equilibrium point for a class of nonzero-sum noncyclic stopping games using the martingale approach. Laraki and Solan [40] proved that every two-player nonzero-sum Dynkin game in continuous time admits an $$\varepsilon$$-equilibrium in randomised stopping times.
Hamadène and Zhang in [28] prove existence of a Nash equilibrium using the martingale approach, for processes with positive jumps. One application of nonzero-sum optimal stopping games is seen in the study of game options in incomplete markets, via the consideration of utility-based arguments (see [34]). Nonzero-sum optimal stopping games have also been used to model the interaction between bondholders and shareholders in the study of convertible bonds, when corporate taxes are included and when the company is allowed to claim default (see [9]).

In this work we consider two-player nonzero-sum games of optimal stopping for a general strong Markov process. The aim is to use probabilistic tools to study the optimal stopping problem of player one (resp. player two) when the stopping time of player two (resp. player one) is externally given. Although this work does not deal with the question of existence of mutually best responses (that is, the existence of a Nash equilibrium), the results obtained can be exploited further in various examples to show the existence of a pair of first entry times that will be a Nash equilibrium. Indeed, the results derived here will be used in a separate work (see [1]) to construct Nash equilibrium points for one-dimensional regular diffusions and for a certain class of payoff functions.

This paper is organised as follows: In Sect. 2 we introduce the underlying setup and formulate the nonzero-sum optimal stopping game. In Sect. 3 we show that if the strategy chosen by player two (resp. player one) is $$\sigma _{D_2}$$ (resp. $$\tau _{D_1}$$), the first entry time into a regular Borel subset $$D_2$$ (resp. $$D_1$$) of the state space, then the value function of player one associated with $$\sigma _{D_2}$$ (resp. the value function of player two associated with $$\tau _{D_1}$$), which we shall denote by $$V^1_{\sigma _{D_2}}$$ (resp. $$V^2_{\tau _{D_1}}$$), is finely continuous. In Sect. 4 we shall use this regularity property of $$V^1_{\sigma _{D_2}}$$ (resp. $$V^2_{\tau _{D_1}}$$) to construct an optimal stopping time for player one (resp. player two). In Sect. 5 we shall use the results obtained in Sects. 3 and 4 to provide a partial superharmonic characterisation for $$V^1_{\sigma _{D_2}}$$ (resp. $$V^2_{\tau _{D_1}}$$). More precisely, if $$D_2$$ (resp. $$D_1$$) is also a closed or finely closed subset of the state space then $$V^1_{\sigma _{D_2}}$$ (resp. $$V^2_{\tau _{D_1}}$$) can be identified with the smallest finely continuous function that is superharmonic in $$D_2^c$$ (resp. in $$D_1^c$$) and that majorises the lower payoff function. In Sect. 6 we shall consider stationary one-dimensional Markov processes and we shall assume that there exists a pair of stopping times $$(\tau _{A_*},\sigma _{B_*})$$ of the form $$\tau _{A_*}= \inf \{t\ge 0: X_t \le A_*\}$$ and $$\sigma _{B_*} = \inf \{t \ge 0 : X_t \ge B_*\}$$ where $$A_*< B_*$$, that is a Nash equilibrium point. We first show that $$V^{1}_{\sigma _{B_*}}$$ (resp. $$V^{2}_{\tau _{A_*}}$$) is continuous at $$A_*$$ (resp. at $$B_*$$). Then for the special case of one-dimensional regular diffusions we shall use the results obtained in Sect. 5 to show that $$V^{1}_{\sigma _{B_*}}$$ (resp. $$V^{2}_{\tau _{A_*}}$$) is also smooth at $$A_*$$ (resp.
$$B_*$$) provided that the payoff functions are smooth. This is in line with the principle of smooth fit observed in standard optimal stopping problems (see for example [49] for further details).

## 2 Formulation of the Problem

In this section we shall formulate rigorously the nonzero-sum optimal stopping game. For this we shall first set up the underlying framework. This will be similar to the one presented by Ekström and Peskir (cf. [16, p. 3]). On a given filtered probability space $$\left( \Omega ,\mathcal {F},(\mathcal {F}_t)_{t\ge 0},\mathsf {P}_{x}\right)$$ we define a strong Markov process $$X=(X_{t})_{t \ge 0}$$ with values in a measurable space $$(E,\mathcal {B})$$, with E being a locally compact Hausdorff space with a countable base (note that since E has a countable base, it is a Polish space) and $$\mathcal {B}$$ the Borel $$\sigma$$-algebra on E. We shall assume that $$\mathsf {P}_{x}\left( X_0=x\right) =1$$, that the sample paths of X are right-continuous and that X is quasi-left-continuous (that is, $$X_{\rho _{n}} \rightarrow X_{\rho }$$ $$\mathsf {P}_{x}$$-a.s. whenever $$\rho _{n}$$ and $$\rho$$ are stopping times such that $$\rho _{n} \uparrow \rho$$ $$\mathsf {P}_{x}$$-a.s.). All stopping times mentioned throughout this text are relative to the filtration $$(\mathcal {F}_{t})_{t \ge 0}$$ introduced above, which is also assumed to be right-continuous. This means that entry times in open and closed sets are stopping times. Moreover $$\mathcal {F}_0$$ is assumed to contain all $$\mathsf {P}_{x}$$-null sets from $$\mathcal {{F}}^{X}_\infty = \sigma \left( X_{t}:t \ge 0\right)$$, which further implies that the first entry times to Borel sets are stopping times. We shall also assume that the mapping $$x \mapsto \mathsf {P}_{x}(F)$$ is (universally) measurable for each $$F \in \mathcal {F}$$ so that the mapping $$x \mapsto \mathsf {E}_{x} [Z]$$ is (universally) measurable for each (integrable) random variable Z.

## Remark 2.1

Note that a subset F of a Polish space E is said to be universally measurable if it is $$\mu$$-measurable for every finite measure $$\mu$$ on $$(E,\mathcal {B})$$, where $$\mathcal {B}$$ is the Borel-$$\sigma$$ algebra on E. By $$\mu$$-measurable we mean that F is measurable with respect to the completion of $$\mathcal {B}$$ under $$\mu$$. If $$\mathcal {B}^*$$ is the collection of universally measurable subsets of E then a function $$f:E \rightarrow \mathbb {R}$$ is said to be universally measurable if $$f^{-1}(A) \in \mathcal {B}^*$$ for all $$A \in \mathcal {B}(\mathbb {R})$$, where $$\mathcal {B}(\mathbb {R})$$ is the Borel-$$\sigma$$ algebra on $$\mathbb {R}$$.

Finally we shall assume that $$\Omega$$ is the canonical space $$E^{\left[ 0,\infty \right) }$$ with $$X_{t}(\omega ) = \omega (t)$$ for $$\omega \in \Omega$$. In this case the shift operator $$\theta _{t}:\Omega \rightarrow \Omega$$ is well defined by $$\theta _{t}(\omega )(s) = \omega (t+s)$$ for $$\omega \in \Omega$$ and $$t,s \ge 0$$.

The Markovian version of a nonzero-sum optimal stopping game may now be formally described as follows. Let $$G_{1},G_{2},H_{1},H_{2}:E \rightarrow \mathbb {R}$$ be continuous functions satisfying $$G_{i} \le H_{i}$$ and the following integrability conditions:
\begin{aligned} \mathsf {E}_{x}[\sup _{t}|G_{i}(X_{t})|]< \infty , \quad \text {and}\quad \mathsf {E}_{x}[\sup _{t}|H_{i}(X_{t})|] < \infty \end{aligned}
(2.1)
for $$i=1,2$$. Suppose that two players are observing X. Player one wants to choose a stopping time $$\tau$$ and player two a stopping time $$\sigma$$ in such a way as to maximise their total average gains, which are respectively given by
\begin{aligned}&\mathsf {M}_{x}^{1}\left( \tau ,\sigma \right) = \mathsf {E}_{x}\left[ G_{1}\left( X_{\tau }\right) I\left( \tau \le \sigma \right) +H_{1}\left( X_{\sigma } \right) I\left( \sigma < \tau \right) \right] \end{aligned}
(2.2)
\begin{aligned}&\mathsf {M}_{x}^{2}\left( \tau ,\sigma \right) = \mathsf {E}_{x}\left[ G_{2}\left( X_{\sigma }\right) I\left( \sigma < \tau \right) +H_{2}\left( X_{\tau }\right) I\left( \tau \le \sigma \right) \right] . \end{aligned}
(2.3)
For a given strategy $$\sigma$$ chosen by player two, we let
\begin{aligned} V^{1}_{\sigma }(x) = \sup _{\tau } \mathsf {M}_{x}^{1}(\tau ,\sigma ) \end{aligned}
(2.4)
and for a given strategy $$\tau$$ chosen by player one, we let
\begin{aligned} V^{2}_{\tau }(x) = \sup _{\sigma } \mathsf {M}_{x}^{2}(\tau ,\sigma ). \end{aligned}
(2.5)
We shall refer to $$V^1_{\sigma }$$ (resp. $$V^2_\tau$$) as the value function of player one (resp. player two) associated with the given stopping time $$\sigma$$ (resp. $$\tau$$) of player two (resp. player one). We shall assume that the stopping times in (2.4) and (2.5) are finite valued and if the terminal time T is finite we shall further assume that
\begin{aligned} G_{i}(X_{T}) = H_{i}(X_{T}) \text { } \mathsf {P}_{x} \text {-a.s.} \end{aligned}
(2.6)
for $$i=1,2$$. In this case one can think of X as being a two-dimensional process $$((t,Y_{t}))_{t \ge 0}$$ so that $$G_{i}$$ and $$H_{i}$$ will be functions on $$[0,T] \times E$$ (cf. [49, p. 36]). (We note that if the terminal time T is infinite and the stopping times $$\tau$$ and $$\sigma$$ are allowed to be infinite, our results will still be valid provided that $$\limsup _{t \rightarrow \infty } G_i(X_t) = \limsup _{t \rightarrow \infty } H_i(X_t) \text { } \mathsf {P}_x$$-a.s.)

The game is said to have a solution if there exists a pair of stopping times $$(\tau _*,\sigma _{*})$$ which is a Nash equilibrium point, that is, $$\mathsf {M}_{x}^1(\tau ,\sigma _{*}) \le \mathsf {M}_{x}^1(\tau _*,\sigma _{*})$$ and $$\mathsf {M}_{x}^2(\tau _*,\sigma ) \le \mathsf {M}_{x}^2(\tau _*,\sigma _{*})$$ for all stopping times $$\tau , \sigma$$. This means that neither player can perform better by changing strategy unilaterally. In this case $$V^{1}_{\sigma _{*}}(x) = \mathsf {M}_{x}^{1}(\tau _{*},\sigma _{*})$$ is the payoff function of player one and $$V^{2}_{\tau _{*}}(x)=\mathsf {M}_{x}^{2}(\tau _{*},\sigma _{*})$$ the payoff function of player two in this equilibrium. So $$V^{1}_{\sigma _*}$$ and $$V^{2}_{\tau _*}$$ can be called the value functions of the game (corresponding to $$(\tau _*,\sigma _*)$$). In general, as we shall see in Sect. 6, there might be other pairs of stopping times that form a Nash equilibrium point, which can lead to different value functions.
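The Nash condition above can be made concrete in a toy discretisation. The following sketch (an illustration, not part of the paper's framework) takes X to be a symmetric simple random walk and restricts both players to threshold strategies $$\tau _A = \inf \{t \ge 0: X_t \le A\}$$ and $$\sigma _B = \inf \{t \ge 0: X_t \ge B\}$$; the payoff functions below are hypothetical choices satisfying $$G_i \le H_i$$. Since for such a walk the probability of hitting A before B from $$x \in (A,B)$$ is $$(B-x)/(B-A)$$, the expected payoffs $$\mathsf {M}^1_x$$ and $$\mathsf {M}^2_x$$ can be evaluated in closed form and mutual best responses checked by enumeration.

```python
# Illustrative sketch (not from the paper): a discretised nonzero-sum
# stopping game on a symmetric simple random walk, with both players
# restricted to threshold strategies.  Payoffs are hypothetical choices
# with G_i <= H_i; player one prefers to stop near 2, player two near 8.

def G1(y): return -(y - 2) ** 2
def G2(y): return -(y - 8) ** 2
def H1(y): return 0.0          # constant upper payoffs, so G_i <= H_i
def H2(y): return 0.0

def payoffs(x, A, B):
    """(M^1, M^2) for start x and thresholds A < B; immediate stops included."""
    if x <= A:                 # tau = 0 <= sigma: player one stops first
        return G1(x), H2(x)
    if x >= B:                 # sigma = 0 < tau: player two stops first
        return H1(x), G2(x)
    p = (B - x) / (B - A)      # probability of hitting A before B
    return p * G1(A) + (1 - p) * H1(B), p * H2(A) + (1 - p) * G2(B)

def best_response_1(x, B, grid):
    return max(grid, key=lambda A: payoffs(x, A, B)[0] if A < B else -float("inf"))

def best_response_2(x, A, grid):
    return max(grid, key=lambda B: payoffs(x, A, B)[1] if B > A else -float("inf"))

x0, grid = 5, range(0, 11)
A_star, B_star = 2, 8
assert best_response_1(x0, B_star, grid) == A_star
assert best_response_2(x0, A_star, grid) == B_star   # mutual best responses: Nash
```

With these payoffs the pair of first entry times at the thresholds $$A_* = 2$$ and $$B_* = 8$$ is a Nash equilibrium within the threshold class, in line with the equilibria of first-entry-time form studied in Sect. 6.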

## 3 Fine continuity property

In this section we show that if the strategy chosen by player two (resp. player one) corresponds to the first entry time into a subset $$D_2$$ (resp. $$D_1$$) of E, whose boundary $$\partial D_2$$ (resp. $$\partial D_1$$) is regular, then $$V^{1}_{\sigma _{D_2}}$$ (resp. $$V^{2}_{\tau _{D_1}}$$) is continuous in the fine topology (i.e. finely continuous). For literature on the fine topology one can refer to [6, 10] and [13]. We first define the concept of a finely open set and of a regular boundary of a Borel subset of E.

## Definition 3.1

An arbitrary set $$B \subseteq E$$ is said to be finely open if there exists a Borel set $$A \subseteq B$$ such that $$\mathsf {P}_{x}(\rho _{A^{c}} > 0) = 1$$ for every $$x \in A$$, where $$\rho _{A^{c}} = \inf \{t > 0: X_{t} \in A^{c}\}$$ is the first hitting time of $$A^{c}$$.

## Definition 3.2

The boundary $$\partial D$$ of a Borel set $$D \subseteq E$$ is said to be regular for D if $$\mathsf {P}_{x}(\rho _{D} = 0) = 1$$ for every point $$x \in \partial D$$, where $$\rho _{D} = \inf \{t>0:X_{t} \in D \}$$.
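As a numerical illustration of Definition 3.2 (a sketch, not from the paper): for a standard Brownian motion started at $$x = 0$$, the point 0 is regular for $$D = (0,\infty )$$, since the path enters $$(0,\infty )$$ immediately with probability one. Simulating an Euler discretisation, the fraction of paths that become positive within the first n steps should approach 1 as the grid is refined; the step counts, path count and seed below are arbitrary choices.

```python
# Monte Carlo sketch of boundary regularity (illustrative assumptions):
# estimate P_0(X enters (0, infinity) within n small time steps) for
# Brownian motion; regularity of 0 for D = (0, infinity) means this
# probability tends to 1 as the time grid is refined.
import random

def fraction_entering(n_steps, n_paths, dt=1e-4, seed=1):
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_paths):
        x = 0.0
        for _ in range(n_steps):
            x += rng.gauss(0.0, dt ** 0.5)   # Brownian increment
            if x > 0.0:
                hits += 1
                break
    return hits / n_paths

coarse = fraction_entering(10, 2000)
fine = fraction_entering(1000, 2000)
assert fine > coarse    # refining the grid detects more immediate entries
assert fine > 0.9       # consistent with P_0(rho_D = 0) = 1
```

By contrast, for a point strictly outside the closure of D the estimated fraction would stay near 0 for small time horizons, which is the dichotomy used repeatedly in the proofs below.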

We now introduce preliminary results which are needed to prove the main theorem of this section.

## Lemma 3.3

For any given stopping time $$\sigma$$ (resp. $$\tau$$), the mapping $$x \mapsto V^{1}_{\sigma }(x)$$ (resp. $$x \mapsto V^{2}_{\tau }(x)$$) is measurable.

## Proof

To prove measurability of the mapping $$x \mapsto V^{2}_\tau (x)$$ one can follow the proof in [16, p. 5, pt. 3] by replacing $$G_1$$ and $$G_2$$ with $$G_2$$ and $$H_2$$ respectively (note that our payoff functions are assumed to be continuous, hence finely continuous). So we shall only prove the result for $$V^{1}_{\sigma }$$. Ekström and Peskir [16, p. 5, pt. 3] proved that for any given stopping time $$\sigma$$, the value function $$\tilde{V}^{1}_{\sigma }$$ of the optimal stopping problem $$\sup _{\tau }{\tilde{\mathsf {M}}}_{x}^{1}(\tau ,\sigma ) = \sup _{\tau }\mathsf {E}_{x} [G_{1}(X_{\tau })I(\tau < \sigma ) + H_{1}(X_{\sigma })I(\sigma \le \tau )]$$ is measurable. The same method of proof can be applied in this setting with the following slight modification: Let $$G_{t}^{\sigma ,1}=G_{1}(X_{t})I(t \le \sigma ) + H_{1}(X_{\sigma })I(\sigma < t)$$ and $$\tilde{G}_{t}^{\sigma ,1}=G_{1}(X_{t})I(t < \sigma ) + H_{1}(X_{\sigma })I(\sigma \le t)$$. Note that the mapping $$t \mapsto G_{t}^{\sigma ,1}$$ is not right-continuous. Now for any stopping time $$\tau$$ in the optimal stopping problem (2.4) we let $$\tau _n = \frac{k}{2^n}$$ on $$\{\frac{k-1}{2^n} < \tau \le \frac{k}{2^n}\}$$ for each $$n \ge 1$$. It is well known that, for each n, $$\tau _n$$ is a stopping time taking values in the set $$Q_n$$ of dyadic rationals of the form $$\frac{k}{2^n}$$, and that $$\tau _n \downarrow \tau$$ as $$n \rightarrow \infty$$. Since $$G_{1}(X)$$ is right-continuous we have that
\begin{aligned} \lim _{n \rightarrow \infty } G_{\tau _{n}}^{\sigma ,1} = \tilde{G}_{\tau }^{\sigma ,1} \text { } \mathsf {P}_{x} \text {-a.s.} \end{aligned}
(3.1)
Since $$G_1 \le H_1$$ we get, upon using (3.1) and Fatou’s lemma (the required integrability condition for using Fatou’s lemma can be derived from the integrability assumption (2.1)), that
\begin{aligned}&\mathsf {E}_{x}\big [G_{\tau }^{\sigma ,1}\big ] \le \mathsf {E}_{x}\big [\tilde{G}_{\tau }^{\sigma ,1}\big ]= \mathsf {E}_{x}\big [\lim _{n\rightarrow \infty } G_{\tau _{n}}^{\sigma ,1} \big ] \le \liminf _{n \rightarrow \infty } \mathsf {E}_{x} \big [G_{\tau _{n}}^{\sigma ,1}\big ] \nonumber \\&\quad \le \sup _{n \ge 1} \sup _{\tau \in Q_{n}} \mathsf {E}_{x} \big [G_{\tau }^{\sigma ,1}\big ] = :\sup _{n \ge 1} V_{n}^{\sigma ,1}(x). \end{aligned}
(3.2)
Taking the supremum over all $$\tau$$ it follows that $$V^{1}_{\sigma }(x) \le \sup _{n \ge 1} V_{n}^{\sigma ,1}(x)$$. On the other hand, $$V_{n}^{\sigma ,1}(x) \le V_{\sigma }^{1}(x)$$ for all $$n \ge 1$$ so we get that $$V^{1}_{\sigma }(x) = \sup _{n \ge 1} V_{n}^{\sigma ,1}(x)$$ for all $$x \in E$$. Measurability of $$V^{1}_{\sigma }(x)$$ follows from the measurability property of $$\sup _{n \ge 1} V_{n}^{\sigma ,1}(x)$$ as in [16]. $$\square$$
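The dyadic discretisation $$\tau _n$$ used in the proof can be sketched numerically (an illustration of the construction, evaluated at an arbitrary sample value of $$\tau$$): rounding the realised value of $$\tau$$ up to the grid $$Q_n$$ produces a sequence that decreases to $$\tau$$ from above, with error at most $$2^{-n}$$.

```python
# Sketch of the dyadic approximation tau_n from the proof of Lemma 3.3:
# tau_n = k / 2**n on the event {(k-1)/2**n < tau <= k/2**n}, i.e. the
# realised value of tau is rounded up to the dyadic grid Q_n.
import math

def dyadic_approx(t, n):
    """Round the realised value t of tau up to the grid Q_n = {k / 2**n}."""
    return math.ceil(t * 2 ** n) / 2 ** n

t = 0.3                                # an arbitrary realised value of tau
taus = [dyadic_approx(t, n) for n in range(1, 12)]
assert all(s >= t for s in taus)                       # tau_n >= tau
assert all(a >= b for a, b in zip(taus, taus[1:]))     # tau_n decreasing in n
assert taus[-1] - t <= 2 ** -11                        # tau_n -> tau from above
```

Since each $$\tau _n$$ takes countably many values, the supremum over stopping times on $$Q_n$$ is a supremum over a countable family, which is what delivers measurability in the argument above.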

## Lemma 3.4

Let D be a Borel subset of E and let $$x \in \partial D$$, where $$\partial D$$ is a regular boundary for D. Suppose that $$(\rho _{n})_{n=1}^{\infty }$$ is a sequence of stopping times such that $$\rho _{n} \downarrow 0 \text { } \mathsf {P}_{x}$$-a.s. as $$n \rightarrow \infty$$. Set $$\sigma _{\rho _{n}}= \inf \{t \ge \rho _{n} : X_{t} \in D\}$$. Then $$\sigma _{\rho _{n}} \downarrow 0 \text { } \mathsf {P}_{x}\text {-a.s.}$$ as $$n \rightarrow \infty$$.

## Proof

Let $$x \in \partial D$$. By regularity of $$\partial D$$, for any $$\varepsilon > 0$$ there exists $$t \in (0,\varepsilon )$$ such that $$X_{t} \in D$$ $$\mathsf {P}_{x}$$-a.s. Since $$(\sigma _{\rho _{n}})$$ is a decreasing sequence of stopping times, $$\sigma _{\rho _{n}} \downarrow \beta \text { } \mathsf {P}_{x}$$-a.s. for some stopping time $$\beta$$. Suppose for contradiction that $$\beta > 0$$. Now $$\rho _{n} \downarrow 0 \text { } \mathsf {P}_{x}$$-a.s. and for each n we have $$\sigma _{\rho _{n}} \ge \beta \text { } \mathsf {P}_{x}$$-a.s. So for any given $$\omega \in \Omega \backslash N$$, where $$\mathsf {P}_{x}(N) = 0$$, we have that $$X_{t}(\omega ) \notin D$$ for all $$t \in (0,\beta (\omega ))$$, and this contradicts the fact that $$\partial D$$ is regular for D. $$\square$$

The next lemma and theorem, which we shall exploit in this study, provide conditions for fine continuity. The proofs of these results can be found in [13].

## Lemma 3.5

A measurable function $$F:E \rightarrow \mathbb {R}$$ is finely continuous if and only if
\begin{aligned} \lim _{t \downarrow 0} F(X_{t}) = F(x) \text { } \mathsf {P}_{x}\text {-a.s.} \end{aligned}
(3.3)
for every $$x \in E$$. This is further equivalent to the fact that the mapping
\begin{aligned} t \mapsto F(X_{t}(\omega )) \text { is right-continuous on } \mathbb {R}_{+} \end{aligned}
(3.4)
for every $$\omega \in \Omega \backslash N$$ where $$\mathsf {P}_{x}(N)=0$$ for all $$x \in E$$.

## Theorem 3.6

Let $$F:E \rightarrow \mathbb {R}$$ be a measurable function and suppose that $$K_{1} \subseteq K_{2} \subseteq K_{3} \subseteq ...$$ is a nested sequence of compact sets in E. Suppose also that $$\rho _{K_{n}} \downarrow 0 \text { } \mathsf {P}_{x}$$-a.s. as $$n \rightarrow \infty$$, where $$\rho _{K_{n}} = \inf \{t \ge 0: X_{t} \in K_{n}\}$$. If $$\lim _{n \rightarrow \infty } \mathsf {E}_{x}[F(X_{\rho _{K_{n}}}) ]= F(x)$$ then F is finely continuous.

We next state and prove the main result of this section, that is the fine continuity property of $$V^{1}_{\sigma _{D_2}}$$ (resp. $$V^{2}_{\tau _{D_1}}$$).

## Theorem 3.7

Let $$(\rho _{n})_{n=1}^{\infty }$$ be any sequence of stopping times such that $$\rho _{n}\downarrow 0$$ $$\mathsf {P}_{x}$$-a.s. as $$n\rightarrow \infty$$. Suppose that $$D_1,D_2$$ are Borel subsets of E having regular boundaries $$\partial D_1$$ and $$\partial D_2$$ respectively. Then
\begin{aligned}&\lim _{n\rightarrow \infty }\mathsf {E}_{x} \big [V^{1}_{\sigma _{D_2}} (X_{\rho _{n}})\big ] = V^{1}_{\sigma _{D_2}}(x) \end{aligned}
(3.5)
\begin{aligned}&\lim _{n\rightarrow \infty }\mathsf {E}_{x} \big [V^{2}_{\tau _{D_1}} (X_{\rho _{n}})\big ] = V^{2}_{\tau _{D_1}}(x) \end{aligned}
(3.6)
where $$\sigma _{D_2}=\inf \{t \ge 0 :X_{t} \in D_2\}$$ and $$\tau _{D_1}=\inf \{t \ge 0 :X_{t} \in D_1\}$$.

## Proof

We will only prove (3.5) as (3.6) follows by symmetry.

$$\mathbf {1^\circ }$$ From Lemma 3.3 we know that $${V}^{1}_{\sigma _{D_2}}$$ is measurable. This implies that
\begin{aligned} V_{\sigma _{D_2}}^{1}(X_{\rho _{n}}) = \sup _{\tau } \mathsf {M}^{1}_{X_{\rho _{n}}}(\tau ,\sigma _{D_2}) \end{aligned}
(3.7)
is a random variable. By the strong Markov property of X we have that
\begin{aligned} \mathsf {{M}}^{1}_{X_{\rho _{n}}}(\tau ,\sigma _{D_2})= \mathsf {E}_{x} \big [G_{1}(X_{\tau _{\rho _{n}}})I(\tau _{\rho _{n}} \le \sigma _{\rho _{n}}) + H_{1}(X_{\sigma _{\rho _{n}}})I(\sigma _{\rho _{n}} < \tau _{\rho _{n}})|\mathcal {F}_{\rho _{n}}\big ] \end{aligned}
(3.8)
where we set $$\tau _{\rho _{n}}=\rho _{n}+\tau \circ \theta _{\rho _{n}}$$ and $$\sigma _{\rho _{n}}=\rho _{n}+\sigma _{D_2} \circ \theta _{\rho _{n}}$$. It is well known that $$\tau _{\rho _{n}}$$ and $$\sigma _{\rho _{n}}$$ are stopping times (see for example [10, Section 1.3, Theorem 11]). Let us set
\begin{aligned} \mathsf {M}_x^1(\tau ,\sigma | \mathcal {F}_{\rho }) = \mathsf {E}_{x} \big [G_{1}(X_{\tau })I(\tau \le \sigma ) + H_{1}(X_{\sigma })I(\sigma < \tau )|\mathcal {F}_{\rho }\big ] \end{aligned}
(3.9)
for given stopping times $$\tau ,\sigma$$ and $$\rho$$. Then from (3.7) and (3.8) we get that
\begin{aligned} {V}_{\sigma _{D_2}}^{1}(X_{\rho _{n}}) = {{\mathrm{ess\,sup}}}_{\tau }\mathsf {{M}}_{x}^{1}(\tau _{\rho _{n}},\sigma _{\rho _{n}}|\mathcal {F}_{\rho _{n}}) = {{\mathrm{ess\,sup}}}_{\tau \ge \rho _{n}}\mathsf {{M}}_{x}^{1}(\tau ,\sigma _{\rho _{n}}|\mathcal {F}_{\rho _{n}}). \end{aligned}
(3.10)
The last equality follows from the fact that for every stopping time $$\tau \ge \rho _{n}$$, there exists a function $$\tau ^{\rho _{n}}:\Omega \times \Omega \rightarrow [0,\infty ]$$ such that
\begin{aligned} \tau ^{\rho _{n}} \text { is } \mathcal {F}_{\rho _{n}} \otimes \mathcal {F}_{\infty }-\text { measurable } \end{aligned}
(3.11)
\begin{aligned} \vartheta \mapsto \tau ^{\rho _{n}}(\omega ,\vartheta ) \text { is a stopping time } \end{aligned}
(3.12)
\begin{aligned} \tau (\omega ) = \rho _{n} + \tau ^{\rho _{n}}(\omega ,\theta _{\rho _{n}}(\omega )) \end{aligned}
(3.13)
for all $$\omega \in \Omega$$. Note that the latter assertion can be derived from Galmarino’s test. In particular if $$\tau \ge \rho _{n}$$ is the first entry time of X into a set, then $$\tau = \rho _{n} + \tau \circ \theta _{\rho _{n}}$$ and $$\tau ^{\rho _{n}}$$ can be identified with $$\tau$$ in the sense that $$\tau ^{\rho _{n}}(\omega ,\vartheta ) = \tau (\vartheta )$$ for all $$\omega$$ and $$\vartheta$$. Taking expectations on both sides in (3.10) we get
\begin{aligned} \mathsf {E}_{x}\big [{V}_{\sigma _{D_2}}^{1}(X_{\rho _{n}})\big ] = \mathsf {E}_{x} \big [{{\mathrm{ess\,sup}}}_{\tau \ge \rho _{n}}\mathsf {{M}}_{x}^{1}(\tau ,\sigma _{\rho _{n}}|\mathcal {F}_{\rho _{n}})\big ]. \end{aligned}
(3.14)
$$\mathbf {2^\circ }$$ We next show that the family $$\{\mathsf {{M}}_{x}^{1}(\tau ,\sigma _{\rho _{n}}|\mathcal {F}_{\rho _{n}}):\tau \ge \rho _{n} \}$$ is upwards directed. For this we show that for any two stopping times $$\tau _{1},\tau _{2}\ge \rho _{n}$$ there exists $$\tau _{3}\ge \rho _{n}$$ such that $$\mathsf {M}_{x}^{1}( \tau _{3},\sigma _{\rho _{n}} | \mathcal {F}_{\rho _{n}}) \ge \mathsf {M}_{x}^{1}( \tau _{1},\sigma _{\rho _{n}} | \mathcal {F} _{\rho _{n} }) \vee \mathsf {M}_{x}^{1}( \tau _{2},\sigma _{\rho _{n}} | \mathcal {F}_{\rho _{n} })$$. So let $$\tau _{1},\tau _{2}\ge \rho _{n}$$ be any two stopping times given and fixed and define the set $$A= \{\omega : \mathsf {M}_{x}^{1}(\tau _{1},\sigma _{\rho _{n}} | \mathcal {F}_{\rho _{n} }) (\omega ) \ge \mathsf {M}_{x}^{1}( \tau _{2},\sigma _{\rho _{n}}| \mathcal {F}_{\rho _{n}})(\omega )\}$$. Now $$A\in \mathcal {F}_{\rho _{n} }$$ because $$\mathsf {M}_{x}^{1}(\tau _{i},\sigma _{\rho _{n}} |\mathcal {F}_{\rho _{n} })$$ for $$i=1,2$$ are $$\mathcal {F}_{\rho _{n}}$$-measurable. Let $$\tau _{3}=\tau _{1} I _{A}+\tau _{2} I _{A^{c}}$$. Since $$\tau _{1},\tau _{2}\ge \rho _{n}$$ it follows that $$\tau _{3}\ge \rho _{n}$$. Also, $$\left\{ \tau _{3}\le t\right\} =\left\{ \left\{ \tau _{1}\le t\right\} \cap A\right\} \cup \left\{ \left\{ \tau _{2}\le t\right\} \cap A^{c}\right\} =\left\{ \left\{ \tau _{1}\le t\right\} \cap A\cap \left\{ \rho _{n} \le t\right\} \right\} \cup \left\{ \left\{ \tau _{2}\le t\right\} \cap A^{c}\cap \left\{ \rho _{n} \le t\right\} \right\} \in \mathcal {F}_{t}.$$ This follows from the fact that the sets A and $$A^{c}$$ belong to $$\mathcal {F}_{\rho _{n} }$$ and that $$\{\tau _{i} \le t\} \subseteq \{\rho _{n} \le t\}$$ for $$i=1,2$$. So $$\tau _{3}$$ is a stopping time and hence,
\begin{aligned} \mathsf {M}_{x}^{1}\left( \tau _{3},\sigma _{\rho _{n}} |\mathcal {F}_{\rho _{n} }\right)= & {} \mathsf {M}_{x}^{1}\left( \tau _{1} I _{A}+\tau _{2} I _{A^{c}},\sigma _{\rho _{n}} | \mathcal {F}_{\rho _{n} }\right) \\ \nonumber= & {} \mathsf {M}_{x}^{1}\left( \tau _{1},\sigma _{\rho _{n}} |\mathcal {F}_{\rho _{n} }\right) I _{A} +\;\mathsf {M}_{x}^{1}\left( \tau _{2},\sigma _{\rho _{n}} |\mathcal {F}_{\rho _{n} }\right) I _{A^{c}} \\ \nonumber= & {} \mathsf {M}_{x}^{1}\left( \tau _{1},\sigma _{\rho _{n}} |\mathcal {F}_{\rho _{n} }\right) \vee \mathsf {M}_{x}^{1}\left( \tau _{2},\sigma _{\rho _{n}} |\mathcal {F}_{\rho _{n} }\right) \end{aligned}
(3.15)
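The pasting argument in step $$2^\circ$$ rests on an elementary pointwise fact, sketched here on hypothetical sample values of the two conditional payoffs: stopping according to $$\tau _1$$ on A and $$\tau _2$$ on $$A^c$$ realises the pointwise maximum of the two conditional payoffs, which is exactly why the family is upwards directed.

```python
# Toy illustration (hypothetical sample values, not from the paper):
# realisations of M_x^1(tau_1, . | F)(omega) and M_x^1(tau_2, . | F)(omega)
# across four scenarios omega.
m1 = [0.4, -1.0, 2.5, 0.0]
m2 = [1.1, -2.0, 2.5, 0.3]

# A = {M(tau_1) >= M(tau_2)} is F-measurable, so the pasted time
# tau_3 = tau_1 on A, tau_2 on A^c is again a stopping time.
A = [a >= b for a, b in zip(m1, m2)]
m3 = [a if on_A else b for a, b, on_A in zip(m1, m2, A)]

# The pasted strategy attains the pointwise maximum of the two payoffs.
assert m3 == [max(a, b) for a, b in zip(m1, m2)]
```

Being upwards directed is what later licenses the interchange of expectation and essential supremum in (3.14).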
$$\mathbf {3^\circ }$$ We next prove that if $$\partial D_2$$ is a regular boundary for $$D_2$$ then
\begin{aligned} {V}_{\sigma _{D_2}}^{1}(x) \le \liminf _{n \rightarrow \infty } \mathsf {E}_{x}\big [{V}^{1}_{\sigma _{D_2}}(X_{\rho _{n}})\big ]. \end{aligned}
(3.16)
For this we first show that
\begin{aligned} \mathsf {{M}}_{x}^{1}(\tau ,\sigma _{\rho _{n}}) - \mathsf {{M}}_{x}^{1}(\tau \vee \rho _{n},\sigma _{\rho _{n}}) = \mathsf {E}_{x}[G_{1}(X_{\tau \wedge \rho _{n}}) - G_{1}(X_{\rho _{n}})] \end{aligned}
(3.17)
Since $$\sigma _{\rho _{n}} \ge \rho _{n}$$ we have that
\begin{aligned} \mathsf {{M}}_{x}^{1}(\tau ,\sigma _{\rho _{n}})= & {} \mathsf {E}_{x}\big [(G_{1}(X_{\tau })I(\tau \le \sigma _{\rho _{n}}) + H_{1}(X_{\sigma _{\rho _{n}}})I(\sigma _{\rho _{n}}< \tau ))I(\tau<\rho _{n})\big ] \nonumber \\&+ \;\mathsf {E}_{x}\big [(G_{1}(X_{\tau \vee \rho _{n}})I(\tau \vee \rho _{n} \le \sigma _{\rho _{n}}) \nonumber \\&+ \;H_{1}(X_{\sigma _{\rho _{n}}})I(\sigma _{\rho _{n}}< \tau \vee \rho _{n}))I(\tau \ge \rho _{n})\big ] \nonumber \\= & {} \mathsf {E}_{x}\big [G_{1}(X_{\tau })I(\tau< \rho _{n})\big ] + \mathsf {M}_{x}^{1}(\tau \vee \rho _{n},\sigma _{\rho _{n}}) \nonumber \\&- \;\mathsf {E}_{x}\big [(G_{1}(X_{\rho _{n}})I(\rho _{n} \le \sigma _{\rho _{n}}) + H_{1}(X_{\sigma _{\rho _{n}}})I(\sigma _{\rho _{n}}< \rho _{n})))I(\tau< \rho _{n})\big ] \nonumber \\= & {} \mathsf {E}_{x}\big [(G_{1}(X_{\tau })- G_{1}(X_{\rho _{n}}))I(\tau < \rho _{n})\big ] + \mathsf {{M}}_{x}^{1}(\tau \vee \rho _{n},\sigma _{\rho _{n}}) \nonumber \\= & {} \mathsf {E}_{x}\big [G_{1}(X_{\tau \wedge \rho _{n}})- G_{1}(X_{\rho _{n}})\big ] + \mathsf {{M}}_{x}^{1}(\tau \vee \rho _{n},\sigma _{\rho _{n}}) \end{aligned}
(3.18)
from which (3.17) follows. By considering separately the sets $$\{\sigma _{D_2} < \rho _n \}$$, $$\{\sigma _{D_2} \ge \rho _n\}$$ (note that $$\sigma _{\rho _n} = \sigma _{D_2}$$ on the set $$\{\sigma _{D_2} \ge \rho _{n}\}$$), $$\{\sigma _{D_2} > 0\}, \{ \sigma _{D_2} = 0\}, \{ \tau = 0\}$$ and $$\{ \tau > 0 \}$$, and by using Lemma 3.4 we get
\begin{aligned}&\mathsf {{M}}_{x}^{1}(\tau ,\sigma _{D_2}) - \mathsf {{M}}_{x}^{1}(\tau ,\sigma _{\rho _{n}}) \nonumber \\&\quad = \mathsf {E}_{x}[-G_{1}(X_{\tau })I(\tau \le \sigma _{\rho _{n}})I(0< \rho _{n})I(\sigma _{D_2} = 0)I(\tau> 0)\nonumber \\&\qquad +\;(H_{1}(X_{0})- H_{1}(X_{\sigma _{\rho _{n}}})I(\sigma _{\rho _n}< \tau ))I(0< \rho _{n})I(\sigma _{D_2} = 0)I(\tau> 0)] \nonumber \\&\quad =\mathsf {E}_{x} [(H_{1}(X_{0})- H_{1}(X_{\sigma _{\rho _{n}}}))I(0 < \rho _{n})I(\sigma _{D_2} = 0)I(\tau > 0)] \end{aligned}
(3.19)
for n sufficiently large. The last equality in (3.19) can be seen as follows: If $$x \in \text {int}\,D_2$$, the interior of $$D_2$$, then by the right-continuity property of the sample paths it follows that $$\sigma _{\rho _n} = 0$$ $$\mathsf {P}_x$$-a.s. for n sufficiently large. If on the other hand $$x \in \partial D_2$$, then by Lemma 3.4 we have that $$\sigma _{\rho _n} \downarrow 0$$ $$\mathsf {P}_x$$-a.s. Note that if $$x \notin D_2 \cup \partial D_2$$ then $$\sigma _{D_2} > 0$$ $$\mathsf {P}_x$$-a.s. and so the terms on the right-hand side of (3.19) vanish. Combining (3.17) and (3.19) we get
\begin{aligned}&\mathsf {{M}}_{x}^{1}(\tau ,\sigma _{D_2}) - \mathsf {{M}}_{x}^{1}(\tau \vee \rho _{n},\sigma _{\rho _{n}}) \nonumber \\&\quad =\mathsf {{M}}_{x}^{1}(\tau ,\sigma _{D_2}) - \mathsf {{M}}_{x}^{1}(\tau ,\sigma _{\rho _{n}})+\; \mathsf {{M}}_{x}^{1}(\tau ,\sigma _{\rho _{n}}) \nonumber \\&\qquad -\; \mathsf {{M}}_{x}^{1}(\tau \vee \rho _{n},\sigma _{\rho _{n}}) \le \mathsf {E}_{x} [\sup _{t \le \rho _n}|G_{1}(X_{t \wedge \rho _{n}}) - G_{1}(X_{\rho _{n}})|] \nonumber \\&\qquad + \;\mathsf {E}_{x}[|H_{1}(X_{0})- H_{1}(X_{\sigma _{\rho _{n}}})|I(0 < \rho _{n})I(\sigma _{D_2} = 0)I(\tau > 0)]. \end{aligned}
(3.20)
for n sufficiently large. So
\begin{aligned} V_{\sigma _{D_2}}^{1}(x) \nonumber \\\le & {} \liminf _{n \rightarrow \infty } \sup _{\tau } \mathsf {{M}}_{x}^{1}(\tau \vee \rho _{n},\sigma _{\rho _{n}}) \nonumber \\= & {} \liminf _{n \rightarrow \infty } \sup _{\tau \ge \rho _{n}} \mathsf {{M}}_{x}^{1}(\tau ,\sigma _{\rho _{n}}) = \liminf _{n \rightarrow \infty } \mathsf {E}_{x}\big [{V}_{\sigma _{\rho _n}}^{1}(X_{\rho _{n}})\big ]. \end{aligned}
(3.21)
The first inequality follows from (3.20). Indeed, since $$G_{1}$$ and $$H_{1}$$ are continuous, the composed processes $$G_{1}(X)$$ and $$H_{1}(X)$$ are right-continuous and so both terms on the right-hand side of (3.20) tend to zero as $$n \rightarrow \infty$$ (note that if $$x \in \partial D_2$$ this follows from Lemma 3.4, whereas if $$x \in \text {int}\, D_2$$ or $$x \notin D_2 \cup \partial D_2$$ the result follows as explained in the text before (3.20)). The last equality in (3.21) follows from the fact that the family $$\{\mathsf {M}_x^{1}(\tau ,\sigma _{\rho _n}| \mathcal {F}_{\rho _n}) : \tau \ge \rho _n \}$$ is upwards directed (see step $$2^{\circ }$$), and so we can interchange the expectation and the essential supremum in (3.14).
$$\mathbf {4^{\circ }}$$ We show that $$V^{1}_{\sigma _{D_2}}(x) \ge \limsup _{n \rightarrow \infty } \mathsf {E}_x [V^{1}_{\sigma _{\rho _n}}(X_{\rho _n})]$$. From (3.17) and (3.19) we get
\begin{aligned} \mathsf {M}_x^{1}(\tau ,\sigma _{D_2})\ge & {} \mathsf {M}_x^{1}(\tau \vee \rho _n,\sigma _{\rho _n}) - \mathsf {E}_x [\sup _{t \le \rho _n} |G_1(X_t) - G_1(X_{\rho _n})|] \\ \nonumber&-\; \mathsf {E}_x [|H_1(X_0) - H_1(X_{\sigma _{\rho _n}})|] \end{aligned}
(3.22)
for any stopping time $$\tau$$. From this, together with the Lebesgue dominated convergence theorem (upon recalling assumption (2.1)) and the continuity of $$G_1$$ and $$H_1$$, we conclude that
\begin{aligned} V^{1}_{\sigma _{D_2}}(x) \ge \limsup _{n \rightarrow \infty } \sup _{\tau \ge \rho _n} \mathsf {M}_x^{1}(\tau \vee \rho _n,\sigma _{\rho _n}) = \limsup _{n \rightarrow \infty } \mathsf {E}_x\big [V^{1}_{\sigma _{\rho _n}}(X_{\rho _n})\big ]. \end{aligned}
$$\square$$

We next present an example to show that if $$\partial D$$ is not regular for D then $$V^{1}_{\sigma _{D}}$$ may not be finely continuous.

## Example 3.8

Let $$E = \mathbb {R}$$ and $$\mathcal {B}$$ the Borel $$\sigma$$-algebra on $$\mathbb {R}$$. Suppose that X is the deterministic motion to the right, that is, the process starts at $$x \in \mathbb {R}$$ and $$X_{t} = x + t$$ $$\mathsf {P}_{x}$$-a.s. for each $$t \ge 0$$. In this case the fine topology coincides with the right-topology (cf. [53]), so a function is finely continuous if and only if it is right-continuous. Define the functions $$G_{1}(x)= \frac{e^{x-1}}{2}I(x < 1) + \frac{1}{2x^{2}}I(x \ge 1)$$ and $$H_{1}(x)= 1I(x < 1) + \frac{1}{x^{2}}I(x \ge 1)$$. Let $$D = \{1\}$$ and $$\sigma _{D} = \inf \{t \ge 0: X_{t} \in D \}$$. Then $$\partial D$$ is not regular for D because if $$\rho _{D} = \inf \{t > 0:X_{t} \in D\}$$ then $$\rho _{D} = \infty$$ $$\mathsf {P}_{1}$$-a.s. We show that for any given $$\varepsilon > 0$$ the stopping time $$\tau ^{\varepsilon } = (\tau _{[1,\infty )} + \varepsilon )1_{A} + \tau _{[1,\infty )}1_{A^{c}}$$, where $$\tau _{[1,\infty )}$$ is the first entry time of X into $$[1,\infty )$$ and $$A = \{\tau _{[1,\infty )}=\sigma _{D}\}$$, is optimal for player one given the strategy $$\sigma _{D}$$ chosen by player two. For each $$\varepsilon > 0$$ we have that $$\tau ^{\varepsilon } = \tau _{[1,\infty )}I(x> 1) + (\tau _{[1,\infty )} + \varepsilon ) I(x \le 1) = 0I(x>1) + (1-x+\varepsilon )I(x\le 1)$$. So $$\mathsf {M}_{x}^{1}(\tau ^{\varepsilon },\sigma _{D}) = H_{1}(1)I(x\le 1) + G_{1}(x)I(x>1)$$. On the other hand, for any stopping time $$\tau$$ one can see that
\begin{aligned} \mathsf {M}_{x}^{1}(\tau ,\sigma _{D})\le & {} \mathsf {E}_{x}[G_{1}(X_{\tau })I(\tau< \sigma _{D}) + H_{1}(X_{\sigma _{D}})I(\sigma _{D} \le \tau )] \nonumber \\<&(H_{1}(1)I(\tau < 1-x) + H_{1}(1)I(1-x \le \tau ))I(x \le 1) \nonumber \\&+\; G_{1}(x+\tau )I(x>1) \le \mathsf {M}_{x}^{1}(\tau ^{\varepsilon },\sigma _{D}) \end{aligned}
where the last inequality follows from the fact that $$G_{1}$$ is decreasing in $$[1,\infty )$$. Taking the supremum over all $$\tau$$ we get that $$V^{1}_{\sigma _{D}}(x) \le \mathsf {M}_{x}^{1}(\tau ^{\varepsilon },\sigma _{D})$$ for all x. Since on the other hand $$V^{1}_{\sigma _{D}}(x) \ge \mathsf {M}_{x}^{1}(\tau ^{\varepsilon },\sigma _{D})$$, it follows that $$V^{1}_{\sigma _{D}}(x) = \mathsf {M}_{x}^{1}(\tau ^{\varepsilon },\sigma _{D})$$ for all x and so we must have that $$\tau ^{\varepsilon }$$ is optimal for player one, provided that player two selects strategy $$\sigma _{D}$$. The value function is thus given by $$V_{\sigma _{D}}^{1}(x) = H_{1}(x)I(x \le 1) + G_{1}(x)I(x > 1)$$ which is not right-continuous.
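The failure of right-continuity in Example 3.8 can also be checked numerically. The following Python sketch is our illustration, not part of the paper: since the motion is deterministic, the supremum over stopping times reduces to a supremum over deterministic times, which we brute-force on a grid (the grid parameters are our choice).

```python
import math

# Numerical check of Example 3.8: X_t = x + t is deterministic and D = {1},
# so sigma_D and all payoffs can be computed exactly.

def G1(x):
    return math.exp(x - 1) / 2 if x < 1 else 1 / (2 * x ** 2)

def H1(x):
    return 1.0 if x < 1 else 1 / x ** 2

def payoff(x, t):
    """M^1_x(t, sigma_D) = G1(X_t) if t <= sigma_D, else H1(X_{sigma_D})."""
    sigma = 1 - x if x <= 1 else math.inf   # first entry time of X into {1}
    return G1(x + t) if t <= sigma else H1(1.0)

def value(x, step=1e-3, horizon=5.0):
    """Brute-force V^1_{sigma_D}(x) over a grid of deterministic times."""
    return max(payoff(x, k * step) for k in range(int(horizon / step) + 1))
```

One finds value(1.0) = H1(1) = 1 while value(x) is close to G1(1) = 1/2 for x slightly above 1, so the value function jumps at x = 1, exactly as computed in the example.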

## 4 Towards a Nash equilibrium

The main result of this section is to show that if $$\sigma _{D_2}$$ (resp. $$\tau _{D_1}$$) is externally given as the first entry time into $$D_2$$ (resp. $$D_1$$), a set that is either closed or finely closed and has a regular boundary, then the first entry time $$\tau _{*}^{\sigma _{D_2}} = \inf \{t \ge 0:X_t \in D^{\sigma _{D_2}}_1\}$$ (resp. $$\sigma _{*}^{\tau _{D_1}} = \inf \{t \ge 0: X_t \in D^{\tau _{D_1}}_2 \}$$), where $$D^{\sigma _{D_2}}_1 = \{V^{1}_{\sigma _{D_2}} = G_1\}$$ (resp. $$D^{\tau _{D_1}}_2 = \{V^{2}_{\tau _{D_1}} = G_2\}$$), solves the optimal stopping problem $$V_{\sigma _{D_{2}}}^{1}(x)=\sup _{\tau } \mathsf {M}_{x}^{1}(\tau ,\sigma _{D_{2}})$$ (resp. $$V_{\tau _{D_{1}}}^{2}(x)=\sup _{\sigma } \mathsf {M}_{x}^{2}(\tau _{D_1},\sigma )$$). The proof of this result will be divided into several lemmas and propositions.

## Proposition 4.1

Let $$D_1, D_{2}$$ be Borel subsets of E having regular boundaries $$\partial D_{1}$$ and $$\partial D_{2}$$ respectively. Set $$\tau _{D_{1}} = \inf \{t \ge 0:X_t \in D_1\}$$ and $$\sigma _{D_{2}} = \inf \{t \ge 0:X_t \in D_2\}$$. Then,
\begin{aligned} V^{1}_{\sigma _{D_{2}}}(x) \le \mathsf {M}_{x}^{1}\big (\tau _{\varepsilon }^{\sigma _{D_{2}}},\sigma _{D_{2}}\big ) + \varepsilon \end{aligned}
(4.1)
\begin{aligned} V^{2}_{\tau _{D_{1}}}(x) \le \mathsf {M}_{x}^{2}\big (\tau _{D_1},\sigma _{\varepsilon }^{\tau _{D_{1}}}\big ) + \varepsilon \end{aligned}
(4.2)
for any $$\varepsilon >0$$, where $$\tau _{\varepsilon }^{\sigma _{D_{2}}} = \inf \{t \ge 0: X_{t} \in D_{1}^{\sigma _{D_{2}},\varepsilon }\}$$ and $$\sigma _{\varepsilon }^{\tau _{D_{1}}} = \inf \{t \ge 0: X_{t} \in D_{2}^{\tau _{D_{1}},\varepsilon }\}$$ with $$D_{1}^{\sigma _{D_{2}},\varepsilon } = \{V_{\sigma _{D_{2}}}^{1} \le G_{1} + \varepsilon \}$$ and $$D_{2}^{\tau _{D_{1}},\varepsilon } = \{V_{\tau _{D_{1}}}^{2} \le G_{2} + \varepsilon \}$$.

## Proof

We shall only prove (4.1) as for (4.2) the result follows by symmetry. The proof will be carried out in several steps.

$$\mathbf {1}^{\circ }$$ Consider the optimal stopping problem
\begin{aligned} \tilde{V}_{\sigma _{D_{2}}}^{1}(x) = \sup _{\tau } \tilde{\mathsf {M}}_{x}^{1}(\tau ,\sigma _{D_{2}}) \end{aligned}
(4.3)
where
\begin{aligned} \tilde{\mathsf {M}}_{x}^{1}(\tau ,\sigma _{D_{2}}) = \mathsf {E}_{x}\big [G_{1}(X_{\tau })I(\tau < \sigma _{D_2}) + H_{1}(X_{\sigma _{D_2}})I(\sigma _{D_2} \le \tau )\big ]. \end{aligned}
(4.4)
Recall that the mapping $$x \mapsto \tilde{V}_{\sigma _{D_{2}}}^{1}(x)$$ is measurable (cf. [16, p. 5]) and so $$\tilde{V}_{\sigma _{D_{2}}}^{1}(X_{\rho }) = \sup _{\tau } \tilde{\mathsf {M}}_{X_{\rho }}^{1}(\tau ,\sigma _{D_{2}})$$ is a random variable for any stopping time $$\rho$$. By the strong Markov property of X it follows that for any stopping time $$\rho$$ given and fixed
\begin{aligned} \tilde{\mathsf {M}}^{1}_{X_{\rho }}(\tau ,\sigma _{D_{2}})= & {} \mathsf {{E}}_{X_{\rho }}\big [G_{1}(X_{\tau })I(\tau< \sigma _{D_{2}}) + H_{1}(X_{\sigma _{D_{2}}})I(\sigma _{D_{2}} \le \tau )\big ] \\= & {} \mathsf {E}_{x} \big [G_{1}(X_{\rho + \tau \circ \theta _{\rho }})I(\rho + \tau \circ \theta _{\rho } < \rho + \sigma _{D_{2}} \circ \theta _{\rho }) \nonumber \\&+\; H_{1}(X_{\rho + \sigma _{D_{2}} \circ \theta _{\rho }}) I(\rho + \sigma _{D_{2}} \circ \theta _{\rho } \le \rho + \tau \circ \theta _{\rho })|\mathcal {F}_{\rho }\big ] \nonumber \end{aligned}
(4.5)
and so we have that
\begin{aligned} \tilde{V}_{\sigma _{D_{2}}}^{1}(X_{\rho })= & {} \mathop {{{\mathrm{ess\,sup}}}}\limits _{\tau } \mathsf {E}_{x} \big [G_{1}(X_{\rho + \tau \circ \theta _{\rho }})I(\rho + \tau \circ \theta _{\rho } < \rho + \sigma _{D_{2}} \circ \theta _{\rho }) \nonumber \\&+\; H_{1}(X_{\rho + \sigma _{D_{2}} \circ \theta _{\rho }}) I(\rho + \sigma _{D_{2}} \circ \theta _{\rho } \le \rho + \tau \circ \theta _{\rho })|\mathcal {F}_{\rho }\big ] \nonumber \\=: & {} \mathop {{{\mathrm{ess\,sup}}}}\limits _{\tau } \tilde{\mathsf {M}}_{x}^{1} (\tau _{\rho },\sigma _{D_{2}}^{\rho }|\mathcal {F_{\rho }}) \end{aligned}
(4.6)
where $$\sigma _{D_{2}}^{\rho }= \inf \{t \ge \rho : X_{t} \in D_{2}\}$$ and $$\tau _{\rho } = \rho + \tau \circ \theta _{\rho }$$. The gain process $$\tilde{G}_{t}^{\sigma _{D_{2}},1}=G_{1}(X_{t})I(t < \sigma _{D_{2}}) + H_{1}(X_{\sigma _{D_{2}}})I(\sigma _{D_{2}} \le t)$$ is right-continuous, adapted and satisfies the integrability condition
\begin{aligned} \mathsf {E}_{x}[\sup _{t \ge 0} |\tilde{G}_{t}^{\sigma _{D_{2}},1}|]= & {} \mathsf {E}_{x} [\sup _{t \ge 0} |G_{1}(X_{t})I(t< \sigma _{D_{2}}) + H_{1}(X_{\sigma _{D_{2}}})I(\sigma _{D_{2}} \le t)|] \\\le & {} \mathsf {E}_{x} [\sup _{t \ge 0} |G_{1}(X_{t})I(t< \sigma _{D_{2}})| + \sup _{t \ge 0} |H_{1}(X_{\sigma _{D_{2}}})I(\sigma _{D_{2}} \le t)|] < \infty \end{aligned}
where the last inequality follows from assumption (2.1). So the martingale approach in the theory of optimal stopping (cf. [49, Theorem 2.2]) can be applied in this setting to deduce that there exists a right-continuous modification of the supermartingale
\begin{aligned} \tilde{S}_{t}^{\sigma _{D_{2}}} = \mathop {{{\mathrm{ess\,sup}}}}\limits _{\tau \ge t} \tilde{\mathsf {M}}_x^{1}(\tau ,\sigma _{D_2}|\mathcal {F}_t), \end{aligned}
(4.7)
known as the Snell envelope (for simplicity of exposition we shall still denote the right-continuous modification by $$\tilde{S}_{t}^{\sigma _{D_{2}}}$$), such that the stopping time $$\hat{\tau }_t:=\hat{\tau }_{t}^{\sigma _{D_{2}}} = \inf \{s \ge t: \tilde{S}_{s}^{\sigma _{D_{2}}} = \tilde{G}_{s}^{\sigma _{D_{2}},1}\}$$ is optimal. It is known (cf. [49, Theorem 2 p. 29]) that the stopped process $$(\tilde{S}^{\sigma _{D_{2}}}_{s \wedge \hat{\tau }_{t}})_{s \ge t}$$ is a right-continuous martingale and so
\begin{aligned} \mathsf {E}_{x}\big [\tilde{S}_{\rho }^{\sigma _{D_{2}}}\big ] = \mathsf {E}_{x}\big [\tilde{S}^{\sigma _{D_{2}}}_{\rho \wedge \hat{\tau }_{\varepsilon }}\big ] = \mathsf {E}_{x}\big [\tilde{S}^{\sigma _{D_{2}}}_{\rho \wedge \hat{\tau }_{\varepsilon } \wedge \hat{\tau }_{0}}\big ] = \tilde{S}_{0}^{\sigma _{D_{2}}} = \tilde{V}_{\sigma _{D_{2}}}^{1}(x) \end{aligned}
(4.8)
for every stopping time $$\rho \le \hat{\tau }_{\varepsilon }$$ where $$\hat{\tau }_{\varepsilon }:= \tau _{\varepsilon }^{\sigma _{D_{2}}}= \inf \{t \ge 0: \tilde{S}_{t}^{\sigma _{D_{2}}} \le \tilde{G}_{t}^{\sigma _{D_{2}},1} + \varepsilon \}$$. Using the fact that $$\sigma _{D_{2}}^{\rho } = \sigma _{D_{2}}$$ for any stopping time $$\rho \le \sigma _{D_{2}}$$, that, $$\mathsf {P}_{x}$$-a.s., the essential supremum and its right-continuous modification agree at stopping times, and that the essential supremum is attained at hitting times (cf. [49]), it follows from (4.6) that
\begin{aligned} \tilde{V}^{1}_{\sigma _{D_{2}}}(X_{\rho })=\tilde{S}^{\sigma _{D_{2}}}_{\rho } \text { } \mathsf {P}_{x}\text {-a.s.} \end{aligned}
(4.9)
for every stopping time $$\rho \le \sigma _{D_{2}}$$.
$$\mathbf {2^\circ }$$. We next show that $$V_{\sigma _{D_{2}}}^{1}(x) = \tilde{V}_{\sigma _{D_{2}}}^{1}(x)$$. Since $$G_{1} \le H_{1}$$ we have that
\begin{aligned} \tilde{V}_{\sigma _{D_{2}}}^{1}(x) \ge \tilde{\mathsf {M}}_{x}^{1}(\tau ,\sigma _{D_{2}}) \ge \mathsf {M}_{x}^{1}(\tau ,\sigma _{D_{2}}) \end{aligned}
(4.10)
for all stopping times $$\tau$$ and for all $$x \in E$$. Taking the supremum over all $$\tau$$ we get that
\begin{aligned} \tilde{V}_{\sigma _{D_{2}}}^{1}(x) \ge V_{\sigma _{D_{2}}}^{1}(x). \end{aligned}
(4.11)
To prove the reverse inequality we will show that $$\tilde{\mathsf {M}}_{x}^{1}(\tau ,\sigma _{D_{2}}) \le V^{1}_{\sigma _{D_{2}}}(x)$$ for all stopping times $$\tau$$, so that $$\sup _{\tau } \tilde{\mathsf {M}}_{x}^{1}(\tau ,\sigma _{D_{2}}) = \tilde{V}^{1}_{\sigma _{D_{2}}}(x) \le V^{1}_{\sigma _{D_{2}}}(x)$$. By definition of $$V_{\sigma _{D_{2}}}^{1}$$ we have that $$V_{\sigma _{D_{2}}}^{1}(x) \ge \mathsf {M}_{x}^{1}(\tau ,\sigma _{D_{2}})$$ for all stopping times $$\tau$$. Now take any stopping time $$\tau$$ and set $$\tau ^{\varepsilon }=(\tau +\varepsilon )1_{A} + \tau 1_{A^{c}}$$ where $$A=\{\tau = \sigma _{D_{2}}\}$$. (Note that $$\tau ^{\varepsilon }$$ is a stopping time since $$A \in \mathcal {F}_{\tau \wedge \sigma _{D_{2}}} \subset \mathcal {F}_{\tau }$$. If the time horizon T is finite, we replace $$\tau +\varepsilon$$ in the definition of $$\tau ^{\varepsilon }$$ with $$(\tau +\varepsilon ) \wedge T$$.) Then we have that
\begin{aligned}&\mathsf {M}_{x}^{1}(\tau ^{\varepsilon },\sigma _{D_{2}})\\&\quad =\mathsf {E}_{x}\big [(G_{1}(X_{\tau ^{\varepsilon }})I(\tau ^{\varepsilon } \le \sigma _{D_{2}}) + H_{1}(X_{\sigma _{D_{2}}})I(\sigma _{D_{2}}< \tau ^{\varepsilon })) I(\tau = T)I(\sigma _{D_{2}} = T) \\&\qquad +\;(G_{1}(X_{\tau ^{\varepsilon }})I(\tau ^{\varepsilon } \le \sigma _{D_{2}}) + H_{1}(X_{\sigma _{D_{2}}})I(\sigma _{D_{2}}< \tau ^{\varepsilon }))I(\tau< T) I(\sigma _{D_{2}} = T) \\&\qquad +\;(G_{1}(X_{\tau ^{\varepsilon }})I(\tau ^{\varepsilon } \le \sigma _{D_{2}}) + H_{1}(X_{\sigma _{D_{2}}})I(\sigma _{D_{2}}< \tau ^{\varepsilon }))I(\tau = T) I(\sigma _{D_{2}}< T) \\&\qquad +\;(G_{1}(X_{\tau ^{\varepsilon }})I(\tau ^{\varepsilon } \le \sigma _{D_{2}}) + H_{1}(X_{\sigma _{D_{2}}})I(\sigma _{D_{2}}< \tau ^{\varepsilon }))I(\tau< T) I(\sigma _{D_{2}}< T)] \\&\quad = \mathsf {E}_{x} [(G_{1}(X_{\tau })I(\tau< \sigma _{D_{2}}) + H_{1}(X_{\sigma _{D_{2}}})I(\sigma _{D_{2}} \le \tau ))I(\tau = T) I(\sigma _{D_{2}} = T) \\&\qquad +\;(G_{1}(X_{\tau })I(\tau< \sigma _{D_{2}}) + H_{1}(X_{\sigma _{D_{2}}})I(\sigma _{D_{2}} \le \tau ))I(\tau< T) I(\sigma _{D_{2}} = T) \\&\qquad +\;(G_{1}(X_{\tau })I(\tau< \sigma _{D_{2}}) + H_{1}(X_{\sigma _{D_{2}}})I(\sigma _{D_{2}} \le \tau ))I(\tau = T) I(\sigma _{D_{2}}< T) \\&\qquad +\;(G_{1}(X_{\tau })I(\tau< \sigma _{D_{2}}) + H_{1}(X_{\sigma _{D_{2}}})(1_{A} + I(\sigma _{D_{2}}< \tau ))I(\tau< T) I(\sigma _{D_{2}} < T)\big ]\\&\quad = \tilde{\mathsf {M}}_{x}^{1}(\tau ,\sigma _{D_{2}}) \end{aligned}
The first and third expressions in the second equality follow from assumption (2.6). The second expression also follows from assumption (2.6) together with the fact that $$I(\tau ^{\varepsilon }< \sigma _{D_{2}}) = I(\tau < \sigma _{D_{2}})$$ on the set $$\{\tau < T\}$$. The last expression follows from the fact that $$I(\tau ^{\varepsilon }=\sigma _{D_{2}}) = 0$$ and $$I(\sigma _{D_{2}}< \tau ^{\varepsilon }) = 1_{A}+I(\sigma _{D_{2}}< \tau )$$ on the set $$\{\tau<T\} \cap \{\sigma _{D_2} < T\}$$. So for any given stopping time $$\tau$$ we have $$V_{\sigma _{D_{2}}}^{1}(x) \ge \mathsf {M}_{x}^{1}(\tau ^{\varepsilon },\sigma _{D_{2}}) = \tilde{\mathsf {M}}_{x}^{1}(\tau ,\sigma _{D_{2}}).$$
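The $$\varepsilon$$-shift on the tie set $$A=\{\tau = \sigma _{D_{2}}\}$$ used in step $$2^{\circ }$$ can be made concrete in the deterministic setting of Example 3.8. The Python sketch below is our illustration (the function names are ours): shifting the stopping time by $$\varepsilon$$ on the tie turns the $$\mathsf {M}^{1}$$-payoff into the $$\tilde{\mathsf {M}}^{1}$$-payoff, while away from the tie the two payoffs already agree.

```python
import math

def G1(x):
    return math.exp(x - 1) / 2 if x < 1 else 1 / (2 * x ** 2)

def H1(x):
    return 1.0 if x < 1 else 1 / x ** 2

def sigma_D(x):
    # first entry time of X_t = x + t into D = {1}
    return 1 - x if x <= 1 else math.inf

def M1(x, t):
    """M^1_x(t, sigma_D): on the tie {t = sigma_D} player one receives G1."""
    return G1(x + t) if t <= sigma_D(x) else H1(1.0)

def M1_tilde(x, t):
    """tilde M^1_x(t, sigma_D): the tie is awarded to H1 instead."""
    return G1(x + t) if t < sigma_D(x) else H1(1.0)
```

For x = 0.25 the tie occurs at t = 0.75: there M1 gives G1(1) = 1/2 while M1_tilde gives H1(1) = 1, and M1(x, t + eps) equals M1_tilde(x, t) for any eps > 0, mirroring the identity $$\mathsf {M}_{x}^{1}(\tau ^{\varepsilon },\sigma _{D_{2}}) = \tilde{\mathsf {M}}_{x}^{1}(\tau ,\sigma _{D_{2}})$$.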
$$\mathbf {3^{\circ }}.$$ We show that $$V^{1}_{\sigma _{D_{2}}}(x) = \mathsf {E}_{x}[V_{\sigma _{D_{2}}}^{1}(X_{\sigma _{D_{2}} \wedge \tau _{\varepsilon }})]$$ where $$\tau _{\varepsilon }:=\tau _{\varepsilon }^{\sigma _{D_{2}}} = \inf \{t \ge 0: X_{t} \in {D}_{1}^{\sigma _{D_{2}},\varepsilon }\}$$ with $$D_{1}^{\sigma _{D_{2}},\varepsilon } = \{V_{\sigma _{D_{2}}}^{1} \le G_{1} + \varepsilon \}$$. From step $${2^\circ }$$ it is sufficient to prove that $$\tilde{V}^{1}_{\sigma _{D_{2}}}(x) = \mathsf {E}_{x} [\tilde{V}_{\sigma _{D_{2}}}^{1}(X_{\sigma _{D_{2}} \wedge \tilde{\tau }_{\varepsilon }})]$$ where $$\tilde{\tau }_{\varepsilon }:=\tilde{\tau }_{\varepsilon }^{\sigma _{D_{2}}} = \inf \{t \ge 0: X_{t} \in \tilde{D}_{1}^{\sigma _{D_{2}},\varepsilon }\}$$ with $$\tilde{D}_{1}^{\sigma _{D_{2}},\varepsilon } = \{\tilde{V}_{\sigma _{D_{2}}}^{1} \le G_{1} + \varepsilon \}$$. By definition of $$\tilde{\tau }_{\varepsilon }$$, for any given $$t < \sigma _{D_{2}} \wedge \tilde{\tau }_{\varepsilon }$$, we have
\begin{aligned} \tilde{V}_{\sigma _{D_{2}}}^{1}(X_{t})> & {} G_{1}(X_{t})+\varepsilon \nonumber \\= & {} G_{1}(X_{t}) I(t < \sigma _{D_{2}}) + H_{1}(X_{\sigma _{D_{2}}}) I(\sigma _{D_{2}} \le t) + \varepsilon \nonumber \\= & {} \tilde{G}_{t}^{\sigma _{D_{2}},1} + \varepsilon \end{aligned}
(4.12)
where the first equality follows from the fact that $$I(\sigma _{D_{2}} \le t) = 0$$. Since $$\sigma _{D_{2}} \wedge \tilde{\tau }_{\varepsilon } \le \sigma _{D_{2}}$$, by (4.9) it follows that
\begin{aligned} \tilde{V}_{\sigma _{D_{2}}}^{1}(X_{\sigma _{D_{2}} \wedge \tilde{\tau }_{\varepsilon }}) = \tilde{S}^{\sigma _{D_{2}}}_{\sigma _{D_{2}} \wedge \tilde{\tau }_{\varepsilon }} \text { } \mathsf {P}_{x}\text {-a.s.} \end{aligned}
(4.13)
Using (4.9) again we get
\begin{aligned} \tilde{V}_{\sigma _{D_{2}}}^{1}(X_{t}) = \tilde{S}_{t}^{\sigma _{D_{2}}} \text { } \mathsf {P}_{x}\text {-a.s.} \end{aligned}
(4.14)
So by (4.12) we can conclude that
\begin{aligned} \tilde{S}_{t}^{\sigma _{D_{2}}} > \tilde{G}_{t}^{\sigma _{D_{2}},1}+ \varepsilon \text { } \mathsf {P}_{x}\text {-a.s.} \end{aligned}
(4.15)
Recall, from step $${1^\circ }$$, that $$\hat{\tau }_{\varepsilon } = \inf \{t \ge 0 : \tilde{S}_t^{\sigma _{D_2}} \le \tilde{G}_t^{\sigma _{D_2},1} + \varepsilon \}$$. By using the definition of $$\hat{\tau }_{\varepsilon }$$ together with (4.15) one can see that $$\sigma _{D_{2}} \wedge \tilde{\tau }_{\varepsilon } \le \hat{\tau }_{\varepsilon }$$. By using (4.8), (4.9) and the fact that $$(\tilde{S}^{\sigma _{D_2}}_{s \wedge \hat{\tau }_0})_{s \ge 0}$$ is a martingale we further get that
\begin{aligned} \tilde{S}^{\sigma _{D_{2}}}_{0}=\tilde{V}_{\sigma _{D_{2}}}^{1}(x) = \mathsf {E}_{x}\big [\tilde{S}^{\sigma _{D_{2}}}_{\sigma _{D_{2}} \wedge \tilde{\tau }_{\varepsilon } \wedge \hat{\tau }_{\varepsilon }}\big ] = \mathsf {E}_{x}\big [\tilde{S}^{\sigma _{D_{2}}}_{\sigma _{D_{2}} \wedge \tilde{\tau }_{\varepsilon }}\big ] = \mathsf {E}_{x}\big [\tilde{V}_{\sigma _{D_{2}}}^{1}(X_{\sigma _{D_{2}} \wedge \tilde{\tau }_{\varepsilon }})\big ]. \end{aligned}
(4.16)
From (4.16), together with the fine continuity of $$V^{1}_{\sigma _{D_{2}}}$$ (upon using Lemma 3.5), the right-continuity of the composed process $$G_{1}(X)$$ and the fact that $$\tilde{V}^{1}_{\sigma _{D_2}} = V^{1}_{\sigma _{D_2}}$$ (so that $$\tilde{\tau }_{\varepsilon } = \tau _{\varepsilon }$$), we have
\begin{aligned} V^{1}_{\sigma _{D_{2}}}(x)= & {} \mathsf {E}_{x}\big [\tilde{S}^{\sigma _{D_{2}}}_{\sigma _{D_{2}} \wedge \tau _{\varepsilon }}\big ] =\mathsf {E}_{x}\big [\tilde{S}^{\sigma _{D_{2}}}_{\tau _{\varepsilon }}I(\tau _{\varepsilon } \le \sigma _{D_{2}}) + \tilde{S}_{\sigma _{D_{2}}}^{\sigma _{D_{2}}}I(\sigma _{D_{2}}< \tau _{\varepsilon })\big ] \nonumber \\= & {} \mathsf {E}_{x}\big [\tilde{V}_{\sigma _{D_{2}}}^{1}(X_{\tau _{\varepsilon }})I(\tau _{\varepsilon } \le \sigma _{D_{2}}) + H_{1}(X_{\sigma _{D_{2}}})I(\sigma _{D_{2}}< \tau _{\varepsilon })\big ] \nonumber \\\le & {} \mathsf {E}_{x}\big [(G_{1}(X_{\tau _{\varepsilon }})+\varepsilon )I(\tau _{\varepsilon } \le \sigma _{D_{2}}) + H_{1}(X_{\sigma _{D_{2}}})I(\sigma _{D_{2}} < \tau _{\varepsilon })\big ] \nonumber \\\le & {} \mathsf {M}_{x}^{1}(\tau _{\varepsilon },\sigma _{D_{2}}) + \varepsilon \end{aligned}
(4.17)
for any $$\varepsilon > 0$$, where the third equality follows from the fact that
\begin{aligned} \tilde{S}^{\sigma _{D_{2}}}_{\sigma _{D_{2}}}= & {} \mathop {{{\mathrm{ess\,sup}}}}\limits _{\tau \ge \sigma _{D_{2}}} \mathsf {E}_{x}[G_{1}(X_{\tau })I(\tau < \sigma _{D_{2}}) + H_{1}(X_{\sigma _{D_{2}}})I(\sigma _{D_{2}} \le \tau )|\mathcal {F}_{\sigma _{D_{2}}}] \nonumber \\= & {} \mathop {{{\mathrm{ess\,sup}}}}\limits _{\tau \ge \sigma _{D_{2}}} \mathsf {E}_{x}[H_{1}(X_{\sigma _{D_{2}}})|\mathcal {F}_{\sigma _{D_{2}}}] = H_{1}(X_{\sigma _{D_{2}}}) \end{aligned}
(4.18)
$$\square$$
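Inequality (4.1) can be illustrated numerically outside the proof. The Python sketch below is our construction, not the paper's: we take the deterministic motion $$X_t = x + t$$, the set $$D_2 = [1,\infty )$$ (whose boundary $$\{1\}$$ is regular for it), and the payoffs $$G_1, H_1$$ of Example 3.8, which satisfy $$G_1 \le H_1$$; we brute-force $$V^{1}_{\sigma _{D_2}}$$ on a grid, build the $$\varepsilon$$-entry time $$\tau _{\varepsilon }^{\sigma _{D_{2}}}$$ and check (4.1).

```python
import math

EPS = 0.05   # the epsilon of Proposition 4.1

def G1(x):
    return math.exp(x - 1) / 2 if x < 1 else 1 / (2 * x ** 2)

def H1(x):
    return 1.0 if x < 1 else 1 / x ** 2

def sigma_D2(x):
    # first entry of X_t = x + t into the closed set D_2 = [1, inf)
    return max(1.0 - x, 0.0)

def M1(x, t):
    """M^1_x(t, sigma_D2): G1 on {t <= sigma}, H1 at X_sigma otherwise."""
    s = sigma_D2(x)
    return G1(x + t) if t <= s else H1(x + s)

def V1(x, horizon=2.0, step=1e-2):
    """Brute-force sup over deterministic stopping times on a grid."""
    return max(M1(x, k * step) for k in range(int(horizon / step) + 1))

def tau_eps(x, step=1e-2, bound=10.0):
    """First entry into D_1^{sigma,eps} = {V1 <= G1 + EPS} along the path."""
    t = 0.0
    while t < bound and V1(x + t) > G1(x + t) + EPS:
        t += step
    return t

# inequality (4.1): V1(x) <= M1(x, tau_eps(x)) + EPS
for x in (0.5, 1.5, 4.0):
    assert V1(x) <= M1(x, tau_eps(x)) + EPS + 1e-9
```

In this example the $$\varepsilon$$-stopping set is $$[1/\sqrt{2\varepsilon },\infty )$$, so points deep inside it (such as x = 4 for EPS = 0.05) are stopped immediately, while all other starting points wait, consistently with the structure of $$D_{1}^{\sigma _{D_{2}},\varepsilon }$$.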

## Lemma 4.2

Let $$\{\rho _{n}\}_{n=1}^{\infty }$$ be a sequence of stopping times such that $$\rho _{n} \uparrow \rho$$ $$\mathsf {P}_{x}$$-a.s. For a given Borel set $$D \subseteq E$$ define the entry times $$\sigma _{\rho _{n}} = \inf \{t \ge \rho _{n}:X_{t} \in D\}$$ and $$\sigma _{\rho } = \inf \{t \ge \rho :X_{t} \in D\}$$. If either
\begin{aligned} D \text { is closed}, or \end{aligned}
(4.19)
\begin{aligned} D \text { is finely closed with regular boundary } \partial D, \end{aligned}
(4.20)
then $$\sigma _{\rho _{n}} \uparrow \sigma _{\rho }$$ $$\mathsf {P}_{x}$$-a.s.

## Proof

Since $$\{\rho _{n}\}$$ is an increasing sequence of stopping times, $$\sigma _{\rho _{n}}$$ is increasing and thus $$\sigma _{\rho _{n}} \uparrow \beta \text { }\mathsf {P}_{x}$$-a.s. for some stopping time $$\beta$$. We need to prove that $$\beta = \sigma _{\rho }$$ $$\mathsf {P}_{x}$$-a.s.

$$\mathbf {1^{\circ }}$$ We first show that $$\beta \ge \rho \text { } \mathsf {P}_{x}$$-a.s. Suppose, for contradiction, that the set $$\hat{\Omega }:= \{\omega \in \Omega :\beta (\omega ) < \rho (\omega )\}$$ has positive measure. Since $$\rho _{n} \uparrow \rho \text { } \mathsf {P}_{x}$$-a.s., for every $$\omega \in \hat{\Omega } \backslash N_{1}$$, where $$\mathsf {P}_{x}(N_{1})=0$$, there exists $$n_{0}(\omega ) \in \mathbb {N}$$ such that
\begin{aligned} \rho _{n}(\omega ) > \beta (\omega ) \end{aligned}
(4.21)
for all $$n \ge n_{0}(\omega )$$. But for each $$n \in \mathbb {N}$$ we have that $$\sigma _{\rho _{n}} \le \beta \text { } \mathsf {P}_{x}$$-a.s., so for every $$\omega \in \hat{\Omega } \backslash N_{2}$$ where $$\mathsf {P}_{x}(N_{2})=0$$ there exists $$n_{1}(\omega ) \in \mathbb {N}$$ such that
\begin{aligned} \sigma _{\rho _{n}}(\omega ) < \rho _{n}(\omega ) \end{aligned}
(4.22)
for all $$n \ge n_{1}(\omega )$$. Combining (4.21) and (4.22) it follows that for every $$\omega \in \hat{\Omega } \backslash (N_{1} \cup N_{2})$$ there exists $$\hat{n}(\omega ) \in \mathbb {N}$$ such that $$\sigma _{\rho _{n}}(\omega ) < \rho _{n}(\omega )$$ for all $$n \ge \hat{n}(\omega )$$. But this contradicts the fact that $$\sigma _{\rho _{n}} \ge \rho _{n}$$ $$\mathsf {P}_{x}$$-a.s. So we must have that $$\beta \ge \rho$$ $$\mathsf {P}_{x}$$-a.s.

$$\mathbf {2^{\circ }}$$ Let $$\Omega _{1}=\{\omega \in \Omega : \beta (\omega ) > \rho (\omega )\}$$ and $$\Omega _{2} = \{\omega \in \Omega : \beta (\omega ) = \rho (\omega )\}$$. We prove that there exists a set N with $$\mathsf {P}_{x}(N) =0$$ such that $$\beta (\omega ) = \sigma _{\rho }(\omega )$$ for every $$\omega \in (\Omega _{1} \cup \Omega _{2}) \backslash {N}$$.

$$\mathbf {(i.)}$$ Suppose first that $$\mathsf {P}_{x}(\Omega _{1}) > 0$$. Since $$\sigma _{\rho _{n}} \uparrow \beta$$ $$\mathsf {P}_{x}$$-a.s. then for every $$\omega \in \Omega _{1} \backslash N_{3}$$, where $$\mathsf {P}_{x}(N_{3}) = 0$$, there exists $$n_{2}(\omega ) \in \mathbb {N}$$ such that $$\sigma _{\rho _{n}}(\omega ) > \rho (\omega )$$ for all $$n \ge n_{2}(\omega )$$. Moreover, since $$\sigma _{\rho }(\omega ) \ge \rho (\omega )$$ for every $$\omega \in \Omega _{1} \backslash N_{4}$$ where $$\mathsf {P}_{x}(N_{4}) = 0$$ it follows that for every $$\omega \in \Omega _{1} \backslash N$$, where $$N = N_{3} \cup N_{4}$$, there exists $$n_{3}(\omega ) \in \mathbb {N}$$ such that $$\sigma _{\rho _{n}}(\omega ) = \sigma _{\rho }(\omega )$$ for all $$n \ge n_{3}(\omega )$$. From this it follows that $$\beta = \sigma _{\rho }$$ $$\mathsf {P}_{x}$$-a.s. on $$\Omega _{1}$$.

$$\mathbf {(ii.)}$$ Now suppose that $$\mathsf {P}_x (\Omega _2) > 0$$. Let us consider first the case when (4.19) holds. So we have that $$\sigma _{\rho _{n}}(\omega ) \uparrow \rho (\omega )$$ for every $$\omega \in \Omega _{2} \backslash N_{5}$$ where $$\mathsf {P}_{x}(N_{5}) = 0$$. The fact that D is closed implies that $$X_{\sigma _{\rho _{n}}} \in D$$ for all $$n \in \mathbb {N}$$. Moreover, by the quasi-left-continuity of X it follows that $$X_{\sigma _{\rho _{n}}}(\omega ) \rightarrow X_{\rho }(\omega )$$ for every $$\omega \in \Omega _{2} \backslash N_{6}$$ where $$\mathsf {P}_{x}(N_{6})=0$$. Again using the fact that D is closed we have that $$X_{\rho }(\omega ) \in D$$ for every $$\omega \in \Omega _{2} \backslash N_{7}$$ where $$\mathsf {P}_{x}(N_{7}) = 0$$. From this it follows that $$\sigma _{\rho }(\omega ) = \rho (\omega )$$ for every $$\omega \in \Omega _{2} \backslash N$$ with $$N = N_{5} \cup N_{6} \cup N_{7}$$. By definition of $$\Omega _{2}$$, this implies that $$\sigma _{\rho } = \beta \text { } \mathsf {P}_{x}$$-a.s. on $$\Omega _{2}$$. Now let us consider the case when (4.20) holds. Again by the quasi-left-continuity of X it follows that $$X_{\sigma _{\rho _{n}}}(\omega ) \rightarrow X_{\rho }(\omega )$$ for each $$\omega \in \Omega _{2} \backslash N_{8}$$ with $$\mathsf {P}_{x}(N_{8}) = 0$$. Since D is not necessarily closed we only have that $$X_{\rho }(\omega ) \in D \cup \partial D$$. Suppose first that $$X_{\rho }(\omega ) \in D$$. This means that $$\sigma _{\rho }(\omega ) = \rho (\omega )$$ and so $$\sigma _{\rho _{n}}(\omega ) \uparrow \sigma _{\rho }(\omega )$$, from which we can conclude that $$\beta (\omega ) = \sigma _{\rho }(\omega )$$. To prove that $$\sigma _{\rho } = \rho$$ $$\mathsf {P}_x$$-a.s. on the set $$\Omega ':=\{\omega \in \Omega : X_{\rho }(\omega ) \in \partial D\}$$ it is sufficient to show that $$\mathsf {P}_x (\{\sigma _{\rho } > \rho \} \cap \Omega ') = 0$$. Let $$\eta _D = \inf \{t > 0: X_t \in D\}$$. By the strong Markov property of X we have that
\begin{aligned} \mathsf {E}_{X_\rho } [I(\eta _D> 0)] = \mathsf {E}_x [I (\rho + \eta _D \circ \theta _\rho > \rho ) | \mathcal {F}_\rho ] \end{aligned}
(4.23)
Multiplying both sides of (4.23) by $$I_{\Omega '}$$ and taking $$\mathsf {E}_x$$-expectations (note that $$X_\rho$$ is $$\mathcal {F}_\rho$$-measurable) we get that $$\mathsf {P}_{x} (\{\sigma _\rho> \rho \} \cap \Omega ') = \mathsf {E}_x [I_{\Omega '} \mathsf {E}_{X_\rho } [I (\eta _D > 0)]] = 0$$ by the regularity of $$\partial D$$. $$\square$$
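Lemma 4.2 can be visualised in the simplest possible setting. The Python sketch below is our illustration (function names and the choice of a deterministic motion $$X_t = x + t$$ with the closed set $$D = [1,\infty )$$ are ours, made only to obtain a concrete computation): for $$\rho _n = \rho - 1/n \uparrow \rho$$ the entry times $$\sigma _{\rho _n}$$ increase to $$\sigma _{\rho }$$.

```python
def entry_after(rho, x, level=1.0):
    """sigma_rho = inf{t >= rho : x + t >= level}, i.e. the first entry of
    X_t = x + t into the closed set D = [level, inf) after time rho."""
    return max(rho, level - x, 0.0)

x, rho = 0.0, 2.0
rho_n = [rho - 1.0 / n for n in range(1, 1001)]   # rho_n increasing to rho = 2
sigma_n = [entry_after(r, x) for r in rho_n]      # sigma_{rho_n}

# Here sigma_{rho_n} = max(2 - 1/n, 1) = 2 - 1/n increases to sigma_rho = 2.
```

Starting instead from x = 1.5 (already inside D at time rho) gives entry_after(rho, 1.5) = rho, so both cases $$\sigma _{\rho } = \rho$$ and $$\sigma _{\rho } > \rho$$ of the lemma's dichotomy can be probed with this toy model.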

## Proposition 4.3

Let $$D_1,D_{2}$$ be either closed or finely closed subsets of E. Suppose also that their respective boundaries $$\partial D_1$$ and $$\partial D_{2}$$ are regular. Let $$\tau _{D_1}$$ and $$\sigma _{D_{2}}$$ be the first entry times into $$D_1$$ and $$D_2$$ respectively. Set $$\tau _{\varepsilon }^{\sigma _{D_{2}}} = \inf \{t \ge 0: X_{t} \in D_{1}^{\sigma _{D_{2}},\varepsilon }\}$$ and $$\sigma _{\varepsilon }^{\tau _{D_{1}}} = \inf \{t \ge 0: X_{t} \in D_{2}^{\tau _{D_{1}},\varepsilon }\}$$ where $$D_{1}^{\sigma _{D_{2}},\varepsilon } = \{V_{\sigma _{D_{2}}}^{1} \le G_{1} + \varepsilon \}$$ and $$D_{2}^{\tau _{D_{1}},\varepsilon } = \{V_{\tau _{D_{1}}}^{2} \le G_{2} + \varepsilon \}$$. Then $$\tau _{\varepsilon }^{\sigma _{D_{2}}} \uparrow \tau _{*}^{\sigma _{D_{2}}}$$ $$\mathsf {P}_{x}$$-a.s. and $$\sigma _{\varepsilon }^{\tau _{D_{1}}} \uparrow \sigma _{*}^{\tau _{D_{1}}}$$ $$\mathsf {P}_{x}$$-a.s., where $$\tau _{*}^{\sigma _{D_{2}}} = \inf \{t \ge 0: X_{t} \in D_{1}^{\sigma _{D_{2}}}\}$$ with $$D_{1}^{\sigma _{D_{2}}} = \{V_{\sigma _{D_{2}}}^{1} = G_{1}\}$$ and $$\sigma _{*}^{\tau _{D_{1}}} = \inf \{t \ge 0: X_{t} \in D_{2}^{\tau _{D_{1}}}\}$$ with $$D_{2}^{\tau _{D_{1}}} = \{V_{\tau _{D_{1}}}^{2} = G_{2}\}$$.

## Proof

We shall only prove that $$\tau _{\varepsilon }^{\sigma _{D_{2}}} \uparrow \tau _{*}^{\sigma _{D_{2}}}$$ $$\mathsf {P}_{x}$$-a.s. as the other assertion follows by symmetry. Recall the definition of $$\tilde{V}^{1}_{\sigma _{D_2}}$$ from (4.3)–(4.4). From step $$2^{\circ }$$ in the proof of Proposition 4.1 it is sufficient to prove that $$\tilde{\tau }_{\varepsilon } \uparrow \tilde{\tau }_{*}^{\sigma _{D_{2}}}$$, where we recall that $$\tilde{\tau }_{\varepsilon } = \inf \{t \ge 0: X_{t} \in \tilde{D}_{1}^{\sigma _{D_{2}},\varepsilon }\}$$ with $$\tilde{D}_{1}^{\sigma _{D_{2}},\varepsilon } = \{\tilde{V}_{\sigma _{D_{2}}}^{1} \le G_{1} + \varepsilon \}$$ and $$\tilde{\tau }_{*}^{\sigma _{D_{2}}} = \inf \{t \ge 0: X_{t} \in \tilde{D}_{1}^{\sigma _{D_{2}}}\}$$ with $$\tilde{D}_{1}^{\sigma _{D_{2}}} = \{\tilde{V}_{\sigma _{D_{2}}}^{1} = G_{1}\}$$. For each $$\varepsilon > 0$$ we have that $$\tilde{\tau }_{\varepsilon } \le \tilde{\tau }_{*}^{\sigma _{D_{2}}}$$. Since $$\tilde{\tau }_{\varepsilon }$$ increases as $$\varepsilon$$ decreases, $$\tilde{\tau }_{\varepsilon } \uparrow \beta$$ as $$\varepsilon \downarrow 0$$ for some $$\beta \le \tilde{\tau }_{*}^{\sigma _{D_{2}}}$$ $$\mathsf {P}_{x}$$-a.s. To prove that $$\beta = \tilde{\tau }_{*}^{\sigma _{D_{2}}}$$ we first show that
\begin{aligned} \mathsf {E}_{x}\big [\tilde{V}_{\sigma _{D_{2}}}^{1}(X_{\beta })\big ] \le \liminf _{\varepsilon \downarrow 0} \mathsf {E}_{x} \big [\tilde{V}_{\sigma _{D_{2}}}^{1}(X_{\tilde{\tau }_\varepsilon })\big ] \end{aligned}
(4.24)
For stopping times $$\sigma _{\beta } = \inf \{t \ge \beta : X_{t} \in D_{2}\}$$ and $$\sigma _{\tilde{\tau }_{\varepsilon }} = \inf \{t \ge \tilde{\tau }_{\varepsilon }: X_{t} \in D_{2}\}$$ we have
\begin{aligned}&\tilde{\mathsf {M}}_{x}^{1}(\tau ,\sigma _{\beta }) - \tilde{\mathsf {M}}_{x}^{1}(\tau ,\sigma _{\tilde{\tau }_{\varepsilon }})\nonumber \\&\quad = \mathsf {E}_{x}\big [G_{1}(X_{\tau })I(\tau< \sigma _{\beta }) + H_{1}(X_{\sigma _{\beta }})I(\sigma _{\beta }< \tau ) + H_{1}(X_{\sigma _{\beta }})I(\sigma _{\beta } = \tau ,\sigma _{\beta } \ne \sigma _{\tilde{\tau }_{\varepsilon }}) \nonumber \\&\qquad -\;G_{1}(X_{\tau })I(\tau< \sigma _{\tilde{\tau }_{\varepsilon }}) - H_{1}(X_{\sigma _{\tilde{\tau }_{\varepsilon }}})I(\sigma _{\tilde{\tau }_{\varepsilon }}< \tau ) - H_{1}(X_{\sigma _{\tilde{\tau }_{\varepsilon }}})I( \sigma _{\tilde{\tau }_{\varepsilon }} = \tau ,\sigma _{\tilde{\tau }_{\varepsilon }} \ne \sigma _{\beta })\nonumber \\&\qquad +\;H_{1}(X_{\sigma _{\beta }})I(\sigma _{\tilde{\tau }_{\varepsilon }}< \tau ) - H_{1}(X_{\sigma _{\beta }})I(\sigma _{\tilde{\tau }_{\varepsilon }}< \tau )\big ] \nonumber \\&\quad \le \mathsf {E}_{x}\big [G_{1}(X_{\tau })(I(\tau< \sigma _{\beta }) - I(\tau< \sigma _{\tilde{\tau }_{\varepsilon }}) - I(\sigma _{\tilde{\tau }_{\varepsilon }} = \tau ,\sigma _{\tilde{\tau }_{\varepsilon }} \ne \sigma _{\beta }))\big ] \nonumber \\&\qquad +\; \mathsf {E}_{x}\big [H_{1}(X_{\sigma _{\beta }})(I(\sigma _{\beta }< \tau ) - I(\sigma _{\tilde{\tau }_{\varepsilon }}< \tau ) + I(\sigma _{\beta } = \tau ,\sigma _{\beta } \ne \sigma _{\tilde{\tau }_{\varepsilon }}))\big ] \nonumber \\&\qquad +\; \mathsf {E}_{x}\big [(H_{1}(X_{\sigma _{\beta }}) - H_{1}(X_{\sigma _{\tilde{\tau }_{\varepsilon }}}))I( \sigma _{\tilde{\tau }_{\varepsilon }}< \tau )\big ] \nonumber \\&\quad = \mathsf {E}_{x}\big [(G_{1}(X_{\tau }) - H_{1}(X_{\sigma _{\beta }}))I( \sigma _{\tilde{\tau }_{\varepsilon }}< \tau< \sigma _{\beta })\big ] + \mathsf {E}_{x}\big [(H_{1}(X_{\sigma _{\beta }}) - H_{1}(X_{\sigma _{\tilde{\tau }_{\varepsilon }}}))I( \sigma _{\tilde{\tau }_{\varepsilon }}< \tau )\big ] \nonumber \\&\quad \le \mathsf {E}_{x}\big [(\sup _{t}|G_{1}(X_{t})| + \sup _{t}|H_{1}(X_{t})|)I(\sigma _{\tilde{\tau }_{\varepsilon }}< \tau < \sigma _{\beta })\big ] + \mathsf {E}_{x}\big [|H_{1}(X_{\sigma _{\beta }}) - H_{1}(X_{\sigma _{\tilde{\tau }_{\varepsilon }}})|\big ] \end{aligned}
(4.25)
where the first inequality follows from the fact that $$G_{1} \le H_{1}$$. By Lemma 4.2 we have that $$\sigma _{\tilde{\tau }_{\varepsilon }} \uparrow \sigma _{\beta }$$ as $$\varepsilon \downarrow 0$$ and so the first term on the right-hand side of the above expression tends to zero uniformly over all $$\tau$$. Since $$H_{1}(X)$$ is quasi-left-continuous, the second term also tends to zero. By the strong Markov property of X (recall that the expectation and the essential supremum in (3.14) can be interchanged) it follows that $$\mathsf {E}_{x}[\tilde{V}_{\sigma _{D_{2}}}^{1}(X_{\beta })]= \sup _{\tau \ge \beta } \tilde{\mathsf {M}}_{x}^{1}(\tau ,\sigma _{\beta })$$. By (4.25) we have
\begin{aligned}&\sup _{\tau \ge \beta } \tilde{\mathsf {M}}_{x}^{1}(\tau ,\sigma _{\beta }) \le \liminf _{\varepsilon \downarrow 0} \sup _{\tau \ge \beta } \tilde{\mathsf {M}}_{x}^{1}(\tau ,\sigma _{\tilde{\tau }_{\varepsilon }}) \le \liminf _{\varepsilon \downarrow 0} \sup _{\tau \ge \tilde{\tau }_{\varepsilon }} \tilde{\mathsf {M}}_{x}^{1}(\tau ,\sigma _{\tilde{\tau }_{\varepsilon }})\nonumber \\&\quad = \liminf _{\varepsilon \downarrow 0} \mathsf {E}_{x}\big [\tilde{V}_{\sigma _{D_{2}}}^{1}(X_{\tilde{\tau }_{\varepsilon }})\big ] \end{aligned}
(4.26)
Using again the fact that $$\tilde{V}_{\sigma _{D_{2}}}^{1} = V_{\sigma _{D_{2}}}^{1}$$ so that $$\tilde{V}_{\sigma _{D_{2}}}^{1}$$ is finely continuous, together with the fact that $$G_{1}(X)$$ is left-continuous over stopping times, we get, from Lemma 3.5, that $$\tilde{V}_{\sigma _{D_{2}}}^{1}(X_{\tilde{\tau }_{\varepsilon }}) \le G_{1}(X_{\tilde{\tau }_{\varepsilon }}) + \varepsilon \text { } \mathsf {P}_{x}$$-a.s. Hence it follows that
\begin{aligned}&\mathsf {E}_{x}\big [\tilde{V}_{\sigma _{D_{2}}}^{1}(X_{\beta })\big ] \le \liminf _{\varepsilon \downarrow 0} \mathsf {E}_{x}\big [\tilde{V}_{\sigma _{D_{2}}}^{1}(X_{\tilde{\tau }_{\varepsilon }^{\sigma _{D_{2}}}})\big ]\nonumber \\&\quad \le \liminf _{\varepsilon \downarrow 0} \mathsf {E}_{x}\big [G_{1}(X_{\tilde{\tau }_{\varepsilon }^{\sigma _{D_{2}}}}) + \varepsilon \big ] = \mathsf {E}_{x}[G_{1}(X_{\beta })]. \end{aligned}
(4.27)
Combining (4.27) with the fact that $$\tilde{V}_{\sigma _{D_{2}}}^{1}(X_{\beta }) \ge G_{1}(X_{\beta })$$ $$\mathsf {P}_{x}$$-a.s. we conclude that
\begin{aligned} \tilde{V}_{\sigma _{D_{2}}}^{1}(X_{\beta }) = G_{1}(X_{\beta }) \text { } \mathsf {P}_{x}\text {-a.s.} \end{aligned}
(4.28)
But $$\tilde{\tau }_{*}^{\sigma _{D_{2}}} = \inf \{t \ge 0: X_{t} \in \tilde{D}_{1}^{\sigma _{D_{2}}}\}$$ and (4.28) shows that $$X_{\beta } \in \tilde{D}_{1}^{\sigma _{D_{2}}}$$ $$\mathsf {P}_{x}$$-a.s., so we must have that $$\beta \ge \tilde{\tau }_{*}^{\sigma _{D_{2}}}$$. This fact together with $$\beta \le \tilde{\tau }_{*}^{\sigma _{D_{2}}}$$ proves the required result. $$\square$$

We now state and prove the main result of this section.

## Theorem 4.4

Given the setting in Proposition 4.3, we have
\begin{aligned} V^{1}_{\sigma _{D_{2}}}(x) = \mathsf {M}_{x}^{1}\big (\tau _{*}^{\sigma _{D_{2}}},\sigma _{D_{2}}\big ), \end{aligned}
(4.29)
\begin{aligned} V^{2}_{\tau _{D_{1}}}(x) = \mathsf {M}_{x}^{2}\big (\tau _{D_{1}},\sigma _{*}^{\tau _{D_{1}}}\big ). \end{aligned}
(4.30)

## Proof

We shall only prove (4.29) as the proof of (4.30) follows by symmetry. Recall, from Proposition 4.1, that $$V_{\sigma _{D_{2}}}^{1}(x) \le \mathsf {M}_{x}^{1}(\tau _{\varepsilon }^{\sigma _{D_{2}}},\sigma _{D_{2}}) + \varepsilon$$. We show that $$\limsup _{\varepsilon \downarrow 0} \mathsf {M}_{x}^{1}(\tau _{\varepsilon }^{\sigma _{D_{2}}}, \sigma _{D_{2}}) \le \mathsf {M}_{x}^{1}(\tau _{*}^{\sigma _{D_{2}}},\sigma _{D_{2}})$$ so that $$V_{\sigma _{D_{2}}}^{1}(x) \le \limsup _{\varepsilon \downarrow 0}(\mathsf {M}_{x}^{1}(\tau _{\varepsilon }^{\sigma _{D_{2}}},\sigma _{D_{2}}) + \varepsilon ) \le \mathsf {M}_{x}^{1}(\tau _{*}^{\sigma _{D_{2}}},\sigma _{D_{2}}).$$ For simplicity of exposition let us set $$\tau _{\varepsilon }:=\tau _{\varepsilon }^{\sigma _{D_2}}$$ and $$\tau _{*}:=\tau _{*}^{\sigma _{D_2}}$$. Now
\begin{aligned}&\mathsf {M}_{x}^{1}(\tau _{*},\sigma _{D_{2}}) - \mathsf {M}_{x}^{1}(\tau _{\varepsilon },\sigma _{D_{2}}) \nonumber \\&\quad = \mathsf {E}_{x}\big [G_{1}(X_{\tau _{*}})I(\tau _{*}< \sigma _{D_{2}}) + H_{1}(X_{\sigma _{D_{2}}})I(\sigma _{D_{2}}< \tau _{*}) + G_{1}(X_{\sigma _{D_{2}}}) I(\sigma _{D_{2}} = \tau _{*},\tau _{*} \ne \tau _{\varepsilon }) \nonumber \\&\qquad - \; G_{1}(X_{\tau _{\varepsilon }})I(\tau _{\varepsilon }< \sigma _{D_{2}}) - H_{1}(X_{\sigma _{D_{2}}})I(\sigma _{D_{2}}< \tau _{\varepsilon }) - G_{1}(X_{\sigma _{D_{2}}}) I(\sigma _{D_{2}} = \tau _{\varepsilon },\tau _{\varepsilon } \ne \tau _{*}) \nonumber \\&\qquad + \; G_{1}(X_{\tau _{\varepsilon }})I(\tau _{*}< \sigma _{D_{2}}) - G_{1}(X_{\tau _{\varepsilon }}) I(\tau _{*}< \sigma _{D_{2}}) \nonumber \\&\qquad + \; G_{1}(X_{\tau _{\varepsilon }})I(\tau _{*}= \sigma _{D_{2}},\tau _{*}\ne \tau _{\varepsilon }) - G_{1}(X_{\tau _{\varepsilon }}) I(\tau _{*}= \sigma _{D_{2}},\tau _{*}\ne \tau _{\varepsilon })\big ] \nonumber \\&\quad \ge \mathsf {E}_{x}\big [(G_{1}(X_{\tau _{*}}) - G_{1}(X_{\tau _{\varepsilon }})) I(\tau _{*}< \sigma _{D_{2}}) \nonumber \\&\qquad + \; G_{1}(X_{\tau _{\varepsilon }})(I(\tau _{*}< \sigma _{D_{2}}) - I(\tau _{\varepsilon }< \sigma _{D_{2}}) + I(\tau _{*}= \sigma _{D_{2}},\tau _{*}\ne \tau _{\varepsilon })) \nonumber \\&\qquad + \; (G_{1}(X_{\tau _{*}}) - G_{1}(X_{\tau _{\varepsilon }}))I(\tau _{*}= \sigma _{D_{2}},\tau _{*}\ne \tau _{\varepsilon }) \nonumber \\&\qquad + \; H_{1}(X_{\sigma _{D_{2}}})(I(\sigma _{D_{2}}< \tau _{*}) - I(\sigma _{D_{2}}< \tau _{\varepsilon }) - I(\tau _{\varepsilon } = \sigma _{D_{2}},\tau _{\varepsilon } \ne \tau _{*}))\big ] \nonumber \\&\quad = \mathsf {E}_{x}\big [(G_{1}(X_{\tau _{*}}) - G_{1}(X_{\tau _{\varepsilon }})) I(\tau _{*}< \sigma _{D_{2}}) + (H_{1}(X_{\sigma _{D_{2}}}) - G_{1}(X_{\tau _{\varepsilon }})) I(\tau _{\varepsilon }< \sigma _{D_{2}}< \tau _{*}) \nonumber \\&\qquad + \; (G_{1}(X_{\tau _{*}}) - G_{1}(X_{\tau _{\varepsilon }}))I(\tau _{*}= \sigma _{D_{2}},\tau _{*}\ne \tau _{\varepsilon })\big ] \nonumber \\&\quad \ge -2\,\mathsf {E}_{x}\big [|G_{1}(X_{\tau _{*}}) - G_{1}(X_{\tau _{\varepsilon }})|\big ] + \mathsf {E}_{x}\big [(H_{1}(X_{\sigma _{D_{2}}}) - G_{1}(X_{\tau _{\varepsilon }})) I(\tau _{\varepsilon }< \sigma _{D_{2}}< \tau _{*})\big ] \nonumber \\&\quad \ge -2\,\mathsf {E}_{x}\big [|G_{1}(X_{\tau _{*}}) - G_{1}(X_{\tau _{\varepsilon }})|\big ] - \mathsf {E}_{x}\big [(\sup _{t}|H_{1}(X_{t})| + \sup _{t}|G_{1}(X_{t})|) I(\tau _{\varepsilon }< \sigma _{D_{2}} < \tau _{*})\big ] \end{aligned}
(4.31)
The first inequality follows from the assumption $$G_1 \le H_1$$ (that is, $$-G_1 \ge -H_1$$), whereas the second equality follows from the fact that
\begin{aligned}&I(\sigma _{D_{2}}< \tau _*) - I(\sigma _{D_{2}}< \tau _{\varepsilon }) - I(\tau _{\varepsilon } = \sigma _{D_{2}},\tau _{\varepsilon } \ne \tau _*) = -(I(\tau _*< \sigma _{D_{2}}) \nonumber \\&\qquad -\; I(\tau _{\varepsilon }< \sigma _{D_{2}}) + I(\tau _*= \sigma _{D_{2}},\tau _*\ne \tau _{\varepsilon })) =I(\tau _{\varepsilon }< \sigma _{D_2} < \tau _*) \end{aligned}
(4.32)
The penultimate inequality follows from the fact that
\begin{aligned}&(G_{1}(X_{\tau _*}) - G_{1}(X_{\tau _{\varepsilon }}))I(\tau _{*} < \sigma _{D_{2}}) + (G_{1}(X_{\tau _*}) - G_{1}(X_{\tau _{\varepsilon }}))I(\tau _*= \sigma _{D_{2}},\tau _*\ne \tau _{\varepsilon }) \nonumber \\&\qquad \ge \; -2|G_{1}(X_{\tau _*}) - G_{1}(X_{\tau _{\varepsilon }})| \end{aligned}
(4.33)
whereas the last inequality follows from the fact that
\begin{aligned}&(H_{1}(X_{\sigma _{D_{2}}}) - G_{1}(X_{\tau _{\varepsilon }}))I(\tau _{\varepsilon }< \sigma _{D_{2}}< \tau _*) \ge \;-((\sup _{t}|H_{1}(X_{t})| \nonumber \\&\qquad + \sup _{t}|G_{1}(X_{t})|)I(\tau _{\varepsilon }< \sigma _{D_{2}} < \tau _*)) \end{aligned}
(4.34)
Letting $$\varepsilon \downarrow 0$$ in (4.31) we get, from Proposition 4.3, that $$\tau _{\varepsilon } \uparrow \tau _*$$ and hence that $$I(\tau _{\varepsilon }< \sigma _{D_{2}} < \tau _*)$$ converges to zero $$\mathsf {P}_{x}$$-a.s. Moreover, by using the quasi-left-continuity property of $$G_{1}(X)$$ we conclude that $$\mathsf {M}_{x}^{1}(\tau _*,\sigma _{D_{2}}) \ge \limsup _{\varepsilon \downarrow 0} \mathsf {M}_{x}^{1}(\tau _{\varepsilon },\sigma _{D_{2}})$$ and this completes the proof. $$\square$$

## 5 Partial superharmonic characterisation

The purpose of the current section is to utilise the results derived in Sects. 3 and 4 to provide a partial superharmonic characterisation of $$V^{1}_{\sigma _{D_2}}$$ (resp. $$V^{2}_{\tau _{D_1}}$$) when the stopping time $$\sigma _{D_2}$$ (resp. $$\tau _{D_1}$$) of player two (resp. player one) is externally given. This characterisation attempts to extend the semiharmonic characterisation of the value function in zero-sum games (see [51] and [52]) and can informally be described as follows: Suppose that $$G_2 \equiv -\infty$$ in (2.3). Then the second player has no incentive to stop the process and so (2.4) reduces to the optimal stopping problem $$V^{1}_{\infty }(x) = \sup _{\tau }\mathsf {E}_{x}[G_1(X_{\tau })].$$ By results in optimal stopping theory $$V^{1}_{\infty }$$ admits a superharmonic characterisation. More precisely, $$V^{1}_{\infty }$$ can be identified with the smallest superharmonic function that dominates $$G_1$$ (see [49, p. 37 Theorem 2.4]). However, if $$G_2$$ is finite valued then there might be an incentive for the second player to stop the process. This raises two questions: (i) is the superharmonic characterisation of $$V^{1}_{\sigma }$$ still valid before the second player stops the process, and (ii) does $$V^{1}_{\sigma }$$ coincide with $$H_1$$ at the time the second player stops the process? If the second player selects the stopping time $$\sigma := \sigma _{D_2} = \inf \{t \ge 0:X_t \in D_2\}$$, where $$D_2$$ is a closed or finely closed subset of the state space E having a regular boundary $$\partial D_2$$, then the above questions can be answered affirmatively, and we will say that the value function of player one associated with the stopping time $$\sigma _{D_2}$$ admits a partial superharmonic characterisation. To be more precise let us consider the set
\begin{aligned} \mathbf {Sup} ^{1}_{D_2}(G_{1},K_{1})= & {} \{F:E \rightarrow [G_{1},K_{1}] : F \text { is finely continuous},F=H_{1} \text { in } D_{2}, \nonumber \\&F \text { is superharmonic in } D_{2}^{c}\} \end{aligned}
(5.1)
where $$K_1$$ is the smallest superharmonic function that dominates $$H_1$$ and $$[G_{1},K_{1}]$$ means that $$G_1(x) \le F(x) \le K_1(x)$$ for all $$x \in E$$. Then the value function of player one can be identified with the smallest finely continuous function from $$\mathbf {Sup} ^{1}_{D_2}(G_{1},K_{1})$$.
Likewise, suppose that player one selects the stopping time $$\tau _{D_1} = \inf \{t \ge 0:X_t \in D_1\}$$ where $$D_1$$ is a closed or finely closed set having a regular boundary $$\partial D_1$$, and consider the set
\begin{aligned} \mathbf {Sup} ^{2}_{D_1}(G_{2},K_{2})= & {} \{F:E \rightarrow [G_{2},K_{2}] : F \text { is finely continuous},F=H_{2} \text { in } D_{1}, \nonumber \\&F \text { is superharmonic in } D_{1}^{c}\} \end{aligned}
(5.2)
where $$K_2$$ is the smallest superharmonic function that dominates $$H_2$$ and $$[G_{2},K_{2}]$$ means that $$G_2(x) \le F(x) \le K_2(x)$$ for all $$x \in E$$. Then the value function of player two associated to $$\tau _{D_1}$$ can be identified with the smallest finely continuous function from $$\mathbf {Sup} ^{2}_{D_1}(G_{2},K_{2})$$.
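The membership conditions appearing in (5.1) can be explored numerically in a toy discrete model. The sketch below is purely illustrative (the symmetric reflected random walk, the payoff vectors and the tolerances are our own choices, not part of the paper): it computes player one's value by value iteration with the boundary condition $$V = H_1$$ on $$D_2$$, and then checks that the result dominates $$G_1$$, equals $$H_1$$ on $$D_2$$ and satisfies the one-step mean-value (superharmonicity) inequality off $$D_2$$.

```python
def value_given_opponent(G1, H1, D2, n_iter=20000, tol=1e-13):
    """Value iteration for V(i) = sup_tau E_i[G1(X_tau)1(tau <= sigma) + H1(X_sigma)1(sigma < tau)],
    where X is a symmetric random walk on {0, ..., N} reflected at both endpoints and the
    opponent's time sigma is the first entry into D2 (so V = H1 on D2 acts as a boundary
    condition)."""
    N = len(G1) - 1
    V = [H1[i] if D2[i] else G1[i] for i in range(N + 1)]
    for _ in range(n_iter):
        # one-step expectation E_i[V(X_1)] for the reflected walk
        TV = [0.5 * (V[min(i + 1, N)] + V[max(i - 1, 0)]) for i in range(N + 1)]
        V_new = [H1[i] if D2[i] else max(G1[i], TV[i]) for i in range(N + 1)]
        if max(abs(a - b) for a, b in zip(V_new, V)) < tol:
            return V_new
        V = V_new
    return V

# toy payoffs, chosen only for illustration and satisfying G1 <= H1 as the paper assumes
N = 20
G1 = [-(i - 5.0) ** 2 / 10.0 for i in range(N + 1)]
H1 = [g + 1.0 for g in G1]
D2 = [i >= 15 for i in range(N + 1)]   # player two stops on first entry to {15, ..., 20}

V1 = value_given_opponent(G1, H1, D2)
```

The set $$\{i: V1[i] = G1[i]\}$$ then plays the role of player one's stopping set $$D_{1}^{\sigma _{D_{2}}}$$ from Sect. 4; exchanging the roles of the two players gives the analogue of (5.2).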
The above characterisation of $$V^{1}_{\sigma _{D_2}}$$ and $$V^{2}_{\tau _{D_1}}$$ can be used to study the existence of a Nash equilibrium. Indeed suppose that one can show the existence of finely continuous functions u and v such that:
1. (i.)

u lies between $$G_{1}$$ and $$K_1$$, u is identified with $$H_{1}$$ in the region $$D_{2} = \{v = G_{2}\}$$ and u is the smallest superharmonic function that dominates $$G_{1}$$ in the region $$\{v > G_2\}$$;

2. (ii.)

v lies between $$G_{2}$$ and $$K_2$$, v is identified with $$H_2$$ in the region $$D_{1} = \{u = G_1\}$$ and v is the smallest superharmonic function that dominates $$G_2$$ in the region $$\{u > G_1\}$$.

Then under the assumption that $$D_1$$ and $$D_2$$ have regular boundaries and are either closed or finely closed, u and v coincide with $$V^{1}_{\sigma _{D_2}}$$ and $$V^{2}_{\tau _{D_1}}$$ respectively. In this case we shall say that together, $$V^{1}_{\sigma _{D_2}}$$ and $$V^{2}_{\tau _{D_1}}$$ admit a double partial superharmonic characterisation (see Fig. 1) and can be called the value functions of the game (2.4)–(2.5). Moreover, the pair $$(\tau _{D_1},\sigma _{D_2})$$ will form a Nash equilibrium point.
To prove the partial superharmonic characterisation of $$V^{1}_{\sigma _{D_2}}$$ (resp. $$V^{2}_{\tau _{D_1}}$$) we first show that for any stopping time $$\sigma$$ (resp. $$\tau$$), $$V^{1}_{\sigma }$$ (resp. $$V^{2}_{\tau }$$) is bounded above by $$K_1$$ (resp. $$K_2$$). For this we define the concept of superharmonic functions.

## Definition 5.1

Let C be a measurable subset of E and $$D = E \backslash C$$. A measurable function $$F:E \rightarrow \mathbb {R}$$ is said to be superharmonic in C if $$\mathsf {E}_{x}[F(X_{\rho \wedge \sigma _{D}})] \le F(x)$$ for every stopping time $$\rho$$ and for all $$x \in E$$, where $$\sigma _{D} = \inf \{t \ge 0 : X_{t} \in D\}$$. F is said to be superharmonic if $$\mathsf {E}_{x}[F(X_{\rho })] \le F(x)$$ for every stopping time $$\rho$$ and for all $$x \in E$$.
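As a simple illustration (a standard fact about Brownian motion, not a result of this paper): if X is a standard one-dimensional Brownian motion, then every bounded concave function $$F:\mathbb {R} \rightarrow \mathbb {R}$$ is superharmonic in the above sense, since for every bounded stopping time $$\rho$$ the optional sampling theorem and Jensen's inequality give
\begin{aligned} \mathsf {E}_{x}[F(X_{\rho })] \le F(\mathsf {E}_{x}[X_{\rho }]) = F(x) \end{aligned}
for all $$x \in \mathbb {R}$$; the restriction to bounded $$\rho$$ is then removed by applying this to $$\rho \wedge n$$ and using bounded convergence as $$n \rightarrow \infty$$ (for $$\rho$$ finite $$\mathsf {P}_{x}$$-a.s.).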

## Lemma 5.2

Let $$\mathbf {Sup} (H_{1}) = \{F:E \rightarrow \mathbb {R} : F \ge H_{1}, F \text { is superharmonic} \}$$ be the collection of superharmonic functions that majorise $$H_{1}$$. Then for any given stopping time $$\sigma$$ we have
\begin{aligned} V_{\sigma }^{1} \le \inf _{F\in \mathbf {Sup} (H_{1})}F. \end{aligned}
(5.3)
Similarly let $$\mathbf {Sup} (H_{2}) = \{F:E \rightarrow \mathbb {R} : F \ge H_{2}, F \text { is superharmonic} \}$$. Then for any given stopping time $$\tau$$ we have
\begin{aligned} V_{\tau }^{2} \le \inf _{F\in \mathbf {Sup} (H_{2})}F. \end{aligned}
(5.4)

## Proof

We shall only prove (5.3) as (5.4) can be proved in exactly the same way. Take any stopping time $$\sigma$$ and any $$F \in \mathbf Sup (H_{1})$$. Then
\begin{aligned} \mathsf {M}_{x}^{1}\left( \tau ,\sigma \right) \le \mathsf {E}_{x}\left[ F\left( X_{\tau }\right) I(\tau \le \sigma )+F\left( X_{\sigma }\right) I( \sigma < \tau )\right] =\mathsf {E}_{x}\left[ F\left( X_{\tau \wedge \sigma }\right) \right] \quad \end{aligned}
(5.5)
for all stopping times $$\tau$$. The inequality in (5.5) follows from the fact that $$G_{1}\le H_{1}\le F.$$ Since F is superharmonic, $$\mathsf {E}_{x}\left[ F\left( X_{\rho }\right) \right] \le F\left( x\right)$$ for any stopping time $$\rho$$ and for all $$x\in E .$$ In particular $$\mathsf {E}_{x}\left[ F\left( X_{\tau \wedge \sigma }\right) \right] \le F\left( x\right)$$ for every stopping time $$\tau .$$ Thus
\begin{aligned} \mathsf {M}_{x}^{1}\left( \tau ,\sigma \right) \le F\left( x\right) \end{aligned}
(5.6)
for all stopping times $$\tau$$ and for all $$x\in E$$. Taking the infimum over all F in $$\mathbf Sup (H_{1})$$ on the right hand side and the supremum over all $$\tau$$ on the left hand side of (5.6) we get the required result. $$\square$$

## Theorem 5.3

i. Suppose that $$K_{1}$$ is the smallest superharmonic function that dominates $$H_{1}$$. Let $$v \ge G_2$$ be a finely continuous function with $$D_{2}=\{v=G_{2}\}$$, and suppose that $$u:= \inf _{F \in \mathbf {Sup} ^{1}_{D_2}(G_{1},K_{1})}{F}$$ exists, where $$\mathbf {Sup} ^{1}_{D_2}(G_{1},K_{1})$$ is the collection of functions given in (5.1). If the boundary $$\partial D_{2}$$ of $$D_{2}$$ is regular for $$D_2$$, then
\begin{aligned} u(x) = V^{1}_{\sigma _{D_{2}}}(x) \end{aligned}
(5.7)
for all $$x \in E$$, where $$\sigma _{D_{2}} = \inf \{t \ge 0: X_{t} \in D_{2}\}.$$
ii. Similarly suppose that $$K_{2}$$ is the smallest superharmonic function that dominates $$H_{2}$$. Let $$u \ge G_1$$ be a finely continuous function with $$D_{1}=\{u=G_{1}\}$$, and suppose that $$v:= \inf _{F \in \mathbf {Sup} ^{2}_{D_1}(G_{2},K_{2})}{F}$$ exists, where $$\mathbf {Sup} ^{2}_{D_1}(G_{2},K_{2})$$ is the collection of functions defined in (5.2). If the boundary $$\partial D_{1}$$ of $$D_{1}$$ is regular for $$D_1$$, then
\begin{aligned} v(x) = V^{2}_{\tau _{D_{1}}}(x) \end{aligned}
(5.8)
for all $$x \in E$$, where $$\tau _{D_{1}} = \inf \{t \ge 0: X_{t} \in D_{1}\}.$$

## Proof

We shall only prove (i.) as (ii.) follows by symmetry. We first show that $$u \ge V^{1}_{\sigma _{D_{2}}}$$. Take any $$F\in \mathbf Sup ^{1}_{D_2}(G_{1},K_{1})$$. We know that F is superharmonic in $${D}_{2}^{c}$$ so
\begin{aligned} F\left( x\right)\ge & {} \mathsf {E}_{x}\left[ F\left( X_{\tau \wedge \sigma _{D_{2}}}\right) \right] =\mathsf {E}_{x}[F(X_{\tau }) I(\tau \le \sigma _{D_{2}})+F(X_{\sigma _{D_{2}}})I(\sigma _{D_{2}}< \tau )] \nonumber \\\ge & {} \mathsf {E}_{x}\left[ G_{1}\left( X_{\tau }\right) I(\tau \le \sigma _{D_{2}})+F\left( X_{\sigma _{D_{2}}}\right) I(\sigma _{D_{2}} < \tau )\right] \end{aligned}
(5.9)
for every stopping time $$\tau$$ and for all $$x\in E$$, where the last inequality follows from the fact that $$F \ge G_{1}$$. Since v is finely continuous and $$G_{2}$$ is continuous (hence finely continuous), $$D_{2}$$ is finely closed and thus by the definition of $$\mathbf Sup ^{1}_{D_2}(G_{1},K_{1})$$, upon using (3.4) (since F is finely continuous and $$H_{1}$$ is continuous hence finely continuous) it can be seen that $$F(X_{\sigma _{D_{2}}}) = H_{1}(X_{\sigma _{D_{2}}})$$. Thus for any $$F\in \mathbf Sup ^{1}_{D_2}(G_{1},K_{1})$$ we have
\begin{aligned} F\left( x\right) \ge \mathsf {E}_{x}\left[ G_{1}\left( X_{\tau }\right) I( \tau \le \sigma _{D_{2}})+H_{1}\left( X_{\sigma _{D_{2}}}\right) I( \sigma _{D_{2}} < \tau )\right] =\mathsf {M}_{x}^{1}\left( \tau ,\sigma _{D_{2}}\right) \end{aligned}
(5.10)
for every stopping time $$\tau$$ and for all $$x\in E$$. Taking the infimum over all F and the supremum over all $$\tau$$ we get that $$u(x) \ge V^{1}_{\sigma _{D_{2}}}(x)$$ for all $$x \in E$$. We next show that $$u \le V^{1}_{\sigma _{D_{2}}}$$. For this it is sufficient to prove that $$V^{1}_{\sigma _{D_{2}}} \in \mathbf Sup ^{1}_{D_2}(G_{1},K_{1})$$ as the result will follow by definition of u. Recall, from Theorem 3.7, that $$V^{1}_{\sigma _{D_{2}}}$$ is finely continuous since the boundary $$\partial D_{2}$$ is assumed to be regular for $$D_{2}$$. The fact that $$V^{1}_{\sigma _{D_{2}}} \le K_{1}$$ follows from Lemma 5.2. To show that $$V^{1}_{\sigma _{D_{2}}}$$ is bounded below by $$G_{1}$$ we note that $$V^{1}_{\sigma _{D_{2}}}(x) \ge \mathsf {M}_{x}^{1}(\tau ,\sigma _{D_{2}})$$ for any $$\tau$$, in particular for $$\tau = 0$$. Since $$\mathsf {M}_{x}^{1}(0,\sigma _{D_{2}}) = G_{1}(x)$$ the result follows. To prove that $$V^{1}_{\sigma _{D_{2}}} = H_{1}$$ in $$D_{2}$$ we take any $$x\in D_{2}$$ so that $$\sigma _{D_{2}}=0$$. Then by selecting any stopping time $$\tau > 0$$ we get that $$V_{0}^{1}(x) \ge \mathsf {M}_{x}^{1}(\tau ,0) = H_1(x)$$. On the other hand
\begin{aligned}&V^{1}_{0}(x) \le \sup _{\tau } (H_{1}(x)\mathsf {P}_{x}(\tau = 0) + H_{1}(x)\mathsf {P}_{x}(\tau > 0)) = H_{1}(x). \end{aligned}
(5.11)
From this we conclude that $$V^{1}_{0}(x) = H_{1}(x)$$. It remains to prove that $$V^{1}_{\sigma _{D_{2}}}$$ is superharmonic in $$D_{2}^{c}$$. By the strong Markov property of X we have
\begin{aligned}&\mathsf {E}_{x}\big [V^{1}_{\sigma _{D_{2}}}(X_{\rho \wedge \sigma _{D_{2}}})\big ] \nonumber \\&\quad = \mathsf {E}_{x}\big [\sup _{\tau }\mathsf {M}_{{X_{\rho \wedge \sigma _{D_{2}}}}}^{1}(\tau ,\sigma _{D_{2}})\big ] \nonumber \\&\quad = \mathsf {E}_{x}\big [\mathop {{{\mathrm{ess\,sup}}}}\limits _{\tau }\mathsf {M}^{1}_{x}(\rho \wedge \sigma _{D_{2}} + \tau \circ \theta _{\rho \wedge \sigma _{D_{2}}},\rho \wedge \sigma _{D_{2}} + \sigma _{D_{2}} \circ \theta _{\rho \wedge \sigma _{D_{2}}}|\mathcal {F}_{\rho \wedge \sigma _{D_{2}}})\big ] \nonumber \\&\quad = \mathsf {E}_{x}\big [\mathop {{{\mathrm{ess\,sup}}}}\limits _{\tau \ge \rho \wedge \sigma _{D_{2}}}\mathsf {M}^{1}_{x}(\tau ,\sigma _{D_{2}}|\mathcal {F}_{\rho \wedge \sigma _{D_{2}}})\big ] \nonumber \\&\quad = \sup _{\tau \ge \rho \wedge \sigma _{D_{2}}}\mathsf {M}^{1}_{x}(\tau ,\sigma _{D_{2}}) \nonumber \\&\quad \le \sup _{\tau }\mathsf {M}^{1}_{x}(\tau ,\sigma _{D_{2}}) = V^{1}_{\sigma _{D_{2}}}(x) \end{aligned}
(5.12)
for any stopping time $$\rho$$ where we recall that $$\mathsf {M}^{1}_{x}(\tau ,\sigma |\mathcal {F}_{\rho }) = \mathsf {E}_{x}[G_{1}(X_{\tau })I(\tau \le \sigma ) + H_{1}(X_{\sigma })I(\sigma < \tau )| \mathcal {F}_{\rho }]$$ for stopping times $$\tau ,\sigma$$ and $$\rho$$. $$\square$$

## 6 The case of stationary one-dimensional Markov processes

In this section we shall assume that the Markov process X takes values in $$\mathbb {R}$$ and is such that Law$$(X|\mathsf {P}_{x}) =$$ Law$$(X^{x}|\mathsf {P})$$. We shall also assume that there exist points $$A_*$$ and $$B_*$$ satisfying $$-\infty< A_*< B_*< \infty$$ such that (i.) for given $$D_2$$ of the form $$[B_*,\infty )$$, the first entry time $$\tau _{A_*} = \inf \{t \ge 0:X_t \le A_*\}$$ (as obtained from Theorem 4.4) is optimal for player one and (ii.) for given $$D_1$$ of the form $$(-\infty ,A_*]$$, the first entry time $$\sigma _{B_*} = \inf \{t \ge 0:X_t \ge B_*\}$$ (as obtained from Theorem 4.4) is optimal for player two.

So in this section we will assume the existence of a pair $$(\tau _{A_*}, \sigma _{B_*})$$ that is a Nash equilibrium.
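A threshold equilibrium of this form can be explored by simulation. The sketch below is our own illustrative check, with a simple symmetric random walk standing in for X and with hypothetical payoff functions supplied by the caller: it estimates player one's expected payoff $$\mathsf {M}_{x}^{1}(\tau _{A_*},\sigma _{B_*})$$ when both players use first entry times of the threshold type, resolving ties in favour of player one as in the indicator convention $$I(\tau \le \sigma )$$ versus $$I(\sigma < \tau )$$.

```python
import random

def first_entry(path, hit):
    """Discrete analogue of inf{t >= 0 : X_t in D}; returns the last index if D is never entered."""
    for t, x in enumerate(path):
        if hit(x):
            return t
    return len(path) - 1

def expected_payoff_player1(x0, A, B, G1, H1, n_paths=2000, n_steps=400, seed=7):
    """Monte Carlo estimate of M^1_x0(tau_A, sigma_B) for a simple symmetric random walk,
    where tau_A = inf{t : X_t <= A} (player one) and sigma_B = inf{t : X_t >= B} (player two)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        path = [x0]
        for _ in range(n_steps):
            path.append(path[-1] + rng.choice([-1, 1]))
        tau = first_entry(path, lambda x: x <= A)
        sigma = first_entry(path, lambda x: x >= B)
        # ties go to player one: G1(X_tau) on {tau <= sigma}, H1(X_sigma) on {sigma < tau}
        total += G1(path[tau]) if tau <= sigma else H1(path[sigma])
    return total / n_paths

# with G1 = 1 and H1 = 0 the estimate reduces to P(tau_A <= sigma_B),
# which is close to 1/2 for thresholds symmetric around the starting point
p = expected_payoff_player1(0, -4, 4, lambda x: 1.0, lambda x: 0.0)
```

Replacing the constant payoffs with candidate $$G_1, H_1$$ and perturbing the thresholds gives a crude numerical check that neither player gains from deviating.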

### 6.1 The principle of double continuous fit

We prove that $$V^{1}_{\sigma _{{B_*}}}$$ (resp. $$V^{2}_{\tau _{{A_*}}})$$ is continuous at $$A_*$$ (resp. $$B_*$$). We shall refer to this result as the principle of double continuous fit. For this we shall further assume that the following time-space conditions hold:
\begin{aligned} X^{A_*+ \varepsilon }_{t+h} \rightarrow X_t^{A_*} \text { } \mathsf {P}\text {-a.s.} \end{aligned}
(6.1)
\begin{aligned} X^{B_*- \varepsilon }_{t+h} \rightarrow X_t^{B_*} \text { } \mathsf {P}\text {-a.s.} \end{aligned}
(6.2)
as $$\varepsilon ,h \downarrow 0$$ and
\begin{aligned} X^{A_*+ \varepsilon }_{\rho _n} \rightarrow X_{\rho }^{A_*} \text { }\mathsf {P}\text {-a.s.} \end{aligned}
(6.3)
\begin{aligned} X^{B_*- \varepsilon }_{\rho _n} \rightarrow X_{\rho }^{B_*} \text { } \mathsf {P}\text {-a.s.} \end{aligned}
(6.4)
whenever $$\rho _n$$ is a sequence of stopping times such that $$\rho _n \uparrow \rho$$. Conditions (6.1)-(6.4) imply that the mapping $$x \mapsto X^x$$ (the stochastic flow) is continuous at $$A_*$$ and $$B_*$$. Stochastic differential equations driven by Lévy processes, for example, satisfy this property under regularity assumptions on the drift and diffusion coefficients (see, for example, [37, p. 340]).
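Conditions of this type are easy to visualise for a discretised SDE. The sketch below is a toy example of our own (an Euler scheme for $$dX_t = -X_t\,dt + dW_t$$, not taken from the paper): two paths are driven by the same noise from the starting points x and $$x+\varepsilon$$, and the gap between them is then exactly $$\varepsilon (1-\Delta t)^{k}$$ after k steps, so the flow $$x \mapsto X^{x}$$ is continuous in the starting point, mirroring (6.1)-(6.4).

```python
import random

def euler_path(x0, dt=0.01, n_steps=500, seed=3):
    """Euler scheme for dX = -X dt + dW; fixing the seed fixes the driving noise,
    so paths started from different points share the same Brownian increments."""
    rng = random.Random(seed)
    path = [x0]
    for _ in range(n_steps):
        dw = rng.gauss(0.0, dt ** 0.5)
        path.append(path[-1] * (1.0 - dt) + dw)
    return path

eps = 0.1
a = euler_path(1.0)
b = euler_path(1.0 + eps)
# with common noise the gap is deterministic: b[k] - a[k] = eps * (1 - dt)^k -> 0
gaps = [b[k] - a[k] for k in range(len(a))]
```

The same comparison with a common seed and starting points $$B_* - \varepsilon$$ and $$B_*$$ illustrates (6.2) and (6.4).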

We shall first state and prove the following lemma.

## Lemma 6.1

Let $$\sigma _{B_*}^{A_{*}} = \inf \{t \ge 0: X_t \ge B_*\}$$ be the optimal stopping time for player two under $$\mathsf {P}_{A_*}$$ and let $$\sigma _{B_*}^{A_{*}+\varepsilon } = \inf \{t \ge 0: X_t \ge B_*\}$$ be the optimal stopping time of player two under $$\mathsf {P}_{A_{*} + \varepsilon }$$, for given $$\varepsilon > 0$$. Then, if condition (6.3) is satisfied we have that $$\sigma _{B_*}^{A_*+ \varepsilon } \uparrow \sigma _{B_*}^{A_*}$$ as $$\varepsilon \downarrow 0$$. Similarly, if $$\tau _{A_*}^{B_{*}} = \inf \{t \ge 0: X_t \le A_*\}$$ is the optimal stopping time for player one under $$\mathsf {P}_{B_*}$$ and $$\tau _{A_*}^{B_{*}-\varepsilon } = \inf \{t \ge 0: X_t \le A_*\}$$ is the optimal stopping time of player one under $$\mathsf {P}_{B_{*} - \varepsilon }$$, for given $$\varepsilon > 0$$, then if condition (6.4) is satisfied we have that $$\tau _{A_*}^{B_*- \varepsilon } \uparrow \tau _{A_*}^{B_*}$$ as $$\varepsilon \downarrow 0$$.

## Proof

We shall only prove that $$\sigma _{B_*}^{A_*+ \varepsilon } \uparrow \sigma _{B_*}^{A_*}$$ as $$\varepsilon \downarrow 0$$. The fact that $$\tau _{A_*}^{B_*- \varepsilon } \uparrow \tau _{A_*}^{B_*}$$ as $$\varepsilon \downarrow 0$$ can be proved in the same way. Since $$\text {Law}(X|\mathsf {P}_{x}) = \text {Law}(X^x|\mathsf {P})$$ we have that $$\sigma _{B_*}^{A_{*}+\varepsilon }$$ is equally distributed as $$\hat{\sigma }_{B_*}^{A_{*}+\varepsilon }:= \inf \{t \ge 0: X_t^{A_{*}+\varepsilon } \ge B_*\}$$ under $$\mathsf {P}$$ whereas $$\sigma _{B_*}^{A_{*}}$$ is equally distributed as $$\hat{\sigma }_{B_*}^{A_{*}}:= \inf \{t \ge 0: X_t^{A_*} \ge B_*\}$$ under $$\mathsf {P}$$. Now $$\hat{\sigma }_{B_*}^{A_*+ \varepsilon } \uparrow \gamma$$ as $$\varepsilon \downarrow 0$$ for some stopping time $$\gamma \le \hat{\sigma }_{B_*}^{A_*}$$. So to prove the result it remains to show that $$\gamma \ge \hat{\sigma }_{B_*}^{A_*}$$. By the time-space condition (6.3) we have that $$X_{\hat{\sigma }_{B_*}^{A_*+ \varepsilon }}^{A_*+ \varepsilon } \rightarrow X_{\gamma }^{A_*}$$ $$\mathsf {P}$$-a.s. as $$\varepsilon \downarrow 0$$. Now since $$X_{\hat{\sigma }_{B_*}^{A_*+ \varepsilon }}^{A_*+ \varepsilon } \ge B_*$$ for each $$\varepsilon > 0$$ it follows that $$X_{\gamma }^{A_*} \ge B_*$$. But this implies that $$\gamma \ge \hat{\sigma }_{B_*}^{A_*}$$ and this proves the required result. $$\square$$

## Proposition 6.2

Suppose that the payoff functions $$G_{i}, H_{i}$$ for $$i=1,2$$ are also assumed to be bounded. Then the value functions $$V^{1}_{\sigma _{B_*}}$$ and $$V^{2}_{\tau _{A_*}}$$ are continuous at $$A_*$$ and $$B_*$$ respectively.
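In the threshold notation of this section, and using that $$V^{1}_{\sigma _{B_*}}(A_*) = G_{1}(A_*)$$ together with the symmetric identity $$V^{2}_{\tau _{A_*}}(B_*) = G_{2}(B_*)$$ (both are used in the proof below), the proposition can be restated as the pair of continuous-fit conditions
\begin{aligned} \lim _{\varepsilon \downarrow 0} V^{1}_{\sigma _{B_*}}(A_*+ \varepsilon ) = G_{1}(A_*), \qquad \lim _{\varepsilon \downarrow 0} V^{2}_{\tau _{A_*}}(B_*- \varepsilon ) = G_{2}(B_*), \end{aligned}
that is, each value function attains the corresponding stopping payoff continuously at that player's own optimal threshold.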

## Proof

We shall only prove the result for $$V^{1}_{\sigma _{B_*}}$$ as for $$V^{2}_{\tau _{A_*}}$$ the result will follow by symmetry. To prove this it is sufficient to show that
\begin{aligned} \lim _{\varepsilon \downarrow 0}\big (V^{1}_{\sigma _{B_*}}(A_{*}+\varepsilon ) - V^{1}_{\sigma _{B_*}}(A_{*})\big ) = 0 \end{aligned}
(6.5)
because $$\lim _{\varepsilon \downarrow 0}(V^{1}_{\sigma _{B_*}}(A_{*}-\varepsilon ) - V^{1}_{\sigma _{B_*}}(A_{*})) = \lim _{\varepsilon \downarrow 0}(G_{1}(A_{*}-\varepsilon ) - G_{1}(A_{*})) = 0$$ by continuity of $$G_1$$. Since $$V^{1}_{\sigma _{B_*}}(x) \ge G_{1}(x)$$ for all $$x \in \mathbb {R}$$ and $$V^{1}_{\sigma _{B_*}}(A_{*}) = G_{1}(A_{*})$$ we get that $$V^{1}_{\sigma _{B_*}}(A_*+ \varepsilon ) - V^{1}_{\sigma _{B_*}}(A_{*}) \ge G_{1}(A_{*}+\varepsilon ) - G_{1}(A_{*})$$ for every $$\varepsilon > 0$$. So by continuity of $$G_{1}$$ we have that $$\liminf _{\varepsilon \downarrow 0}(V^{1}_{\sigma _{B_*}}(A_{*}+\varepsilon ) - V^{1}_{\sigma _{B_*}}(A_{*})) \ge 0$$. We next show that $$\limsup _{\varepsilon \downarrow 0} (V^{1}_{\sigma _{B_*}}(A_{*}+\varepsilon ) - V^{1}_{\sigma _{B_*}}(A_{*})) \le 0$$ so that we get the required result. Given $$\varepsilon >0$$, let $$\tau ^{A_{*} + \varepsilon }_{A_*} = \inf \{t \ge 0: X_{t} \le A_{*}\}$$ be the optimal stopping time for player one under $$\mathsf {P}_{A_{*}+\varepsilon }$$. Then we have that $$\tau ^{A_{*} + \varepsilon }_{A_*}$$ is equally distributed as $$\hat{\tau }_{A_*}^{A_{*}+\varepsilon } = \inf \{t \ge 0: X_t^{A_*+ \varepsilon } \le A_*\}$$ under $$\mathsf {P}$$. Now from the optimality of $$\tau _{A_*}^{A_*+ \varepsilon }$$ under $$\mathsf {P}_{A_*+ \varepsilon }$$ we have that
\begin{aligned}&V^{1}_{\sigma _{B_*}}(A_{*}+\varepsilon ) - V^{1}_{\sigma _{B_*}}(A_{*}) \nonumber \\&\quad \le \mathsf {M}_{A_{*}+\varepsilon }^{1}\big ( \tau _{A_*}^{A_{*} + \varepsilon },\sigma _{B_*}^{A_{*} + \varepsilon }\big ) -\mathsf {M}_{A_{*}}^{1}\big ( \tau _{A_*}^{A_{*} + \varepsilon },\sigma ^{A_{*}}_{B_*} \big ) \nonumber \\&\quad = \mathsf {E}_{A_{*}+\varepsilon }\big [ G_{1}( X_{\tau _{A_*}^{A_{*} + \varepsilon }}) I\big (\tau _{A_*}^{A_{*} + \varepsilon } \le \sigma _{B_*}^{A_{*} + \varepsilon }\big )\big ] - \mathsf {E}_{A_{*}}\big [G_{1}( X_{\tau _{A_*}^{A_{*} + \varepsilon }}) I\big ( \tau _{A_*}^{A_{*} + \varepsilon } \le \sigma ^{A_{*}}_{B_*}\big )\big ] \nonumber \\&\quad \quad +\;\mathsf {E}_{A_{*}+\varepsilon }\big [ H_{1}( X_{\sigma _{B_*}^{A_{*} + \varepsilon }}) I\big (\sigma _{B_*}^{A_{*} + \varepsilon }< \tau _{A_*}^{A_{*} + \varepsilon }\big )\big ] - \mathsf {E}_{A_{*}}\big [H_{1}( X_{\sigma ^{A_{*}}_{B_*} }) I\big (\sigma ^{A_{*}}_{B_*}<\tau _{A_*}^{A_{*} + \varepsilon }\big )\big ] \nonumber \\&\quad =\mathsf {E}\big [ G_{1}( X_{\hat{\tau }_{A_*}^{A_{*} + \varepsilon }}^{A_{*}+\varepsilon }) I\big (\hat{\tau }_{A_*}^{A_{*} + \varepsilon } \le \hat{\sigma }_{B_*}^{A_{*} + \varepsilon }\big )-G_{1}\big (X_{\hat{\tau }_{A_*}^{A_{*}+\varepsilon }}^{A_{*}}\big ) I\big (\hat{\tau }_{A_*}^{A_{*} + \varepsilon } \le \hat{\sigma }^{A_{*}}_{B_*}\big )\big ] \nonumber \\&\quad \quad +\;\mathsf {E}\big [H_{1}\big (X^{A_{*}+\varepsilon }_{\hat{\sigma }_{B_*}^{A_{*}+\varepsilon }}\big ) I\big (\hat{\sigma }_{B_*}^{A_{*}+\varepsilon }< \hat{\tau }_{A_*}^{A_{*}+\varepsilon }\big )-H_{1}\big ( X^{A_{*}}_{\hat{\sigma }^{A_{*}}_{B_*} }\big ) I\big (\hat{\sigma }^{A_{*}}_{B_*} <\hat{\tau }_{A_*}^{A_{*}+\varepsilon }\big )\big ] \end{aligned}
(6.6)
The first expectation in the last expression on the right hand side of (6.6) can be written as
\begin{aligned}&\mathsf {E}\big [(G_{1}\big (X_{\hat{\tau }_{A_*}^{A_{*} + \varepsilon }}^{A_{*}+\varepsilon }\big )I\big ( \hat{\tau }_{A_*}^{A_{*} + \varepsilon } \le \hat{\sigma }_{B_*}^{A_{*} + \varepsilon }\big ) - G_{1}\big (X_{\hat{\tau }_{A_*}^{A_{*} + \varepsilon }}^{A_{*}}\big )I\big (\hat{\tau }_{A_*}^{A_{*} + \varepsilon } \le \hat{\sigma }^{A_{*}}_{B_*}\big )\big ) I\big (\hat{\tau }_{A_*}^{A_*+ \varepsilon }< \hat{\sigma }^{A_*}_{B_*}\big ) \nonumber \\&\quad \quad +\;\big (G_{1}\big (X_{\hat{\tau }_{A_*}^{A_{*} + \varepsilon }}^{A_{*}+\varepsilon }\big )I\big ( \hat{\tau }_{A_*}^{A_{*} + \varepsilon } \le \hat{\sigma }_{B_*}^{A_{*} + \varepsilon }\big ) - G_{1}\big (X_{\hat{\tau }_{A_*}^{A_{*} + \varepsilon }}^{A_{*}}\big )I\big (\hat{\tau }_{A_*}^{A_{*} + \varepsilon } \le \hat{\sigma }^{A_{*}}_{B_*}\big )\big ) I\big (\hat{\tau }_{A_*}^{A_*+ \varepsilon } = \hat{\sigma }^{A_*}_{B_*}\big ) \nonumber \\&\quad \quad +\;\big (G_{1}\big (X_{\hat{\tau }_{A_*}^{A_{*} + \varepsilon }}^{A_{*}+\varepsilon }\big )I\big ( \hat{\tau }_{A_*}^{A_{*} + \varepsilon } \le \hat{\sigma }_{B_*}^{A_{*} + \varepsilon }\big ) - G_{1}\big (X_{\hat{\tau }_{A_*}^{A_{*} + \varepsilon }}^{A_{*}}\big )I\big (\hat{\tau }_{A_*}^{A_{*} + \varepsilon } \le \hat{\sigma }^{A_{*}}_{B_*}\big )\big ) I\big ( \hat{\tau }_{A_*}^{A_*+ \varepsilon } > \hat{\sigma }^{A_*}_{B_*}\big )\big ] \nonumber \\&\quad =\mathsf {E}\big [\big (G_{1}\big (X_{\hat{\tau }_{A_*}^{A_{*} + \varepsilon }}^{A_{*}+\varepsilon }\big ) - G_{1}\big (X_{\hat{\tau }_{A_*}^{A_{*} + \varepsilon }}^{A_{*}}\big )\big ) I\big (\hat{\tau }_{A_*}^{A_*+ \varepsilon } < \hat{\sigma }^{A_*}_{B_*}\big )\big ] \end{aligned}
(6.7)
The last expression in (6.7) follows from the fact that $$\mathsf {P}(\hat{\tau }_{A_*}^{A_*+\varepsilon } = \hat{\sigma }_{B_*}^{A_*})=0$$ for all sufficiently small $$\varepsilon > 0$$ (upon assuming that the hitting times considered are finite) and from the fact that $$\hat{\sigma }_{B_*}^{A_*+ \varepsilon } \uparrow \hat{\sigma }_{B_*}^{A_*}$$ as $$\varepsilon \downarrow 0$$ (see Lemma 6.1).
In a similar way one can show that the second expectation in the last expression on the right hand side of (6.6) can be written as
\begin{aligned} \mathsf {E}\big [\big (H_{1}\big (X^{A_{*}+\varepsilon }_{\hat{\sigma }_{B_*}^{A_{*}+\varepsilon }}\big )-H_{1}\big ( X^{A_{*}}_{\hat{\sigma }^{A_{*}}_{B_*} }\big )\big ) I\big (\hat{\sigma }^{A_{*}}_{B_*} <\hat{\tau }_{A_*}^{A_{*}+\varepsilon }\big )\big ] \end{aligned}
(6.8)
Since $$G_1$$ and $$H_1$$ are continuous and bounded, the time-space conditions (6.1) and (6.3), together with Lemma 6.1 and Fatou's lemma, yield the required result. $$\square$$

## Remark 6.3

The assumption of boundedness on $$G_1$$ and $$H_1$$ in Proposition 6.2 can be relaxed. For example, the result will also hold provided that $$G_1(X^{A_*+ \varepsilon }_{\hat{\tau }_{A_*}^{A_*+ \varepsilon }}) - G_1(X^{A_*}_{\hat{\tau }^{A_*+ \varepsilon }_{A_*}})$$ and $$H_1(X^{A_*+ \varepsilon }_{\hat{\sigma }_{B_*}^{A_*+ \varepsilon }}) - H_1(X^{A_*}_{\hat{\sigma }_{B_*}^{A_*}})$$ are bounded above by some integrable random variables $$\tilde{Z}_1$$ and $$\tilde{Z}_2$$ respectively. Similarly the boundedness assumption on $$G_2$$ and $$H_2$$ can be relaxed.

### 6.2 The principle of double smooth fit

In this section we will consider the special case when X is a one-dimensional regular diffusion process and we shall assume that $$V^{1}_{\sigma _{B_*}}$$ and $$V^{2}_{\tau _{A_*}}$$ are obtained from the double partial superharmonic characterisation as explained in Sect. 5. More precisely, we shall assume that the functions u, v introduced in Theorem 5.3 (i.) coincide with those from Theorem 5.3 (ii.), so that a mutual response is assumed to exist. The aim is to use this characterisation to derive the so-called principle of double smooth fit. This principle is an extension of the principle of smooth fit observed in standard optimal stopping problems (see [49]). We note that in the case of more general strong Markov processes in $$\mathbb {R}$$ this principle may break down. As observed in standard optimal stopping problems, this may happen for example when the scale function of X is not differentiable (see [50]) or in the case of a Poisson process (see [48]). Carr et al. [7], for example, also showed that this principle breaks down in a CGMY model.

## Remark 6.4

Examples of nonzero-sum optimal stopping games for one-dimensional regular diffusion processes, in which the optimal stopping regions are of threshold type, are given in [1] and [12] (see footnote 1). In particular, the authors therein provide sufficient conditions for the existence and uniqueness of Nash equilibria.

So suppose that X is a regular diffusion process with values in $$\mathbb {R}$$. We shall also assume that the fine topology coincides with the Euclidean topology, so that fine continuity is equivalent to continuity in the classical sense. In this context we can define the scale function S of the process X, that is, a strictly increasing continuous function $$S:\mathbb {R} \rightarrow \mathbb {R}$$ satisfying
\begin{aligned} \mathsf {P}_{x}(\tau _{c}< \tau _{d}) = \frac{S(d)-S(x)}{S(d)-S(c)} \quad \text {and} \quad \mathsf {P}_{x}(\tau _{d}<\tau _{c}) = \frac{S(x)-S(c)}{S(d)-S(c)} \end{aligned}
(6.9)
for any $$c< x < d$$, where $$\tau _{y} = \inf \{t \ge 0: X_{t} = y\}$$ for $$y \in \mathbb {R}$$. Since we are assuming that $$D_{2} = [B_{*},\infty )$$, for any given $$a,b \in (-\infty ,B_{*})$$ such that $$a < b$$ we have
\begin{aligned} u(x) \ge \mathsf {E}_{x}[u(X_{\tau _{a,b} \wedge \sigma _{B_*}})] = \mathsf {E}_{x}[u(X_{\tau _{a,b}})] = u(a)\frac{S(b)-S(x)}{S(b)-S(a)} + u(b)\frac{S(x)-S(a)}{S(b)-S(a)} \end{aligned}
(6.10)
for all $$a< x < b$$, where $$\tau _{a,b} = \inf \{t \ge 0:X_{t} \notin (a,b)\}$$. The inequality follows from the fact that u is superharmonic in $$D_{2}^{c}$$ (recall Definition 5.1). This means that u is S-concave in every interval in $$(-\infty ,B_{*})$$ and, as for concave functions, this implies that the mapping
\begin{aligned} y \mapsto \frac{u(y)-u(x)}{S(y)-S(x)} \end{aligned}
(6.11)
is decreasing provided that $$y \ne x$$. By symmetry we have that v is S-concave in every interval in $$(A_{*},\infty )$$ and that the mapping $$y \mapsto \frac{v(y)-v(x)}{S(y)-S(x)}$$ is decreasing provided $$y \ne x$$.
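To make the role of the scale function concrete, the following sketch computes S for a geometric Brownian motion and evaluates the exit probabilities in (6.9). The drift and volatility are illustrative choices, not taken from the paper; the closed form $$S'(y) = y^{\beta - 1}$$ with $$\beta = 1 - 2\mu /\sigma ^2$$ (up to an affine transformation) is a standard fact for this diffusion, which we cross-check by direct numerical integration.

```python
import math

# Hedged sketch (illustrative parameters, not from the paper): scale function
# of a geometric Brownian motion dX = mu*X dt + sigma*X dW.  Up to an affine
# transformation, S'(y) = y**(beta - 1) with beta = 1 - 2*mu/sigma**2, hence
# S(x) = x**beta / beta.

mu, sigma = 0.05, 0.3
beta = 1.0 - 2.0 * mu / sigma**2

def S(x):
    # closed-form scale function (one convenient normalisation)
    return x**beta / beta

def S_increment(x0, x1, n=20_000):
    # trapezoidal approximation of int_{x0}^{x1} S'(y) dy, as a cross-check
    h = (x1 - x0) / n
    total = 0.0
    for i in range(n + 1):
        y = x0 + i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * y**(beta - 1.0) * h
    return total

err = abs(S_increment(1.0, 2.0) - (S(2.0) - S(1.0)))

# exit probabilities from (6.9) for c < x < d
c, x, d = 1.0, 1.5, 2.0
p_down = (S(d) - S(x)) / (S(d) - S(c))   # P_x(tau_c < tau_d)
p_up = (S(x) - S(c)) / (S(d) - S(c))     # P_x(tau_d < tau_c)
```

Note that S is only determined up to an affine transformation, which cancels in the ratios appearing in (6.9).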
From the results for the Dirichlet problem (see for example [49, (7.1.2)-(7.1.3)]) one can show that
\begin{aligned}&\mathbb {L}_{X}u = 0 \text { and } \mathbb {L}_{X}v = 0 \quad \text { in } C_{1} \cap C_{2}, \nonumber \\&u\big |_{\partial C_{1}} = G_{1} \text { and } u\big |_{\partial C_{2}} = H_{1}, \nonumber \\&v\big |_{\partial C_{1}} = H_{2} \text { and } v\big |_{\partial C_{2}} = G_{2} \nonumber \end{aligned}
where $$C_1 = D_1^c$$ and $$C_2 = D_2^{c}$$. The aim is to show that $$u'(A_{*}) = G'_{1}(A_{*})$$ and $$v'(B_{*}) = G'_{2}(B_{*})$$. These two conditions will be referred to as the principle of double smooth fit. Informally, this principle states that the optimal stopping boundary points $$A_{*}$$ and $$B_{*}$$ must be selected in such a way that u and v are respectively smooth at these points. The proof of this result follows in a similar way as the proof of Theorem 2.3 in [50]. We shall first state the following lemma, the proof of which can be found in [50].
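For Brownian motion one has $$\mathbb {L}_{X} = \tfrac{1}{2}\tfrac{d^{2}}{dx^{2}}$$, so functions harmonic on an interval are affine. The following hedged sketch (interval endpoints and boundary values are illustrative, not from the paper) solves the discrete Dirichlet problem with a tridiagonal (Thomas) solve and recovers the affine solution, in the spirit of [49, (7.1.2)-(7.1.3)].

```python
# Hedged sketch: discrete Dirichlet problem u'' = 0 on (A, B) with
# u(A) = gA, u(B) = gB, whose exact solution is the affine interpolant
# u(x) = gA + (gB - gA)*(x - A)/(B - A).  A, B, gA, gB are illustrative.

A, B, gA, gB = 0.0, 1.0, 2.0, 5.0
n = 99                                   # interior grid points
h = (B - A) / (n + 1)
xs = [A + i * h for i in range(n + 2)]

# Thomas algorithm for the tridiagonal system u_{i-1} - 2 u_i + u_{i+1} = 0
sub = [1.0] * n                          # sub-diagonal
diag = [-2.0] * n                        # diagonal
sup = [1.0] * n                          # super-diagonal
rhs = [0.0] * n
rhs[0] -= gA                             # boundary value folded into the rhs
rhs[-1] -= gB
for i in range(1, n):                    # forward elimination
    w = sub[i] / diag[i - 1]
    diag[i] -= w * sup[i - 1]
    rhs[i] -= w * rhs[i - 1]
u = [0.0] * n                            # back substitution
u[-1] = rhs[-1] / diag[-1]
for i in range(n - 2, -1, -1):
    u[i] = (rhs[i] - sup[i] * u[i + 1]) / diag[i]
u = [gA] + u + [gB]

exact = [gA + (gB - gA) * (x - A) / (B - A) for x in xs]
max_err = max(abs(p - q) for p, q in zip(u, exact))
```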

## Lemma 6.5

Suppose that $$f,g:\mathbb {R}_{+} \rightarrow \mathbb {R}$$ are two continuous functions such that $$f(0)=g(0)=0$$, $$f(\varepsilon ) > 0$$ whenever $$\varepsilon > 0$$, and $$g(\delta ) > 0$$ whenever $$\delta > 0$$. Then for every $$\varepsilon _{n} \downarrow 0$$ as $$n \rightarrow \infty$$, there exists $$\varepsilon _{n_{k}} \downarrow 0$$ and $$\delta _{k} \downarrow 0$$ as $$k \rightarrow \infty$$ such that $$\lim _{k \rightarrow \infty } \frac{f(\varepsilon _{n_{k}})}{g(\delta _{k})} = 1.$$
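A minimal numerical illustration of Lemma 6.5, with illustrative choices of f and g (including a non-monotone g): for each $$\varepsilon _{n_k}$$ we locate $$\delta _k$$ with $$g(\delta _k) = f(\varepsilon _{n_k})$$ by bisection, which is possible by continuity and $$g(0)=0$$, so the ratio tends to 1.

```python
import math

# Hedged illustration of Lemma 6.5 (f and g are illustrative, not from the
# paper): f, g are continuous, vanish at 0, and are strictly positive off 0.
# For each eps_k we solve g(delta_k) = f(eps_k) by bisection, giving
# f(eps_k)/g(delta_k) -> 1 with delta_k -> 0 (since g(d) >= d here).

def f(e):
    return e * e

def g(d):
    # continuous at 0, positive for d > 0, but not monotone
    return d * (2.0 + math.sin(1.0 / d)) if d > 0 else 0.0

def solve_g(target, hi=1.0, iters=200):
    # bisection: invariant g(lo) < target <= g(hi), valid since g(0) = 0
    lo = 0.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if g(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

eps = [2.0 ** (-n) for n in range(1, 15)]
deltas = [solve_g(f(e)) for e in eps]
ratios = [f(e) / g(d) for e, d in zip(eps, deltas)]
```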

## Proposition 6.6

Suppose that $$D_{1}$$ is of the form $$(-\infty ,A_{*}]$$ and $$D_{2}$$ of the form $$[B_{*},\infty )$$ for some points $$A_{*},B_{*}$$ such that $$A_{*} < B_{*}$$. Suppose that $$G_{1}$$ is differentiable at $$A_{*}$$ and $$G_{2}$$ is differentiable at $$B_{*}$$. If the scale function S of X is differentiable at $$A_{*}$$ and $$B_{*}$$, then $$u'(A_{*}) = G'_{1}(A_{*})$$ and $$v'(B_{*}) = G'_{2}(B_{*})$$.

## Proof

We shall first consider the case when $$S'(A_{*}) \ne 0$$. Since u is superharmonic in $$\left( -\infty ,B_{*}\right)$$ we have that $$\mathsf {E}_{x}[u(X_{\rho \wedge \sigma _{B_*}})] \le u(x)$$ for all stopping times $$\rho$$ and for all $$x\in \mathbb {R}$$. Define the exit time $$\tau _{\varepsilon }=\inf \left\{ t\ge 0:X_{t}\notin \left( A_{*}-\varepsilon ,A_{*}+\varepsilon \right) \right\}$$ for $$\varepsilon >0$$ such that $$A_{*}+\varepsilon < B_{*}.$$ Then
\begin{aligned}&\mathsf {E}_{A_{*}}[u(X_{\tau _{\varepsilon } \wedge \sigma _{D_{2}}})] = \mathsf {E}_{A_{*}}[u(X_{\tau _{\varepsilon }})] \nonumber \\&\quad \le u(A_{*}) = u(A_{*})\mathsf {P}_{A_{*}}(X_{\tau _{\varepsilon }} = A_{*}+\varepsilon ) + G_{1}(A_{*}) \mathsf {P}_{A_{*}}(X_{\tau _{\varepsilon }} = A_{*}- \varepsilon )\qquad \end{aligned}
(6.12)
where the last equality follows from the fact that $$u(A_{*})= G_{1}(A_{*})$$. On the other hand we have
\begin{aligned} \mathsf {E}_{A_{*}}[u(X_{\tau _{\varepsilon }})]= u(A_{*}+\varepsilon )\mathsf {P}_{A_{*}}(X_{\tau _{\varepsilon }} = A_{*}+\varepsilon ) + G_{1}(A_{*}-\varepsilon ) \mathsf {P}_{A_{*}}(X_{\tau _{\varepsilon }} = A_{*}- \varepsilon ). \end{aligned}
(6.13)
By combining (6.12) and (6.13) it follows that
\begin{aligned}&(u(A_{*}+\varepsilon ) - u(A_{*})) \mathsf {P}_{A_{*}}\left( X_{\tau _{\varepsilon }}=A_{*}+\varepsilon \right) \nonumber \\&\quad \le \left( G_{1}(A_{*})-G_{1}(A_{*}-\varepsilon )\right) \mathsf {P}_{A_{*}}\left( X_{\tau _{\varepsilon }}=A_{*}-\varepsilon \right) . \end{aligned}
(6.14)
Since we are assuming that S and $$G_{1}$$ are differentiable at $$A_{*}$$ and that $$S'(A_{*}) \ne 0$$ we get, upon using the facts $$\mathsf {P}_{A_{*}}(X_{\tau _{\varepsilon }} = A_{*}-\varepsilon ) = \frac{S(A_{*}+\varepsilon )-S(A_{*})}{S(A_{*}+\varepsilon ) - S(A_{*}-\varepsilon )}$$ and $$\mathsf {P}_{A_{*}}(X_{\tau _{\varepsilon }} = A_{*}+\varepsilon ) = \frac{S(A_{*})-S(A_{*}-\varepsilon )}{S(A_{*}+\varepsilon ) - S(A_{*}-\varepsilon )}$$, that
\begin{aligned}&\frac{(u(A_{*}+\varepsilon ) - u(A_{*})) (S(A_{*})-S(A_{*}-\varepsilon ))}{S(A_{*}+\varepsilon ) - S(A_{*}-\varepsilon )} \\&\quad \le \frac{( G_{1}(A_{*})-G_{1}(A_{*}-\varepsilon ))(S(A_{*}+\varepsilon ) - S(A_{*}))}{S(A_{*}+\varepsilon )-S(A_{*}-\varepsilon )} \end{aligned}
which is equivalent to (since the scale function is increasing)
\begin{aligned}&\frac{u(A_{*}+\varepsilon ) - u(A_{*})}{\varepsilon } \le \frac{( G_{1}(A_{*})-G_{1}(A_{*}-\varepsilon ))}{\varepsilon } \frac{\frac{(S(A_{*}+\varepsilon ) - S(A_{*}))}{\varepsilon }}{\frac{(S(A_{*}) - S(A_{*}-\varepsilon ))}{\varepsilon }}. \end{aligned}
(6.15)
Taking the limit as $$\varepsilon \downarrow 0$$ on both sides of (6.15) and using the fact that $$G_1$$ and S are differentiable at $$A_*$$ and that $$S'(A_*) \ne 0$$ we get that $$u'(A_{*}+) = \lim _{\varepsilon \downarrow 0} \frac{u(A_*+ \varepsilon ) - u(A_*)}{\varepsilon } \le G_1'(A_*)$$. On the other hand we have
\begin{aligned}&\frac{u(A_{*}+\varepsilon ) - u(A_{*})}{\varepsilon } \ge \frac{G_{1}\left( A_{*}+\varepsilon \right) -G_{1}\left( A_{*}\right) }{\varepsilon }. \end{aligned}
(6.16)
Taking limits on both sides of (6.16) as $$\varepsilon \downarrow 0$$ we get that $$u'(A_*+) \ge G_{1}'(A_{*})$$. So we can conclude that $$u'(A_*+) = G_1'(A_*)$$. On the other hand, since $$u(A_{*}-\varepsilon ) = G_{1}(A_{*}-\varepsilon )$$ for all $$\varepsilon >0$$ sufficiently small, we have that $$u'(A_*-) = \lim _{\varepsilon \downarrow 0} \frac{u(A_*) - u(A_*- \varepsilon )}{\varepsilon } = G_{1}'(A_{*})$$. So the result is proved in the case $$S'(A_{*}) \ne 0$$. Now suppose that $$S'(A_{*}) = 0$$. Since $$G_{1} \le u$$, we have, for any $$\varepsilon ,\delta > 0$$ sufficiently small, that
\begin{aligned}&\frac{G_{1}(A_{*}+\varepsilon ) - G_{1}(A_{*})}{S(A_{*}+\varepsilon )-S(A_{*})} \le \frac{u(A_{*}+\varepsilon )-u(A_{*})}{S(A_{*}+\varepsilon )-S(A_{*})} \le \frac{u(A_{*}-\delta )-u(A_{*})}{S(A_{*}-\delta )-S(A_{*})} \nonumber \\&\qquad = \frac{G_{1}(A_{*}-\delta )-G_{1}(A_{*})}{S(A_{*}-\delta )-S(A_{*})} \end{aligned}
(6.17)
where the second inequality follows from the fact that the mapping $$y \mapsto \frac{u(y)-u(x)}{S(y)-S(x)}$$ is decreasing. Multiplying both sides of (6.17) by $$\frac{S(A_{*}+\varepsilon )- S(A_{*})}{\varepsilon }$$ and using the fact that the scale function is increasing we get
\begin{aligned}&\frac{G_{1}(A_{*}+\varepsilon ) - G_{1}(A_{*})}{\varepsilon } \le \frac{u(A_{*}+\varepsilon )-u(A_{*})}{\varepsilon } \nonumber \\&\quad \le \frac{\frac{G_{1}(A_{*}-\delta )-G_{1}(A_{*})}{-\delta }}{\frac{S(A_{*}-\delta )-S(A_{*})}{-\delta }}\frac{S(A_{*}+\varepsilon )-S(A_{*})}{\varepsilon } \end{aligned}
(6.18)
Setting $$f(\varepsilon ) := \frac{S(A_{*}+\varepsilon ) - S(A_{*})}{\varepsilon }$$ and $$g(\delta ) := \frac{S(A_{*}-\delta ) - S(A_{*})}{-\delta }$$ in Lemma 6.5 we get that for any $$\varepsilon _{n} \downarrow 0$$, there exists $$\varepsilon _{n_{k}} \downarrow 0$$ and $$\delta _{k} \downarrow 0$$ as $$k \rightarrow \infty$$ such that
\begin{aligned} G_{1}'(A_{*}) \le \lim _{k \rightarrow \infty } \frac{u(A_{*}+\varepsilon _{n_{k}}) - u(A_{*})}{\varepsilon _{n_{k}}} \le \lim _{k \rightarrow \infty } \frac{f(\varepsilon _{n_k})}{g(\delta _{k})} \frac{G_{1}(A_{*}-\delta _k) - G_1(A_*)}{-\delta _k} = G'_{1}(A_{*}). \end{aligned}
(6.19)
From this it follows that $$u'(A_*+) = G'_{1}(A_{*})$$. On the other hand, if we multiply (6.17) by $$\frac{S(A_{*}-\delta )- S(A_{*})}{-\delta }$$ we get
\begin{aligned} \frac{\frac{G_{1}(A_{*}+\varepsilon )-G_{1}(A_{*})}{\varepsilon }}{\frac{S(A_{*}+\varepsilon )-S(A_{*})}{\varepsilon }}\frac{S(A_{*}-\delta )-S(A_{*})}{-\delta } \le \frac{u(A_{*}-\delta )-u(A_{*})}{-\delta } = \frac{G_{1}(A_{*}-\delta )-G_{1}(A_{*})}{-\delta } \end{aligned}
(6.20)
Interchanging $$\varepsilon$$ and $$\delta$$ in (6.20) and using Lemma 6.5 again with $$f(\varepsilon ):= \frac{S(A_{*}) - S(A_{*}-\varepsilon )}{\varepsilon }$$ and $$g(\delta ):= \frac{S(A_{*}+\delta ) - S(A_{*})}{\delta }$$ we get that $$u'(A_{*}-) = G_{1}'(A_{*})$$. This assertion together with $$u'(A_{*}+) = G_{1}'(A_{*})$$ proves the required result. By symmetry one can show that $$v'(B_{*})=G_{2}'(B_{*})$$. $$\square$$
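As a sanity check on the single-boundary analogue of this principle, one can verify smooth fit numerically in the textbook perpetual American put under geometric Brownian motion (in the spirit of [49]; the parameters below are illustrative, not from the paper). There the candidate value is $$V_b(x) = (K-b)(x/b)^{-\beta }$$ for $$x > b$$ with $$\beta = 2r/\sigma ^2$$, and smooth fit $$V_b'(b) = -1$$ pins down the boundary $$b_* = \beta K/(1+\beta )$$.

```python
# Hedged sketch of (single) smooth fit for the perpetual American put under
# geometric Brownian motion; r, sigma, K are illustrative parameters.
#   V_b(x) = K - x        for x <= b,
#   V_b(x) = (K - b)*(x/b)**(-beta)  for x > b,  beta = 2*r/sigma**2,
# and smooth fit V_b'(b) = -1 gives b* = beta*K/(1 + beta).

r, sigma, K = 0.04, 0.25, 10.0
beta = 2.0 * r / sigma**2
b_star = beta * K / (1.0 + beta)

def V(x, b):
    return K - x if x <= b else (K - b) * (x / b) ** (-beta)

# one-sided difference quotients of V(., b_star) at the boundary b_star
h = 1e-6
left = (V(b_star, b_star) - V(b_star - h, b_star)) / h
right = (V(b_star + h, b_star) - V(b_star, b_star)) / h
```

Both one-sided slopes come out close to $$-1$$, i.e. the two halves of the value function paste together smoothly at $$b_*$$; the double smooth fit of Proposition 6.6 imposes the analogous matching at both $$A_*$$ and $$B_*$$ simultaneously.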
Figure 2a, b show that there can exist more than one Nash equilibrium point. In this example, if both players cooperate and decide to stop the process in the regions $$D_{1}$$ and $$D_{2}$$ given in Fig. 2a, then their expected gains are higher than those earned if they stop the process in the regions given in Fig. 2b. However, this is not the case in the example presented in Fig. 2c, d, where it is evident that nothing is gained if the players cooperate. For a more detailed study of the existence and uniqueness of a Nash equilibrium in the case of Brownian motion absorbed in [0,1], and when the payoff functions $$G_i$$ are of a similar form to those presented in Fig. 2, the reader is referred to [1].

## Footnotes

1. The second manuscript was available to the author after the first draft of the paper was published on The University of Manchester website http://www.maths.manchester.ac.uk/our-research/research-groups/statistics-and-its-applications/research-reports/.

## Notes

### Acknowledgements

The author is grateful to Professor Goran Peskir for introducing the topic of optimal stopping games, for the many fruitful discussions on the subtleties of Markov processes and the principles of smooth and continuous fit in zero-sum games, and for providing insight into the variational approach as a way of observing and understanding the principles of ‘double smooth fit’ and ‘double continuous fit’ in nonzero-sum games.

## References

1. Attard, N.: Nash equilibrium in nonzero-sum games of optimal stopping for Brownian motion. Research Report No. 2, Probab. Statist. Group Manchester (2015)
2. Bensoussan, A., Friedman, A.: Nonlinear variational inequalities and differential games with stopping times. J. Funct. Anal. 16, 305–352 (1974)
3. Bensoussan, A., Friedman, A.: Nonzero-sum stochastic differential games with stopping times and free boundary value problem. Trans. Am. Math. Soc. 213, 275–327 (1977)
4. Boetius, F.: Bounded variation singular stochastic control and Dynkin game. SIAM J. Control Optim. 44, 1289–1321 (2005)
5. Bismut, J.M.: Sur un problème de Dynkin. Z. Wahrsch. Verw. Geb. 39, 31–53 (1977)
6. Blumenthal, R.M., Getoor, R.K.: Markov Processes and Potential Theory. Academic Press, New York (1968)
7. Carr, P., Geman, H., Madan, D., Yor, M.: The fine structure of returns: an empirical investigation. J. Bus. 75, 305–332 (2002)
8. Cattiaux, P., Lepeltier, J.P.: Existence of a quasi-Markov Nash equilibrium for nonzero-sum Markov stopping games. Stoch. Stoch. Rep. 30, 85–103 (1990)
9. Chen, N., Dai, M., Wan, X.: A nonzero-sum game approach to convertible bonds: tax benefit, bankruptcy cost and early/late calls. Math. Financ. 23(1), 57–93 (2013)
10. Chung, K.L., Walsh, J.B.: Markov Processes, Brownian Motion, and Time Symmetry. Springer, New York (2005)
11. Cvitanic, I., Karatzas, I.: Backward SDEs with reflection and Dynkin games. Ann. Probab. 24, 2024–2056 (1996)
12. De Angelis, T., Ferrari, G., Moriarty, J.: Nash equilibria of threshold type for two-player nonzero-sum games of stopping (2015). arXiv:1508.03989
13. Dynkin, E.: Markov Processes, Volume 1. Academic Press, New York (1965)
14. Dynkin, E.: Game variant of a problem of optimal stopping. Soviet Math. Dokl. 10, 16–19 (1969)
15. Ekström, E.: Properties of game options. Math. Methods Oper. Res. 63, 221–238 (2006)
16. Ekström, E., Peskir, G.: Optimal stopping games for Markov processes. SIAM J. Control Optim. 47, 684–702 (2008)
17. Ekström, E., Villeneuve, S.: On the value of optimal stopping games. Ann. Appl. Probab. 16, 1576–1596 (2006)
18. Elbakidze, N.V.: Construction of the cost and optimal policies in a game problem of stopping a Markov process. Theory Probab. Appl. 21, 163–168 (1976)
19. Emmerling, T.J.: Perpetual cancellable American call options. Math. Financ. 22, 645–666 (2012)
20. Friedman, A.: Stochastic games and variational inequalities. Arch. Ration. Mech. Anal. 51, 321–341 (1973a)
21. Friedman, A.: Regularity theorem for variational inequalities in unbounded domains and applications to stopping time problems. Arch. Ration. Mech. Anal. 76, 134–160 (1973b)
22. Frid, E.B.: The optimal stopping rule for a two-person Markov chain with opposing interests. Theory Probab. Appl. 14, 713–716 (1969)
23. Fukushima, M., Taksar, M.: Dynkin games via Dirichlet forms and singular control of one-dimensional diffusions. SIAM J. Control Optim. 41, 682–699 (2002)
24. Gapeev, P.V.: The spread option optimal stopping game. In: Kyprianou, A., et al. (eds.) Exotic Option Pricing and Advanced Lévy Models, pp. 205–293. Wiley, New York (2006)
25. Gapeev, P.V., Kühn, C.: Perpetual convertible bonds in jump-diffusion models. Stat. Decis. 23, 15–31 (2005)
26. Gusein-Zade, S.M.: On a game connected with a Wiener process. Theory Probab. Appl. 14, 701–704 (1969)
27. Hamadène, S., Hassani, M.: BSDEs with two reflecting barriers: the general result. Probab. Theory Relat. Fields 132, 237–264 (2005)
28. Hamadène, S., Zhang, M.: The continuous time nonzero-sum Dynkin game problem and application in game options. SIAM J. Control Optim. 48(5), 3659–3669 (2010)
29. Huang, C.-F., Li, L.: Continuous time stopping games with monotone reward structures. Math. Oper. Res. 15, 496–507 (1990)
30. Kallsen, J., Kühn, C.: Pricing derivatives of American and game type in incomplete markets. Financ. Stoch. 9, 261–284 (2004)
31. Karatzas, I., Wang, H.: Connections between bounded-variation control and Dynkin games. In: Volume in Honor of Professor Alain Bensoussan, pp. 353–362. IOS Press, Amsterdam (2001)
32. Kifer, Y.: Optimal stopping in games with continuous time. Theory Probab. Appl. 16, 545–550 (1971)
33. Kifer, Y.: Game options. Financ. Stoch. 4, 443–463 (2000)
34. Kühn, C.: Game contingent claims in complete and incomplete markets. J. Math. Econom. 40, 889–902 (2004)
35. Kühn, C., Kyprianou, A.E.: Callable puts as composite exotic options. Math. Financ. 17, 487–502 (2007)
36. Kühn, C., Kyprianou, A.E., Van Schaik, K.: Pricing Israeli options: a pathwise approach. Stochastics 79, 117–137 (2007)
37. Kunita, H.: Stochastic differential equations based on Lévy processes and stochastic flows of diffeomorphisms. In: Rao, M.M. (ed.) Real and Stochastic Analysis: New Perspectives (Trends in Mathematics), pp. 305–373. Birkhäuser, Boston (2004)
38. Kyprianou, A.E.: Some calculations for Israeli options. Financ. Stoch. 8, 73–86 (2004)
39. Laraki, R., Solan, E.: The value of zero-sum stopping games in continuous time. SIAM J. Control Optim. 43, 1913–1922 (2005)
40. Laraki, R., Solan, E.: Equilibrium in two-player nonzero-sum Dynkin games in continuous time. Stochastics 85, 997–1014 (2012)
41. Lepeltier, J.P., Maingueneau, M.A.: Le jeu de Dynkin en théorie générale sans l'hypothèse de Mokobodski. Stochastics 13, 25–44 (1984)
42. Morimoto, Y.: Nonzero-sum discrete parameter stochastic games with stopping times. Probab. Theory Relat. Fields 72, 155–160 (1986)
43. Nagai, H.: Nonzero-sum stopping games of symmetric Markov processes. Probab. Theory Relat. Fields 75, 487–497 (1987)
44. Neveu, J.: Discrete-Parameter Martingales. North-Holland Publishing Company, Amsterdam (1975)
45. Ohtsubo, Y.: Optimal stopping in sequential games with or without a constraint of always terminating. Math. Oper. Res. 12, 277–296 (1986)
46. Ohtsubo, Y.: A nonzero-sum extension of Dynkin stopping problem. Math. Oper. Res. 11, 591–607 (1987)
47. Ohtsubo, Y.: On a discrete-time nonzero-sum Dynkin problem with monotonicity. J. Appl. Probab. 28, 466–472 (1991)
48. Peskir, G., Shiryaev, A.N.: Sequential testing problems for Poisson processes. Ann. Stat. 28, 837–859 (2000)
49. Peskir, G., Shiryaev, A.N.: Optimal Stopping and Free-Boundary Problems. Lectures in Mathematics, ETH Zürich. Birkhäuser, Basel (2006)
50. Peskir, G.: Principle of smooth fit and diffusions with angles. Stochastics 79, 239–302 (2006)
51. Peskir, G.: Optimal stopping games and Nash equilibrium. Theory Probab. Appl. 53, 558–571 (2008)
52. Peskir, G.: A duality principle for the Legendre transform. J. Convex Anal. 19, 609–630 (2012)
53. Sharpe, M.: General Theory of Markov Processes. Academic Press, New York (1988)
54. Shmaya, E., Solan, E.: Two-player nonzero-sum stopping games in discrete time. Ann. Probab. 32, 2733–2764 (2004)
55. Stettner, L.: Zero-sum Markov games with stopping and impulsive strategies. Appl. Math. Optim. 9, 1–24 (1982)