On the problems of sequential statistical inference for Wiener processes with delayed observations

Abstract

We study the sequential hypothesis testing and quickest change-point (or disorder) detection problems with linear delay penalty costs for observable Wiener processes under (constantly) delayed detection times. The method of proof consists of the reduction of the associated delayed optimal stopping problems for one-dimensional diffusion processes to the equivalent free-boundary problems and solution of the latter problems by means of the smooth-fit conditions. We derive closed-form expressions for the Bayesian risk functions and optimal stopping boundaries for the associated weighted likelihood ratio processes in the original problems of sequential analysis.

Introduction

The problem of sequential testing for two simple hypotheses about the drift rate of an observable Wiener process (or Brownian motion) is to detect the form of its constant drift rate from one of the two given alternatives. In the Bayesian formulation of this problem, it is assumed that these alternatives have an a priori given distribution. The problem of quickest change-point (or disorder) detection for a Wiener process is to find a stopping (alarm) time \(\tau \) which is as close as possible to the unknown change-point time \(\theta \) at which the local drift rate of the process changes from one constant value to another. In the classical Bayesian formulation, it is assumed that the random time \(\theta \) takes the value 0 with probability \(\pi \) and is exponentially distributed given that \(\theta > 0\). Such problems have found applications in many real-world systems in which the amount of observation data increases over time (see, e.g. Carlstein et al. 1994; Poor and Hadjiliadis 2008; Shiryaev 2019 for an overview).

The sequential testing and quickest change-point detection problems were originally formulated and solved for sequences of observable independent identically distributed random variables by Shiryaev (1978, Chap. IV, Sects. 1 and 3) (see also references to original sources therein). The first solutions of these problems in the continuous-time setting were obtained in the case of observable Wiener processes with constant drift rates by Shiryaev (1978, Chap. IV, Sects. 2 and 4) (see also references to original sources therein). The standard disorder problem for observable Poisson processes with unknown intensities was introduced and solved by Davis (1976) under certain restrictions on the model parameters. Peskir and Shiryaev (2000, 2002) solved both problems of sequential analysis for Poisson processes in full generality (see also Peskir and Shiryaev 2006, Chap. VI, Sects. 23 and 24). The method of solution of these problems was based on the reduction of the associated optimal stopping problems for the posterior probability processes to the equivalent free boundary problems for ordinary (integro-)differential operators and a unique characterisation of the Bayesian risks by means of the smooth- and continuous-fit conditions for the value functions at the optimal stopping boundaries. Further investigations of both problems for observable Wiener processes were provided in Gapeev and Peskir (2004, 2006) in the finite-horizon setting. The sequential testing and quickest change-point detection problems in the distributional properties of certain observable time-homogeneous diffusion processes were studied in Gapeev and Shiryaev (2011, 2013) on infinite time intervals.

These two classical problems of sequential analysis for the case of observable compound Poisson processes, in which the unknown probabilistic characteristics were the intensities and distributions of jumps, were investigated by Dayanik and Sezer (2006a, b). Some multidimensional extensions of the problems with several observable independent compound Poisson and Wiener processes were considered by Dayanik et al. (2008) and Dayanik and Sezer (2012). Other formulations of the change-point detection problem for Poisson processes for various types of probabilities of false alarms (including the delayed probability of false alarm) and delay penalty costs were studied by Bayraktar et al. (2005). More general versions of the standard Poisson disorder problem were solved by Bayraktar et al. (2006), where the intensities of the observable processes changed to certain unknown values. These problems for observable jump processes were solved by successive approximations of the value functions of the corresponding optimal stopping problems. The same method was also applied for the solution of the disorder problem for an observable Wiener process by Sezer (2010), in which the disorder occurs at one of the arrival times of an observable Poisson process. More recently, closed-form solutions of both problems of sequential analysis for observable (time-homogeneous diffusion) Bessel processes were obtained by Johnson and Peskir (2017, 2018).

The aim of this paper is to address these two problems of statistical sequential analysis in their Bayesian formulations for observable Wiener processes under constantly delayed detection times. We formulate a unifying delayed optimal stopping problem for the appropriate weighted likelihood ratio processes representing time-homogeneous diffusions and make an assumption that the optimal stopping times are the first times at which these processes exit from certain regions restricted by constant boundaries. It is verified that the left- and right-hand optimal stopping boundaries provide the minimal and maximal solutions of the associated systems of arithmetic equations whenever they exist.

The question of sequential analysis under delayed observations was raised by Anderson (1964). This idea was taken further by Chang and Ehrenfeld (1972) (see also Chang 1972). The Bayesian and variational sequential hypotheses testing problems on the drift of an observable Wiener process under randomly delayed observations were studied by Galtchouk and Nobelis (1999, 2000) (see also Miroshnitchenko 1979). Other optimal sequential estimation procedures for parameters and continuous distribution functions from sequences of random variables under delayed observations were considered by Magiera (1998), Jokiel-Rokita and Stȩpień (2009), Stȩpień-Baran (2011), and Baran and Stȩpień-Baran (2013) among others. More recently, Shiryaev (2019, Chap. VI) introduced a classification of quickest detection problems for observable Wiener processes.

The paper is organised as follows. In Sect. 2, we formulate unifying optimal stopping problems for the time-homogeneous weighted likelihood ratio diffusion processes and show how these problems arise from the Bayesian sequential testing and quickest change-point detection settings. We formulate the associated free-boundary problems and derive closed-form solutions of the equivalent systems of arithmetic equations for the optimal stopping boundaries. In Sect. 3, we verify that the uniquely specified solutions of the free-boundary problems provide the solutions of the original optimal stopping problems. In Sect. 4, we reproduce the derivation, given in Gapeev and Peskir (2006, Sect. 4), of the explicit expression for the transition density function of the weighted likelihood ratio process in the quickest change-point detection problem (see also Peskir and Shiryaev 2006, Chap. VI, Sect. 24).

Preliminaries

In this section, we give a formulation of the unifying optimal stopping problem for a one-dimensional time-homogeneous regular diffusion process and consider the associated partial and ordinary differential free boundary problems.

Formulation of the problem

For a precise formulation of the problem, let us consider a probability space \((\Omega , \mathcal {G}, P)\) with a standard Brownian motion \({\overline{B}} = ({\overline{B}}_t)_{t \ge 0}\). Let \(\Phi ^i = (\Phi ^i_t)_{t \ge 0}\) be a one-dimensional time-homogeneous diffusion process with the state space \([0, \infty )\), which is a pathwise (strong) solution of the stochastic differential equation

$$\begin{aligned} d\Phi ^i_t&= \eta _i(\Phi ^i_t) \, dt + \zeta _i(\Phi ^i_t) \, d{\overline{B}}_t \quad (\Phi ^i_0 = \phi ) \end{aligned}$$
(2.1)

where \(\eta _i(\phi )\) and \(\zeta _i(\phi ) > 0\) are some continuously differentiable functions of at most linear growth in \(\phi \) on \([0, \infty )\), for every \(i = 1, 2\) fixed. Let us consider an optimal stopping problem with the value function

$$\begin{aligned} V^*_i(\phi ; \delta ) = \inf _{\tau } E_{\phi } \bigg [ F_i(\Phi ^i_{\tau + \delta }) + \int _0^{\tau + \delta } H_i(\Phi ^i_{s}) \, ds \bigg ] \end{aligned}$$
(2.2)

where \(E_{\phi }\) denotes the expectation under the assumption that \(\Phi ^i_0 = \phi \), for some \(\phi \in [0, \infty )\). Here, the gain function \(F_i(\phi )\) and the cost function \(H_i(\phi )\) are assumed to be non-negative, continuous and bounded, while \(F_i(\phi )\) is also continuously differentiable on \((0, c') \cup (c', \infty )\), for some \(c' \in [0, \infty ]\), for \(i = 1, 2\). It is assumed that the infimum in (2.2) is taken over all \(\delta \)-delayed stopping times \(\tau + \delta \), where \(\tau \) is a stopping time with respect to the natural filtration \((\mathcal{F}_t)_{t \ge 0}\) of the process \(\Phi ^i\), for any \(i = 1, 2\), and \(\delta > 0\) is given and fixed. Hence, it follows from the structure of the reward in (2.2) that the inequality \(V^*_i(\phi ; \delta ') \le V^*_i(\phi ; \delta )\) holds, for all \(\phi \ge 0\), and each \(0 \le \delta ' < \delta \) fixed. Such \(\delta \)-delayed stopping times were introduced in Øksendal (2005, Definition 1.1), and optimal stopping and stochastic control problems involving such stopping times were subsequently studied in Bar-Ilan and Sulem (1995), Alvarez and Keppo (2002), Bayraktar and Egami (2007), Consteniuc et al. (2008), Øksendal and Sulem (2008), and Lempa (2012) among others.

Example 1

(Sequential testing problem) Suppose that we observe a continuous process \(X^1 = (X^1_t)_{t\ge 0}\) of the form \(X^1_t = \theta \mu t + \sigma B_t\), for \(t \ge 0\), with some \(\mu \ne 0\) and \(\sigma > 0\) fixed, where \(B=(B_t)_{t\ge 0}\) is a standard Brownian motion which is independent of the random variable \(\theta \). We assume that \(P(\theta =1)=\pi \) and \(P(\theta =0)=1-\pi \) holds, for some \(\pi \in (0, 1)\) fixed. The problem of sequential testing of two simple hypotheses about the values of the parameter \(\theta \) can be embedded into the optimal stopping problem of (2.2), for \(i = 1\), with \(F_1(\phi )=((a' \phi ) \wedge b')/(1+\phi )\) and \(H_1(\phi ) = 1\), where \(a', b' > 0\) are some given constants (see, e.g. Shiryaev 1978, Chap. IV, Sect. 2; Peskir and Shiryaev 2006, Chap. VI, Sect. 21). In this case, the weighted likelihood ratio process \(\Phi ^1\) takes the form

$$\begin{aligned} \Phi ^1_t = \frac{\pi }{1-\pi } \, L^1_t \quad \text {with} \quad L^1_t = \exp \bigg ( \frac{\mu }{\sigma ^2} \, X^1_t - \frac{\mu ^2}{2 \sigma ^2} \, t \bigg ) \end{aligned}$$
(2.3)

and thus, \(\Phi ^1\) solves the stochastic differential equation in (2.1) with the coefficients \(\eta _1(\phi ) = (\mu \phi /\sigma )^2/({1+\phi })\) and \(\zeta _1(\phi ) = \mu \phi /\sigma \), where the process \({\overline{B}} = ({\overline{B}}_t)_{t \ge 0}\) defined by

$$\begin{aligned} {\overline{B}}_t = X^1_t - \frac{\mu }{\sigma } \int _0^t \frac{\Phi ^1_s}{1 + \Phi ^1_s} \, ds \end{aligned}$$
(2.4)

is an innovation standard Brownian motion generating the same filtration \((\mathcal {F}_t)_{t \ge 0}\) as the process \(\Phi ^1\). The consideration of this model will be continued in Example 3 below.
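
As a numerical illustration (not part of the original analysis; all parameter values, function and variable names below are our own illustrative choices), the following sketch simulates an observation path \(X^1\) under a randomly drawn hypothesis and evaluates the weighted likelihood ratio process \(\Phi ^1\) of (2.3) on a discrete time grid; the ratio \(\Phi ^1_t/(1+\Phi ^1_t)\) then gives the posterior probability of the hypothesis \(\{\theta = 1\}\).

```python
import numpy as np

# Minimal simulation sketch (not from the paper): generate an observation path
# X^1_t = theta*mu*t + sigma*B_t and compute the weighted likelihood ratio
# Phi^1_t of (2.3) on a time grid.  Parameter values are illustrative.
rng = np.random.default_rng(0)
mu, sigma, pi0 = 1.0, 1.0, 0.5   # drift, volatility, prior P(theta = 1)
T, n = 5.0, 5000                 # horizon and number of grid steps
dt = T / n
t = np.linspace(0.0, T, n + 1)

theta = rng.random() < pi0                       # draw the hypothesis
dB = rng.normal(0.0, np.sqrt(dt), n)
X = np.concatenate(([0.0], np.cumsum(theta * mu * dt + sigma * dB)))

L1 = np.exp(mu / sigma**2 * X - mu**2 / (2 * sigma**2) * t)   # likelihood ratio
Phi1 = pi0 / (1 - pi0) * L1                                   # process (2.3)

posterior = Phi1 / (1 + Phi1)   # posterior probability of {theta = 1}
print(theta, posterior[-1])
```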

Example 2

(Quickest change-point detection problem) Suppose that we observe a continuous process \(X^2 = (X^2_t)_{t \ge 0}\) of the form \(X^2_t = \mu (t-\theta )^+ + \sigma B_t\), for \(t \ge 0\), with some \(\mu \ne 0\) and \(\sigma > 0\) fixed, where \(B=(B_t)_{t\ge 0}\) is a standard Brownian motion which is independent of the random variable \(\theta \). We assume that \(P(\theta =0)=\pi \) and \(P(\theta> t \, | \, \theta > 0) = e^{- \lambda t}\) holds for all \(t \ge 0\), and some \(\pi \in (0, 1)\) and \(\lambda > 0\) fixed. The problem of quickest detection of the change-point parameter \(\theta \) can be embedded into the optimal stopping problem of (2.2), for \(i = 2\), with \(F_2(\phi )=1/(1+\phi )\) and \(H_2(\phi ) = c \phi /(1+\phi )\), where \(c > 0\) is a given constant (see, e.g. Shiryaev 1978, Chap. IV, Sect. 4 and Peskir and Shiryaev 2006, Chap. VI, Sect. 22). In this case, the weighted likelihood ratio process \(\Phi ^2\) takes the form

$$\begin{aligned} \Phi ^2_t = \frac{L^2_t}{e^{-\lambda t}} \bigg ( \frac{\pi }{1-\pi } + \int _0^t\frac{\lambda e^{-\lambda s}}{L^2_s} \, ds \bigg ) \quad \text {with} \quad L^2_t = \exp \bigg ( \frac{\mu }{\sigma ^2} \, X^2_t - \frac{\mu ^2}{2 \sigma ^2} \, t \bigg ) \end{aligned}$$
(2.5)

and thus, \(\Phi ^2\) solves the stochastic differential equation in (2.1) with the coefficients \(\eta _2(\phi ) = \lambda (1+\phi ) + (\mu \phi /\sigma )^2/(1+\phi )\) and \(\zeta _2(\phi ) = \mu \phi /\sigma \), where the process \({\overline{B}} = ({\overline{B}}_t)_{t \ge 0}\) defined by

$$\begin{aligned} {\overline{B}}_t = X^2_t - \frac{\mu }{\sigma } \int _0^t \frac{\Phi ^2_s}{1 + \Phi ^2_s} \, ds \end{aligned}$$
(2.6)

is an innovation standard Brownian motion generating the same filtration \((\mathcal {F}_t)_{t \ge 0}\) as the process \(\Phi ^2\). The consideration of this model will be continued in Example 4 below. The classification of quickest detection problems for observable Wiener processes was recently introduced in Shiryaev (2019, Chap. VI).
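
In the same illustrative spirit (a sketch under assumed parameter values, not part of the original analysis), the following code draws a change point \(\theta \) from the prior of Example 2, simulates \(X^2\) and evaluates \(\Phi ^2\) directly from formula (2.5) by a trapezoidal approximation of the integral; \(\Phi ^2_t/(1+\Phi ^2_t)\) is then the posterior probability that the change has already occurred.

```python
import numpy as np

# Illustrative simulation sketch (not from the paper): draw a change point theta
# with P(theta=0)=pi0 and exponential(lambda) tail, generate
# X^2_t = mu*(t-theta)^+ + sigma*B_t, and evaluate Phi^2 via formula (2.5).
rng = np.random.default_rng(1)
mu, sigma, lam, pi0 = 1.0, 1.0, 0.5, 0.2
T, n = 10.0, 10000
dt = T / n
t = np.linspace(0.0, T, n + 1)

theta = 0.0 if rng.random() < pi0 else rng.exponential(1.0 / lam)
dB = rng.normal(0.0, np.sqrt(dt), n)
drift = mu * np.diff(np.maximum(t - theta, 0.0))
X = np.concatenate(([0.0], np.cumsum(drift + sigma * dB)))

L2 = np.exp(mu / sigma**2 * X - mu**2 / (2 * sigma**2) * t)
integrand = lam * np.exp(-lam * t) / L2
integral = np.concatenate(([0.0], np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * dt)))
Phi2 = L2 / np.exp(-lam * t) * (pi0 / (1 - pi0) + integral)   # formula (2.5)

posterior = Phi2 / (1 + Phi2)   # posterior probability that the change has occurred
print(theta, posterior[-1])
```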

Delayed optimal stopping problems

It follows from Øksendal (2005, Lemma 1.2) that the value functions in (2.2) admit the representations

$$\begin{aligned} V^*_i(\phi ; \delta ) = \inf _{\tau } E_{\phi } \bigg [ G_i(\Phi _{\tau }; \delta ) + \int _0^{\tau } H_i(\Phi _{t}) \, dt \bigg ] \end{aligned}$$
(2.7)

with

$$\begin{aligned} G_i(\phi ; \delta ) = E_{\phi } \bigg [ F_i(\Phi _{\delta }) + \int _0^{\delta } H_i(\Phi _{t}) \, dt \bigg ] \end{aligned}$$
(2.8)

for \(\delta > 0\) and any \(i = 1, 2\), where the infimum is taken over \((\mathcal{F}_t)_{t \ge 0}\)-stopping times \(\tau \) of finite expectation.
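
The functional \(G_i(\phi ; \delta )\) of (2.8) can also be approximated by plain Monte Carlo simulation of the stochastic differential equation (2.1), which is useful as a cross-check of the closed-form computations below. The sketch that follows is an assumption-laden illustration rather than a definitive implementation; the helper names and parameter values are ours, and it is shown with the coefficients of Example 1 plugged in.

```python
import numpy as np

# A hedged Monte Carlo sketch of the delay functional G_i(phi; delta) in (2.8):
# simulate Phi^i on [0, delta] by an Euler scheme for the SDE (2.1) and average
# F_i(Phi_delta) + int_0^delta H_i(Phi_t) dt over the paths.  The coefficient and
# payoff functions below are user-supplied placeholders.
def monte_carlo_G(phi, delta, eta, zeta, F, H, n_steps=200, n_paths=20000, seed=2):
    rng = np.random.default_rng(seed)
    dt = delta / n_steps
    x = np.full(n_paths, float(phi))
    running_cost = np.zeros(n_paths)
    for _ in range(n_steps):
        running_cost += H(x) * dt
        dB = rng.normal(0.0, np.sqrt(dt), n_paths)
        x = np.maximum(x + eta(x) * dt + zeta(x) * dB, 0.0)  # keep the state in [0, inf)
    return float(np.mean(F(x) + running_cost))

# Example: coefficients of Phi^1 from Example 1 with mu = sigma = 1 and a' = b' = 1.
eta1 = lambda x: x**2 / (1.0 + x)
zeta1 = lambda x: x
F1 = lambda x: np.minimum(x, 1.0) / (1.0 + x)
H1 = lambda x: np.ones_like(x)
print(monte_carlo_G(1.0, 0.5, eta1, zeta1, F1, H1))
```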

For the sequential testing problem case of \(i = 1\), we have \(F_1(\phi ) = ((a' \phi ) \wedge b')/(1+\phi )\), for some \(a', b' > 0\) fixed, and \(H_1(\phi ) = 1\), so that

$$\begin{aligned} G_1(\phi ; \delta )&= E_{\phi } \bigg [ \frac{(a' \Phi ^1_{\delta }) \wedge b'}{1+\Phi ^1_{\delta }} \bigg ] + \delta \nonumber \\&= \frac{\phi }{1+\phi } \, \int _{-\infty }^{\infty } \frac{(a' \phi \exp ( \mu x \sqrt{\delta }/\sigma + \mu ^2 \delta /(2 \sigma ^2))) \wedge b'}{1 + \phi \exp ( \mu x \sqrt{\delta }/\sigma + \mu ^2 \delta /(2 \sigma ^2))} \, \varphi (x) \, dx \nonumber \\&\quad + \frac{1}{1+\phi } \, \int _{-\infty }^{\infty } \frac{(a' \phi \exp ( \mu x \sqrt{\delta }/\sigma - \mu ^2 \delta /(2 \sigma ^2))) \wedge b'}{1 + \phi \exp ( \mu x \sqrt{\delta }/\sigma - \mu ^2 \delta /(2 \sigma ^2))} \, \varphi (x) \, dx + \delta \end{aligned}$$
(2.9)

holds, for \(\phi \ge 0\), where we use the function \(\varphi (x) = (1/\sqrt{2 \pi }) e^{- x^2/2}\), for \(x \in {\mathbb R}\).
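
Since the standard normal weight in (2.9) fits a Gauss–Hermite rule after the substitution \(x \mapsto \sqrt{2}\, x\), the delayed gain \(G_1(\phi ; \delta )\) can be evaluated by a short quadrature routine. The following sketch is illustrative only; the parameter defaults are arbitrary.

```python
import numpy as np

# Numerical sketch (illustrative, not from the paper): evaluate G_1(phi; delta)
# of (2.9) by Gauss-Hermite quadrature, using that integration against the
# standard normal density corresponds to the Hermite weight after x -> sqrt(2)*x.
def G1(phi, delta, mu=1.0, sigma=1.0, a=1.0, b=1.0, n_nodes=80):
    x, w = np.polynomial.hermite.hermgauss(n_nodes)
    z = np.sqrt(2.0) * x                     # standard normal nodes
    w = w / np.sqrt(np.pi)                   # standard normal weights
    def term(sign):
        e = phi * np.exp(mu * z * np.sqrt(delta) / sigma
                         + sign * mu**2 * delta / (2.0 * sigma**2))
        return np.sum(w * np.minimum(a * e, b) / (1.0 + e))
    return phi / (1 + phi) * term(+1.0) + 1.0 / (1 + phi) * term(-1.0) + delta

print(G1(1.0, 0.5))
```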

In the change-point detection problem case of \(i = 2\), we have \(F_2(\phi ) = 1/(1 + \phi )\) and \(H_2(\phi ) = c \phi /(1+\phi )\), for some \(c > 0\) fixed, so that

$$\begin{aligned} G_2(\phi ; \delta )&= E_{\phi } \bigg [ \frac{1}{1+\Phi ^2_{\delta }} + \int _0^{\delta } \frac{c \Phi ^2_{t}}{1 + \Phi ^2_t} \, dt \bigg ] \nonumber \\&= \frac{1}{1+\phi } + E_{\phi } \bigg [ \int _0^{\delta } \frac{c \Phi ^2_{t} - \lambda }{1 + \Phi ^2_t} \, dt \bigg ] \nonumber \\&= \frac{1}{1+\phi } + \int _0^{\delta } E_{\phi } \bigg [ \frac{c \Phi ^2_{t} - \lambda }{1 + \Phi ^2_t} \bigg ] \, dt \end{aligned}$$
(2.10)

with

$$\begin{aligned}&E_{\phi } \bigg [ \frac{c \Phi ^2_t - \lambda }{1 + \Phi ^2_t} \bigg ] = \int _0^{\infty } \frac{c y - \lambda }{1 + y} \, q(\phi ; t, y) \, dy \end{aligned}$$
(2.11)

where the marginal distribution

$$\begin{aligned}&P_{\phi }(\Phi ^2_t \in dy) = q(\phi ; t, y) \, dy \end{aligned}$$
(2.12)

for all \(t, y > 0\) is derived in Gapeev and Peskir (2006, Sect. 4) (see also Peskir and Shiryaev 2006, Chap. VI, Sect. 24 as well as Sect. 4 below).
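
For the change-point detection case, the time integral in (2.10) can also be estimated without the transition density \(q\) of (2.12), simply by simulating \(\Phi ^2\) through an Euler scheme for (2.1) with the coefficients \(\eta _2\) and \(\zeta _2\) of Example 2. The sketch below is a rough Monte Carlo illustration under assumed parameter values, not the method used in the paper.

```python
import numpy as np

# A Monte Carlo sketch (for illustration only) of G_2(phi; delta) in (2.10):
# simulate Phi^2 by an Euler scheme for the SDE (2.1) with the coefficients of
# Example 2 and average the time integral of (c*Phi - lambda)/(1 + Phi), thereby
# avoiding the density q of (2.12).
def G2(phi, delta, mu=1.0, sigma=1.0, lam=0.5, c=1.0,
       n_steps=400, n_paths=20000, seed=3):
    rng = np.random.default_rng(seed)
    dt = delta / n_steps
    x = np.full(n_paths, float(phi))
    integral = np.zeros(n_paths)
    for _ in range(n_steps):
        integral += (c * x - lam) / (1.0 + x) * dt
        eta = lam * (1.0 + x) + (mu * x / sigma)**2 / (1.0 + x)
        zeta = mu * x / sigma
        x = np.maximum(x + eta * dt + zeta * rng.normal(0.0, np.sqrt(dt), n_paths), 0.0)
    return 1.0 / (1.0 + phi) + float(np.mean(integral))

print(G2(0.5, 0.5))
```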

Optimal stopping times

It follows from the general theory of optimal stopping for Markov processes (see, e.g. Peskir and Shiryaev 2006, Chap. I, Sect. 2.2) that the optimal stopping time in the problem of (2.2) is given by

$$\begin{aligned} \tau ^*_i(\delta ) = \inf \big \{ t \ge 0 \, \big | \, V^*_i(\Phi ^i_t; \delta ) = G_i(\Phi ^i_{t}; \delta ) \big \} \end{aligned}$$
(2.13)

whenever it exists. We further search for an optimal stopping time of the form

$$\begin{aligned} \tau ^*_i(\delta ) = \inf \big \{ t \ge 0 \, \big | \, \Phi ^i_t \notin \big ( a^*_i(\delta ), b^*_i(\delta ) \big ) \big \} \end{aligned}$$
(2.14)

for some \(0 \le a^*_i(\delta ) < b^*_i(\delta ) \le \infty \) to be determined (see, e.g. Shiryaev 1978, Chap. IV, Sects. 2 and 4; Peskir and Shiryaev 2006, Chap. VI, Sects. 23 and 24, for the case of \(\delta = 0\)).
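
On a discretely sampled path, the stopping rule (2.14) amounts to recording the first grid time at which the path leaves the interval \((a^*_i(\delta ), b^*_i(\delta ))\). The sketch below is purely illustrative: the path is a placeholder geometric random walk and the boundary values are arbitrary, standing in for the observed likelihood ratio path and the boundaries obtained from (2.26)–(2.27) or (2.30).

```python
import numpy as np

# Illustrative sketch of the stopping rule (2.14): stop at the first grid time
# at which a discretely sampled path of Phi^i leaves (a_star, b_star).  The path
# and boundaries below are placeholders.
def first_exit(t, path, a_star, b_star):
    outside = (path <= a_star) | (path >= b_star)
    if not outside.any():
        return np.inf, None
    k = int(np.argmax(outside))                       # first index outside the interval
    side = "lower" if path[k] <= a_star else "upper"
    return t[k], side

rng = np.random.default_rng(4)
t = np.linspace(0.0, 5.0, 5001)
path = np.exp(np.cumsum(np.concatenate(([np.log(0.8)], rng.normal(0.0, 0.02, 5000)))))
print(first_exit(t, path, a_star=0.25, b_star=2.5))
```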

Free-boundary problems

By means of standard arguments based on Itô’s formula (see, e.g. Liptser and Shiryaev 2001, Chap. IV, Theorem 4.4), it can be shown that the infinitesimal generator \({\mathbb {L}^i}\) of the process \(\Phi ^i = (\Phi ^i_t)_{t \ge 0}\) acts on an arbitrary twice continuously differentiable bounded function \(V(\phi )\) according to the rule

$$\begin{aligned} ({\mathbb {L}^i} V)(\phi ) = \eta _i(\phi ) \, V'(\phi ) + \frac{\zeta ^2_i(\phi )}{2} \, V''(\phi ) \end{aligned}$$
(2.15)

for all \(\phi > 0\) (see, e.g. Øksendal and Sulem 2008, Chap. VII, Theorem 7.3.3). In order to find analytic expressions for the unknown value function \(V^*_i(\phi ; \delta )\) from (2.2) and the unknown boundaries \(a^*_i(\delta )\) and \(b^*_i(\delta )\) from (2.14), for \(i = 1, 2\), we use the results of the general theory of optimal stopping problems for continuous-time Markov processes (see, e.g. Shiryaev 1978, Chap. III, Sect. 8; Peskir and Shiryaev 2006, Chap. IV, Sect. 8). We formulate the associated free boundary problem

$$\begin{aligned}&({\mathbb {L}^i} V_i)(\phi ; \delta ) = - H_i(\phi ) \quad \text {for} \quad a_i(\delta )< \phi < b_i(\delta ) \end{aligned}$$
(2.16)
$$\begin{aligned}&V_i(a_i(\delta ){+}; \delta ) {=} G_i(a_i(\delta ); \delta ), \quad V_i(b_i(\delta ){-}; \delta ) {=} G_i(b_i(\delta ); \delta ) \,\, \text {(\textit{instantaneous stopping})} \end{aligned}$$
(2.17)
$$\begin{aligned}&V_i'(a_i(\delta )+; \delta ) {=} G_i'(a_i(\delta ); \delta ), \quad V_i'(b_i(\delta )-; \delta ) {=} G_i'(b_i(\delta ); \delta ) \,\, {(\textit{smooth fit})} \end{aligned}$$
(2.18)
$$\begin{aligned}&V_i(\phi ; \delta ) = G_i(\phi ; \delta ) \quad \text {for} \quad \phi < a_i(\delta ) \quad \text {and} \quad \phi > b_i(\delta ) \end{aligned}$$
(2.19)
$$\begin{aligned}&V_i(\phi ; \delta )< G_i(\phi ; \delta ) \quad \text {for} \quad a_i(\delta )< \phi < b_i(\delta ) \end{aligned}$$
(2.20)
$$\begin{aligned}&({\mathbb {L}^i} G_i)(\phi ; \delta )> - H_i(\phi ) \quad \text {for} \quad \phi < a_i(\delta ) \quad \text {and} \quad \phi > b_i(\delta ) \end{aligned}$$
(2.21)

for some \(0 \le a_i(\delta )< c' < b_i(\delta ) \le \infty \). Note that the superharmonic characterization of the value function (see, e.g. Shiryaev 1978, Chap. III, Sect. 8; Peskir and Shiryaev 2006, Chap. IV, Sect. 9) implies that \(V^*_i(\phi ; \delta )\) from (2.2) is the largest function satisfying (2.16)–(2.17) and (2.19)–(2.21) with the boundaries \(a^*_i(\delta )\) and \(b^*_i(\delta )\), for each \(\delta > 0\) fixed.

Example 3

(Sequential testing problem) Let us first solve the free-boundary problem in (2.16)–(2.21) with \(G_1(\phi ; \delta )\) from (2.9) and \(H_1(\phi ) = 1\), as in Example 1 above. For this purpose, we follow the arguments of Shiryaev (1978, Chap. IV, Sect. 2) and Peskir and Shiryaev (2006, Chap. VI, Sect. 21) and integrate the second-order ordinary differential equation in (2.16) twice as well as use the conditions of (2.17) and (2.18) at the candidate boundaries \(a_1(\delta )\) and \(b_1(\delta )\) to obtain

$$\begin{aligned} V_1(\phi ; {a}_1(\delta ), {b}_1(\delta ); \delta ) = C_1({a}_1(\delta ), {b}_1(\delta )) + C_2({a}_1(\delta ), {b}_1(\delta )) \, \frac{\phi }{1+\phi } + \Psi (\phi ) \end{aligned}$$
(2.22)

where we denote

$$\begin{aligned} \Psi (\phi ) = \frac{2 \sigma ^2}{\mu ^2} \, \frac{1-\phi }{1+\phi } \, \ln \phi \end{aligned}$$
(2.23)

for all \(\phi > 0\). Here, we have

$$\begin{aligned}&C_1({a}_1(\delta ), {b}_1(\delta )) \nonumber \\&\quad = \frac{(G_1(a_1(\delta )) - \Psi (a_1(\delta ))) b_1(\delta ) (1 + a_1(\delta )) - (G_1(b_1(\delta )) - \Psi (b_1(\delta ))) a_1(\delta ) (1 + b_1(\delta ))}{b_1(\delta ) - a_1(\delta )} \end{aligned}$$
(2.24)
$$\begin{aligned}&C_2({a}_1(\delta ), {b}_1(\delta )) \nonumber \\&\quad = \frac{(G_1(b_1(\delta )) - \Psi (b_1(\delta )) - G_1(a_1(\delta )) + \Psi (a_1(\delta ))) (1 + a_1(\delta ))(1 + b_1(\delta ))}{b_1(\delta ) - a_1(\delta )} \end{aligned}$$
(2.25)

for \(0< a_1(\delta )< b_1(\delta ) < \infty \). It thus follows from the condition of (2.18) that the boundaries \(a^*_1(\delta )\) and \(b^*_1(\delta )\) solve the system of arithmetic equations

$$\begin{aligned}&\frac{C_2(a_1(\delta ), b_1(\delta ))}{(1 + a_1(\delta ))^2} - \frac{2 \sigma ^2}{\mu ^2} \, \bigg ( \frac{2 \ln a_1(\delta )}{(1 + a_1(\delta ))^2} + \frac{a_1(\delta ) - 1}{a_1(\delta ) (1 + a_1(\delta ))} \bigg ) = G_1'(a_1(\delta ); \delta ) \end{aligned}$$
(2.26)
$$\begin{aligned}&\frac{C_2(a_1(\delta ), b_1(\delta ))}{(1 + b_1(\delta ))^2} - \frac{2 \sigma ^2}{\mu ^2} \, \bigg ( \frac{2 \ln b_1(\delta )}{(1 + b_1(\delta ))^2} + \frac{b_1(\delta ) - 1}{b_1(\delta ) (1 + b_1(\delta ))} \bigg ) = G_1'(b_1(\delta ); \delta ) \end{aligned}$$
(2.27)

for \(0< a_1(\delta )< b_1(\delta ) < \infty \). Following the arguments in Shiryaev (1978, Chap. IV, Sect. 2) and Peskir and Shiryaev (2006, Chap. VI, Sect. 21), we further consider the minimal and maximal solutions \(a^*_1(\delta )\) and \(b^*_1(\delta )\) of the system of equations in (2.26)–(2.27), respectively, for any \(\delta > 0\) fixed.
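
The system (2.26)–(2.27) does not admit an explicit solution, but it can be solved numerically. The sketch below is our own illustration: the parameter values are placeholders, \(G_1\) repeats the quadrature of (2.9) so that the block is self-contained, \(G_1'\) is taken by central differences, and the two smooth-fit equations are passed to a standard root finder in logarithmic coordinates. Whether a nondegenerate pair of boundaries exists, and the selection of the minimal and maximal solutions, depends on the parameters and is not automated here.

```python
import numpy as np
from scipy import optimize

# Numerical sketch (not from the paper): solve the system (2.26)-(2.27) for
# candidate boundaries a_1(delta) < b_1(delta), with Psi from (2.23) and C_2
# from (2.25).  All parameter values are illustrative.
mu, sigma, a_p, b_p, delta = 1.0, 1.0, 2.0, 2.0, 0.25
_x, _w = np.polynomial.hermite.hermgauss(80)
_z, _w = np.sqrt(2.0) * _x, _w / np.sqrt(np.pi)

def G1(phi):                     # Gauss-Hermite evaluation of (2.9)
    def term(sign):
        e = phi * np.exp(mu * _z * np.sqrt(delta) / sigma
                         + sign * mu**2 * delta / (2.0 * sigma**2))
        return np.sum(_w * np.minimum(a_p * e, b_p) / (1.0 + e))
    return phi / (1 + phi) * term(1.0) + 1.0 / (1 + phi) * term(-1.0) + delta

dG1 = lambda p, h=1e-5: (G1(p + h) - G1(p - h)) / (2.0 * h)
Psi = lambda p: 2.0 * sigma**2 / mu**2 * (1.0 - p) / (1.0 + p) * np.log(p)

def C2(a, b):                    # constant (2.25)
    return (G1(b) - Psi(b) - G1(a) + Psi(a)) * (1.0 + a) * (1.0 + b) / (b - a)

def lhs(p, a, b):                # left-hand side of (2.26)/(2.27) at the point p
    return (C2(a, b) / (1.0 + p)**2
            - 2.0 * sigma**2 / mu**2 * (2.0 * np.log(p) / (1.0 + p)**2
                                        + (p - 1.0) / (p * (1.0 + p))))

def equations(u):                # solve in log-coordinates to keep a, b positive
    a, b = np.exp(u)
    return [lhs(a, a, b) - dG1(a), lhs(b, a, b) - dG1(b)]

u, info, ier, msg = optimize.fsolve(equations, x0=np.log([0.5, 2.0]), full_output=True)
print(np.exp(u), ier, msg)
```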

Example 4

(Quickest change-point detection problem) Let us now solve the free-boundary problem in (2.16)–(2.21) with \(G_2(\phi ; \delta )\) from (2.10) and \(H_2(\phi ) = c \phi /(1+\phi )\), for all \(\phi \ge 0\), as in Example 2 above, where we set \({a}^*_2(\delta ) = 0\), for each \(\delta > 0\). For this purpose, we follow the arguments of Shiryaev (1978, Chap. IV, Sect. 4) or Peskir and Shiryaev (2006, Chap. VI, Sect. 22) and integrate the second-order ordinary differential equation in (2.16) twice with respect to the variable \(\phi \) as well as use the conditions of (2.17) and (2.18) at the upper candidate boundary \({b}_2(\delta )\) to obtain

$$\begin{aligned}&V_2(\phi ; {b}_2(\delta ); \delta ) \nonumber \\&= G_2(b_2(\delta ); \delta ) + \int _{\phi }^{{b}_2(\delta )} \frac{C}{(1+y)^2} \int _0^y \exp \Big ( - \Lambda \, \big ( \Upsilon (y) - \Upsilon (x) \big ) \Big ) \, \frac{1+x}{x} \, dx \, dy \end{aligned}$$
(2.28)

where we denote

$$\begin{aligned}&C = \frac{2 c \sigma ^2}{\mu ^2}, \quad \Lambda = \frac{2 \lambda \sigma ^2}{\mu ^2}, \quad \text {and} \quad \Upsilon (\phi ) = \ln \phi - \frac{1+\phi }{\phi } \end{aligned}$$
(2.29)

for all \(\phi > 0\). It thus follows from the condition of (2.18) that the boundary \(b^*_2(\delta )\) solves the arithmetic equation

$$\begin{aligned} \frac{C}{(1 + b_2(\delta ))^2} \int _0^{b_2(\delta )} \exp \Big ( - \Lambda \, \big ( \Upsilon (b_2(\delta ))-\Upsilon (\phi ) \big ) \Big ) \, \frac{1+\phi }{\phi } \, d\phi = - \, G_2'(b_2(\delta ); \delta ) \end{aligned}$$
(2.30)

for any \(\delta > 0\) fixed. Following the arguments in Shiryaev (1978, Chap. IV, Sect. 4) and Peskir and Shiryaev (2006, Chap. VI, Sect. 22), we further consider the maximal solution \(b^*_2(\delta )\) of the equation in (2.30) such that \(\lambda /c \le b^*_2(\delta )\), for any \(\delta > 0\) fixed.
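
Equation (2.30) can likewise be treated by one-dimensional root finding once the right-hand side \(G_2'(b; \delta )\) is available numerically. The sketch below is an illustration under assumed parameter values: as a sanity check it takes the limiting case \(\delta = 0\), where \(G_2(\phi ; 0) = 1/(1+\phi )\) and (2.30) reduces to \(C \int _0^{b} \exp (-\Lambda (\Upsilon (b)-\Upsilon (\phi ))) \, (1+\phi )/\phi \, d\phi = 1\); for \(\delta > 0\) one would replace the constant 1 by \(-(1+b)^2 G_2'(b; \delta )\) evaluated, e.g., by simulation.

```python
import numpy as np
from scipy import integrate, optimize

# Root-finding sketch (illustrative assumptions throughout) for the smooth-fit
# equation determining b_2(delta), with C, Lambda and Upsilon from (2.29).
# Here the classical limit delta = 0 is used, so the equation becomes C*I(b) = 1.
mu, sigma, lam, c = 1.0, 1.0, 0.5, 1.0
C = 2.0 * c * sigma**2 / mu**2
Lam = 2.0 * lam * sigma**2 / mu**2
Ups = lambda p: np.log(p) - (1.0 + p) / p

def I(b):
    integrand = lambda x: np.exp(-Lam * (Ups(b) - Ups(x))) * (1.0 + x) / x
    val, _ = integrate.quad(integrand, 0.0, b)   # integrand vanishes rapidly near 0+
    return val

# brentq returns a root in the bracket; the text requires b >= lam/c and, more
# generally, the maximal solution of the equation.
b_star = optimize.brentq(lambda b: C * I(b) - 1.0, 0.1, 100.0)
print(b_star)
```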

Main results and proofs

Theorem 1

Let the process \(\Phi ^i\) be a pathwise unique solution of the stochastic differential equation in (2.1). Suppose that the functions \(G_i(\phi ; \delta )\) and \(H_i(\phi )\) are bounded and continuous, while \(G_i(\phi ; \delta )\) is also continuously differentiable on \((0, c') \cup (c', \infty )\), for some \(c' \in [0, \infty ]\), and any \(i = 1, 2\) fixed. Assume that the couple \(a^*_i(\delta )\) and \(b^*_i(\delta )\), such that \(0 \le a^*_i(\delta ) < b^*_i(\delta ) \le \infty \), together with \(V_i(\phi ; a^*_i(\delta ), b^*_i(\delta ); \delta )\) form a solution of the free boundary problem of (2.16)–(2.21), such that \(a^*_i(\delta )\) is the minimal solution and \(b^*_i(\delta )\) is the maximal solution of the system of arithmetic equations in (2.17)–(2.18) [which are equivalent to either (2.26)–(2.27) or (2.30)], for any \(i = 1, 2\) fixed. Then, the value function \(V^*_i(\phi ; \delta )\) admits the representation

$$\begin{aligned} {V}^*_i(\phi ; \delta ) = {\left\{ \begin{array}{ll} {V}_i(\phi ; {a}^*_i(\delta ), {b}^*_i(\delta ); \delta ), &{} \text {if} \; \; {a}^*_i(\delta )< \phi < {b}^*_i(\delta ) \\ {G}_i(\phi ; \delta ), &{} \text {if} \; \; \phi \le {a}^*_i(\delta ) \; \; \text {or} \; \; \phi \ge {b}^*_i(\delta ) \end{array}\right. } \end{aligned}$$
(3.1)

[where the candidate function \({V}_i(\phi ; {a}_i(\delta ), {b}_i(\delta ); \delta )\) is given by either (2.22)–(2.25) or (2.28)–(2.29)] and the optimal stopping time \(\tau ^*_i\) has the form of the first exit time of the process \(\Phi ^i\) from the interval \((a^*_i(\delta ), b^*_i(\delta ))\) as in (2.14), whenever \(E_{\phi } [\tau ^*_i] < \infty \) holds, for any \(i = 1, 2\) fixed.

Proof

In order to verify the assertions stated above, let us denote by \({V}_i(\phi ; \delta )\) the right-hand side of the expression in (3.1). It follows from the arguments of the previous section that the function \({V}_i(\phi ; \delta )\) solves the ordinary differential equation of (2.16) and satisfies the instantaneous-stopping conditions of (2.17). Then, using the fact that the function \(V_i(\phi ; \delta )\) of (3.1) satisfies the smooth-fit conditions of (2.18) as well as the conditions of (2.19)–(2.21) by construction, we can apply the local time-space formula from Peskir (2005) (see also Peskir and Shiryaev 2006, Chap. II, Sect. 3.5, for a summary of the related results and further references) to obtain

$$\begin{aligned}&{V}_i(\Phi ^i_{t}; \delta )+\int _0^t H_i(\Phi ^i_{s}) \, ds\nonumber \\&\quad = {V}_i(\phi ; \delta ) + M^i_t + \int _0^t ({\mathbb L}^i {V_i} + H_i)(\Phi ^i_{s}; \delta ) \, I \big ( \Phi ^i_{s} \ne a^*_i(\delta ), \Phi ^i_{s} \ne b^*_i(\delta ) \big ) \, ds \end{aligned}$$
(3.2)

for all \(t \ge 0\), where \(I(\cdot )\) denotes the indicator function. Here, the process \(M^i = (M^i_t)_{t \ge 0}\) defined by

$$\begin{aligned} M^i_t = \int _0^t {V}_i'(\Phi ^i_{s}; \delta ) \, \zeta _i(\Phi ^i_{s}) \, d{\overline{B}}_s \end{aligned}$$
(3.3)

is a continuous local martingale with respect to the probability measure \(P_{\phi }\), for any \(i = 1, 2\).

Using the assumption that the inequality in (2.21) holds for the function \({G}_i(\phi ; \delta )\) with the boundaries \({a}^*_i(\delta )\) and \({b}^*_i(\delta )\), we conclude that \(({\mathbb L}^i {V}_i + H_i)(\phi ; \delta ) \ge 0\) holds, for any \(\phi \ne a^*_i(\delta )\) and \(\phi \ne b^*_i(\delta )\), and any \(i = 1, 2\). Moreover, it follows from the conditions in (2.17)–(2.20) that the inequality \({V}_i(\phi ; \delta ) \le {G}_i(\phi ; \delta )\) holds, for all \(\phi \ge 0\) and any \(i = 1, 2\). Since the time spent by the process \(\Phi ^i\) at the points \(a^*_i(\delta )\) and \(b^*_i(\delta )\) is of Lebesgue measure zero, the indicator that appears in the integral of (3.2) can be ignored (see, e.g. Borodin and Salminen 2002, Chap. II, Sect. 1). Thus, the expression in (3.2) and the structure of the stopping time in (2.14) yield the inequalities

$$\begin{aligned}&{G}_i(\Phi ^i_{\tau }; \delta ) + \int _0^\tau {H}_i(\Phi ^i_{s}) \, ds \ge {V}_i(\Phi ^i_{\tau }; \delta ) + \int _0^\tau {H}_i(\Phi ^i_{s}) \, ds \ge {V}_i(\phi ; \delta ) + M^i_{\tau }\nonumber \\ \end{aligned}$$
(3.4)

for any stopping time \(\tau \) such that \(E_{\phi } [\tau ] < \infty \). Let \((\varkappa ^n_i)_{n \in {\mathbb N}}\) be the localising sequence of stopping times for the process \(M^i\) such that \(\varkappa ^n_i = \inf \{t \ge 0 \, | \, |M^i_t| \ge n\}\), for any \(i = 1, 2\). Then, taking the expectations with respect to the probability measure \(P_{\phi }\) in (3.4), by means of the optional sampling theorem (see, e.g. Liptser and Shiryaev 2001, Chap. III, Theorem 3.6), we get the inequalities

$$\begin{aligned}&E_{\phi } \bigg [ {G}_i(\Phi ^i_{\tau \wedge \varkappa ^n_i}; \delta ) + \int _0^{\tau \wedge \varkappa ^n_i} H_i(\Phi ^i_{s}) \, ds \bigg ] \nonumber \\&\quad \ge E_{\phi } \bigg [ {V}_i(\Phi ^i_{\tau \wedge \varkappa ^n_i}; \delta ) + \int _0^{\tau \wedge \varkappa ^n_i } H_i(\Phi ^i_{s}) \, ds \bigg ] \ge {V}_i(\phi ; \delta ) + E_{\phi } \big [ M^i_{\tau \wedge \varkappa ^n_i} \big ] = {V}_i(\phi ; \delta ) \end{aligned}$$
(3.5)

hold, for each \(n \in {\mathbb N}\) and \(i = 1, 2\). Hence, letting n go to infinity and using Fatou’s lemma, we obtain

$$\begin{aligned}&E_{\phi } \bigg [ {G}_i(\Phi ^i_{\tau }; \delta ) + \int _0^{\tau } H_i(\Phi _{s}) \, ds \bigg ] \ge E_{\phi } \bigg [ {V}_i(\Phi ^i_{\tau }; \delta ) + \int _0^{\tau } H_i(\Phi ^i_{s}) \, ds \bigg ] \ge {V}_i(\phi ; \delta ) \end{aligned}$$
(3.6)

for any stopping time \(\tau \) such that \(E_{\phi } [\tau ] < \infty \), for all \(\phi \ge 0\). By virtue of the structure of the stopping time in (2.14) and the conditions of (2.19), it is readily seen that the equalities in (3.4) hold with \(\tau ^*_i\) instead of \(\tau \) when either \(\phi \le a^*_i(\delta )\) or \(\phi \ge b^*_i(\delta )\) holds.

Let us finally show that the equalities are attained in (3.6) when \(\tau ^*_i\) replaces \(\tau \) and the smooth-fit conditions of (2.18) hold, for \(a^*_i(\delta )< \phi < b^*_i(\delta )\) and \(i = 1, 2\). By virtue of the fact that the function \({V}_i(\phi ; \delta )\) and the boundaries \(a^*_i(\delta )\) and \(b^*_i(\delta )\) solve the ordinary differential equation in (2.16) and satisfy the conditions in (2.17) and (2.18), it follows from the expression in (3.2) and the structure of the stopping time in (2.14) that

$$\begin{aligned}&{G}_i(\Phi ^i_{\tau ^*_i \wedge \varkappa ^n_i}; \delta ) + \int _0^{\tau ^*_i \wedge \varkappa ^n_i} H_i(\Phi ^i_{s}) \, ds \nonumber \\&\quad = {V}_i(\Phi ^i_{\tau ^*_i \wedge \varkappa ^n_i}; \delta ) + \int _0^{\tau ^*_i \wedge \varkappa ^n_i} H_i(\Phi ^i_{s}) \, ds = {V}_i(\phi ; \delta ) + M^i_{\tau ^*_i \wedge \varkappa ^n_i} \end{aligned}$$
(3.7)

holds, for all \(a^*_i(\delta )< \phi < b^*_i(\delta )\), and any \(i = 1, 2\). Hence, taking expectations and letting \(n\) go to infinity in (3.7), and using the facts that \(G_i(\phi ; \delta )\) is bounded and that the integral there has finite expectation if and only if \(\tau ^*_i\) does, we can apply the Lebesgue dominated convergence theorem to obtain the equality

$$\begin{aligned} E_{\phi } \bigg [ {G}_i(\Phi ^i_{{\tau }^*_i}; \delta ) + \int _0^{\tau ^*_i} H_i(\Phi ^i_{s}) \, ds \bigg ] = {V}_i(\phi ; \delta ) \end{aligned}$$
(3.8)

for all \(\phi \ge 0\) and any \(i = 1, 2\). We may therefore conclude that the function \(V_i(\phi ; \delta )\) coincides with the value function \(V^*_i(\phi ; \delta )\) of the optimal stopping problem in (2.2). \(\square \)

References

  1. Alvarez L, Keppo J (2002) The impact of delivery lags on irreversible investment under uncertainty. Eur J Oper Res 136:173–180

  2. Anderson TW (1964) Sequential analysis with delayed observations. J Am Stat Assoc 59:1006–1015

  3. Baran J, Stȩpień-Baran A (2013) Sequential estimation of a location parameter and powers of a scale parameter from delayed observations. Stat Neerl 67(3):263–280

  4. Bar-Ilan A, Sulem A (1995) Explicit solution of inventory problems with delivery lags. Math Oper Res 20:709–720

  5. Bayraktar E, Egami M (2007) The effects of implementation delay on decision-making under uncertainty. Stoch Process Their Appl 117:333–358

  6. Bayraktar E, Dayanik S, Karatzas I (2005) The standard Poisson disorder problem revisited. Stoch Process Their Appl 115(9):1437–1450

  7. Bayraktar E, Dayanik S, Karatzas I (2006) Adaptive Poisson disorder problem. Ann Appl Probab 16:1190–1261

  8. Borodin AN, Salminen P (2002) Handbook of Brownian Motion, 2nd edn. Birkhäuser, Basel

  9. Carlstein E, Müller H-G, Siegmund D (eds) (1994) Change-point problems. IMS lecture notes monograph series, vol 23. Institute of Mathematical Statistics, Hayward

  10. Chang R-C (1972) Sequential statistical analysis with delayed observations. PhD thesis, New York University, School of Engineering and Science

  11. Chang R-C, Ehrenfeld S (1972) On a sequential test procedure with delayed observations. Naval Res Logist Quart 19:651–661

  12. Consteniuc M, Schnetzer M, Taschini L (2008) Entry and exit decision problem with implementation delay. J Appl Probab 45:1039–1059

  13. Davis MHA (1976) A note on the Poisson disorder problem, vol 1. Banach Center Publications, Warsaw, pp 65–72

  14. Dayanik S, Sezer SO (2006a) Compound Poisson disorder problem. Math Oper Res 31:649–672

  15. Dayanik S, Sezer SO (2006b) Sequential testing of simple hypotheses about compound Poisson processes. Stoch Process Their Appl 116:1892–1919

  16. Dayanik S, Sezer SO (2012) Multisource Bayesian sequential binary hypothesis testing problem. Ann Oper Res 201:99–130

  17. Dayanik S, Poor HV, Sezer SO (2008) Multisource Bayesian sequential change detection. Ann Appl Probab 18:552–590

  18. Dufresne D (2001) The integral of geometric Brownian motion. Adv Appl Probab 33:223–241

  19. Galtchouk LI, Nobelis PP (1999) Bayesian testing simple hypotheses on the drift of a Wiener process with randomly delayed observations. Seq Anal 18:23–31

  20. Galtchouk LI, Nobelis PP (2000) Sequential variational testing hypotheses on the Wiener process under delayed observations. Stat Infer Stoch Process 2:31–56

  21. Gapeev PV, Peskir G (2004) The Wiener sequential testing problem with finite horizon. Stoch Stoch Rep 76(1):59–75

  22. Gapeev PV, Peskir G (2006) The Wiener disorder problem with finite horizon. Stoch Process Their Appl 116(12):1770–1791

  23. Gapeev PV, Shiryaev AN (2011) On the sequential testing problem for some diffusion processes. Stochastics 83(4–6):519–535

  24. Gapeev PV, Shiryaev AN (2013) Bayesian quickest detection problems for some diffusion processes. Adv Appl Probab 45(1):164–185

  25. Johnson P, Peskir G (2017) Quickest detection problems for Bessel processes. Ann Appl Probab 27:1003–1056

  26. Johnson P, Peskir G (2018) Sequential testing problems for Bessel processes. Trans Am Math Soc 370:2085–2113

  27. Jokiel-Rokita A, Stȩpień A (2009) Sequential estimation of a location parameter from delayed observations. Stat Pap 2:363–372

  28. Lempa J (2012) Optimal stopping with random exercise lag. Math Methods Oper Res 75:273–286

  29. Liptser RS, Shiryaev AN (2001) Statistics of random processes I, 2nd edn. Springer, Berlin

  30. Magiera R (1998) Optimal sequential estimation procedures under delayed observations from multiparameter exponential families. In: Operations Research Proceedings. Springer, Berlin, pp 200–205

  31. Miroshnitchenko TP (1979) Testing of two simple hypotheses in the presence of delayed observations. Theory Probab Its Appl 24:467–479

  32. Øksendal B (1998) Stochastic differential equations. Springer, Berlin

  33. Øksendal B (2005) Optimal stopping with delayed information. Stoch Dyn 5(2):271–280

  34. Øksendal B, Sulem A (2008) Optimal stochastic impulse control with delayed reaction. Appl Math Optim 58(2):243–255

  35. Peskir G (2005) A change-of-variable formula with local time on curves. J Theor Probab 18:499–535

  36. Peskir G, Shiryaev AN (2000) Sequential testing problems for Poisson processes. Ann Stat 28:837–859

  37. Peskir G, Shiryaev AN (2002) Solving the Poisson disorder problem. Advances in finance and stochastics. Essays in Honour of Dieter Sondermann. Springer, New York, pp 295–312

  38. Peskir G, Shiryaev AN (2006) Optimal stopping and free-boundary problems. Birkhäuser, Basel

  39. Poor HV, Hadjiliadis O (2008) Quickest detection. Cambridge University Press, Cambridge

  40. Schröder M (2003) On the integral of geometric Brownian motion. Adv Appl Probab 35:159–183

  41. Sezer SO (2010) On the Wiener disorder problem. Ann Appl Probab 20:1537–1566

  42. Shiryaev AN (1978) Optimal stopping rules. Springer, Berlin

  43. Shiryaev AN (2019) Stochastic disorder problems. With a foreword by H. Vincent Poor. Probability theory and stochastic modelling, vol 93. Springer, Cham

  44. Stȩpień-Baran A (2011) Sequential estimation of a continuous distribution function from delayed observations. Commun Stat Theory Methods 40(10):1717–1733

  45. Yor M (1992) On some exponential functionals of Brownian motion. Adv Appl Probab 24:509–531

Acknowledgements

The author is grateful to the Lead Guest Editor and an anonymous Referee for pointing out relevant references as well as other useful suggestions which helped to improve the presentation of the paper. This research was also supported by a Small Grant from the Suntory and Toyota International Centres for Economics and Related Disciplines (STICERD) at the London School of Economics and Political Science.

Author information

Correspondence to Pavel V. Gapeev.

Appendix

In this section, we reproduce the derivation, given in Gapeev and Peskir (2006, Sect. 4), of the explicit expression for the transition density function of the weighted likelihood ratio process \(\Phi ^2 = (\Phi ^2_t)_{t \ge 0}\) from (2.5) (see also Peskir and Shiryaev 2006, Chap. VI, Sect. 24).

4.1

Let \(B=(B_t)_{t \ge 0}\) be a standard Wiener process defined on a probability space \((\Omega , \mathcal{F}, P)\). With \(t > 0\) and \(\nu \in {\mathbb R}\) given and fixed, recall from Yor (1992, p. 527) that the random variable \(A_t^{(\nu )} = \int _0^t e^{2(B_s+\nu s)} ds\) has the conditional distribution

$$\begin{aligned} P \Big ( A_t^{(\nu )} \in dz \, \Big | \, B_t + \nu t = y \Big ) = a(t, y, z) \, dz \end{aligned}$$
(4.1)

where the density function a for \(z>0\) is given by:

$$\begin{aligned} a(t, y, z)&= \frac{1}{\pi z^2} \, \exp \bigg ( \frac{y^2+\pi ^2}{2t} + y - \frac{1}{2z} \Big ( 1 + e^{2y} \Big ) \bigg ) \nonumber \\&\quad \times \int _0^{\infty } \exp \bigg ( - \frac{w^2}{2t} - \frac{e^y}{z} \, \cosh (w) \bigg ) \sinh (w) \, \sin \Big ( \frac{\pi w}{t} \Big ) dw. \end{aligned}$$
(4.2)

This implies that the random vector \(( 2(B_t+\nu t), A^{(\nu )}_t)\) has the distribution

$$\begin{aligned} P \Big ( 2(B_t + \nu t) \in dy, A_t^{(\nu )} \in dz \Big ) = b(t, y, z) \, dy \, dz \end{aligned}$$
(4.3)

where the density function b for \(z>0\) is given by:

$$\begin{aligned} b(t, y, z)&= a \left( t, \frac{y}{2}, z \right) \frac{1}{2 \sqrt{t}} \, \varphi \left( \frac{y - 2 \nu t}{2 \sqrt{t}} \right) \nonumber \\&\quad = \frac{1}{(2\pi )^{3/2} z^2 \sqrt{t}} \,\exp \left( \frac{\pi ^2}{2t} + \Big ( \frac{\nu +1}{2} \Big ) y - \frac{\nu ^2}{2} \, t - \frac{1}{2 z} \Big ( 1 + e^y \Big ) \right) \nonumber \\&\qquad \times \int _0^{\infty } \exp \left( - \frac{w^2}{2t} - \frac{e^{y/2}}{z} \cosh (w) \right) \sinh (w) \, \sin \left( \frac{\pi w}{t} \right) dw \end{aligned}$$
(4.4)

and we set \(\varphi (x)=(1/\sqrt{2 \pi })e^{-x^2/2}\) for \(x \in {\mathbb R}\) (see Dufresne 2001 and Schröder 2003 for related expressions in terms of Hermite functions).
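
The conditional density \(a(t, y, z)\) of (4.2) and the joint density \(b(t, y, z)\) of (4.4) can be evaluated by direct numerical quadrature of the oscillating integral in \(w\). The sketch below is illustrative only: the integral is truncated at a finite \(w_{\max }\), beyond which the integrand is negligible for moderate \(t\), and no attempt is made to handle the numerically delicate small-\(t\) regime.

```python
import numpy as np
from scipy import integrate

# Quadrature sketch (illustrative only) for the conditional density a(t, y, z)
# of (4.2) and the joint density b(t, y, z) of (4.4).  The w-integral is cut off
# at w_max, where the integrand is already negligible for moderate t.
def a_density(t, y, z, w_max=50.0):
    prefactor = np.exp((y**2 + np.pi**2) / (2.0 * t) + y
                       - (1.0 + np.exp(2.0 * y)) / (2.0 * z)) / (np.pi * z**2)
    integrand = lambda w: (np.exp(-w**2 / (2.0 * t) - np.exp(y) / z * np.cosh(w))
                           * np.sinh(w) * np.sin(np.pi * w / t))
    integral, _ = integrate.quad(integrand, 0.0, w_max, limit=400)
    return prefactor * integral

def b_density(t, y, z, nu):
    # joint density of (2(B_t + nu*t), A_t^{(nu)}) via the first line of (4.4)
    norm_pdf = lambda x: np.exp(-x**2 / 2.0) / np.sqrt(2.0 * np.pi)
    return (a_density(t, y / 2.0, z)
            * norm_pdf((y - 2.0 * nu * t) / (2.0 * np.sqrt(t))) / (2.0 * np.sqrt(t)))

print(a_density(1.0, 0.0, 1.0), b_density(1.0, 0.0, 1.0, 0.5))
```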

Denoting \(I_t = \alpha B_t + \beta t\) and \(J_t = \int _0^t e^{\alpha B_s + \beta s} ds\) with \(\alpha \ne 0\) and \(\beta \in {\mathbb R}\) given and fixed, and using the fact that the scaling property of B implies:

$$\begin{aligned} P \bigg ( \alpha B_t {+} \beta t \le y, {\int }_0^t e^{\alpha B_s {+} \beta s} \, ds {\le } z \bigg ) {=} P \bigg ( 2 (B_{t'} {+} \nu t') {\le } y, \int _0^{t'} e^{2 (B_s{+}\nu s)} \, ds \le \frac{\alpha ^2}{4} \, z \bigg ) \end{aligned}$$
(4.5)

with \(t' = \alpha ^2 t/4\) and \(\nu =2 \beta /\alpha ^2\), it follows by applying (4.3) and (4.4) that the random vector \((I_t, J_t)\) has the distribution:

$$\begin{aligned} P \Big ( I_t \in dy, J_t \in dz \Big ) = f(t, y, z) \, dy \, dz \end{aligned}$$
(4.6)

where the density function f for \(z>0\) is given by:

$$\begin{aligned} f(t, y, z)&= \frac{\alpha ^2}{4} \, b \left( \frac{\alpha ^2}{4} \, t, \, y, \frac{\alpha ^2}{4} \, z \right) \nonumber \\&= \frac{2\sqrt{2}}{\pi ^{3/2} \alpha ^3} \, \frac{1}{z^2 \sqrt{t}} \exp \left( \frac{2 \pi ^2}{\alpha ^2 t} + \Big ( \frac{\beta }{\alpha ^2} + \frac{1}{2} \Big ) y - \frac{\beta ^2}{2 \alpha ^2} \, t - \frac{2}{\alpha ^2 z} \Big ( 1 + e^y \Big ) \right) \nonumber \\&\quad \times \int _0^{\infty } \exp \left( - \frac{2 w^2}{\alpha ^2 t} - \frac{4 e^{y/2}}{\alpha ^2 z} \cosh (w) \right) \sinh (w) \, \sin \Big ( \frac{4 \pi w}{\alpha ^2 t} \Big ) dw. \end{aligned}$$
(4.7)

4.2

Letting \(\alpha = - \mu /\sigma \) and \(\beta = - \lambda - \mu ^2/(2 \sigma ^2)\), it follows from the explicit expression in (2.5) that:

$$\begin{aligned} P^0 (\Phi ^2_t \in dx) = P \Big ( e^{-I_t} \Big ( {\phi } + \lambda J_t \Big ) \in dx \Big ) = g(\phi ; t, x) \, dx \end{aligned}$$
(4.8)

where the density function g for \(x>0\) is given by:

$$\begin{aligned} g(\phi ; t, x)&= \frac{d}{dx} \int _{-\infty }^\infty \int _0^\infty I \Big ( e^{-y} \Big ( {\phi } + \lambda z \Big ) \le x \Big ) f(t, y, z) \, dy \, dz \nonumber \\&= \int _{-\infty }^\infty f \Big (t, y, \frac{1}{\lambda } \Big ( x e^y - {\phi } \Big ) \Big ) \frac{e^y}{\lambda } \, dy. \end{aligned}$$
(4.9)

Here \(P^t\) denotes the distribution \(P^t(X^2 \in \cdot ) = P(X^2 \in \cdot \, | \, \theta = t)\) of the process \(X^2\) given that \(\theta = t\), for each \(t \in [0, \infty ]\).

Moreover, setting \({\widetilde{I}}_{t-s}=\alpha (B_t-B_s)+\beta (t-s)\) and \({\widetilde{J}}_{t-s}= \int _s^t e^{\alpha (B_u-B_s)+\beta (u-s)} du\) as well as \(\widehat{I}_s=\alpha B_s + \widehat{\beta } s\) and \(\widehat{J}_s=\int _0^s e^{\alpha B_u+{\widehat{\beta }} u} du\) with \({\widehat{\beta }} = - \lambda + \mu ^2/(2 \sigma ^2)\), it follows from the explicit expression in (2.5) that:

$$\begin{aligned} P^s (\Phi ^2_t \in dx)&= P \Big ( e^{-\gamma s} e^{-{\widetilde{I}}_{t-s}} \Big ( e^{({\widehat{\beta }}-\beta ) s} \, e^{-\widehat{I}_s} \Big ( \frac{\pi }{1-\pi } + \lambda \widehat{J}_s \Big ) + \lambda e^{\gamma s} {\widetilde{J}}_{t-s} \Big ) \in dx \Big ) \nonumber \\&= h(s; \phi ; t, x) \, dx \end{aligned}$$
(4.10)

for \(0< s < t\), where \(\gamma = \mu ^2/\sigma ^2\). Since the stationary independent increments of B imply that the random vector \(({\widetilde{I}}_{t-s},{\widetilde{J}}_{t-s})\) is independent of \((\widehat{I}_s,\widehat{J}_s)\) and has the same distribution as \((I_{t-s},J_{t-s})\), we see, upon recalling (4.8)–(4.9), that the density function h for \(x>0\) is given by:

$$\begin{aligned}&h(s; \phi ; t, x) \nonumber \\&\quad = \frac{d}{dx} \int _{-\infty }^{\infty } \int _0^{\infty } \int _0^{\infty } I \Big ( e^{-\gamma s} e^{-y} \Big ( e^{({\widehat{\beta }}-\beta ) s} w + \lambda e^{\gamma s} z \Big ) \le x \Big ) f(t-s, y, z)\,{\widehat{g}}(\phi ; s, w) \, dy \, dz \, dw \nonumber \\&\quad = \int _{-\infty }^{\infty } \int _0^{\infty } f \Big ( t-s, y, \frac{1}{\lambda } \big ( x e^y - e^{(\widehat{\beta }-\beta -\gamma )s} w \big ) \Big ) \,{\widehat{g}}(\phi ; s, w) \, \frac{e^y}{\lambda } \, dy \, dw \end{aligned}$$
(4.11)

where the density function \(\widehat{g}\) for \(w>0\) equals:

$$\begin{aligned} {\widehat{g}}(\phi ; s, w)&= \frac{d}{dx} \int _{-\infty }^{\infty } \int _0^{\infty } I \Big ( e^{-y} \Big ( {\phi } + \lambda z \Big ) \le w \Big ) {\widehat{f}}(s, y, z) \, dy \, dz \nonumber \\&= \int _{-\infty }^{\infty } {\widehat{f}} \Big (s, y, \frac{1}{\lambda } \Big ( w e^y - {\phi } \Big ) \Big ) \frac{e^y}{\lambda } \, dy \end{aligned}$$
(4.12)

and the density function \({\widehat{f}}\) for \(z>0\) is defined as in (4.6)–(4.7) with \(\widehat{\beta }\) instead of \(\beta \).

Finally, by means of the same arguments as in (4.8)–(4.9) it follows from the explicit expression in (2.5) that

$$\begin{aligned} P^t (\Phi ^2_t \in dx) = P \Big ( e^{-\widehat{I}_t} \Big ( \frac{\pi }{1-\pi } + \lambda \widehat{J}_t \Big ) \in dx \Big ) = {\widehat{g}}(\phi ; t, x) \, dx \end{aligned}$$
(4.13)

where the density function \(\widehat{g}\) for \(x>0\) is given by (4.12).

4.3

Noting that:

$$\begin{aligned}&P_{\phi }(\Phi ^2_t \in dx) \nonumber \\&\quad = \frac{\phi }{1+\phi } \, P^0(\Phi ^2_t \in dx) + \frac{1}{1+\phi } \, \int _0^t \lambda e^{-\lambda s} P^s(\Phi ^2_t \in dx) \, ds \nonumber \\&\qquad + \frac{1}{1+\phi } \,e^{-\lambda t} P^t(\Phi ^2_t \in dx) \end{aligned}$$
(4.14)

we see, by combining (4.8), (4.10) and (4.13), that the process \(\Phi ^2\) has the marginal distribution

$$\begin{aligned} P_{\phi }(\Phi ^2_t \in dx) = q(\phi ; t, x) \, dx \end{aligned}$$
(4.15)

where the transition density function q for \(x>0\) is given by

$$\begin{aligned}&q(\phi ; t, x) = \frac{\phi }{1+\phi } \, g(\phi ; t, x) + \frac{1}{1+\phi } \, \int _0^t \lambda e^{-\lambda s} \, h(s; \phi ; t, x) \, ds \nonumber \\&\qquad \qquad \qquad + \frac{1}{1+\phi } \, e^{-\lambda t} \, {\widehat{g}}(\phi ; t, x) \end{aligned}$$
(4.16)

with g, h, \({\widehat{g}}\) from (4.9), (4.11), (4.12) respectively.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Gapeev, P.V. On the problems of sequential statistical inference for Wiener processes with delayed observations. Stat Papers (2020). https://doi.org/10.1007/s00362-020-01178-0

Keywords

  • Sequential testing problem
  • Quickest change-point (disorder) detection problem
  • Weighted likelihood ratio
  • (Time-homogeneous) diffusion process
  • Delayed optimal stopping problem
  • Free-boundary problem
  • Change-of-variable formula with local time on curves

Mathematics Subject Classification

  • Primary 60G40
  • 60J60
  • 34K10
  • Secondary 62M20
  • 62C10
  • 62L15