
A two-dimensional control problem arising from dynamic contracting theory


Abstract

We study a dynamic corporate finance contracting model in which the firm’s profitability fluctuates and is affected by unobservable managerial effort. We thereby introduce the issue of strategic liquidation into an agency framework. We show that the principal’s problem takes the form of a two-dimensional fully degenerate Markov control problem. We prove regularity properties of the value function and derive explicitly the optimal contract that implements full effort. Our regularity results appear in some recent studies, but with heuristic proofs that do not clarify the role played by the regularity of the value function at the boundaries.


Notes

  1. We review the literature below, at the end of this section.

  2. See the seminal paper of DeMarzo and Sannikov [7] and Biais et al. [1] for a survey of the literature.

  3. See e.g. Sannikov [20], DeMarzo and Sannikov [7] and Strulovici and Szydlowski [23].

  4. See Sannikov [20] and DeMarzo and Sannikov [7].

  5. In DeMarzo and Sannikov [8], cash flows are modelled as the increment of an arithmetic Brownian motion with an unknown drift. Optimal liquidation occurs when beliefs about the constant drift fall to an endogenous threshold. The principal can ensure optimal liquidation once the agent has established a sufficiently high record. In He [11], liquidation is always inefficient. When the agent establishes a sufficiently high record, the principal can design payments in such a way that the continuation value of the agent becomes proportional to the firm size that follows a geometric Brownian motion. Thus, the continuation value of the agent remains always positive and liquidation never occurs.

  6. For instance, in standard models with i.i.d. shocks, payouts to the manager take place when the agent’s continuation value reaches a payout threshold (see, for instance, Biais et al. [1]).

  7. This also holds true for DeMarzo and Sannikov [8].

  8. See DeMarzo and Sannikov [7], Zhu [27].

References

  1. Biais, B., Mariotti, T., Rochet, J.C.: Dynamic financial contracting. In: Acemoglu, D., et al. (eds.) Advances in Economics and Econometrics: Tenth World Congress of the Econometric Society, vol. 1, pp. 125–172. Cambridge University Press, Cambridge (2013)


  2. Biais, B., Mariotti, T., Plantin, G., Rochet, J.C.: Dynamic security design: convergence to continuous time and asset pricing implications. Rev. Econ. Stud. 74, 345–390 (2007)


  3. Borodin, A., Salminen, P.: Handbook of Brownian Motion – Facts and Formulae. Birkhäuser, Basel (1996)


  4. Cvitanić, J., Possamaï, D., Touzi, N.: Moral hazard in dynamic risk management. Manag. Sci. 63, 3328–3346 (2016)


  5. Cvitanić, J., Possamaï, D., Touzi, N.: Dynamic programming approach to principal–agent problems. Finance Stoch. 22, 1–37 (2018)


  6. Daskalopoulos, P., Feehan, P.M.N.: \(C^{1,1}\) regularity for degenerate elliptic obstacle problems. J. Differ. Equ. 260, 5043–5074 (2016)


  7. DeMarzo, P.M., Sannikov, Y.: Optimal security design and dynamic capital structure in a continuous time agency model. J. Finance 61, 2681–2724 (2006)


  8. DeMarzo, P.M., Sannikov, Y.: Learning, termination, and payout policy in dynamic incentive contracts. Rev. Econ. Stud. 84, 182–236 (2017)


  9. Dixit, A.K., Pindyck, R.S.: Investment Under Uncertainty. Princeton University Press, Princeton (1994)


  10. Faingold, E., Vasama, S.: Real options and dynamic incentives. Working paper (2014). Available online at http://gtcenter.org/Archive/2014/Conf/Vasama1855.pdf

  11. He, Z.: Optimal executive compensation when firm size follows geometric Brownian motion. Rev. Financ. Stud. 22, 859–892 (2009)


  12. Hynd, R.: Analysis of Hamilton–Jacobi–Bellman equations arising in stochastic singular control. ESAIM Control Optim. Calc. Var. 19, 112–128 (2013)


  13. Hynd, R.: An eigenvalue problem for a fully nonlinear elliptic equation with gradient constraint. Calc. Var. Partial Differ. Equ. 56(34), 1–31 (2017)


  14. Hynd, R., Mawi, H.: On Hamilton–Jacobi–Bellman equations with convex gradient constraints. Interfaces Free Bound. 18, 291–315 (2016)


  15. Lamberton, D., Terenzi, G.: Variational formulation of American option prices in the Heston model (2017). arXiv:1711.11311

  16. Krylov, N.V.: Lectures on Elliptic and Parabolic Equations in Sobolev Spaces. Graduate Studies in Mathematics. AMS, Providence (2008)


  17. Pham, H.: On the smooth-fit property for one-dimensional optimal switching problem. In: Donati-Martin, C., et al. (eds.) Séminaire de Probabilités XL. Lecture Notes in Mathematics, vol. 1899, pp. 187–199. Springer, Berlin (2007)


  18. Pham, H.: Continuous-Time Stochastic Control and Optimization with Financial Applications. Springer, Berlin (2010)


  19. Rogers, L.C.G., Williams, D.: Diffusions, Markov Processes and Martingales, vol. 2. Cambridge University Press, Cambridge (2000)


  20. Sannikov, Y.: A continuous-time version of the principal–agent problem. Rev. Econ. Stud. 75, 957–984 (2008)


  21. Soner, H.M., Shreve, S.E.: Regularity of the value function for a two-dimensional singular stochastic control problem. SIAM J. Control Optim. 27, 876–907 (1989)


  22. Strulovici, B.: Contracts, information persistence, and renegotiation. Working paper, Northwestern University (2011). Available online at http://faculty.wcas.northwestern.edu/~bhs675/RMP.pdf

  23. Strulovici, B., Szydlowski, M.: On the smoothness of value functions and the existence of optimal strategies in diffusion models. J. Econ. Theory 159, 1016–1055 (2015)


  24. Vasama, S.: On moral hazard and persistent private information. Discussion paper No. 15/2017, Bank of Finland Research (2017). Available online at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3013567

  25. Williams, N.: Persistent private information. Econometrica 79, 1233–1275 (2011)


  26. Zhang, Y.: Dynamic contracting with persistent shocks. J. Econ. Theory 144, 635–675 (2009)


  27. Zhu, J.Y.: Optimal contracts with shirking. Rev. Econ. Stud. 80, 812–839 (2013)



Author information

Correspondence to Stéphane Villeneuve.

Additional information

We thank seminar audiences at ENPC, INRIA and UPEMLV. We thank the Co-Editor Jakša Cvitanić as well as an anonymous referee for very fruitful comments. We also thank participants at the conferences “Information in Finance and Assurance”, Paris, June 2015, “Robust Finance”, Bielefeld, May 2016, and “Frontiers for Stochastic Modelling for Finance”, Padova, February 2016. We are indebted to Laurent Miclo for suggesting the use of the Doob \(h\)-transform. Financial support from the Chair IDEI-SCOR under the aegis of Fondation du Risque Market Risk and Value Creation and from the ANR project PACMAN (ANR-16-CE05-0027) is gratefully acknowledged.

Appendix

Proof of Lemma 4.4

First observe that for every pair \((x,w) \in (x^{*},+\infty )\times (0,+\infty )\), there is some \(C>0\) such that \(u(x,w)\le C(1+x)\). Indeed,

$$\begin{aligned} u(x,w) &\le \mathbb{E}^{0}\left [\int_{0}^{\infty }e^{-rs}\vert x+ \sigma Z_{s}\vert \,ds\right ] \\ &\le \frac{x}{r} + \sigma \mathbb{E}^{0}\left [\int_{0}^{\infty }e ^{-rs}\vert Z_{s}\vert \,ds\right ] \\ &= \frac{x}{r} + \sigma \sqrt{\frac{2}{\pi }}\mathbb{E}^{0}\left [\int _{0}^{\infty }e^{-rs}\sqrt{s} \,ds \right ] \\ &\le C(1+x). \end{aligned}$$
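For concreteness, the last integral equals \(\frac{\sqrt{\pi }}{2r^{3/2}}\), so that one admissible choice of constant (an explicit value given here as a verification sketch, not claimed in the original argument) is

$$ u(x,w)\le \frac{x}{r}+\frac{\sigma }{\sqrt{2}\,r^{3/2}}\le C(1+x) \qquad \hbox{with } C=\max \Big( \frac{1}{r},\frac{\sigma }{\sqrt{2}\,r^{3/2}}\Big). $$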

Therefore, Lemma 4.4 holds for \(w \ge 1\). Let us now consider \(w \in (0,1)\). We decompose \(u(x,w)\) as \(u(x,w)=u_{1}(x,w)+u _{2}(x,w)\), with

$$\begin{aligned} u_{1}(x,w) &= \mathbb{E}^{0}\Big[\mathbf{1}_{\{\tau_{0}^{w}< \tau_{1}^{w}\}}\int_{0}^{\tau^{*}\wedge \tau_{0}^{w}} e^{-rs}(x+\sigma Z_{s})\,ds\Big], \\ u_{2}(x,w) &= \mathbb{E}^{0}\Big[\mathbf{1}_{\{\tau_{0}^{w}> \tau_{1}^{w}\}}\int_{0}^{\tau^{*}\wedge \tau_{0}^{w}} e^{-rs}(x+\sigma Z_{s})\,ds\Big], \end{aligned}$$

where \(\tau_{1}^{w}=\inf \{ t \ge 0: W_{t}^{w}=1 \}\). On the event \(\{\tau_{0}^{w} < \tau_{1}^{w} \}\), we have for every \(t \le \tau^{*} \wedge \tau_{0}^{w} < \tau_{1}^{w}\) the inequality \(X_{t} \le \frac{1}{ \lambda } + x\). Therefore,

$$ u_{1}(x,w)\le \Big(\frac{1}{\lambda }+x\Big)\, \mathbb{E}^{0}\Big[\mathbf{1}_{\{\tau_{0}^{w}< \tau_{1}^{w}\}}\int_{0}^{\tau_{0}^{w}} e^{-rs}\,ds\Big]. $$

Conditioning the process \((W_{t})_{t \in [0,\tau_{1}^{w}]}\) on the event \(\{\tau_{0}^{w} < \tau_{1}^{w} \}\) and using Doob’s \(h\)-transform (see Rogers and Williams [19, Chap. IV, Sect. 39] for a definition) makes \((W_{t})_{t \in [0,\tau_{1}^{w}]}\) a diffusion absorbed at 0 with generator

$$ \tilde{\mathcal{L}}=\frac{\lambda^{2} \sigma^{2}}{2}\frac{\partial^{2}}{ \partial w^{2}}+\left (rw+\lambda \sigma \frac{h^{\prime }(w)}{h(w)}\right )\frac{ \partial }{\partial w}, $$

where

$$ h(w)=\mathbb{P}^{0}[\tau_{0}^{w} < \tau_{1}^{w}]=\frac{\int_{w}^{1}e^{-\frac{r}{\sigma^{2}\lambda^{2}}s^{2}} \, ds}{\int_{0}^{1}e^{-\frac{r}{\sigma^{2}\lambda^{2}}s^{2}} \, ds}. $$
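As a purely numerical sanity check (a sketch that is not part of the proof), one can compare this scale-function expression for \(h\) with a Monte Carlo estimate of \(\mathbb{P}^{0}[\tau_{0}^{w}<\tau_{1}^{w}]\) for the diffusion \(dW_{t}=rW_{t}\,dt+\lambda \sigma \,dZ_{t}\); the parameter values below are illustrative and not taken from the paper.

```python
import numpy as np
from scipy.integrate import quad

# Illustrative parameters (not taken from the paper)
r, lam, sigma = 0.05, 0.5, 1.0
a = r / (sigma ** 2 * lam ** 2)

def h_scale(w):
    """Scale-function expression for P^0[tau_0^w < tau_1^w] when dW = rW dt + lam*sigma dZ."""
    num, _ = quad(lambda t: np.exp(-a * t * t), w, 1.0)
    den, _ = quad(lambda t: np.exp(-a * t * t), 0.0, 1.0)
    return num / den

def h_mc(w, n_paths=20000, dt=1e-3, t_max=50.0, seed=0):
    """Crude Euler-Maruyama estimate of the same exit probability; the scheme
    overshoots the boundaries, so only rough agreement should be expected."""
    rng = np.random.default_rng(seed)
    W = np.full(n_paths, w)
    alive = np.ones(n_paths, dtype=bool)   # paths that have hit neither 0 nor 1
    hit0 = np.zeros(n_paths, dtype=bool)
    for _ in range(int(t_max / dt)):
        idx = np.flatnonzero(alive)
        if idx.size == 0:
            break
        dZ = rng.normal(0.0, np.sqrt(dt), idx.size)
        W[idx] += r * W[idx] * dt + lam * sigma * dZ
        hit0[idx] = W[idx] <= 0.0
        alive[idx] = (W[idx] > 0.0) & (W[idx] < 1.0)
    return hit0.mean()

for w in (0.1, 0.5, 0.9):
    print(f"w={w}: scale formula {h_scale(w):.4f}, Monte Carlo {h_mc(w):.4f}")
```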

Let us denote \(\tilde{\tau }_{0}^{w}=\inf \{t \ge 0:\tilde{W}_{t}^{w}=0 \}\), where

$$ d\tilde{W}_{t}=\bigg( r\tilde{W}_{t} +\lambda \sigma \frac{h^{\prime }(\tilde{W}_{t})}{h(\tilde{W}_{t})} \bigg)\,dt +\lambda \sigma\, dZ_{t}. $$

We have

$$ \mathbb{E}^{0}\Big[\mathbf{1}_{\{\tau_{0}^{w}< \tau_{1}^{w}\}}\int_{0}^{\tau_{0}^{w}} e^{-rs}\,ds\Big] = \mathbb{P}^{0}[\tau_{0}^{w}< \tau_{1}^{w}]\,\mathbb{E}^{0}\Big[\int_{0}^{\tilde{\tau }_{0}^{w}} e^{-rs}\,ds\Big] \le \mathbb{E}^{0}\Big[\int_{0}^{\tilde{\tau }_{0}^{w}} e^{-rs}\,ds\Big] =: \tilde{\phi }(w). $$

The function \(\tilde{\phi }\) satisfies

$$ \tilde{\mathcal{L}}\tilde{\phi }- r\tilde{\phi }= 0 $$

with \(\tilde{\phi }(0)=0\). A computation shows that \(rw+\lambda \sigma \frac{h^{\prime }(w)}{h(w)}<0\), and because \(\tilde{\phi }\) is nondecreasing, we get that \(\tilde{\phi }\) is convex. We deduce that \(\tilde{\phi }(w)\le C w\) for some positive constant \(C\) which implies \(u_{1}(x,w)\le C(1+x)w\). Now, we decompose \(u_{2}\) as

$$\begin{aligned} u_{2}(x,w) &= \mathbb{E}^{0}\Big[\mathbf{1}_{\{\tau_{0}^{w}> \tau_{1}^{w}\}}\int_{0}^{\tau^{*}\wedge \tau_{1}^{w}} e^{-rs}(x+\sigma Z_{s})\,ds\Big] + \mathbb{E}^{0}\Big[\mathbf{1}_{\{\tau_{0}^{w}> \tau_{1}^{w}\}}\int_{\tau^{*}\wedge \tau_{1}^{w}}^{\tau^{*}\wedge \tau_{0}^{w}} e^{-rs}(x+\sigma Z_{s})\,ds\Big] \\ &=: u_{3}(x,w)+u_{4}(x,w). \end{aligned}$$

Because for every \(s \le \tau_{1}^{w}\), we have \(X_{s}\le \frac{1}{ \lambda } +x\), it follows that

$$\begin{aligned} u_{3}(x,w) &\le \left ( \frac{1}{\lambda } +x \right )\int_{0}^{\infty }e^{-rs}\,ds \;\mathbb{P}^{0}[\tau_{0}^{w} > \tau_{1}^{w}] \\ &=\frac{\frac{1}{\lambda } +x}{r}\mathbb{P}^{0}[\tau_{0}^{w} > \tau _{1}^{w}] \\ &=\frac{\frac{1}{\lambda } +x}{r}\big(1-h(w)\big) \\ &\le C(1+x)w. \end{aligned}$$

Finally, the strong Markov property implies

$$\begin{aligned} u_{4}(x,w) &= \mathbb{E}^{0}\big[\mathbf{1}_{\{\tau_{0}^{w}> \tau_{1}^{w}\}}\, e^{-r(\tau^{*}\wedge \tau_{1}^{w})}\, u\big(X_{\tau^{*}\wedge \tau_{1}^{w}}, W^{w}_{\tau^{*}\wedge \tau_{1}^{w}}\big)\big] \\ &\le \mathbb{E}^{0}\big[\mathbf{1}_{\{\tau_{0}^{w}> \tau_{1}^{w}\}}\, e^{-r\tau_{1}^{w}}\, u\big(X_{\tau_{1}^{w}},1\big)\big] \\ &\le \mathbb{E}^{0}\big[\mathbf{1}_{\{\tau_{0}^{w}> \tau_{1}^{w}\}}\, e^{-r\tau_{1}^{w}}\, C(1+X_{\tau_{1}^{w}})\big] \\ &\le C\Big(1+\frac{1}{\lambda }+x\Big)\,\mathbb{P}^{0}[\tau_{0}^{w}> \tau_{1}^{w}] \\ &\le C(1+x)\,w, \end{aligned}$$

where the first inequality holds because \(u(x^{*},W_{\tau^{*}})=0\). This ends the proof of Lemma 4.4. □

Proof of Proposition 4.5

We show that \(u(x,w) = c (x) \, w + o(w)\) as \(w\to 0\), where \(c (x)\) is a constant depending only on \(x\). This proves that \(\frac{\partial u}{\partial w} (x,0)\) exists and is finite.

We have

$$\begin{aligned} \begin{aligned}[b] u(x,w) &= \mathbb{E}^{0} \bigg[\int_{0}^{\tau_{0}^{w}} e^{-rs} (x + \sigma Z_{s}) \, ds\bigg]- \mathbb{E}^{0} \bigg[\int_{\tau_{R}}^{\tau _{0}^{w} }e^{-rs} (x + \sigma Z_{s}) \, ds\bigg] \\ &=A-B. \end{aligned} \end{aligned}$$
(A.1)

First, observe that

$$ A= \frac{x}{r} \big(1 - \ell (w)\big) + \mathbb{E}^{0} \bigg[\int_{0} ^{\tau_{0}^{w}} e^{-rs} \sigma Z_{s} \, ds \bigg], $$
(A.2)

where we set \(\ell (w) =\mathbb{E}^{0} [e^{-r\tau_{0}^{w}}]\). Following [3, Chap. 2, Sect. 1.10], we introduce the function \(\phi \) defined as the nonincreasing fundamental solution on ℝ of

$$\begin{aligned} \frac{\sigma^{2}\lambda^{2}}{2} \phi^{\prime \prime } + r w \phi^{ \prime } - r \phi =0, \\ \hbox{ with }\phi (0) =1, \; \; \lim_{w\to +\infty }\phi (w) = 0. \end{aligned}$$

The function \(\ell \) coincides with \(\phi \) on \((0,+\infty )\); in particular, \(\ell \) is twice continuously differentiable over \((0, \infty )\) with \(\ell^{\prime }(0+)<+\infty \). It follows that \(1 - \ell (w) = - \ell^{\prime }(0+) \, w + o(w)\) as \(w\to 0\).
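For the reader's convenience, the decreasing solution can also be written in elementary form; the following closed form is a verification sketch (it is not quoted from [3]), with \(a:=\frac{r}{\sigma^{2}\lambda^{2}}\):

$$ \ell (w)=e^{-aw^{2}}-2aw\int_{w}^{\infty }e^{-at^{2}}\,dt, \qquad \ell^{\prime }(w)=-2a\int_{w}^{\infty }e^{-at^{2}}\,dt. $$

Indeed, \(\frac{\sigma^{2}\lambda^{2}}{2}\ell^{\prime \prime }(w)=\sigma^{2}\lambda^{2}a\,e^{-aw^{2}}=re^{-aw^{2}}\), and a direct substitution gives \(\frac{\sigma^{2}\lambda^{2}}{2}\ell^{\prime \prime }+rw\ell^{\prime }-r\ell =0\); moreover \(\ell (0)=1\), \(\lim_{w\to +\infty }\ell (w)=0\), and \(\ell^{\prime }(0+)=-\sqrt{a\pi }=-\frac{\sqrt{\pi r}}{\sigma \lambda }\) is indeed finite.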

We now study the second term on the right-hand side of (A.2). Let us define the \(\mathbb{P}^{0}\)-uniformly integrable martingale

$$ N_{t}=\int_{0}^{t} e^{-rs}\,dZ_{s}=e^{-r t}Z_{t}+r\int_{0}^{t} e^{-rs}Z_{s}\,ds, \qquad t \geq 0, $$

where the second equality follows from integration by parts, \(d(e^{-rt}Z_{t})=-re^{-rt}Z_{t}\,dt+e^{-rt}\,dZ_{t}\).

The optional sampling theorem gives \(\mathbb{E}^{0}[N_{\tau_{0}^{w} \wedge T}]=0\). Then, letting \(T \to +\infty \),

$$ \mathbb{E}^{0} \bigg[\int_{0}^{\tau_{0}^{w}} e^{-rs} Z_{s} \, ds \bigg] = -\frac{1}{r}\mathbb{E}^{0}[e^{-r\tau_{0}^{w} } Z_{\tau_{0} ^{w}}]. $$
(A.3)

Observe that \(e^{-rt} W_{t} = w+\lambda \sigma N_{t}\), \(t \geq 0\), is thus under \(\mathbb{P}^{0}\) a uniformly integrable martingale. Using the dynamics of \((W_{t})\), we deduce that

$$ w e^{-rt} + r e^{-rt} \int_{0}^{t} W_{s} \, ds + \lambda \sigma e^{-rt} Z_{t}, \qquad t \geq 0, $$

is a \(\mathbb{P}^{0}\)-uniformly integrable martingale. Thus the optional sampling theorem yields

$$\begin{aligned} w &=\mathbb{E}^{0}[e^{-r(T\wedge \tau_{0}^{w})} W_{T\wedge \tau_{0} ^{w}}] \\ &= w \mathbb{E}^{0} [e^{-r(T\wedge \tau_{0}^{w})}] + r \mathbb{E} ^{0} \bigg[e^{-r(T\wedge \tau_{0}^{w})} \int_{0}^{T\wedge \tau_{0} ^{w}} W_{s} \, ds\bigg] \\ & \phantom{=:}+ \lambda \sigma \mathbb{E}^{0}[ e^{-r(T\wedge \tau_{0} ^{w})} Z_{T\wedge \tau_{0}^{w}}] . \end{aligned}$$
(A.4)

Letting \(T\) tend to \(\infty \), we obtain

$$ w=w \mathbb{E}^{0} (e^{-r\tau_{0}^{w}}) + r \mathbb{E}^{0} \bigg[e ^{-r \tau_{0}^{w}} \int_{0}^{ \tau_{0}^{w}} W_{s} \, ds\bigg] + \lambda \sigma \mathbb{E}^{0}[ e^{-r \tau_{0}^{w}} Z_{\tau_{0}^{w}}]. $$

Using (A.3) and (A.4), one gets

$$\begin{aligned} \mathbb{E}^{0}\Big[\int_{0}^{\tau_{0}^{w}} e^{-rs}Z_{s}\,ds\Big] &= \frac{1}{\lambda \sigma r}\, w\big(\mathbb{E}^{0}[e^{-r\tau_{0}^{w}}]-1\big) + \frac{1}{\lambda \sigma }\,\mathbb{E}^{0}\Big[e^{-r\tau_{0}^{w}}\int_{0}^{\tau_{0}^{w}} W_{s}\,ds\Big] \\ &= \frac{1}{\lambda \sigma }\bigg(\frac{1}{r}\, w\big(\ell (w)-1\big) + \mathbb{E}^{0}\Big[\int_{0}^{\infty } e^{-r\tau_{0}^{w}}\,\mathbf{1}_{\{s\le \tau_{0}^{w}\}} W_{s}\,ds\Big]\bigg) \\ &= \frac{1}{\lambda \sigma }\bigg(\frac{1}{r}\, w\big(\ell (w)-1\big) + \mathbb{E}^{0}\Big[\int_{0}^{\infty } \mathbb{E}^{0}\big[e^{-r\tau_{0}^{w}}\,\big\vert \,\mathcal{F}_{s}\big]\,\mathbf{1}_{\{s\le \tau_{0}^{w}\}} W_{s}\,ds\Big]\bigg) \\ &= \frac{1}{\lambda \sigma }\bigg(\frac{1}{r}\, w\big(\ell (w)-1\big) + \mathbb{E}^{0}\Big[\int_{0}^{\tau_{0}^{w}} e^{-rs}\,\ell (W_{s})\, W_{s}\,ds\Big]\bigg), \end{aligned}$$

where the last equality follows from the strong Markov property. Now, it remains to prove that

$$ g(w) := \mathbb{E}^{0}\bigg[\int_{0}^{\tau_{0}^{w}} e^{-rs} \ell (W _{s}) W_{s} \, ds \bigg] = cw + o(w) $$

for \(w\) small enough. First, we show that the function \(w\mapsto w \ell (w)\) is bounded. Let us consider the real function \(k(w) = (\int _{w}^{\infty }e^{-\frac{r}{\sigma^{2}\lambda^{2}} t^{2}} \, dt) / ( \int_{0}^{\infty }e^{-\frac{r}{\sigma^{2}\lambda^{2}} t^{2}} \, dt)\) which is the smooth solution to

$$\begin{aligned} \frac{\sigma^{2}\lambda^{2}}{2} k{''} + r w k' &=0, \\ k(0) =1, \; \lim_{w\to +\infty }k(w) &=0. \end{aligned}$$

Note that the function \(\theta =k-\ell \) is twice continuously differentiable, bounded over \((0, \infty )\) and satisfies \(\theta (0) = \lim_{w\to +\infty }\theta (w) = 0\) together with

$$ \frac{\sigma^{2}\lambda^{2}}{2} \theta {''} + r w \theta ' = -r\ell \leq 0. $$

Then the process \((\theta (W_{t}))_{t \geq 0}\) is a bounded \(\mathbb{P}^{0}\)-supermartingale and thus for every \(T>0\),

$$ \mathbb{E}^{0} [\theta (W_{\tau_{0}^{w}\wedge T})] \le \theta (w). $$

Letting \(T \to +\infty \), we conclude that \(\theta (w) \ge 0\) because \(\theta \) is bounded with \(\theta (0)=0\). It follows that \(\ell (w) w \leq k(w) w\) over \([0, \infty )\). We observe that \(\lim_{w \to \infty } w k(w) =0\) and thus \(\lim_{w \to \infty } w \ell (w) =0\). Therefore, the function \(w \mapsto w \ell (w) \) is bounded on \([0, \infty )\) and the function \(g\) is well defined.
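For the limit \(\lim_{w \to \infty } w k(w) =0\) used above, one can invoke the standard Gaussian tail bound (a short verification, where \(a:=\frac{r}{\sigma^{2}\lambda^{2}}\)):

$$ \int_{w}^{\infty }e^{-at^{2}}\,dt \le \int_{w}^{\infty }\frac{t}{w}\,e^{-at^{2}}\,dt=\frac{e^{-aw^{2}}}{2aw}, \qquad \hbox{so that}\qquad w\,k(w)\le \frac{e^{-aw^{2}}}{2a\int_{0}^{\infty }e^{-at^{2}}\,dt}=\frac{e^{-aw^{2}}}{\sqrt{a\pi }}, $$

which tends to zero as \(w\to \infty \).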

Assume for a while the existence of a bounded \(C^{2}\) solution \(f\) on ℝ to the differential equation

$$ \frac{\sigma^{2} \lambda^{2}}{2} f{''} + r w f{'} - r f + w \ell (w) =0, \qquad f(0) =0. $$
(A.5)

Itô’s formula yields for every \(T>0\) that

$$ \mathbb{E}^{0}\big[ e^{-r(T\wedge \tau_{0}^{w})}f(W_{T\wedge \tau_{0} ^{w}}) \big]=f(w)-\mathbb{E}^{0} \bigg[\int_{0}^{T\wedge \tau_{0}^{w}} e^{-rs} \ell (W_{s}) W_{s} \, ds\bigg]. $$
(A.6)

Observe that because \(f(0)=0\),

$$ \mathbb{E}^{0}\big[ e^{-r(T\wedge \tau_{0}^{w})}f(W_{T\wedge \tau_{0}^{w}}) \big]= \mathbb{E}^{0}\big[ e^{-rT}f(W_{T})\,\mathbf{1}_{\{T\le \tau_{0}^{w}\}} \big]\le \Vert f\Vert_{\infty }\, e^{-rT}. $$

The monotone convergence theorem gives

$$ \lim_{T\to +\infty }\mathbb{E}^{0} \bigg[\int_{0}^{T\wedge \tau_{0} ^{w}} e^{-rs} \ell (W_{s}) W_{s} \, ds\bigg]=g(w). $$

Letting \(T\) tend to \(+\infty \) in (A.6), we have that \(g\) coincides with \(f\) on \((0,+\infty )\). Therefore, \(g\) is a bounded, twice continuously differentiable function over \((0, \infty )\) and thus \(g(w) = g' (0+) w + o(w)\).

The existence of a bounded solution of (A.5) comes from the general form of solutions given by the method of variation of constants,

$$ f(x)=\alpha x+ \beta \ell (x)+\ell (x)\int_{0}^{x} \frac{u^{2}\ell (u)}{J(u)}\,du-x\int_{x}^{\infty } \frac{u\ell^{2}(u)}{J(u)}\,du, $$

where \(J(x)=\frac{2}{\sigma^{2}}(\ell (x)-x\ell^{\prime }(x))\) is the Wronskian. Because we have \(\ell \le k\) and \(J(x)=Ce^{-\frac{r}{ \sigma^{2}\lambda^{2}}x^{2}}\) (see [3, Chap. 2, Sect. 1.11]), the two integrals \(\int_{0}^{x} \frac{u^{2}\ell (u)}{J(u)}\,du\) and \(\int_{0}^{x} \frac{u\ell^{2}(u)}{J(u)}\,du\) converge for \(x \to \infty \). Then, it suffices to choose \(\beta =-\int_{0}^{\infty }\frac{u ^{2}\ell (u)}{J(u)}\,du\) and \(\alpha =0\) to conclude.

Summing up our results, we have obtained that

$$ \tilde{u}(x,w) := \mathbb{E}^{0}\bigg[ \int_{0}^{\tau_{0}^{w}} e^{-rs} (x + \sigma Z_{s}) \,ds\bigg] = c(x) w + w\eta (w), \qquad \lim_{w \to 0} \eta (w)=0. $$

Moreover, observe that \(\eta \) is a continuous bounded function on \((0,+\infty )\).
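As an illustration only (a rough Monte Carlo sketch, not part of the proof), one can estimate \(\tilde{u}(x,w)\) by simulating \(X_{t}=x+\sigma Z_{t}\) and \(dW_{t}=rW_{t}\,dt+\lambda \sigma \,dZ_{t}\) with the same driving noise and checking that \(\tilde{u}(x,w)/w\) is roughly constant for small \(w\); the parameter values are illustrative and not taken from the paper.

```python
import numpy as np

# Illustrative parameters (not taken from the paper)
r, lam, sigma, x0 = 0.5, 0.5, 1.0, 2.0

def u_tilde_mc(w, n_paths=4000, dt=1e-3, horizon=20.0, seed=1):
    """Crude Monte Carlo estimate of E^0[ int_0^{tau_0^w} e^{-rs} (x0 + sigma Z_s) ds ],
    with paths truncated at `horizon` (the discount factor e^{-r*horizon} is negligible here)."""
    rng = np.random.default_rng(seed)
    n_steps = int(horizon / dt)
    Z = np.zeros(n_paths)                 # driving Brownian motion
    W = np.full(n_paths, w)               # dW = rW dt + lam*sigma dZ, frozen once it hits 0
    alive = np.ones(n_paths, dtype=bool)  # W has not hit 0 yet
    acc = np.zeros(n_paths)               # running discounted integral
    for k in range(n_steps):
        acc += np.exp(-r * k * dt) * (x0 + sigma * Z) * dt * alive
        dZ = rng.normal(0.0, np.sqrt(dt), n_paths)
        Z += dZ * alive
        W += (r * W * dt + lam * sigma * dZ) * alive
        alive &= W > 0.0
    return acc.mean()

# The ratios below should be of comparable size, illustrating u_tilde ~ c(x0) w;
# the discretised hitting time makes the smallest w the least reliable.
for w in (0.1, 0.2, 0.4):
    print(f"w={w}: u_tilde/w = {u_tilde_mc(w) / w:.3f}")
```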

We now turn to the second term of (A.1). We show that for \(w\) sufficiently small,

$$ B= \mathbb{E}^{0}\bigg[ \int_{\tau_{R}}^{\tau_{0}^{w}} e^{-rs} (x + \sigma Z_{s}) \,ds\bigg] = Cw+o(w), $$

where \(C\) is a constant that may depend on \(x\). The strong Markov property yields

$$\begin{aligned} B &= \mathbb{E}^{0}\Big[\mathbf{1}_{\{\tau^{*}< \tau_{0}^{w}\}}\int_{\tau^{*}}^{\tau_{0}^{w}} e^{-rs}(x+\sigma Z_{s})\,ds\Big] \\ &= \mathbb{E}^{0}\big[\mathbf{1}_{\{\tau^{*}< \tau_{0}^{w}\}}\, e^{-r\tau^{*}}\,\tilde{u}\big(x^{*}, W^{w}_{\tau^{*}}\big)\big] \\ &= \mathbb{E}^{0}\big[\mathbf{1}_{\{\tau^{*}< \tau_{0}^{w}\}}\, e^{-r\tau^{*}}\big(C\, W^{w}_{\tau^{*}} + W^{w}_{\tau^{*}}\,\eta (W^{w}_{\tau^{*}})\big)\big] \\ &= C\,\mathbb{E}^{0}\big[\mathbf{1}_{\{\tau^{*}< \tau_{0}^{w}\}}\, e^{-r\tau^{*}}\, W^{w}_{\tau^{*}}\big] + \mathbb{E}^{0}\big[\mathbf{1}_{\{\tau^{*}< \tau_{0}^{w}\}}\, e^{-r\tau^{*}}\, W^{w}_{\tau^{*}}\,\eta (W^{w}_{\tau^{*}})\big] \\ &= C w + \mathbb{E}^{0}\big[\mathbf{1}_{\{\tau^{*}< \tau_{0}^{w}\}}\, e^{-r\tau^{*}}\, W^{w}_{\tau^{*}}\,\eta (W^{w}_{\tau^{*}})\big], \end{aligned}$$

where the last equality again uses the optional sampling theorem with the martingale \((e^{-rt} W_{t})_{t \geq 0}\), because \(\tau^{*}\) is almost surely finite. We end the proof by showing that

$$ \mathbb{E}^{0}\big[\mathbf{1}_{\{\tau^{*}< \tau_{0}^{w}\}}\, e^{-r\tau^{*}}\, W^{w}_{\tau^{*}}\,\eta (W^{w}_{\tau^{*}})\big]=o(w) $$

for sufficiently small \(w\). For every \(\varepsilon >0\), there is some \(\delta >0\) such that \(\vert \eta (w) \vert \le \varepsilon \) for all \(w < \delta \). Now, fix \(\varepsilon >0\), \(w<\delta \) and introduce the stopping time

$$ \tau^{w}_{\delta }=\inf \{ t \ge 0: W^{w}_{t}=\delta \}. $$

We have

$$\begin{aligned} \Big\vert \mathbb{E}^{0}\big[\mathbf{1}_{\{\tau^{*}< \tau_{0}^{w}\}}\, e^{-r\tau^{*}}\, W^{w}_{\tau^{*}}\,\eta (W^{w}_{\tau^{*}})\big]\Big\vert &= \Big\vert \mathbb{E}^{0}\big[e^{-r\tau^{*}}\, W^{w}_{\tau^{*}}\,\eta (W^{w}_{\tau^{*}})\,\mathbf{1}_{\{\tau^{*}< \tau_{0}^{w}\}}\,\mathbf{1}_{\{\tau^{*}\le \tau_{\delta }^{w}\}}\big] \\ &\phantom{=:}+ \mathbb{E}^{0}\big[e^{-r\tau^{*}}\, W^{w}_{\tau^{*}}\,\eta (W^{w}_{\tau^{*}})\,\mathbf{1}_{\{\tau^{*}< \tau_{0}^{w}\}}\,\mathbf{1}_{\{\tau^{*}> \tau_{\delta }^{w}\}}\big]\Big\vert \\ &\le \varepsilon \,\mathbb{E}^{0}\big[e^{-r\tau^{*}}\, W^{w}_{\tau^{*}}\,\mathbf{1}_{\{\tau^{*}< \tau_{0}^{w}\}}\big] + \Vert \eta \Vert_{\infty }\,\mathbb{E}^{0}\big[e^{-r\tau^{*}}\, W^{w}_{\tau^{*}}\,\mathbf{1}_{\{\tau^{*}< \tau_{0}^{w}\}}\,\mathbf{1}_{\{\tau^{*}> \tau_{\delta }^{w}\}}\big] \\ &\le \varepsilon w + \Vert \eta \Vert_{\infty }\,\mathbb{E}^{0}\big[e^{-r\tau_{\delta }^{w}}\, W^{w}_{\tau_{\delta }^{w}}\,\mathbf{1}_{\{\tau_{\delta }^{w}< \tau_{0}^{w}\}}\big]. \end{aligned}$$

Proceeding analogously to the proof of Lemma 4.4, we define on the set \(\{\tau_{\delta }^{w} < \tau_{0}^{w}\}\) the diffusion \((\hat{W}_{t})_{t \le \tau_{0}}\) absorbed at \(\delta \) by using Doob’s \(h\)-transform and obtain

$$ \mathbb{E}^{0}\big[ e^{-r\tau_{\delta }^{w}}\, W^{w}_{\tau_{\delta }^{w}}\,\mathbf{1}_{\{\tau_{\delta }^{w}< \tau_{0}^{w}\}}\big]= \mathbb{P}^{0}[ \tau_{\delta }^{w} < \tau_{0}^{w}]\;\mathbb{E}^{0}\big[ e^{-r\hat{\tau }_{\delta }^{w}}\, \hat{W}_{\hat{\tau }_{\delta }^{w}}\big]. $$

Now, both expressions \(\mathbb{P}^{0}[ \tau_{\delta }^{w} < \tau_{0} ^{w}]\) and \(\mathbb{E}^{0} [ e^{-r\hat{\tau }_{\delta }^{w}} \hat{W} _{\hat{\tau }_{\delta }^{w}}]= \delta \mathbb{E}^{0} [ e^{-r \hat{\tau }_{\delta }^{w}} ]\) converge to zero when \(w \to 0\). Moreover, as in Lemma 4.4, \(\mathbb{P}^{0}[ \tau_{\delta }^{w} < \tau_{0}^{w}]=Cw+o(w)\). This ends the proof that \(\frac{\partial u }{\partial w}(x,0)\) exists and is finite. Furthermore, we observe that the function \(x \mapsto \frac{\partial u }{\partial w}(x,0)\) is nondecreasing because \(x \mapsto u(x,w)\) is nondecreasing.

With these preparations, we are now ready to prove (i)–(iii).

(i) We have for \(\varepsilon >0\) that

$$ u(x,w +\varepsilon )-u(x,w)=\mathbb{E}^{0}\bigg[\int_{0}^{\tau_{R} ^{\varepsilon }}e^{-rs}X_{s}\,ds\bigg]-\mathbb{E}^{0}\bigg[\int_{0} ^{\tau_{R}}e^{-rs}X_{s}\,ds\bigg], $$

where \(\tau_{R}^{\varepsilon }=\inf \{t \ge 0:(x + \sigma Z_{t}, w + \varepsilon + \int_{0}^{t} r W_{s} \, ds + \lambda \sigma Z_{t}) \notin R\} \ge \tau_{R}\). The strong Markov property gives for the first term

$$ \mathbb{E}^{0}\bigg[\int_{0}^{\tau_{R}^{\varepsilon }}e^{-rs}X_{s}\,ds \bigg]=\mathbb{E}^{0}\bigg[\int_{0}^{\tau_{R}}e^{-rs}X_{s}\,ds\bigg]+ \mathbb{E}^{0} \big[e^{-r(\tau^{*}\wedge \tau^{w}_{0})}u\big(X_{\tau ^{*}\wedge \tau^{w}_{0}},W^{w+\varepsilon }_{\tau^{*}\wedge \tau^{w} _{0}}\big)\big]. $$

Using \(u(x^{*},w)=0\) for all \(w>0\), we get

$$ \frac{1}{\varepsilon }\big(u(x,w+\varepsilon )-u(x,w)\big)= \frac{1}{\varepsilon }\,\mathbb{E}^{0}\big[ e^{-r\tau_{0}^{w}}\, u\big( X_{\tau_{0}^{w}}, W^{w+\varepsilon }_{\tau_{0}^{w}}\big)\,\mathbf{1}_{\{\tau^{*}\ge \tau_{0}^{w}\}}\big]. $$

Now, observe that \(W^{w+\varepsilon }_{\tau_{0}^{w}}=\varepsilon e^{r \tau_{0}^{w}}\) and thus

$$ \frac{1}{\varepsilon }\big(u(x,w+\varepsilon )-u(x,w)\big)= \mathbb{E}^{0}\bigg[ \frac{u\big( X_{\tau_{0}^{w}}, \varepsilon e^{r \tau_{0}^{w}}\big)}{\varepsilon e^{r \tau_{0}^{w}}}\,\mathbf{1}_{\{\tau^{*}\ge \tau_{0}^{w}\}}\bigg]\ge 0. $$
(A.7)

Because we have just proved that \(\frac{\partial u}{\partial w}(x,0)\) exists for every \(x>0\), we know that the random variables \(\frac{u( X_{\tau_{0}^{w}}, \varepsilon e^{r \tau_{0}^{w}})}{\varepsilon e^{r \tau_{0}^{w}}}\,\mathbf{1}_{\{\tau^{*}\ge \tau_{0}^{w}\}}\) converge to \(\frac{\partial u}{\partial w}( X_{\tau_{0}^{w}},0)\,\mathbf{1}_{\{\tau^{*}\ge \tau_{0}^{w}\}}\) almost surely as \(\varepsilon \) tends to zero. Moreover, up to a constant, they are bounded above by \(1+X_{\tau^{w}_{0}}\) by Lemma 4.4. Now observe that

$$ \lambda X_{\tau^{w}_{0}} \le \lambda x -w $$

on the set \(\{\tau^{*}\ge \tau^{w}_{0} \}\) and thus \(1+X_{\tau^{w} _{0}}\) is bounded on this set. Then the dominated convergence theorem yields assertion (i) by letting \(\varepsilon \) tend to zero in (A.7).

(ii) First, we prove that \(X^{x}_{\tau_{0}^{w_{0}}} \ge X^{x}_{\tau _{0}^{w_{1}}}\) for any \(w_{0} \le w_{1}\) on the set \(\{ \tau_{0}^{w _{1}}<+\infty \}\). Note that \(\tau_{0}^{w_{0}} \le \tau_{0}^{w_{1}}\) almost surely for \(w_{0} \le w_{1}\). We integrate \(\lambda dX_{t} = dW _{t}-rW_{t}\,dt \) on the interval \((\tau_{0}^{w_{0}},\tau_{0}^{w_{1}})\) on the set \(\{ \tau_{0}^{w_{1}}<+\infty \}\). We obtain

$$ \lambda ( X^{x}_{\tau_{0}^{w_{1}}}-X^{x}_{\tau_{0}^{w_{0}}})= -W^{w _{1}}_{\tau_{0}^{w_{0}}}-r\int_{\tau_{0}^{w_{0}}}^{\tau_{0}^{w_{1}}}W _{s}^{w_{1}}\,ds \le 0. $$

According to assertion (i),

$$\begin{aligned} \frac{\partial u}{\partial w}(x,w_{0}) &= \mathbb{E}^{0}\Big[\mathbf{1}_{\{\tau_{0}^{w_{0}}\le \tau^{*}\}}\,\frac{\partial u}{\partial w}\big(X^{x}_{\tau_{0}^{w_{0}}},0\big)\Big] \\ &\ge \mathbb{E}^{0}\Big[\mathbf{1}_{\{\tau_{0}^{w_{1}}\le \tau^{*}\}}\,\frac{\partial u}{\partial w}\big(X^{x}_{\tau_{0}^{w_{0}}},0\big)\Big] \\ &\ge \mathbb{E}^{0}\Big[\mathbf{1}_{\{\tau_{0}^{w_{1}}\le \tau^{*}\}}\,\frac{\partial u}{\partial w}\big(X^{x}_{\tau_{0}^{w_{1}}},0\big)\Big] = \frac{\partial u}{\partial w}(x,w_{1}), \end{aligned}$$

where the last inequality comes from the fact that \(x \mapsto \frac{ \partial u}{\partial w}(x,0)\) is nondecreasing. Thus, the function \(w \mapsto \frac{\partial u}{\partial w}(x,w)\) is nonincreasing. Because we know that \(u\) is twice continuously differentiable over \(R\), we get assertion (ii).

(iii) Let us consider \(f\) defined as

$$ f(x)=\frac{\partial u}{\partial w}\big(x,\lambda (x-c)\big) \qquad \hbox{for } x\ge x^{*}. $$

To prove assertion (iii), we show that \(f\) is nonincreasing for any \(c\) such that \((x,\lambda (x-c))\) is in \(R\). Take \(x_{0} \le x_{1}\) and \(w_{i}=\lambda (x_{i}-c)\) for \(i=0,1\). From assertion (i), we have

$$ f( x_{0})= \mathbb{E}^{0}\Big[\mathbf{1}_{\{\tau_{0}^{w_{0}}\le \tau^{*, x_{0}}\}}\,\frac{\partial u}{\partial w}\big(X^{x_{0}}_{\tau_{0}^{w_{0}}},0\big)\Big]. $$

We show that \(\mathbf{1}_{\{\tau_{0}^{w_{0}}\le \tau^{*, x_{0}}\}} \ge \mathbf{1}_{\{\tau_{0}^{w_{1}}\le \tau^{*, x_{1}}\}}\) or, equivalently, that

$$ \{ \tau_{0}^{w_{0}} > \tau^{*, x_{0}} \} \subseteq \{ \tau_{0}^{w_{1}} > \tau^{*, x_{1}} \}. $$

On the set \(\{ \tau_{0}^{w_{0}} > \tau^{*,x_{0}} \} \), we have

$$\begin{aligned} X_{\tau^{*, x_{0}} }^{x_{1}} &=x^{*}+x_{1}-x_{0}, \\ W_{\tau^{*, x_{0}} }^{w_{1}} &=W_{\tau^{*, x_{0}}}^{w_{0}}+(w_{1}-w _{0})e^{r\tau^{*, x_{0}}}. \end{aligned}$$

Therefore,

$$\begin{aligned} W_{\tau^{*, x_{0}} }^{w_{1}}-\lambda X_{\tau^{*, x_{0}} }^{x_{1}} =&W _{\tau^{*, x_{0}} }^{w_{0}}+(w_{1}-w_{0})e^{r\tau^{*, x_{0}} }-\lambda (x^{*}+x_{1}-x_{0}) \\ \ge &\lambda (x_{1}-x_{0})(e^{r\tau^{*, x_{0}} }-1)-\lambda x^{*} \\ \ge &-\lambda x^{*}, \end{aligned}$$

and thus for all \(t \ge \tau^{*, x_{0}} \), we have \(W_{t}^{w_{1}} \ge \lambda (X_{t}^{x_{1}}-x^{*})\). This implies \(\tau_{0}^{w_{1}} > \tau^{*, x_{1}}\). Consequently,

$$ f( x_{0}) \ge \mathbb{E}^{0}\Big[\mathbf{1}_{\{\tau_{0}^{w_{1}}\le \tau^{*, x_{1}}\}}\,\frac{\partial u}{\partial w}\big(X^{x_{0}}_{\tau_{0}^{w_{0}}},0\big)\Big]. $$

Proceeding as previously, on the set \(\{\tau_{0}^{w_{1}} < +\infty \}\), we have

$$ \lambda \big(X^{x_{0}}_{\tau_{0}^{w_{0}}} - X^{x_{1}}_{\tau_{0}^{w _{1}}}\big)= r\bigg(\int_{0}^{\tau_{0}^{w_{1}}}W^{w_{1}}_{s}\,ds-\int _{0}^{\tau_{0}^{w_{0}}}W^{w_{0}}_{s}\,ds \bigg)\ge 0. $$

Thus,

$$ f( x_{0}) \ge \mathbb{E}^{0}\Big[\mathbf{1}_{\{\tau_{0}^{w_{1}}\le \tau^{*, x_{1}}\}}\,\frac{\partial u}{\partial w}\big(X^{x_{1}}_{\tau_{0}^{w_{1}}},0\big)\Big]=f( x_{1}). $$

 □


Cite this article

Décamps, JP., Villeneuve, S. A two-dimensional control problem arising from dynamic contracting theory. Finance Stoch 23, 1–28 (2019). https://doi.org/10.1007/s00780-018-0376-4
