A mathematical treatment of bank monitoring incentives


In this paper, we take up the analysis of a principal/agent model with moral hazard introduced by Pagès (J. Financ. Intermed. doi:10.1016/j.jfi.2012.06.001, 2012), with optimal contracting between competitive investors and an impatient bank monitoring a pool of long-term loans subject to Markovian contagion. We provide here a comprehensive mathematical formulation of the model and show, using martingale arguments in the spirit of Sannikov (Rev. Econ. Stud. 75:957–984, 2008), how the maximization problem with implicit constraints faced by investors can be reduced to a classical stochastic control problem. The approach has the advantage of avoiding the more general techniques based on forward-backward stochastic differential equations described by Cvitanić and Zhang (Contract Theory in Continuous Time Models, Springer 2012) and leads to a simple recursive system of Hamilton–Jacobi–Bellman equations. We provide a solution to our problem by a verification argument and give an explicit description of both the value function and the optimal contract. Finally, we study the limit case where the bank is no longer impatient.




References

  1. Abreu, D., Milgrom, P., Pearce, D.: Information and timing in repeated partnerships. Econometrica 59, 1713–1733 (1991)

  2. Aït-Sahalia, Y., Cacho-Diaz, J., Laeven, R.: Modeling financial contagion using mutually exciting jump processes. NBER working paper No. 15850, available at http://www.nber.org/papers/w15850 (2010)

  3. Azizpour, S., Giesecke, K.: Self-exciting corporate defaults: contagion vs. frailty. Working paper, Stanford University, available at http://www.stanford.edu/dept/MSandE/cgi-bin/people/faculty/giesecke/pdfs/selfexciting.pdf (2008)

  4. Biais, B., Mariotti, T., Rochet, J.-C., Villeneuve, S.: Large risks, limited liability and dynamic moral hazard. Econometrica 78, 73–118 (2010)

  5. Brémaud, P.: Point Processes and Queues: Martingale Dynamics. Springer, Berlin (1981)

  6. Cvitanić, J., Zhang, J.: Contract Theory in Continuous Time Models. Springer, Berlin (2012)

  7. Davis, M., Lo, V.: Infectious defaults. Quant. Finance 1, 382–387 (2001)

  8. Dellacherie, C., Meyer, P.-A.: Probabilities and Potential, vol. B. North-Holland, Amsterdam (1982)

  9. DeMarzo, P., Fishman, M.: Agency and optimal investment dynamics. Rev. Financ. Stud. 20, 151–189 (2007)

  10. DeMarzo, P., Fishman, M.: Optimal long-term financial contracting. Rev. Financ. Stud. 20, 2079–2128 (2007)

  11. Frey, R., Backhaus, J.: Pricing and hedging of portfolio credit derivatives with interacting default intensities. Int. J. Theor. Appl. Finance 11, 611–634 (2008)

  12. Giesecke, K., Kakavand, H., Mousavi, M., Takada, H.: Exact and efficient simulation of correlated defaults. SIAM J. Financ. Math. 1, 868–896 (2010)

  13. Jarrow, R., Yu, F.: Counterparty risk and the pricing of defaultable securities. J. Finance 53, 2225–2243 (2001)

  14. Karatzas, I., Shreve, S.: Brownian Motion and Stochastic Calculus. Springer, New York (1991)

  15. Kraft, H., Steffensen, M.: Bankruptcy, counterparty risk, and contagion. Rev. Finance 11, 209–252 (2007)

  16. Laurent, J.-P., Cousin, A., Fermanian, J.-D.: Hedging default risks of CDOs in Markovian contagion models. Quant. Finance 12, 1773–1791 (2011)

  17. Pagès, H.: Bank monitoring incentives and optimal ABS. J. Financ. Intermed. (2012). doi:10.1016/j.jfi.2012.06.001

  18. Sannikov, Y.: A continuous-time version of the principal-agent problem. Rev. Econ. Stud. 75, 957–984 (2008)

  19. Sannikov, Y., Skrzypacz, A.: Impossibility of collusion under imperfect monitoring with flexible production. Am. Econ. Rev. 97, 1794–1823 (2007)

  20. Sannikov, Y., Skrzypacz, A.: The role of information in repeated games with frequent actions. Econometrica 78, 847–882 (2010)

  21. Yu, F.: Correlated defaults in intensity-based models. Math. Finance 17, 155–173 (2007)



Research partly supported by the Chair Financial Risks of the Risk Foundation sponsored by Société Générale, the Chair Derivatives of the Future sponsored by the Fédération Bancaire Française, and the Chair Finance and Sustainable Development sponsored by EDF and Calyon.

The authors would like to thank Nizar Touzi for his valuable advice, as well as two anonymous referees and an associate editor, whose comments helped to improve a previous version of this paper.

Author information



Corresponding author

Correspondence to Dylan Possamaï.

Appendix A

Proof of Proposition 3.5

In this particular case, Problem (3.2) becomes


Consider first the subproblem derived from (A.1) by abstracting from the initial payment \(D_{0}\) and ignoring the incentive compatibility constraint \(u_{t}\geq b_{1}\), i.e.,

The constraint can be written equivalently as

$$ \mathbb{E}^{\mathbb{P}} \biggl[ \int_{0^{+}}^{\tau}e^{-rt} \bigl( dD_{t}-(r+\lambda_{1})u\,dt \bigr) \biggr] \geq0. $$

The corresponding Lagrangian is

$$ \mathcal{L}_{t}=\mu\,dt-dD_{t}+\nu_{t}e^{-rt} \bigl( dD_{t}-u(r+\lambda_{1})\,dt \bigr) , $$

where \(\nu_{t}\) is the Lagrange multiplier at time \(t\). Optimizing with respect to \(D\), we get \(\nu_{t}=e^{rt}\), and the complementary slackness conditions imply that the dividend process is absolutely continuous with constant rate, namely \(dD_{t}=\delta_{t}\,dt\) with \(\delta_{t}=(r+\lambda_{1})u\). Because the process \(D\) obtained in this manner is clearly admissible, this yields \(\widetilde{v}_{1}(u)= (\mu-(r+\lambda_{1})u )/\lambda_{1}\).

Turning now to (A.1), but still ignoring the incentive compatibility constraint, we have

$$ v_{1}(u)=\sup_{D_{0}} \bigl(-D_{0}+ \widetilde{v}_{1}(u-D_{0}) \bigr), $$

which is increasing in \(D_{0}\) when \(r>0\). Since \(u_{0}=u-D_{0}\) from the bank’s promise-keeping constraint (3.4), the highest initial payment consistent with the incentive compatibility constraint at time 0 is \(D_{0}=u-b_{1}\). This yields

$$ v_{1}(u) =b_{1}-u+\widetilde{v}_{1}(b_{1}) = b_{1}-u+\overline{v}_{1}, $$

where \(\overline{v}_{1}\) is defined as in the proposition. Finally, one verifies that \(\delta_{t}=b_{1}(r+\lambda_{1})\) yields \(u_{t}=b_{1}\) on \([0,\tau)\), so that the incentive compatibility condition binds at all times before default, as desired. □
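The closed forms above lend themselves to a quick numerical illustration: with default arriving at rate \(\lambda_{1}\) and the constant dividend rate \(\delta=(r+\lambda_{1})u\), the investors' value \(\mathbb{E}[\int_{0}^{\tau}(\mu-\delta)\,dt]\) can be estimated by Monte Carlo and compared with \(\widetilde{v}_{1}(u)=(\mu-(r+\lambda_{1})u)/\lambda_{1}\). The sketch below is only an illustration with arbitrary parameter values, not a computation from the paper:

```python
import random

def v1_tilde_closed_form(u, mu, r, lam1):
    """Closed form from the proof: (mu - (r + lam1) u) / lam1."""
    return (mu - (r + lam1) * u) / lam1

def v1_tilde_monte_carlo(u, mu, r, lam1, n_paths=200_000, seed=42):
    """Estimate E[ integral_0^tau (mu - delta) dt ] with tau ~ Exp(lam1)
    and the constant dividend rate delta = (r + lam1) u found above."""
    rng = random.Random(seed)
    delta = (r + lam1) * u
    total = 0.0
    for _ in range(n_paths):
        tau = rng.expovariate(lam1)   # default time of the pool
        total += (mu - delta) * tau   # constant net cash flow until default
    return total / n_paths

# Illustrative parameter values (not taken from the paper)
mu, r, lam1, u = 1.0, 0.05, 0.1, 2.0
print(v1_tilde_closed_form(u, mu, r, lam1))   # ≈ 7.0
print(v1_tilde_monte_carlo(u, mu, r, lam1))   # close to the closed form
```

With 200,000 simulated default times, the Monte Carlo estimate agrees with the closed form to within a few standard errors.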

Proof of Proposition 3.13(i)

We show the result by induction.

  • Initialization with j=2

The solution of the ODE (3.16) for \(j=2\) and a given fixed value of \(\gamma\geq b_{2}\) can be easily calculated and is given by \(\widetilde{v}_{2}(u,\gamma)=\gamma-u+v_{2}(\gamma)\) for \(u>\gamma\) and

Now since we have shown that \(v_{1}\) is everywhere twice differentiable except at \(b_{1}\), we have for every \(\gamma\neq b_{1}+b_{2}\) and every \(u>b_{2}\) with \(u\neq\gamma\) that

$$\frac{\partial\widetilde{v}_2}{\partial\gamma}(u,\gamma)= \biggl(v'_1( \gamma-b_2)+1-\frac{r}{\lambda_2} \biggr) \biggl( \biggl( \frac{ru+\lambda_2b_2}{r\gamma+\lambda_2b_2} \biggr)^{\frac {\lambda_2}{r}}1_{\{u\leq\gamma\}}+1_{\{u>\gamma\}} \biggr). $$

Thus the above expression always has the sign of \(v'_{1}(\gamma-b_{2})+1-\frac{r}{\lambda_{2}}\); that is, it is positive for \(\gamma<b_{1}+b_{2}\) and negative for \(\gamma>b_{1}+b_{2}\). Hence, we clearly have for all \(u>b_{2}\) that

$$\sup_{\gamma\geq b_2}\widetilde{v}_2(u, \gamma)=\widetilde{v}_2(u,b_1+b_2), $$

which means that the maximal solution of (3.16) for \(j=2\) corresponds to the choice \(\gamma_{2}=b_{1}+b_{2}\), which also happens to be the unique solution of

$$\frac{r}{\lambda_{2}}-1\in\partial v_{1}(\gamma_{2}-b_1). $$
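To see concretely how the sign of \(v'_{1}(\gamma-b_{2})+1-\frac{r}{\lambda_{2}}\) pins down \(\gamma_{2}=b_{1}+b_{2}\), one can scan over \(\gamma\) numerically, using the piecewise slopes of \(v_{1}\) from Proposition 3.5 (\(\overline{v}_{1}/b_{1}\) below \(b_{1}\), then \(-1\)). The sketch below uses illustrative parameter values and is only a numerical illustration of the argument, not part of the proof:

```python
def gamma_2_sign_scan(mu, r, lam1, lam2, b1, b2, step=1e-3):
    """Locate the gamma at which v1'(gamma - b2) + 1 - r/lam2 changes sign."""
    v1_bar = (mu - (r + lam1) * b1) / lam1   # \bar v_1 = \tilde v_1(b_1)

    def v1_prime(x):
        # Piecewise slopes of v_1: v1_bar / b1 below b1, then -1 (Prop. 3.5)
        return v1_bar / b1 if x < b1 else -1.0

    gamma = b2 + step
    while v1_prime(gamma - b2) + 1.0 - r / lam2 > 0.0:
        gamma += step
    return gamma   # first gamma at which the gamma-derivative turns negative

# Illustrative parameters; the sign flip occurs at gamma = b1 + b2
mu, r, lam1, lam2, b1, b2 = 1.0, 0.05, 0.1, 0.1, 0.5, 0.5
print(gamma_2_sign_scan(mu, r, lam1, lam2, b1, b2))   # ≈ 1.0 = b1 + b2
```

The scan returns (up to the grid step) \(b_{1}+b_{2}\): the derivative in \(\gamma\) is positive as long as \(\gamma-b_{2}<b_{1}\) and negative beyond, as in the text.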

Then, after some calculations, we find that for all \(b_{2}<u<b_{1}+b_{2}\),

$$v''_2(u)=- \biggl(\lambda_2-r+ \lambda_2\frac{\overline{v}_1}{b_1} \biggr)\frac{ (ru+\lambda_2b_2 )^{\frac{\lambda_2}{r}-1}}{ (r(b_1+b_2)+\lambda_2b_2 )^{\frac{\lambda_2}{r}}}\leq0, $$

because of (3.18). Hence, since \(v_{2}\) is linear on \([b_{1}+b_{2},+\infty)\) and differentiable at \(b_{1}+b_{2}\), it is concave on \((b_{2},+\infty)\). Now if we consider the linear extrapolation of \(v_{2}\) over \([0,b_{2}]\) by (3.15), we just need to verify that the left derivative of \(v_{2}\) at \(b_{2}\) is at least its right derivative to obtain the concavity of \(v_{2}\) over \([0,+\infty)\). Taking the limit as \(u\downarrow b_{2}\) in (3.16), we obtain

$$v'_2 \bigl(b_2^+ \bigr)=\frac{\lambda_2\overline{v}_2-2\mu }{b_2(r+\lambda_2)}. $$

This implies that

$$v'_2 \bigl(b_2^- \bigr)-v'_2 \bigl(b_2^+ \bigr)=\frac{2\mu}{b_2\lambda_2}+v'_2 \bigl(b_2^+ \bigr)\frac {r}{\lambda_2}\geq\frac{\mu\varepsilon}{B} - \frac{r}{\lambda_2}. $$

Now recall Assumption 2.3, which implies that

$$\frac{r}{\lambda_j}<\frac{r}{\overline{\alpha}_j}\leq\frac{\mu \varepsilon-B}{B}\frac{\varepsilon}{1+\varepsilon}< \frac{\mu\varepsilon}{B} $$

for any \(j\), so that \(v'_{2}(b_{2}^{-})-v'_{2}(b_{2}^{+})\geq0\).
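The displayed identity for \(v'_{2}(b_{2}^{-})-v'_{2}(b_{2}^{+})\) can be double-checked by direct algebra: with \(v'_{2}(b_{2}^{-})=\overline{v}_{2}/b_{2}\) (the slope of the linear extrapolation through the origin) and \(v'_{2}(b_{2}^{+})=(\lambda_{2}\overline{v}_{2}-2\mu)/(b_{2}(r+\lambda_{2}))\), both sides reduce to \((2\mu+r\overline{v}_{2})/(b_{2}(r+\lambda_{2}))\). A small numerical sketch with arbitrary positive parameters (the values carry no economic meaning):

```python
def jump_identity_residual(mu, r, lam2, b2, v2_bar):
    """Difference between the two expressions for v2'(b2-) - v2'(b2+):
    both reduce algebraically to (2 mu + r v2_bar) / (b2 (r + lam2))."""
    v2p_plus = (lam2 * v2_bar - 2.0 * mu) / (b2 * (r + lam2))  # right derivative at b2
    v2p_minus = v2_bar / b2                                    # left derivative (linear part)
    lhs = v2p_minus - v2p_plus
    rhs = 2.0 * mu / (b2 * lam2) + v2p_plus * r / lam2
    return lhs - rhs   # vanishes identically

# Arbitrary positive parameters; the residual is zero up to rounding
print(jump_identity_residual(1.0, 0.05, 0.1, 0.5, 3.0))   # ≈ 0.0
```

The residual vanishes for any admissible parameter choice, confirming that the jump in the derivative at \(b_{2}\) is exactly the quantity bounded below in the display.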

  • Heredity for j≥3

Let us now suppose that the maximal solution \(v_{j-1}\) of (3.16) has been constructed for some \(j\geq3\), that it is globally concave on \([0,+\infty)\), everywhere differentiable except at \(b_{j-1}\), everywhere twice differentiable except at \(b_{j-1}\) and \(b_{j-1}+b_{j-2}\), and that the corresponding \(\gamma_{j-1}\geq b_{j-1}+b_{j-2}\). We now construct the maximal solution corresponding to \(j\). Exactly as in the case \(j=2\), the solution of the ODE (3.16) for a given fixed value of \(\gamma\geq b_{j}\) can be easily calculated and is given by

for \(b_{j}<u\leq\gamma\), and \(\widetilde{v}_{j}(u,\gamma)=\gamma-u+v_{j}(\gamma)\) for \(u>\gamma\). Note also that it is clear from (3.16) that \(v_{j}\) is differentiable everywhere except at \(b_{j}\), and twice differentiable everywhere except at \(b_{j}\) and \(b_{j}+b_{j-1}\).

Now since we assumed that \(v_{j-1}\) is everywhere differentiable except at \(b_{j-1}\), we have for every \(\gamma\neq b_{j-1}+b_{j}\) and every \(u>b_{j}\) with \(u\neq\gamma\) that

$$\frac{\partial\widetilde{v}_j}{\partial\gamma}(u,\gamma)= \biggl(v'_{j-1}( \gamma-b_j)+1-\frac{r}{\lambda_j} \biggr) \biggl( \biggl( \frac {ru+\lambda_jb_j}{r\gamma+\lambda_jb_j} \biggr)^{\frac{\lambda _j}{r}}1_{\{u\leq\gamma\}}+1_{\{u>\gamma\}} \biggr). $$

Thus, since \(v_{j-1}\) is concave and its derivative is therefore nonincreasing, we can conclude as in the case \(j=2\) that the maximal solution is uniquely determined by the choice of \(\gamma_{j}\) corresponding to the solution of

$$\frac{r}{\lambda_{j}}-1\in\partial v_{j-1}(\gamma_{j}-b_j). $$

More precisely, using (3.18), we have only two cases. Either

$$v'_{j-1} \bigl(b_{j-1}^+ \bigr)\leq \frac{r}{\lambda_j}-1\leq\frac{\overline {v}_{j-1}}{b_{j-1}} $$

and \(\gamma_{j}=b_{j-1}+b_{j}\), or

$$\frac{r}{\lambda_j}-1<v'_{j-1} \bigl(b_{j-1}^+ \bigr) $$

and \(b_{j-1}+b_{j}<\gamma_{j}\leq\gamma_{j-1}+b_{j}\).

Let us now study the concavity. We can differentiate equation (3.16) twice on \((b_{j},b_{j}+b_{j-1})\), since \(v_{j-1}(u-b_{j})\) is linear and thus twice differentiable on this open interval. We then easily obtain


There are then two cases. If \(\gamma_{j}=b_{j}+b_{j-1}\), differentiating (3.16) once and then taking the limit as \(u\uparrow b_{j}+b_{j-1}\), we get

$$\bigl(r(b_j+b_{j-1})+\lambda_jb_j \bigr)v''_j \bigl((b_j+b_{j-1})^- \bigr)=\lambda_j \biggl(\frac{r}{\lambda_j}-1-\frac{\overline{v}_{j-1}}{b_{j-1}} \biggr)\leq0. $$

Since \(v''_{j}(u)=0\) for \(u>b_{j}+b_{j-1}\), we have proved concavity on \((b_{j},+\infty)\). If \(\gamma_{j}>b_{j}+b_{j-1}\), differentiating (3.16) once and taking limits on both sides of \(b_{j}+b_{j-1}\), we obtain


where the right-hand side is positive by the concavity of \(v_{j-1}\). Next, we differentiate (3.16) twice on \((b_{j}+b_{j-1},\gamma_{j}]\) and easily obtain

$$ v''_j(u)= \lambda_j(ru+\lambda_jb_j)^{\frac{\lambda_j}{r}-2}\int_u^{\gamma_j}\frac{v''_{j-1}(x-b_j)}{(rx+\lambda_jb_j)^{\frac{\lambda_j}{r}-1}}\,dx. $$

Note that we should normally distinguish between the cases \(b_{j}+b_{j-1}+b_{j-2}\leq\gamma_{j}\) or not, since \(v_{j-1}\) is not twice differentiable at \(b_{j-1}+b_{j-2}\). However, since we know that \(v_{j}\) is twice differentiable at \(b_{j}+b_{j-1}+b_{j-2}\), this actually does not change the result. Since \(v_{j-1}\) is concave, (A.4) implies that \(v_{j}\) is concave on \((b_{j}+b_{j-1},+\infty)\). Then with (A.3) we see that the left second derivative of \(v_{j}\) at \(b_{j}+b_{j-1}\) is negative, which thanks to (A.2) finally shows the concavity on \((b_{j},+\infty)\).

Finally, it remains to show that \(v'_{j}(b_{j}^{+})\leq\frac{\overline{v}_{j}}{b_{j}}\). Taking the limit as \(u\downarrow b_{j}\) in (3.16), we obtain

$$v'_j \bigl(b_j^+ \bigr)=\frac{\lambda_j\overline{v}_j-j\mu }{b_j(r+\lambda_j)}. $$

Since \(v'_{j}\geq-1\), this implies that

$$v'_j \bigl(b_j^- \bigr)-v'_j \bigl(b_j^+ \bigr)=\frac{j\mu}{b_j\lambda_j}+v'_j \bigl(b_j^+ \bigr)\frac {r}{\lambda_j}\geq\frac{\mu\varepsilon}{B}- \frac{r}{\lambda_j}, $$

which has already been shown to be positive under Assumption 2.3. Hence \(v_{j}\) is concave on \([0,+\infty)\). □

Proof of Proposition 3.13(ii)

First of all, by the properties of the function \(\psi_{1}\) recalled in Remark 3.12, it is clear that we can always find a \(\lambda_{j}\) such that (3.19) is satisfied. Then if for a fixed \(j\geq2\) we have \(v'_{j-1}(b_{j-1}^{+})\leq0\), differentiating (3.16) immediately gives, for \(u>b_{j}\) with \(u\neq b_{j}+b_{j-1}\), that

$$ \lambda_j \bigl(v'_j(u)-v'_{j-1}(u-b_j) \bigr)=(ru+\lambda_jb_j)v''_j(u)+rv'_j(u). $$

Since we have proved in (i) that the \(v_{j}\) are concave, it is clear that if \(v'_{j-1}(b_{j-1}^{+})\leq0\), the right-hand side above is negative. Then by left- and right-continuity of \(v'_{j-1}\) at \(b_{j-1}\), the result extends to \(u=b_{j}+b_{j-1}\). Hence the desired property (3.20) follows. In particular, this proves the result for \(j=2\), since \(v'_{1}(b_{1}^{+})=-1\). Note also that the property (3.20) clearly holds for \(v_{j}\) when \(u>\gamma_{j}\). Indeed, we then have

$$v'_{j}=-1, $$

and we know that the derivative of \(v_{j-1}\) is always greater than \(-1\).

Let us now show the rest of the result by induction. Since (3.20) is true for j=2, let us fix a j≥3 and assume that

$$ v'_{j-1}(u)-v'_{j-2}(u-b_{j-1}) \leq0,\quad u>b_{j-1}. $$

Now if \(v'_{j-1}(b_{j-1}^{+})\leq0\), we already know that the property (3.20) is true for \(v_{j}\), so we assume that \(v'_{j-1}(b_{j-1}^{+})>0\). Moreover, by our remark above, we know that (3.20) holds for \(v_{j}\) when \(u>\gamma_{j}\). Let us then first prove (3.20) for \(v_{j}\) when \(u>b_{j}+b_{j-1}\). If \(\gamma_{j}=b_{j}+b_{j-1}\), there is nothing to do. Otherwise, using successively (A.5) and (A.4), we have


Now if we differentiate (3.16) and solve the corresponding ODE for \(v'_{j}\), we obtain

$$ v'_j(u)=(ru+\lambda_jb_j)^{\frac{\lambda_j}{r}-1} \int_u^{\gamma_j}\frac {\lambda_jv'_{j-1}(x-b_j)}{(rx+\lambda_jb_j)^{\frac{\lambda_j}{r}}}\,dx - \biggl( \frac{ru+\lambda_jb_j}{r\gamma_j+\lambda_jb_j} \biggr)^{\frac{\lambda_j}{r}-1}. $$

Using (A.8) in (A.7), we obtain for \(u>b_{j}+b_{j-1}\) that


Then we have, for all \(x\geq u>b_{j}+b_{j-1}\) with \(x\neq b_{j}+b_{j-1}+b_{j-2}\), that

where we used the induction hypothesis (A.6) in the last inequality. Since \(v_{j-1}\) is concave, the sign of the right-hand side above is given by the sign of

Using this in (A.9) implies

$$v'_j(u)-v'_{j-1}(u-b_j) \leq0,\quad u>b_j+b_{j-1}. $$

It remains to prove (3.20) when \(b_{j}<u<b_{j}+b_{j-1}\). In that case, (3.20) can be written as

$$v'_j(u)-\frac{\overline{v}_{j-1}}{b_{j-1}}\leq0,\quad b_j<u<b_j+b_{j-1}, $$

which, by concavity of \(v_{j}\), is equivalent to

$$v'_j \bigl(b_j^+ \bigr)-\frac{\overline{v}_{j-1}}{b_{j-1}} \leq0. $$

Now using (A.8), we also have

Thus, with (3.17), we get

which implies

$$v'_j \bigl(b_j^+ \bigr)-\frac{\overline{v}_{j-1}}{b_{j-1}} \leq\phi_{\frac {b_{j-1}}{b_j}} \biggl(\frac{r}{\lambda_j} \biggr)\frac{\overline {v}_{j-1}}{b_{j-1}} \biggl(\frac{v'_{j-1}(b_{j-1}^+)}{\frac{\overline {v}_{j-1}}{b_{j-1}}}-\psi_{\frac{b_{j-1}}{b_j}} \biggl(\frac {r}{\lambda_j} \biggr) \biggr). $$

By Assumption 2.4, we know that \(b_{j}\leq b_{j-1}\). Hence, with (3.19) and what we recalled earlier about the functions \(\psi_{\beta}\) in Remark 3.12, we have

$$\frac{\overline{v}_{j-1}}{b_{j-1}}\leq\psi \biggl(\frac{r}{\lambda_j} \biggr)\leq \psi_{\frac{b_{j-1}}{b_j}} \biggl(\frac{r}{\lambda_j} \biggr), $$

which implies the desired property and ends the proof. □
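As a quick consistency check on (A.8): at \(u=\gamma_{j}\) the integral term vanishes and the last term equals \(-1\), so \(v'_{j}(\gamma_{j})=-1\), in line with \(v'_{j}=-1\) beyond \(\gamma_{j}\). The sketch below evaluates the right-hand side of (A.8) by simple quadrature for a hypothetical piecewise-constant stand-in for \(v'_{j-1}\) (all parameter values are illustrative, not taken from the model) and confirms this boundary value:

```python
def v_j_prime_A8(u, gamma_j, r, lam_j, b_j, v_prev_prime, n=20_000):
    """Evaluate the right-hand side of (A.8) by midpoint quadrature:
    (r u + lam_j b_j)^{lam_j/r - 1}
      * int_u^{gamma_j} lam_j v_{j-1}'(x - b_j) / (r x + lam_j b_j)^{lam_j/r} dx
      - ((r u + lam_j b_j) / (r gamma_j + lam_j b_j))^{lam_j/r - 1}."""
    p = lam_j / r
    h = (gamma_j - u) / n
    integral = 0.0
    for k in range(n):
        x = u + (k + 0.5) * h   # midpoint of the k-th subinterval
        integral += lam_j * v_prev_prime(x - b_j) / (r * x + lam_j * b_j) ** p * h
    a = r * u + lam_j * b_j
    return a ** (p - 1.0) * integral - (a / (r * gamma_j + lam_j * b_j)) ** (p - 1.0)

# Hypothetical stand-in for v'_{j-1}: slope 2 below b_{j-1} = 0.5, then -1
v_prev_prime = lambda x: 2.0 if x < 0.5 else -1.0
val = v_j_prime_A8(u=1.0, gamma_j=1.0, r=0.05, lam_j=0.1, b_j=0.5,
                   v_prev_prime=v_prev_prime)
print(val)   # -1.0: the integral is empty at u = gamma_j
```

For \(u\) slightly below \(\gamma_{j}\), the quadrature returns values strictly above \(-1\), consistent with the derivative of \(v_{j}\) being greater than \(-1\) before \(\gamma_{j}\).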



Cite this article

Pagès, H., Possamaï, D. A mathematical treatment of bank monitoring incentives. Finance Stoch 18, 39–73 (2014). https://doi.org/10.1007/s00780-013-0202-y

Keywords


  • Principal/agent problem
  • Dynamic moral hazard
  • Optimal incentives
  • Optimal securitization
  • Stochastic control
  • Verification theorem

Mathematics Subject Classification (2000)

  • 60H30
  • 91G40

JEL Classification

  • G21
  • G28
  • G32