
Bubbly Markov equilibria

  • Research Article
  • Published in Economic Theory

Abstract

Bubbly Markov equilibria (BME) are recursive equilibria on the natural state space which admit a non-trivial bubble. The present paper studies the existence and properties of BME in a general class of overlapping generations economies with capital accumulation and stochastic production shocks. Using monotone methods, we develop a general approach to construct Markov equilibria and provide necessary and sufficient conditions for these equilibria to be bubbly. Our main result shows that a BME exists whenever the bubbleless equilibrium is Pareto inefficient due to either overaccumulation of capital or inefficient risk sharing between generations.


Notes

  1. This holds even though markets in our model are sequentially complete in that equilibria can be supported by a complete set of state-contingent claims (Arrow securities) as in Barbie et al. (2007). Along the bubbleless equilibrium, however, these claims are not traded between generations, while the presence of a bubbly asset allows for such intergenerational trades.

  2. Formally, this is because investments in capital and the bubbly asset are imperfect substitutes in our framework and therefore give rise to two Euler equations, while there is only one such equation in the deterministic or pure exchange case.

  3. The idea of taking the limit of an economy with positive dividends to obtain bubbly equilibria was also used in Barbie and Kaul (2015) and Aiyagari and Peled (1991).

  4. Here, \(c_{\mathrm{max}}\) is a suitable upper bound for equilibrium consumption. It can formally be obtained if capital is restricted to the bounded state space defined below.

  5. This uniqueness property will be important to obtain several results including Theorem 1. Otherwise, we could have defined \(w_{\mathrm{max}}\) in (10) to be the maximum fixed point of \(W_0^\mathrm{E}(\cdot , \theta _{\mathrm{max}})\).

  6. To see this, suppose \(f(0)=0\). Then, by L’Hôpital’s rule, \(\lim _{k \searrow 0} \frac{f(k)}{k f'(k)} = \lim _{k \searrow 0}\frac{1}{1-E_{f'}(k)}\). The condition thus requires \(\lim _{k \searrow 0} E_{f'}(k)>0\). As \(\lim _{k \searrow 0} f'(k) = \infty \), this can only hold under SI.

  7. For the deterministic OLG growth model, Konishi and Perera-Tallo (1997) established existence of a non-trivial steady-state equilibrium when NLS holds and lifetime utility is homothetic, see their Corollary 1 on p. 535. These restrictions are somewhat similar to those of Lemma 2 for the present stochastic case.

  8. This property rests crucially on the i.i.d. structure of the shock process. While this will simplify the subsequent construction of ME considerably, we expect the underlying principle along with most of the results to carry over to more general classes of economies including correlated production shocks. Clearly, in this case the function space \(\mathscr {G}\) consists of mappings defined on \(\mathbb {X}\) rather than \(\mathbb {W}\).

  9. As \(K_P\) yields the solution \(K_0\) defined by (7) for \(P \equiv 0\), this notation is consistent with Sect. 2.4.

  10. This restriction is necessary because initial old-age consumption \(c_0^o\) cannot, in general, be written as a function of \(w_0\) but requires knowledge of the full initial state \(x_0\). If \(\theta _0\) is fixed, there is a one-to-one correspondence between \(w_0\) and the initial state \(x_0\), and the process \(\{x_t\}_{t \ge 0}\) can be fully recovered from \(\{w_t\}_{t \ge 0}\) as \(k_t = K(w_{t-1})\) and \(\theta _t = w_t/W(k_t, 1)\) for \(t \ge 1\).

  11. An analogous terminology is adopted in the case of dynamic (in-) efficiency of a MEA.

  12. Under the additional restrictions from Lemma 8 (ii) below, the efficiency properties of A become to some extent independent of the initial state \(w_0\).

  13. The same kind of argument is used in Barbie and Kaul (2015), going back to the basic idea in Aiyagari and Peled (1991), for the case of an exchange economy, where, instead of the monotonicity methods applied here, Schauder’s fixed point theorem is used. Since in our framework the capital stock additionally adjusts as an endogenous variable, the analysis becomes more complicated than under pure exchange.

  14. The result that economies with a dividend-paying asset have efficient equilibria is well known and can also be proved by defining state-contingent claims prices and showing that the value of the aggregate endowment is finite (due to the presence of dividends). Efficiency of the equilibrium allocation then follows along the lines of the standard proof of the first welfare theorem.

  15. Their result states the simple, but in our analysis very useful fact that if a sequence of monotonic real-valued functions \(f_n\) defined on the interval \([a,b]\) with \(a<b\) converges pointwise to a continuous function f on \([a,b]\), then f is also monotonic and convergence is uniform.

  16. To see this formally, let \(q_t := \prod _{s=1}^t r_s\) for each \(t \ge 0\). Using the particular form of factor prices (28) and that \(k_{t+1} = \frac{\beta }{1+\beta } w_t\) for each \(t \ge 0\), one verifies by induction that \(q_t = R_{\mathrm{max}}^t w_t/w_0\). Now suppose \(R_{\mathrm{max}} \ge 1\). Then, \(\sum _{t=0}^{\infty } q_t = \infty \) for any shock path \((\theta _t)_{t\ge 0}\) which by Proposition 4(a) in Barbie et al. (2007) implies Pareto efficiency. Conversely, suppose \(R_{\mathrm{max}}<1\). Since shocks and wages are uniformly bounded above and away from zero, one obtains \(\sum _{t=0}^{\infty } q_t<\infty \) for any shock path which implies dynamic inefficiency as shown in Zilcha (1990). Using the same parameterization, the last result is also derived in Rangazas and Russell (2005) who do not, however, consider Pareto (in-) efficiency.

  17. If \(P_0^{**} \in \mathscr {G}'\), this follows trivially by monotonicity of T and the fixed point property \(TP_0^{**} = P_0^{**}\) which can be established as in the proof of Theorem 1. Unfortunately, however, we only know \(P_0^{**} \in \mathscr {G}\).

  18. Observe that any convex combination \(P_{\lambda } = \lambda P + (1-\lambda )\tilde{P}\) lies between P and \(\tilde{P}\). Therefore, by monotonicity of \(K_{\bullet }\), \(W(K_{P_{\lambda }}(\hat{w}), \theta ) \in \overline{\mathbb {W}}_{\hat{w}}\) for all \(\theta \in [\theta _{\mathrm{min}}, \theta _{\mathrm{max}}]\).

  19. To see this, fix \(w \in \mathbb {W}\) and let \(p_n := T_{d_n} P(w)\) and \(k_n := K_{P + d_n}(w)\). By Corollary 1 and monotonicity of \(K_{\bullet }\), these sequences converge monotonically to values \(p^{*} \ge 0\) and \(k^{*}>0\), respectively. As \(H^i(k_n, p_n, w, P, d_n) = 0\) for all n and \(i=1,2\), continuity of \(H^i\) implies \(H^i(k^{*}, p^{*}, w, P, 0) = 0\). Uniqueness of the solution to (15) implies \(p^{*} = T P(w)\) and \(k^{*} = K_{P}(w)\).

  20. As explained in detail in Barbie and Kaul (2015), the definition of a weight given in Chattopadhyay and Gottardi (1999) (and also in Barbie et al. 2007) is slightly different from here (and in Barbie and Kaul 2015). Because Chattopadhyay and Gottardi (1999) used an abstract date-event tree setting without objective probabilities, their definition is without probabilities, but equivalent to the one given here.

  21. These are \(\max _{x} \{A(x) + B(x)\} \le \max _{x} \{A(x)\} + \max _{x} \{B(x)\}\) and \(\max _{x}\{A(x)\}\max _{y \in G(x)}\{B(y) + C(y)\} \le \max _{x}\{A(x)\}\max _{y \in G(x)}\{B(y)\} + \max _{x}\{A(x)\}\max _{y \in G(x)}\{C(y)\}\) for real-valued functions A, B, C and some correspondence G.

References

  • Aiyagari, R., Peled, D.: Dominant root characterization of Pareto optimality and the existence of optimal equilibria in stochastic overlapping generations models. J. Econ. Theory 54, 69–83 (1991)


  • Aliprantis, C.D., Border, K.C.: Infinite Dimensional Analysis. Springer, Berlin (2007)


  • Ball, L., Elmendorf, D., Mankiw, N.: The deficit gamble. J. Money Credit Bank. 30, 699–720 (1998)


  • Barbie, M., Kaul, A.: The Zilcha criteria for dynamic inefficiency reconsidered. Econ. Theory 40, 339–348 (2009). doi:10.1007/s00199-008-0367-3


  • Barbie, M., Kaul, A.: Pareto optimality and existence of monetary equilibria in a stochastic OLG model: a recursive approach. Working paper, University of Cologne, Cologne (2015)

  • Barbie, M., Hagedorn, M., Kaul, A.: On the interaction between risk sharing and capital accumulation in a stochastic OLG model with production. J. Econ. Theory 137, 568–579 (2007)


  • Bose, A., Ray, D.: Monetary equilibrium in an overlapping generations model with productive capital. Econ. Theory 3, 697–716 (1993). doi:10.1007/BF01210266


  • Buchanan, H.E., Hildebrandt, T.H.: Note on the convergence of a sequence of functions of a certain type. Ann. Math. 9(3), 123–126 (1908)


  • Cass, D.: On capital overaccumulation in the aggregative neoclassical model of economic growth: a complete characterization. J. Econ. Theory 4, 200–223 (1972)


  • Chattopadhyay, S., Gottardi, P.: Stochastic OLG models, market structure and optimality. J. Econ. Theory 89, 21–67 (1999)


  • Coleman, W.J.I.: Equilibrium in a production economy with an income tax. Econometrica 59, 1091–1104 (1991)


  • Coleman, W.J.I.: Uniqueness of an equilibrium in infinite-horizon economies subject to taxes and externalities. J. Econ. Theory 95, 71–78 (2000)


  • Demange, G.: On optimality in intergenerational risk sharing. Econ. Theory 20, 1–27 (2002). doi:10.1007/s001990100199


  • Demange, G., Laroque, G.: Social security, optimality, and equilibria in a stochastic overlapping generations economy. J. Public Econ. Theory 2(1), 1–23 (2000)


  • Diamond, P.: National debt in a neoclassical growth model. Am. Econ. Rev. 55(5), 1126–1150 (1965)


  • Farhi, E., Tirole, J.: Bubbly liquidity. Rev. Econ. Stud. 79, 678–706 (2012)


  • Galor, O., Ryder, H.E.: Existence, uniqueness, and stability of equilibrium in an overlapping-generations model with productive capital. J. Econ. Theory 49, 360–375 (1989)


  • Gottardi, P., Kübler, F.: Social security and risk sharing. J. Econ. Theory 146, 1078–1106 (2011)


  • Greenwood, J., Huffman, G.: On the existence of nonoptimal equilibria in dynamic stochastic economies. J. Econ. Theory 65, 611–623 (1995)


  • Hauenschild, N.: Capital accumulation in a stochastic overlapping generations model with social security. J. Econ. Theory 106, 201–216 (2002)


  • Hellwig, C., Lorenzoni, G.: Bubbles and self-enforcing debt. Econometrica 77(4), 1137–1164 (2009)


  • Hillebrand, M.: On the role of labor supply for the optimal size of social security. J. Econ. Dyn. Control 35, 1091–1105 (2011)


  • Hillebrand, M.: Uniqueness of Markov equilibrium in stochastic OLG models with nonclassical production. Econ. Lett. 123(2), 171–176 (2014)


  • Ikeda, D., Phan, T.: Toxic asset bubbles. Econ. Theory 61, 241–271 (2016). doi:10.1007/s00199-015-0928-1


  • Kamihigashi, T., Stachurski, J.: Stochastic stability in monotone economies. Theor. Econ. 9, 383–407 (2014)


  • Konishi, H., Perera-Tallo, F.: Existence of steady-state equilibrium in an overlapping-generations model with production. Econ. Theory 9, 529–537 (1997). doi:10.1007/BF01213853


  • Kübler, F., Polemarchakis, H.: Stationary Markov equilibria for overlapping generations. Econ. Theory 24(3), 623–643 (2004). doi:10.1007/s00199-004-0523-3


  • Li, J., Lin, S.: Existence and uniqueness of steady state equilibrium in a generalized overlapping generations model. Macroecon. Dyn. 16, 299–311 (2012)


  • Magill, M., Quinzii, M.: Indeterminacy of equilibrium in stochastic OLG models. Econ. Theory 21, 435–454 (2003). doi:10.1007/s00199-002-0287-6


  • Manuelli, R.: Existence and optimality of currency equilibrium in stochastic overlapping generations models: the pure endowment case. J. Econ. Theory 51, 268–294 (1990)


  • Martin, A., Ventura, J.: Economic growth with bubbles. Am. Econ. Rev. 102, 3033–3058 (2012)


  • McGovern, J., Morand, O.F., Reffett, K.L.: Computing minimal state space recursive equilibrium in OLG models with stochastic production. Econ. Theory 54, 623–674 (2013). doi:10.1007/s00199-012-0728-9


  • Miao, J., Wang, P., Xu, L.: Stock market bubbles and unemployment. Econ. Theory 61, 273–307 (2016). doi:10.1007/s00199-015-0906-7


  • Michel, P., Wigniolle, B.: Temporary bubbles. J. Econ. Theory 112, 173–183 (2003)


  • Morand, O.F., Reffett, K.L.: Existence and uniqueness of equilibrium in nonoptimal unbounded infinite horizon economies. J. Monet. Econ. 50, 1351–1373 (2003)


  • Morand, O.F., Reffett, K.L.: Stationary Markovian equilibrium in overlapping generations models with stochastic nonclassical production and Markov shocks. J. Math. Econ. 43, 501–522 (2007)


  • Rangazas, P., Russell, S.: The Zilcha criterion for dynamic inefficiency. Econ. Theory 26, 701–716 (2005). doi:10.1007/s00199-004-0547-8


  • Samuelson, P.A.: An exact consumption-loan model of interest with or without the social contrivance of money. J. Polit. Econ. 66(6), 467–482 (1958)


  • Tirole, J.: Asset bubbles and overlapping generations. Econometrica 53(6), 1499–1528 (1985)


  • Wang, Y.: Stationary equilibria in an overlapping generations economy with stochastic production. J. Econ. Theory 61(2), 423–435 (1993)


  • Wang, Y.: Stationary Markov equilibria in an OLG model with correlated production shocks. Int. Econ. Rev. 35(3), 731–744 (1994)


  • Weil, P.: Confidence and the real value of money in an overlapping generations economy. Q. J. Econ. 102, 1–22 (1987)


  • Zilcha, I.: Dynamic efficiency in overlapping generations models with stochastic production. J. Econ. Theory 52(2), 364–379 (1990)



Acknowledgements

We would like to thank Tim Deeken, Tomoo Kikuchi, Herakles Polemarchakis, Clemens Puppe, Kevin Reffett, Caren Söhner, John Stachurski, and Klaus Wälde for helpful suggestions and discussions and two anonymous referees for their valuable and very constructive comments. We also thank seminar participants at various conferences including the 2013 SAET Conference in Paris, the 2013 Annual Meeting of the Verein für Socialpolitik in Düsseldorf, the 2014 CEF Conference in Oslo, and the 2017 SAET conference in Faro.

Author information


Corresponding author

Correspondence to Marten Hillebrand.

Appendices

A Mathematical Appendix

A.1 Proof of Lemma 1

For convenience, define the numerator in (b) as \(D(k) := \mathbb {E}_{\nu }[R(k, \cdot )v'(k R(k, \cdot ))]\) which, as shown in the proof of Lemma 3 below, is strictly decreasing under property (U) from Assumption 2. Conditions (a) and (b) permit us to choose a lower bound \(\underline{k}>0\) such that \(W(k, \theta _{\mathrm{min}})>k\) and \(H(k) := u'(W(k, \theta _{\mathrm{min}}) - k) - D(k) < 0\) for all \(0<k \le \underline{k}\). Now define \(\underline{w} := W(\underline{k},\theta _{\mathrm{min}})\) and choose an arbitrary value \(\hat{w} \in ]0, \underline{w}]\). By monotonicity of W, there exists a unique \(\hat{k} \in ]0, \underline{k}]\) such that \(W(\hat{k}, \theta _{\mathrm{min}}) = \hat{w}\). Then, \(H(\hat{k})<0\) and the properties of D and \(u'\) permit us to choose a unique \(\hat{k}_1\) such that \(\hat{k}<\hat{k}_1<\hat{w}\) and \(u'(\hat{w} - \hat{k}_1) = D(\hat{k}_1)\), which is equivalent to \(\hat{k}_1 = K_0(\hat{w})\) defined by (7). Then, by monotonicity of \(W(\cdot , \theta _{\mathrm{min}})\),

$$\begin{aligned} \hat{w} = W\left( \hat{k}, \theta _{\mathrm{min}}\right) < W\left( \hat{k}_1, \theta _{\mathrm{min}} \right) = W\left( K_0(\hat{w}), \theta _{\mathrm{min}} \right) = W_0^\mathrm{E}\left( \hat{w}, \theta _{\mathrm{min}}\right) . \end{aligned}$$

Since \(\hat{w} \in ]0, \underline{w}]\) was arbitrary, this proves Assumption 5. \(\square \)

A.2 Proof of Lemma 2

By (1), \(\frac{f(k)}{k f'(k)} = \frac{W(k, \theta _{\mathrm{min}})}{k R(k, \theta _{\mathrm{min}})}+1\). Thus, NLS implies \(\liminf _{k \searrow 0} \frac{W(k, \theta _{\mathrm{min}})}{k R(k, \theta _{\mathrm{min}})}>0\). Using this and the boundary behavior of R, we can choose values \(\bar{B}> 1> \bar{b}>0\) and a lower bound \(\underline{k}_1>0\) such that \(R(k, \theta _{\mathrm{min}})> \frac{\bar{B}}{\bar{b}}\) and \(W(k, \theta _{\mathrm{min}})> \bar{b} R(k, \theta _{\mathrm{min}}) k > \bar{B} k\) for all \(0<k \le \underline{k}_1\). Clearly, this implies \(\liminf _{k \searrow 0} \frac{W(k, \theta _{\mathrm{min}})}{k} > \bar{B}\) and condition (a).

To establish (b), suppose first that \(u = \beta ^{-1} v\). Note that \(\liminf _{k \searrow 0} \frac{W(k, \theta _{\mathrm{min}})}{k R(k, \theta _{\mathrm{min}})}>0\) and \(\lim _{k \searrow 0} \frac{k}{k R(k, \theta _{\mathrm{min}})}=0\) imply \(\liminf _{k \searrow 0} \frac{W(k, \theta _{\mathrm{min}}) - k}{k R(k, \theta _{\mathrm{min}})}>0\). Thus, there exists \(0<\underline{k}_{2} \le \underline{k}_1\) and \(\bar{b}_1>0\) such that for all \(0<k \le \underline{k}_{2}\) we have

$$\begin{aligned} W(k, \theta _{\mathrm{min}}) - k \ge \bar{b}_1 k R(k, \theta _{\mathrm{min}}). \end{aligned}$$
(A.1)

Suppose \(\bar{b}_1 \ge 1\). Then, \(u'(W(k, \theta _{\mathrm{min}}) - k) \le u'(k R(k, \theta _{\mathrm{min}}))\) for all \(0<k \le \underline{k}_{2}\) yields

$$\begin{aligned} \liminf _{k \searrow 0} \frac{v'\left( k R\left( k, \theta _{\mathrm{min}}\right) \right) }{u'\left( W\left( k, \theta _{\mathrm{min}}\right) -k\right) } \ge \frac{1}{\beta } >0. \end{aligned}$$

Second, suppose \(\bar{b}_1 <1\). From Assumption 2 (which holds for \(u=\beta ^{-1} v\)), we infer that

$$\begin{aligned} \bar{b}_1 u'\left( W\left( k, \theta _{\mathrm{min}}\right) - k\right) \le u'\left( \bar{b}_1^{-1}\left( W\left( k, \theta _{\mathrm{min}}\right) - k\right) \right) \end{aligned}$$
(A.2)

for all \(0<k \le \underline{k}_{2}\). To see this, define for fixed \(c>0\) the map \(H(a) := a v'(a c)\). Then, \(H(1) = v'(c)\) and \(H'(a) = v'(ac) + ac v''(ac) \ge 0\). Thus, H is non-decreasing and \(H(a)\ge H(1)\) for all \(a>1\). Setting \(c = W(k, \theta _{\mathrm{min}}) - k\) and \(a=\bar{b}_1^{-1}>1\), this proves (A.2) which, combined with (A.1), gives

$$\begin{aligned} \bar{b}_1 u'\left( W\left( k, \theta _{\mathrm{min}}\right) - k\right) \le u'\left( \bar{b}_1^{-1}\left( W\left( k, \theta _{\mathrm{min}}\right) - k\right) \right) \le u'\left( k R\left( k, \theta _{\mathrm{min}}\right) \right) \end{aligned}$$
(A.3)

for all \(0<k \le \underline{k}_{2}\) which implies

$$\begin{aligned} \liminf _{k \searrow 0} \frac{v'\left( k R(k, \theta _{\mathrm{min}})\right) }{u'\left( W(k, \theta _{\mathrm{min}})-k\right) } \ge \frac{\bar{b}_1}{\beta } >0. \end{aligned}$$

Finally, let \(f(0)>0\). Then, NLS implies \(\lim _{k \searrow 0} W(k, \theta _{\mathrm{min}}) = \theta _{\mathrm{min}} f(k) (1 - \frac{k f'(k)}{f(k)}) >0\) and, therefore, \(\lim _{k \searrow 0} u'(W(k, \theta _{\mathrm{min}})-k) >0\) in (12). As the numerator diverges, this implies (b). \(\square \)

A.3 Proof of Lemma 3

Let \(P\in \mathscr {G}\) be given and \(w\in \mathbb {W}\) be arbitrary but fixed. For each \(k \in \mathbb {K}=]0, k_{\mathrm{max}}]\) and \(\theta \in \varTheta \), set \(c(k, \theta ) := P(W(k, \theta ) ) + k R (k, \theta ).\) Define the functions

$$\begin{aligned} V(k) := \dfrac{\mathbb {E}_{\nu } \bigl [ P(W(k,\cdot ) ) v' \bigl ( c(k,\cdot ) \bigr ) \bigr ] }{\mathbb {E}_{\nu } \bigl [ R (k,\cdot ) v' \bigl ( c(k,\cdot ) \bigr ) \bigr ] } =: \frac{N(k)}{D(k) } , \quad k \in \mathbb {K} \end{aligned}$$
(A.4)

and

$$\begin{aligned} S(k) := k + V(k) = \dfrac{\mathbb {E}_{\nu } \bigl [ c(k,\cdot )v'\bigl ( c(k,\cdot ) \bigr ) \bigr ] }{\mathbb {E}_{\nu } \bigl [ R (k,\cdot ) v'\bigl ( c(k,\cdot ) \bigr ) \bigr ] } =: \frac{M(k) }{D(k) }, \quad k \in \mathbb {K}. \end{aligned}$$
(A.5)

Since P is continuous, so are the mappings V, N, D, M, and S. The first part of the proof establishes certain monotonicity properties and the boundary behavior of the previously defined functions. First, we will show that D is strictly decreasing and S is strictly increasing. Fixing an arbitrary interior point \(k \in \mathbb {K}\), it suffices to show \(D(k+\Delta )< D(k)\) and \(S(k+\Delta )> S(k)\) for any \(0<\Delta \le k_{\mathrm{max}} - k\). Since P and W are weakly increasing and \(v'\) is strictly decreasing,

$$\begin{aligned} D(k+ \Delta ) \le \tilde{D}(\Delta ) := \mathbb {E}_{\nu } \bigl [ R (k+\Delta ,\cdot ) v'\bigl ( \tilde{c}(\Delta , \cdot ) \bigr ) \bigr ] \end{aligned}$$
(A.6)

where \(\tilde{c}(\Delta , \theta ) := P(W(k, \theta ) ) + (k+\Delta ) R (k+\Delta , \theta )\). Likewise, property (U) from Assumption 2 implies that the map \(a \mapsto (a+b) v'( a + b)\), \(a>0\) is weakly increasing for any \(b \ge 0\). Therefore,

$$\begin{aligned} M(k+\Delta ) \ge \tilde{M}(\Delta ) := \mathbb {E}_{\nu } \bigl [ \tilde{c}(\Delta , \cdot ) v'\bigl ( \tilde{c}(\Delta , \cdot ) \bigr ) \bigr ]. \end{aligned}$$
(A.7)

This and (A.6) combined show that

$$\begin{aligned} S(k+\Delta ) \ge \tilde{S}(\Delta ) := \frac{\tilde{M}(\Delta )}{\tilde{D}(\Delta )}. \end{aligned}$$
(A.8)

Since \(\tilde{D}(0) = D(k)\) and \(\tilde{S}(0) = S(k)\), it suffices to establish monotonicity of \(\tilde{D}\) and \(\tilde{S}\). The major advantage is that, unlike M, D, and S, the maps \(\tilde{M}\), \(\tilde{D}\), and \(\tilde{S}\) are all differentiable. Dropping arguments when convenient, the derivative of \(\tilde{D}\) computes as

$$\begin{aligned} \nonumber \tilde{D}'(\Delta )= & {} -\frac{E_{f'}(k+\Delta )}{k+\Delta }\mathbb {E}_{\nu } \Bigl [ R (k+\Delta ,\cdot ) \bigl (v'(\cdot ) - (k+\Delta ) R(k+\Delta , \cdot ) |v''(\cdot )| \bigr ) \Bigr ] \\&-\, \mathbb {E}_{\nu } \bigl [ R (k+\Delta ,\cdot )^2 |v''(\cdot )| \bigr ] \end{aligned}$$
(A.9)

where we have used that \(\partial _{\Delta }\tilde{c}(\Delta , \theta ) = R (k+\Delta , \theta )(1 - E_{f'}(k+\Delta ))\). Property (U) from Assumption (2) implies that the expectation in the first term is nonnegative and, therefore, \(\tilde{D}'(\Delta )<0\) and monotonicity of D. By (A.8), the sign of \(\tilde{S}'\) is determined by \(H(\Delta ) := (k+\Delta )\bigl (\tilde{M}'(\Delta ) \tilde{D}(\Delta ) - \tilde{M}(\Delta ) \tilde{D}'(\Delta )\bigr )\). Noting that \(\tilde{M}(\Delta ) \ge (k+\Delta ) \tilde{D}(\Delta )\) and using \(\tilde{D}'(\Delta )<0\), it suffices to show that \(\tilde{M}'(\Delta ) > (k+\Delta ) \tilde{D}'(\Delta )\). The derivative of \(\tilde{M}\) computes as

$$\begin{aligned} \tilde{M}'(\Delta ) = \left( 1-E_{f'}(k+\Delta )\right) \mathbb {E}_{\nu } \Bigl [ R (k+\Delta ,\cdot ) \bigl (v'(\cdot ) - \tilde{c}(\Delta , \cdot ) |v''(\cdot )| \bigr ) \Bigr ] \end{aligned}$$
(A.10)

For brevity, define

$$\begin{aligned} A_1(\Delta ):= & {} \mathbb {E}_{\nu } \Bigl [ R (k+\Delta ,\cdot ) \bigl (v'(\cdot ) - \tilde{c}(\Delta , \cdot ) |v''(\cdot )| \bigr ) \Bigr ] \\ A_2(\Delta ):= & {} \mathbb {E}_{\nu } \Bigl [ R (k+\Delta ,\cdot ) \bigl (v'(\cdot ) - (k+\Delta ) R(k+\Delta , \cdot ) |v''(\cdot )| \bigr ) \Bigr ]. \end{aligned}$$

Note that property (U) from Assumption 2 implies \(0 \le A_1(\Delta ) \le A_2(\Delta )\). Further, note from (A.9) that \( -(k+\Delta )\tilde{D}'(\Delta ) > E_{f'}(k+\Delta ) A_2(\Delta )\). Using this property and (A.10) gives the desired result

$$\begin{aligned} \tilde{M}'(\Delta ) - (k+\Delta ) \tilde{D}'(\Delta )> A_1(\Delta ) + E_{f'}(k+\Delta )\left( A_2(\Delta ) - A_1(\Delta ) \right) > 0. \end{aligned}$$

Since \(\varTheta \) is compact and \(k \le k_{\mathrm{max}}\), consumption \(c(k, \theta )\) is uniformly bounded from above (e.g., by \(c_{\mathrm{max}}:= P(w_{\mathrm{max}}) + \theta _{\mathrm{max}} f(k_{\mathrm{max}})\)) and so is M defined in (A.5) (e.g., by \(c_{\mathrm{max}} v'(c_{\mathrm{max}})\)). The boundary conditions from Assumptions 1 and 2 then imply

$$\begin{aligned} \lim _{k \searrow 0} D(k) = \infty \end{aligned}$$
(A.11)
and
$$\begin{aligned} 0 \le \lim _{k \searrow 0} V(k) \le \lim _{k \searrow 0} S(k) = \lim _{k \searrow 0}\frac{M(k)}{D(k)} = 0. \end{aligned}$$
(A.12)

Having established the properties necessary for the proof, define

$$\begin{aligned} G(k;w) := u'\left( w - S(k)\right) - D(k). \end{aligned}$$
(A.13)

Then, the desired solution \(\tilde{k}\) solves \(G(\tilde{k}; {w})=0\). Observe that \(G(\cdot ; w)\) is a strictly increasing function, which follows from the monotonicity of S, D, and \(u'\). Thus, any zero is necessarily unique. Also observe the boundary behavior \(\lim _{k \searrow 0}G(k; w) = -\infty \) due to (A.11). By continuity, it suffices to find a \(k < w\) such that \(G(k; w) \ge 0\). Suppose \(P \equiv 0\). Then, the solution is \(\tilde{k} = k_0 := K_0(w)\) defined by (7) and \(\tilde{p} =0\). If \(P \ne 0\), consider the following two cases. First, suppose \(S(k_0) \ge w\). Then, by (A.12) and monotonicity and continuity of S, there exists a unique value \(0<\hat{k} \le k_0\) such that \(S(\hat{k})=w\) which implies \(\lim _{k \nearrow \hat{k}}G(k; w) = \infty \). Second, suppose \(S(k_0) < w\). Then, \(\lim _{k \nearrow k_0}G(k; w) = u'(w - S(k_0)) - D(k_0) \ge G_0(k_0; w) =0\) with \(G_0\) defined by (7). Thus, in either case, there exists a solution \(0<\tilde{k} \le k_0 < w\). Setting \(\tilde{p}= V(\tilde{k})\) completes the proof. \(\square \)
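The construction in this proof is fully explicit and can be carried out numerically. The following is a minimal sketch, assuming (purely for illustration; none of these choices are part of the paper's general setting) Cobb-Douglas production \(f(k)=k^{\alpha }\) with multiplicative shocks, \(u(c)=\ln c\), \(v(c)=\beta \ln c\), a three-point shock distribution, and a hypothetical candidate \(P(w)=0.1\,w\): it evaluates V and D from (A.4), forms G from (A.13), and locates its unique zero with a bracketing root finder.

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative parameterization (an assumption of this sketch, not the paper's
# general setting): Cobb-Douglas technology with multiplicative shocks,
# u(c) = ln c, v(c) = beta * ln c, and a three-point shock distribution nu.
alpha, beta = 0.3, 1.0
thetas = np.array([0.9, 1.0, 1.1])
probs = np.array([0.3, 0.4, 0.3])

W = lambda k, th: th * (1 - alpha) * k**alpha      # wage W(k, theta)
R = lambda k, th: th * alpha * k**(alpha - 1)      # gross return R(k, theta)
up = lambda c: 1.0 / c                             # u'(c)
vp = lambda c: beta / c                            # v'(c)
P = lambda w: 0.1 * w                              # hypothetical candidate P in G

def V_D(k):
    """V(k) = N(k)/D(k) and D(k) from (A.4), given the candidate P."""
    c = P(W(k, thetas)) + k * R(k, thetas)         # old-age consumption c(k, theta)
    D = np.sum(probs * R(k, thetas) * vp(c))
    N = np.sum(probs * P(W(k, thetas)) * vp(c))
    return N / D, D

def solve(w):
    """Return the unique pair (k, p) with G(k; w) = 0 from (A.13) and p = V(k)."""
    def G(k):
        V, D = V_D(k)
        s = w - k - V                              # young-age consumption w - S(k)
        return (up(s) - D) if s > 0 else 1e12 - D  # G is strictly increasing in k
    k = brentq(G, 1e-12, w * (1 - 1e-12))
    return k, V_D(k)[0]

k_tilde, p_tilde = solve(0.4)
print(k_tilde, p_tilde)
```

Because \(G(\cdot ; w)\) is strictly increasing with the boundary behavior established above, any bracketing method recovers the unique solution \((\tilde{k}, \tilde{p})\).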

A.4 Proof of Lemma 4

Let \(P \in \mathscr {G}\) be arbitrary. As shown in the previous proof, \(T P = V \circ K_{P}\) where V is defined in (A.4) and, for \(w \in \mathbb {W}\), \(k = K_{P}(w)\) is the unique solution to \(G(k; w) = 0\) defined in (A.13). Clearly, \(K_P\) is continuous. Note from (A.4) that \(T P \ge 0\), that \(P >0\) implies \(T P >0\), and that \(P =0\) implies \(T P =0\). As G in (A.13) is increasing in P and V, \(K_P \le K_0\) for all P, with strict inequality if \(P>0\). By definition of \(K_{P}\) and (A.13), \(w> S(K_{P}(w)) > V(K_{P}(w)) = T P (w)\) for \(w \in \mathbb {W}\) which proves \(TP < \mathrm{id}_{\mathbb {W}}\).

To show that \(w \mapsto w - T P (w)\) is (even strictly) increasing, let \(w \in \mathbb {W}\) be an arbitrary interior point and choose \(\Delta >0\) such that \(w + \Delta \in \mathbb {W}\). We show that \(T P (w + \Delta ) < T P (w) + \Delta \). By contradiction, suppose \(T P (w + \Delta ) \ge T P (w) + \Delta \). Note that G defined in (A.13) is strictly decreasing in w and strictly increasing in k by strict monotonicity of D and S. These properties imply that \(K_{P}\) is strictly increasing which gives \(K_{P}(w + \Delta ) > K_{P}(w)\). Further, as shown in the previous proof, the function D defined in (A.5) is strictly decreasing which gives \(D(K_{P}(w+ \Delta ))<D(K_{P}(w))\). But by (A.13) and our hypothesis

$$\begin{aligned} D\left( K_{P}(w + \Delta )\right)= & {} u'\left( w + \Delta - T P (w + \Delta ) - K_{P}(w+\Delta ) \right) \\\ge & {} u'\left( w - T P (w) - K_{P}(w+\Delta )\right) \\> & {} u'\left( w - T P (w) - K_{P}(w)\right) \\= & {} D(K_{P}(w)) \end{aligned}$$

which is a contradiction and proves that \(w \mapsto w - T P (w)\) is increasing.

Next, we show that TP is increasing. As \(T P = V \circ K_{P}\) and we have already shown that \(K_{P}\) is strictly increasing, it remains to show that V defined in (A.4) is increasing as well. To avoid trivialities, assume in the remainder that \(P>0\). Adjusting the arguments to the case where \(P \ge 0\) is straightforward. Let \(k \in \mathbb {K}\) be an arbitrary but fixed interior point. We show that \(V(k + \Delta ) \ge V(k )\) for any \(0<\Delta < k_{\mathrm{max}} - k\). By property (U) from Assumption 2, the map \(a \mapsto a v'( a + b)\), \(a>0\) is increasing for any \(b \ge 0\). Thus, by monotonicity of \(P \circ W\) the numerator in (A.4) satisfies

$$\begin{aligned} N(k+\Delta ) \ge \tilde{N}(\Delta ) := \mathbb {E}_{\nu } \bigl [ P(W(k,\cdot ) ) v' \bigl ( P(W(k,\cdot )) + (k+\Delta ) R(k+\Delta , \cdot ) \bigr ) \bigr ]. \end{aligned}$$
(A.14)

Furthermore, by Eq. (A.6), the denominator in (A.4) satisfies \(D(k+\Delta ) \le \tilde{D}(\Delta )\). Thus, defining \(\tilde{V}(\Delta ):= \frac{\tilde{N}(\Delta )}{\tilde{D}(\Delta )}\), we have \(V(k+\Delta ) \ge \tilde{V}(\Delta )\) and \(\tilde{V}(0) = V(k)\). It therefore suffices to show that \(\tilde{V}\) is increasing. Observe that unlike V, \(\tilde{V}\) is \(C^1\) and, by direct computation, the derivative satisfies \(\tilde{V}'(\Delta ) \ge 0\) if and only if

$$\begin{aligned} A(\Delta ) := (k+\Delta )^2 \tilde{N}'(\Delta ) \tilde{D}(\Delta ) - (k+\Delta )^2 \tilde{D}'(\Delta ) \tilde{N}(\Delta ) \ge 0. \end{aligned}$$
(A.15)

To establish that \(A(\Delta ) \ge 0\), let \(0<\Delta \le k_{\mathrm{max}} -k\) be arbitrary but fixed and, for the sake of brevity, define the nonnegative random variables

$$\begin{aligned} X:= & {} (k+\Delta )R(k+\Delta , \cdot ) |v''(P(k,\cdot ) + (k+\Delta )R(k+\Delta ,\cdot ))|^{\frac{1}{2}} \end{aligned}$$
(A.16a)
$$\begin{aligned} Y:= & {} P(k,\cdot ) |v''(P(k,\cdot ) + (k+\Delta )R(k+\Delta , \cdot ))|^{\frac{1}{2}} \end{aligned}$$
(A.16b)

both defined on the probability space \((\varTheta , \mathscr {B}(\varTheta ), \nu )\). Then, by direct computation again, the derivative of \(\tilde{N}\) defined in (A.14) can be expressed as

$$\begin{aligned} (k+\Delta ) \tilde{N}'(\Delta ) = -(1- E_{f'}(k+\Delta )) \mathbb {E}_{\nu } \bigl [ X Y \bigr ] \end{aligned}$$
(A.17)

while the derivative of \(\tilde{D}\) computed in (A.9) satisfies

$$\begin{aligned} -(k+\Delta )^2 \tilde{D}'(\Delta ) = E_{f'}(k+\Delta ) (k+\Delta ) \tilde{D}(\Delta ) + (1- E_{f'}(k+\Delta ))\mathbb {E}_{\nu }[X^2].\nonumber \\ \end{aligned}$$
(A.18)

Suppose \(E_{f'}(k+\Delta ) \ge 1\). Then, by (A.17) \(\tilde{N}'(\Delta ) \ge 0\) while, as shown in (A.9), \(\tilde{D}'(\Delta ) < 0\). Thus, all terms in (A.15) are positive, implying \(A(\Delta )>0\) in this case. Therefore, the remainder assumes \(E_{f'}(k+\Delta ) < 1\). Note from (4) that \(\tilde{N}(\Delta ) \ge \frac{\mathbb {E}_{\nu }[Y^2 + XY]}{E_{v'}^{\mathrm{max}}} \ge \frac{\mathbb {E}_{\nu }[XY]}{E_{v'}^{\mathrm{max}}}\). Using this together with (A.17) and (A.18) in (A.15) gives

$$\begin{aligned} A(\Delta )\ge & {} - (k+\Delta ) \tilde{D}(\Delta )\mathbb {E}_{\nu } \bigl [ X Y \bigr ] \left( 1- E_{f'}(k+\Delta )\frac{1+E_{v'}^{\mathrm{max}}}{E_{v'}^{\mathrm{max}}}\right) \nonumber \\&+\, B(\Delta ) \end{aligned}$$
(A.19)

where

$$\begin{aligned} B(\Delta ) := \frac{\mathbb {E}_{\nu }[Y^2 + XY]\mathbb {E}_{\nu }[X^2]}{E_{v'}^{\mathrm{max}}}\left( 1- E_{f'}(k+\Delta )\right) . \end{aligned}$$
(A.20)

By Hölder’s inequality [see Aliprantis and Border (2007, p. 463) for the special case \(p=q =2\) implying \(\frac{1}{p} + \frac{1}{q} =1\)], we have \(\mathbb {E}_{\nu } \bigl [{X}^2 \bigr ] \mathbb {E}_{\nu } \bigl [{Y}^2 \bigr ] \ge (\mathbb {E}_{\nu } \bigl [{X}Y \bigr ])^2\) which implies \(\mathbb {E}_{\nu }[Y^2 + XY]\mathbb {E}_{\nu }[X^2] = \mathbb {E}_{\nu }[Y^2]\mathbb {E}_{\nu }[X^2] + \mathbb {E}_{\nu }[ XY]\mathbb {E}_{\nu }[X^2] \ge \mathbb {E}_{\nu }[XY]\mathbb {E}_{\nu }[X (Y+X)]\). Further, using (A.16) and the bounds defined in (4) gives \(\mathbb {E}_{\nu }[X (X+Y)]\ge E_{v'}^{\mathrm{min}} (k+\Delta ) \tilde{D}(\Delta )\). Using both results in (A.20) gives

$$\begin{aligned} B(\Delta ) \ge \frac{(k+\Delta ) \tilde{D}(\Delta ) \mathbb {E}_{\nu }[XY]E_{v'}^{\mathrm{min}}}{E_{v'}^{\mathrm{max}}}(1- E_{f'}(k+\Delta )). \end{aligned}$$
(A.21)

Finally, using (A.21) in (A.19) gives the desired result

$$\begin{aligned} A(\Delta )\ge & {} - \frac{(k+\Delta ) \tilde{D}(\Delta )\mathbb {E}_{\nu } \bigl [ X Y \bigr ]}{E_{v'}^{\mathrm{max}}} \left( E_{v'}^{\mathrm{max}}- E_{v'}^{\mathrm{min}} - E_{f'}(k+\Delta ) \left( 1 + E_{v'}^{\mathrm{max}}- E_{v'}^{\mathrm{min}}\right) \right) \\\ge & {} 0 \end{aligned}$$

where the last inequality follows from condition (5) in Assumption 3. This proves that \(\tilde{V}\) is weakly increasing which implies the desired result \(V(k + \Delta ) \ge \tilde{V}(\Delta ) \ge \tilde{V}(0) = V(k)\). Finally, adopting an argument used and proved in Morand and Reffett (2003, p. 1360), monotonicity of TP and \(w \mapsto w - T P(w)\), \(w \in \mathbb {W}\) imply continuity of TP. \(\square \)

A.5 Proof of Lemma 5

Let \(P \in \mathscr {G}'\) be arbitrary. We need to show that TP is \(C^1\). Since P is \(C^1\), so are the mappings V, S, and D defined in (A.4) and (A.5) and G defined in (A.13). Recall that for each \(w \in \mathbb {W}\), \(K_{P}\) determines the unique zero of \(G(\cdot ; w)\). Since \(\partial _k G (k;w) >0\), \(K_{P}\) is \(C^1\) by the implicit function theorem. Thus, \(T P = V \circ K_{P}\) is \(C^1\) as well.\(\square \)

A.6 Proof of Lemma 6

We only prove the strict inequalities, as the proof of the weak inequalities is analogous. Given \(P_1, P_0 \in \mathscr {G}'\), suppose \(P_1 > P_0\). For \(\lambda \in [0,1]\), define \(P_{\lambda }:= \lambda P_1 + (1-\lambda ) P_0\). Since \(\mathscr {G}'\) is convex, \(P_{\lambda } \in \mathscr {G}'\) and, using the monotonicity properties in (13), the derivative satisfies \(0 \le P_{\lambda }' \le 1\) for all \(\lambda \). Moreover, the map \(\lambda \mapsto P_{\lambda } = P_0 + \lambda \Delta \) where \(\Delta := P_1 -P_0 >0\) is strictly increasing (with respect to the pointwise ordering on \(\mathscr {G}\)).

Let \(w \in \mathbb {W}\) be arbitrary but fixed. By Lemma 3 (and a slight abuse of notation), for each \(\lambda \in [0,1]\) there exists a unique pair \((k_{\lambda }, p_{\lambda })\) which solves \(H_1(k_{\lambda }, p_{\lambda }; w, \lambda ) = H_2(k_{\lambda }, p_{\lambda }; w, \lambda ) =0\). We will now show that \(\lambda \mapsto k_{\lambda }\), \(\lambda \in [0,1]\) is strictly decreasing and \(\lambda \mapsto p_{\lambda }\), \(\lambda \in [0,1]\) is strictly increasing. This implies \(p_1 > p_0\) and \(k_1 < k_0\) and the claim.

Employing the same definitions and notation as in the proof of Lemma 3, write \(c_{\lambda }(k, \theta ) := P_{\lambda }(W(k, \theta )) + k R(k, \theta )\). Then, the pair \((k_{\lambda }, p_{\lambda })\) satisfies \(p_{\lambda } = \tilde{P}(k_{\lambda }, \lambda )\) where

$$\begin{aligned} \tilde{P}(k, \lambda ) := \dfrac{\mathbb {E}_{\nu } \bigl [ P_{\lambda }(W(k, \cdot ) ) v' \bigl ( c_{\lambda }(k, \cdot ) \bigr ) \bigr ] }{\mathbb {E}_{\nu } \bigl [ R (k, \cdot ) v' \bigl ( c_{\lambda }(k, \cdot ) \bigr ) \bigr ] } =: \frac{{N}(k, \lambda )}{ {D}(k, \lambda )}, \quad k \in \mathbb {K}, \lambda \in [0,1].\qquad \quad \end{aligned}$$
(A.22)

To compute the partial derivatives of D and N, note that \(\partial _k W(k, \theta ) = E_{f'}(k) R(k, \theta ) >0\) by (1a, 1b) which implies

$$\begin{aligned} \partial _k c_{\lambda }(k,\theta ) = R(k, \theta ) \bigl ( E_{f'}(k) P_{\lambda }'(\cdot ) + 1 - E_{f'}(k)\bigr ) \ge - R(k, \theta )E_{f'}(k). \end{aligned}$$
(A.23)

Taking the derivative of (A.22) one obtains, exploiting property (U) and suppressing arguments when convenient

$$\begin{aligned} \partial _k {N}(k, \lambda )= & {} \mathbb {E}_{\nu } \Bigl [ P_{\lambda }'(\cdot ) E_{f'}(k) R(k,\cdot ) v'(\cdot ) - P_{\lambda }(\cdot )|v''(\cdot )|\partial _k c_{\lambda }(k,\cdot ) \Bigr ]\end{aligned}$$
(A.24)
$$\begin{aligned} \partial _{\lambda }{N}(k, \lambda )= & {} \mathbb {E}_{\nu } \bigl [\Delta (k,\cdot )\bigl ( v'(\cdot ) -P_{\lambda }(W(k,\cdot ) ) |v''(\cdot )| \bigr ) \bigr ] >0 \end{aligned}$$
(A.25)
$$\begin{aligned} \partial _k {D}(k, \lambda )= & {} -\frac{1}{k}\mathbb {E}_{\nu } \Bigl [E_{f'}(k) R(k,\cdot ) v'(\cdot ) + k R(k,\cdot ) |v''(\cdot )|\partial _k c_{\lambda }(k,\cdot ) \Bigr ] \end{aligned}$$
(A.26)
$$\begin{aligned} \partial _{\lambda }{D}(k, \lambda )= & {} -\mathbb {E}_{\nu } \bigl [\Delta (k,\cdot ) R (k,\cdot ) |v'' (\cdot )| \bigr ] <0 \end{aligned}$$
(A.27)

where \(\Delta (k, \theta ):= P_1(W(k, \theta )) - P_0(W(k, \theta ))>0\) for all \(k \in \mathbb {K}\) and \(\theta \in \varTheta \). Using (A.23) and property (U) from Assumption 2 in (A.26), we infer that

$$\begin{aligned} \partial _k {D}(k, \lambda ) < -\frac{E_{f'}(k)}{k}\mathbb {E}_{\nu } \Bigl [R(k,\cdot ) \Bigl (v'(\cdot ) - k R(k,\cdot ) |v''(\cdot )|\Bigr ) \Bigr ] \le 0. \end{aligned}$$
(A.28)

We show that \(\frac{\hbox {d} k_{\lambda }}{\hbox {d} \lambda }<0\). As \(k_{\lambda }\) is the unique solution to \(G(k,\lambda ) := u'(w-k-\tilde{P}(k, \lambda )) - {D}(k, \lambda )=0\), the implicit function theorem yields the derivative

$$\begin{aligned} \frac{\hbox {d} k_{\lambda }}{\hbox {d} \lambda } = - \frac{\partial _{\lambda }G (k, \lambda ) }{\partial _k G(k, \lambda ) }\Bigl |_{k= k_{\lambda }} = - \frac{|u''(\cdot )| \partial _{\lambda } \tilde{P}(k_{\lambda },\lambda ) - \partial _{\lambda }{D}(k_{\lambda }, \lambda ) }{|u''(\cdot )|\left( 1+ \partial _k \tilde{P}(k_{\lambda },\lambda )\right) - \partial _{k}{D}(k_{\lambda }, \lambda ) }.\quad \end{aligned}$$
(A.29)

As shown in the proof of Lemma 3, the map \(S(k,\lambda ) := k+\tilde{P}(k,\lambda )\) is strictly increasing in k and, therefore, satisfies \(\partial _{k}{S}(k,\lambda ) = 1+\partial _{k} \tilde{P}(k,\lambda ) \ge 0\). Further, combining (A.22) with (A.25) and (A.27) shows that \(\partial _{\lambda } \tilde{P}(k,\lambda )>0\). This together with (A.27) and (A.28) shows that all terms determining the fraction in (A.29) are positive which gives \(\frac{d k_{\lambda }}{d \lambda }<0\).

Second, we show that \( \frac{\hbox {d} p_{\lambda }}{\hbox {d} \lambda }>0\). As \(p_{\lambda } = \tilde{P}(k_{\lambda }, \lambda )\) one obtains the derivative

$$\begin{aligned} \frac{\hbox {d} p_{\lambda }}{\hbox {d} \lambda } = \partial _k \tilde{P}(k_{\lambda }, \lambda ) \frac{\hbox {d} k_{\lambda }}{\hbox {d} \lambda } + \partial _{\lambda } \tilde{P}(k_{\lambda }, \lambda ). \end{aligned}$$
(A.30)

Using (A.29), the derivative (A.30) can equivalently be written as

$$\begin{aligned} \frac{\hbox {d} p_{\lambda }}{\hbox {d} \lambda } = \frac{|u''(\cdot )|\partial _{\lambda } \tilde{P}(k_{\lambda },\lambda ) + Z(k_{\lambda },\lambda )}{|u''(\cdot )|\left( 1+\partial _{k} \tilde{P}(k_{\lambda },\lambda )\right) - \partial _{k}{D}(k_{\lambda }, \lambda ) } \end{aligned}$$
(A.31)

where \(Z(k,\lambda ):= \partial _{\lambda } D(k,\lambda ) \partial _{k} \tilde{P}(k,\lambda ) - \partial _{k} D(k,\lambda ) \partial _{\lambda } \tilde{P}(k,\lambda )\). By (A.26) and our previous result, both the denominator and the first term in the numerator in (A.31) are strictly positive. Hence, it suffices to show that \(Z(k_{\lambda },\lambda ) \ge 0\). Using the explicit form of the derivatives \(\partial _{k} \tilde{P}\) and \(\partial _{\lambda } \tilde{P}\) computed from (A.22), this last expression can be written as

$$\begin{aligned} Z(k,\lambda ) = \frac{\partial _{\lambda }{D}(k, \lambda ) \partial _{k}{N}(k, \lambda ) - \partial _{k}{D}(k, \lambda ) \partial _{\lambda }{N}(k, \lambda )}{{D}(k, \lambda )}. \end{aligned}$$

Using property (U), (A.25), and (A.27) gives \(\partial _{\lambda }{N}(k, \lambda ) \ge - k \partial _{\lambda }{D}(k, \lambda )\). Thus, it suffices to show \(\partial _k N(k, \lambda ) + k \partial _k D(k, \lambda ) \le 0\). By (A.24) and (A.26), recalling that \(0 \le P_{\lambda }' \le 1\),

$$\begin{aligned}&-\partial _k N(k, \lambda ) - k \partial _k D(k, \lambda )\\&\quad = \mathbb {E}_{\nu } \bigl [\bigl (1-P_{\lambda }'(\cdot )\bigr ) E_{f'}(k) R(k,\cdot ) v'(\cdot ) +\, c_{\lambda }(k, \cdot ) |v''(\cdot )| \partial _k c_{\lambda }(k, \cdot ) \bigr ] \\&\quad > E_{f'}(k) \mathbb {E}_{\nu } \bigl [\bigl (1-P_{\lambda }'(\cdot )\bigr ) R(k,\cdot ) \bigl (v'(\cdot ) -\, c_{\lambda }(k, \cdot )|v''(\cdot )|\bigr ) \bigr ] \ge 0 \end{aligned}$$

where the last inequality exploits (A.23). This proves \(Z(k_{\lambda },\lambda ) >0\) and the claim. \(\square \)

A.7 Proof of Corollary 1

  1. (i)

    \(T_{d} P_1 = T(P_1 + d) \ge T(P_0 + d) = T_{d} P_0\).

  2. (ii)

    \(T_{d_1} P = T(P + d_1) \ge T(P + d_0) = T_{d_0} P\). \(\square \)

A.8 Proof of Theorem 1

(i) We show the fixed point property for \(d=0\). The proof for \(d>0\) is analogous. For convenience, we drop the subscript \(d=0\) and denote the sequence \((T^n P_0)_{n \ge 0}\) simply as \((P_n)_{n \ge 0}\) and its pointwise limit by \(P^*\). Also, for the sake of brevity we abuse our notation by writing \(P(k, \theta )\) instead of \(P(W(k, \theta ))\).

Let \(w\in \mathbb {W}\) be arbitrary but fixed. As \((P_n)_{n}\) is a decreasing sequence of functions in \(\mathscr {G}'\), monotonicity of \(K_{\bullet }\) due to Lemma 6 implies that the sequence \(k_n := K_{P_{n}}(w)\), \(n \ge 0\) is strictly increasing and converges to some limit \(0<k^{*} \le K_0(w) \le k_{\mathrm{max}}\). The claim will follow if we show that \(k^{*}\) and \(p^{*}:= P^{*}(w)\) satisfy (15), i.e., \(H_1(k^{*}, p^{*}; w, P^{*}, 0) = H_2(k^{*}, p^{*}; w, P^{*}, 0) =0\). Uniqueness of the solution to (15) then implies \(k^{*} = K_{P^{*}}(w)\).

Let \(\theta \in [\theta _{\mathrm{min}}, \theta _{\mathrm{max}}]\) be arbitrary but fixed. We show that \(\lim _{n \rightarrow \infty }P_{n}(k_n, \theta ) = P^{*}(k^{*}, \theta )\). As \((P_n)_{n \ge 0}\) is a sequence of increasing functions which converges pointwise to the continuous function \(P^{*}\), convergence is uniform on \(\overline{\mathbb {W}}:=[W(k_0,\theta _{\mathrm{min}}), w_{\mathrm{max}}] \subset \mathbb {W}\) by Theorem A in Buchanan & Hildebrandt (1908). Note that \(W(k_n, \theta ) \in \overline{\mathbb {W}}\) for \(n \ge 0\). Thus, for each \(\delta >0\), there is \(n_0 \ge 0\) such that \(\vert P_{n}(k_n, \theta ) - P^{*}(k_n,\theta )\vert < \delta /2\) for all \(n \ge n_0\). Further, by continuity of W and \(P^{*}\) there is \(n_0'>0\) such that \(n \ge n_0'\) implies \(\vert P^{*}(k_n, \theta ) - P^{*}(k^{*}, \theta )\vert < \delta /2\). Combining both insights, we have for all \(n \ge \max \{n_0, n_0' \}\):

$$\begin{aligned} \vert P_{n}(k_n, \theta ) - P^{*}(k^{*}, \theta )\vert \le \vert P_{n}(k_n, \theta ) - P^{*}(k_n,\theta ) \vert + \vert P^{*}(k_n, \theta ) - P^{*}(k^{*}, \theta )\vert < \delta . \end{aligned}$$

For \(\theta \in [\theta _{\mathrm{min}}, \theta _{\mathrm{max}}]\), define the functions

$$\begin{aligned} \phi ^1_n(\theta ):= & {} R(k_n,\theta )v'\left( P_{n}(k_n,\theta )+k_n R(k_n,\theta )\right) \\ \phi ^2_n(\theta ):= & {} P_{n}(k_n,\theta )v'\left( P_{n}(k_n,\theta )+k_n R(k_n,\theta )\right) . \end{aligned}$$

The previous result and continuity of \(v'\) and R imply for each \(\theta \in [\theta _{\mathrm{min}}, \theta _{\mathrm{max}}]\)

$$\begin{aligned} \lim _{n \rightarrow \infty } \phi ^1_n(\theta )= & {} \phi ^1_{*}(\theta ) := R(k^{*},\theta )v'\left( P^{*}(k^{*},\theta )+k^{*} R(k^{*},\theta )\right) \\ \lim _{n \rightarrow \infty } \phi ^2_n(\theta )= & {} \phi ^2_{*}(\theta ) := P^{*}(k^{*},\theta ) v'\left( P^{*}(k^{*},\theta )+k^{*} R(k^{*},\theta )\right) . \end{aligned}$$

As \(\phi ^1_n(\theta ) < R(k_1,\theta _{\mathrm{max}})v'(k_1 R(k_{\mathrm{max}},\theta _{\mathrm{min}}))\) and \(\phi ^2_n(\theta ) < w_{\mathrm{max}} v'(k_1 R(k_{\mathrm{max}},\theta _{\mathrm{min}}))\) for all n, the Lebesgue-dominated convergence theorem implies \(\lim _{n \rightarrow \infty } \mathbb {E}_{\nu }[\phi ^i_n(\cdot )] = \mathbb {E}_{\nu }[\phi ^i_{*}(\cdot )]\), \(i=1,2\). This, \(\lim _{n \rightarrow \infty } P_n(w) =p^{*}\) and \(\lim _{n \rightarrow \infty } u'(w - P_n(w) - k_n) = u'(w - p^{*} - k^{*})\) imply that (15) is satisfied. Since w was arbitrary, \(P^{*}\) is a fixed point of T.

That \(d>0\) implies \(P_d^{*}>0\) follows directly from the Euler equations (14a, 14b) resp. (15).

To prove the stated properties of \(P_0^{*}\), we show that \(P_0^{*}(w)=0\) for some \(w\in \mathbb {W}\) implies \(P_0^{*}(w)=0\) for all \(w\in \mathbb {W}\). Let \(w_0 \in \mathbb {W}\) be arbitrary and suppose \(P_0^{*}(w_0)=0\). If \(w_0= w_{\mathrm{max}}\), the claim follows from monotonicity of \(P_0^{*}\), so suppose \(w_0< w_{\mathrm{max}}\). By (14b) and (15), \(P_0^{*}(w_0)=0\) implies \(P_0^{*}(W(K_{P_0^{*}}(w_0), \theta ))=0\) \(\nu \)–a.s. As \(\theta _{\mathrm{max}}\) is contained in the support of \(\nu \), continuity of \(P_0^{*}\) yields \(P_0^{*}(W(K_0^{*}(w_0), \theta _{\mathrm{max}}))=0\). Moreover, (14a) and (15) imply \(K_0^{*}(w_0) = K_0(w_0)\), the latter being defined by (7). Thus, under Assumption 4, \(w_1 := W(K_0^{*}(w_0), \theta _{\mathrm{max}})\) satisfies \(w_1= W(K_0(w_0), \theta _{\mathrm{max}}) > w_0\) and \(P_0^{*}(w_1) =0\).

Let \(w_1 \le w_n< w_{\mathrm{max}}\) be any value for which \(P_0^{*}(w_n)=0\). Repeating the previous argument shows that \(w_{n+1} := W(K_0^{*}(w_{n}), \theta _{\mathrm{max}}) = W(K_0(w_{n}), \theta _{\mathrm{max}})>w_n\) and \(P_0^{*}(w_{n+1}) =0\). Due to Assumption 4, the sequence \((w_n)_{n \ge 1}\) converges monotonically to \(w_{\mathrm{max}}\) and \(P_0^{*}(w_n)=0\) for all \(n \ge 1\) implies \(P_0^{*}(w_{\mathrm{max}})=0\) due to continuity of \(P_0^{*}\).

The remaining inequalities follow as limits from the monotonicity of \(K_{\bullet }\) and \(T_{\cdot }\) due to Lemma 6 and Corollary 1 which imply \(P_d^m > P_{d'}^m\) and \(K_{P_d^m + d} < K_{P_{d'}^m + d'}\) for all m which must (weakly) also hold in the limit. As for each \(w \in \mathbb {W}\), \(K_d^{*}(w)\) is the unique zero k of \(G_d(k;w)= u'(w - k - P_d^{*}(w)) - \mathbb {E}_{\nu }[R(k,\cdot ) v'( P_d^{*}(W(k,\cdot )) + d + k R(k,\cdot )) ]\) which is strictly increasing in d, the second inequality even holds strictly.

  1. (ii)

    Follows directly from \(P_d^{*} \in \mathscr {G}\) as shown in the main text and Lemma 4(ii).

  2. (iii)

    Follows directly from the previous results and Definitions 1 and 2. \(\square \)
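Part (i) of the proof rests on the pointwise limit of the decreasing sequence \((T_d^m P_0)_{m \ge 0}\) started from \(P_0 = \mathrm{id}_{\mathbb {W}}\) (see also the proof of Lemma 7 below). A minimal numerical sketch of this iteration is given next, under the same illustrative Cobb-Douglas/logarithmic specification as the sketch following the proof of Lemma 3 and with a grid-and-interpolation representation of the functions in \(\mathscr {G}'\); none of these computational choices are taken from the paper.

```python
import numpy as np
from scipy.optimize import brentq

# Minimal sketch of the monotone iteration (T_d^m P_0)_{m>=0} with P_0 = id_W.
# The Cobb-Douglas/log parameterization and the grid representation are
# assumptions of this illustration, not part of the paper's general setting.
alpha, beta, d = 0.3, 1.0, 0.0              # set d > 0 for the dividend case T_d
thetas = np.array([0.9, 1.0, 1.1])
probs = np.array([0.3, 0.4, 0.3])

W = lambda k, th: th * (1 - alpha) * k**alpha
R = lambda k, th: th * alpha * k**(alpha - 1)
up = lambda c: 1.0 / c
vp = lambda c: beta / c

w_grid = np.linspace(0.05, 0.55, 60)        # grid representation of maps in G'

def apply_T(P_vals):
    """One application of T_d = T(. + d): solve the Euler system at each grid point."""
    P = lambda x: np.interp(x, w_grid, P_vals)    # piecewise-linear, increasing
    out = np.empty_like(P_vals)
    for i, w in enumerate(w_grid):
        def G(k):                                 # strictly increasing in k
            q = P(W(k, thetas)) + d               # next-period price plus dividend
            c = q + k * R(k, thetas)              # old-age consumption
            D = np.sum(probs * R(k, thetas) * vp(c))
            V = np.sum(probs * q * vp(c)) / D
            s = w - k - V
            return (up(s) - D) if s > 0 else 1e12 - D
        k = brentq(G, 1e-12, w * (1 - 1e-12))
        q = P(W(k, thetas)) + d
        c = q + k * R(k, thetas)
        out[i] = np.sum(probs * q * vp(c)) / np.sum(probs * R(k, thetas) * vp(c))
    return out

P_vals = w_grid.copy()                      # P_0 = id_W
for m in range(200):                        # the sequence T_d^m P_0 is decreasing
    nxt = apply_T(P_vals)
    if np.max(np.abs(nxt - P_vals)) < 1e-9:
        break
    P_vals = nxt
print("maximum of the approximate limit on the grid:", P_vals.max())
```

A strictly positive limit corresponds to the non-trivial fixed point whose properties part (i) establishes; running the same code for a sequence of dividends \(d \searrow 0\) mimics the construction of \(P_0^{**}\) in the proof of Lemma 7 below.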

A.9 Proof of Lemma 7

Let \((d_n)_{n \ge 0}\) be a sequence converging monotonically to zero. For each \(n \ge 1\), define \((P_{d_n}^m)_{m \ge 1}\) as \(P_0 = \mathrm{id}_\mathbb {W}\) and \(P_{d_n}^m = T_{d_n}^m P_0 \in \mathscr {G}'\) for \(m \ge 1\). This sequence is strictly monotonic and converges pointwise to \(P_{d_n}^{*} \in \mathscr {G}\) defined in (18). It follows from Theorem 1(i) that the sequence of limits \((P_{d_n}^{*})_{n \ge 1}\) is decreasing such that the limiting function

$$\begin{aligned} P^{**}_0(w) := \lim _{n \rightarrow \infty } P_{d_n}^{*}(w) \end{aligned}$$
(A.32)

is well defined for all \(w \in \mathbb {W}\). Denote by \(P_0^{*}\) the limit in (18) for \(d=0\), i.e.,

$$\begin{aligned} P_0^{*}(w) = \lim _{m \rightarrow \infty } T^m P_0(w) \end{aligned}$$
(A.33)

for \(w \in \mathbb {W}\). We would like to show that \(P^{**}_0 = P_0^{*}\).

As \(T_d\) is increasing in d by Corollary 1, \(P_{d_n}^m = T_{d_n}^m P_0 \ge T^m P_0 = P_0^m\) for all m which implies \(P_{d_n}^{*} \ge P_0^{*}\) for all n. Therefore, \(P_0^{**} \ge P_0^{*}\). We therefore need to show \(P_0^{**} \le P_0^{*}\).

Suppose \(d_n =0\) for all \(n \ge n_0\). In this case \(n \ge n_0\) implies \(P_{d_n}^m = T_{d_n}^m P_0 = T^m P_0 = P_0^m\) for all \(m \ge 1\) and, therefore, \(P_0^{**} = P_0^{*}\). The remainder of the proof therefore assumes that the dividend sequence is strictly positive, i.e., \(d_n>0\) for all n and strictly decreasing.

We first show that \(P_0^{**}\) in (A.32) is independent of the particular dividend sequence. For \(i=1,2\), let \((d_n^i)_{n \ge 1}\) be a strictly positive sequence converging monotonically to zero. Denote by \(P^{**, i}_0\) the pointwise limit (A.32) induced by \((d_n^i)_{n \ge 1}\). Now, for each \(n \ge 1\) there exists \(k \ge 0\) such that \(d_n^1 > d_{n+m}^2\) for all \(m \ge k\). By Theorem 1(i), this implies \(P^{*}_{d_n^1} \ge P^{*}_{d_{n+m}^2}\) and, therefore, \(P_{d_n^1}^{*}(w) \ge \lim _{m \rightarrow \infty } P^{*}_{d_{n+m}^2}(w)= P^{**, 2}_0(w)\) for all \(w \in \mathbb {W}\). Since n was arbitrary, \(P^{**, 1}_0 \ge P^{**, 2}_0\). Reversing the argument gives \(P^{**, 2}_0 \ge P^{**, 1}_0\).

We show that \(P > P^{**}_0\) implies \(T P > P^{**}_0\) for any \(P \in \mathscr {G}'\). As \(P_0> P^{**}_0\) and \(P_0\in \mathscr {G}'\), we then obtain by simple induction that \(T^m P_0 > P^{**}_0\) for all m which proves \(P^{*}_0 \ge P^{**}_0\).

Let \(P\in \mathscr {G}'\) satisfy \(P > P^{**}_0\) and \(\hat{w} \in \mathbb {W}\) be arbitrary. We show \(TP (\hat{w}) > P^{**}_0(\hat{w})\) (see footnote 17). Given \(\hat{w}\), define the compact set \(\overline{\mathbb {W}}_{\hat{w}} := [W(K_P(\hat{w}), \theta _{\mathrm{min}}), w_{\mathrm{max}}] \subset \mathbb {W}\). We will construct a function \(\tilde{P}\in \mathscr {G}'\) such that \(P>\tilde{P}\) on \(\overline{\mathbb {W}}_{\hat{w}}\). Noting that only the behavior of P and \(\tilde{P}\) on the interval \(\overline{\mathbb {W}}_{\hat{w}}\) is relevant to compute \(TP(\hat{w})\) and \(T \tilde{P}(\hat{w})\), the same arguments as in the proof of Lemma 6 can then be used to show \(TP(\hat{w}) > T \tilde{P}(\hat{w})\) (see footnote 18).

In order to construct such a \(\tilde{P}\), set \(\delta := \min _{w \in \overline{\mathbb {W}}_{\hat{w}}} \{P(w) - P^{**}_0(w)\}>0\). By Theorem A in Buchanan and Hildebrandt (1908), there exists a \(d>0\) such that \(\Vert P^*_d-P^{**}_0\Vert _{\infty }<\frac{\delta }{3}\) on \(\overline{\mathbb {W}}_{\hat{w}}\) as \(P^*_d\) converges monotonically to \(P^{**}_0\) for \(d {\searrow } 0\) (here \(\Vert \cdot \Vert _{\infty }\) denotes the supremum norm). By the same argument, there exists \(m\in \mathbb {N}\) such that \(\Vert T^m_d P_0-P^*_d\Vert _{\infty }<\frac{\delta }{3}\) on \(\overline{\mathbb {W}}_{\hat{w}}\) as \((T^m_d P_0)_{m \ge 0}\) converges pointwise to \(P_d^{*}\). Define \(\tilde{P}:=T^m_d P_0\) and note that \(\Vert \tilde{P}-P^{**}_0\Vert _{\infty }<\frac{2 \delta }{3}\) on \(\overline{\mathbb {W}}_{\hat{w}}\). Further, \(P_0^{**}<T^{m+1}_{\tilde{d}} P_0<T_{\tilde{d}}\circ T^m_d P_0\) on \(\mathbb {W}\) for any \(0<\tilde{d}<d\). Thus, \(P_0^{**}<T_{\tilde{d}}\tilde{P}\) for any \(\tilde{d}>0\) which implies \(P^{**}_0 \le T\tilde{P}\). This last result uses that

$$\begin{aligned} \lim _{n \rightarrow \infty } T_{d_n} P(w) = T P(w) \end{aligned}$$

for all \(P \in \mathscr {G}'\), \(w \in \mathbb {W}\) and any monotonic sequence \((d_n)_{n}\) converging to zero (see footnote 19). Combining these results we get \(TP(\hat{w})>T\tilde{P}(\hat{w})\geqslant P^{**}_0 (\hat{w})\) for any \(\hat{w} \in \mathbb {W}\).

To show that \(\lim _{n \rightarrow \infty } K^{*}_{d_n}(w) = K_0^{*}(w)\) for each \(w \in \mathbb {W}\), note that \((K^{*}_{d_n}(w))_{n}\) is increasing by Theorem 1(i) and converges to some limit \(k^{*} \le K_0(w)\). By the same arguments used in the proof of Theorem 1(i), \(k^{*}\) and \(p^{*}:= P_0^{*}(w)\) satisfy the Euler equations at \(P=P_0^{*}\) and \(d=0\) which implies \(k^{*} = K_{P_0^{*}}(w)\) by uniqueness of the solution to (15). \(\square \)

B Efficiency and inefficiency of MEA

In this appendix, we review the recursive characterization of interim Pareto optimality for stationary exchange economies obtained in Barbie and Kaul (2015) and adapt their results to characterize the optimality of MEA in a stochastic production economy. As large parts of the analysis hold almost unchanged and require mainly notational changes, we will frequently refer to Barbie and Kaul (2015) for the details and proofs and just repeat the core facts. To adapt the results, we need the characterization of interim optimality for production OLG models from Barbie et al. (2007), who extended the pure exchange case in Chattopadhyay and Gottardi (1999).

B.1 Notation and definitions

Let \(A= (K, C^y, C^o)\) be a continuous, bounded MEA defined as in Sect. 4.2 and \(\overline{\mathbb {W}}=[\underline{w}, w_{\mathrm{max}}]\) be a stable set of A. Fixing the initial shock \(\theta _0 \in \varTheta \) permits \(\overline{\mathbb {W}}\) to be used as the state space, which corresponds to the set S in Barbie and Kaul (2015). To adapt our notation to their setup, note that any two successive states w and \(w'\) permit the shock in the second period to be recovered via \(\theta ' = w'/W(K(w),1)\). Thus, define the (modified) pricing kernel \(m:{\overline{\mathbb {W}}} \times {\overline{\mathbb {W}}} \longrightarrow \mathbb {R}_{++}\)

$$\begin{aligned} m(w, w'):= \dfrac{v'\left( C^o(w, w'/W(K(w), 1))\right) }{u'(C^y(w))}. \end{aligned}$$
(B.1)

Denote by \(\mathscr {B}(\overline{\mathbb {W}})\) the Borel \(\sigma \)-algebra on \(\overline{\mathbb {W}}\). As shocks are i.i.d., the function K defines a transition probability \(Q:\overline{\mathbb {W}} \times \mathscr {B}(\overline{\mathbb {W}}) \longrightarrow [0,1]\),

$$\begin{aligned} Q(w, G) := \nu (\{ \theta \in \varTheta \, | \, W(K(w), \theta ) \in G \}). \end{aligned}$$
(B.2)

Note that Q has the Feller property since the function \(W\circ K \) is continuous. By the change-of-variable formula, the inequality (21) can be written as

$$\begin{aligned} \int _{\overline{\mathbb {W}}} \eta (w') m(w, w') Q(w, \mathrm{d}w') > \eta (w). \end{aligned}$$
(B.3)

To adapt their formal arguments, the remainder follows Barbie et al. (2007) by assuming that the shock process is finite-valued, i.e., \(\varTheta =\{\theta _1, \ldots , \theta _N\}\). Thus, if \(w_t\in \overline{\mathbb {W}}\) is the state in period t, there are N successive states \(w_{t+1} = W(K(w_t), \theta _{t+1})\). If \(w^{\prime }\in \overline{\mathbb {W}}\) is such a successor, we write \(w^{\prime }\succ w_{t}\). With this notation, an integral of the form (B.3) can be written as \(\sum _{w' \succ w} \eta (w') m(w, w') Q(w, w')\).

Given some initial state \(w_0 \in \overline{\mathbb {W}}\), denote by \(W^t(w_0)\) the set of histories \(w^t=(w_0,\ldots , w_t)\) observed up to time t, i.e., \(w_{n} \succ w_{n-1}\) for all \(n=1, \ldots , t\). Further, let \(W^{\infty }(w_0)\) denote the set of all infinite histories \(w^{\infty } = (w_t^{\infty })_{t \ge 0}\), i.e., \(w^{\infty }_{t} \succ w^{\infty }_{t-1}\) for all \(t \ge 1\) and \(w^{\infty }_0=w_0\). For any infinite path \(w^{\infty }\in W^{\infty }(w_0)\), denote by \(\left( w^{\infty }\right) ^{t}\) the induced history up to time \(t \ge 0\) along this path, i.e., \(\left( w^{\infty }\right) ^{t}=\left( w_{0}^{\infty },w_{1}^{\infty },\ldots ,w_{t}^{\infty }\right) \in W^t(w_0)\).

Similar to Chattopadhyay and Gottardi (1999), define for each \(w^{t} \in W^t(w_0)\) the set of weights (see footnote 20)

$$\begin{aligned} \mathcal {U}( w^{t}) = \left\{ \lambda (w^{t},w^{\prime }) \in \mathbb {R}_+ \left| \right. w^{\prime }\succ w_t, \sum _{w^{\prime }\succ w_t}\lambda (w^{t},w^{\prime }) Q( w_{t},w^{\prime }) =1\right\} . \end{aligned}$$

Given some \(w_0 \in \overline{\mathbb {W}}\), define \(\mathcal {U}^{\infty }\left( w_{0}\right) \) to be the family of weights \(\lambda ^{\infty } = (\lambda (w^t, \cdot ))_{t \ge 1}\) where \(w^t \in W^t(w_0)\) and \(\lambda (w^t, \cdot ) \in \mathcal {U}(w^t)\) for all t.

B.2 Recursive characterization of inefficiency

Barbie et al. (2007) derived a condition for interim Pareto inefficiency in a stochastic Diamond model. For a MEA A which satisfies the restrictions from Lemma 8, the necessary part of this result can be stated as follows.

Lemma 13

If \(A=(K, C^y, C^o)\) is inefficient at \(w_0\in \overline{\mathbb {W}}\), there exists a family of weights \(\lambda ^{\infty } \in \mathcal {U}^{\infty }\left( w_{0}\right) \) and a constant \(C\ge 0\) such that for each path \(w^{\infty } \in W^{\infty }(w_0)\)

$$\begin{aligned} \sum _{i=0}^{\infty }\prod _{j=0}^{i}\frac{\lambda \left( (w^{\infty })^{j},w_{j+1}^{\infty }\right) }{m\left( w_{j}^{\infty },w_{j+1}^{\infty }\right) }\le C . \end{aligned}$$
(B.4)
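For orientation (this specialization is not part of the lemma), consider the deterministic case in which each state has a unique successor. The constraint defining \(\mathcal {U}(w^t)\) then forces \(\lambda \equiv 1\), and the household's Euler equation for capital implies that \(1/m(w_j, w_{j+1})\) equals the gross return \(R_{j+1}\) on capital. Condition (B.4) then reduces to a Cass-type boundedness criterion

$$\begin{aligned} \sum _{i=0}^{\infty }\prod _{j=0}^{i} R_{j+1} \le C, \end{aligned}$$

which holds, for instance, at a steady state with gross return below one, i.e., under overaccumulation of capital.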

As noted in Barbie and Kaul (2015), condition (B.4) can be restated as a minimax problem: the max-part takes the supremum over all possible paths, while the min-part takes the infimum over all admissible families of weights. For any \(w_{0}\in \overline{\mathbb {W}}\), define the value function

$$\begin{aligned} {J^{*}\left( w_{0}\right) := } \underset{\lambda ^{\infty }\in \mathcal {U}^{\infty }\left( w_{0}\right) }{ \inf }\ \ \underset{w^{\infty }\in W^{\infty }(w_0)}{\sup } \left\{ \ \ 1+\sum _{i=0}^{\infty }\prod _{j=0}^{i}\frac{\lambda \left( \left( w^{\infty }\right) ^{j},w_{j+1}^{\infty }\right) }{m\left( w_{j}^{\infty },w_{j+1}^{\infty }\right) } \right\} . \end{aligned}$$
(B.5)

The next result follows immediately from Lemma 13 and (B.5).

Corollary 3

If A is inefficient at \(w_0 \in \overline{\mathbb {W}}\), then \(J^*(w_0)<\infty \).

Following Barbie and Kaul (2015), we show that (B.5) has a recursive structure which permits \(J^{*}\) to be computed as a fixed point of an operator Z. For each \(w\in \overline{\mathbb {W}}\), define the set of all stationary weights

$$\begin{aligned} \mathcal {U}(w) = \left\{ \lambda (w,w^{\prime }) \in \mathbb {R}_+ \, | \, w' \succ w, \sum _{w^{\prime }\succ w}\lambda (w,w^{\prime }) Q( w,w^{\prime }) =1 \right\} . \end{aligned}$$

Define the operator Z which associates with any nonnegative extended real-valued function \(J:\overline{\mathbb {W}} \longrightarrow \mathbb {R}_+ \cup \{+ \infty \}\) the new function ZJ defined for all \(w \in \overline{\mathbb {W}}\) as

$$\begin{aligned} Z J(w) :=1+\underset{{\lambda \left( w, \cdot \right) } \in \mathcal {U}\left( w\right) }{\inf } \ \ \underset{w^{\prime }\succ w}{\sup } \left\{ \frac{\lambda \left( w,w^{\prime }\right) }{m\left( w,w^{\prime }\right) } \cdot J\left( w^{\prime }\right) \right\} . \end{aligned}$$
(B.6)

Note that Z is monotonic, i.e., \(J_1 \ge J_2\) implies \(Z J_1 \ge Z J_2\). The operator Z can now be used to compute a value function that solves the functional equation \(J = ZJ\) associated with (B.6). Construct the sequence \((J_n)_{n \ge 0}\) of functions \(J_n\) defined on \(\overline{\mathbb {W}}\) recursively by setting \(J_{0}\equiv 1\) and \(J_n = Z J_{n-1}\) for \(n \ge 1\). For each \(w\in \overline{\mathbb {W}}\), define the function

$$\begin{aligned} J_{\infty }\left( w\right) := \lim _{n \rightarrow \infty } J_n(w). \end{aligned}$$
(B.7)

Note that the pointwise limit in (B.7) exists in \(\mathbb {R}_+ \cup \{+\infty \}\) since the sequence \((J_n)_{n \ge 0}\) is increasing. We now have the following result. The proof is the same as those of Theorem 1 and Proposition 2 in Barbie and Kaul (2015), with the appropriate notational changes.

Lemma 14

The function \(J_{\infty }\) defined in (B.7) is a fixed point of Z that coincides with the value function \(J^{*}\) defined in (B.5), i.e., \(J_{\infty } = Z J_{\infty } = J^{*}\).

B.3 Proof of Lemma 8(i)

By Corollary 3, if A is inefficient then \(J^*(w_0)< {\infty }\) for all \(w_0 \in \overline{\mathbb {W}}\). Set \(\eta (w):= 1{/}J^{*}(w)\) for \(w \in \overline{\mathbb {W}}\). It follows from the same arguments as in the proofs of Proposition 4 and Theorem 2(a) in Barbie and Kaul (2015) that \(\eta \) is a strictly positive, upper-semicontinuous function which takes values in the unit interval (since \(J^{*}>1\)) and satisfies (B.3) for all \(w\in \overline{\mathbb {W}}\). As boundedness of A permits the lower bound \(\underline{w}\) to be chosen arbitrarily small, the previous construction of \(\eta \) can be extended to the entire interval \(\mathbb {W}=]0, w_{\mathrm{max}}]\). \(\square \)

B.4 Proof of Lemma 8(ii)

In this section we present an additional sufficient condition under which the function \(\eta \) constructed in the previous subsection is continuous rather than merely upper-semicontinuous. We then argue that this condition is satisfied if the kernel \(m_A\) exhibits the monotonicity property required in Lemma 8(ii). We have the following result:

Lemma 15

Suppose \(J^{*}=J_{\infty }\) defined in (B.7) is uniformly bounded on \(\overline{\mathbb {W}}\), i.e., there exists a constant \(M \ge 0\) such that \(J^{*}(w) \le M\) for all \(w\in \overline{\mathbb {W}}\). Then, \(\eta =1/J^{*}\) is continuous.

Proof

Construct the sequence \((J_n)_{n \ge 0}\) as above by setting \(J_0 \equiv 1\) and \(J_n = Z J_{n-1}\) for \(n \ge 1\). Recall that \(J_1 > 1 = J_0\) and monotonicity of Z imply that \((J_n)_{n \ge 0}\) is strictly increasing, i.e., \(J_n > J_{n-1}\) for all \(n \ge 1\). By Lemma 14, we know that the pointwise limit \(J^*\) defined in (B.7) is a fixed point of Z. We will show that under the hypotheses of Lemma 15, \((J_n)_{n \ge 1}\) is a Cauchy sequence in the space of bounded continuous functions on \(\overline{\mathbb {W}}\). As this space is complete, the sequence must converge to some bounded continuous function, which coincides with the pointwise limit \(J^*\).

First, we show that each \(J_n\) is of the form \(J_n(w) = 1 + c_n^{*}(w)\) for some continuous function \(c_n^{*}:\overline{\mathbb {W}} \longrightarrow \mathbb {R}_+\). This holds trivially for \(n=0\) with \(c_0^{*} \equiv 0\). By induction, suppose \(J_{n-1}(w) = 1+c_{n-1}^{*}(w)\) for some \(n \ge 1\). For each \(w\in \overline{\mathbb {W}}\) and \(w' \succ w\), define the function

$$\begin{aligned} \lambda _n^{*}(w, w'):= \frac{m(w, w')}{J_{n-1}(w')}c_n^{*}(w) \end{aligned}$$
(B.8)

where \(c_n^{*}\) is chosen such that \(\sum _{w^{\prime }\succ w}\lambda _n^{*} (w,w^{\prime }) Q( w,w^{\prime }) =1\) for all \(w \in \overline{\mathbb {W}}\), i.e.,

$$\begin{aligned} c_{n}^{*}(w) : = \left[ \sum _{w^{\prime }\succ w} \frac{m(w,w^{\prime })}{J_{n-1}(w')} Q( w,w^{\prime })\right] ^{-1}. \end{aligned}$$
(B.9)

Note that \(\lambda _n^{*}\) is continuous and attains the infimum in (B.6). Hence,

$$\begin{aligned} J_{n}(w) = 1 + \max _{w'\succ w} \frac{\lambda _n^{*}(w, w')}{m(w, w')} J_{n-1}(w') = 1+c_{n}^{*}(w). \end{aligned}$$
(B.10)

As continuity of \(c_{n-1}^{*}\) implies continuity of \(c_n^{*}\), this proves that each \(J_n\) is continuous and, therefore, bounded on the compact set \(\overline{\mathbb {W}}\).

Defining \(\lambda _n^{*}\) by (B.8) for each \(n\ge 1\) we can now use the first equality in (B.10) to expand \(J_n\) for all \(w_0 \in \overline{\mathbb {W}}\) as

$$\begin{aligned} J_{n}(w_0)= & {} 1 + \max _{w_1\succ w_{0}} \frac{\lambda _n^{*}(w_{0}, w_{1})}{m(w_{0}, w_1)}\left[ 1 + \max _{w_{2}\succ w_{1}} \frac{\lambda _{n-1}^{*}(w_1, w_2)}{m(w_1, w_2)}J_{n-2}(w_2) \right] \nonumber \\ \nonumber= & {} 1 + \max _{w_1\succ w_{0}} \frac{\lambda _n^{*}(w_{0}, w_{1})}{m(w_{0}, w_1)}\left[ 1 + \max _{w_{2}\succ w_{1}} \frac{\lambda _{n-1}^{*}(w_1, w_2)}{m(w_1, w_2)} \biggl [ \ldots \right. \nonumber \\&\left. \left. \left[ 1+ \max _{w_n\succ w_{n-1}} \frac{\lambda _1^{*}(w_{n-1}, w_{n})}{m(w_{n-1}, w_n)} \right] \ldots \right] \right] . \end{aligned}$$
(B.11)

The final term in (B.11) satisfies \(1+\max _{w_n\succ w_{n-1}} \frac{\lambda _1^{*}(w_{n-1}, w_{n})}{m(w_{n-1}, w_n)} = J_1(w_{n-1}) = 1+c_1^{*}(w_{n-1})\).

Of course, \(\lambda _{n}^{*}\) need not attain the infimum when defining \(J_{n+1}\) by (B.6). Therefore, recalling that \(J_1(w) = 1+c_1^{*}(w)\), we have for all \(w_0 \in \overline{\mathbb {W}}\)

$$\begin{aligned} J_{n+1}(w_0)= & {} 1+ \max _{w_1\succ w_0} \frac{\lambda _{n+1}^{*}(w_0, w_1)}{m(w_0, w_1)} J_{n}(w_1) \nonumber \\\le & {} 1+ \max _{w_1\succ w_0} \frac{\lambda _n^{*}(w_0, w_1)}{m(w_0, w_1)} J_{n}(w_1) \nonumber \\\le & {} 1 + \max _{w_1\succ w_{0}} \frac{\lambda _n^{*}(w_{0}, w_{1})}{m(w_{0}, w_1)}\left[ 1 + \max _{w_{2}\succ w_{1}} \frac{\lambda _{n-1}^{*}(w_1, w_2)}{m(w_1, w_2)} \biggl [ \ldots \right. \nonumber \\&\left. \left. 1+\,\max _{w_{n-1}\succ w_{n-2}} \frac{\lambda _2^{*}(w_{n-2}, w_{n-1})}{m(w_{n-2}, w_{n-1})}\left[ \right. \right. \right. \nonumber \\&\left. \left. \left. 1+\, \max _{w_{n}\succ w_{n-1}} \frac{\lambda _1^{*}(w_{n-1}, w_{n})}{m(w_{n-1}, w_{n})}\left( 1+c_1^{*}(w_n)\right) \right] \ldots \right] \right] . \end{aligned}$$
(B.12)

By elementary observations (Footnote 21), the final term in (B.12) satisfies, for any \(w_{n-2} \in \overline{\mathbb {W}}\),

$$\begin{aligned}&\max _{w_{n-1}\succ w_{n-2}} \frac{\lambda _2^{*}(w_{n-2}, w_{n-1})}{m(w_{n-2}, w_{n-1})}\left[ 1+ \max _{w_{n}\succ w_{n-1}} \frac{\lambda _1^{*}(w_{n-1}, w_{n})}{m(w_{n-1}, w_{n})}\left( 1+c_1^{*}(w_n)\right) \right] \\&\quad \le \max _{w_{n-1}\succ w_{n-2}} \frac{\lambda _2^{*}(w_{n-2}, w_{n-1})}{m(w_{n-2}, w_{n-1})} \left[ 1\right. +\,\max _{w_{n}\succ w_{n-1}} \frac{\lambda _1^{*}(w_{n-1}, w_{n})}{m(w_{n-1}, w_{n})} \nonumber \\&\qquad +\left. \max _{w_{n}\succ w_{n-1}} \frac{\lambda _1^{*}(w_{n-1}, w_{n})}{m(w_{n-1}, w_{n})} c_1^{*}(w_n) \right] \\&\quad \le \max _{w_{n-1}\succ w_{n-2}} \frac{\lambda _2^{*}(w_{n-2}, w_{n-1})}{m(w_{n-2}, w_{n-1})} \left[ 1+\max _{w_{n}\succ w_{n-1}} \frac{\lambda _1^{*}(w_{n-1}, w_{n})}{m(w_{n-1}, w_{n})} \right] \\&\qquad +\, \max _{w_{n-1}\succ w_{n-2}} \frac{\lambda _2^{*}(w_{n-2}, w_{n-1})}{m(w_{n-2}, w_{n-1})} \max _{w_{n}\succ w_{n-1}} \frac{\lambda _1^{*}(w_{n-1}, w_{n})}{m(w_{n-1}, w_{n})} c_1^{*}(w_n). \end{aligned}$$

Solving (B.12) in this recursive fashion and using (B.11) we obtain for all n and \(w_0 \in \overline{\mathbb {W}}\)

$$\begin{aligned} J_{n+1}(w_0) \le J_n(w_0) + \max \limits _{w_1\succ w_0} \frac{\lambda ^*_n(w_0,w_1)}{m(w_0,w_1)} \cdots \max \limits _{w_n\succ w_{n-1}} \frac{\lambda ^*_{1}(w_{n-1},w_n)}{m(w_{n-1},w_n)}c_1^{*}(w_n).\nonumber \\ \end{aligned}$$
(B.13)

Using (B.8) and (B.10) in (B.13) we obtain for all \(n \in \mathbb {N}\) and \(w_0 \in \overline{\mathbb {W}}\)

$$\begin{aligned} J_{n+1}(w_0)-J_n (w_0)&\leqslant \max \limits _{w_1\succ w_0}\frac{c^*_n(w_0)}{1+c^*_{n-1}(w_1)}\\&\quad \times \max \limits _{w_2\succ w_1}\frac{c^*_{n-1}(w_1)}{1+c^*_{n-2}(w_2)}\ldots \max \limits _{w_n\succ w_{n-1}}c^*_1(w_{n-1})\cdot c^*_1(w_n)\\&=c^*_n(w_0)\cdot \max \limits _{w_1\succ w_0}\frac{c^*_{n-1}(w_1)}{1+c^*_{n-1}(w_1)}\ldots \max \limits _{w_{n-1}\succ w_{n-2}}\frac{c^*_1(w_{n-1})}{1+c^*_1(w_{n-1})}\\&\quad \times \max \limits _{w_n\succ w_{n-1}}c^*_1(w_n). \end{aligned}$$

Since \(M\geqslant J^*(w)\geqslant J_n(w)=1+c^*_n(w)>c^*_n(w)\) for any \(w\in \overline{\mathbb {W}}\) and \(n\in \mathbb {N}\), we get

$$\begin{aligned} 0<&J_{n+1}(w)-J_n (w)\leqslant M^2\cdot \left( \frac{M}{1+M}\right) ^{n-1} \end{aligned}$$

for all \(w\in \overline{\mathbb {W}}\). But this means that

$$\begin{aligned} \Vert J_{n+1} - J_n \Vert _{\infty }\leqslant B\left( \beta \right) ^{n-1} \end{aligned}$$

where \(\Vert \cdot \Vert _{\infty }\) is the supremum norm on the space of bounded continuous functions on \(\overline{\mathbb {W}}\), \(B:=M^2>0\), and \(\beta := M/(1+M) \in \,]0,1[\). Summing this geometric bound over consecutive differences implies

$$\begin{aligned} \Vert J_{n+m} -J_n\Vert _{\infty }\leqslant B \beta ^{n-1}\frac{1}{1-\beta } \end{aligned}$$

for all \(n,m>0\) and so \((J_n )_{n \ge 0}\) is a Cauchy sequence, as was to be shown.\(\square \)

Now suppose \(m_A\) defined in (20) is monotonically increasing. We show that this implies the hypothesis of Lemma 15. Using the change-of-variable formula in (B.9) yields

$$\begin{aligned} \frac{1}{c_n^{*}(w)} = \sum _{w^{\prime }\succ w} \frac{m(w,w^{\prime })}{1+c_{n-1}^{*}(w')} Q( w,w^{\prime }) = \mathbb {E}_{\nu }\left[ \frac{m_A(w, \cdot )}{1+c_{n-1}^{*}\left( W(K(w), \cdot )\right) } \right] . \end{aligned}$$

As the term on the far right is strictly increasing in w whenever \(c_{n-1}^{*}\) is decreasing, it follows by induction that each \(J_n(w) = 1 + c_n^{*}(w)\), \(w \in \overline{\mathbb {W}}\), is strictly decreasing, which implies \(J_n(w) \le J_n(\underline{w})\) for all n. Taking the limit gives \(J^{*}(w) \le J^{*}(\underline{w})\) for all \(w \in \overline{\mathbb {W}}\); in particular, if A is inefficient, then \(J^{*}(\underline{w})<\infty \) by Corollary 3, so the hypothesis of Lemma 15 holds with \(M = J^{*}(\underline{w})\). Finally, if A is inefficient at \(w_0\), monotonicity of \(J^{*}\) implies \(J^{*}(w_0') \le J^{*}(w_0) <\infty \) for all \(w_0' \ge w_0\), i.e., A is also inefficient at every \(w_0' \ge w_0\). \(\square \)
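As an aside, the fixed-point construction behind Lemma 15 is directly computable: by (B.9)–(B.10), one application of Z amounts to \(ZJ(w) = 1 + [\sum _{w' \succ w} m(w,w') Q(w,w')/J(w')]^{-1}\). The following minimal sketch iterates this map on a grid; the primitives are placeholder assumptions for illustration only, not the paper's equilibrium objects.

```python
import numpy as np

# Placeholder primitives (illustrative assumptions, not the paper's objects).
theta = np.array([0.8, 1.0, 1.2])            # finite shock support
nu = np.array([0.3, 0.4, 0.3])               # i.i.d. shock probabilities
K = lambda w: 0.3 * w                        # stylized savings function
W = lambda k, th: th * k ** 0.3              # stylized next-period wealth
m = lambda w, w_next: 1.02 + 0.05 * np.minimum(w_next, 1.0)  # stylized, increasing kernel

grid = np.linspace(0.1, 2.0, 200)            # discretized stable set

def apply_Z(J):
    """One application of Z via (B.9)-(B.10):
    ZJ(w) = 1 + c*(w), with 1/c*(w) = sum_{w' > w} m(w, w') Q(w, w') / J(w')."""
    ZJ = np.empty_like(J)
    for i, w in enumerate(grid):
        w_next = W(K(w), theta)              # the N successor states of w
        J_next = np.interp(w_next, grid, J)  # J evaluated at the successors
        ZJ[i] = 1.0 + 1.0 / np.sum(nu * m(w, w_next) / J_next)
    return ZJ

# Value iteration J_0 = 1, J_n = Z J_{n-1}; by Lemma 14 the increasing limit is J*.
J = np.ones_like(grid)
for _ in range(1000):
    J_new = apply_Z(J)
    err = np.max(np.abs(J_new - J))
    J = J_new
    if err < 1e-8:
        break
print(J.min(), J.max())
```

With the increasing placeholder kernel used here, the computed iterates are (weakly) decreasing along the grid, mirroring the monotonicity argument above, and they remain uniformly bounded, which is the numerical counterpart of the finiteness of \(J^{*}\) used in Corollary 3 and in the construction \(\eta = 1/J^{*}\).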

B.5 Proof of Lemma 9

For each \(w \in \mathbb {W}\) and \(\theta \in \varTheta \), define \(C_0^o(w, \theta ):= K_0(w) R(K_0(w), \theta )\) and \(\tilde{m}(w) := \mathbb {E}_{\nu }[R(K_0(w), \cdot ) v'(C_0^o(w, \cdot ))]\). Using (7), the pricing kernel \(m_0\) can be written as

$$\begin{aligned} m_0 (w, \theta ) = v'\left( C_0^o(w, \theta )\right) /\tilde{m}(w). \end{aligned}$$

Let \(w \in \mathbb {W}\) and \(\theta \in \varTheta \) be arbitrary but fixed and set \(c_0 := C_0^o(w, \theta )\) and \(k_0 := K_0(w)\). Then, by direct computation, \(\frac{\partial m_0}{\partial w}(w, \theta ) = \frac{K_0'(w) v'(c_0)}{k_0 \tilde{m}(w)} H(w)\) where

$$\begin{aligned} H(w):= & {} E_{f'}(k_0) + (1-E_{f'}(k_0))\nonumber \\&\times \left( \frac{ \mathbb {E}_{\nu }\left[ R(k_0, \cdot )C_0^o(w, \cdot ) |v''\left( C_0^o(w, \cdot )\right) |\right] }{\tilde{m}(w)} - E_{v'} (c_0)\right) \end{aligned}$$
(B.14)

determines the sign of \(\frac{\partial m_0}{\partial w}(w, \theta )\). Using (4), we have \(0 \le E_{v'}^{\mathrm{min}} \le E_{v'} (c_0) \le E_{v'}^{\mathrm{max}} \le 1\) and

$$\begin{aligned} E_{v'}^{\mathrm{min}} \tilde{m}(w) \le \mathbb {E}_{\nu }[R(k_0, \cdot )C_0^o(w, \cdot ) |v''(C_0^o(w, \cdot ))|] \le E_{v'}^{\mathrm{max}} \tilde{m}(w). \end{aligned}$$

Using these bounds in (B.14) together with \(|1-E_{f'}(k_0)| \le 1 + E_{f'}(k_0)\), we obtain

$$\begin{aligned} H(w) \ge E_{f'}(k_0) + E_{v'}^{\mathrm{min}} - E_{v'}^{\mathrm{max}} - E_{f'}(k_0) \left( E_{v'}^{\mathrm{max}} - E_{v'}^{\mathrm{min}}\right) . \end{aligned}$$
(B.15)

As the right-hand side of (B.15) is nonnegative due to (5) in Assumption 3, the claim follows.

\(\square \)

B.6 Proof of Lemma 10

As both \(C^y\) and \(C^{o}\) are continuous, strictly positive functions on their compact domains \(\overline{\mathbb {W}}\) and \(\overline{\mathbb {W}}\times \varTheta \), respectively, we can choose \(\overline{\alpha }>0\) such that the ‘perturbed’ allocation \((K, C^y_{\alpha },C^{o}_{\alpha })\) defined as \(C^y_{\alpha }(w):=C^y(w)-\alpha \eta (w)\) and \(C^{o}_{\alpha }(w,\theta ):=C^{o}(w,\theta )+\alpha \eta (W(K(w), \theta ))\) is strictly positive and feasible for all \(\alpha \in [-\overline{\alpha }, \overline{\alpha }]\) and \(w \in \overline{\mathbb {W}}\). Thus, given \(w\in \overline{\mathbb {W}}\), the map \(h(\alpha ; w):=u(C^y_{\alpha }(w))+\mathbb {E}_\nu [v(C^{o}_{\alpha }(w, \cdot ))]\) is well defined and gives the expected lifetime utility of a generation born in state \(w\in \overline{\mathbb {W}}\) under the perturbation \(\alpha \in [-\overline{\alpha }, \overline{\alpha }]\).

We will determine \(\alpha ^{*}>0\) such that \(h(\alpha ^{*};w) - h(0; w)>0\) for all \(w\in \overline{\mathbb {W}}\), i.e., the perturbed allocation improves the utility of any generation. Let \(w \in \overline{\mathbb {W}}\) be fixed. As \(h(\cdot ; w)\) is twice continuously differentiable on the open interval \(]-\overline{\alpha }, \overline{\alpha }[\), we have

$$\begin{aligned} h(\alpha ;w)-h(0;w)=h'(0;w)\alpha +\frac{1}{2}h''(\xi ;w) \alpha ^2 \end{aligned}$$

for \(0 < \alpha \le \overline{\alpha }\) and some \(0<\xi <\alpha \) that may depend on both w and \(\alpha \). By hypothesis,

$$\begin{aligned} h'(0;w)=-u'\left( C^y(w)\right) \eta (w)+\mathbb {E}_\nu \left[ v'\left( C^{o}(w, \cdot )\right) \eta \left( W\left( K(w), \cdot \right) \right) \right] >0 \end{aligned}$$

for all w. Further, using the Lebesgue-dominated convergence theorem

$$\begin{aligned} h''(\xi ;w)=u''\left( C^y_{\xi }(w)\right) \left( \eta (w)\right) ^2+\mathbb {E}_\nu \left[ v''\left( {C^{o}_{\xi }(w, \cdot )}\right) \left( \eta \left( W\left( K(w),\cdot \right) \right) \right) ^2\right] <0. \end{aligned}$$

By Lebesgue's dominated convergence theorem again, both mappings \(w \mapsto h'(0; w)\) and \((\xi ; w) \mapsto h''(\xi ; w)\) are continuous on \(\overline{\mathbb {W}}\) and \([0,\overline{\alpha }] \times \overline{\mathbb {W}}\), respectively. Thus, by compactness of these domains, there exist \(\Delta _1>0\) and \(\Delta _2<0\) such that \(h(\alpha ;w)-h(0;w)\geqslant \Delta _1 \alpha +\Delta _2 \alpha ^2\) for all \(w\in \overline{\mathbb {W}}\) and \(\alpha \in [0,\overline{\alpha }]\). Choosing \(\alpha ^{*}>0\) sufficiently small therefore ensures that \(h(\alpha ^{*};w)>h(0;w)\) for all \(w\in \overline{\mathbb {W}}\). \(\square \)
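The perturbation argument above can also be illustrated numerically: the sketch below evaluates \(h(\alpha ; w)-h(0; w)\) on a grid for a small \(\alpha \). The utility functions and the allocation are placeholder assumptions chosen so that \(h'(0;w)>0\) holds (old-age marginal utility exceeds young-age marginal utility on average); they are not the paper's equilibrium objects.

```python
import numpy as np

# Placeholder primitives (illustrative assumptions, not the paper's objects).
theta = np.array([0.8, 1.0, 1.2])            # finite shock support
nu = np.array([0.3, 0.4, 0.3])               # i.i.d. shock probabilities
u = np.log                                   # young-age utility
v = np.log                                   # old-age utility
Cy = lambda w: 0.7 * w                       # stylized young-age consumption
Co = lambda w, th: 0.1 * th * w              # stylized old-age consumption (kept low)
K = lambda w: 0.3 * w                        # stylized savings function
W = lambda k, th: th * k ** 0.3              # stylized next-period wealth
eta = lambda w: 0.5                          # candidate transfer profile

def h(alpha, w):
    """Expected lifetime utility of the generation born in state w when the
    young give up alpha*eta(w) and the old receive alpha*eta(w') next period."""
    young = u(Cy(w) - alpha * eta(w))
    w_next = W(K(w), theta)
    old = np.sum(nu * v(Co(w, theta) + alpha * np.vectorize(eta)(w_next)))
    return young + old

grid = np.linspace(0.1, 2.0, 50)
alpha_star = 0.01
print(all(h(alpha_star, w) > h(0.0, w) for w in grid))  # every generation gains
```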
