
On maximin dynamic programming and the rate of discount


Abstract

This article establishes a dynamic programming argument for a maximin optimization problem in which the agent performs a minimization over a set of discount rates. Even though the maximin criterion results in a program that is neither convex nor stationary over time, it is proved that a careful reference to extended dynamic programming principles and to a maximin functional equation allows for circumventing these difficulties and recovering an optimal sequence that is time consistent.


Notes

  1. One may further notice that the dynamic programming argument of this article could equally be used with a non-connected set of discount rates, in which case a time-consistent decision rule would still be available.

  2. Wakai (2008, 2013) established an axiomatic basis for such utility-smoothing behavior.

  3. This is a mild assumption that is, e.g., satisfied in the case when, for any x, \(0 \in \Gamma (x)\) and \(\Gamma \) is expanding. See Topkis (1978).

  4. V is (strictly) supermodular if, for every \( (x, x^\prime )\) and \( (y, y^\prime )\) that belong to \(\hbox {Graph}(\Gamma )\), \( {V}(x,y)+{V}(x',y') (>) \ge {V}(x',y)+{V}(x,y') \) prevails whenever \((x',y') (>) \ge (x,y)\). When V is twice differentiable, (strict) supermodularity amounts to (strictly) positive cross derivatives; a worked example follows these notes.

  5. See Section 4.4 in Stokey et al. (1989) for the calculus details.

  6. Observe, however, that it would be more difficult to apply these methods were one to relax Assumption V6, for there would then exist multiple steady-state solutions for any \({\mathscr {R}}(\delta )\).
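As a worked illustration of the supermodularity requirement in Note 4, one may consider the hypothetical payoff \(V(x,y) = \ln (Ax^{\alpha } - y)\), with \(A > 0\) and \(0< \alpha < 1\), which is not the benchmark of this article. On the interior of \(\hbox {Graph}(\Gamma )\), where \(Ax^{\alpha } > y\),

$$\begin{aligned} {V}_{12}(x,y) = \frac{\partial }{\partial y} \left[ \frac{\alpha A x^{\alpha -1}}{Ax^{\alpha }-y} \right] = \frac{\alpha A x^{\alpha -1}}{(Ax^{\alpha }-y)^{2}} > 0, \end{aligned}$$

so that this V is strictly supermodular there.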

References

  • Alvarez, F., Stokey, N.L.: Dynamic programming with homogeneous functions. J. Econ. Theory 82, 167–189 (1998)


  • Amir, R.: Sensitivity analysis of multisector optimal economic dynamics. J. Math. Econ. 25, 123–141 (1996)


  • Bellman, R.E.: Dynamic Programming. Princeton University Press, Princeton (1957)


  • Bich, P., Drugeon, J.-P., Morhaim, L.: On temporal aggregators and dynamic programming. Econ. Theory 63, 1–31 (2017)


  • Chambers, C., Echenique, F.: On multiple discount rates. Econometrica 86, 1325–1346 (2018)


  • Drugeon, J.P., Ha-Huy, T.: A not so myopic axiomatization of discounting. Working paper (2018)

  • Durán, J.: On dynamic programming with unbounded returns. Econ. Theory 15, 339–352 (2000)


  • Geoffard, P.Y.: Discounting and optimizing: capital accumulation problems as variational minmax problems. J. Econ. Theory 69, 53–70 (1996)


  • Gilboa, I., Schmeidler, D.: Maxmin expected utility with non-unique prior. J. Math. Econ. 18, 141–153 (1989)


  • Jaśkiewicz, A., Matkowski, J., Nowak, A.S.: On variable discounting in dynamic programming: applications to resource extraction and other economic models. Ann. Oper. Res. 220, 263–278 (2014)


  • Kamihigashi, T.: Elementary results on solutions to the Bellman equation of dynamic programming: existence, uniqueness, and convergence. Econ. Theory 56, 251–273 (2014a)


  • Kamihigashi, T.: An order-theoretic approach to dynamic programming: an exposition. Econ. Theory Bull. 2, 13–21 (2014b)


  • Le Van, C., Morhaim, L.: Optimal growth models with bounded or unbounded returns: a unifying approach. J. Econ. Theory 105, 158–187 (2002)


  • Le Van, C., Vailakis, Y.: Recursive utility and optimal growth with bounded or unbounded returns. J. Econ. Theory 123, 187–209 (2005)


  • Rincón-Zapatero, J.P., Rodríguez-Palmero, C.: Existence and uniqueness of solutions to the Bellman equation in the unbounded case. Econometrica 71, 1519–1555 (2003)


  • Rincón-Zapatero, J.P., Rodríguez-Palmero, C.: Recursive utility with unbounded aggregators. Econ. Theory 33, 381–391 (2007)


  • Stokey, N.L., Lucas Jr., R., Prescott, E.: Recursive Methods in Economic Dynamics. Harvard University Press, Cambridge (1989)


  • Streufert, P.A.: Stationary recursive utility and dynamic programming under the assumption of biconvergence. Rev. Econ. Stud. 57, 79–97 (1990)


  • Streufert, P.A.: An abstract topological approach to dynamic programming. J. Math. Econ. 21, 59–88 (1992)


  • Topkis, D.: Minimizing a submodular function on a lattice. Oper. Res. 26, 305–321 (1978)


  • Wakai, K.: A model of utility smoothing. Econometrica 76, 137–153 (2008)


  • Wakai, K.: Intertemporal utility smoothing: theory and applications. Jpn. Econ. Rev. 64, 16–41 (2013)



Author information


Corresponding author

Correspondence to Jean-Pierre Drugeon.

Acknowledgements

This study results from discussions that took place in Fall 2015 in a theory working group organized by Cuong Le Van. The authors would like to thank him as well as the other participants for their useful insights on earlier versions. They are also grateful to Federico Echenique for bringing Wakai (2008, 2013) to their attention at RUD16, in the course of a conversation about Chambers and Echenique (2018), the first version of which was completed concomitantly with, but independently of, the current study by spring 2016. Preliminary versions were presented at the SAET 2016 and PET 2016 conferences in Rio de Janeiro, and the authors would like to thank the participants for their comments. A more recent version was presented at the KIER seminar of the University of Kyoto, at the RIEB seminar of the University of Kobe, at the University of Hiroshima, at the University of Evry and at the European University at Saint Petersburg; the authors would like to thank Tadashi Shigoka, Kazuo Nishimura, Tadashi Sekiguchi, Takashi Kamihigashi, Hien Pham and Makoto Yano for their insightful suggestions on these occasions. They are also strongly indebted to the two anonymous referees, who pointed out the need for an ascending correspondence and a strict supermodularity assumption and, last but not least, suggested a clarified and more compact form for the title of this study.

Appendices

Proof of Proposition 2.1

Proof

(i) This is a straightforward application of the Weierstrass Theorem.

(ii) Fix \(\chi _1, \chi _2 \in \Pi (x_0)\) such that \(\chi _1 \ne \chi _2\) and \(v(\delta , \chi _1), v(\delta , \chi _2) > -\infty \) for every \(\delta \in [\underline{\delta }, \overline{\delta }]\). Fix \(0< \lambda < 1\) and let \(\chi _\lambda = (1-\lambda ) \chi _1 + \lambda \chi _2\). From the convexity of \(\Gamma \), it follows that \(\chi _\lambda \in \Pi (x_0)\). Fix \(\delta _\lambda \in {\mathscr {D}}\) such that \({\hat{v}}(\chi _\lambda )= v(\delta _\lambda , \chi _\lambda )\).

From the strict concavity of V, it is obtained that

$$\begin{aligned} {\hat{v}}(\chi _\lambda )&= v(\delta _\lambda , (1-\lambda ) \chi _1 + \lambda \chi _2) \\&> (1-\lambda ) v(\delta _\lambda , \chi _1) + \lambda v(\delta _\lambda , \chi _2) \\&\ge (1-\lambda ) \min _{\delta \in {\mathscr {D}}} v(\delta , \chi _1) + \lambda \min _{\delta \in {\mathscr {D}}} v(\delta , \chi _2)\\&= (1-\lambda ) {\hat{v}}(\chi _1) + \lambda {\hat{v}}(\chi _2). \end{aligned}$$

(iii) From (1), take a sequence \(\chi ^n\) such that \(\lim _{n \rightarrow \infty } {\hat{v}}(\chi ^n) = {J}(x_0)\). Since, from assumption T2, \(\Pi (x_0)\) is compact in the product topology, it can be assumed, up to a subsequence, that \(\chi ^n \rightarrow \chi ^*\). It will first be proved that \({\hat{v}}(\chi ^*) > - \infty \).

Fix any \(\epsilon > 0\); for N large enough and any \(n \ge N\), \({\hat{v}}(\chi ^n) > {J}(x_0) - \epsilon \) prevails. Observe that, from assumption T2, there exists \({C} > 0\) such that \(0 \le x_t \le {C}\) for any \(\chi \in \Pi (x_0)\). Hence there exists T satisfying, for any \(\chi \in \Pi (x_0)\) and for any \(\delta \in {\mathscr {D}}\),

$$\begin{aligned} (1-\delta )\sum _{t=T+1}^\infty \delta ^t {V}(x_t, x_{t+1}) < \epsilon . \end{aligned}$$

For any \(n \ge N\), for any \(\delta \in {\mathscr {D}}\):

$$\begin{aligned} {J}(x_0) - \epsilon&\le {\hat{v}}(\chi ^n) \\&\le (1-\delta ) \sum _{t=0}^{{T}} \delta ^{t} {V}(x^n_t, x^n_{t+1}) + (1-\delta )\sum _{t=T+1}^\infty \delta ^{t} {V}(x^n_t, x^n_{t+1}), \end{aligned}$$

which implies

$$\begin{aligned} (1-\delta ) \sum _{t=0}^{{T}} \delta ^{t} {V}(x^n_t, x^n_{t+1})&\ge {J}(x_0) - \epsilon - (1-\delta )\sum _{t=T+1}^\infty \delta ^{t} {V}(x^n_t, x^n_{t+1}) \\&\ge {J}(x_0) - 2 \epsilon . \end{aligned}$$

Letting n converge to infinity, for any \(\delta \in {\mathscr {D}}\),

$$\begin{aligned} (1-\delta ) \sum _{t=0}^{{T}} \delta ^t {V}(x^*_t, x^*_{t+1}) \ge {J}(x_0) - 2 \epsilon . \end{aligned}$$

This inequality prevails for large enough values of T. Letting T tend to infinity, for any \(\delta \in {\mathscr {D}}\):

$$\begin{aligned} (1-\delta ) \sum _{t=0}^\infty \delta ^t {V}(x^*_t, x^*_{t+1}) \ge {J}(x_0) - 2 \epsilon , \end{aligned}$$

whence \({\hat{v}}(\chi ^*) \ge {J}(x_0) - 2 \epsilon .\)

The parameter \(\epsilon \) being chosen arbitrarily, \({\hat{v}}(\chi ^*) \ge {J}(x_0)\); the reverse inequality holding by the definition of J, \({\hat{v}}(\chi ^*) = {J}(x_0)\). From the strict concavity of \({\hat{v}}\), \(\chi ^*\) is the unique solution of \(\sup _{\chi \in \Pi (x_0)} {\hat{v}}(\chi ).\)\(\square \)

Proof of Lemma 3.1

Proof

For every \(x_0 > 0\), relying upon the compactness of \({\mathscr {D}}\) and of \(\Gamma (x_0)\), let T denote the operator from the space of continuous functions into itself defined by:

$$\begin{aligned} {T}g(x_0) = \max _{x_1 \in \Gamma (x_0)} \min _{\delta \in {\mathscr {D}}} \left[ (1-\delta ) {V}(x_0, x_1) + \delta g(x_1)\right] . \end{aligned}$$

Suppose that \(g(x) \le {\tilde{g}}(x)\) for any x; then \({T}g(x) \le {T} {\tilde{g}}(x)\) for any x. Indeed, for each \(x_1 \in \Gamma (x_0)\) and for every \(\delta \), it is obtained that:

$$\begin{aligned} (1-\delta ) {V}(x_0, x_1) + \delta g(x_1) \le (1-\delta ) {V}(x_0, x_1) + \delta {\tilde{g}}(x_1), \end{aligned}$$

which implies

$$\begin{aligned} \min _{\delta \in {\mathscr {D}}} \left[ (1-\delta ) {V}(x_0, x_1) + \delta g(x_1)\right] \le \min _{\delta \in {\mathscr {D}}} \left[ (1-\delta ) {V}(x_0, x_1) + \delta {\tilde{g}}(x_1)\right] . \end{aligned}$$

The inequality being true for every \(x_1\), \({T}g(x_0) \le {T}{\tilde{g}}(x_0)\) in turn prevails. By the same arguments, it can be proved that, for every constant \(a > 0\), \({T}(g+a)(x_0) \le {T} g(x_0) + {\overline{\delta }} a\).

Since T satisfies Blackwell's sufficient conditions, i.e., monotonicity and discounting, by Theorem 3.3 in Stokey et al. (1989), the operator T is a contraction mapping and has a unique fixed point.
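For intuition, the fixed point can be approximated numerically. The following is a minimal value-iteration sketch of the operator T on a grid, under hypothetical primitives that are not those of this article — a payoff \(V(x,y) = \ln (Ax^{\alpha } - y)\), a correspondence \(\Gamma (x) = [0, Ax^{\alpha }]\) and \({\mathscr {D}} = [0.90, 0.98]\). Since \((1-\delta ) {V}(x_0, x_1) + \delta g(x_1)\) is linear in \(\delta \), the inner minimum over \({\mathscr {D}}\) is attained at one of its endpoints, which the sketch exploits.

```python
import numpy as np

# Value-iteration sketch of the maximin Bellman operator
#   (Tg)(x) = max_{y in Gamma(x)} min_{delta in D} [(1 - delta) V(x, y) + delta g(y)].
# Hypothetical primitives (not the paper's): V(x, y) = ln(A x^alpha - y),
# Gamma(x) = [0, A x^alpha], D = [d_lo, d_hi].
A, alpha = 1.0, 0.5
d_lo, d_hi = 0.90, 0.98
grid = np.linspace(0.05, A ** (1.0 / (1.0 - alpha)), 200)   # grid on the state x

def V(x, y):
    # Clipping keeps the payoff bounded on the grid, in the spirit of V3.
    return np.log(np.maximum(A * x**alpha - y, 1e-12))

def T(g):
    """One application of T; the bracket being linear in delta, the
    minimum over D is attained at d_lo or d_hi."""
    Tg = np.empty_like(g)
    for i, x in enumerate(grid):
        feasible = grid <= A * x**alpha                     # y in Gamma(x)
        y, gy = grid[feasible], g[feasible]
        vals = V(x, y)
        inner = np.minimum((1 - d_lo) * vals + d_lo * gy,   # min over D
                           (1 - d_hi) * vals + d_hi * gy)
        Tg[i] = inner.max()                                 # max over Gamma(x)
    return Tg

g = np.zeros_like(grid)                                     # g^0 = 0
for _ in range(1000):                                       # g^{n+1} = T g^n
    g_new = T(g)
    if np.max(np.abs(g_new - g)) < 1e-8:                    # sup-norm contraction
        break
    g = g_new
W = g                                                       # fixed point on the grid
```

The iteration mirrors the sequence \({T}^n g^0\) used below for the concavity argument.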

In preparation for the concavity of this fixed-point function, it will now be proved that the concavity of g implies the concavity of Tg. Fix \(x_0\) and \(y_0\), and fix \((\delta _x, x_1)\) and \((\delta _y, y_1)\) such that

$$\begin{aligned} {T}g(x_0)&= (1-\delta _x) {V}(x_0, x_1) + \delta _x g(x_1)\\&= \min _{\delta \in {\mathscr {D}}}[(1-\delta ) {V}(x_0, x_1) + \delta g(x_1)],\\ {T}g(y_0)&= (1-\delta _y){V}(y_0, y_1) + \delta _y g(y_1) \\&= \min _{\delta \in {\mathscr {D}}}[ (1-\delta ){V}(y_0, y_1) + \delta g(y_1)]. \end{aligned}$$

For any \(0 \le \lambda \le 1\), define \(x_{0}^\lambda = (1-\lambda ) x_0 + \lambda y_0\) and \( x^\lambda _1= (1-\lambda )x_1 + \lambda y_1\). For any \(\delta \in {\mathscr {D}}\), it follows that:

$$\begin{aligned} (1-\delta _x) {V}(x_0, x_1) + \delta _x g(x_1)&\le (1-\delta ) {V}(x_0, x_1) + \delta g(x_1), \\ (1-\delta _y) {V}(y_0, y_1) + \delta _y g(y_1)&\le (1-\delta ) {V}(y_0, y_1) + \delta g(y_1). \end{aligned}$$

Hence, for any \({\underline{\delta }} \le \delta \le {\overline{\delta }}\):

$$\begin{aligned} (1-\lambda ) {T}g(x_0) + \lambda {T}g(y_0)&= (1-\lambda ) \left[ (1-\delta _x) {V}(x_0, x_1)+ \delta _x g(x_1) \right] \\&\quad + \lambda \left[ (1-\delta _y) {V}(y_0, y_1)+ \delta _y g(y_1) \right] \\&\le (1-\lambda ) \left[ (1-\delta ) {V}(x_0, x_1)+ \delta g(x_1) \right] \\&\quad + \lambda \left[ (1-\delta ) {V}(y_0, y_1)+ \delta g(y_1) \right] \\&\le (1-\delta ) {V}(x_0^\lambda , x_1^\lambda ) + \delta g(x_1^\lambda ). \end{aligned}$$

The inequality being true for every \(\delta \), it is obtained that:

$$\begin{aligned} (1-\lambda ) {T}g(x_0) + \lambda {T}g(y_0)&\le \min _{\delta \in {\mathscr {D}}} \left[ (1-\delta ) {V}\left( x_0^\lambda , x_1^\lambda \right) + \delta g\left( x_1^\lambda \right) \right] \\&\le {T}g(x_0^\lambda ). \end{aligned}$$

Take \(g^0(x_0) = 0\) for every \(x_0\) and define \(g^{n+1}(x_0) = Tg^n(x_0)\) for any \(n \ge 0\). By induction, \(g^n(x_0)\) is a concave function for any n. Taking limits, with \({W}(x_0) = \lim _{n \rightarrow \infty } g^n(x_0) = \lim _{n \rightarrow \infty } {T}^n g^0(x_0)\), the concavity property carries over to W.

The strict concavity of W is now going to be established. Consider \(x_0 \ne y_0\). Define \((\delta _x, x_1)\), \((\delta _y, y_1)\), \(x^\lambda _0\) and \(x^\lambda _1\) as in the first part of this proof. Relying on the same arguments and calculus, as well as on the strict concavity of V, for any \(\delta \in {\mathscr {D}}\), it is obtained that:

$$\begin{aligned} (1-\lambda ) {W}(x_0) + \lambda {W}(y_0)&\le (1-\delta ) \left[ (1-\lambda ) {V}(x_0, x_1) + \lambda {V}(y_0, y_1) \right] \\&\quad + \delta \left[ (1-\lambda ){W}(x_1)+\lambda {W}(y_1) \right] \\&< (1-\delta ) {V}(x^\lambda _0,x^\lambda _1) + \delta {W} \left( x^\lambda _1\right) . \end{aligned}$$

From the compactness of the set of discount factors \({\mathscr {D}}\), it follows that:

$$\begin{aligned} (1-\lambda ) {W}(x_0) + \lambda {W}(y_0)&< \min _{\delta \in {\mathscr {D}}} \left[ (1-\delta ) {V}(x^\lambda _0, x^\lambda _1) + \delta {W}(x^\lambda _1) \right] \\&\le {W}(x^\lambda _0). \end{aligned}$$

\(\square \)

Proof of Proposition 3.1

Proof

Define \(w(x_0, x_1)= \min _{\delta \in {\mathscr {D}}} \left[ (1-\delta ) {V}(x_0, x_1) + \delta {W}(x_1) \right] \).

(i) Observe that the function \(w(x_0, x_1)\) is strictly concave in \(x_1\). Indeed, for any \(x_1 \ne y_1\) belonging to \(\Gamma (x_0)\) and \(0< \lambda < 1\), define \(x^\lambda _1 = (1-\lambda )x_1 + \lambda y_1\). For any \(\delta \in {\mathscr {D}}\):

$$\begin{aligned} (1-\lambda ) w(x_0, x_1) + \lambda w(x_0, y_1)&\le (1-\lambda ) \left[ (1-\delta ) {V}(x_0, x_1) + \delta {W}(x_1) \right] \\&\quad + \lambda \left[ (1-\delta ) {V}(x_0, y_1) + \delta {W}(y_1) \right] \\&< (1-\delta ) {V}(x_0, x^\lambda _1) + \delta {W}(x^\lambda _1) . \end{aligned}$$

The set \({\mathscr {D}}\) of discount factors being compact, it is obtained that:

$$\begin{aligned} (1-\lambda ) w(x_0, x_1) + \lambda w(x_0, y_1)&< \min _{\delta \in {\mathscr {D}}} \left[ (1-\delta ) {V}\left( x_0, x^\lambda _1\right) + \delta {W}\left( x^\lambda _1\right) \right] \\&= w(x_0, x^\lambda _1) . \end{aligned}$$

The strict concavity of \(w(x_0, \cdot )\) implies that this function admits a unique maximizer \(x_1\).

(ii) The monotonicity of the policy function \(\varphi \) is now going to be established. For that purpose, it must first be checked that there cannot exist a couple \((x_0, z_0)\) such that \(x_0 < z_0\) and \(\varphi (x_0) > \varphi (z_0)\). Suppose the opposite and let \(y_0 = {\hbox {argmin}}_{y \in [x_0, z_0]} \varphi (y)\).

  1. (a)

    Consider the first case with \(y_0 < \varphi (y_0)\). This implies \({W}(y_0) <{W}(\varphi (y_0))\). Take

    $$\begin{aligned} \delta _{y_{0}} \in \mathop {\mathrm{argmin}}\limits _{\delta \in {\mathscr {D}}} [(1-\delta ) {V}(y_0, \varphi (y_0)) + \delta {W}(\varphi (y_0))]. \end{aligned}$$

    Since the left-hand side below equals \({W}(y_0)\), it follows that:

    $$\begin{aligned} (1-\delta _{y_0}) {V}(y_0, \varphi (y_0)) + \delta _{y_0} {W}(\varphi (y_0)) < {W}(\varphi (y_0)) , \end{aligned}$$

    which implies \({V}(y_0, \varphi (y_0)) < {W}(\varphi (y_0))\). By continuity, there further exists \(\epsilon > 0\) such that

    $$\begin{aligned} V(y, \varphi (y))< {W}(\varphi (y)) \hbox { for any } y_0 - \epsilon< y < y_0 + \epsilon . \end{aligned}$$

    For every y belonging to this interval, the following is satisfied:

    $$\begin{aligned} \mathop {\mathrm{argmin}}\limits _{\delta \in {\mathscr {D}}} [(1-\delta ) {V}(y, \varphi (y)) + \delta {W}(\varphi (y))] = \{{\underline{\delta }}\}. \end{aligned}$$

    Recall that, in this case, for any \(y_0 - \epsilon< y < y_0 + \epsilon \)

    $$\begin{aligned} \varphi (y) = \mathop {\mathrm{argmax}}\limits _{y^\prime \in \Gamma (y)} [(1-{\underline{\delta }}) {V}(y, y^\prime ) + {\underline{\delta }} {W}(y^\prime )] . \end{aligned}$$

    From the supermodularity of V, using Topkis's theorem as quoted in Amir (1996), \(\varphi \) is strictly increasing (ascending) on \(]y_0 - \epsilon , y_0 + \epsilon [\); hence, for \(y_0-\epsilon< y < y_0\), the inequality \(\varphi (y) < \varphi (y_0)\) prevails, a contradiction with the choice of \(y_0\).

  2. (b)

    The case where \(y_0 > \varphi (y_0)\) is handled by applying the same arguments, and a contradiction is similarly obtained.

  3. (c)

    Consider the third case, where \(x_0 < y_0\) and \(\varphi (x_0) > \varphi (y_0) = y_0\) are simultaneously satisfied. Since, from Assumption V4, V is increasing in its first argument and decreasing in its second one,

    $$\begin{aligned} {V}(x_0, \varphi (x_0))&< {V}(y_0, y_0) \\&= {W}(y_0)\\&< {W}(\varphi (x_0)). \end{aligned}$$

    Hence

    $$\begin{aligned} \mathop {\mathrm{argmin}}\limits _{\delta \in {\mathscr {D}}} \left[ (1-\delta ) {V}(x_0, \varphi (x_0)) + \delta {W}(\varphi (x_0))\right] = \{{\underline{\delta }} \}. \end{aligned}$$

    Since \(\varphi (x_0) > x_0\), \(\varphi (x) > x\) prevails in a neighborhood of \(x_0\). Denote by \({I}_{x_0}\) the set of \(z \in ]x_0, y_0[\) such that, for any \(x_0< x < z\):

    $$\begin{aligned} {V}(x, \varphi (x))< {W}(\varphi (x)) \hbox { which is equivalent to } x < \varphi (x). \end{aligned}$$

    It is further obvious that for any \(x_0< x < z\) with \(z \in {I}_{x_0}\), one has:

    $$\begin{aligned} \mathop {\mathrm{argmin}}\limits _{\delta \in {\mathscr {D}}} [(1-\delta ) {V}(x, \varphi (x)) + \delta {W}(\varphi (x))] = \{{\underline{\delta }} \}, \end{aligned}$$

    which implies for any \(x_0< x < z\):

    $$\begin{aligned} \varphi (x) = \mathop {\mathrm{argmax}}\limits _{x^\prime \in \Gamma (x)} [(1-{\underline{\delta }}) {V}(x, x^\prime ) + {\underline{\delta }} {W}(x^\prime ) ]. \end{aligned}$$

    Hence, on the interval \({I}_{x_0}\), from the supermodularity of V, the function \(\varphi \) is strictly increasing. Take now \({\overline{z}} = \sup ({I}_{x_0})\). If \({\overline{z}} < \varphi ({\overline{z}})\), then, by the continuity of V and W, \(\varphi (x) > x\) prevails for any \(x_0< x < {\overline{z}} + \epsilon \) and a sufficiently small \(\epsilon \): a contradiction with the definition of \({\overline{z}}\). Hence \(\varphi ({\overline{z}}) \le {\overline{z}}\) and, from the continuity of \(\varphi \), \(\varphi ({\overline{z}}) = {\overline{z}}\). From the increasing property of \(\varphi \) on \({I}_{x_0}\), it is obtained that:

    $$\begin{aligned} \varphi (x_0)&< \varphi ({\overline{z}}) \\&= {\overline{z}}\\&\le y_0 \\&= \varphi (y_0) \\&< \varphi (x_0), \end{aligned}$$

    a contradiction.

    It has been proved that, for any \(x_0 < z_0\), \(\varphi (x_0) \le \varphi (z_0)\). Suppose now that there exists a couple \((x_0, z_0)\) such that \(x_0 < z_0\) and \(\varphi (x_0) = \varphi (z_0)\). This implies, for any \(x_0 \le y \le z_0\), \(\varphi (y) = \varphi (x_0) = \varphi (z_0)\). There hence exists \(y_0 \in [x_0, z_0]\) such that \(\varphi (y_0) \ne y_0\). Using the same arguments as in the first part of the proof, \(\varphi \) must be strictly increasing in a neighborhood of \(y_0\), which contradicts the result that \(\varphi \) is constant on \([x_0, z_0]\).

The monotonicity of \(\chi ^* = \{x^*_t\}_{t=0}^\infty \) is a direct consequence of the increasingness of \(\varphi \). \(\square \)
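For intuition, and building upon the value-iteration sketch given after Lemma 3.1 (with the same hypothetical primitives, which are not this article's), the policy function can be read off the grid and its monotonicity inspected; the snippet below is meant to be run after that sketch, as it reuses grid, A, alpha, d_lo, d_hi and W.

```python
import numpy as np

# Recover the policy on the grid and check its monotonicity; reuses the
# objects grid, A, alpha, d_lo, d_hi and W from the earlier sketch.
def policy(g):
    phi = np.empty_like(grid)
    for i, x in enumerate(grid):
        feasible = grid <= A * x**alpha
        y, gy = grid[feasible], g[feasible]
        vals = np.log(np.maximum(A * x**alpha - y, 1e-12))
        inner = np.minimum((1 - d_lo) * vals + d_lo * gy,
                           (1 - d_hi) * vals + d_hi * gy)
        phi[i] = y[inner.argmax()]                 # maximin-optimal choice
    return phi

phi = policy(W)
print(np.all(np.diff(phi) >= 0))                   # expected: True
```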

Proof of Proposition 3.2

Proof

First consider the case of a sequence \(\chi ^* = \{x^*_t\}_{t=0}^\infty = \{\varphi ^t(x_0)\}_{t=0}^\infty \) that is strictly increasing. The objective is to prove that \(\chi ^*\) is a solution of the fixed discount Ramsey problem \({\mathscr {R}}({\underline{\delta }})\).

Selecting any \(\delta ^* \in {\hbox {argmin}}_{\delta \in {\mathscr {D}}} [(1-\delta ) {V}(x_0, x^*_1) + \delta {W}(x^*_1)]\), it follows that \({W}(x_0) < {W}(x^*_1)\), whence

$$\begin{aligned} (1-\delta ^*) {V}(x_0, x^*_1) + \delta ^* {W}(x^*_1) < {W}(x^*_1) . \end{aligned}$$

This implies that \({V}(x_0, x^*_1) < {W}(x^*_1)\); the map \(\delta \mapsto (1-\delta ) {V}(x_0, x^*_1) + \delta {W}(x^*_1)\) being then strictly increasing, its minimum is reached at \({\underline{\delta }}\). Hence

$$\begin{aligned} \mathop {\mathrm{argmin}}\limits _{\delta \in {\mathscr {D}}} [(1-\delta ) {V}(x_0, x^*_1) + \delta {W}(x^*_1)] = \{{\underline{\delta }}\}. \end{aligned}$$

Since \(x_0 < x^*_1\), \(\varphi (x_0) < \varphi (x^*_1)\), i.e., \(x^*_1 < x^*_2\). By induction, it is obtained that \(x^*_t < x^*_{t+1}\) and

$$\begin{aligned} \mathop {\mathrm{argmin}}\limits _{\delta \in {\mathscr {D}}} \left[ (1-\delta ) {V}(x^*_t, x^*_{t+1}) + \delta {W}(x^*_{t+1})\right] = \{{\underline{\delta }} \} \hbox { for any } t \ge 0. \end{aligned}$$

This implies, for any finite T

$$\begin{aligned} {W}(x_0)&= (1-{\underline{\delta }}) {V}(x_0, x^*_1) + {\underline{\delta }} {W}(x_1^*) \\&= (1- {\underline{\delta }}) {V}(x_0, x^*_1) + (1-{\underline{\delta }}) {\underline{\delta }} {V}(x^*_1, x^*_2) + {\underline{\delta }}^2 {W}(x^*_{2}) \\&\;\;\vdots \\&= (1- {\underline{\delta }}) \sum _{t=0}^{{T}} {\underline{\delta }}^t {V}(x^*_t, x^*_{t+1}) + {\underline{\delta }}^{{T}+1} {W}(x^*_{{T}+1}). \end{aligned}$$

Observe that, from assumptions T3 and T4, there exists \({C} > 0\) such that \(x_0 \le x^*_t < {C}\) for every t. This implies

$$\begin{aligned} \lim _{{T} \rightarrow \infty } {\underline{\delta }}^{{T}} {W}(x^*_{{T}}) = 0. \end{aligned}$$

Hence

$$\begin{aligned} {W}(x_0)&= (1- {\underline{\delta }}) \sum _{t=0}^{{T}} {\underline{\delta }}^t {V}(x^*_t, x^*_{t+1}) + {\underline{\delta }}^{{T}+1} {W}(x^*_{{T}+1}) \\&= (1-{\underline{\delta }}) \sum _{t=0}^\infty {\underline{\delta }}^t {V}(x^*_t, x^*_{t+1}), \end{aligned}$$

by letting T tend to infinity.

The aim is now to prove that \(\chi ^*\) is a solution of the fixed discount problem \({\mathscr {R}}({\underline{\delta }})\). Suppose the opposite and denote by \({\hat{\chi }} = \{{\hat{x}}_t \}_{t=0}^\infty \) the unique solution of \({\mathscr {R}}({\underline{\delta }})\). The sequence \(\{{W}({\hat{x}}_t)\}\) being uniformly bounded, the date T can be selected large enough to satisfy:

$$\begin{aligned} (1-{\underline{\delta }}) \sum _{t=0}^{{T}} {\underline{\delta }}^t {V}({\hat{x}}_t, {\hat{x}}_{t+1}) + {\underline{\delta }}^{{T}+1} {W}({\hat{x}}_{{T}+1}) > (1-{\underline{\delta }}) \sum _{t=0}^\infty {\underline{\delta }}^t {V}(x^*_t, x^*_{t+1}). \end{aligned}$$

For \(0 \le \lambda \le 1\), define \(x^\lambda _t = (1-\lambda ) {\hat{x}}_t + \lambda x^*_t\). Recall that

$$\begin{aligned} {V}(x^*_t, x^*_{t+1}) < {W}(x^*_{t+1}) \hbox { for every } t \ge 0. \end{aligned}$$

Take \(\lambda \) sufficiently close to 1 so that, for every \(0 \le t \le {T}\), the inequality \({V}(x^\lambda _t, x^\lambda _{t+1}) < {W}(x^\lambda _{t+1})\) is satisfied: for any \(0 \le t \le {T}\),

$$\begin{aligned} \mathop {\mathrm{argmin}}\limits _{\delta \in {\mathscr {D}}} \left[ (1-\delta ) {V}(x^\lambda _t, x^\lambda _{t+1}) + \delta {W}(x^\lambda _{t+1})\right] = \{{\underline{\delta }} \}. \end{aligned}$$

Hence

$$\begin{aligned} {W}(x_0)&\ge (1-{\underline{\delta }}) {V}(x_0, x^\lambda _1) + {\underline{\delta }} {W}(x^\lambda _1) \\&\ge (1-{\underline{\delta }}) {V}(x_0, x^\lambda _1) + (1-{\underline{\delta }}) {\underline{\delta }} {V}(x^\lambda _1, x^\lambda _2) + {\underline{\delta }}^2 {W}(x^\lambda _2) \\&\dots \\&\ge (1-{\underline{\delta }}) \sum _{t=0}^{{T}} {\underline{\delta }}^t {V}(x^\lambda _t, x^\lambda _{t+1}) + {\underline{\delta }}^{{T}+1} {W}(x^\lambda _{{T}+1} ). \end{aligned}$$

Observe that

$$\begin{aligned}&\sum _{t=0}^{{T}} {\underline{\delta }}^t {V}(x^\lambda _t, x^\lambda _{t+1}) \ge (1-\lambda ) \sum _{t=0}^{{T}} {\underline{\delta }}^t {V}({\hat{x}}_t, {\hat{x}}_{t+1}) + \lambda \sum _{t=0}^{{T}} {\underline{\delta }}^t {V}(x^*_t, x^*_{t+1}), \\&{W}(x^\lambda _{{T}+1}) \ge (1-\lambda ) {W}({\hat{x}}_{{T}+1}) + \lambda {W}(x^*_{{T}+1}) . \end{aligned}$$

Hence, from the choice of the date T:

$$\begin{aligned} W(x_0)&\ge (1-{\underline{\delta }}) \left[ (1-\lambda ) \sum _{t=0}^{{T}} {\underline{\delta }}^t {V}({\hat{x}}_t, {\hat{x}}_{t+1}) + \lambda \sum _{t=0}^{{T}} {\underline{\delta }}^t {V}(x^*_t, x^*_{t+1}) \right] \\&\quad + {\underline{\delta }}^{{T}+1} \left[ (1-\lambda ) {W}({\hat{x}}_{{T}+1}) +\lambda {W}(x^*_{{T}+1}) \right] \\&=(1-\lambda ) \left[ (1-{\underline{\delta }}) \sum _{t=0}^{{T}} {\underline{\delta }}^t{V}({\hat{x}}_t, {\hat{x}}_{t+1}) + {\underline{\delta }}^{{T}+1} {W}({\hat{x}}_{{T}+1}) \right] \\&\quad +\lambda \left[ (1-{\underline{\delta }}) \sum _{t=0}^{{T}} {\underline{\delta }}^t {V}( x^*_t, x^*_{t+1}) + {\underline{\delta }}^{{T}+1} {W}( x^*_{{T}+1}) \right] \\&> (1-{\underline{\delta }}) \sum _{t=0}^\infty {\underline{\delta }}^t {V}(x^*_t, x^*_{t+1}) \\&= {W}(x_0), \end{aligned}$$

a contradiction.

Observe that, since \(\chi ^*\) solves \({\mathscr {R}}({\underline{\delta }})\), the monotonic sequence \(\{x^*_t\}_{t=0}^\infty \) converges to a steady state of \({\mathscr {R}}({\underline{\delta }})\). Supposing that \(x_0 > x^*_1\), by using the same arguments, it can be proved that \(\chi ^*\) is strictly decreasing, is a solution of the fixed discount problem \({\mathscr {R}}({\overline{\delta }})\) and converges to a steady state of that problem.

Consider now the case \(\varphi (x_0) = x_0\) and hence \(x^*_t = x_0\) for any t. Two cases are to be considered:

  1. (i)

    For any \(\epsilon > 0\), there exists \(x_0 - \epsilon< x < x_0 + \epsilon \) satisfying \(x \ne \varphi (x)\).

  2. (ii)

    There exists \(\epsilon > 0\) such that for any \(x_0 - \epsilon< x < x_0 + \epsilon \), \(x = \varphi (x)\).

First consider case (i). Take a sequence \(x^n_0\) that converges to \(x_0\) and such that \(\varphi (x^n_0) \ne x^n_0\) for any n. Since each sequence \(\{\varphi ^t(x^n_0) \}_{t=0}^\infty \) is then a solution either of \({\mathscr {R}}({\underline{\delta }})\) or of \({\mathscr {R}}({\overline{\delta }})\), it can be assumed, without loss of generality and up to a subsequence, that every \(\{\varphi ^t(x^n_0) \}_{t=0}^\infty \) is a solution of the fixed discount Ramsey problem \({\mathscr {R}}(\delta )\) for a common \(\delta \in \{{\underline{\delta }}, {\overline{\delta }} \}\).

But \(\lim _{n \rightarrow \infty } \varphi (x^n_0) = \varphi (x_0) = x_0\), which implies \(\lim _{n \rightarrow \infty } \varphi ^2(x^n_0) = x_0\). From the Euler equation

$$\begin{aligned} {V}_2\left( x^n_0, \varphi (x^n_0)\right) + \delta {V}_1\left( \varphi (x^n_0), \varphi ^2(x^n_0)\right) =0 , \end{aligned}$$

it is derived, letting \(n \rightarrow \infty \) and using the continuity of \({V}_1\) and \({V}_2\), that:

$$\begin{aligned} {V}_2(x_0, x_0) + \delta {V}_1(x_0, x_0) = 0. \end{aligned}$$

Hence the sequence defined by \(x_t = x_0\) for any t is a solution of \({\mathscr {R}}(\delta )\). Consider now case (ii) and suppose that, for \(x_0 - \epsilon< x < x_0 + \epsilon \), \(\varphi (x) = x\). This implies that \({W}(x) = {V}(x, x)\) for \(x \in ]x_0 - \epsilon , x_0 + \epsilon [\), which in turn implies

$$\begin{aligned} {W}'(x_0) = {V}_1(x_0, x_0) + {V}_2(x_0, x_0). \end{aligned}$$

Observe that, the function \((1-\delta ) {V}(x_0, x_1) + \delta {W}(x_1)\) being linear in \(\delta \) and concave in \(x_1\), the following interchange of the two operations also prevails:

$$\begin{aligned}&\max _{x_1 \in \Gamma (x_0)} \min _{\delta \in {\mathscr {D}}} \left[ (1-\delta ) {V}(x_0, x_1) + \delta {W}(x_1) \right] \\&\quad = \min _{\delta \in {\mathscr {D}}} \max _{x_1 \in \Gamma (x_0)} \left[ (1-\delta ) {V}(x_0, x_1) + \delta {W}(x_1) \right] . \end{aligned}$$
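This interchange can be checked numerically on a toy instance. The sketch below uses hypothetical concave stand-ins V0 and W0 in place of this article's V and W; the two values coincide up to grid error, the bracket being linear in \(\delta \) and concave in \(x_1\) over compact convex sets.

```python
import numpy as np

# Toy check that max-min equals min-max for h(d, y) = (1 - d) V0(y) + d W0(y),
# linear in d and concave in y; V0 and W0 are hypothetical stand-ins.
ds = np.linspace(0.90, 0.98, 81)      # grid on the set D of discount factors
ys = np.linspace(0.0, 1.0, 401)       # grid on the choice variable x_1
V0 = np.log(1.5 - ys)                 # concave stand-in for V(x_0, .)
W0 = np.sqrt(ys + 0.01)               # concave stand-in for W(.)
H = (1 - ds[:, None]) * V0[None, :] + ds[:, None] * W0[None, :]
maxmin = H.min(axis=0).max()          # max over x_1 of min over delta
minmax = H.max(axis=1).min()          # min over delta of max over x_1
print(maxmin, minmax)                 # agree up to grid error
```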

From \(\varphi (x_0) = x_0\), there thus exists \(\delta \in {\mathscr {D}}\) such that

$$\begin{aligned} (1-\delta ) {V}_2(x_0, x_0) + \delta {W}'(x_0) = 0. \end{aligned}$$

This implies

$$\begin{aligned} (1-\delta ) {V}_2(x_0, x_0) + \delta {V}_1(x_0, x_0) + \delta {V}_2(x_0, x_0) = 0, \end{aligned}$$

which is equivalent to

$$\begin{aligned} {V}_2(x_0, x_0) + \delta {V}_1(x_0, x_0) = 0. \end{aligned}$$

The sequence defined by \(x_t = x_0\) for any t is thus a solution of \({\mathscr {R}}(\delta )\). \(\square \)
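As a worked illustration of the steady state condition \({V}_2(x_0, x_0) + \delta {V}_1(x_0, x_0) = 0\), one may again consider the hypothetical payoff \(V(x,y) = \ln (Ax^{\alpha } - y)\), with \(A > 0\) and \(0< \alpha < 1\), which is not the benchmark of this article. Then

$$\begin{aligned} {V}_1(x,y) = \frac{\alpha A x^{\alpha -1}}{Ax^{\alpha }-y}, \qquad {V}_2(x,y) = \frac{-1}{Ax^{\alpha }-y}, \end{aligned}$$

and the steady state condition at \((x_0, x_0)\) reduces to \(\delta \alpha A x_0^{\alpha -1} = 1\), whence \(x^{\delta } = (\delta \alpha A)^{1/(1-\alpha )}\), a value that is increasing in \(\delta \) and satisfies \(x^{{\underline{\delta }}} \le x^{\delta } \le x^{{\overline{\delta }}}\) for \(\delta \in {\mathscr {D}}\).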

Proof of Theorem 3.1

Proof

(i) This follows as an immediate corollary of Proposition 3.2.

(ii) For each \(x_0 > 0\), define \(\chi ^* = \{x^*_t\}_{t=0}^\infty = \{\varphi ^t(x_0) \}_{t=0}^\infty \). Using Proposition 3.2, there exists \(\delta ^* \in {\mathscr {D}}\) such that \(\chi ^*\) is a solution of \({\mathscr {R}}(\delta ^*)\) and

$$\begin{aligned} {W}(x_0) = (1-\delta ^*) \sum _{t=0}^\infty (\delta ^*)^t {V}(x^*_t, x^*_{t+1}) . \end{aligned}$$

For any \(\delta \in {\mathscr {D}}\), it is obtained that:

$$\begin{aligned} {W}(x_0)&\le (1-\delta ) {V}(x_0, x^*_1) + \delta {W}(x^*_1) \\&\le (1-\delta ) {V}(x_0, x^*_1) + \delta \left( (1-\delta ) {V}(x^*_1, x^*_2) + \delta {W}(x^*_2) \right) \\&\ldots \\&\le (1-\delta ) \sum _{t=0}^\infty \delta ^t {V}(x^*_t, x^*_{t+1}). \end{aligned}$$

Hence

$$\begin{aligned} {W}(x_0)&\le \min _{\delta \in {\mathscr {D}}} \left[ (1-\delta ) \sum _{t=0}^\infty \delta ^t {V}(x^*_t, x^*_{t+1}) \right] \\&\le {J}(x_0) . \end{aligned}$$

Suppose now that \(({\tilde{\delta }}, {\tilde{\chi }})\) satisfies

$$\begin{aligned} v({\tilde{\delta }}, {\tilde{\chi }}) = \max _{\chi \in \Pi (x_0)} \min _{\delta \in {\mathscr {D}}} v(\delta , \chi ). \end{aligned}$$

The sequence \(\chi ^* = \{x^*_t\}_{t=0}^\infty \) being an optimal solution of the problem \({\mathscr {R}}(\delta ^*)\), it is obtained that:

$$\begin{aligned} J(x_0)&= v({\tilde{\delta }}, {\tilde{\chi }}) \\&\le v(\delta ^*, {\tilde{\chi }}) \\&\le v(\delta ^*, \chi ^*) \\&= {W}(x_0). \end{aligned}$$

Hence:

$$\begin{aligned} {J}(x_0)&= \max _{\chi \in \Pi (x_0)} \min _{\delta \in {\mathscr {D}}} v(\delta , \chi )\\&= \min _{\delta \in {\mathscr {D}}} \max _{\chi \in \Pi (x_0)}v(\delta ,\chi )\\ {}&= {W}(x_0), \end{aligned}$$

which establishes the statement. \(\square \)

Payoffs unbounded from below

Assumption V3 is relaxed and the payoff becomes unbounded from below. As a result of this unboundedness, contraction mapping techniques can no longer be used to prove the existence of a solution to the Bellman-like functional equation. Thanks to the supplementary assumption V6, however, the existence argument can be established through a simple guess-and-verify method (Note 6).

Under assumptions T1–T5 and V1–V2, V4–V6, for every \(\delta \) there exists a unique long-run steady state \(x^\delta \) for the fixed discount Ramsey problem \({\mathscr {R}}(\delta )\), with \(x^{{\underline{\delta }}} \le x^\delta \le x^{{\overline{\delta }}}\) for every \(\delta \in {\mathscr {D}}\). Then define \({J}_\delta (x_0)\) as the value function of \({\mathscr {R}}(\delta )\):

$$\begin{aligned} {J}_\delta (x_0) = \sup _{\chi \in \Pi (x_0)} (1-\delta ) \sum _{t=0}^\infty \delta ^t {V}(x_t, x_{t+1}) . \end{aligned}$$

Consider now the function \({W}(\cdot )\), defined case by case as follows (a brief numerical sketch is provided after the list):

  1. (i)

    For \(0 \le x_0 \le x^{{\underline{\delta }}}\), \({W}(x_0) = {J}_{{\underline{\delta }}}(x_0)\),

  2. (ii)

For \(x^{{\underline{\delta }}} \le x_0 \le x^{{\overline{\delta }}}\), take \(\delta \) satisfying \({V}_2(x_0, x_0) + \delta {V}_1(x_0, x_0) = 0\) and set \({W}(x_0) = {J}_\delta (x_0) = {V}(x_0, x_0)\).

  3. (iii)

For \(x_0 \ge x^{{\overline{\delta }}}\), \({W}(x_0) = {J}_{{\overline{\delta }}}(x_0)\).
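What follows is a minimal numerical sketch of this piecewise construction, reusing the hypothetical payoff \(V(x,y) = \ln (Ax^{\alpha } - y)\) of the earlier sketches, so that region (ii) is available in closed form; regions (i) and (iii) would call for a fixed discount value iteration of the kind displayed after Lemma 3.1.

```python
import numpy as np

# Guess-and-verify construction of W for the hypothetical payoff
# V(x, y) = ln(A x^alpha - y), for which V_1 = alpha A x^(alpha-1) / (A x^alpha - y)
# and V_2 = -1 / (A x^alpha - y).
A, alpha = 1.0, 0.5
d_lo, d_hi = 0.90, 0.98                # endpoints of D

def steady_state(d):
    # V_2(x, x) + d V_1(x, x) = 0 reduces to d alpha A x^(alpha - 1) = 1.
    return (d * alpha * A) ** (1.0 / (1.0 - alpha))

x_lo, x_hi = steady_state(d_lo), steady_state(d_hi)   # monotone in d

def W_region_ii(x):
    # Region (ii): between the extreme steady states, the delta solving
    # V_2 + delta V_1 = 0 lies in D and W(x) = V(x, x).
    assert x_lo <= x <= x_hi
    return np.log(A * x**alpha - x)

print(x_lo, x_hi, W_region_ii(0.5 * (x_lo + x_hi)))
```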

Making use of the monotonicity of \(x^\delta \) with respect to \(\delta \in {\mathscr {D}}\), the following preparatory lemma can first be established.

Lemma F.1

Assume T1–T5 and V1–V2, V4–V6. The function W is strictly concave.

Proof

It is well known from the literature on fixed discount Ramsey problems that W is strictly concave on \(]0, x^{{\underline{\delta }}}]\), \([x^{{\underline{\delta }}}, x^{{\overline{\delta }}}]\) and \([x^{{\overline{\delta }}}, +\infty [\). It remains to prove that W is differentiable at \(x^{{\underline{\delta }}}\) and \(x^{{\overline{\delta }}}\). The left derivative of W at \(x^{{\underline{\delta }}}\) is first equal to the derivative of \({J}_{{\underline{\delta }}}\):

$$\begin{aligned} {W}_{-}^\prime (x^{{\underline{\delta }}}) = {J}_{{\underline{\delta }}}^\prime (x^{{\underline{\delta }}}). \end{aligned}$$

It is also well known that, making use of the steady state condition \({V}_2(x^{{\underline{\delta }}}, x^{{\underline{\delta }}}) + {\underline{\delta }} {V}_1(x^{{\underline{\delta }}}, x^{{\underline{\delta }}}) = 0\),

$$\begin{aligned} {J}_{{\underline{\delta }}}^\prime (x^{{\underline{\delta }}})&= (1-{\underline{\delta }}) {V}_1(x^{{\underline{\delta }}}, x^{{\underline{\delta }}}) \\&= {V}_1(x^{{\underline{\delta }}}, x^{{\underline{\delta }}}) + {V}_2(x^{{\underline{\delta }}}, x^{{\underline{\delta }}}) \\&= {W}_+^\prime (x^{{\underline{\delta }}}), \end{aligned}$$

whence \({W}_-^\prime (x^{{\underline{\delta }}}) = {W}_+^\prime (x^{{\underline{\delta }}})\), i.e., W is differentiable at \(x^{{\underline{\delta }}}\). The proof of the differentiability at \(x^{{\overline{\delta }}}\) relies on the same arguments. \(\square \)

Proposition F.1

Assume T1–T5 and V1–V2, V4–V6. The function W is a solution of the functional equation

$$\begin{aligned} W(x_0) = \max _{x_1 \in \Gamma (x_0)} \min _{\delta \in {\mathscr {D}}} \bigl \{(1-\delta ) {V}(x_0, x_1) + \delta {W}(x_1)\bigr \}. \end{aligned}$$

Proof

Denote by \(\varphi _\delta \) the optimal policy function of \({\mathscr {R}}(\delta )\). Consider the case \(x_0 < x^{{\underline{\delta }}}\). The inequalities \(x_0< \varphi _{{\underline{\delta }}}(x_0) < x^{{\underline{\delta }}}\) then hold, hence

$$\begin{aligned} {W}(x_0)&= {J}_{{\underline{\delta }}}(x_0) \\&= (1 - {\underline{\delta }}) {V}(x_0, \varphi _{{\underline{\delta }}}(x_0)) + {\underline{\delta }} {J}_{{\underline{\delta }}}(\varphi _{{\underline{\delta }}}(x_0)) \\&= (1 - {\underline{\delta }}) {V}(x_0, \varphi _{{\underline{\delta }}}(x_0)) + {\underline{\delta }} {W}(\varphi _{{\underline{\delta }}}(x_0)) \\&= \min _{\delta \in {\mathscr {D}}} [(1 - \delta ) {V}(x_0, \varphi _{{\underline{\delta }}}(x_0)) + \delta {W}(\varphi _{{\underline{\delta }}}(x_0))]. \end{aligned}$$

It remains to prove that

$$\begin{aligned} W(x_0) \ge \min _{\delta \in {\mathscr {D}}} [(1-\delta ) {V}(x_0, x_1) + \delta {W}(x_1) ] \end{aligned}$$

for any \(x_1 \in \Gamma (x_0)\). But, and for every \(x_1 \in \Gamma (x_0)\),

$$\begin{aligned} \min _{\delta \in {\mathscr {D}}} [(1-\delta ) {V}(x_0, x_1) + \delta {W}(x_1) ] \le (1-{\underline{\delta }}) {V}(x_0, x_1) + {\underline{\delta }} {W}(x_1). \end{aligned}$$

The function \((1-{\underline{\delta }}) {V}(x_0, x_1) + {\underline{\delta }} {W}(x_1)\) being strictly concave in \(x_1\), it attains its maximum at \(x_1 = \varphi _{{\underline{\delta }}}(x_0)\). Recall indeed that \(x_0< \varphi _{{\underline{\delta }}}(x_0) < x^{{\underline{\delta }}}\), \({W}(\varphi _{{\underline{\delta }}}(x_0)) = {J}_{{\underline{\delta }}}(\varphi _{{\underline{\delta }}}(x_0))\) and \({W}'(\varphi _{{\underline{\delta }}}(x_0)) = {J}'_{{\underline{\delta }}}(\varphi _{{\underline{\delta }}}(x_0))\). This implies that:

$$\begin{aligned} (1-{\underline{\delta }}) {V}_2(x_0, \varphi _{ {\underline{\delta }}}(x_0)) + {\underline{\delta }} {W}'(\varphi _{ {\underline{\delta }}}(x_0)) = 0. \end{aligned}$$

From the strict concavity of \((1-{\underline{\delta }}) {V}(x_0, x_1) + {\underline{\delta }} {W}(x_1)\) and for any \(x_1 \in \Gamma (x_0)\):

$$\begin{aligned} (1-{\underline{\delta }}) {V}(x_0, x_1) + {\underline{\delta }} {W}(x_1) \le (1-{\underline{\delta }}) {V}(x_0, \varphi _{{\underline{\delta }}}(x_0)) + {\underline{\delta }} {W}(\varphi _{{\underline{\delta }}}(x_0)). \end{aligned}$$

The same arguments can be used for the remaining cases \(x^{{\underline{\delta }}} \le x_0 \le x^{{\overline{\delta }}}\) and \(x_0 \ge x^{{\overline{\delta }}}\), which completes the proof. \(\square \)

Theorem F.1

Assume T1–T5 and V1–V2, V4–V6.

  1. (i)

    The value of the Maximin problem is equal to the value of the Minimax problem

    $$\begin{aligned} \sup _{\chi \in \Pi (x_0)} \inf _{\delta \in {\mathscr {D}}} v(\delta , \chi )&= \max _{\chi \in \Pi (x_0)} \min _{\delta \in {\mathscr {D}}} v(\delta , \chi ) \\&= \inf _{\delta \in {\mathscr {D}}} \sup _{\chi \in \Pi (x_0)} v(\delta , \chi ) \\&= \min _{\delta \in {\mathscr {D}}} \max _{\chi \in \Pi (x_0)} v(\delta , \chi ). \end{aligned}$$
  2. (ii)

    \({J}(x_{0}) = {W}(x_0)\).

This characterization of payoffs unbounded from below will prove useful for the analysis of a benchmark example in Sect. 4.


Cite this article

Drugeon, JP., Ha-Huy, T. & Nguyen, T.D.H. On maximin dynamic programming and the rate of discount. Econ Theory 67, 703–729 (2019). https://doi.org/10.1007/s00199-018-1166-0

