Abstract
This article establishes a dynamic programming argument for a maximin optimization problem in which the agent completes a minimization over a set of discount rates. Even though the maximin criterion results in a program that is neither convex nor stationary over time, it is proved that a careful reference to extended dynamic programming principles and a maximin functional equation allows these difficulties to be circumvented and an optimal sequence that is time consistent to be recovered.
Notes
One may further notice that the dynamic programming argument of this article could equally be used with a non-connected set of discount rates, for which a time-consistent decision rule would indeed remain available.
This is a mild assumption that is, e.g., satisfied in the case when, for any x, \(0 \in \Gamma (x)\) and \(\Gamma \) is expanding. See Topkis (1978).
V is (strictly) supermodular if, for every \( (x, y)\) and \( (x^\prime , y^\prime )\) that belong to \(\hbox {Graph}(\Gamma )\) with \((x',y') (>) \ge (x,y)\), \( {V}(x,y)+{V}(x',y') (>) \ge {V}(x',y)+{V}(x,y') \) prevails. When V is twice differentiable, (strict) supermodularity amounts to (strictly) positive cross derivatives.
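As a quick numerical illustration of this definition, the cross-difference inequality can be checked directly on a hypothetical payoff (\(V(x,y)=\sqrt{x^{0.36}-y}\) with \(\Gamma (x)=[0, x^{0.36}]\) — an assumption for illustration only, not the model of the article):

```python
import math

# Hypothetical payoff (illustrative assumption): V(x, y) = sqrt(f(x) - y),
# with f(x) = x**0.36 and Gamma(x) = [0, f(x)].
f = lambda x: x ** 0.36
V = lambda x, y: math.sqrt(f(x) - y)

# Two feasible pairs with (x', y') > (x, y): supermodularity requires
# V(x', y') + V(x, y) >= V(x', y) + V(x, y'), i.e. a positive cross difference.
x, y = 0.3, 0.1
xp, yp = 0.5, 0.2
cross = V(xp, yp) + V(x, y) - V(xp, y) - V(x, yp)
print(cross)
```

The cross difference comes out strictly positive, consistent with the strictly positive cross derivative \(V_{12}=f'(x)/\bigl(4(f(x)-y)^{3/2}\bigr)>0\) of this illustrative payoff.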
See Section 4.4 in Stokey et al. (1989) for the calculus details.
Observe, however, that it would be more difficult to apply these methods were one to relax Assumption V6, for there would then exist multiple steady solutions for any \({\mathscr {R}}(\delta )\).
References
Alvarez, F., Stokey, N.L.: Dynamic programming with homogeneous functions. J. Econ. Theory 82, 167–189 (1998)
Amir, R.: Sensitivity analysis of multisector optimal economic dynamics. J. Math. Econ. 25, 123–141 (1996)
Bellman, R.E.: Dynamic Programming. Princeton University Press, Princeton (1957)
Bich, P., Drugeon, J.-P., Morhaim, L.: On temporal aggregators and dynamic programming. Econ. Theory 63, 1–31 (2017)
Chambers, C., Echenique, F.: On multiple discount rates. Econometrica 86, 1325–1346 (2018)
Drugeon, J.P., Ha-Huy, T.: A not so myopic axiomatization of discounting. Working paper (2018)
Durán, J.: On dynamic programming with unbounded returns. Econ. Theory 15, 339–352 (2000)
Geoffard, P.Y.: Discounting and optimizing, capital accumulation problems as variational minimax problems. J. Econ. Theory 69, 53–70 (1996)
Gilboa, I., Schmeidler, D.: Maxmin expected utility with non-unique prior. J. Math. Econ. 18, 141–153 (1989)
Jaśkiewicz, A., Matkowski, J., Nowak, A.S.: On variable discounting in dynamic programming: applications to resource extraction and other economic models. Ann. Oper. Res. 220, 263–278 (2014)
Kamihigashi, T.: Elementary results on solutions to the Bellman equation of dynamic programming: existence, uniqueness, and convergence. Econ. Theory 56, 251–273 (2014a)
Kamihigashi, T.: An order-theoretic approach to dynamic programming: an exposition. Econ. Theory Bull. 2, 13–21 (2014b)
Le Van, C., Morhaim, L.: Optimal growth models with bounded or unbounded returns: a unifying approach. J. Econ. Theory 105, 158–187 (2002)
Le Van, C., Vailakis, Y.: Recursive utility and optimal growth with bounded or unbounded returns. J. Econ. Theory 123, 187–209 (2005)
Rincon-Zapatero, J.P., Rodriguez-Palmero, C.: Existence and uniqueness of solutions to the Bellman equation in the unbounded case. Econometrica 71, 1519–1555 (2003)
Rincon-Zapatero, J.P., Rodriguez-Palmero, C.: Recursive utility with unbounded aggregators. Econ. Theory 33, 381–391 (2007)
Stokey, N.L., Lucas Jr., R., Prescott, E.: Recursive Methods in Economic Dynamics. Harvard University Press, Cambridge (1989)
Streufert, P.A.: Stationary recursive utility and dynamic programming under the assumption of biconvergence. Rev. Econ. Stud. 57, 79–97 (1990)
Streufert, P.A.: An abstract topological approach to dynamic programming. J. Math. Econ. 21, 59–88 (1992)
Topkis, D.: Minimizing a submodular function on a lattice. Oper. Res. 26, 305–321 (1978)
Wakai, K.: A model of utility smoothing. Econometrica 76, 137–153 (2008)
Wakai, K.: Intertemporal utility smoothing: theory and applications. Jpn. Econ. Rev. 64, 16–41 (2013)
Acknowledgements
This study results from discussions that took place in Fall 2015 in a theory working group organized by Cuong Le Van. The authors would like to thank him as well as the other participants for their useful insights on earlier versions. They are also grateful to Federico Echenique for bringing Wakai (2008, 2013) to their attention at RUD16, in the course of a conversation about Chambers and Echenique (2018), the first version of which was completed concurrently with but independently from the current study by spring 2016. Preliminary versions were presented at the SAET 2016 and PET 2016 conferences in Rio de Janeiro, and the authors would like to thank the participants for their comments. A more recent version was presented at the KIER seminar of the University of Kyoto, at the RIEB seminar of the University of Kobe, at the University of Hiroshima, at the University of Evry and at the European University at Saint Petersburg; the authors would like to thank Tadashi Shigoka, Kazuo Nishimura, Tadashi Sekiguchi, Takashi Kamihigashi, Hien Pham and Makoto Yano for their insightful suggestions on these occasions. They are also strongly indebted to the two anonymous referees for their insightful comments and suggestions: the referees pointed out the need for an ascending correspondence and a strict supermodularity assumption and, last but not least, suggested a clarified and compact form for the title of this study.
Appendices
Proof of Proposition 2.1
Proof
(i) This is a straightforward application of the Weierstrass Theorem.
(ii) a/ Fix \(\chi _1, \chi _2 \in \Pi (x_0)\) such that \(\chi _1 \ne \chi _2\) and \(v(\delta , \chi _1), v(\delta , \chi _2) > -\infty \) for every \(\delta \in [\underline{\delta }, \overline{\delta }]\). Fix \(0< \lambda < 1\) and let \(\chi _\lambda = (1-\lambda ) \chi _1 + \lambda \chi _2\). From the convexity of \(\Gamma \), it follows that \(\chi _\lambda \in \Pi (x_0)\). Fix \(\delta _\lambda \in {\mathscr {D}}\) such that \({\hat{v}}(\chi _\lambda )= v(\delta _\lambda , \chi _\lambda )\).
From the strict concavity of V, it is obtained that
(iii) b/ From (1), take a sequence \(\chi ^n\) such that \(\lim _{n \rightarrow \infty } {\hat{v}}(\chi ^n) = {J}(x_0) .\) Since, from Assumption T2, \(\Pi (x_0)\) is compact in the product topology, it can be supposed that \(\chi ^n \rightarrow \chi ^*\). It will first be proved that \({\hat{v}}(\chi ^*) > - \infty \).
Fix any \(\epsilon > 0\). There exists N large enough such that, for \(n \ge N\), \({\hat{v}}(\chi ^n) > {J}(x_0) - \epsilon \) prevails. Observe that, from Assumption T2, there exists \({C} > 0\) such that \(0 \le x_t \le {C}\) for any \(\chi \in \Pi (x_0)\). Hence, there exists T satisfying, for any \(\chi \in \Pi (x_0)\) and for any \(\delta \in {\mathscr {D}}\),
For any \(n \ge N\), for any \(\delta \in {\mathscr {D}}\):
which implies
Letting n converge to infinity, for any \(\delta \in {\mathscr {D}}\),
This inequality prevails for large enough values of T. Letting T tend to infinity, for any \(\delta \in {\mathscr {D}}\):
whence \({\hat{v}}(\chi ^*) \ge {J}(x_0) - 2 \epsilon .\)
The parameter \(\epsilon \) being chosen arbitrarily, \({\hat{v}}(\chi ^*) \ge {J}(x_0)\). From the strict concavity of \({\hat{v}}\), \(\chi ^*\) is the unique solution of \(\sup _{\chi \in \Pi (x_0)} {\hat{v}}(\chi ).\)\(\square \)
Proof of Lemma 3.1
Proof
From the compactness of \({\mathscr {D}}\) and of \(\Gamma (x_0)\) for every \(x_0 > 0\), let T denote the following operator from the space of continuous functions into itself:
Suppose that \(g(x) \le {\tilde{g}}(x)\) for any x; then \({T}g(x) \le {T} {\tilde{g}}(x)\) for any x. Indeed, for each \(x_1 \in \Gamma (x_0)\) and for every \(\delta \), it is obtained that:
which implies
The inequality being true for every \(x_1\), \({T}g(x_0) \le {T}{\tilde{g}}(x_0)\) in turn prevails. By the same arguments, it can be proved that, for every constant \(a > 0\), \({T}(g+a)(x_0) \le {T} g(x_0) + {\overline{\delta }} a\).
Since T satisfies Blackwell's sufficient conditions, i.e., monotonicity and discounting, by Theorem 3.3 in Stokey et al. (1989), the operator T is a contraction mapping and has a unique fixed point.
In preparation for establishing the concavity of this fixed point, it will now be proved that the concavity of g implies that of Tg. Fix \(x_0\) and \(y_0\), and fix \((\delta _x, x_1)\) and \((\delta _y, y_1)\) such that
For any \(0 \le \lambda \le 1\), define \(x_{0}^\lambda = (1-\lambda ) x_0 + \lambda y_0\), \( x^\lambda _1= (1-\lambda )x_1 + \lambda y_1\). For any \(\delta \in {\mathscr {D}}\), it follows that:
Hence, for any \({\underline{\delta }} \le \delta \le {\overline{\delta }}\):
The inequality being true for every \(\delta \), it is obtained that:
Take \(g^0(x_0) = 0\) for every \(x_0\) and define \(g^{n+1}(x_0) = {T}g^n(x_0)\) for any \(n \ge 0\). By induction, \(g^n\) is a concave function for any n. Taking limits, with \({W}(x_0) = \lim _{n \rightarrow \infty } g^n(x_0) = \lim _{n \rightarrow \infty } {T}^n g^0(x_0)\), the concavity property carries over to W.
The strict concavity of W is now going to be established. Consider \(x_0 \ne y_0\). Define \((\delta _x, x_1)\), \((\delta _y, y_1)\), \(x^\lambda _0\) and \(x^\lambda _1\) as in the first part of this proof. Relying on the same arguments and calculus as well as on the strict concavity of V, for any \(\delta \in {\mathscr {D}}\), it is obtained that:
From the compactness of the set of discount factors \({\mathscr {D}}\), it follows that:
\(\square \)
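The contraction argument above can be sketched numerically on a grid. All primitives below are illustrative assumptions, not the article's model: payoff \(V(x,y)=\sqrt{f(x)-y}\) with \(f(x)=x^{0.36}\), correspondence \(\Gamma (x)=[0,f(x)]\) and a discretized set of discount factors in \([0.90, 0.98]\), so that the Blackwell discounting modulus is \({\overline{\delta }}=0.98\):

```python
import numpy as np

# Illustrative primitives (assumptions, not the article's model):
# V(x, y) = sqrt(f(x) - y), f(x) = x**0.36, Gamma(x) = [0, f(x)].
xs = np.linspace(1e-3, 1.0, 100)            # state grid
deltas = np.linspace(0.90, 0.98, 5)         # discretised discount set
fx = xs ** 0.36
Vm = np.sqrt(np.maximum(fx[:, None] - xs[None, :], 0.0))   # V(x_i, x_j)
feasible = xs[None, :] <= fx[:, None]                      # x_j in Gamma(x_i)

def T(W):
    """Maximin Bellman operator: (TW)(x) = max_{y in Gamma(x)} min_d [(1-d)V(x,y) + dW(y)]."""
    vals = np.stack([(1 - d) * Vm + d * W[None, :] for d in deltas])
    inner = np.where(feasible, vals.min(axis=0), -np.inf)  # minimisation over delta
    return inner.max(axis=1)                               # maximisation over y

W = np.zeros_like(xs)
gap = np.inf
for _ in range(2000):
    W_new = T(W)
    gap = np.abs(W_new - W).max()
    W = W_new
    if gap < 1e-10:        # Blackwell discounting: contraction of modulus 0.98
        break

print("converged, sup-norm gap =", gap)
```

Successive iterates contract at rate \({\overline{\delta }}\) in the sup norm, and the computed fixed point is nondecreasing in the state, as expected from the monotonicity of V in its first argument.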
Proof of Proposition 3.1
Proof
Define \(w(x_0, x_1)= \min _{\delta \in {\mathscr {D}}} \left[ (1-\delta ) {V}(x_0, x_1) + \delta {W}(x_1) \right] \).
(i) Observe that the function \(w(x_0, x_1)\) is strictly concave in \(x_1\). Indeed, for any \(x_1 \ne y_1\) belonging to \(\Gamma (x_0)\) and \(0< \lambda < 1\), define \(x^\lambda _1 = (1-\lambda )x_1 + \lambda y_1\). For any \(\delta \in {\mathscr {D}}\):
The set \({\mathscr {D}}\) of discount factors being compact, it is obtained that:
The strict concavity of \(w(x_0, \cdot )\) implies that this function admits a unique maximizer \(x_1\).
(ii) The monotonicity of the policy function \(\varphi \) is now going to be established. For that purpose, it must first be checked that there cannot exist a pair \((x_0, z_0)\) such that \(x_0 < z_0\) and \(\varphi (x_0) > \varphi (z_0)\). Suppose the opposite and let \(y_0 = {\hbox {argmin}}_{y \in [x_0, z_0]} \varphi (y)\).
(a) Consider the first case, with \(y_0 < \varphi (y_0)\). This implies \({W}(y_0) <{W}(\varphi (y_0))\). Take
$$\begin{aligned} \delta _{y_{0}} \in \mathop {\mathrm{argmin}}\limits _{\delta \in {\mathscr {D}}} [(1-\delta ) {V}(y_0, \varphi (y_0)) + \delta {W}(\varphi (y_0))]. \end{aligned}$$It follows that:
$$\begin{aligned} (1-\delta _{y_0}) {V}(y_0, \varphi (y_0)) + \delta _{y_0} {W}(\varphi (y_0)) < {W}(\varphi (y_0)) , \end{aligned}$$which implies \({V}(y_0, \varphi (y_0)) < {W}(\varphi (y_0))\). There further exists \(\epsilon > 0\) such that
$$\begin{aligned} V(y, \varphi (y))< {W}(\varphi (y)) \hbox { for any } y_0 - \epsilon< y < y_0 + \epsilon . \end{aligned}$$For every y belonging to this interval, the following is satisfied:
$$\begin{aligned} \mathop {\mathrm{argmin}}\limits _{\delta \in {\mathscr {D}}} [(1-\delta ) {V}(y, \varphi (y)) + \delta {W}(\varphi (y))] = \{{\underline{\delta }}\}. \end{aligned}$$Recall that, in this case, for any \(y_0 - \epsilon< y < y_0 + \epsilon \)
$$\begin{aligned} \varphi (y) = \mathop {\mathrm{argmax}}\limits _{y^\prime \in \Gamma (y)} [(1-{\underline{\delta }}) {V}(y, y^\prime ) + {\underline{\delta }} {W}(y^\prime )] . \end{aligned}$$From the supermodularity of V and Topkis's theorem as quoted in Amir (1996), \(\varphi \) is strictly increasing (ascending) on \(]y_0 - \epsilon , y_0 + \epsilon [\); hence, for \(y_0-\epsilon< y < y_0\), the inequality \(\varphi (y) < \varphi (y_0)\) holds, a contradiction with the choice of \(y_0\).
(b) The case where \(y_0 > \varphi (y_0)\) is handled by applying the same arguments, and a contradiction is similarly obtained.
(c) Consider the third case: \(x_0 < y_0\) and \(\varphi (x_0) > \varphi (y_0) = y_0\) are simultaneously satisfied. But, from Assumption V4, V being increasing in its first argument and decreasing in its second one,
$$\begin{aligned} {V}(x_0, \varphi (x_0))&< {V}(y_0, y_0) \\&= {W}(y_0)\\&< {W}(\varphi (x_0)). \end{aligned}$$Hence
$$\begin{aligned} \mathop {\mathrm{argmin}}\limits _{\delta \in {\mathscr {D}}} \left[ (1-\delta ) {V}(x_0, \varphi (x_0)) + \delta {W}(\varphi (x_0))\right] = \{{\underline{\delta }} \}. \end{aligned}$$Since \(\varphi (x_0) > x_0\), \(\varphi (x) > x\) prevails in a neighborhood of \(x_0\). Denote by \({I}_{x_0}\) the set of \(z \in ]x_0, y_0[\) such that, for any \(x_0< x < z\):
$$\begin{aligned} {V}(x, \varphi (x))< {W}(\varphi (x)) \hbox { which is equivalent to } x < \varphi (x). \end{aligned}$$It is then clear that, for any \(x_0< x < z\) with \(z \in {I}_{x_0}\), one has:
$$\begin{aligned} \mathop {\mathrm{argmin}}\limits _{\delta \in {\mathscr {D}}} [(1-\delta ) {V}(x, \varphi (x)) + \delta {W}(\varphi (x))] = \{{\underline{\delta }} \}, \end{aligned}$$which implies for any \(x_0< x < z\):
$$\begin{aligned} \varphi (x) = \mathop {\mathrm{argmax}}\limits _{x^\prime \in \Gamma (x)} [(1-{\underline{\delta }}) {V}(x, x^\prime ) + {\underline{\delta }} {W}(x^\prime ) ]. \end{aligned}$$Hence, on the interval \({I}_{x_0}\), from the supermodularity of V, the function \(\varphi \) is strictly increasing. Take now \({\overline{z}} = \sup ({I}_{x_0})\). If \({\overline{z}} < \varphi ({\overline{z}})\), then, by the continuity of V and W, \(\varphi (x) > x\) prevails for any \(x_0< x < {\overline{z}} + \epsilon \) for a sufficiently small \(\epsilon \): a contradiction with the definition of \({\overline{z}}\). Hence \(\varphi ({\overline{z}}) \le {\overline{z}}\). From the continuity of \(\varphi \), \(\varphi ({\overline{z}}) = {\overline{z}}\). From the increasing property of \(\varphi \) on \({I}_{x_0}\), it is obtained that:
$$\begin{aligned} \varphi (x_0)&< \varphi ({\overline{z}}) \\&\le {\overline{z}}\\&\le y_0 \\&= \varphi (y_0) \\&< \varphi (x_0), \end{aligned}$$a contradiction.
It has been proved that, for any \(x_0 < z_0\), \(\varphi (x_0) \le \varphi (z_0)\). Suppose now that there exists a pair \((x_0, z_0)\) such that \(x_0 < z_0\) and \(\varphi (x_0) = \varphi (z_0)\). This implies, for any \(x_0 \le y \le z_0\), \(\varphi (y) = \varphi (x_0) = \varphi (z_0)\). There hence exists \(y_0 \in [x_0, z_0]\) such that \(\varphi (y_0) \ne y_0\), since the constant value \(\varphi (x_0)\) can coincide with at most one point of \([x_0, z_0]\). Using the same arguments as in the first part of the proof, \(\varphi \) must be strictly increasing in a neighborhood of \(y_0\), which contradicts the fact that \(\varphi \) is constant on \([x_0, z_0]\).
The monotonicity of \(\chi ^* = \{x^*_t\}_{t=0}^\infty \) is a direct consequence of the increasingness of \(\varphi \). \(\square \)
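On illustrative primitives (an assumption for the sketch, not the article's model: \(V(x,y)=\sqrt{x^{0.36}-y}\), \(\Gamma (x)=[0, x^{0.36}]\), discount factors discretized in \([0.90, 0.98]\)), the monotonicity of the maximin policy and the selection of \({\underline{\delta }}\) on strictly rising states, as in case (a) of the proof, can be checked numerically:

```python
import numpy as np

# Illustrative primitives (assumptions, not the article's model):
# V(x, y) = sqrt(f(x) - y), f(x) = x**0.36, Gamma(x) = [0, f(x)].
xs = np.linspace(1e-3, 1.0, 100)
deltas = np.linspace(0.90, 0.98, 5)
fx = xs ** 0.36
Vm = np.sqrt(np.maximum(fx[:, None] - xs[None, :], 0.0))
feasible = xs[None, :] <= fx[:, None]

# Iterate the maximin Bellman operator to numerical convergence.
W = np.zeros_like(xs)
for _ in range(2000):
    vals = np.stack([(1 - d) * Vm + d * W[None, :] for d in deltas])
    inner = np.where(feasible, vals.min(axis=0), -np.inf)
    W_new = inner.max(axis=1)
    done = np.abs(W_new - W).max() < 1e-10
    W = W_new
    if done:
        break

phi_idx = inner.argmax(axis=1)        # maximin policy on the grid
phi = xs[phi_idx]

# Where the policy moves strictly upward, V(x, phi(x)) < W(phi(x)), so the
# inner minimisation selects the lowest discount factor (case (a) above).
rising = phi > xs + 1e-9
sel = vals[:, np.arange(len(xs)), phi_idx].argmin(axis=0)
print("policy monotone up to one grid cell:", bool(np.all(np.diff(phi_idx) >= -1)))
print("rising states select delta_low:", bool(np.all(sel[rising] == 0)))
```

The monotonicity check allows one grid cell of slack, since discretization can blur the strict-increasingness argument at regime switches.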
Proof of Proposition 3.2
Proof
First consider the case of a sequence \(\chi ^* = \{x^*_t\}_{t=0}^\infty = \{\varphi ^t(x_0)\}_{t=0}^\infty \) that is strictly increasing. The objective will be to prove that \(\chi ^*\) is a solution of a fixed discount Ramsey problem, namely \({\mathscr {R}}({\underline{\delta }})\).
Selecting any \(\delta ^* \in {\hbox {argmin}}_{\delta \in {\mathscr {D}}} [(1-\delta ) {V}(x_0, x^*_1) + \delta {W}(x^*_1)]\), it follows that \({W}(x_0) < {W}(x^*_1)\), whence
This implies that \({V}(x_0, x^*_1) < {W}(x^*_1)\). Hence
Since \(x_0 < x^*_1\), \(\varphi (x_0) < \varphi (x^*_1)\), i.e., \(x^*_1 < x^*_2\). By induction, it is obtained that \(x^*_t < x^*_{t+1}\) and
This implies, for any finite T
Observe that, from Assumptions T3 and T4, there exists \({C} > 0\) such that \(x_0 \le x^*_t < {C}\). This implies
Hence
by letting T tend to infinity.
The aim is now to prove that \(\chi ^*\) is a solution of the fixed discount problem \({\mathscr {R}}({\underline{\delta }})\). Suppose the opposite and denote by \({\hat{\chi }} = \{{\hat{x}}_t \}_{t=0}^\infty \) the unique solution of \({\mathscr {R}}({\underline{\delta }})\). The sequence \(\{{W}({\hat{x}}_t)\}\) being uniformly bounded, the date T can be chosen large enough to satisfy:
For \(0 \le \lambda \le 1\), define \(x^\lambda _t = (1-\lambda ) {\hat{x}}_t + \lambda x^*_t\). Recall that
Take \(\lambda \) sufficiently close to 1 so that, for every \(0 \le t \le {T}\), the inequality \({V}(x^\lambda _t, x^\lambda _{t+1}) < {W}(x^\lambda _{t+1})\) is satisfied: for any \(0 \le t \le {T}\),
Hence
Observe that
Hence, from the definition of T,
a contradiction.
Observe that, since \(\chi ^* = \hbox {argmax}\, {\mathscr {R}}({\underline{\delta }})\), the monotonic sequence \(\{x^*_t\}_{t=0}^\infty \) converges to a steady state of \({\mathscr {R}}({\underline{\delta }})\). Supposing that \(x_0 > x^*_1\) and using the same arguments, it can be proved that \(\chi ^*\) is strictly decreasing, is a solution of the fixed discount problem \({\mathscr {R}}({\overline{\delta }})\) and converges to a steady state of that problem.
Consider now the case \(\varphi (x_0) = x_0\) and hence \(x^*_t = x_0\) for any t. Two cases are to be considered:
(i) For any \(\epsilon > 0\), there exists \(x_0 - \epsilon< x < x_0 + \epsilon \) satisfying \(x \ne \varphi (x)\);
(ii) There exists \(\epsilon > 0\) such that, for any \(x_0 - \epsilon< x < x_0 + \epsilon \), \(x = \varphi (x)\).
First consider case (i). Take a sequence \(x^n_0\) that converges to \(x_0\) and such that \(\varphi (x^n_0) \ne x^n_0\) for any n. Since, in this case, the associated sequence is a solution either of \({\mathscr {R}}({\underline{\delta }})\) or of \({\mathscr {R}}({\overline{\delta }})\), it can be assumed, without loss of generality, that, for any n, the sequence \(\{\varphi ^t(x^n_0) \}_{t=0}^\infty \) is a solution of the fixed discount Ramsey problem \({\mathscr {R}}(\delta )\), with \(\delta \in \{{\underline{\delta }}, {\overline{\delta }} \}\).
But \(\lim _{n \rightarrow \infty } \varphi (x^n_0) = \varphi (x_0) = x_0\), which implies \(\lim _{n \rightarrow \infty } \varphi ^2(x^n_0) = x_0\). From the Euler equation
it follows that:
Hence, the sequence \(x_t = x_0\) for any t is a solution of \({\mathscr {R}}(\delta )\). Suppose now that, for \(x_0 - \epsilon< x < x_0 + \epsilon \), \(\varphi (x) = x\). This implies that \({W}(x) = {V}(x, x)\) for \(x \in ]x_0 - \epsilon , x_0 + \epsilon [\), which in turn implies
The function \((1-\delta ) {V}(x_0, x_1) + \delta {W}(x_1)\) being linear in \(\delta \) and concave in \(x_1\), the following also prevails:
From \(\varphi (x_0) = x_0\), there thus exists \(\delta \in {\mathscr {D}}\) such that
This implies
which is equivalent to
The sequence \(x_t = x_0\) for any t is thus a solution of \({\mathscr {R}}(\delta )\). \(\square \)
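A numerical sketch of this proposition can be obtained on illustrative primitives (an assumption, not the article's model: \(V(x,y)=\sqrt{x^{0.36}-y}\), \(\Gamma (x)=[0, x^{0.36}]\), discount factors discretized in \([0.90, 0.98]\)): starting below the steady-state band, the maximin path rises monotonically and settles between the \({\mathscr {R}}({\underline{\delta }})\) and \({\mathscr {R}}({\overline{\delta }})\) steady states.

```python
import numpy as np

# Illustrative primitives (assumptions, not the article's model):
# V(x, y) = sqrt(f(x) - y), f(x) = x**0.36, Gamma(x) = [0, f(x)].
xs = np.linspace(1e-3, 1.0, 100)
deltas = np.linspace(0.90, 0.98, 5)
fx = xs ** 0.36
Vm = np.sqrt(np.maximum(fx[:, None] - xs[None, :], 0.0))
feasible = xs[None, :] <= fx[:, None]

W = np.zeros_like(xs)
for _ in range(2000):
    vals = np.stack([(1 - d) * Vm + d * W[None, :] for d in deltas])
    inner = np.where(feasible, vals.min(axis=0), -np.inf)
    W_new = inner.max(axis=1)
    done = np.abs(W_new - W).max() < 1e-10
    W = W_new
    if done:
        break
phi_idx = inner.argmax(axis=1)

# Iterate the policy from a low initial state: the path should rise and
# settle between the R(delta_low) and R(delta_high) steady states, which
# here solve delta * f'(x) = 1, i.e. x = (0.36 * delta) ** (1 / 0.64).
i = 1                                  # x_0 ~ 0.011, below the band
path = [xs[i]]
for _ in range(300):
    i = phi_idx[i]
    path.append(xs[i])
path = np.array(path)

x_low = (0.36 * 0.90) ** (1 / 0.64)
x_high = (0.36 * 0.98) ** (1 / 0.64)
print(path[-1], (x_low, x_high))
```

The comparison with the band is made up to discretization error, since the grid policy only approximates the continuous steady states.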
Proof of Theorem 3.1
Proof
(i) This follows as an immediate corollary of Proposition 3.2.
(ii) For each \(x_0 > 0\), define \(\chi ^* = \{x^*_t\}_{t=0}^\infty = \{\varphi ^t(x_0) \}_{t=0}^\infty \). Using Proposition 3.2, there exists \(\delta ^* \in {\mathscr {D}}\) such that \(\chi ^*\) is a solution of \({\mathscr {R}}(\delta ^*)\) and
For any \(\delta \in {\mathscr {D}}\), it is obtained that:
Hence
Suppose now that \(({\tilde{\delta }}, {\tilde{\chi }})\) satisfies
The sequence \(\chi ^* = \{x^*_t\}_{t=0}^\infty \) being an optimal solution of the problem \({\mathscr {R}}(\delta ^*)\), it is obtained that:
Hence:
which establishes the statement. \(\square \)
Payoffs unbounded from below
Assumption V3 is relaxed and the payoff becomes unbounded from below. As a result of this unboundedness, contraction mapping techniques can no longer be used to prove the existence of a solution to the Bellman-like functional equation. However, thanks to the supplementary condition V6, the existence argument can be established through simple guess-and-verify methods (footnote 6).
Under Assumptions T1–T5, V1–V2 and V4–V6, for every \(\delta \in {\mathscr {D}}\) there exists a unique long-run steady state \(x^\delta \) of the fixed discount Ramsey problem \({\mathscr {R}}(\delta )\), with \(x^{{\underline{\delta }}} \le x^\delta \le x^{{\overline{\delta }}}\). Define then \({J}_\delta (x_0)\) as the value function of \({\mathscr {R}}(\delta )\):
Consider now the function \({W}(\cdot )\) defined by:
(i) For \(0 \le x_0 \le x^{{\underline{\delta }}}\), \({W}(x_0) = {J}_{{\underline{\delta }}}(x_0)\);
(ii) For \(x^{{\underline{\delta }}} \le x_0 \le x^{{\overline{\delta }}}\), take \(\delta \) satisfying \({V}_2(x_0, x_0) + \delta {V}_1(x_0, x_0) = 0\) and set \({W}(x_0) = {J}_\delta (x_0) = {V}(x_0, x_0)\);
(iii) For \(x_0 \ge x^{{\overline{\delta }}}\), \({W}(x_0) = {J}_{{\overline{\delta }}}(x_0)\).
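For illustration only, with a hypothetical payoff \(V(x,y)=\sqrt{f(x)-y}\), \(f(x)=x^{0.36}\) (an assumption, not the article's model), the case-(ii) steady-state condition reduces to \(\delta f'(x) = 1\) and can be solved by bisection; the resulting \(x^\delta \) is increasing in \(\delta \):

```python
# Hypothetical payoff for illustration (not the article's model):
# V(x, y) = sqrt(f(x) - y) with f(x) = x**0.36, so that
# V1 = f'(x) / (2 sqrt(f(x) - y)) and V2 = -1 / (2 sqrt(f(x) - y)),
# hence V2(x, x) + delta * V1(x, x) = 0 reduces to delta * f'(x) = 1.

def f_prime(x):
    return 0.36 * x ** (-0.64)

def steady_state(delta, lo=1e-6, hi=1.0):
    # bisection on g(x) = delta * f'(x) - 1, which is decreasing in x
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if delta * f_prime(mid) - 1.0 > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# x^delta is increasing in delta, consistent with the ordering
# x^{delta_low} <= x^delta <= x^{delta_high} stated above.
print(steady_state(0.90), steady_state(0.95), steady_state(0.98))
```

For this payoff a closed form is available, \(x^\delta = (0.36\,\delta )^{1/0.64}\), which the bisection reproduces.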
Making use of the monotonicity of \(x^\delta \) with respect to \(\delta \in {\mathscr {D}}\), the following preparation lemma can first be established.
Lemma F.1
Assume T1–T5 and V1–V2, V4–V6. The function W is strictly concave.
Proof
It is well known from the literature on fixed discount Ramsey problems that W is strictly concave on \(]0, x^{{\underline{\delta }}}]\), \([x^{{\underline{\delta }}}, x^{{\overline{\delta }}}]\) and \([x^{{\overline{\delta }}}, +\infty [\). It remains to prove that W is differentiable at \(x^{{\underline{\delta }}}\) and \(x^{{\overline{\delta }}}\), so that strict concavity extends to the whole domain. The left derivative of W at \(x^{{\underline{\delta }}}\) is equal to the derivative of \({J}_{{\underline{\delta }}}\):
It is also well known that
whence \({W}_-^\prime (x^{{\underline{\delta }}}) = {W}_+^\prime (x^{{\underline{\delta }}})\), i.e., W is differentiable at \(x^{{\underline{\delta }}}\). The proof of the differentiability at \(x^{{\overline{\delta }}}\) relies on the same arguments. \(\square \)
Proposition F.1
Assume T1–T5 and V1–V2, V4–V6. The function W is a solution of the functional equation
Proof
Denote by \(\varphi _\delta \) the optimal policy function of \({\mathscr {R}}(\delta )\). Consider the case \(x_0 < x^{{\underline{\delta }}}\). The system of inequalities \(x_0< \varphi _{{\underline{\delta }}}(x_0) < x^{{\underline{\delta }}}\) holds, hence
It remains to prove that
for any \(x_1 \in \Gamma (x_0)\). But, for every \(x_1 \in \Gamma (x_0)\),
The function \((1-{\underline{\delta }}) {V}(x_0, x_1) + {\underline{\delta }} {W}(x_1)\) being strictly concave in \(x_1\), it attains its maximum at \(x_1 = \varphi _{{\underline{\delta }}}(x_0)\). Recall indeed that \(x_0< \varphi _{{\underline{\delta }}}(x_0) < x^{{\underline{\delta }}}\), \({W}(\varphi _{{\underline{\delta }}}(x_0)) = {J}_{{\underline{\delta }}}(\varphi _{{\underline{\delta }}}(x_0))\) and \({W}'(\varphi _{{\underline{\delta }}}(x_0)) = {J}'_{{\underline{\delta }}}(\varphi _{{\underline{\delta }}}(x_0))\). This implies that:
From the strict concavity of \((1-{\underline{\delta }}) {V}(x_0, x_1) + {\underline{\delta }} {W}(x_1)\) and for any \(x_1 \in \Gamma (x_0)\):
The same arguments apply to the remaining cases \(x^{{\underline{\delta }}} \le x_0 \le x^{{\overline{\delta }}}\) and \(x_0 \ge x^{{\overline{\delta }}}\), which completes the proof. \(\square \)
Theorem F.1
Assume T1–T5 and V1–V2, V4–V6.
(i) The value of the Maximin problem is equal to the value of the Minimax problem:
$$\begin{aligned} \sup _{\chi \in \Pi (x_0)} \inf _{\delta \in {\mathscr {D}}} v(\delta , \chi )&= \max _{\chi \in \Pi (x_0)} \min _{\delta \in {\mathscr {D}}} v(\delta , \chi ) \\&= \inf _{\delta \in {\mathscr {D}}} \sup _{\chi \in \Pi (x_0)} v(\delta , \chi ) \\&= \min _{\delta \in {\mathscr {D}}} \max _{\chi \in \Pi (x_0)} v(\delta , \chi ). \end{aligned}$$
(ii) \({J}(x_{0}) = {W}(x_0)\).
This treatment of payoffs unbounded from below will prove useful for the characterization of a benchmark example in Sect. 4.
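The minimax equality of part (i) can be sketched numerically. The primitives below are illustrative assumptions with a bounded payoff (\(V(x,y)=\sqrt{x^{0.36}-y}\), \(\Gamma (x)=[0, x^{0.36}]\)), so this is strictly a check of the Theorem 3.1-type equality rather than of the unbounded case; the weak inequality max-min \(\le \) min-max holds by construction, and the two values should coincide up to discretization error:

```python
import numpy as np

# Illustrative primitives (assumptions, not the article's model):
# V(x, y) = sqrt(f(x) - y), f(x) = x**0.36, Gamma(x) = [0, f(x)],
# discount grid in [0.90, 0.98].
xs = np.linspace(1e-3, 1.0, 100)
deltas = np.linspace(0.90, 0.98, 5)
fx = xs ** 0.36
Vm = np.sqrt(np.maximum(fx[:, None] - xs[None, :], 0.0))
feasible = xs[None, :] <= fx[:, None]

def iterate(update):
    # value iteration to numerical convergence
    W = np.zeros_like(xs)
    for _ in range(2000):
        W_new = update(W)
        if np.abs(W_new - W).max() < 1e-10:
            return W_new
        W = W_new
    return W

# maximin value: W(x) = max_y min_delta [(1-d)V(x,y) + dW(y)]
W = iterate(lambda W: np.where(
    feasible,
    np.stack([(1 - d) * Vm + d * W[None, :] for d in deltas]).min(axis=0),
    -np.inf).max(axis=1))

# fixed-discount values J_delta, then minimise over delta
Js = [iterate(lambda J, d=d: np.where(
    feasible, (1 - d) * Vm + d * J[None, :], -np.inf).max(axis=1))
    for d in deltas]
minJ = np.min(Js, axis=0)

i0 = 10                                # x_0 ~ 0.10
print(W[i0], minJ[i0])                 # max-min vs min-max value
```

The tolerance in the comparison accounts for grid discretization and the finite discount grid, both of which are artifacts of this sketch.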
Drugeon, JP., Ha-Huy, T. & Nguyen, T.D.H. On maximin dynamic programming and the rate of discount. Econ Theory 67, 703–729 (2019). https://doi.org/10.1007/s00199-018-1166-0