Appendix 1: An Example of a Differential Games Problem
In this section, we demonstrate how to obtain a feedback solution for a differential games problem using the HJB equation.
1.1 Feedback Nash Equilibrium
Consider the general objective function as follows:
$$V^{i} = \int_{0}^{T} {\left( {\left( {a_{i} x_{i} - a_{j} x_{j} } \right) - e_{i} u_{i}^{2} + e_{j} u_{j}^{2} } \right)dt + H\left( {a_{i} x_{i} \left( T \right) - a_{j} x_{j} \left( T \right)} \right),\,i \ne j \,{\text{and}}\, i,j = 1,2}$$
(25)
More specifically, Player 1’s decision problem is written as:
$$Maximize \int_{0}^{T} {\left( {\left( {a_{1} x_{1} - a_{2} x_{2} } \right) - e_{1} u_{1}^{2} + e_{2} u_{2}^{2} } \right)dt + H\left( {a_{1} x_{1} \left( T \right) - a_{2} x_{2} \left( T \right)} \right)}$$
(26)
$$\begin{aligned} Subject\,to\,x_{1}^{'} & = b_{1} u_{1} \\ x_{2}^{'} & = b_{2} u_{2} \\ x_{1} \left( 0 \right) & = x_{2} \left( 0 \right) = 0\,given. \\ \end{aligned}$$
Similarly, we define the decision problem for Player 2 as
$$Maximize \int_{0}^{T} {\left( {\left( { - a_{1} x_{1} + a_{2} x_{2} } \right) + e_{1} u_{1}^{2} - e_{2} u_{2}^{2} } \right)dt + H\left( { - a_{1} x_{1} \left( T \right) + a_{2} x_{2} \left( T \right)} \right)}$$
(27)
$$\begin{aligned} Subject\,to\,x_{1}^{'} & = b_{1} u_{1} \\ x_{2}^{'} & = b_{2} u_{2} \\ x_{1} \left( 0 \right) & = x_{2} \left( 0 \right) = 0\,given. \\ \end{aligned}$$
Then, we solve the problem as follows. First, we define the HJB equations for the two players:
$$- J_{t}^{1} = max\left( {\left( {a_{1} x_{1} - a_{2} x_{2} } \right) - e_{1} u_{1}^{2} + e_{2} \bar{u}_{2}^{2} + J_{{x_{1} }}^{1} \left( {b_{1} u_{1} } \right) + J_{{x_{2} }}^{1} \left( {b_{2} \bar{u}_{2} } \right)} \right)$$
(28)
$$- J_{t}^{2} = max\left( {\left( { - a_{1} x_{1} + a_{2} x_{2} } \right) + e_{1} \bar{u}_{1}^{2} - e_{2} u_{2}^{2} + J_{{x_{1} }}^{2} \left( {b_{1} \bar{u}_{1} } \right) + J_{{x_{2} }}^{2} \left( {b_{2} u_{2} } \right)} \right)$$
(29)
From (28) and (29), obtain the optimal controls:
$$- 2e_{1} u_{1} + b_{1} J_{{x_{1} }}^{1} = 0\quad \therefore\,u_{1}^{*} = \frac{{b_{1} }}{{2e_{1} }}J_{{x_{1} }}^{1}$$
(30)
$$- 2e_{2} u_{2} + b_{2} J_{{x_{2} }}^{2} = 0\quad \therefore\,u_{2}^{*} = \frac{{b_{2} }}{{2e_{2} }}J_{{x_{2} }}^{2}$$
(31)
Putting (30) and (31) in (28), we get:
$$\begin{aligned} - J_{t}^{1} & = \left( {a_{1} x_{1} - a_{2} x_{2} } \right) - e_{1} \left( {\frac{{b_{1}^{2} }}{{4e_{1}^{2} }}\left( {J_{{x_{1} }}^{1} } \right)^{2} } \right) + e_{2} \left( {\frac{{b_{2}^{2} }}{{4e_{2}^{2} }}\left( {J_{{x_{2} }}^{2} } \right)^{2} } \right) + J_{{x_{1} }}^{1} b_{1} \left( {\frac{{b_{1} }}{{2e_{1} }}J_{{x_{1} }}^{1} } \right) + J_{{x_{2} }}^{1} b_{2} \left( {\frac{{b_{2} }}{{2e_{2} }}J_{{x_{2} }}^{2} } \right) \\ & = \left( {a_{1} x_{1} - a_{2} x_{2} } \right) + \frac{{b_{1}^{2} }}{{4e_{1} }}\left( {J_{{x_{1} }}^{1} } \right)^{2} + \frac{{b_{2}^{2} }}{{4e_{2} }}\left( {J_{{x_{2} }}^{2} } \right)^{2} + \frac{{b_{2}^{2} }}{{2e_{2} }}\left( {J_{{x_{2} }}^{1} J_{{x_{2} }}^{2} } \right) \\ \end{aligned}$$
(32)
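The substitution leading to (32) can be checked symbolically. The following is a minimal SymPy sketch, treating the partial derivatives \(J_{x_1}^1\), \(J_{x_2}^1\), and \(J_{x_2}^2\) as free symbols (the symbol names are ours):

```python
import sympy as sp

a1, a2, e1, e2, b1, b2 = sp.symbols('a1 a2 e1 e2 b1 b2', positive=True)
x1, x2, J1x1, J1x2, J2x2 = sp.symbols('x1 x2 J1x1 J1x2 J2x2')

# Optimal controls from the first-order conditions (30) and (31)
u1 = b1 / (2 * e1) * J1x1
u2 = b2 / (2 * e2) * J2x2

# Right-hand side of the HJB equation (28) with the controls substituted
rhs = (a1 * x1 - a2 * x2) - e1 * u1**2 + e2 * u2**2 \
    + J1x1 * b1 * u1 + J1x2 * b2 * u2

# Simplified form claimed in (32)
expected = (a1 * x1 - a2 * x2) + b1**2 / (4 * e1) * J1x1**2 \
    + b2**2 / (4 * e2) * J2x2**2 + b2**2 / (2 * e2) * J1x2 * J2x2

assert sp.simplify(rhs - expected) == 0
```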
From the structure of (25), we conjecture the following value functions:
$$J^{i} = K\left( t \right)\left( {a_{i} x_{i} - a_{j} x_{j} } \right) + Q\left( t \right),\,i \ne j\,\, {\text{and}}\,\, i,j = 1, 2.$$
(33)
From (33), we obtain (34)–(37):
$$J_{t}^{1} = K\left( t \right)^{'} \left( {a_{1} x_{1} - a_{2} x_{2} } \right) + Q\left( t \right)^{'}$$
(34)
$$J_{{x_{1} }}^{1} = a_{1} K\left( t \right)$$
(35)
$$J_{{x_{2} }}^{1} = - a_{2} K\left( t \right)$$
(36)
$$J_{{x_{2} }}^{2} = a_{2} K\left( t \right)$$
(37)
By plugging (34)–(37) into (38), we have (39):
$$- J_{t}^{1} = \left( {a_{1} x_{1} - a_{2} x_{2} } \right) + \frac{{b_{1}^{2} }}{{4e_{1} }}\left( {J_{{x_{1} }}^{1} } \right)^{2} + \frac{{b_{2}^{2} }}{{4e_{2} }}\left( {J_{{x_{2} }}^{2} } \right)^{2} + \frac{{b_{2}^{2} }}{{2e_{2} }}\left( {J_{{x_{2} }}^{1} J_{{x_{2} }}^{2} } \right)$$
(38)
$$\begin{aligned} - K\left( t \right)^{'} \left( {a_{1} x_{1} - a_{2} x_{2} } \right) - Q\left( t \right)^{'} & = \left( {a_{1} x_{1} - a_{2} x_{2} } \right) + \frac{{b_{1}^{2} }}{{4e_{1} }}\left( {a_{1} K\left( t \right)} \right)^{2} + \frac{{b_{2}^{2} }}{{4e_{2} }}\left( {a_{2} K\left( t \right)} \right)^{2} \\ & \quad +\,\frac{{b_{2}^{2} }}{{2e_{2} }}\left( { - a_{2} K\left( t \right)} \right)\left( {a_{2} K\left( t \right)} \right) \\ \end{aligned}$$
(39)
We rearrange (39) to get (40):
$$\therefore\,\left[ {\frac{{K\left( t \right)^{2} }}{4}\left( {\frac{{a_{1}^{2} b_{1}^{2} }}{{e_{1} }} - \frac{{a_{2}^{2} b_{2}^{2} }}{{e_{2} }}} \right) + Q\left( t \right)^{'} } \right] + \left( {a_{1} x_{1} - a_{2} x_{2} } \right)\left( {1 + K\left( t \right)^{'} } \right) = 0$$
(40)
For (40) to hold for all \(x_{1}\) and \(x_{2}\), we must have:
$$1 + K\left( t \right)^{'} = 0\,{\text{and}}\,\frac{{K\left( t \right)^{2} }}{4}\left( {\frac{{a_{1}^{2} b_{1}^{2} }}{{e_{1} }} - \frac{{a_{2}^{2} b_{2}^{2} }}{{e_{2} }}} \right) + Q\left( t \right)^{'} = 0$$
(41)
The first condition in (41) determines \(K\left( t \right)\) as in (42):
$$\begin{aligned} K\left( t \right)^{'} & = - 1,\,K(t) = - t + c_{1} ,\,{\text{where}}\,c_{1} \,{\text{is}}\,{\text{a}}\,{\text{constant}} \\ K\left( T \right) & = - T + c_{1} = H,\,{\text{therefore}}\,c_{1} = T + H \\ \end{aligned}$$
$$K(t) = - t + T + H$$
(42)
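As a quick check, \(K(t) = -t + T + H\) indeed satisfies both \(K'(t) = -1\) and the terminal condition \(K(T) = H\); a minimal SymPy sketch:

```python
import sympy as sp

t, T, H = sp.symbols('t T H')
K = -t + T + H   # candidate solution from (42)

assert K.diff(t) == -1                       # K'(t) = -1, first condition in (41)
assert sp.simplify(K.subs(t, T) - H) == 0    # terminal condition K(T) = H
```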
Using (42) in (35) and (37), we derive:
$$J_{{x_{1} }}^{1} = a_{1} K\left( t \right) = a_{1} \left( { - t + T + H} \right);\,J_{{x_{2} }}^{2} = a_{2} K\left( t \right) = a_{2} \left( { - t + T + H} \right)$$
(43)
With (43), we determine the optimal control variables as in (44) and (45):
$$u_{1}^{*} = \frac{{b_{1} }}{{2e_{1} }}J_{{x_{1} }}^{1} = \frac{{a_{1} b_{1} }}{{2e_{1} }}\left( { - t + T + H} \right)$$
(44)
$$u_{2}^{*} = \frac{{b_{2} }}{{2e_{2} }}J_{{x_{2} }}^{2} = \frac{{a_{2} b_{2} }}{{2e_{2} }}\left( { - t + T + H} \right)$$
(45)
Using (44) and (45), we derive the optimal state variables as in (46) and (47):
$$\begin{aligned} & x_{1}^{'} = b_{1} u_{1} = b_{1} \frac{{a_{1} b_{1} }}{{2e_{1} }}\left( { - t + T + H} \right) = \frac{{a_{1} b_{1}^{2} }}{{2e_{1} }}\left( { - t + T + H} \right) \\ & x_{1} \left( t \right) = - \frac{{a_{1} b_{1}^{2} }}{{4e_{1} }}t^{2} + \frac{{a_{1} b_{1}^{2} }}{{2e_{1} }}\left( {T + H} \right)t + k_{1} ,\,{\text{where}}\,k_{1} \,{\text{is}}\,{\text{a}}\,{\text{constant}} \\ \end{aligned}$$
$$x_{1} \left( 0 \right) = k_{1} = 0\,\therefore \,x_{1} \left( t \right) = - \frac{{a_{1} b_{1}^{2} }}{{4e_{1} }}t^{2} + \frac{{a_{1} b_{1}^{2} }}{{2e_{1} }}\left( {T + H} \right)t$$
(46)
$$\begin{aligned} & x_{2}^{'} = b_{2} u_{2} = b_{2} \frac{{a_{2} b_{2} }}{{2e_{2} }}\left( { - t + T + H} \right) = \frac{{a_{2} b_{2}^{2} }}{{2e_{2} }}\left( { - t + T + H} \right) \\ & x_{2} \left( t \right) = - \frac{{a_{2} b_{2}^{2} }}{{4e_{2} }}t^{2} + \frac{{a_{2} b_{2}^{2} }}{{2e_{2} }}\left( {T + H} \right)t + k_{2} ,\,{\text{where}}\,k_{2} \,{\text{is}}\,{\text{a}}\,{\text{constant}} \\ \end{aligned}$$
$$x_{2} \left( 0 \right) = k_{2} = 0\,\therefore \,x_{2} \left( t \right) = - \frac{{a_{2} b_{2}^{2} }}{{4e_{2} }}t^{2} + \frac{{a_{2} b_{2}^{2} }}{{2e_{2} }}\left( {T + H} \right)t$$
(47)
Using (41) and (42), we determine \(Q\left( t \right)\) as in (48):
$$\begin{aligned} Q\left( t \right)^{'} & = - \frac{{K\left( t \right)^{2} }}{4}\left( {\frac{{a_{1}^{2} b_{1}^{2} }}{{e_{1} }} - \frac{{a_{2}^{2} b_{2}^{2} }}{{e_{2} }}} \right) = - \frac{{\left( { - t + T + H} \right)^{2} }}{4}\left( {\frac{{a_{1}^{2} b_{1}^{2} }}{{e_{1} }} - \frac{{a_{2}^{2} b_{2}^{2} }}{{e_{2} }}} \right) \\ Q\left( t \right) & = \frac{{\left( { - t + T + H} \right)^{3} }}{12}\left( {\frac{{a_{1}^{2} b_{1}^{2} }}{{e_{1} }} - \frac{{a_{2}^{2} b_{2}^{2} }}{{e_{2} }}} \right) + c_{2} ,\,{\text{where}}\,c_{2} \,{\text{is}}\,{\text{a}}\,{\text{constant}} \\ Q\left( T \right) & = \frac{{H^{3} }}{12}\left( {\frac{{a_{1}^{2} b_{1}^{2} }}{{e_{1} }} - \frac{{a_{2}^{2} b_{2}^{2} }}{{e_{2} }}} \right) + c_{2} = 0 \\ \end{aligned}$$
$$\therefore\,Q\left( t \right) = \frac{{\left( { - t + T + H} \right)^{3} - H^{3} }}{12}\left( {\frac{{a_{1}^{2} b_{1}^{2} }}{{e_{1} }} - \frac{{a_{2}^{2} b_{2}^{2} }}{{e_{2} }}} \right)$$
(48)
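\(Q(t)\) in (48) can likewise be verified against the second condition in (41) and the terminal condition \(Q(T) = 0\); a SymPy sketch (the symbol names are ours):

```python
import sympy as sp

t, T, H = sp.symbols('t T H')
a1, a2, b1, b2, e1, e2 = sp.symbols('a1 a2 b1 b2 e1 e2', positive=True)

S = a1**2 * b1**2 / e1 - a2**2 * b2**2 / e2   # shorthand for the bracketed factor
K = -t + T + H                                 # K(t) from (42)
Q = ((-t + T + H)**3 - H**3) / 12 * S          # Q(t) from (48)

# Q must satisfy Q'(t) = -K(t)^2/4 * S (second condition in (41)) and Q(T) = 0
assert sp.simplify(Q.diff(t) + K**2 / 4 * S) == 0
assert sp.simplify(Q.subs(t, T)) == 0
```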
Verification of the Value Function: To verify the optimal solution, we need to confirm that the value function conjectured in (33) is correct. We do this by computing Player 1’s optimal total payoff in two ways: using the objective function in (26) and using the value function in (33).
A. Using the objective function in (26)
$$\begin{aligned} V^{1} & = \int_{0}^{T} {\left( {\left( {a_{1} x_{1} - a_{2} x_{2} } \right) - e_{1} u_{1}^{2} + e_{2} u_{2}^{2} } \right)dt + H\left( {a_{1} x_{1} \left( T \right) - a_{2} x_{2} \left( T \right)} \right)} \\ & = \int_{0}^{T} \left\{ {\left[ {a_{1} \left( { - \frac{{a_{1} b_{1}^{2} }}{{4e_{1} }}t^{2} + \frac{{a_{1} b_{1}^{2} }}{{2e_{1} }}\left( {T + H} \right)t} \right) - a_{2} \left( { - \frac{{a_{2} b_{2}^{2} }}{{4e_{2} }}t^{2} + \frac{{a_{2} b_{2}^{2} }}{{2e_{2} }}\left( {T + H} \right)t} \right)} \right]} \right. \\ & \quad \left. { - e_{1} \left[ {\frac{{a_{1} b_{1} }}{{2e_{1} }}\left( { - t + T + H} \right)} \right]^{2} + e_{2} \left[ {\frac{{a_{2} b_{2} }}{{2e_{2} }}\left( { - t + T + H} \right)} \right]^{2} } \right\}dt \\ & \quad + H\left\{ {a_{1} \left[ { - \frac{{a_{1} b_{1}^{2} }}{{4e_{1} }}T^{2} + \frac{{a_{1} b_{1}^{2} }}{{2e_{1} }}\left( {T + H} \right)T} \right] - a_{2} \left[ { - \frac{{a_{2} b_{2}^{2} }}{{4e_{2} }}T^{2} + \frac{{a_{2} b_{2}^{2} }}{{2e_{2} }}\left( {T + H} \right)T} \right]} \right\} \\ & = \frac{{T\left( {3H^{2} + 3HT + T^{2} } \right)}}{12}\left( {\frac{{a_{1}^{2} b_{1}^{2} }}{{e_{1} }} - \frac{{a_{2}^{2} b_{2}^{2} }}{{e_{2} }}} \right) = \frac{{\left( {T + H} \right)^{3} - H^{3} }}{12}\left( {\frac{{a_{1}^{2} b_{1}^{2} }}{{e_{1} }} - \frac{{a_{2}^{2} b_{2}^{2} }}{{e_{2} }}} \right). \\ \end{aligned}$$
(49)
B. Using the value function in (33)
$$\begin{aligned} & J^{1} \left( {0, x_{1} \left( 0 \right), x_{2} \left( 0 \right)} \right) = K\left( 0 \right)\left( {a_{1} x_{1} \left( 0 \right) - a_{2} x_{2} \left( 0 \right)} \right) + Q\left( 0 \right) = K\left( 0 \right)\left( {a_{1} \times 0 - a_{2} \times 0} \right) \\ & + Q\left( 0 \right) = Q\left( 0 \right) = \frac{{\left( {T + H} \right)^{3} - H^{3} }}{12}\left( {\frac{{a_{1}^{2} b_{1}^{2} }}{{e_{1} }} - \frac{{a_{2}^{2} b_{2}^{2} }}{{e_{2} }}} \right) \\ \end{aligned}$$
(50)
Since (49) is the same as (50), we know that the value function is correct and we have obtained an optimal solution.
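The equality of (49) and (50) can also be reproduced symbolically; a SymPy sketch using the optimal controls and state trajectories from (44)–(47):

```python
import sympy as sp

t, T, H = sp.symbols('t T H', positive=True)
a1, a2, b1, b2, e1, e2 = sp.symbols('a1 a2 b1 b2 e1 e2', positive=True)

# Optimal states (46), (47) and controls (44), (45)
x1 = -a1 * b1**2 / (4 * e1) * t**2 + a1 * b1**2 / (2 * e1) * (T + H) * t
x2 = -a2 * b2**2 / (4 * e2) * t**2 + a2 * b2**2 / (2 * e2) * (T + H) * t
u1 = a1 * b1 / (2 * e1) * (-t + T + H)
u2 = a2 * b2 / (2 * e2) * (-t + T + H)

# Player 1's total payoff computed from the objective function, as in (49)
V1 = sp.integrate((a1 * x1 - a2 * x2) - e1 * u1**2 + e2 * u2**2, (t, 0, T)) \
    + H * (a1 * x1.subs(t, T) - a2 * x2.subs(t, T))

# Value function evaluated at t = 0, as in (50)
J1_0 = ((T + H)**3 - H**3) / 12 * (a1**2 * b1**2 / e1 - a2**2 * b2**2 / e2)

assert sp.simplify(V1 - J1_0) == 0
```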
1.2 Open-Loop Nash Equilibrium
Now let us find an open-loop Nash equilibrium solution for the same differential games problem. Applying the maximum principle to the Hamiltonians in (51) and (52), we obtain the optimal controls in (57) and (58) and the optimal state variables in (59) and (60).
$$H^{1} = \left( {a_{1} x_{1} - a_{2} x_{2} } \right) - e_{1} u_{1}^{2} + e_{2} u_{2}^{2} + \lambda_{1}^{1} b_{1} u_{1} + \lambda_{2}^{1} b_{2} u_{2}$$
(51)
$$H^{2} = \left( { - a_{1} x_{1} + a_{2} x_{2} } \right) + e_{1} u_{1}^{2} - e_{2} u_{2}^{2} + \lambda_{1}^{2} b_{1} u_{1} + \lambda_{2}^{2} b_{2} u_{2}$$
(52)
$$\lambda_{1}^{{1{\prime }}} = - H_{{x_{1} }}^{1} = - a_{1} ,\,\lambda_{1}^{1} \left( T \right) = a_{1} H\quad \therefore\,\lambda_{1}^{1} \left( t \right) = a_{1} \left( { - t + T + H} \right)$$
(53)
$$\lambda_{2}^{{1{\prime }}} = - H_{{x_{2} }}^{1} = a_{2} ,\,\lambda_{2}^{1} \left( T \right) = - a_{2} H\quad \therefore\, \lambda_{2}^{1} \left( t \right) = a_{2} \left( {t - T - H} \right)$$
(54)
$$\lambda_{1}^{{2{\prime }}} = - H_{{x_{1} }}^{2} = a_{1} ,\,\lambda_{1}^{2} \left( T \right) = - a_{1} H\quad \therefore\, \lambda_{1}^{2} \left( t \right) = a_{1} \left( {t - T - H} \right)$$
(55)
$$\lambda_{2}^{{2{\prime }}} = - H_{{x_{2} }}^{2} = - a_{2} ,\,\lambda_{2}^{2} \left( T \right) = a_{2} H\quad \therefore\, \lambda_{2}^{2} \left( t \right) = a_{2} \left( { - t + T + H} \right)$$
(56)
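The costate solutions can be verified directly against their adjoint equations and transversality conditions. A SymPy sketch for Player 1’s costates in (53) and (54) (Player 2’s are symmetric):

```python
import sympy as sp

t, T, H, a1, a2 = sp.symbols('t T H a1 a2')

lam11 = a1 * (-t + T + H)   # candidate for lambda^1_1 from (53)
lam21 = a2 * (t - T - H)    # candidate for lambda^1_2 from (54)

# Adjoint equations and transversality conditions for Player 1
assert sp.simplify(lam11.diff(t) + a1) == 0          # lambda^1_1' = -a1
assert sp.simplify(lam21.diff(t) - a2) == 0          # lambda^1_2' =  a2
assert sp.simplify(lam11.subs(t, T) - a1 * H) == 0   # lambda^1_1(T) =  a1*H
assert sp.simplify(lam21.subs(t, T) + a2 * H) == 0   # lambda^1_2(T) = -a2*H
```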
$$- 2e_{1} u_{1} + b_{1} \lambda_{1}^{1} = 0\quad \therefore\, u_{1}^{*} = \frac{{b_{1} }}{{2e_{1} }}\lambda_{1}^{1} = \frac{{a_{1} b_{1} }}{{2e_{1} }}\left( { - t + T + H} \right)$$
(57)
$$- 2e_{2} u_{2} + b_{2} \lambda_{2}^{2} = 0\quad \therefore\, u_{2}^{*} = \frac{{b_{2} }}{{2e_{2} }}\lambda_{2}^{2} = \frac{{a_{2} b_{2} }}{{2e_{2} }}\left( { - t + T + H} \right)$$
(58)
$$\begin{aligned} & x_{1}^{'} = b_{1} u_{1} = \frac{{a_{1} b_{1}^{2} }}{{2e_{1} }}\left( { - t + T + H} \right),\,x_{1} \left( 0 \right) = 0 \\ & \therefore \,x_{1} \left( t \right) = - \frac{{a_{1} b_{1}^{2} }}{{4e_{1} }}t^{2} + \frac{{a_{1} b_{1}^{2} }}{{2e_{1} }}\left( {T + H} \right)t \\ \end{aligned}$$
(59)
$$\begin{aligned} & x_{2}^{'} = b_{2} u_{2} = \frac{{a_{2} b_{2}^{2} }}{{2e_{2} }}\left( { - t + T + H} \right),\,x_{2} \left( 0 \right) = 0 \\ & \therefore \,x_{2} \left( t \right) = - \frac{{a_{2} b_{2}^{2} }}{{4e_{2} }}t^{2} + \frac{{a_{2} b_{2}^{2} }}{{2e_{2} }}\left( {T + H} \right)t \\ \end{aligned}$$
(60)
For this particular differential games problem, we confirm that the open-loop Nash equilibrium solution coincides with the feedback Nash equilibrium solution.
1.3 Numerical Example
In order to visualize the optimal dynamics, we conduct a numerical analysis using the parameter values in Table 5.
Figure 10 shows the optimal dynamics of the objective function and the state variables, and Figure 11 depicts the optimal dynamics of the control variables.
Appendix 2
1.1 Derivation of Theorem 1
Using the information in (19), we can reformulate each element of (17) as follows:
(i)
\(J_{t}^{1} = \dot{h}_{1} \left( t \right)e^{{ - \lambda \left( {x_{1} + x_{2} } \right)}} + \dot{l}_{1} \left( t \right)e^{{ - \lambda x_{2} }}\)
(ii)
\(\begin{aligned} & \left( {J_{{x_{1} }}^{1} } \right)^{2} \frac{{3\alpha_{1}^{2} }}{{2K_{1} }}t^{{ - n_{1} }} e^{rt} e^{{\lambda \left( {x_{1} + x_{2} } \right)}} = \left( { - \lambda h_{1} \left( t \right)e^{{ - \lambda \left( {x_{1} + x_{2} } \right)}} } \right)^{2} \frac{{3\alpha_{1}^{2} }}{{2K_{1} }}t^{{ - n_{1} }} e^{rt} e^{{\lambda \left( {x_{1} + x_{2} } \right)}} \\ & = \lambda^{2} \frac{{3\alpha_{1}^{2} }}{{2K_{1} }}t^{{ - n_{1} }} e^{rt} e^{{\lambda \left( {x_{1} + x_{2} } \right)}} h_{1}^{2} \\ \end{aligned}\)
(iii)
\(\begin{aligned} & J_{{x_{2} }}^{1} J_{{x_{2} }}^{2} \frac{{\alpha_{2}^{2} }}{{K_{2} }}t^{{ - n_{2} }} e^{rt} e^{{\lambda \left( {x_{1} + x_{2} } \right)}} \\ & = \left( { - \lambda h_{1} \left( t \right)e^{{ - \lambda \left( {x_{1} + x_{2} } \right)}} - \lambda l_{1} \left( t \right)e^{{ - \lambda x_{2} }} } \right)\left( { - \lambda h_{2} \left( t \right)e^{{ - \lambda \left( {x_{1} + x_{2} } \right)}} } \right)\frac{{\alpha_{2}^{2} }}{{K_{2} }}t^{{ - n_{2} }} e^{rt} e^{{\lambda \left( {x_{1} + x_{2} } \right)}} \\ & = \lambda^{2} \frac{{\alpha_{2}^{2} }}{{K_{2} }}t^{{ - n_{2} }} e^{rt} e^{{ - \lambda \left( {x_{1} + x_{2} } \right)}} h_{1} h_{2} + \lambda^{2} \frac{{\alpha_{2}^{2} }}{{K_{2} }}t^{{ - n_{2} }} e^{rt} e^{{ - \lambda x_{2} }} l_{1} h_{2} \\ \end{aligned}\)
(iv)
\(\begin{aligned} & J_{{x_{2} }}^{2} \frac{{\alpha_{2}^{2} }}{{K_{2} }}R_{0} t^{{m - n_{2} }} e^{rt} \lambda \left( {e^{{\lambda x_{1} }} - 1} \right) = - \lambda h_{2} \left( t \right)e^{{ - \lambda \left( {x_{1} + x_{2} } \right)}} \frac{{\alpha_{2}^{2} }}{{K_{2} }}R_{0} t^{{m - n_{2} }} e^{rt} \lambda \left( {e^{{\lambda x_{1} }} - 1} \right) = \\ & - \lambda^{2} \frac{{\alpha_{2}^{2} }}{{K_{2} }}R_{0} t^{{m - n_{2} }} e^{rt} e^{{ - \lambda x_{2} }} h_{2} + \lambda^{2} \frac{{\alpha_{2}^{2} }}{{K_{2} }}R_{0} t^{{m - n_{2} }} e^{rt} e^{{ - \lambda \left( {x_{1} + x_{2} } \right)}} h_{2} \\ \end{aligned}\)
(v)
\(mR_{0} t^{m - 1} \left( {1 - e^{{ - \lambda x_{1} }} } \right)e^{{ - \lambda x_{2} }} = mR_{0} t^{m - 1} e^{{ - \lambda x_{2} }} - mR_{0} t^{m - 1} e^{{ - \lambda \left( {x_{1} + x_{2} } \right)}}\)
By putting (i)–(v) into (17) and rearranging, we obtain:
$$E_{1} \left( t \right)e^{{ - \lambda \left( {x_{1} + x_{2} } \right)}} + E_{2} \left( t \right)e^{{ - \lambda x_{2} }} = 0,$$
(61)
where \(E_{1} \left( t \right) = \dot{h}_{1} + \lambda^{2} e^{rt} \left( {\frac{{3\alpha_{1}^{2} }}{{2K_{1} }}t^{{ - n_{1} }} h_{1}^{2} + \frac{{\alpha_{2}^{2} }}{{K_{2} }}t^{{ - n_{2} }} h_{1} h_{2} } \right) + \lambda^{2} e^{rt} \frac{{\alpha_{2}^{2} }}{{K_{2} }}R_{0} t^{{m - n_{2} }} h_{2} + mR_{0} t^{m - 1}\) and
$$\begin{aligned} E_{2} \left( t \right) & = \dot{l}_{1} + \lambda^{2} e^{rt} \frac{{\alpha_{2}^{2} }}{{K_{2} }}t^{{ - n_{2} }} l_{1} h_{2} - \lambda^{2} e^{rt} \frac{{\alpha_{2}^{2} }}{{K_{2} }}R_{0} t^{{m - n_{2} }} h_{2} - mR_{0} t^{m - 1} \\ & = \dot{l}_{1} + \lambda^{2} e^{rt} \frac{{\alpha_{2}^{2} }}{{K_{2} }}t^{{ - n_{2} }} \left( {l_{1} - R_{0} t^{m} } \right)h_{2} - mR_{0} t^{m - 1} . \\ \end{aligned}$$
In order for (61) to hold for all \(x_{1}\) and \(x_{2}\), both \(E_{1} \left( t \right)\) and \(E_{2} \left( t \right)\) must be zero. In addition, we can confirm that \(l_{1} \left( t \right) = R_{0} t^{m}\) makes \(E_{2} \left( t \right) = 0\).
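That \(l_{1}(t) = R_{0} t^{m}\) makes \(E_{2}(t) = 0\) is immediate, since the middle term of \(E_{2}\) vanishes when \(l_{1} - R_{0}t^{m} = 0\); a SymPy sketch, with \(h_{2}\) left as an arbitrary function (symbol names are ours):

```python
import sympy as sp

t, lam, r, R0, m, n2, K2, alpha2 = sp.symbols(
    't lam r R0 m n2 K2 alpha2', positive=True)
h2 = sp.Function('h2')(t)   # h2(t) stays unspecified

l1 = R0 * t**m              # candidate l1(t)
E2 = l1.diff(t) \
    + lam**2 * sp.exp(r * t) * alpha2**2 / K2 * t**(-n2) * (l1 - R0 * t**m) * h2 \
    - m * R0 * t**(m - 1)

assert sp.simplify(E2) == 0
```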
1.2 Derivation of Corollary 1
Following steps similar to those for Firm 1, we can propose
$$J^{2} \left( {t,x_{1} ,x_{2} } \right) = h_{2} \left( t \right)e^{{ - \lambda \left( {x_{1} + x_{2} } \right)}} + l_{2} \left( t \right)e^{{ - \lambda x_{1} }} .$$
Using (19) and rearranging appropriately, we have the following condition for optimality: \(G_{1} \left( t \right)e^{{ - \lambda \left( {x_{1} + x_{2} } \right)}} + G_{2} \left( t \right)e^{{ - \lambda x_{1} }} = 0\), where
$$\begin{aligned} G_{1} \left( t \right) & = \dot{h}_{2} + \lambda^{2} e^{rt} \left( {\frac{{3\alpha_{2}^{2} }}{{2K_{2} }}t^{{ - n_{2} }} h_{2}^{2} + \frac{{\alpha_{1}^{2} }}{{K_{1} }}t^{{ - n_{1} }} h_{1} h_{2} } \right) + \lambda^{2} e^{rt} \frac{{\alpha_{1}^{2} }}{{K_{1} }}R_{0} t^{{m - n_{1} }} h_{1} + mR_{0} t^{m - 1} \,{\text{and}} \\ G_{2} \left( t \right) & = \dot{l}_{2} + \lambda^{2} e^{rt} \frac{{\alpha_{1}^{2} }}{{K_{1} }}t^{{ - n_{1} }} l_{2} h_{1} - \lambda^{2} e^{rt} \frac{{\alpha_{1}^{2} }}{{K_{1} }}R_{0} t^{{m - n_{1} }} h_{1} - mR_{0} t^{m - 1} \\ & = \dot{l}_{2} + \lambda^{2} e^{rt} \frac{{\alpha_{1}^{2} }}{{K_{1} }}t^{{ - n_{1} }} \left( {l_{2} - R_{0} t^{m} } \right)h_{1} - mR_{0} t^{m - 1} . \\ \end{aligned}$$
Again, it must be that \(G_{1} \left( t \right) = G_{2} \left( t \right) = 0\). We can also show that \(l_{2} = R_{0} t^{m}\) makes \(G_{2} \left( t \right) = 0\).
1.3 Derivation of Theorem 3
Consider the case \(i = 1\), using the HJB equation for the infinite-horizon problem:
$$\begin{aligned} & rJ = \max_{{u_{1} }} \left\{ {\left( {x_{1} + x_{2} } \right)\left[ {a_{1} - (x_{1} + x_{2} )} \right] + s_{1} x_{1} \left[ {b_{1} - x_{1} } \right] - \rho u_{1} - \frac{1}{2}u_{1}^{2} + J_{{x_{1} }} \left( {u_{1} - \delta x_{1} } \right)} \right. \\ & \quad + \left. {J_{{x_{2} }} (u_{2} - \delta x_{2} )} \right\} \\ \end{aligned}$$
(62)
Thus, we obtain: \(u_{1} = J_{{x_{1} }} - \rho ,\,u_{2} = J_{{x_{2} }}^{2} - \rho\).
We suggest the following value functions:
$$\begin{aligned} J & = \frac{\alpha }{2}\left( {x_{1} + x_{2} } \right)^{2} + \beta_{1} \left( {x_{1} + x_{2} } \right) + \frac{\theta }{2}x_{1}^{2} + \phi_{1} x_{1} + \gamma_{1} \,{\text{for}}\,i = 1,\,{\text{and}} \\ J^{2} & = \frac{\alpha }{2}\left( {x_{1} + x_{2} } \right)^{2} + \beta_{2} \left( {x_{1} + x_{2} } \right) + \frac{\theta }{2}x_{2}^{2} + \phi_{2} x_{2} + \gamma_{2} \,{\text{for}}\,i = 2. \\ \end{aligned}$$
Take the appropriate partial derivatives and plug them into (62):
$$\begin{aligned} & J_{{x_{1} }} = \alpha \left( {x_{1} + x_{2} } \right) + \beta_{1} + \theta x_{1} + \phi_{1} \\ & J_{{x_{2} }} = \alpha \left( {x_{1} + x_{2} } \right) + \beta_{1} \\ & J_{{x_{2} }}^{2} = \alpha \left( {x_{1} + x_{2} } \right) + \beta_{2} + \theta x_{2} + \phi_{2} \\ & \therefore\, rJ = \left( {x_{1} + x_{2} } \right)\left[ {a_{1} - \left( {x_{1} + x_{2} } \right)} \right] + s_{1} x_{1} \left[ {b_{1} - x_{1} } \right] - \rho \left( {J_{{x_{1} }} - \rho } \right) - \frac{1}{2}\left( {J_{{x_{1} }} - \rho } \right)^{2} \\ & \quad + J_{{x_{1} }} \left( {J_{{x_{1} }} - \rho - \delta x_{1} } \right) + J_{{x_{2} }} (J_{{x_{2} }}^{2} - \rho - \delta x_{2} ) \\ & \therefore\, rJ = \left( {x_{1} + x_{2} } \right)\left[ {a_{1} - \left( {x_{1} + x_{2} } \right)} \right] + s_{1} x_{1} \left[ {b_{1} - x_{1} } \right] + \frac{1}{2}\left( {J_{{x_{1} }} } \right)^{2} - \rho J_{{x_{1} }} - \delta x_{1} J_{{x_{1} }} + \frac{1}{2}\rho^{2} \\ & \quad + J_{{x_{2} }} J_{{x_{2} }}^{2} - \rho J_{{x_{2} }} - \delta x_{2} J_{{x_{2} }} \\ \end{aligned}$$
For simplicity, we consider separate cases:
(1)
\(\begin{aligned} \frac{1}{2}\left( {J_{{x_{1} }} } \right)^{2} & = \frac{1}{2}\left\{ {\alpha^{2} \left( {x_{1} + x_{2} } \right)^{2} + \theta^{2} x_{1}^{2} + \left( {\beta_{1} + \phi_{1} } \right)^{2} + 2\alpha \theta x_{1} \left( {x_{1} + x_{2} } \right) + 2\alpha \left( {\beta_{1} } \right.} \right. \\ & \quad\quad\,\, \left. { + \phi_{1} )\left( {x_{1} + x_{2} } \right) + 2\theta \left( {\beta_{1} + \phi_{1} } \right)x_{1} } \right\} \\ \end{aligned}\)
(2)
\(\rho J_{{x_{1} }} = \alpha \rho \left( {x_{1} + x_{2} } \right) + \theta \rho x_{1} + \rho (\beta_{1} + \phi_{1} )\)
(3)
\(\delta x_{1} J_{{x_{1} }} = \alpha \delta x_{1} \left( {x_{1} + x_{2} } \right) + \theta \delta x_{1}^{2} + \delta \left( {\beta_{1} + \phi_{1} } \right)x_{1}\)
(4)
\(\begin{aligned} J_{{x_{2} }} J_{{x_{2} }}^{2} & = \alpha^{2} \left( {x_{1} + x_{2} } \right)^{2} + \alpha \theta x_{2} \left( {x_{1} + x_{2} } \right) + \alpha \left( {\beta_{2} + \phi_{2} } \right)\left( {x_{1} + x_{2} } \right) + \alpha \beta_{1} \left( {x_{1} + x_{2} } \right) \\ & \quad + \theta \beta_{1} x_{2} + \beta_{1} (\beta_{2} + \phi_{2} ) \\ \end{aligned}\)
(5)
\(\rho J_{{x_{2} }} = \alpha \rho \left( {x_{1} + x_{2} } \right) + \rho \beta_{1}\)
(6)
\(\delta x_{2} J_{{x_{2} }} = \alpha \delta x_{2} \left( {x_{1} + x_{2} } \right) + \delta \beta_{1} x_{2}\)
Now,
$$\begin{aligned} & \therefore\, - r\frac{\alpha }{2}\left( {x_{1} + x_{2} } \right)^{2} - r\beta_{1} \left( {x_{1} + x_{2} } \right) - r\frac{\theta }{2}x_{1}^{2} - r\phi_{1} x_{1} - r\gamma_{1} \\ & + a_{1} \left( {x_{1} + x_{2} } \right) - \left( {x_{1} + x_{2} } \right)^{2} + s_{1} b_{1} x_{1} - s_{1} x_{1}^{2} \\ & + \frac{1}{2}\alpha^{2} \left( {x_{1} + x_{2} } \right)^{2} + \frac{1}{2}\theta^{2} x_{1}^{2} + \frac{1}{2}\left( {\beta_{1} + \phi_{1} } \right)^{2} + \alpha \theta x_{1} (x_{1} + x_{2} ) \\ & + \alpha (\beta_{1} + \phi_{1} )(x_{1} + x_{2} ) + \theta (\beta_{1} + \phi_{1} )x_{1} \\ & - \alpha \rho \left( {x_{1} + x_{2} } \right) - \theta \rho x_{1} - \rho (\beta_{1} + \phi_{1} ) \\ & - \alpha \delta x_{1} \left( {x_{1} + x_{2} } \right) - \theta \delta x_{1}^{2} - \delta \left( {\beta_{1} + \phi_{1} } \right)x_{1} + \frac{1}{2}\rho^{2} \\ & + \alpha^{2} \left( {x_{1} + x_{2} } \right)^{2} + \alpha \theta x_{2} \left( {x_{1} + x_{2} } \right) + \alpha \left( {\beta_{2} + \phi_{2} } \right)\left( {x_{1} + x_{2} } \right) + \alpha \beta_{1} \left( {x_{1} + x_{2} } \right) + \theta \beta_{1} x_{2} \\ & + \beta_{1} \left( {\beta_{2} + \phi_{2} } \right) - \alpha \rho \left( {x_{1} + x_{2} } \right) - \rho \beta_{1} - \alpha \delta x_{2} \left( {x_{1} + x_{2} } \right) - \delta \beta_{1} x_{2} = 0 \\ \end{aligned}$$
$$\begin{aligned} & \therefore\, \left\{ { - r\frac{a}{2} - 1 + \frac{3}{2}\alpha^{2} } \right\}\left( {x_{1} + x_{2} } \right)^{2} \\ & + \left\{ { - r\beta_{1} + a_{1} + \alpha \left( {\beta_{1} + \phi_{1} } \right) - \alpha \rho + \alpha \left( {\beta_{2} + \phi_{2} } \right) + \alpha \beta_{1} - \alpha \rho } \right\}(x_{1} + x_{2} ) \\ & + \left\{ { - r\frac{\theta }{2} - s_{1} + \frac{1}{2}\theta^{2} - \theta \delta } \right\}x_{1}^{2} + \left\{ { - r\phi_{1} + s_{1} b_{1} + \theta \left( {\beta_{1} + \phi_{1} } \right) - \theta \rho - \delta (\beta_{1} + \phi_{1} )} \right\}x_{1} \\ & + \left\{ {\alpha \theta - \alpha \delta } \right\}x_{1} \left( {x_{1} + x_{2} } \right) + \left\{ {\alpha \theta - \alpha \delta } \right\}x_{2} \left( {x_{1} + x_{2} } \right) + \left\{ {\theta \beta_{1} - \delta \beta_{1} } \right\}x_{2} \\ & + \left\{ { - r\gamma_{1} + \frac{1}{2}\left( {\beta_{1} + \phi_{1} } \right)^{2} - \rho \left( {\beta_{1} + \phi_{1} } \right) + \frac{1}{2}\rho^{2} + \beta_{1} \left( {\beta_{2} + \phi_{2} } \right) - \rho \beta_{1} } \right\} = 0 \\ \end{aligned}$$
$$\therefore\, Ax_{1}^{2} + Bx_{1} x_{2} + Cx_{2}^{2} + Dx_{1} + Ex_{2} + F = 0$$
Therefore, we must have: A = B = C = D = E = F = 0, where
$$\begin{aligned} A & = \left\{ { - r\frac{\alpha }{2} - 1 + \frac{3}{2}\alpha^{2} } \right\} + \left\{ { - r\frac{\theta }{2} - s_{1} + \frac{1}{2}\theta^{2} - \theta \delta } \right\} + \left\{ {\alpha \theta - \alpha \delta } \right\} \\ B & = 2\left\{ { - r\frac{\alpha }{2} - 1 + \frac{3}{2}\alpha^{2} } \right\} + \left\{ {\alpha \theta - \alpha \delta } \right\} + \left\{ {\alpha \theta - \alpha \delta } \right\} \\ C & = \left\{ { - r\frac{\alpha }{2} - 1 + \frac{3}{2}\alpha^{2} } \right\} + \left\{ {\alpha \theta - \alpha \delta } \right\} \\ D & = \left\{ { - r\beta_{1} + a_{1} + \alpha \left( {2\beta_{1} + \beta_{2} + \phi_{1} + \phi_{2} - 2\rho } \right)} \right\} + \left\{ { - r\phi_{1} + s_{1} b_{1} + \left( {\theta - \delta } \right)\left( {\beta_{1} + \phi_{1} } \right) - \theta \rho } \right\} \\ E & = \left\{ { - r\beta_{1} + a_{1} + \alpha \left( {2\beta_{1} + \beta_{2} + \phi_{1} + \phi_{2} - 2\rho } \right)} \right\} + \left\{ {\theta \beta_{1} - \delta \beta_{1} } \right\} \\ F & = - r\gamma_{1} + \frac{1}{2}\left( {\beta_{1} + \phi_{1} } \right)^{2} - \rho \left( {\beta_{1} + \phi_{1} } \right) + \frac{1}{2}\rho^{2} + \beta_{1} \left( {\beta_{2} + \phi_{2} } \right) - \rho \beta_{1} . \\ \end{aligned}$$
From A = C = 0, it must be that \(\left\{ { - r\frac{\theta }{2} - s_{1} + \frac{1}{2}\theta^{2} - \theta \delta } \right\} = \frac{1}{2}\theta^{2} - \left( {\frac{r}{2} + \delta } \right)\theta - s_{1} = 0\).
Therefore, \(\theta = \frac{{(r + 2\delta ) \pm \sqrt {\left( {r + 2\delta } \right)^{2} + 8s_{1} } }}{2}\). We also need to impose \(\theta < 0\), thus \(\theta = \frac{{\left( {r + 2\delta } \right) - \sqrt {\left( {r + 2\delta } \right)^{2} + 8s_{1} } }}{2}.\)
From B = C = 0, \(\left\{ { - r\frac{\alpha }{2} - 1 + \frac{3}{2}\alpha^{2} } \right\} + \left\{ {\alpha \theta - \alpha \delta } \right\} = \frac{3}{2}\alpha^{2} - \left( {\frac{r}{2} + \delta - \theta } \right)\alpha - 1 = 0\), and \(3\alpha^{2} - \left( {r + 2\delta - 2\theta } \right)\alpha - 2 = 0\). Therefore, \(\alpha = \frac{{\left( {r + 2\delta - 2\theta } \right) \pm \sqrt {\left( {r + 2\delta - 2\theta } \right)^{2} + 24} }}{6}\). Due to the condition \(\alpha < 0\), \(\alpha = \frac{{\left( {r + 2\delta - 2\theta } \right) - \sqrt {\left( {r + 2\delta - 2\theta } \right)^{2} + 24} }}{6}\).
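Both roots can be checked against their quadratics symbolically (the sign conditions \(\theta < 0\) and \(\alpha < 0\) hold because \(\sqrt{(r+2\delta)^{2} + 8s_{1}} > r + 2\delta\) for \(s_{1} > 0\), and similarly for \(\alpha\)); a SymPy sketch:

```python
import sympy as sp

r, delta, s1 = sp.symbols('r delta s1', positive=True)

# theta solves (1/2)theta^2 - (r/2 + delta)theta - s1 = 0; take the negative root
theta = ((r + 2 * delta) - sp.sqrt((r + 2 * delta)**2 + 8 * s1)) / 2
assert sp.expand(theta**2 / 2 - (r / 2 + delta) * theta - s1) == 0

# alpha solves 3*alpha^2 - (r + 2*delta - 2*theta)*alpha - 2 = 0; negative root
alpha = ((r + 2 * delta - 2 * theta)
         - sp.sqrt((r + 2 * delta - 2 * theta)**2 + 24)) / 6
assert sp.expand(3 * alpha**2 - (r + 2 * delta - 2 * theta) * alpha - 2) == 0
```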
Taking similar steps for \(i = 2\), we obtain the following:
$$\begin{aligned} D^{2} & = \left\{ { - r\beta_{2} + a_{2} + \alpha \left( {2\beta_{2} + \beta_{1} + \phi_{1} + \phi_{2} - 2\rho } \right)} \right\} + \left\{ { - r\phi_{2} + s_{2} b_{2} + \left( {\theta - \delta } \right)\left( {\beta_{2} + \phi_{2} } \right) - \theta \rho } \right\} \\ E^{2} & = \left\{ { - r\beta_{2} + a_{2} + \alpha \left( {2\beta_{2} + \beta_{1} + \phi_{1} + \phi_{2} - 2\rho } \right)} \right\} + \left\{ {\theta \beta_{2} - \delta \beta_{2} } \right\} \\ F^{2} & = - r\gamma_{2} + \frac{1}{2}\left( {\beta_{2} + \phi_{2} } \right)^{2} - \rho \left( {\beta_{2} + \phi_{2} } \right) + \frac{1}{2}\rho^{2} + \beta_{2} \left( {\beta_{1} + \phi_{1} } \right) - \rho \beta_{2} \\ \end{aligned}$$
Now we have four equations, \(D = 0\), \(E = 0\), \(D^{2} = 0\), and \(E^{2} = 0\), and four unknowns, \(\beta_{1}\), \(\beta_{2}\), \(\phi_{1}\), and \(\phi_{2}\), to determine. In matrix form:
$$K\left[ {\begin{array}{*{20}c} {\beta_{1} } \\ {\beta_{2} } \\ {\phi_{1} } \\ {\phi_{2} } \\ \end{array} } \right] = L,\,{\text{where}}\,K \equiv \left[ {\begin{array}{*{20}c} {2\alpha + \theta - r - \delta } & \alpha & {\alpha + \theta - r - \delta } & \alpha \\ {2\alpha + \theta - r - \delta } & \alpha & \alpha & \alpha \\ \alpha & {2\alpha + \theta - r - \delta } & \alpha & {\alpha + \theta - r - \delta } \\ \alpha & {2\alpha + \theta - r - \delta } & \alpha & \alpha \\ \end{array} } \right]$$
and \(L \equiv - \left[ {\begin{array}{*{20}c} {a_{1} - 2\alpha \rho + s_{1} b_{1} - \theta \rho } \\ {a_{1} - 2\alpha \rho } \\ {a_{2} - 2\alpha \rho + s_{2} b_{2} - \theta \rho } \\ {a_{2} - 2\alpha \rho } \\ \end{array} } \right]\)
The corresponding augmented matrix \(\left[ {K\,|\,L} \right]\) is

$$\left[ {\begin{array}{cccc|c} {2\alpha + \theta - r - \delta } & \alpha & {\alpha + \theta - r - \delta } & \alpha & {2\alpha \rho + \theta \rho - a_{1} - s_{1} b_{1} } \\ {2\alpha + \theta - r - \delta } & \alpha & \alpha & \alpha & {2\alpha \rho - a_{1} } \\ \alpha & {2\alpha + \theta - r - \delta } & \alpha & {\alpha + \theta - r - \delta } & {2\alpha \rho + \theta \rho - a_{2} - s_{2} b_{2} } \\ \alpha & {2\alpha + \theta - r - \delta } & \alpha & \alpha & {2\alpha \rho - a_{2} } \\ \end{array} } \right]$$

Subtracting row 1 from row 2 and row 3 from row 4 yields

$$\left[ {\begin{array}{cccc|c} {2\alpha + \theta - r - \delta } & \alpha & {\alpha + \theta - r - \delta } & \alpha & {2\alpha \rho + \theta \rho - a_{1} - s_{1} b_{1} } \\ 0 & 0 & {r + \delta - \theta } & 0 & {s_{1} b_{1} - \theta \rho } \\ \alpha & {2\alpha + \theta - r - \delta } & \alpha & {\alpha + \theta - r - \delta } & {2\alpha \rho + \theta \rho - a_{2} - s_{2} b_{2} } \\ 0 & 0 & 0 & {r + \delta - \theta } & {s_{2} b_{2} - \theta \rho } \\ \end{array} } \right]$$
\(\therefore\, \phi_{1} = \frac{{s_{1} b_{1} - \theta \rho }}{r + \delta - \theta }\) and \(\phi_{2} = \frac{{s_{2} b_{2} - \theta \rho }}{r + \delta - \theta }\)
$$\begin{aligned} & \left( {2\alpha + \theta - r - \delta } \right)\beta_{1} + \alpha \beta_{2} + \frac{{\left( {\alpha + \theta - r - \delta } \right)\left( {s_{1} b_{1} - \theta \rho } \right)}}{r + \delta - \theta } + \frac{{\alpha \left( {s_{2} b_{2} - \theta \rho } \right)}}{r + \delta - \theta } = 2\alpha \rho + \theta \rho - a_{1} - s_{1} b_{1} \\ & \alpha \beta_{1} + \left( {2\alpha + \theta - r - \delta } \right)\beta_{2} + \frac{{\alpha \left( {s_{1} b_{1} - \theta \rho } \right)}}{r + \delta - \theta } + \frac{{\left( {\alpha + \theta - r - \delta } \right)\left( {s_{2} b_{2} - \theta \rho } \right)}}{r + \delta - \theta } = 2\alpha \rho + \theta \rho - a_{2} - s_{2} b_{2} \\ \end{aligned}$$
Let’s define \(W_{i} = \frac{{\alpha \left( {2r\rho + 2\delta \rho - s_{1} b_{1} - s_{2} b_{2} } \right)}}{r + \delta - \theta } - a_{i}\).
Then, \(\left( {2\alpha + \theta - r - \delta } \right)\beta_{1} + \alpha \beta_{2} = W_{1}\) and \(\alpha \beta_{1} + \left( {2\alpha + \theta - r - \delta } \right)\beta_{2} = W_{2}\).
$$\begin{aligned} \therefore\, \beta_{1} & = \frac{2\alpha + \theta - r - \delta }{{\left( {2\alpha + \theta - r - \delta } \right)^{2} - \alpha^{2} }}W_{1} + \frac{ - \alpha }{{\left( {2\alpha + \theta - r - \delta } \right)^{2} - \alpha^{2} }}W_{2} \,{\text{and}} \\ \beta_{2} & = \frac{ - \alpha }{{\left( {2\alpha + \theta - r - \delta } \right)^{2} - \alpha^{2} }}W_{1} + \frac{2\alpha + \theta - r - \delta }{{\left( {2\alpha + \theta - r - \delta } \right)^{2} - \alpha^{2} }}W_{2} \\ \end{aligned}$$
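The 2×2 system for \(\beta_{1}\) and \(\beta_{2}\) and its closed-form solution can be checked with a symbolic linear solve (a sketch; \(W_{1}\) and \(W_{2}\) are kept abstract):

```python
import sympy as sp

alpha, theta, r, delta, W1, W2 = sp.symbols('alpha theta r delta W1 W2')
c = 2 * alpha + theta - r - delta   # common diagonal coefficient

# Solve the system c*b1 + alpha*b2 = W1, alpha*b1 + c*b2 = W2
M = sp.Matrix([[c, alpha], [alpha, c]])
beta = M.solve(sp.Matrix([W1, W2]))

# Closed-form expressions claimed in the text
beta1 = (c * W1 - alpha * W2) / (c**2 - alpha**2)
beta2 = (-alpha * W1 + c * W2) / (c**2 - alpha**2)

assert sp.simplify(beta[0] - beta1) == 0
assert sp.simplify(beta[1] - beta2) == 0
```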
Finally, from \(F,\,\gamma_{1} = \frac{1}{r}\left[ {\frac{1}{2}\left( {\beta_{1} + \phi_{1} } \right)^{2} - \rho \left( {\beta_{1} + \phi_{1} } \right) + \frac{1}{2}\rho^{2} + \beta_{1} \left( {\beta_{2} + \phi_{2} } \right) - \rho \beta_{1} } \right].\)
Likewise, from \(F^{2} ,\,\gamma_{2} = \frac{1}{r}\left[ {\frac{1}{2}\left( {\beta_{2} + \phi_{2} } \right)^{2} - \rho \left( {\beta_{2} + \phi_{2} } \right) + \frac{1}{2}\rho^{2} + \beta_{2} \left( {\beta_{1} + \phi_{1} } \right) - \rho \beta_{2} } \right]\).
$$\begin{aligned} & {\text{Now}},\,u_{1} = J_{{x_{1} }} - \rho = \alpha \left( {x_{1} + x_{2} } \right) + \theta x_{1} + \beta_{1} + \phi_{1} - \rho \,{\text{and}} \\ & u_{2} = J_{{x_{2} }}^{2} - \rho = \alpha \left( {x_{1} + x_{2} } \right) + \theta x_{2} + \beta_{2} + \phi_{2} - \rho . \\ \end{aligned}$$
To obtain explicit expressions for the state trajectories, we solve the following system of differential equations:
$$\begin{aligned} \dot{x}_{1} &= u_{1} - \delta x_{1} = \alpha \left( {x_{1} + x_{2} } \right) + \left( {\theta - \delta } \right)x_{1} + \beta_{1} + \phi_{1} - \rho \\ & = \left( {\alpha + \theta - \delta } \right)x_{1} + \alpha x_{2} + \beta_{1}+ \phi_{1} - \rho \\ \end{aligned}$$
$$\begin{aligned} \dot{x}_{2} & = u_{2} - \delta x_{2} = \alpha \left( {x_{1} + x_{2} } \right) + \left( {\theta - \delta } \right)x_{2} + \beta_{2} + \phi_{2} - \rho \\ &= \alpha x_{1} + \left( {\alpha + \theta - \delta } \right)x_{2} + \beta_{2} + \phi_{2} - \rho . \\ \end{aligned}$$
The homogeneous equation and its characteristic roots:
$$\begin{aligned} & \ddot{x}_{1} - 2\left( {\alpha + \theta - \delta } \right)\dot{x}_{1} + \left[ {\left( {\alpha + \theta - \delta } \right)^{2} - \alpha^{2} } \right]x_{1} = 0 \\ & y^{2} - 2\left( {\alpha + \theta - \delta } \right)y + \left[ {\left( {\alpha + \theta - \delta } \right)^{2} - \alpha^{2} } \right] = 0 \\ & y = \left( {\alpha + \theta - \delta } \right) \pm \alpha . \\ \end{aligned}$$
Therefore, \(x_{1} = c_{1} e^{{\left( {2\alpha + \theta - \delta } \right)t}} + c_{2} e^{{\left( {\theta - \delta } \right)t}} + N_{1}\).
Since \(x_{2} = \frac{1}{\alpha }\left[ {\dot{x}_{1} - \left( {\alpha + \theta - \delta } \right)x_{1} - \left( {\beta_{1} + \phi_{1} - \rho } \right)} \right]\), we have \(x_{2} = c_{1} e^{{\left( {2\alpha + \theta - \delta } \right)t}} - c_{2} e^{{\left( {\theta - \delta } \right)t}} + N_{2}\).
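The characteristic roots above can be verified numerically: the homogeneous system \(\dot{x}_{1} = kx_{1} + \alpha x_{2}\), \(\dot{x}_{2} = \alpha x_{1} + kx_{2}\) with \(k = \alpha + \theta - \delta\) has eigenvalues \(k \pm \alpha\). The parameter values below are illustrative only.

```python
import math

# Roots of the characteristic polynomial y^2 - 2k*y + (k^2 - alpha^2) = 0,
# which should equal k + alpha and k - alpha.
# Illustrative parameter values, not taken from the model.
alpha, theta, delta = -0.4, 0.2, 0.10
k = alpha + theta - delta

disc = (2 * k) ** 2 - 4 * (k**2 - alpha**2)  # equals 4*alpha^2
y_plus = (2 * k + math.sqrt(disc)) / 2
y_minus = (2 * k - math.sqrt(disc)) / 2
```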
Now, a particular solution:
$$\begin{aligned} & 0 = \left( {\alpha + \theta - \delta } \right)x_{1} + \alpha x_{2} + \beta_{1} + \phi_{1} - \rho \\ & 0 = \alpha x_{1} + \left( {\alpha + \theta - \delta } \right)x_{2} + \beta_{2} + \phi_{2} - \rho \\ & x_{2} = \frac{{ - \alpha x_{1} - (\beta_{2} + \phi_{2} - \rho )}}{\alpha + \theta - \delta } \\ & \left( {\alpha + \theta - \delta } \right)x_{1} - \frac{{\alpha^{2} }}{\alpha + \theta - \delta }x_{1} - \frac{{\alpha \left( {\beta_{2} + \phi_{2} - \rho } \right)}}{\alpha + \theta - \delta } + \beta_{1} + \phi_{1} - \rho = 0 \\ & \left( {\alpha + \theta - \delta } \right)^{2} x_{1} - \alpha^{2} x_{1} - \alpha \left( {\beta_{2} + \phi_{2} - \rho } \right) + \left( {\alpha + \theta - \delta } \right)\left( {\beta_{1} + \phi_{1} - \rho } \right) = 0 \\ & x_{1} = \frac{{\alpha \left( {\beta_{2} + \phi_{2} - \rho } \right) - \left( {\alpha + \theta - \delta } \right)(\beta_{1} + \phi_{1} - \rho )}}{{\left( {\alpha + \theta - \delta } \right)^{2} - \alpha^{2} }} \\ & x_{2} = \frac{ - \alpha }{\alpha + \theta - \delta }\left( {\frac{{\alpha \left( {\beta_{2} + \phi_{2} - \rho } \right) - \left( {\alpha + \theta - \delta } \right)\left( {\beta_{1} + \phi_{1} - \rho } \right)}}{{\left( {\alpha + \theta - \delta } \right)^{2} - \alpha^{2} }}} \right) - \frac{{\beta_{2} + \phi_{2} - \rho }}{\alpha + \theta - \delta } \\ & x_{2} = \frac{{\alpha \left( {\beta_{1} + \phi_{1} - \rho } \right) - \left( {\alpha + \theta - \delta } \right)(\beta_{2} + \phi_{2} - \rho )}}{{\left( {\alpha + \theta - \delta } \right)^{2} - \alpha^{2} }} \\ \end{aligned}$$
Therefore,
$$\begin{aligned} x_{1} & = c_{1} e^{{\left( {2\alpha + \theta - \delta } \right)t}} + c_{2} e^{{\left( {\theta - \delta } \right)t}} + \frac{{\alpha \left( {\beta_{2} + \phi_{2} - \rho } \right) - \left( {\alpha + \theta - \delta } \right)(\beta_{1} + \phi_{1} - \rho )}}{{\left( {\alpha + \theta - \delta } \right)^{2} - \alpha^{2} }} \\ x_{2} & = c_{1} e^{{\left( {2\alpha + \theta - \delta } \right)t}} - c_{2} e^{{\left( {\theta - \delta } \right)t}} + \frac{{\alpha \left( {\beta_{1} + \phi_{1} - \rho } \right) - \left( {\alpha + \theta - \delta } \right)(\beta_{2} + \phi_{2} - \rho )}}{{\left( {\alpha + \theta - \delta } \right)^{2} - \alpha^{2} }} \\ \end{aligned}$$
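The particular solution can likewise be checked by substitution into the stationary system \(0 = kx_{1} + \alpha x_{2} + p_{1}\), \(0 = \alpha x_{1} + kx_{2} + p_{2}\), where \(k = \alpha + \theta - \delta\) and \(p_{i} = \beta_{i} + \phi_{i} - \rho\). The numbers below are placeholders.

```python
# Check the particular (stationary) solution against the system it solves.
# All values are illustrative, not model-calibrated.
alpha, theta, delta = -0.4, 0.2, 0.10
p1, p2 = 0.6, -0.3

k = alpha + theta - delta
den = k**2 - alpha**2  # assumed nonzero

# Closed form from the text:
x1 = (alpha * p2 - k * p1) / den
x2 = (alpha * p1 - k * p2) / den

# Both stationary equations should hold exactly:
res1 = k * x1 + alpha * x2 + p1
res2 = alpha * x1 + k * x2 + p2
```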
To determine \(c_{1}\) and \(c_{2}\), assume \(x_{1} \left( 0 \right) = x_{10}\) and \(x_{2} \left( 0 \right) = x_{20}\).
(In particular, setting \(x_{1} \left( 0 \right) = x_{2} \left( 0 \right) = 0\) pins down \(c_{1}\) and \(c_{2}\).)
1.4 Derivation of Theorem 4
Now, consider the perfect coordination (i.e., collusive) case:
$$\begin{aligned} J & = \mathop \smallint \limits_{0}^{\infty } e^{ - rt} \left\{ {\left( {x_{1} + x_{2} } \right)\left[ {a_{1} + a_{2} - 2\left( {x_{1} + x_{2} } \right)} \right] + s_{1} x_{1} \left[ {b_{1} - x_{1} } \right] + s_{2} x_{2} \left[ {b_{2} - x_{2} } \right]} \right. \\ & \quad \left. { - \rho \left( {u_{1} + u_{2} } \right) - \frac{1}{2}\left( {u_{1}^{2} + u_{2}^{2} } \right)} \right\}dt \\ \end{aligned}$$
$$\begin{aligned} \dot{x}_{1} & = u_{1} - \delta x_{1} \\ \dot{x}_{2} & = u_{2} - \delta x_{2} \\ \end{aligned}$$
The current-value Hamiltonian is:
$$\begin{aligned} H & = \left( {x_{1} + x_{2} } \right)\left[ {a_{1} + a_{2} - 2\left( {x_{1} + x_{2} } \right)} \right] + s_{1} x_{1} \left[ {b_{1} - x_{1} } \right] + s_{2} x_{2} \left[ {b_{2} - x_{2} } \right] - \rho \left( {u_{1} + u_{2} } \right) \\ & \quad - \frac{1}{2}\left( {u_{1}^{2} + u_{2}^{2} } \right) + \lambda_{1} \left( {u_{1} - \delta x_{1} } \right) + \lambda_{2} \left( {u_{2} - \delta x_{2} } \right) \\ \end{aligned}$$
The necessary conditions are:
$$\begin{aligned} & \frac{\partial H}{{\partial u_{1} }} = - \rho - u_{1} + \lambda_{1} = 0,\,u_{1} = \lambda_{1} - \rho ;\,\frac{\partial H}{{\partial u_{2} }} = - \rho - u_{2} + \lambda_{2} = 0,\,u_{2} = \lambda_{2} - \rho \\ & \dot{\lambda }_{1} = r\lambda_{1} - \frac{\partial H}{{\partial x_{1} }} = \left( {r + \delta } \right)\lambda_{1} + \left( {2s_{1} + 4} \right)x_{1} + 4x_{2} - (a_{1} + a_{2} + s_{1} b_{1} ). \\ \end{aligned}$$
Likewise, \(\dot{\lambda }_{2} = r\lambda_{2} - \frac{\partial H}{{\partial x_{2} }} = \left( {r + \delta } \right)\lambda_{2} + 4x_{1} + \left( {2s_{2} + 4} \right)x_{2} - (a_{1} + a_{2} + s_{2} b_{2} )\).
Now, we have
$$\begin{aligned} \dot{x}_{1} & = - \delta x_{1} + \lambda_{1} - \rho \\ \dot{x}_{2} & = - \delta x_{2} + \lambda_{2} - \rho \\ \dot{\lambda }_{1} & = \left( {2s_{1} + 4} \right)x_{1} + 4x_{2} + \left( {r + \delta } \right)\lambda_{1} - \left( {a_{1} + a_{2} + s_{1} b_{1} } \right) \\ \dot{\lambda }_{2} & = 4x_{1} + \left( {2s_{2} + 4} \right)x_{2} + \left( {r + \delta } \right)\lambda_{2} - \left( {a_{1} + a_{2} + s_{2} b_{2} } \right) \\ \end{aligned}$$
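The costate equations can be checked against the Hamiltonian by finite differences: since \(H\) is quadratic in the states, a central difference recovers \(\partial H/\partial x_{i}\) essentially exactly. The evaluation point and parameters below are arbitrary.

```python
# Finite-difference check that lambda_i' = r*lambda_i - dH/dx_i matches
# the displayed costate equations. All numbers are illustrative.
r, delta, rho = 0.05, 0.10, 0.3
a1, a2, b1, b2, s1, s2 = 1.0, 1.2, 0.9, 1.1, 1.5, 0.8
x1, x2, u1, u2, l1, l2 = 0.4, 0.7, 0.2, 0.1, 0.6, 0.5

def H(x1, x2):
    # Current-value Hamiltonian from the text.
    return ((x1 + x2) * (a1 + a2 - 2 * (x1 + x2))
            + s1 * x1 * (b1 - x1) + s2 * x2 * (b2 - x2)
            - rho * (u1 + u2) - 0.5 * (u1**2 + u2**2)
            + l1 * (u1 - delta * x1) + l2 * (u2 - delta * x2))

h = 1e-6
dH_dx1 = (H(x1 + h, x2) - H(x1 - h, x2)) / (2 * h)
dH_dx2 = (H(x1, x2 + h) - H(x1, x2 - h)) / (2 * h)

lam1_dot_text = (r + delta) * l1 + (2*s1 + 4) * x1 + 4 * x2 - (a1 + a2 + s1 * b1)
lam2_dot_text = (r + delta) * l2 + 4 * x1 + (2*s2 + 4) * x2 - (a1 + a2 + s2 * b2)

err1 = abs((r * l1 - dH_dx1) - lam1_dot_text)
err2 = abs((r * l2 - dH_dx2) - lam2_dot_text)
```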
For the homogeneous system, the characteristic equation is:
$$\left[ {\begin{array}{*{20}c} {\dot{x}_{1} } \\ {\dot{x}_{2} } \\ {\dot{\lambda }_{1} } \\ {\dot{\lambda }_{2} } \\ \end{array} } \right] = \left[ {\begin{array}{*{20}c} { - \delta } & 0 & 1 & 0 \\ 0 & { - \delta } & 0 & 1 \\ {2s_{1} + 4} & 4 & {r + \delta } & 0 \\ 4 & {2s_{2} + 4} & 0 & {r + \delta } \\ \end{array} } \right]\left[ {\begin{array}{*{20}c} {x_{1} } \\ {x_{2} } \\ {\lambda_{1} } \\ {\lambda_{2} } \\ \end{array} } \right],\quad \left| {\begin{array}{*{20}c} { - \delta - z} & 0 & 1 & 0 \\ 0 & { - \delta - z} & 0 & 1 \\ {2s_{1} + 4} & 4 & {r + \delta - z} & 0 \\ 4 & {2s_{2} + 4} & 0 & {r + \delta - z} \\ \end{array} } \right| = 0$$
$$\left( { - \delta - z} \right)\left| {\begin{array}{*{20}c} { - \delta - z} & 0 & 1 \\ 4 & {r + \delta - z} & 0 \\ {2s_{2} + 4} & 0 & {r + \delta - z} \\ \end{array} } \right| + \left| {\begin{array}{*{20}c} 0 & { - \delta - z} & 1 \\ {2s_{1} + 4} & 4 & 0 \\ 4 & {2s_{2} + 4} & {r + \delta - z} \\ \end{array} } \right| = 0$$
$$\begin{aligned} & \left( { - \delta - z} \right)\left\{ {\left( { - \delta - z} \right)\left( {r + \delta - z} \right)^{2} - \left( {2s_{2} + 4} \right)\left( {r + \delta - z} \right)} \right\} + \left( {\delta + z} \right)\left( {2s_{1} + 4} \right)\left( {r + \delta - z} \right) \\ & + \left( {2s_{1} + 4} \right)\left( {2s_{2} + 4} \right) - 16 = 0 \\ \end{aligned}$$
$$\begin{aligned} & \left\{ {\left( {\delta + z} \right)\left( {r + \delta - z} \right)} \right\}^{2} + \left( {2s_{1} + 2s_{2} + 8} \right)\left\{ {\left( {\delta + z} \right)\left( {r + \delta - z} \right)} \right\} + \left( {2s_{1} + 4} \right)\left( {2s_{2} + 4} \right) - 16 = 0 \\ & \left( {\delta + z} \right)\left( {r + \delta - z} \right) = \frac{{ - \left( {2s_{1} + 2s_{2} + 8} \right) \pm \sqrt {\left( {2s_{1} + 2s_{2} + 8} \right)^{2} - 4\left\{ {\left( {2s_{1} + 4} \right)\left( {2s_{2} + 4} \right) - 16} \right\}} }}{2} \\ \end{aligned}$$
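The reduction of the quartic to a quadratic in \(q = \left( {\delta + z} \right)\left( {r + \delta - z} \right)\) can be verified by evaluating the \(4 \times 4\) determinant directly at a few test points. The parameter and \(z\) values below are arbitrary.

```python
# Check that the 4x4 characteristic determinant equals
#   q^2 + (2*s1 + 2*s2 + 8)*q + (2*s1+4)*(2*s2+4) - 16,
# with q = (delta + z)*(r + delta - z). Test points are arbitrary.
def det(m):
    # Recursive cofactor expansion along the first row (fine for 4x4).
    if len(m) == 1:
        return m[0][0]
    total = 0.0
    for j in range(len(m)):
        minor = [row[:j] + row[j+1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

r, delta, s1, s2 = 0.05, 0.10, 1.5, 0.8
checks = []
for z in (-1.3, 0.4, 2.2):
    M = [[-delta - z, 0.0, 1.0, 0.0],
         [0.0, -delta - z, 0.0, 1.0],
         [2*s1 + 4, 4.0, r + delta - z, 0.0],
         [4.0, 2*s2 + 4, 0.0, r + delta - z]]
    q = (delta + z) * (r + delta - z)
    quadratic = q**2 + (2*s1 + 2*s2 + 8) * q + (2*s1 + 4) * (2*s2 + 4) - 16
    checks.append(abs(det(M) - quadratic))
```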
and we can determine the four values of \(z\).
For instance, consider the case \(s_{1} = s_{2} = 1\):
$$\left[ {\left\{ {\left( {\delta + z} \right)\left( {r + \delta - z} \right)} \right\} + 2} \right]\left[ {\left\{ {\left( {\delta + z} \right)\left( {r + \delta - z} \right)} \right\} + 10} \right] = 0.$$
Therefore, \(\left( {\delta + z} \right)\left( {r + \delta - z} \right) = - 2\) or \(\left( {\delta + z} \right)\left( {r + \delta - z} \right) = - 10.\)
From \(\left( {\delta + z} \right)\left( {r + \delta - z} \right) = - 2\), we know \(z = \frac{{r \pm \sqrt {r^{2} + 4\left\{ {\delta \left( {r + \delta } \right) + 2} \right\}} }}{2}\).
From \(\left( {\delta + z} \right)\left( {r + \delta - z} \right) = - 10\), we know \(z = \frac{{r \pm \sqrt {r^{2} + 4\left\{ {\delta \left( {r + \delta } \right) + 10} \right\}} }}{2}\).
In order for the problem to have a solution, \(\mathop {\lim }\limits_{t \to \infty } x_{i} \left( t \right)\) must converge. Therefore, of the four possible values, we can keep only two, namely \(z_{1} = \frac{{r - \sqrt {r^{2} + 4\left\{ {\delta \left( {r + \delta } \right) + 2} \right\}} }}{2}\) and \(z_{3} = \frac{{r - \sqrt {r^{2} + 4\left\{ {\delta \left( {r + \delta } \right) + 10} \right\}} }}{2}\), which we obtain by setting \(A_{2} = A_{4} = B_{2} = B_{4} = C_{2} = C_{4} = D_{2} = D_{4} = 0\).
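These roots can be checked numerically: each displayed \(z\) should satisfy \(\left( {\delta + z} \right)\left( {r + \delta - z} \right) = q\) for \(q = -2\) and \(q = -10\), and the retained (minus-sign) roots should be negative so that \(x_{i}(t)\) converges. The \(r\) and \(\delta\) values below are illustrative.

```python
import math

# Verify the root formulas and the sign of the retained roots z1, z3.
# r and delta are illustrative placeholders.
r, delta = 0.05, 0.10
max_err = 0.0
retained_negative = True
for q in (-2.0, -10.0):
    disc = r**2 + 4 * (delta * (r + delta) - q)
    for sign in (+1, -1):
        z = (r + sign * math.sqrt(disc)) / 2
        # z should solve (delta + z)(r + delta - z) = q:
        max_err = max(max_err, abs((delta + z) * (r + delta - z) - q))
    z_minus = (r - math.sqrt(disc)) / 2  # the retained root (z1 or z3)
    retained_negative &= z_minus < 0
```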
Now, we have the following system.
$$\begin{aligned} x_{1} & = A_{1} e^{{z_{1} t}} + A_{3} e^{{z_{3} t}} + Constant \\ x_{2} & = B_{1} e^{{z_{1} t}} + B_{3} e^{{z_{3} t}} + Constant \\ \lambda_{1} & = C_{1} e^{{z_{1} t}} + C_{3} e^{{z_{3} t}} + Constant \\ \lambda_{2} & = D_{1} e^{{z_{1} t}} + D_{3} e^{{z_{3} t}} + Constant \\ \end{aligned}$$
In order to determine the constant terms, we can use the following set of relations.
$$0 = - \delta x_{1} + \lambda_{1} - \rho$$
(63)
$$0 = - \delta x_{2} + \lambda_{2} - \rho$$
(64)
$$\begin{aligned} 0 & = \left( {2s_{1} + 4} \right)x_{1} + 4x_{2} + \left( {r + \delta } \right)\lambda_{1} - \left( {a_{1} + a_{2} + s_{1} b_{1} } \right) \\ 0 & = 4x_{1} + \left( {2s_{2} + 4} \right)x_{2} + \left( {r + \delta } \right)\lambda_{2} - \left( {a_{1} + a_{2} + s_{2} b_{2} } \right) \\ \end{aligned}$$
After some rearrangement, we have the following:
$$\begin{aligned} Const\left( {x_{1} } \right) & = \frac{{4\left[ {\rho \left( {r + \delta } \right) - \left( {a_{1} + a_{2} + s_{2} b_{2} } \right)} \right] - \left[ {\rho \left( {r + \delta } \right) - \left( {a_{1} + a_{2} + s_{1} b_{1} } \right)} \right]\left[ {\delta \left( {r + \delta } \right) + 2s_{2} + 4} \right]}}{{\left[ {\delta \left( {r + \delta } \right) + 2s_{1} + 4} \right]\left[ {\delta \left( {r + \delta } \right) + 2s_{2} + 4} \right] - 16}} \\ Const\left( {x_{2} } \right) & = \frac{{4\left[ {\rho \left( {r + \delta } \right) - \left( {a_{1} + a_{2} + s_{1} b_{1} } \right)} \right] - \left[ {\rho \left( {r + \delta } \right) - \left( {a_{1} + a_{2} + s_{2} b_{2} } \right)} \right]\left[ {\delta \left( {r + \delta } \right) + 2s_{1} + 4} \right]}}{{\left[ {\delta \left( {r + \delta } \right) + 2s_{1} + 4} \right]\left[ {\delta \left( {r + \delta } \right) + 2s_{2} + 4} \right] - 16}}. \\ \end{aligned}$$
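The steady-state constants can be verified by substituting \(\lambda_{i} = \delta x_{i} + \rho\) from (63)–(64) into the stationary versions of the costate equations. The parameter values in the sketch below are arbitrary.

```python
# Check Const(x1), Const(x2) against the stationary system
#   0 = (2*s1+4)*x1 + 4*x2 + (r+delta)*(delta*x1 + rho) - (a1+a2+s1*b1)
#   0 = 4*x1 + (2*s2+4)*x2 + (r+delta)*(delta*x2 + rho) - (a1+a2+s2*b2).
# All parameter values are illustrative placeholders.
r, delta, rho = 0.05, 0.10, 0.3
a1, a2, b1, b2, s1, s2 = 1.0, 1.2, 0.9, 1.1, 1.5, 0.8

d1 = delta * (r + delta) + 2 * s1 + 4
d2 = delta * (r + delta) + 2 * s2 + 4
den = d1 * d2 - 16
m1 = rho * (r + delta) - (a1 + a2 + s1 * b1)
m2 = rho * (r + delta) - (a1 + a2 + s2 * b2)

# Closed forms from the text:
cx1 = (4 * m2 - m1 * d2) / den
cx2 = (4 * m1 - m2 * d1) / den

res1 = (2*s1 + 4) * cx1 + 4 * cx2 + (r + delta) * (delta * cx1 + rho) - (a1 + a2 + s1 * b1)
res2 = 4 * cx1 + (2*s2 + 4) * cx2 + (r + delta) * (delta * cx2 + rho) - (a1 + a2 + s2 * b2)
```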
Now we have:
$$\begin{aligned} x_{1} & = A_{1} e^{{z_{1} t}} + A_{3} e^{{z_{3} t}} + Const(x_{1} ) \\ x_{2} & = B_{1} e^{{z_{1} t}} + B_{3} e^{{z_{3} t}} + Const(x_{2} ) \\ \end{aligned}$$
From (63) and (64), we know:
$$\begin{aligned} & Const\left( {\lambda_{1} } \right) = \delta Const\left( {x_{1} } \right) + \rho \,{\text{and}}\,Const\left( {\lambda_{2} } \right) = \delta Const\left( {x_{2} } \right) + \rho . \\ & \lambda_{1} = C_{1} e^{{z_{1} t}} + C_{3} e^{{z_{3} t}} + Const(\lambda_{1} ) \\ & \lambda_{2} = D_{1} e^{{z_{1} t}} + D_{3} e^{{z_{3} t}} + Const(\lambda_{2} ) \\ \end{aligned}$$
Now, how do we determine \(A_{1} ,\,A_{3} ,\,B_{1} ,\,B_{3} ,\,C_{1} ,\,C_{3} ,\,D_{1} ,\,D_{3}\)?
Since \(\dot{x}_{1} = - \delta x_{1} + \lambda_{1} - \rho\) and \(\dot{x}_{2} = - \delta x_{2} + \lambda_{2} - \rho\),
$$\begin{aligned} & A_{1} z_{1} e^{{z_{1} t}} + A_{3} z_{3} e^{{z_{3} t}} = - \delta A_{1} e^{{z_{1} t}} - \delta A_{3} e^{{z_{3} t}} - \delta Const\left( {x_{1} } \right) + C_{1} e^{{z_{1} t}} + C_{3} e^{{z_{3} t}} \\ & + Const\left( {\lambda_{1} } \right) - \rho . \\ \end{aligned}$$
$$A_{1} \left( {z_{1} + \delta } \right) = C_{1} ,\,A_{3} \left( {z_{3} + \delta } \right) = C_{3} ,\,\delta Const\left( {x_{1} } \right) - Const\left( {\lambda_{1} } \right) + \rho = 0.$$
Likewise, \(B_{1} \left( {z_{1} + \delta } \right) = D_{1} ,\,B_{3} \left( {z_{3} + \delta } \right) = D_{3}\).
Therefore, we only need to determine \(A_{1} ,A_{3} ,B_{1} ,B_{3}\). Exact values of these parameters can be determined once specific initial conditions for the state variables are provided. For our purposes, we only need the steady-state values, which are given by the constant terms in the state variables.
1.5 Derivation of Observation 1
\(X_{1N} \le X_{1C}\) implies \(\frac{{2\rho \left( {r + \delta - \alpha } \right) - \left( {a_{1} + a_{2} } \right)}}{{\left( {r + \delta - 3\alpha } \right)\left( {2\alpha - \delta } \right)}} \le \frac{{2\left[ {\left( {a_{1} + a_{2} } \right) - \rho \left( {r + \delta } \right)} \right]}}{{\delta \left( {r + \delta } \right) + 8}}\).
That is, since \(\left( {r + \delta - 3\alpha } \right)\left( {2\alpha - \delta } \right) < 0\),
$$\left( {\delta \left( {r + \delta } \right) + 8} \right)\left[ {2\rho \left( {r + \delta - \alpha } \right) - \left( {a_{1} + a_{2} } \right)} \right] \ge 2\left[ {\left( {a_{1} + a_{2} } \right) - \rho \left( {r + \delta } \right)} \right]\left( {r + \delta - 3\alpha } \right)\left( {2\alpha - \delta } \right)$$
(65)
Now,
$$\begin{aligned} RHS & = 2\left[ {\left( {a_{1} + a_{2} } \right) - \rho \left( {r + \delta } \right)} \right]\left( {r + \delta - 3\alpha } \right)\left( {2\alpha - \delta } \right) \\ & = 2\left[ {\left( {a_{1} + a_{2} } \right) - \rho \left( {r + \delta } \right)} \right]\left[ { - 6\alpha^{2} + \alpha \left( {2r + 5\delta } \right) - \delta \left( {r + \delta } \right)} \right] \\ \end{aligned}$$
and \(3\alpha^{2} = \left( {r + 2\delta } \right)\alpha + 2\) (see p. 261 in Dockner et al. (2000)).
$$\therefore\, RHS = 2\left[ {\left( {a_{1} + a_{2} } \right) - \rho \left( {r + \delta } \right)} \right]\left( {\delta \alpha - 4 - \delta \left( {r + \delta } \right)} \right)$$
Then, from (65)
$$\begin{aligned} & \left( {\delta \left( {r + \delta } \right) + 8} \right)\left[ {2\rho \left( {r + \delta } \right) - \left( {a_{1} + a_{2} } \right)} \right] - 2\alpha \rho \left( {\delta \left( {r + \delta } \right) + 8} \right) \\ & \quad \ge 2\left[ {\left( {a_{1} + a_{2} } \right) - \rho \left( {r + \delta } \right)} \right]\left( {\delta \alpha - 4 - \delta \left( {r + \delta } \right)} \right) \\ & \left( {\delta \left( {r + \delta } \right) + 8} \right)\left[ {2\rho \left( {r + \delta } \right) - \left( {a_{1} + a_{2} } \right)} \right] + 2\left[ {\left( {a_{1} + a_{2} } \right) - \rho \left( {r + \delta } \right)} \right]\delta \left( {r + \delta } \right) \\ & \quad \ge 2\left[ {\left( {a_{1} + a_{2} } \right) - \rho \left( {r + \delta } \right)} \right]\left( {\delta \alpha - 4} \right) + 2\alpha \rho \left( {\delta \left( {r + \delta } \right) + 8} \right) \\ & \delta \left( {r + \delta } \right)\left( {a_{1} + a_{2} } \right) + 16\rho \left( {r + \delta } \right) - 8\left( {a_{1} + a_{2} } \right) \\ & \quad \ge 2\delta \alpha \left( {a_{1} + a_{2} } \right) - 8\left( {a_{1} + a_{2} } \right) + 8\rho \left( {r + \delta } \right) + 16\alpha \rho \\ & \delta \left( {r + \delta } \right)\left( {a_{1} + a_{2} } \right) + 8\rho \left( {r + \delta } \right) \ge 2\alpha \left\{ {8\rho + \delta \left( {a_{1} + a_{2} } \right)} \right\} \\ & \left( {r + \delta } \right)\left\{ {8\rho + \delta \left( {a_{1} + a_{2} } \right)} \right\} \ge 2\alpha \left\{ {8\rho + \delta \left( {a_{1} + a_{2} } \right)} \right\} \\ \end{aligned}$$
Therefore, \(X_{1N} \le X_{1C}\) if and only if \(\frac{(r + \delta )}{2} \ge \alpha\).
Since \(\alpha\) is negative for all \(r\) and \(\delta\), \(\frac{r + \delta }{2} \ge \alpha\) always holds, and so does \(X_{1N} \le X_{1C}\).
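Observation 1 can be spot-checked numerically. Taking \(\alpha\) as the negative root of \(3\alpha^{2} = \left( {r + 2\delta } \right)\alpha + 2\) (the relation cited above from Dockner et al. 2000), the sketch below confirms, over an illustrative grid of \((r, \delta)\) values, that \(\alpha < 0\), that the factor \(\left( {r + \delta - 3\alpha } \right)\left( {2\alpha - \delta } \right)\) is negative, and that \(\frac{r + \delta }{2} \ge \alpha\).

```python
import math

# Spot-check Observation 1 over a grid of illustrative (r, delta) values.
ok = True
for r in (0.01, 0.05, 0.2):
    for delta in (0.02, 0.1, 0.5):
        # Negative root of 3*alpha^2 - (r + 2*delta)*alpha - 2 = 0:
        disc = (r + 2 * delta) ** 2 + 24
        alpha = ((r + 2 * delta) - math.sqrt(disc)) / 6
        ok &= alpha < 0
        ok &= (r + delta - 3 * alpha) * (2 * alpha - delta) < 0
        ok &= (r + delta) / 2 >= alpha
```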
Appendix 3
See Appendix Tables 6 and 7.