
Equilibrium for a Time-Inconsistent Stochastic Linear–Quadratic Control System with Jumps and Its Application to the Mean-Variance Problem

Journal of Optimization Theory and Applications

Abstract

This paper studies a class of time-inconsistent linear–quadratic control problems in a more general framework with stochastic coefficients and random jumps. The time inconsistency comes from the dependence of the terminal cost on the current state, as well as from the presence of a quadratic term of the expected terminal state in the objective functional. Instead of seeking a globally optimal control, we look for a time-consistent, locally optimal equilibrium solution within the class of open-loop controls. A general necessary and sufficient condition for equilibrium controls is derived via a flow of forward–backward stochastic differential equations. The paper further develops a new methodology to cope with the mathematical difficulties arising from the presence of stochastic coefficients and random jumps. As an application, we study a mean-variance portfolio selection problem in a jump-diffusion financial market; an explicit equilibrium investment strategy in the deterministic-coefficients case is obtained and shown to be unique.


References

  1. Strotz, R.H.: Myopia and inconsistency in dynamic utility maximization. Rev. Econ. Stud. 23(3), 165–180 (1955)


  2. Lin, X., Qian, Y.: Time-consistent mean-variance reinsurance-investment strategy for insurers under CEV model. Scand. Actuar. J. 2016(7), 646–671 (2016)


  3. Zeng, Y., Li, Z.: Optimal time-consistent investment and reinsurance policies for mean-variance insurers. Insur. Math. Econ. 49(1), 145–154 (2011)


  4. Ekeland, I., Pirvu, T.A.: Investment and consumption without commitment. Math. Financ. Econ. 2(1), 57–86 (2008)


  5. Marín-Solano, J., Navas, J.: Consumption and portfolio rules for time-inconsistent investors. Eur. J. Oper. Res. 201(3), 860–872 (2010)


  6. Björk, T., Murgoci, A., Zhou, X.Y.: Mean-variance portfolio optimization with state dependent risk aversion. Math. Finance 24(1), 1–24 (2014)


  7. Yong, J.: Time-inconsistent optimal control problems and the equilibrium HJB equation. Math. Control Related Fields 2(3), 271–329 (2012)


  8. Björk, T., Murgoci, A.: A general theory of Markovian time inconsistent stochastic control problems. SSRN 1694759 (2010)

  9. Yong, J.: A deterministic linear quadratic time-inconsistent optimal control problem. Math. Control Related Fields 1(1), 83–118 (2011)


  10. Yong, J.: Linear-quadratic optimal control problems for mean-field stochastic differential equations-time consistent solutions. Trans. Am. Math. Soc. 369(8), 5467–5523 (2017)


  11. Hu, Y., Jin, H., Zhou, X.Y.: Time-inconsistent stochastic linear-quadratic control. SIAM J. Control Optim. 50(3), 1548–1572 (2012)


  12. Hu, Y., Jin, H., Zhou, X.Y.: Time-inconsistent stochastic linear-quadratic control: characterization and uniqueness of equilibrium. SIAM J. Control Optim. 55(2), 1261–1279 (2017)


  13. Alia, I., Chighoub, F., Sohail, A.: The maximum principle in time-inconsistent LQ equilibrium control problem for jump diffusions. Serdica Math. J. 42, 103–138 (2016a)


  14. Wu, Z., Zhuang, Y.: Partially observed time-inconsistent stochastic linear-quadratic control with random jumps. Optim. Control Appl. Methods 39(1), 230–247 (2018)


  15. Sun, Z., Guo, J., Zhang, X.: Maximum principle for Markov regime-switching forward backward stochastic control system with jumps and relation to dynamic programming. J. Optim. Theory Appl. 176(2), 319–350 (2018)


  16. Sun, Z., Kemajou-Brown, I., Menoukeu-Pamen, O.: A risk-sensitive maximum principle for a Markov regime-switching jump-diffusion system and applications. ESAIM Control Optim. Calc. Var. 24(3), 985–1013 (2018)


  17. Sun, Z., Menoukeu-Pamen, O.: The maximum principles for partially observed risk-sensitive optimal controls of Markov regime-switching jump-diffusion system. Stoch. Anal. Appl. (2018). https://doi.org/10.1080/07362994.2018.1465824

  18. Hu, Y., Huang, J., Li, X.: Equilibrium for time-inconsistent stochastic linear–quadratic control under constraint. arXiv preprint arXiv:1703.09415 (2017)

  19. Situ, R.: Theory of Stochastic Differential Equations with Jumps and Applications. Springer, New York (2005)


  20. Tang, S., Li, X.: Necessary conditions for optimal control of stochastic systems with random jumps. SIAM J. Control Optim. 32(5), 1447–1475 (1994)


  21. Zhang, X., Sun, Z., Xiong, J.: A general stochastic maximum principle for a Markov regime switching jump-diffusion model of mean-field type. SIAM J. Control Optim. 56(4), 2563–2592 (2018)


  22. Shen, Y., Siu, T.K.: The maximum principle for a jump-diffusion mean-field model and its application to the mean-variance problem. Nonlinear Anal. 86(1), 58–73 (2013)


  23. Alia, I., Chighoub, F., Sohail, A.: A characterization of equilibrium strategies in continuous-time mean-variance problems for insurers. Insur. Math. Econom. 68, 212–223 (2016b)



Acknowledgements

The authors would like to thank the referees for their careful reading of the paper and helpful suggestions. This work was supported by the National Natural Science Foundation of China (NSFC Grant Nos. 11571189, 11701087, 61773411) and Shandong Provincial Natural Science Foundation, China.

Author information


Corresponding author

Correspondence to Zhongyang Sun.


Communicated by Negash G. Medhin.

Appendix

In this appendix, we provide an essential estimate used in the proof of Proposition 3.2. To ease the exposition, we consider only the case \(n=1\); the extension to the multidimensional case is straightforward. Unless otherwise specified, C denotes a positive constant that may differ from line to line in the following estimates.

Lemma A.1

For each \(t\in [0,T]\), let \((\varPsi (s))_{s\in [t,T]}\) be a progressively measurable process, such that, for any \(k\ge 1\)

$$\begin{aligned} {\mathbb {E}}_t\bigg [\sup \limits _{s\in [t,T]}\big |\varPsi (s)\big |^{k}\bigg ]\le C. \end{aligned}$$
(43)

Then, there exists a function \(\rho : \varOmega \times ]0,\infty [\rightarrow ]0,\infty [\) with \(\rho (\varepsilon )\downarrow 0\) as \(\varepsilon \downarrow 0,\ a.s.\), such that

$$\begin{aligned} \int _t^T\big |{\mathbb {E}}_t[\varPsi (s)Y^\varepsilon (s)] \big |^2\mathrm{d}s\le \varepsilon \rho (\varepsilon ). \end{aligned}$$
(44)

Proof

Define an auxiliary process \((\varLambda (s))_{s\in [t,T]}\) with \(\varLambda (t)=1\) by:

$$\begin{aligned} \varLambda (s)= & {} \exp \bigg \{\int _t^s\bigg [-A(r)+\frac{1}{2}\sum _{i=1}^dC_i^2(r)\\&\quad +\sum _{j=1}^m\int _{{\mathbb {R}}_0}\Big (E_j(r,e)+\ln \frac{1}{1+E_j(r,e)}\Big )\nu _j(de)\bigg ]\mathrm{d}r\\&\quad -\sum _{i=1}^d\int _t^sC_i(r)\mathrm{d}W_i(r) +\sum _{j=1}^m\int _t^s\int _{{\mathbb {R}}_0}\ln \frac{1}{1+E_j(r,e)} {\widetilde{N}}_j(dr,de)\bigg \}. \end{aligned}$$

Since \(A, C_i, E_j\) are uniformly bounded, for any \(k\ge 1\), there exists a positive constant C, such that

$$\begin{aligned} {\mathbb {E}}_t\bigg [\sup \limits _{s\in [t,T]}\big (\big |\varLambda (s) \big |^{k}+\big |\varGamma (s)\big |^{k}\big )\bigg ]\le C, \end{aligned}$$
(45)

where \(\varGamma (s)=\varLambda (s)^{-1}\). Furthermore, in view of (43), (45), and Hölder’s inequality, we obtain

$$\begin{aligned} {\mathbb {E}}_t\bigg [\sup \limits _{s\in [t,T]}\big |\varPsi (s)\varGamma (s)\big |^{k}\bigg ]\le C. \end{aligned}$$
(46)
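The moment bound (45) can be checked numerically in a toy setting. The sketch below simulates \(\varLambda\) and \(\varGamma =\varLambda ^{-1}\) in the pure-diffusion case (\(d=1\), no jumps) with hypothetical constant coefficients A and C, and estimates \({\mathbb {E}}[\sup _s|\varLambda (s)|^{k}]\) and \({\mathbb {E}}[\sup _s|\varGamma (s)|^{k}]\) by Monte Carlo; the coefficient values, horizon, and discretization are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Monte Carlo sketch of the auxiliary exponential process Lambda and its
# inverse Gamma in the pure-diffusion case (d = 1, no jumps), with
# hypothetical constant coefficients A and C.  In this case
#   Lambda(s) = exp{ (-A + C^2/2)(s - t) - C (W(s) - W(t)) },  Gamma = 1/Lambda.
rng = np.random.default_rng(0)
A, C = 0.5, 0.3                      # assumed (uniformly bounded) coefficients
t, T, n_steps, n_paths = 0.0, 1.0, 200, 20_000
dt = (T - t) / n_steps

dW = rng.standard_normal((n_paths, n_steps)) * np.sqrt(dt)
W = np.cumsum(dW, axis=1)            # Brownian increments, cumulated
s = np.linspace(t + dt, T, n_steps)
log_lam = (-A + 0.5 * C**2) * s - C * W   # exponent of Lambda on each path
lam = np.exp(log_lam)
gam = 1.0 / lam

# Empirical version of the moment bound (45) for k = 4:
k = 4
m_lam = np.mean(np.max(lam, axis=1) ** k)
m_gam = np.mean(np.max(gam, axis=1) ** k)
print(m_lam, m_gam)                  # both finite and of moderate size
```

Since A and C are bounded, both empirical moments stay bounded as the grid is refined, in line with (45).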

Following the martingale representation theorem (see, e.g., [20, Lemma 2.3]), for every \(s\in [t,T]\), there exists a unique pair \((\xi (\cdot ;s),\beta (\cdot ,\cdot ;s))\in L^2_{{\mathcal {F}},p}(t,s;{\mathbb {R}}^d) \times F^2_{p}(t,s;{\mathbb {R}}^m)\) such that

$$\begin{aligned} \varPsi (s)\varGamma (s)&={\mathbb {E}}_t[\varPsi (s)\varGamma (s)] +\sum _{i=1}^d\int _t^s\xi _i(r;s)\mathrm{d}W_i(r)\nonumber \\&\quad +\sum _{j=1}^m\int _t^s\int _{{\mathbb {R}}_0}\beta _j(r,e;s){\widetilde{N}}_j(dr,de), \end{aligned}$$
(47)

where \(\xi (\cdot ;s)=(\xi _1(\cdot ;s),\ldots ,\xi _d(\cdot ;s))\) and \(\beta (\cdot ,\cdot ;s)=(\beta _1(\cdot ,\cdot ;s),\ldots ,\beta _m(\cdot ,\cdot ;s))\).

By (46), the Burkholder–Davis–Gundy inequality, and Doob’s maximal inequality, we get, for \(k>1\),

$$\begin{aligned}&{\mathbb {E}}\bigg [\bigg (\sum _{i=1}^d\int _t^s|\xi _i(r;s)|^2\mathrm{d}r +\sum _{j=1}^m\int _t^s\int _{{\mathbb {R}}_0}|\beta _j(r,e;s) |^2\nu _j(de)\mathrm{d}r\bigg )^{\frac{k}{2}}\bigg ]\nonumber \\&\quad \le C{\mathbb {E}}\bigg [\sup _{u\in [t,s]}\bigg |\sum _{i=1}^d\int _t^u\xi _i(r;s)\mathrm{d}W_i(r) +\sum _{j=1}^m\int _t^u\int _{{\mathbb {R}}_0}\beta _j(r,e;s) {\widetilde{N}}_j(dr,de)\bigg |^k\bigg ] \nonumber \\&\quad \le C\bigg (\frac{k}{k-1}\bigg )^k{\mathbb {E}} \bigg [\bigg |\sum _{i=1}^d\int _t^s\xi _i(r;s)\mathrm{d}W_i(r) +\sum _{j=1}^m\int _t^s\int _{{\mathbb {R}}_0}\beta _j(r,e;s) {\widetilde{N}}_j(dr,de)\bigg |^k\bigg ]\nonumber \\&\quad \le C{\mathbb {E}}\Big [\big |\varPsi (s) \varGamma (s)-{\mathbb {E}}_t\big [\varPsi (s)\varGamma (s)\big ]\big |^k\Big ] \le C\Big ({\mathbb {E}}\Big [\big |\varPsi (s)\varGamma (s)\big |^k\Big ] +{\mathbb {E}}\Big [\big |{\mathbb {E}}_t\big [\varPsi (s) \varGamma (s)\big ]\big |^k\Big ]\Big )\nonumber \\&\quad \le C{\mathbb {E}}\bigg [\big |\varPsi (s)\varGamma (s)\big |^k\bigg ] \le C{\mathbb {E}}\bigg [\sup \limits _{s\in [t,T]} \big |\varPsi (s)\varGamma (s)\big |^{k}\bigg ]\le C. \end{aligned}$$
(48)
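The constant \(\big (\frac{k}{k-1}\big )^k\) in the third line of (48) comes from Doob’s \(L^k\) maximal inequality. As a sanity check, the sketch below verifies \({\mathbb {E}}\big [\sup _n|M_n|^k\big ]\le \big (\frac{k}{k-1}\big )^k\,{\mathbb {E}}\big [|M_N|^k\big ]\) by Monte Carlo for a simple symmetric random walk; this toy martingale is a stand-in, not the paper’s stochastic integral.

```python
import numpy as np

# Numerical sanity check of Doob's L^k maximal inequality,
#   E[ sup_n |M_n|^k ] <= (k/(k-1))^k E[ |M_N|^k ],
# which supplies the constant (k/(k-1))^k in estimate (48).
# M is a symmetric random walk (a toy martingale, not the paper's integrand).
rng = np.random.default_rng(1)
n_paths, n_steps, k = 50_000, 100, 2

steps = rng.choice([-1.0, 1.0], size=(n_paths, n_steps))
M = np.cumsum(steps, axis=1)

lhs = np.mean(np.max(np.abs(M), axis=1) ** k)          # E[ sup |M_n|^k ]
rhs = (k / (k - 1)) ** k * np.mean(np.abs(M[:, -1]) ** k)  # 4 * E[|M_N|^2]
print(lhs, rhs)                                        # lhs <= rhs
```

For \(k=2\) the constant is 4, and the empirical left side falls well below the right side, as the inequality predicts.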

For \(k=1\), Hölder’s inequality gives

$$\begin{aligned}&{\mathbb {E}}\bigg [\bigg (\sum _{i=1}^d\int _t^s|\xi _i(r;s)|^2\mathrm{d}r +\sum _{j=1}^m\int _t^s\int _{{\mathbb {R}}_0}|\beta _j(r,e;s)| ^2\nu _j(de)\mathrm{d}r\bigg )^{\frac{1}{2}}\bigg ]\le C. \end{aligned}$$
(49)

Combining (48) and (49), we have for any \(k\ge 1\)

$$\begin{aligned}&\sup _{s\in [t,T]} {\mathbb {E}}\bigg [\bigg (\sum _{i=1}^d\int _t^s|\xi _i(r;s)|^2\mathrm{d}r +\sum _{j=1}^m\int _t^s\int _{{\mathbb {R}}_0}|\beta _j(r,e;s)|^2 \nu _j(de)\mathrm{d}r\bigg )^{\frac{k}{2}}\bigg ]\le C. \end{aligned}$$
(50)

Now, applying Itô’s formula to \(s\mapsto \varLambda (s)Y^\varepsilon (s)\) yields

$$\begin{aligned} Y^\varepsilon (s)= & {} \varGamma (s)\int _t^s\varLambda (r) \bigg \{\Big [B(r)-\sum _{i=1}^dC_i(r)D_i(r) \\&\quad -\sum _{j=1}^m\int _{{\mathbb {R}}_0}\frac{E_j(r,e)F_j(r,e)}{1+E_j(r,e)}\nu _j(de)\Big ]v\mathbf{1}_{[t,t+\varepsilon [}(r)\bigg \}\mathrm{d}r\\&\quad +\sum _{i=1}^d\varGamma (s)\int _t^s\varLambda (r)D_i(r)v\mathbf{1} _{[t,t+\varepsilon [}(r)\mathrm{d}W_i(r)\\&\quad +\sum _{j=1}^m\varGamma (s)\int _t^s \int _{{\mathbb {R}}_0}\varLambda (r-)\frac{F_j(r,e)}{1+E_j(r,e)}v\mathbf{1}_{[t,t+\varepsilon [}(r){\widetilde{N}}_j(dr,de). \end{aligned}$$

Consider

$$\begin{aligned} \varPsi (s)Y^\varepsilon (s):=L_1(s)+L_2(s)+L_3(s), \quad s\in [t,T], \end{aligned}$$

with

$$\begin{aligned} L_1(s)= & {} \varPsi (s)\varGamma (s)\int _t^s\varLambda (r)\bigg \{\Big [B(r)-\sum _{i=1}^dC_i(r)D_i(r)\\&\quad -\sum _{j=1}^m\int _{{\mathbb {R}}_0}\frac{E_j(r,e) F_j(r,e)}{1+E_j(r,e)}\nu _j(de)\Big ]v\mathbf{1}_{[t,t+\varepsilon [}(r)\bigg \}\mathrm{d}r,\\ L_2(s)= & {} \sum _{i=1}^d\varPsi (s)\varGamma (s) \int _t^s\varLambda (r)D_i(r)v\mathbf{1}_{[t,t+\varepsilon [}(r)\mathrm{d}W_i(r),\\ L_3(s)= & {} \sum _{j=1}^m\varPsi (s)\varGamma (s) \int _t^s\int _{{\mathbb {R}}_0}\varLambda (r-)\frac{F_j(r,e)}{1+E_j(r,e)}v\mathbf{1}_{[t,t+\varepsilon [}(r){\widetilde{N}}_j(dr,de). \end{aligned}$$

By virtue of (45) and (46), the following estimate for \({\mathbb {E}}_t[L_1(s)]\) holds:

$$\begin{aligned} \int _t^T\big |{\mathbb {E}}_t[L_1(s)]\big |^2\mathrm{d}s\le C|v|^2\varepsilon ^2. \end{aligned}$$
(51)

Following the expression of \(\varPsi (s)\varGamma (s)\) in (47), we have

$$\begin{aligned} {\mathbb {E}}_t[L_2(s)]&=\sum _{i=1}^d{\mathbb {E}} _t\bigg [\Big (\varPsi (s)\varGamma (s)\Big )\cdot \int _t^s \varLambda (r)D_i(r)v\mathbf{1}_{[t,t+\varepsilon [}(r)\mathrm{d}W_i(r)\bigg ]\\&=\sum _{i=1}^d{\mathbb {E}}_t\bigg [\int _t^s\xi _i(r;s)\varLambda (r)D_i(r)v\mathbf{1}_{[t,t+\varepsilon [}(r)\mathrm{d}r\bigg ]\\&\le C\varepsilon ^{\frac{1}{2}}|v| \sum _{i=1}^d{\mathbb {E}}_t\bigg [\sup _{r\in [t,T]}\varLambda (r)\cdot \bigg (\int _t^s\big |\xi _i(r;s)\big |^2\mathbf{1} _{[t,t+\varepsilon [}(r)\mathrm{d}r\bigg )^{\frac{1}{2}}\bigg ]\\&\le C\varepsilon ^{\frac{1}{2}}|v|\sum _{i=1} ^d\bigg ({\mathbb {E}}_t\bigg [\int _t^s\big |\xi _i(r;s)\big |^2\mathbf{1}_{[t,t+\varepsilon [}(r)\mathrm{d}r\bigg ]\bigg )^{\frac{1}{2}}. \end{aligned}$$

Therefore,

$$\begin{aligned} \int _t^T\big |{\mathbb {E}}_t[L_2(s)]\big |^2\mathrm{d}s&\le C\varepsilon |v|^2\sum _{i=1}^d{\mathbb {E}} _t\bigg [\int _t^T\int _t^s\big |\xi _i(r;s)\big |^2\mathbf{1}_{[t,t+\varepsilon [}(r)\mathrm{d}r\mathrm{d}s\bigg ]. \end{aligned}$$

Setting \(\rho _1(\varepsilon ):=C|v|^2\sum \limits _{i=1} ^d{\mathbb {E}}_t\bigg [\int _t^T\int _t^s\big |\xi _i(r;s)\big |^2\mathbf{1}_{[t,t+\varepsilon [}(r)\mathrm{d}r\mathrm{d}s\bigg ]\), we obtain from (50) that

$$\begin{aligned} {\mathbb {E}}\bigg [\int _t^T\int _t^s\big |\xi _i(r;s)\big |^2\mathbf{1}_{[t,t+\varepsilon [}(r)\mathrm{d}r\mathrm{d}s\bigg ]&\le C\sup _{s\in [t,T]}{\mathbb {E}} \bigg [\int _t^s\big |\xi _i(r;s)\big |^2\mathrm{d}r\bigg ]<\infty . \end{aligned}$$

Thus, by the dominated convergence theorem for conditional expectations and the fact that \(\mathbf{1}_{[t,t+\varepsilon [}\rightarrow 0\) as \(\varepsilon \downarrow 0\), we obtain \(\rho _1(\varepsilon )\rightarrow 0\) as \(\varepsilon \downarrow 0,\ a.s.\) Hence,

$$\begin{aligned} \int _t^T\big |{\mathbb {E}}_t[L_2(s)]\big |^2\mathrm{d}s&\le \varepsilon \rho _1(\varepsilon ). \end{aligned}$$
(52)
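The dominated-convergence step can be visualized numerically: for any fixed square-integrable kernel, the double integral defining \(\rho _1(\varepsilon )\) (up to the constant \(C|v|^2\) and the conditional expectation) shrinks as \(\varepsilon \downarrow 0\), because the indicator window vanishes. The sketch below uses a hypothetical smooth deterministic kernel in place of \(\xi _i(r;s)\).

```python
import numpy as np

# Illustration of the dominated-convergence step: for a fixed square-
# integrable kernel xi(r; s) (a hypothetical smooth choice below, standing
# in for the martingale-representation integrand), the double integral
#   rho(eps) ~ int_t^T int_t^s |xi(r; s)|^2 1_{[t, t+eps)}(r) dr ds
# shrinks to 0 as eps -> 0.
t, T, n = 0.0, 1.0, 2000
r = np.linspace(t, T, n)
s = np.linspace(t, T, n)
R, S = np.meshgrid(r, s)                    # R varies along columns, S along rows
xi_sq = (np.sin(3 * R) * np.exp(-S)) ** 2   # |xi(r; s)|^2, assumed kernel
xi_sq[R > S] = 0.0                          # the integrand lives on r <= s
dr = ds = (T - t) / (n - 1)

def rho(eps):
    # Riemann-sum approximation of the double integral with the indicator window
    mask = (R >= t) & (R < t + eps)
    return np.sum(xi_sq * mask) * dr * ds

vals = [rho(eps) for eps in (0.5, 0.1, 0.02, 0.004)]
print(vals)                                 # strictly decreasing toward 0
```

Since the integrand is nonnegative and the windows \([t,t+\varepsilon [\) are nested, the computed values decrease monotonically, mirroring \(\rho _1(\varepsilon )\downarrow 0\).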

Similarly, we get

$$\begin{aligned} {\mathbb {E}}_t[L_3(s)]&=\sum _{j=1}^m{\mathbb {E}}_t \bigg [\int _t^s\int _{{\mathbb {R}}_0}\beta _j(r,e;s)\varLambda (r) \frac{F_j(r,e)}{1+E_j(r,e)}v\mathbf{1}_{[t,t+\varepsilon [}(r)\nu _j(de)\mathrm{d}r\bigg ]\\&\le C\varepsilon ^{\frac{1}{2}}|v|\sum _{j=1} ^m{\mathbb {E}}_t\bigg [\sup _{r\in [t,T]}\varLambda (r)\cdot \bigg (\int _t^s\int _{{\mathbb {R}}_0}\big |\beta _j(r,e;s) \big |^2\nu _j(de)\mathbf{1}_{[t,t+\varepsilon [}(r)\mathrm{d}r\bigg )^{\frac{1}{2}}\bigg ]\\&\le C\varepsilon ^{\frac{1}{2}}|v|\sum _{j=1}^m \bigg ({\mathbb {E}}_t\bigg [\int _t^s\int _{{\mathbb {R}}_0} \big |\beta _j(r,e;s)\big |^2\nu _j(de)\mathbf{1}_{[t,t+\varepsilon [}(r)\mathrm{d}r\bigg ]\bigg )^{\frac{1}{2}}. \end{aligned}$$

Hence,

$$\begin{aligned} \int _t^T\big |{\mathbb {E}}_t[L_3(s)]\big |^2\mathrm{d}s&\le C\varepsilon |v|^2\sum _{j=1}^m{\mathbb {E}}_t \bigg [\int _t^T\int _t^s\int _{{\mathbb {R}}_0}\big |\beta _j(r,e;s)\big |^2\nu _j(de)\mathbf{1}_{[t,t+\varepsilon [}(r)\mathrm{d}r\mathrm{d}s\bigg ]. \end{aligned}$$

Setting \(\rho _2(\varepsilon ):=C|v|^2\sum \limits _{j=1}^m{\mathbb {E}}_t\bigg [\int _t^T\int _t^s \int _{{\mathbb {R}}_0}\big |\beta _j(r,e;s)\big |^2\nu _j(de)\mathbf{1}_{[t,t+\varepsilon [}(r)\mathrm{d}r\mathrm{d}s\bigg ]\), we obtain from (50) again that

$$\begin{aligned}&{\mathbb {E}}\bigg [\int _t^T\int _t^s\int _{{\mathbb {R}}_0} \big |\beta _j(r,e;s)\big |^2\nu _j(de)\mathbf{1}_{[t,t+\varepsilon [}(r)\mathrm{d}r\mathrm{d}s\bigg ]\\&\quad \le C\sup _{s\in [t,T]}{\mathbb {E}} \bigg [\int _t^s\int _{{\mathbb {R}}_0}\big |\beta _j(r,e;s)\big |^2\nu _j(de)\mathrm{d}r\bigg ]<\infty . \end{aligned}$$

Thus, using the dominated convergence theorem once again and the fact that \(\mathbf{1}_{[t,t+\varepsilon [}\rightarrow 0\) as \(\varepsilon \downarrow 0\), it holds that \(\rho _2(\varepsilon )\rightarrow 0\) as \(\varepsilon \downarrow 0,\ a.s.\) Hence,

$$\begin{aligned} \int _t^T\big |{\mathbb {E}}_t[L_3(s)]\big |^2\mathrm{d}s&\le \varepsilon \rho _2(\varepsilon ). \end{aligned}$$
(53)

In view of (51)–(53), we have

$$\begin{aligned} \int _t^T\big |{\mathbb {E}}_t[\varPsi (s)Y^\varepsilon (s)]\big |^2\mathrm{d}s&\le C\big (\varepsilon ^2|v|^2+\varepsilon \rho _1 (\varepsilon )+\varepsilon \rho _2(\varepsilon )\big ). \end{aligned}$$
(54)

Now setting \(\rho (\varepsilon ):=C(\varepsilon |v|^2+\rho _1(\varepsilon )+\rho _2(\varepsilon ))\) in (54), we obtain estimate (44). \(\square \)


Cite this article

Sun, Z., Guo, X. Equilibrium for a Time-Inconsistent Stochastic Linear–Quadratic Control System with Jumps and Its Application to the Mean-Variance Problem. J Optim Theory Appl 181, 383–410 (2019). https://doi.org/10.1007/s10957-018-01471-x
