On the existence of optimal strategies in the control problem for a stochastic discrete time system with respect to the probability criterion

  • Stochastic Systems
  • Published in: Automation and Remote Control

Abstract

We study a control problem for a discrete-time stochastic system. The optimality criterion is the probability of the event that a function of the terminal state does not exceed a given limit. To solve the problem, we use dynamic programming. The loss function is assumed to be lower semicontinuous in the terminal state vector, and the transition function from the current state to the next is assumed to be continuous in all of its arguments. We establish that under these assumptions the dynamic programming algorithm yields optimal positional control strategies, which turn out to be measurable. As an example, we consider a two-step securities portfolio construction problem and establish that in this special case the future loss function at the second step is continuous everywhere except at a single point.
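
For orientation, one standard way to write the dynamic programming recursion for a probability criterion is sketched below; the notation (f_k, \xi_k, \Phi, \varphi, U_k, B_k) is illustrative and is not taken from the paper. Suppose the state evolves as x_{k+1} = f_k(x_k, u_k, \xi_k) for k = 0, ..., N-1, where the disturbances \xi_k are independent, the controls take values in sets U_k and are chosen by positional strategies u_k = \gamma_k(x_k), and the criterion P\{\Phi(x_N) \le \varphi\} is to be maximized. The future loss (Bellman) functions then satisfy

\[
  B_N(x) = \mathbf{1}\{\Phi(x) \le \varphi\}, \qquad
  B_k(x) = \sup_{u \in U_k} \mathbf{E}\bigl[ B_{k+1}\bigl(f_k(x, u, \xi_k)\bigr) \bigr], \quad k = N-1, \dots, 0,
\]

and B_0(x_0) gives the optimal value of the criterion. The question studied in the paper is when the suprema in such a recursion are attained and define measurable positional strategies.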

Author information

Correspondence to A. I. Kibzun.

Additional information

Original Russian Text © A.I. Kibzun, A.N. Ignatov, 2017, published in Avtomatika i Telemekhanika, 2017, No. 10, pp. 139–154.

About this article

Cite this article

Kibzun, A.I., Ignatov, A.N. On the existence of optimal strategies in the control problem for a stochastic discrete time system with respect to the probability criterion. Autom Remote Control 78, 1845–1856 (2017). https://doi.org/10.1134/S0005117917100083
