
Optimal Control

  • Alexander S. Belenky
Part of the Applied Optimization book series (APOP, volume 20)

Abstract

The theory of optimal control is a branch of applied mathematics that studies the best ways of executing controlled (controllable) dynamic processes [1]. Those of greatest interest for applications are processes described by ordinary and partial differential equations, as well as by functional equations with a discrete variable. In all cases, the equations describing the process under study contain functions, called controls, that are to be determined, and these controls must be chosen from a domain defined by a system of constraints. The quality of a control is measured by a functional that depends both on the controls and on the functions describing the trajectory along which the dynamic process evolves under the influence of the controls.
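
A generic statement of such a problem, given here for orientation only (a standard textbook formulation rather than one quoted from the chapter; the symbols x, u, f, f_0, \Phi, U, t_0, and T are introduced solely for illustration), can be written as

\min_{u(\cdot)} \; J(u) = \Phi\bigl(x(T)\bigr) + \int_{t_0}^{T} f_0\bigl(x(t), u(t), t\bigr)\, dt

subject to

\dot{x}(t) = f\bigl(x(t), u(t), t\bigr), \qquad x(t_0) = x_0, \qquad u(t) \in U, \quad t \in [t_0, T],

where x(t) is the trajectory generated by the control u(t), U is the admissible control domain determined by the constraints, and the functional J measures the quality of the control.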

Keywords

Maximum Principle, Optimal Control Problem, Steklov Institute, Differential Inclusion, Pontryagin Maximum Principle

References

  1. Kurzhanskiy, A. B. “Mathematical Theory of Optimum Control.” In Matematicheskaja Entziklopedia (Mathematical Encyclopedia). Moscow: Sovetskaya Entziklopediya, 1984; 4: 37–41 [in Russian].
  2. Boltianskiy, V. G. Mathematical Methods of Optimum Control. New York: Holt, Rinehart and Winston, 1971.
  3. Kurzhanskiy, A. B. “Pontryagin Maximum Principle.” In Matematicheskaja Entziklopedia (Mathematical Encyclopedia). Moscow: Sovetskaya Entziklopediya, 1984; 4: 487–489 [in Russian].
  4. Kurzhanskiy, A. B. “Programmable Optimum Control.” In Matematicheskaja Entziklopedia (Mathematical Encyclopedia). Moscow: Sovetskaya Entziklopediya, 1984; 4: 47–51 [in Russian].
  5. Evtushenko, Iu. G. Numerical Optimization Techniques. New York: Optimization Software Inc., Publications Division, 1985.
  6. Chernous’ko, F. L. “Computational Methods of Optimum Control.” In Matematika na Sluzhbe Inzhenera (Mathematics in Engineering). Moscow: Znanie, 1973; 56–73 [in Russian].
  7. Ortega, J. M., and Rheinboldt, W. C. Iterative Solution of Nonlinear Equations in Several Variables. New York: Academic Press, 1970.
  8. Shatrovskii, L. I. One numerical method of solving problems of optimum control. U.S.S.R. Computational Mathematics and Mathematical Physics. 1962; 2, No. 3: 488–491.
  9. Chernous’ko, F. L., and Kolmanovskiy, V. B. “Computational and Approximate Methods of Optimal Control.” In Matematicheskii Analiz (Mathematical Analysis). Moscow: Izd. VINITI, 1977; 14: 101–167 [in Russian].
  10. Moiseev, N. N. Elementy Teorii Optimal’nykh Sistem (Elements of Optimal Systems Theory). Moscow: Nauka, 1974 [in Russian].
  11. Kurzhanskiy, A. B. “Positional Optimal Control.” In Matematicheskaja Entziklopedia (Mathematical Encyclopedia). Moscow: Sovetskaya Entziklopediya, 1984; 4: 42–47 [in Russian].
  12. Pontriagin, L. S., et al. The Mathematical Theory of Optimal Processes. Oxford, New York: Pergamon Press, 1964.
  13. Roytenberg, Ya. N. Avtomaticheskoe Upravlenie (Automatic Control). Moscow: Nauka, 1971 [in Russian].
  14. Tkachev, A. M. Geometric method for numerical solution of a terminal problem of optimal control. Engineering Cybernetics. 1984; No. 2: 21–26.
  15. Gabasov, R. F., and Kirillova, F. M. Optimizatzia Lineinykh Sistem (Optimization of Linear Systems). Minsk: Izd. BGU (Belorussia State University), 1973 [in Russian].
  16. Tkachev, A. M. A numerical method for a linear optimal response speed problem. Soviet Journal of Computer and Systems Sciences (Formerly Engineering Cybernetics). 1988; 26, No. 1: 174–177.
  17. Kiselev, Yu. N. “Methods for solving a smooth linear time-optimal problem.” In Proceedings of the Steklov Institute of Mathematics. Optimal Control and Differential Games. Edited by Pontryagin L. S. American Mathematical Society, 1990; 185, No. 2: 121–132.
  18. Samsonov, S. P. “An optimal control problem with various quality functionals.” In Proceedings of the Steklov Institute of Mathematics. Optimal Control and Differential Games. Edited by Pontryagin L. S. American Mathematical Society, 1990; 185, No. 2: 241–248.
  19. Tarakanov, A. F. The maximum principle for certain minimax control problems for connected sets. Soviet Journal of Computer and Systems Sciences (Formerly Engineering Cybernetics). 1989; 27, No. 2: 142–146.
  20. Butkovskiy, A. G. Distributed Control Systems. New York: American Elsevier Pub. Co., 1969.
  21. Yegorov, Yu. V. “Optimum control of systems with distributed parameters.” In Matematika na Sluzhbe Inzhenera (Mathematics in Engineering). Moscow: Znanie, 1973; 187–199 [in Russian].
  22. Boltianskiy, V. G. Optimal Control of Discrete Systems. New York: John Wiley and Sons Publ. Co., 1978.
  23. Boltianskiy, V. G. Discrete maximum principle (method of local sections). Differential Equations. 1972; VIII, No. 11: 1497–1503.
  24. Iliutovich, A. E. “Decomposition of a procedure of choosing a possible control in the problem of distributed resources.” In Sbornik Trudov VNIISI (Proceedings of the All-Union Institute of System Studies). Moscow: Izd. VNIISI (All-Union Institute of System Studies), 1987; No. 3: 28–37 [in Russian].
  25. Kolmanovskii, V. B. Optimal control in certain systems involving small parameters. Differential Equations. 1975; 11, No. 8: 1181–1189.
  26. Akulenko, L. D., and Chernous’ko, F. L. The averaging method in optimal control problems. U.S.S.R. Computational Mathematics and Mathematical Physics. 1975; 15, No. 4: 54–67.
  27. Bellman, R. E. Dynamic Programming. Princeton, New Jersey: Princeton University Press, 1957.
  28. Kolmanovskii, V. B. The approximate synthesis of some stochastic quasilinear systems. Automation and Remote Control. 1975; 36, No. 1: 44–50.
  29. Boltianskiy, V. G. “The method of local sections and the supporting principle.” In Matematika na Sluzhbe Inzhenera (Mathematics in Engineering). Moscow: Znanie, 1973; 140–164 [in Russian].
  30. Rozov, N. Kh. The local section method for systems with refraction of trajectories. Soviet Mathematics. 1972; 13, No. 1: 146–151.
  31. Blagodatskikh, V. I. Sufficient optimality conditions for differential embeddings. Izvestiya AN SSSR. Seriya Matematika. 1974; 8, No. 3: 621–630.
  32. Karulina, N. I. “A sufficient condition for optimality for differential inclusions.” In Proceedings of the Steklov Institute of Mathematics. Optimal Control and Differential Games. Edited by Pontryagin L. S. American Mathematical Society, 1990; 185, No. 2: 95–98.

Copyright information

© Springer Science+Business Media Dordrecht 1998

Authors and Affiliations

  • Alexander S. Belenky
