Multiplant Robust Control

  • Vladimir G. Boltyanski
  • Alexander S. Poznyak
Part of the Systems & Control: Foundations & Applications book series (SCFA)

Abstract

In this chapter the Robust Stochastic Maximum Principle (in the Mayer form) is presented for a class of nonlinear continuous-time stochastic systems that contain an unknown parameter from a given finite set and are subject to terminal constraints. Its proof is based on the Tent Method combined with techniques specific to stochastic calculus. The Hamiltonian function used in these constructions is the sum of the standard stochastic Hamiltonians, each corresponding to one fixed value of the uncertain parameter. In some simple situations the corresponding robust optimal control can be computed numerically by solving a finite-dimensional optimization problem.
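The pointwise maximization described above can be illustrated with a minimal numerical sketch: for a finite uncertainty set, the robust Hamiltonian is taken as the sum of the standard Hamiltonians for each plant, and the control maximizing this sum over the admissible set is found by a finite-dimensional search. The drift functions, adjoint values, and control bounds below are invented placeholders for illustration, not examples from the chapter.

```python
import numpy as np

def f(alpha, x, u):
    """Hypothetical drift of plant alpha: dx = f(alpha, x, u) dt + noise."""
    a = [-1.0, -0.5, -2.0][alpha]      # plant-dependent pole (assumed values)
    return a * x + u

def robust_control(x, psi, u_grid):
    """Maximize the summed Hamiltonian H(u) = sum_alpha psi_alpha * f_alpha(x, u)
    over a grid of admissible controls (a finite-dimensional optimization)."""
    def H(u):
        return sum(psi[alpha] * f(alpha, x, u) for alpha in range(len(psi)))
    return max(u_grid, key=H)

x = 1.0
psi = np.array([0.2, 0.5, 0.3])        # assumed adjoint values, one per plant
u_grid = np.linspace(-1.0, 1.0, 201)   # admissible controls |u| <= 1
u_star = robust_control(x, psi, u_grid)
print(u_star)                          # here H grows in u, so u_star = 1.0
```

Because each placeholder Hamiltonian is affine in u with positive total weight, the maximizer sits on the boundary of the control set; in richer settings the grid search would be replaced by a proper optimization routine.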

Keywords

Stochastic Differential Equation · Adjoint Equation · Polar Cone · Complementary Slackness · Terminal Constraint

Copyright information

© Springer Science+Business Media, LLC 2012

Authors and Affiliations

  • Vladimir G. Boltyanski (1)
  • Alexander S. Poznyak (2)
  1. CIMAT, Guanajuato, Mexico
  2. Automatic Control Department, CINVESTAV-IPN, Mexico City, Mexico
