This paper is concerned with an optimal control problem whose state equation is an uncertain differential equation. A necessary condition of optimality for the uncertain optimal control problem is derived using the classical variational method. In addition, an existence theorem for solutions of backward uncertain differential equations is proved.
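For context, a problem of this type can be sketched as follows (an illustrative formulation in Liu's uncertainty-theory framework; the functions $f$, $g$, $h$, $S$ and the canonical Liu process $C_t$ are standard notation of that framework, not taken from this abstract):

```latex
% State equation: an uncertain differential equation driven by
% a canonical Liu process C_t
\mathrm{d}X_t = f(t, X_t, u_t)\,\mathrm{d}t
              + g(t, X_t, u_t)\,\mathrm{d}C_t,
\qquad X_0 = x_0,
% Objective: choose an admissible control u to optimize the
% expected payoff
J(u) = E\!\left[\int_0^T h(t, X_t, u_t)\,\mathrm{d}t + S(X_T)\right].
```

The variational method mentioned in the abstract perturbs an optimal control $u$ and requires the first-order variation of $J$ to vanish, which yields the necessary condition of optimality.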
Keywords: Uncertainty theory · Uncertain differential equation · Uncertain optimal control · Necessary condition