Feedback Minimum Principle for Optimal Control Problems in Discrete-Time Systems and Its Applications
The paper is devoted to a generalization of a necessary optimality condition in the form of the Feedback Minimum Principle for a nonconvex discrete-time free-endpoint control problem. The approach is based on an exact formula for the increment of the cost functional. This formula is completely defined through a solution of the adjoint system corresponding to a reference process. By minimizing that increment in the control variable for a fixed adjoint state, we define a multivalued map whose selections are feedback controls with the property of potential "improvement" of the reference process. As a result, we derive a necessary optimality condition (an optimal process does not admit feedback controls of "potential descent" in the cost functional). In the case when the well-known Discrete Maximum Principle holds, our condition can be further strengthened. Note that the obtained optimality condition is quite constructive and may lead to an iterative algorithm for discrete-time optimal control problems. Finally, we present sufficient optimality conditions for problems where the Discrete Maximum Principle does not apply.
Keywords: Exact formula of the cost functional increment · Feedback controls · Necessary optimality conditions · Feedback Minimum Principle · Maximum Principle · Method of feedback iterations
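The iteration scheme suggested by the abstract — solve the adjoint system backward along a reference process, build a feedback control by minimizing the increment pointwise in the control with the adjoint state frozen, then re-simulate the closed loop and accept the new process if the cost decreased — can be sketched as follows. This is a minimal illustration, not the authors' formulation: the scalar dynamics, the (nonconvex) running cost, the finite control grid, and all function names are assumptions made for the example.

```python
import numpy as np

N = 20                             # horizon length (assumed)
U = np.linspace(-1.0, 1.0, 41)     # finite control grid (assumed)

def f(x, u):                       # dynamics x_{k+1} = f(x_k, u_k) (assumed)
    return x + 0.1 * u

def ell(x, u):                     # running cost, nonconvex in u (assumed)
    return x**2 + 0.1 * np.cos(3.0 * u)

def phi(x):                        # terminal cost (assumed)
    return x**2

def simulate(x0, policy):
    """Roll out a state-feedback policy u_k = policy(k, x_k); return (xs, us, cost)."""
    xs, us, J = [x0], [], 0.0
    for k in range(N):
        u = policy(k, xs[-1])
        J += ell(xs[-1], u)
        us.append(u)
        xs.append(f(xs[-1], u))
    return np.array(xs), np.array(us), J + phi(xs[-1])

def adjoint(xs):
    """Backward sweep: p_N = phi'(x_N), p_k = dH/dx along the reference trajectory,
    with H(x, u, p) = ell(x, u) + p * f(x, u)."""
    p = np.zeros(N + 1)
    p[N] = 2.0 * xs[N]                       # phi'(x) = 2x
    for k in range(N - 1, -1, -1):
        p[k] = 2.0 * xs[k] + p[k + 1]        # dH/dx = 2x + p_{k+1} * df/dx, df/dx = 1
    return p

def feedback_iteration(x0, u_init, iters=5):
    """Improve an open-loop reference control by repeated feedback descent steps."""
    xs, us, J = simulate(x0, lambda k, x: u_init[k])
    for _ in range(iters):
        p = adjoint(xs)
        # Feedback selection: minimize H(x, u, p_{k+1}) over the grid,
        # with the adjoint state frozen at the reference process.
        policy = lambda k, x: U[np.argmin(ell(x, U) + p[k + 1] * f(x, U))]
        xs_new, us_new, J_new = simulate(x0, policy)
        if J_new >= J - 1e-12:               # no descent in the cost: stop
            break
        xs, us, J = xs_new, us_new, J_new
    return us, J
```

In this sketch the necessary condition reads as a stopping test: if no feedback selection of the pointwise minimum yields a strictly smaller cost, the reference process admits no "potential descent" and the iteration terminates.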