Let us recall the problem of optimal control defined in Chapter 10. The target set θ₁ is prescribed once and for all; it is one of the givens of the problem. A control that is optimal at the initial state x₀ must transfer the state to one in θ₁ and also render the minimum value of the cost. We speak of a control u*(·): [t₀, t₁*] → ℝᵐ as being optimal at x₀ in order to emphasize that the function u*(·) is optimal when the initial state is x₀. In general, a different control function is optimal for a different initial state.
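To make the dependence on the initial state concrete, here is a minimal sketch (not from the text) using a simple discrete-time analogue: the scalar system x_{k+1} = x_k + u_k with target set {0} and cost Σ u_k². For this problem the minimizing control spreads the required displacement evenly over the horizon, so u* is a different function for each x₀. The function names are illustrative, not part of the chapter.

```python
import numpy as np

def optimal_control(x0, N=4):
    """Optimal control at initial state x0 for the scalar system
    x_{k+1} = x_k + u_k, target x_N = 0, cost sum_k u_k^2.
    The minimizer spreads the displacement -x0 evenly: u_k = -x0 / N
    (a standard consequence of minimizing a sum of squares subject
    to a fixed sum)."""
    return np.full(N, -x0 / N)

def simulate(x0, u):
    """Apply the control sequence u and return the terminal state."""
    x = x0
    for uk in u:
        x = x + uk
    return x

u_star = optimal_control(2.0)        # control optimal at x0 = 2.0
print(u_star)                        # four equal steps of -0.5
print(simulate(2.0, u_star))         # 0.0 -- state reaches the target set

# A different initial state calls for a different optimal control:
print(optimal_control(6.0))          # four equal steps of -1.5
```

The point of the sketch is exactly the remark above: u*(·) is optimal *at* x₀, and changing x₀ changes the optimal control function itself.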
Keywords: Feedback Control · State Equation · Adjoint Equation · Bellman Equation · Optimal Feedback Control