The optimal control problem was defined in the last chapter as the problem of dynamically changing the system parameters in response to the system's evolution so as to optimize its performance. In the context of control problems, it is more common to say "choose an action" than "set the parameters." The rule that specifies which action to choose as a function of the system's evolution is called a policy.
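As a concrete illustration of this definition, a stationary deterministic policy can be represented as a simple mapping from states to actions, with the system evolving according to a transition probability matrix indexed by the current state and the chosen action. The sketch below is hypothetical: the state names, action names, and transition probabilities are invented for illustration and do not come from the chapter.

```python
import random

# Hypothetical two-state system with two actions (illustrative values only).
states = ["low", "high"]
actions = ["slow", "fast"]

# A stationary deterministic policy: one action per observed state.
policy = {"low": "fast", "high": "slow"}

# Assumed transition probabilities: P[(state, action)] gives the
# distribution over the next state (rows of a transition matrix).
P = {
    ("low", "fast"): {"low": 0.3, "high": 0.7},
    ("low", "slow"): {"low": 0.8, "high": 0.2},
    ("high", "fast"): {"low": 0.5, "high": 0.5},
    ("high", "slow"): {"low": 0.1, "high": 0.9},
}

def step(state, action, rng):
    """Sample the next state from the transition distribution."""
    dist = P[(state, action)]
    r = rng.random()
    cum = 0.0
    next_state = state
    for s, p in dist.items():
        cum += p
        next_state = s
        if r < cum:
            break
    return next_state

# Simulate a short trajectory: at each step the policy chooses the action.
rng = random.Random(0)
state = "low"
trajectory = [state]
for _ in range(5):
    action = policy[state]
    state = step(state, action, rng)
    trajectory.append(state)
print(trajectory)
```

Here the policy is a lookup table because the state space is finite and small; for larger problems it would typically be a function of the state, but the role it plays is the same: it turns an observed system state into a chosen action.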
Keywords: Optimal Policy · Action Space · Sojourn Time · Computational Problem · Transition Probability Matrix