Abstract
In an optimal control problem, we are given a dynamical system whose behavior may be influenced or regulated by a suitable choice of some of the system’s variables, which are called control—or action or decision—variables. The controls that can be applied at any given time are chosen according to “rules” known as control policies. In addition, we are given a function called a performance criterion (or performance index), defined on the set of control policies, which measures or evaluates in some sense the system’s response to the control policies being used. Then the optimal control problem is to determine a control policy that optimizes (i.e., either minimizes or maximizes) the performance criterion.
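To make the abstract's description concrete, the following is a minimal sketch of a standard discrete-time formulation of this problem. The notation (state x_t, action a_t, disturbance ξ_t, one-stage cost c, discount factor α, policy class Π) is illustrative and is not quoted from the chapter itself.

% Hedged sketch of a discrete-time optimal control problem; symbols are assumed notation.
\[
  x_{t+1} = F(x_t, a_t, \xi_t), \qquad t = 0, 1, 2, \dots,
\]
where $x_t$ is the system state, $a_t$ is the control (action) selected by a control policy $\pi \in \Pi$, and $\xi_t$ is a random disturbance. One common performance criterion is the expected total discounted cost
\[
  V(\pi, x) := E_x^{\pi}\!\left[\sum_{t=0}^{\infty} \alpha^{t}\, c(x_t, a_t)\right],
  \qquad 0 < \alpha < 1,
\]
and the optimal control problem is then to find a policy $\pi^{*} \in \Pi$ such that
\[
  V(\pi^{*}, x) = \inf_{\pi \in \Pi} V(\pi, x) \qquad \text{for every initial state } x.
\]
Other performance criteria (for example, long-run average cost, or maximization of a reward) fit the same pattern with the obvious changes.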
Copyright information
© 1996 Springer Science+Business Media New York
About this chapter
Cite this chapter
Hernández-Lerma, O., Lasserre, J.B. (1996). Introduction and Summary. In: Discrete-Time Markov Control Processes. Applications of Mathematics, vol 30. Springer, New York, NY. https://doi.org/10.1007/978-1-4612-0729-0_1
DOI: https://doi.org/10.1007/978-1-4612-0729-0_1
Publisher Name: Springer, New York, NY
Print ISBN: 978-1-4612-6884-0
Online ISBN: 978-1-4612-0729-0