# Mathematical Fundamentals of Optimal Control

## Abstract

To describe and study dynamic systems, the notions of system state, control effort, and performance measure must be clarified. The *system state* is a set of parameters that characterizes the system at each instant of time. The state parameters vary gradually and cannot jump instantaneously. The evolution of the state parameters, or *state variables*, as we will call them hereafter, obeys dynamic laws that follow from the nature of the system considered as a moving or evolving object. For example, the phase coordinates and momenta of mechanical systems obey Newton's laws or, more generally, the Euler-Lagrange equations that underlie the motion of mechanical systems. In turn, manufacturing systems obey the laws of conservation, expansion, or deterioration of mass. For example, the amount of product stored in a buffer is a state variable because its value changes in time in accordance with the mass conservation law: the increment of product mass in the buffer at any time equals the difference between the incoming and outgoing product flows through the buffer. For the same reason, we treat the technological capabilities of machines and the capacities of aggregate production as state variables that obey the laws of expansion and deterioration. In modeling, these laws take the form of dynamic equations with initial system states assumed to be known.
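The buffer mass-conservation law described above can be sketched as a simple dynamic equation, dx/dt = u_in(t) − u_out(t), integrated numerically. The following is a minimal illustration, not taken from the chapter; the function name and the forward-Euler discretization are assumptions made for the example.

```python
def simulate_buffer(x0, u_in, u_out, dt, steps):
    """Integrate the buffer level x(t) from the initial state x0.

    u_in, u_out: functions of time giving the incoming and outgoing
    flow rates. The level is kept non-negative, since an empty buffer
    cannot supply product.
    """
    x, t = x0, 0.0
    trajectory = [x]
    for _ in range(steps):
        # Mass conservation: increment equals inflow minus outflow.
        x = max(0.0, x + dt * (u_in(t) - u_out(t)))
        t += dt
        trajectory.append(x)
    return trajectory

# Constant inflow of 2 units/h against an outflow of 1 unit/h,
# starting from a buffer level of 5 units:
levels = simulate_buffer(5.0, lambda t: 2.0, lambda t: 1.0, dt=1.0, steps=3)
# levels == [5.0, 6.0, 7.0, 8.0]
```

With a known initial state, the trajectory of the state variable is fully determined by the flow rates, which is exactly what makes the buffer level a state variable in the sense used above.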

## Keywords

Maximum Principle, Planning Horizon, State Constraint, Shooting Method, Production Regime
