Mathematical Fundamentals of Optimal Control

  • Oded Maimon
  • Eugene Khmelnitsky
  • Konstantin Kogan
Part of the Applied Optimization book series (APOP, volume 18)


To describe and study dynamic systems, the notions of system state, control effort and performance measure must be clarified. The system state is a set of parameters that characterize the system at each point in time. The state parameters vary gradually and cannot jump instantaneously. The evolution of the state parameters, or state variables, as we will call them from here on, obeys dynamic laws that follow from the nature of the system considered as a moving or evolving object. For example, the phase coordinates and momenta of mechanical systems obey Newton's laws or, in the more general case, the Euler-Lagrange dynamic equations that underlie mechanical motion. In turn, manufacturing systems obey the laws of conservation, expansion or deterioration of mass. For example, the amount of product stored in a buffer is considered a state variable because its value changes in time in accordance with the mass conservation law. Indeed, the increment of product mass in the buffer at any time is equal to the difference between the incoming and outgoing product flows through the buffer. For the same reasons we consider the technological capabilities of machines and the capacities of aggregate production as state variables that obey the laws of expansion and deterioration of mass. In modeling, these laws take the form of dynamic equations with initial system states assumed to be known.
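The mass conservation law described above can be sketched numerically. The following is a minimal illustration, not from the chapter itself: the buffer level x(t) is treated as a state variable obeying dx/dt = u_in(t) − u_out(t), integrated with the forward Euler method. All function names and the constant flow rates are assumptions made for the example.

```python
def simulate_buffer(x0, u_in, u_out, dt, steps):
    """Integrate the mass balance dx/dt = u_in(t) - u_out(t)
    from the known initial buffer level x0 (forward Euler)."""
    x = x0
    trajectory = [x]
    for k in range(steps):
        t = k * dt
        # increment of product mass = (inflow - outflow) over one step
        x = x + dt * (u_in(t) - u_out(t))
        trajectory.append(x)
    return trajectory

# Hypothetical example: constant inflow 2.0, constant outflow 1.5,
# empty buffer at t = 0, ten steps of length 0.1.
levels = simulate_buffer(0.0, lambda t: 2.0, lambda t: 1.5, 0.1, 10)
```

With a constant net inflow of 0.5, the buffer level grows linearly, reaching about 0.5 after one unit of time; a time-varying inflow or outflow policy would simply replace the lambda functions.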


Keywords: Maximum Principle · Planning Horizon · State Constraint · Shooting Method · Production Regime





Copyright information

© Springer Science+Business Media Dordrecht 1998

Authors and Affiliations

  • Oded Maimon, Eugene Khmelnitsky, Konstantin Kogan — Department of Industrial Engineering, Tel-Aviv University, Tel-Aviv, Israel
