Abstract
To describe and study dynamic systems, the notions of system state, control effort and performance measure must be clarified. The system state is a set of parameters that characterize the system at each point in time. These state parameters vary gradually and cannot jump instantaneously. The evolution of the state parameters, or state variables as we will call them henceforth, obeys dynamic laws that follow from the nature of the system considered as a moving or evolving object. For example, the phase coordinates and momenta of mechanical systems obey Newton's laws or, more generally, the Euler-Lagrange dynamic equations that underlie the motion of mechanical systems. In turn, manufacturing systems obey the laws of conservation, expansion or deterioration of mass. For example, the amount of product stored in a buffer is considered a state variable because its value changes in time in accordance with the mass conservation law: the increment of product mass in the buffer at any time equals the difference between the incoming and outgoing product flows through the buffer. For the same reason, we treat the technological capabilities of machines and the capacities of aggregate production as state variables that obey the laws of expansion and deterioration of mass. In modeling, these laws take the form of dynamic equations whose initial system states are assumed to be known.
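The buffer dynamics described above can be sketched numerically. The following is a minimal illustration, not code from the chapter: it integrates the mass-conservation law dx/dt = u_in(t) − u_out(t) with a forward-Euler step, where the function name `simulate_buffer`, the constant flow rates, and the step size are all hypothetical choices for illustration.

```python
def simulate_buffer(x0, u_in, u_out, dt, steps):
    """Integrate the buffer level x(t) forward in time under the
    mass-conservation law dx/dt = u_in(t) - u_out(t).
    The level is clamped at zero, since a buffer cannot hold
    negative product mass."""
    x = x0
    trajectory = [x]
    for k in range(steps):
        t = k * dt
        # Mass-balance update: increment equals (inflow - outflow) * dt
        x = max(0.0, x + dt * (u_in(t) - u_out(t)))
        trajectory.append(x)
    return trajectory

# Example: constant inflow 2.0, constant outflow 1.5, buffer starts empty.
# Over a horizon of 1.0 time unit the net accumulation is about 0.5.
levels = simulate_buffer(0.0, lambda t: 2.0, lambda t: 1.5, dt=0.1, steps=10)
print(levels[-1])
```

In a full optimal-control setting the outflow would itself be the control variable chosen to optimize a performance measure; here both flows are fixed only to make the state-evolution law concrete.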
© 1998 Springer Science+Business Media Dordrecht
Maimon, O., Khmelnitsky, E., Kogan, K. (1998). Mathematical Fundamentals of Optimal Control. In: Optimal Flow Control in Manufacturing Systems. Applied Optimization, vol 18. Springer, Boston, MA. https://doi.org/10.1007/978-1-4757-2834-7_2
Print ISBN: 978-1-4419-4799-4
Online ISBN: 978-1-4757-2834-7