Abstract
As described in the introduction to Chapter 8, an alternative approach to optimal control relies upon Lagrange multipliers in order to link static optimization problems. In this chapter, we provide a brief introduction to several selected topics in variational, or multiplier-based, optimal control, namely: minimization of Lagrangians (and the associated Hamiltonian formalism) for open input-value sets, the basic result in the classical Calculus of Variations seen as a special case, some remarks on numerical techniques, and the Pontryagin Minimum (or Maximum, depending on conventions) Principle for arbitrary control-value sets but free final state. The area of nonlinear optimal control is very broad, and technically subtle, and, for a more in-depth study, the reader should consult the extensive literature that exists on the subject.
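For orientation, the Hamiltonian formalism and the minimum principle mentioned above are usually stated along the following lines; the notation here (running cost \(q\), costate \(p\), terminal cost \(\varphi\)) is a standard choice and is not fixed by this abstract, so the chapter itself may differ in signs and symbols.

```latex
% Minimize J(u) = \varphi(x(T)) + \int_0^T q(x(t),u(t))\,dt
% subject to \dot{x} = f(x,u), with free final state x(T).
%
% Hamiltonian (minimum-principle sign convention):
H(x,u,p) \;=\; q(x,u) \;+\; p^{\top} f(x,u)

% Costate (adjoint) equation and transversality condition:
\dot{p}(t) \;=\; -\,\frac{\partial H}{\partial x}\bigl(x^*(t),u^*(t),p(t)\bigr),
\qquad
p(T) \;=\; \nabla \varphi\bigl(x^*(T)\bigr)

% Pontryagin Minimum Principle: along an optimal pair (x^*,u^*),
% the optimal control minimizes H pointwise over the control-value set U:
H\bigl(x^*(t),u^*(t),p(t)\bigr) \;=\; \min_{u \in U} H\bigl(x^*(t),u,p(t)\bigr)
```

With the opposite sign convention for \(p\) one obtains a Maximum Principle instead, which is the "depending on conventions" remark in the abstract; when \(U\) is open and \(H\) is differentiable in \(u\), the minimization reduces to the first-order condition \(\partial H/\partial u = 0\), which is the Lagrangian/multiplier case treated first in the chapter.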
© 1998 Springer Science+Business Media New York
Cite this chapter
Sontag, E.D. (1998). Optimality: Multipliers. In: Mathematical Control Theory. Texts in Applied Mathematics, vol 6. Springer, New York, NY. https://doi.org/10.1007/978-1-4612-0577-7_9
Print ISBN: 978-1-4612-6825-3
Online ISBN: 978-1-4612-0577-7