Abstract
A new implementation of the augmented gradient projection (AGP) scheme is described for discrete-time approximations to continuous-time Bolza optimal control problems with pointwise bounds on the control and state variables. In the conventional implementation, the control and state vectors are the primal variables, and local forms of the control problem's state equations are treated as equality constraints incorporated in an augmented Lagrangian with penalty parameter c. In the new implementation, the original control vectors and new artificial control vectors are the primal variables, and an integrated form of the state equations replaces the usual local form in the augmented Lagrangian. The resulting relaxed nonlinear program for the augmented Lagrangian amounts to a Bolza problem with pure pointwise control constraints; hence the associated gradient and Newtonian direction vectors can be computed efficiently with adjoint equations and dynamic programming techniques. For unscaled AGP methods applied to prototype regulator problems with bound constraints on the control and state vectors, numerical experiments indicate rapid deterioration in the convergence properties of the conventional implementation as the discrete-time mesh is refined with the penalty parameter c held fixed. In contrast, the new implementation of the unscaled AGP scheme exhibits mesh-independent convergence behavior. The new formulation also offers additional computational advantages for control problems with separated control and state constraints.
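The conventional formulation sketched above can be illustrated in miniature. The following is a hedged sketch (not the paper's code, and much simpler than the problems treated there): a scalar discrete-time regulator with linear dynamics x[k+1] = a*x[k] + b*u[k], a quadratic cost, and a pointwise control bound. The state equations are treated as equality constraints in an augmented Lagrangian with penalty parameter c; the inner loop minimizes the augmented Lagrangian over the box by projected gradient steps, and the outer loop applies the Hestenes-Powell multiplier update. All problem data (a, b, x0, the bounds, c, the step size) are illustrative assumptions.

```python
# Illustrative sketch of the conventional augmented-Lagrangian /
# gradient-projection loop for a scalar regulator (assumed toy data,
# not the paper's test problems).
import numpy as np

N, a, b, x0 = 20, 0.9, 0.5, 1.0          # horizon and assumed dynamics
u_lo, u_hi = -0.2, 0.2                   # pointwise control bounds

def aug_lagrangian_grad(z, lam, c):
    """Gradient of L_c = J + lam.h + (c/2)|h|^2 w.r.t. z = (x[1:], u),
    where h[k] = x[k+1] - a*x[k] - b*u[k] are the state-equation residuals."""
    x = np.concatenate(([x0], z[:N]))    # x[0..N], x[0] fixed
    u = z[N:]                            # u[0..N-1]
    h = x[1:] - a * x[:-1] - b * u       # equality-constraint residuals
    mu = lam + c * h                     # first-order multiplier estimate
    gx = 2.0 * x[1:] + mu                # cost term 2*x[k] plus d h[k-1]/dx[k]
    gx[:-1] -= a * mu[1:]                # d h[k]/dx[k] terms for interior k
    gu = 2.0 * u - b * mu                # cost term 2*u[k] plus d h[k]/du[k]
    return np.concatenate([gx, gu]), h

def project(z):
    """Project onto the feasible box: only u is bounded in this sketch."""
    z = z.copy()
    z[N:] = np.clip(z[N:], u_lo, u_hi)
    return z

lam, c = np.zeros(N), 10.0
z = project(np.zeros(2 * N))
for outer in range(30):                  # multiplier-method (outer) iterations
    for _ in range(2000):                # projected-gradient inner loop
        g, h = aug_lagrangian_grad(z, lam, c)
        z_new = project(z - 0.02 * g)
        if np.linalg.norm(z_new - z) < 1e-10:
            z = z_new
            break
        z = z_new
    _, h = aug_lagrangian_grad(z, lam, c)
    lam += c * h                         # Hestenes-Powell multiplier update
print("max |state residual|:", np.abs(h).max())
```

The sketch keeps c fixed across outer iterations, which is precisely the regime in which the abstract reports mesh-dependent deterioration for the conventional implementation; the paper's new implementation instead eliminates the state variables through an integrated form of the state equations, leaving only pointwise control constraints.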
This research was supported by the National Science Foundation, Grant #DMS9500908.
© 1998 Springer Science+Business Media Dordrecht
Cite this chapter
Dunn, J.C. (1998). Augmented Gradient Projection Calculations for Regulator Problems with Pointwise State and Control Constraints. In: Optimal Control. Applied Optimization, vol 15. Springer, Boston, MA. https://doi.org/10.1007/978-1-4757-6095-8_7
Print ISBN: 978-1-4419-4796-3
Online ISBN: 978-1-4757-6095-8