Abstract
In this chapter we discuss an online algorithm based on policy iteration (PI) for learning the continuous-time (CT) optimal control solution for nonlinear systems with infinite-horizon cost. We present an online adaptive algorithm, implemented as an actor/critic structure, that involves simultaneous continuous-time adaptation of both actor and critic neural networks; we call this "synchronous" PI. A persistence of excitation condition is shown to guarantee convergence of the critic to the true optimal value function. Novel tuning laws are given for both the critic and actor networks, with extra terms in the actor tuning law required to guarantee closed-loop dynamical stability. Convergence to the optimal controller is proven, and stability of the system is also guaranteed. Simulation examples show the effectiveness of the new algorithm.
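To make the synchronous actor/critic structure concrete, the following is a minimal sketch on a hypothetical scalar linear-quadratic problem (not the chapter's own simulation example). The critic weight Wc parameterizes the value function on the basis phi(x) = x^2 (exact for this toy problem) and is tuned by normalized gradient descent on the Bellman residual; the actor weight Wa generates the control and is tuned simultaneously. The actor law shown here is a simplified version that merely drives Wa toward Wc; the chapter's actual actor tuning law contains additional stabilizing terms omitted in this sketch, and the gains alpha, beta are illustrative choices.

```python
import numpy as np

# Toy scalar problem:  dx/dt = a*x + b*u,  cost = integral of (q x^2 + r u^2).
# With a=-1, b=1, q=r=1 the optimal value is V(x) = p x^2 with p = sqrt(2)-1,
# so the single critic basis function phi(x) = x^2 is exact.
a, b, q, r = -1.0, 1.0, 1.0, 1.0
p_opt = np.sqrt(2.0) - 1.0          # root of the scalar Riccati eq. 2ap - b^2 p^2/r + q = 0

alpha, beta = 100.0, 10.0           # critic / actor learning gains (illustrative)
dt, T = 1e-3, 6.0                   # Euler step and horizon
x, Wc, Wa = 2.0, 0.0, 0.0           # state, critic weight, actor weight

for _ in range(int(T / dt)):
    u = -0.5 / r * b * (2.0 * x) * Wa        # actor: u = -(1/2) R^-1 g^T (dphi/dx)^T Wa
    xdot = a * x + b * u
    sigma = 2.0 * x * xdot                   # regressor: (dphi/dx) * (f + g u)
    e = Wc * sigma + q * x**2 + r * u**2     # Bellman (HJB) residual
    # Critic: normalized gradient descent on the squared residual.
    Wc += dt * (-alpha * sigma * e / (1.0 + sigma**2) ** 2)
    # Actor: simplified law driving Wa toward Wc (the chapter's law adds
    # extra stabilizing terms that are omitted in this sketch).
    Wa += dt * (-beta * (Wa - Wc))
    x += dt * xdot                           # Euler integration of the plant

print(f"Wc = {Wc:.3f}, Wa = {Wa:.3f}, optimal p = {p_opt:.3f}")
```

Because both weight updates run continuously alongside the plant, no policy-evaluation/policy-improvement alternation is needed; the decaying state transient supplies the excitation in this scalar case, and both weights settle near the optimal p = sqrt(2) - 1 ≈ 0.414.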
Copyright information
© 2009 Springer London
About this chapter
Cite this chapter
Vamvoudakis, K., Lewis, F. (2009). Online Synchronous Policy Iteration Method for Optimal Control. In: Yu, W. (ed.) Recent Advances in Intelligent Control Systems. Springer, London. https://doi.org/10.1007/978-1-84882-548-2_14
DOI: https://doi.org/10.1007/978-1-84882-548-2_14
Publisher Name: Springer, London
Print ISBN: 978-1-84882-547-5
Online ISBN: 978-1-84882-548-2
eBook Packages: Engineering (R0)