Online Synchronous Policy Iteration Method for Optimal Control

  • Chapter
In: Recent Advances in Intelligent Control Systems

Abstract

In this chapter, we discuss an online algorithm based on policy iteration (PI) for learning the continuous-time (CT) optimal control solution for nonlinear systems with infinite-horizon cost. We present an online adaptive algorithm implemented as an actor/critic structure that involves simultaneous continuous-time adaptation of both actor and critic neural networks; we call this "synchronous" PI. A persistence of excitation condition is shown to guarantee convergence of the critic to the actual optimal value function. Novel tuning algorithms are given for both the critic and actor networks, with extra terms required in the actor tuning law to guarantee closed-loop dynamical stability. Convergence to the optimal controller is proven, and stability of the closed-loop system is guaranteed. Simulation examples demonstrate the effectiveness of the new algorithm.
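
To make the synchronous actor/critic idea concrete, the sketch below simulates a simplified version of such a scheme on a scalar linear-quadratic problem, where the true value function is known from the Riccati equation. The system parameters, adaptation gains, probing signal, and the simplified actor law (which merely pulls the actor weight toward the critic estimate, omitting the extra stabilizing terms the chapter derives) are illustrative assumptions, not the chapter's actual design.

    import numpy as np

    # Scalar test system dx/dt = a*x + b*u with cost = integral of q*x^2 + r*u^2.
    # All numbers here are illustrative choices, not values from the chapter.
    a, b, q, r = -1.0, 1.0, 1.0, 1.0
    alpha_c, alpha_a = 30.0, 5.0        # critic / actor adaptation gains (assumed)
    dt, T = 1e-3, 50.0                  # Euler step and simulation horizon

    # One-term approximators: critic V(x) ~ wc*x^2, actor u(x) = -(b/r)*wa*x.
    wc, wa, x = 0.0, 0.0, 1.0

    for k in range(int(T / dt)):
        t = k * dt
        noise = 0.4 * np.sin(3.0 * t) + 0.4 * np.sin(0.7 * t)  # excitation probe
        u = -(b / r) * wa * x + noise
        xdot = a * x + b * u

        sigma = 2.0 * x * xdot                   # time derivative of basis x^2
        e = wc * sigma + q * x**2 + r * u**2     # measured Bellman residual

        # Critic: normalized gradient descent on the Bellman residual.
        # Actor: a simplified law that tracks the critic estimate; the
        # chapter's actual actor law adds stabilizing terms.
        wc += dt * (-alpha_c * sigma / (1.0 + sigma**2) ** 2 * e)
        wa += dt * (-alpha_a * (wa - wc))
        x += dt * xdot

    p = (r / b**2) * (a + np.sqrt(a**2 + b**2 * q / r))  # scalar Riccati solution
    print(f"critic {wc:.3f}, actor {wa:.3f}, Riccati p {p:.3f}")

Because both weights adapt while the control is applied, policy evaluation and policy improvement proceed simultaneously rather than in alternating stages, which is the essence of the synchronous formulation; the probing signal supplies the persistence of excitation the convergence argument requires, at the price of a small residual bias in the weight estimates.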




Copyright information

© 2009 Springer London

About this chapter

Cite this chapter

Vamvoudakis, K., Lewis, F. (2009). Online Synchronous Policy Iteration Method for Optimal Control. In: Yu, W. (eds) Recent Advances in Intelligent Control Systems. Springer, London. https://doi.org/10.1007/978-1-84882-548-2_14

  • DOI: https://doi.org/10.1007/978-1-84882-548-2_14

  • Publisher Name: Springer, London

  • Print ISBN: 978-1-84882-547-5

  • Online ISBN: 978-1-84882-548-2

  • eBook Packages: Engineering (R0)
