Part of the book series: Studies in Systems, Decision and Control (SSDC, volume 166)

Abstract

Optimal control is a branch of modern control theory. It deals with the problem of finding a control law for a given system such that a certain optimality criterion is achieved. A control problem includes a cost functional that is a function of the state and control variables. An optimal control is a set of differential equations describing the paths of the control variables that minimize the cost functional. The optimal control can be derived using Pontryagin's maximum principle (a necessary condition, also known as Pontryagin's minimum principle or simply Pontryagin's principle), or by solving the Hamilton–Jacobi–Bellman (HJB) equation (a sufficient condition). For linear systems with a quadratic performance index, the HJB equation reduces to the algebraic Riccati equation (ARE) (Zhang et al., Adaptive dynamic programming for control: algorithms and stability. Springer, London, 2013, [1]).
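As a concrete illustration of the last sentence (not part of the original chapter), the sketch below sets up a linear-quadratic regulator and solves the resulting ARE numerically. The double-integrator dynamics and identity weighting matrices are illustrative assumptions, and SciPy's solve_continuous_are is used only as a convenient off-the-shelf ARE solver.

import numpy as np
from scipy.linalg import solve_continuous_are

# Linear dynamics x_dot = A x + B u (double integrator, assumed for illustration)
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

# Quadratic cost J = integral of (x' Q x + u' R u) dt (weights assumed)
Q = np.eye(2)
R = np.array([[1.0]])

# For this linear-quadratic problem the HJB equation reduces to the ARE
#   A'P + P A - P B R^{-1} B' P + Q = 0
P = solve_continuous_are(A, B, Q, R)

# Optimal state-feedback law u = -K x with K = R^{-1} B' P
K = np.linalg.solve(R, B.T @ P)

# Sanity check: the closed-loop matrix A - B K should be Hurwitz (stable)
print("P =\n", P)
print("K =", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))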

References

  1. Zhang, H., Liu, D., Luo, Y., Wang, D.: Adaptive Dynamic Programming for Control: Algorithms and Stability. Springer, London (2013)

  2. Vrabie, D., Vamvoudakis, K., Lewis, F.: Optimal Adaptive Control and Differential Games by Reinforcement Learning Principles. The Institution of Engineering and Technology, London (2013)

  3. Barto, A., Sutton, R., Anderson, C.: Neuron-like adaptive elements that can solve difficult learning control problems. IEEE Trans. Syst. Man Cybern. SMC-13(5), 834–846 (1983)

  4. Werbos, P.: A menu of designs for reinforcement learning over time. In: Miller, W.T., Sutton, R.S., Werbos, P.J. (eds.) Neural Networks for Control, pp. 67–95. MIT Press, Cambridge (1991)

  5. Werbos, P.: Approximate dynamic programming for real-time control and neural modeling. In: White, D.A., Sofge, D.A. (eds.) Handbook of Intelligent Control. Van Nostrand Reinhold, New York (1992)

  6. Werbos, P.: Neural networks for control and system identification. In: Proceedings of the IEEE Conference on Decision and Control, Tampa, FL, pp. 260–265 (1989)

  7. Werbos, P.: Advanced forecasting methods for global crisis warning and models of intelligence. General Syst. Yearbook 22, 25–38 (1977)

  8. Liu, D., Wei, Q., Yang, X., Li, H., Wang, D.: Adaptive Dynamic Programming with Applications in Optimal Control. Springer International Publishing, Berlin (2017)

  9. Werbos, P.: ADP: the key direction for future research in intelligent control and understanding brain intelligence. IEEE Trans. Syst. Man Cybern. Part B: Cybern. 38(4), 898–900 (2008)

  10. Lewis, F., Vrabie, D.: Reinforcement learning and adaptive dynamic programming for feedback control. IEEE Circuits Syst. Mag. 9(3), 32–50 (2009)


Copyright information

© 2019 Science Press, Beijing and Springer Nature Singapore Pte Ltd.

About this chapter

Cite this chapter

Song, R., Wei, Q., Li, Q. (2019). Introduction. In: Adaptive Dynamic Programming: Single and Multiple Controllers. Studies in Systems, Decision and Control, vol 166. Springer, Singapore. https://doi.org/10.1007/978-981-13-1712-5_1
