Robust Stabilization and Trajectory Tracking of General Uncertain Nonlinear Systems

  • Ding Wang
  • Chaoxu Mu
Part of the Studies in Systems, Decision and Control book series (SSDC, volume 167)


Because of dynamical uncertainties, the robustness of nonlinear control systems deserves particular attention, especially when designing adaptive critic control strategies. In this chapter, a robust stabilization scheme for nonlinear systems with general uncertainties is developed based on a neural network learning component. Remarkably, the uncertain term involved is more general than the matched case. Through system transformation and the adaptive critic technique, the approximate optimal controller of the nominal plant is applied to accomplish robust stabilization of the original uncertain dynamics. The improved critic learning formulation makes the neural network weight vector convenient to initialize. Under the approximate optimal control law, the stability of the closed-loop nominal and uncertain plants is analyzed, respectively. As a generalization, a robust trajectory tracking method for uncertain nonlinear systems is investigated, in which an augmented system is constructed by combining the tracking error with the reference trajectory. Finally, simulation illustrations on two typical nonlinear systems and a practical power system verify the control performance.
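As a rough illustration of the adaptive critic idea sketched above (and not the chapter's actual algorithm), the following minimal example tunes a single critic weight so that the critic's Hamilton-Jacobi-Bellman (HJB) residual vanishes for a nominal plant, yielding the approximate optimal control. The scalar plant, quadratic cost, critic structure, and learning rate are all hypothetical choices made for this sketch.

```python
import numpy as np

# Illustrative adaptive-critic sketch on a scalar nominal plant:
#   x_dot = a*x + u,  cost = integral of (x^2 + u^2).
# Critic: V(x) = w * x^2, so dV/dx = 2*w*x, and the approximate
# optimal control is u = -0.5 * R^{-1} * g^T * dV/dx = -w*x here.
a = -0.5  # drift coefficient of the hypothetical nominal plant

def hjb_residual(w, x):
    """HJB residual x^2 + u^2 + (dV/dx)*(a*x + u); zero at the optimum."""
    dV = 2.0 * w * x
    u = -0.5 * dV
    return x**2 + u**2 + dV * (a * x + u)

# Tune the single critic weight by gradient descent on the squared
# residual over sampled states (a simple stand-in for critic learning).
xs = np.linspace(-2.0, 2.0, 41)
w, lr = 0.0, 1e-3
for _ in range(20000):
    e = hjb_residual(w, xs)              # e = x^2 * (1 + 2*a*w - w^2)
    de_dw = (2.0 * a - 2.0 * w) * xs**2  # analytic derivative of e w.r.t. w
    w -= lr * np.mean(e * de_dw)         # descend 0.5 * mean(e^2)

# w converges to the positive root of w^2 + w - 1 = 0 (about 0.618),
# and u = -w*x stabilizes the plant: closed-loop pole a - w < 0.
```

For this linear-quadratic special case the converged weight matches the algebraic Riccati solution, which is what makes the sketch easy to check; the chapter's setting replaces the single weight with a neural network critic and handles general (unmatched) uncertainties on top of the nominal design.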



Copyright information

© Springer Nature Singapore Pte Ltd. 2019

Authors and Affiliations

  1. The State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China
  2. School of Electrical and Information Engineering, Tianjin University, Tianjin, China
