Adaptive Control Problems as MDPs


Adaptive control and identification theory for stochastic systems has been developed over the past few decades and is now very mature; many excellent textbooks exist, see, e.g., [9, 165, 192, 193, 206]. There has been a continuing discussion of what adaptive control is. In general, the problems studied in this area involve systems whose structures and/or parameters are unknown and/or time-varying. However, precisely defining adaptive control is not an easy task [9, 206].






References

[5] A. Al-Tamimi, F. L. Lewis, and M. Abu-Khalaf, "Model-Free Q-Learning Designs for Linear Discrete-Time Zero-Sum Games with Application to H-Infinity Control," Automatica, Vol. 43, 473-481, 2007.
[9] K. J. Åström and B. Wittenmark, Adaptive Control, Addison-Wesley, Reading, Massachusetts, 1989.
[24] D. P. Bertsekas and S. E. Shreve, Stochastic Optimal Control: The Discrete Time Case, Academic Press, New York, 1978.
[30] S. J. Bradtke, B. E. Ydstie, and A. G. Barto, "Adaptive Linear Quadratic Control Using Policy Iteration," Proceedings of the American Control Conference, Baltimore, Maryland, U.S.A., 3475-3479, 1994.
[39] A. E. Bryson and Y. C. Ho, Applied Optimal Control: Optimization, Estimation, and Control, Blaisdell, Waltham, Massachusetts, 1969.
[89] O. L. V. Costa and J. C. C. Aya, "Monte Carlo TD(λ)-Methods for the Optimal Control of Discrete-Time Markovian Jump Linear Systems," Automatica, Vol. 38, 217-225, 2002.
[124] S. Hagen and B. Kröse, "Linear Quadratic Regulation Using Reinforcement Learning," Proceedings of the 8th Belgian-Dutch Conference on Machine Learning, Wageningen, The Netherlands, 39-46, 1998.
[135] O. Hernández-Lerma and J. B. Lasserre, Discrete-Time Markov Control Processes: Basic Optimality Criteria, Springer-Verlag, New York, 1996.
[136] O. Hernández-Lerma and J. B. Lasserre, "Policy Iteration for Average Cost Markov Control Processes on Borel Spaces," Acta Applicandae Mathematicae, Vol. 47, 125-154, 1997.
[165] H. Kaufman, I. Bar-Kana, and K. Sobel, Direct Adaptive Control Algorithms: Theory and Applications, Springer-Verlag, New York, 1994.
[192] L. Ljung and T. Söderström, Theory and Practice of Recursive Identification, MIT Press, Cambridge, Massachusetts, 1983.
[193] L. Ljung, System Identification: Theory for the User, PTR Prentice Hall, 1999.
[203] S. P. Meyn and R. L. Tweedie, Markov Chains and Stochastic Stability, Springer-Verlag, London, 1993.
[206] K. S. Narendra and A. M. Annaswamy, Stable Adaptive Systems, Prentice Hall, Englewood Cliffs, New Jersey, 1989.
[255] P. J. Werbos, "Consistency of HDP Applied to a Simple Reinforcement Learning Problem," Neural Networks, Vol. 3, 179-189, 1990.
[265] K. J. Zhang, Y. K. Xu, X. Chen, and X. R. Cao, "Policy Iteration Based Feedback Control," submitted to Automatica.

Copyright information

© Springer Science+Business Media, LLC 2007

Authors and Affiliations

Hong Kong University of Science and Technology, Kowloon, Hong Kong