Improvement of Air Handling Unit Control Performance Using Reinforcement Learning

  • Sangjo Youk
  • Moonseong Kim
  • Yangsok Kim
  • Gilcheol Park
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4303)

Abstract

Most common applications of neural networks to control problems are automatic controls that use an artificial perceptual function. These control mechanisms resemble the intelligent, pattern-recognition-based adaptive control frequently observed in nature. Many automated buildings control their HVAC (Heating, Ventilating, and Air Conditioning) systems with PI controllers, which are simple and robust; to maintain good performance, however, proper tuning and periodic re-tuning are necessary. In this paper, as one method to address these problems and improve control performance, a reinforcement learning controller is proposed, based on reinforcement learning, one of the three neural-network learning paradigms (supervised, unsupervised, and reinforcement learning). Its validity is evaluated under the real operating conditions of an AHU (Air Handling Unit) in an environmental chamber.
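The paper itself does not publish its controller's code. As a minimal illustrative sketch of the kind of reinforcement learning controller the abstract describes, the following hypothetical example applies tabular Q-learning to a toy, discretized supply-air temperature regulation task. The plant model, setpoint, action set, and learning parameters are all assumptions for illustration, not the authors' experimental setup.

```python
import random

# Hypothetical sketch (not the authors' implementation): tabular Q-learning
# for a discretized temperature-regulation task, illustrating a reinforcement
# learning controller of the kind the abstract proposes for an AHU.

random.seed(0)

SETPOINT = 22.0             # assumed supply-air temperature setpoint (deg C)
ACTIONS = [-1.0, 0.0, 1.0]  # decrease, hold, or increase heating output
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

def bucket(temp):
    """Discretize the temperature error into a small integer state."""
    err = temp - SETPOINT
    return max(-5, min(5, round(err)))

Q = {}  # (state, action_index) -> estimated value

def choose(state):
    """Epsilon-greedy action selection over the Q-table."""
    if random.random() < EPSILON:
        return random.randrange(len(ACTIONS))
    values = [Q.get((state, a), 0.0) for a in range(len(ACTIONS))]
    return values.index(max(values))

def step(temp, action):
    """Toy first-order plant: heating action plus heat loss toward 15 C."""
    return temp + ACTIONS[action] + 0.1 * (15.0 - temp)

for episode in range(500):
    temp = random.uniform(15.0, 29.0)
    for _ in range(50):
        s = bucket(temp)
        a = choose(s)
        temp = step(temp, a)
        s2 = bucket(temp)
        reward = -abs(temp - SETPOINT)  # penalize deviation from setpoint
        best_next = max(Q.get((s2, b), 0.0) for b in range(len(ACTIONS)))
        q = Q.get((s, a), 0.0)
        Q[(s, a)] = q + ALPHA * (reward + GAMMA * best_next - q)

# After learning, the greedy policy should select heating when too cold.
cold_state = bucket(17.0)
greedy = max(range(len(ACTIONS)), key=lambda a: Q.get((cold_state, a), 0.0))
print(ACTIONS[greedy])
```

Unlike a fixed-gain PI loop, such a learner adapts its policy from observed rewards, which is the motivation the abstract gives for avoiding manual re-tuning; the paper's actual controller and evaluation are carried out on a real AHU rather than this toy plant.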

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Sangjo Youk (1)
  • Moonseong Kim (2)
  • Yangsok Kim (3)
  • Gilcheol Park (1)
  1. School of Information & Multimedia, Hannam University, Daejeon, Korea
  2. Dept. of Medical Information System, Daewon Science College, Chungbuk, Korea
  3. School of Computing, University of Tasmania, Hobart, Australia