Pattern Learning and Decision Making in a Photovoltaic System

  • Rongxin Li
  • Peter Wang
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5361)

Abstract

We study the effects of different decision making schemes on the cumulative reward when photovoltaic (PV) facilities are intended as a potential replacement for conventional peaking power plants. Because the amount of solar irradiance usable by a PV module follows a stochastic process, we compare the outcomes of two strategies in a stochastic environment: (1) optimal decision making without any specific knowledge of the environment; and (2) optimal decision making based upon learned patterns of the environment process. We examine the possibility of integrating a pattern learning approach, called an ε-Machine, with a Partially Observable Markov Decision Process (POMDP). This approach is motivated in part by the fact that efforts to extend traditional learning approaches to POMDPs have so far achieved only limited success. The PV facility in our model consists of a PV panel and a battery, with an associated local, non-critical load. Under the assumption that any PV-generated power exceeding the maximum local consumption capacity must be dumped when the battery is full, the goal of the autonomous control agent is to maintain the maximum output potential, so as to most effectively offset unexpected demand peaks, while minimizing energy wastage in the presence of strong solar irradiance. The environment is assumed to follow a Markov process of a different order than the part of the system under the influence of the agent.
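The PV-battery-load setup described in the abstract can be sketched as a toy simulation. Everything below is a hypothetical illustration, not the paper's actual model: the battery capacity, the two-state Markov irradiance chain, the greedy policy, and the reward shaping are all invented for this sketch; only the dump-when-full rule and the goal (keep output potential high while minimizing wastage) come from the abstract.

```python
import random

# Toy sketch of the environment in the abstract. All constants below are
# hypothetical, chosen only to make the dump rule and reward visible.
BATTERY_CAP = 10.0   # maximum stored energy (arbitrary units)
LOAD_MAX = 2.0       # maximum local, non-critical consumption per step

# Hypothetical two-state Markov chain for solar irradiance (low/high).
IRRADIANCE = {"low": 1.0, "high": 4.0}
TRANSITION = {"low": [("low", 0.7), ("high", 0.3)],
              "high": [("low", 0.4), ("high", 0.6)]}

def step_irradiance(state, rng):
    """Sample the next irradiance state from the Markov chain."""
    r, acc = rng.random(), 0.0
    for nxt, p in TRANSITION[state]:
        acc += p
        if r < acc:
            return nxt
    return state

def simulate(policy, steps=1000, seed=0):
    """Run the toy environment under `policy` (a map from battery level
    to local load in [0, LOAD_MAX]). Returns (cumulative reward, dumped)."""
    rng = random.Random(seed)
    battery, state = BATTERY_CAP / 2, "low"
    reward = dumped = 0.0
    for _ in range(steps):
        generated = IRRADIANCE[state]
        load = min(policy(battery), LOAD_MAX)
        battery += generated - load
        if battery > BATTERY_CAP:      # dump rule from the abstract:
            dumped += battery - BATTERY_CAP  # excess power is wasted
            battery = BATTERY_CAP
        battery = max(battery, 0.0)
        # Illustrative reward: value a charged battery (output potential),
        # penalize hitting the cap, where strong irradiance gets dumped.
        reward += battery / BATTERY_CAP
        if battery >= BATTERY_CAP:
            reward -= 0.5
        state = step_irradiance(state, rng)
    return reward, dumped

# Greedy heuristic: draw more local load as the battery fills, trying to
# keep headroom for strong irradiance.
greedy = lambda b: LOAD_MAX * (b / BATTERY_CAP)
print(simulate(greedy))
```

The paper's contribution is to replace such a hand-tuned heuristic with a policy computed from an ε-Machine's learned model of the irradiance process inside a POMDP; the sketch only fixes intuitions about the state, action, and reward structure that the decision maker faces.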

Keywords

Belief State · Causal State · Maximum Power Point Tracker · Pattern Learning · Partially Observable Markov Decision Process



Copyright information

© Springer-Verlag Berlin Heidelberg 2008

Authors and Affiliations

  • Rongxin Li (1)
  • Peter Wang (1)
  1. Autonomous Systems Laboratory, CSIRO ICT Centre, Australia
