The Journal of Supercomputing, Volume 75, Issue 11, pp 7592–7605

Host load prediction in cloud computing using Long Short-Term Memory Encoder–Decoder

  • Hoang Minh Nguyen
  • Gaurav Kalra
  • Daeyoung Kim

Abstract

Cloud computing allocates resources efficiently while maintaining service-level agreements by providing on-demand resource allocation. Because reactive strategies introduce delays in resource allocation, proactive approaches based on prediction are necessary. However, the high variance of cloud host load compared to that of grid computing makes accurate prediction a challenge. In this paper, we therefore propose a prediction method based on a Long Short-Term Memory Encoder–Decoder (LSTM-ED) that predicts both the mean load over consecutive intervals and the actual load multiple steps ahead. Our LSTM-ED-based approach improves on the memory capability of the LSTM used in recent previous work by building an internal representation of the time series. To evaluate our approach, we conducted experiments on a one-month trace of a Google data centre with more than twelve thousand machines. The results show that, while a multi-layer LSTM overfits and loses accuracy compared to the single-layer LSTM used in previous work, our LSTM-ED-based approach achieves higher accuracy than previous models, including the recent LSTM-based one.
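The paper itself includes no code, but the architecture the abstract describes follows the standard LSTM encoder–decoder pattern. The sketch below is a minimal illustration in PyTorch (the authors do not state a framework, so this choice, along with the hidden_size and horizon values, is an assumption): the encoder compresses the observed load history into its final hidden and cell states, and the decoder unrolls from that internal representation to emit a multi-step-ahead forecast.

```python
# Minimal LSTM encoder-decoder sketch for host load forecasting.
# Assumptions: PyTorch, univariate normalized CPU load, illustrative sizes.
import torch
import torch.nn as nn

class LSTMEncoderDecoder(nn.Module):
    def __init__(self, hidden_size=64, horizon=8):
        super().__init__()
        self.horizon = horizon
        # Encoder reads the load history and summarizes it in (h, c).
        self.encoder = nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)
        # Decoder is seeded with the encoder state and unrolled step by step.
        self.decoder = nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.out = nn.Linear(hidden_size, 1)

    def forward(self, history):
        # history: (batch, seq_len, 1) past load samples per host
        _, (h, c) = self.encoder(history)       # internal representation of the series
        step = history[:, -1:, :]               # seed with the last observed value
        preds = []
        for _ in range(self.horizon):
            dec_out, (h, c) = self.decoder(step, (h, c))
            step = self.out(dec_out)            # one-step prediction, fed back in
            preds.append(step)
        return torch.cat(preds, dim=1)          # (batch, horizon, 1) forecast

# Hypothetical usage: 32 hosts, 24 past samples each, 8 steps predicted ahead.
model = LSTMEncoderDecoder()
forecast = model(torch.rand(32, 24, 1))         # -> shape (32, 8, 1)
```

Averaging slices of such a multi-step forecast would give mean-load estimates over consecutive intervals, the paper's other prediction target; the authors' exact interval scheme is not specified in the abstract.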

Keywords

Host load prediction · Cloud computing · Long Short-Term Memory Encoder–Decoder

Acknowledgements

This research was supported by the International Research and Development Program of the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT and Future Planning of Korea (2016K1A3A7A03952054), and by the Smart City R&D project of the Korea Agency for Infrastructure Technology Advancement (KAIA) funded by the Ministry of Land, Infrastructure and Transport (MOLIT) and the Ministry of Science and ICT (MSIT) (Grant 18NSPS-B149386-01).


Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2019

Authors and Affiliations

  1. School of Computing, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, South Korea
