Host load prediction with long short-term memory in cloud computing

  • Binbin Song
  • Yao Yu
  • Yu Zhou
  • Ziqiang Wang
  • Sidan Du

Abstract

Host load prediction is significant for improving resource allocation and utilization in cloud computing. Because host load in a cloud varies far more than in a grid, accurate prediction remains a challenge in cloud systems. In this paper, we apply a concise yet adaptive and powerful model, long short-term memory (LSTM), to predict both the mean load over consecutive future time intervals and the actual load multiple steps ahead. Two real-world load traces were used to evaluate the performance: one from a Google data center and the other from a traditional distributed system. The experimental results show that our method achieves state-of-the-art accuracy on both datasets.
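
As a concrete sketch of the setup described above (the paper's actual architecture, window length, forecast horizon, and training details are not stated in this abstract, so every concrete value below is an assumption), a minimal multi-step-ahead LSTM forecaster in PyTorch could look like this:

    # Minimal sketch, not the authors' exact model: an LSTM that maps a
    # window of past host-load values to an H-step-ahead forecast.
    # Window length (24), hidden size (64), and horizon (8) are assumed
    # values for illustration only.
    import torch
    import torch.nn as nn

    class HostLoadLSTM(nn.Module):
        def __init__(self, hidden_size=64, horizon=8):
            super().__init__()
            self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size,
                                batch_first=True)
            self.head = nn.Linear(hidden_size, horizon)  # H future steps at once

        def forward(self, x):
            # x: (batch, window, 1) -- a normalized host-load history
            _, (h_n, _) = self.lstm(x)
            return self.head(h_n[-1])  # (batch, horizon)

    model = HostLoadLSTM()
    history = torch.rand(32, 24, 1)  # 32 hosts, 24 past intervals each
    forecast = model(history)        # load forecast for 8 future intervals
    print(forecast.shape)            # torch.Size([32, 8])

The same output head could equally be trained against mean loads over consecutive future intervals rather than point values, covering both prediction targets the abstract names.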

Keywords

Host load prediction · Cloud computing · Long short-term memory · Multi-step-ahead

Acknowledgements

This work was partially supported by Grant No. BE2015152 from the Natural Science Foundation of Jiangsu Province and Grant Nos. 61100111, 61300157, 61201425, 61271231 from the National Natural Science Foundation of China.

Copyright information

© Springer Science+Business Media New York 2017

Authors and Affiliations

  • Binbin Song¹
  • Yao Yu¹
  • Yu Zhou¹
  • Ziqiang Wang¹
  • Sidan Du¹

  1. School of Electronic Science and Engineering, Nanjing University, Nanjing, China
