Scheduling Virtual Machine Migration During Datacenter Upgrades with Reinforcement Learning

  • Chen Ying
  • Baochun Li
  • Xiaodi Ke
  • Lei Guo
Conference paper
Part of the Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering book series (LNICST, volume 300)


Physical machines in modern datacenters are routinely upgraded due to their maintenance requirements, which involves migrating all the virtual machines they currently host to alternative physical machines. For this kind of datacenter upgrade, it is critical to minimize the time it takes to upgrade all the physical machines in the datacenter, so as to reduce disruptions to cloud services. To minimize the upgrade time, it is essential to carefully schedule the migration of virtual machines on each physical machine during its upgrade, without violating any constraints imposed by virtual machines that are currently running. Rather than resorting to heuristic algorithms, we propose a new scheduler, Raven, which uses an experience-driven approach with deep reinforcement learning to schedule the virtual machine migration process. With our design of the state space, action space and reward function, Raven trains a fully-connected neural network using the cross-entropy method to approximate the policy of choosing a destination physical machine for each migrating virtual machine. We compare Raven with state-of-the-art heuristic algorithms in the literature, and our results show that Raven effectively leads to a shorter time to complete the datacenter upgrade process.
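The abstract states that Raven trains a fully-connected policy network with the cross-entropy method, but gives no further design details. As an illustration only, the sketch below applies the cross-entropy method to a simplified, hypothetical version of the problem: a tiny parametric scoring policy (standing in for Raven's neural network) picks a destination physical machine for each migrating virtual machine, and the objective proxies upgrade time by the load on the most-loaded destination. The simulator, features, and all names here are assumptions, not the paper's actual formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def migration_time(theta, vm_demands, pm_capacities):
    """Toy simulator: score each physical machine (PM) with a small
    parametric function of its remaining capacity, place each virtual
    machine (VM) on the best-scoring feasible PM, and return a proxy
    for upgrade time (the load on the most-loaded PM)."""
    remaining = pm_capacities.astype(float).copy()
    for demand in vm_demands:
        feasible = remaining >= demand
        if not feasible.any():
            return np.inf  # no PM can host this VM: infeasible schedule
        scores = theta[0] * remaining + theta[1] * remaining ** 2
        scores[~feasible] = -np.inf  # never choose an infeasible PM
        best = int(np.argmax(scores))
        remaining[best] -= demand
    # The most-loaded PM dominates completion time in this toy model.
    return float((pm_capacities - remaining).max())

def cross_entropy_search(vm_demands, pm_capacities,
                         iters=20, pop=50, elite_frac=0.2):
    """Cross-entropy method: sample policy parameters from a Gaussian,
    keep the elite fraction with the lowest cost, and refit the
    Gaussian to the elites."""
    mu, sigma = np.zeros(2), np.ones(2)
    n_elite = max(1, int(pop * elite_frac))
    for _ in range(iters):
        thetas = rng.normal(mu, sigma, size=(pop, 2))
        costs = np.array([migration_time(t, vm_demands, pm_capacities)
                          for t in thetas])
        elites = thetas[np.argsort(costs)[:n_elite]]
        mu, sigma = elites.mean(axis=0), elites.std(axis=0) + 1e-3
    return mu

# Hypothetical instance: five VMs to migrate onto three spare PMs.
vms = np.array([2.0, 3.0, 1.0, 2.0, 4.0])
pms = np.array([6.0, 6.0, 6.0])
theta = cross_entropy_search(vms, pms)
print(migration_time(theta, vms, pms))
```

Raven replaces the two-parameter scoring function with a fully-connected network over a richer state, but the training loop has the same shape: sample, evaluate, keep elites, refit.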


Keywords: Reinforcement learning · Virtual machine migration



Copyright information

© ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2020

Authors and Affiliations

  1. University of Toronto, Toronto, Canada
  2. Huawei Canada, Markham, Canada
