Deep Reinforcement Learning-Based Task Offloading and Resource Allocation for Mobile Edge Computing

  • Liang Huang
  • Xu Feng
  • Liping Qian
  • Yuan Wu
Conference paper
Part of the Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering book series (LNICST, volume 251)


We consider a mobile edge computing system in which every user has multiple tasks to be offloaded to an edge server via wireless networks. Our goal is to obtain a satisfactory task offloading and resource allocation decision for each user so as to minimize energy consumption and delay. In this paper, we propose a deep reinforcement learning-based approach to solve the joint task offloading and resource allocation problem. Simulation results show that the proposed deep Q-learning-based algorithm achieves near-optimal performance.
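The abstract does not disclose the system model or network architecture, so as a rough illustration only, the sketch below shows how a Q-learning agent can learn an offloading decision that trades off delay against energy. A linear Q-function stands in for the paper's deep Q-network, and the one-user, one-task cost model, all constants (CPU speeds, transmit power, rate), and the feature map are assumptions for the sketch, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model (assumed for illustration, not taken from the paper):
# one user draws one task per step. State = (task size in Mbits,
# channel gain); actions: 0 = compute locally, 1 = offload.
F_LOCAL, F_EDGE = 1.0, 10.0   # CPU speeds in Gcycles/s (assumed)
P_TX, RATE = 0.5, 5.0         # transmit power (W), base rate (Mbit/s)
CYCLES_PER_BIT = 1.0          # Gcycles needed per Mbit (assumed)
E_LOCAL = 0.1                 # local energy per Mbit (assumed)

def cost(state, action):
    """Weighted delay-plus-energy cost of executing the task under `action`."""
    size, gain = state
    if action == 0:                       # local execution
        delay = size * CYCLES_PER_BIT / F_LOCAL
        energy = E_LOCAL * size
    else:                                 # upload, then edge execution
        delay = size / (RATE * gain) + size * CYCLES_PER_BIT / F_EDGE
        energy = P_TX * size / (RATE * gain)
    return delay + energy

def features(state):
    size, gain = state
    return np.array([1.0, size, gain, size / gain])

# One-step Q-learning with a linear Q-function Q(s, a) = w[a] @ features(s).
# Tasks arrive i.i.d., so each step is its own one-shot episode and the
# learning target is just the immediate cost (no bootstrap term).
w = np.zeros((2, 4))
eps, lr = 0.2, 1e-3
for _ in range(100_000):
    s = (rng.uniform(1, 5), rng.uniform(0.2, 2.0))
    phi = features(s)
    a = rng.integers(2) if rng.random() < eps else int(np.argmin(w @ phi))
    w[a] += lr * (cost(s, a) - w[a] @ phi) * phi   # SGD on squared TD error

# Learned greedy policy: offload on a strong channel, stay local on a weak one.
strong = (5.0, 1.5)    # 5-Mbit task, good channel
weak = (5.0, 0.25)     # same task, poor channel
print(int(np.argmin(w @ features(strong))), int(np.argmin(w @ features(weak))))
```

The epsilon-greedy exploration and temporal-difference update follow standard Q-learning; the paper's actual method additionally uses a deep neural network and jointly allocates resources across multiple tasks and users, which this single-decision sketch deliberately omits.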


Mobile edge computing · Deep reinforcement learning · Task offloading · Resource allocation · Deep Q-learning



This work was supported in part by the National Natural Science Foundation of China under Grant 61572440 and Grant 61502428, in part by the Zhejiang Provincial Natural Science Foundation of China under Grants LR17F010002 and LR16F010003, and in part by the Young Talent Cultivation Project of Zhejiang Association for Science and Technology under Grant 2016YCGC011.



Copyright information

© ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2018

Authors and Affiliations

  1. College of Information Engineering, Zhejiang University of Technology, Hangzhou, China
