
Fair Resource Allocation Based on Deep Reinforcement Learning in Fog Networks

  • Conference paper
  • First Online:
Ad Hoc Networks (ADHOCNETS 2019)

Abstract

As terminal devices grow explosively, the resources in a fog network may not satisfy all of their requirements, so scheduling resources reasonably becomes a major challenge in future 5G networks. In this paper, we propose a fair resource allocation algorithm based on deep reinforcement learning, which makes full use of the computational resources in a fog network. The goal of the algorithm is to complete the processing of tasks fairly for all user nodes (UNs); the fog nodes (FNs) are expected to assign their central processing unit (CPU) cores to process offloaded tasks reasonably. We apply a Deep Q-Network (DQN) to solve the resource scheduling problem. First, we establish a priority evaluation model that sets the priority of the offloaded tasks, which is related to the reward in the reinforcement learning. Second, the reinforcement learning model is built by taking the situation of the UNs as the state of the environment and the resource allocation scheme as the action of the agent. Subsequently, a loss function is analysed to update the parameters of a deep neural network. Finally, numerical simulations demonstrate the feasibility and effectiveness of the proposed algorithm.
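The pipeline the abstract describes (state = the situation of the UNs, action = the resource allocation choice, a priority-related reward, and a TD loss updating a neural network) can be sketched as follows. This is a minimal illustrative sketch, not the paper's model: the names `TinyQNet` and `step`, the network sizes, the Poisson task arrivals, and the backlog-as-priority reward are all assumptions introduced here for illustration.

```python
import random
import numpy as np

N_UNS = 4          # user nodes; one action = serve one UN's head task
HIDDEN = 16        # assumed hidden width, not from the paper
rng = np.random.default_rng(0)

class TinyQNet:
    """One-hidden-layer Q-network trained on the squared TD error."""
    def __init__(self):
        self.w1 = rng.normal(0.0, 0.1, (N_UNS, HIDDEN))
        self.w2 = rng.normal(0.0, 0.1, (HIDDEN, N_UNS))

    def forward(self, s):
        h = np.maximum(0.0, s @ self.w1)   # ReLU hidden layer
        return h, h @ self.w2              # one Q-value per action

    def update(self, s, a, target, lr=0.01):
        h, q = self.forward(s)
        # Clipped TD error (as in the Nature DQN) keeps updates bounded.
        err = float(np.clip(q[a] - target, -1.0, 1.0))
        g2 = np.zeros(N_UNS)
        g2[a] = err
        gh = (self.w2 @ g2) * (h > 0)      # backprop through the ReLU
        self.w2 -= lr * np.outer(h, g2)
        self.w1 -= lr * np.outer(s, gh)

def step(state, action):
    """Serve one task of UN `action`; assumed priority = current backlog."""
    reward = float(state[action])              # stand-in priority reward
    nxt = state.copy()
    nxt[action] = max(0.0, nxt[action] - 1.0)  # one task processed
    return nxt + rng.poisson(0.2, N_UNS), reward  # new task arrivals

net, replay = TinyQNet(), []
state = rng.poisson(2.0, N_UNS).astype(float)
gamma, eps = 0.9, 0.2
for t in range(500):
    # Epsilon-greedy choice over the FN's scheduling actions.
    a = int(rng.integers(N_UNS)) if rng.random() < eps \
        else int(np.argmax(net.forward(state)[1]))
    nxt, r = step(state, a)
    replay.append((state, a, r, nxt))
    # Replay one stored transition (a real DQN samples minibatches).
    s, sa, sr, s2 = random.choice(replay)
    net.update(s, sa, sr + gamma * float(np.max(net.forward(s2)[1])))
    state = nxt
```

A minibatch of one and no target network keep the sketch short; the stabilising ingredients the paper's DQN would inherit from Mnih et al. (experience replay, error clipping) are still visible in the loop.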

This work is supported in part by the National Natural Science Foundation of China (No. 61871370, No. 61601122 and No. 61773266), the Natural Science Foundation of Shanghai, China (No. 18ZR1437500 and No. 19ZR1423100), the Hundred Talent Program of Chinese Academy of Sciences (No. Y86BRA1001), the Fundamental Research Funds for the Central Universities (No. 2242019K40188), the Key Research & Development Plan of Jiangsu Province (No. BE2018108), and the Postdoctoral Science Foundation of China (No. 2019M651476).



Author information


Corresponding author

Correspondence to Huihui Xu.


Copyright information

© 2019 ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering

About this paper


Cite this paper

Xu, H., Zu, Y., Shen, F., Yan, F., Qin, F., Shen, L. (2019). Fair Resource Allocation Based on Deep Reinforcement Learning in Fog Networks. In: Zheng, J., Li, C., Chong, P., Meng, W., Yan, F. (eds) Ad Hoc Networks. ADHOCNETS 2019. Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, vol 306. Springer, Cham. https://doi.org/10.1007/978-3-030-37262-0_11


  • DOI: https://doi.org/10.1007/978-3-030-37262-0_11

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-37261-3

  • Online ISBN: 978-3-030-37262-0

  • eBook Packages: Computer Science, Computer Science (R0)
