
Adaptive Packet Routing on Communication Networks Based on Reinforcement Learning

  • Tanyaluk Deeka
  • Boriboon Deeka
  • Surajate On-rit
Conference paper
Part of the Lecture Notes in Networks and Systems book series (LNNS, volume 70)

Abstract

We report an empirical study of an adaptive approach to routing packets on a communication network using machine learning. We show that Q-routing, previously demonstrated on small toy networks, can be scaled up to networks of realistic size. We study the performance of this routing approach on synthetic networks of three different topologies: random connections, preferential attachment (PA), and highly optimized topology (HOT), an architecture designed to mimic the Internet's router-level topology. Our simulations show that, in discovering alternate paths under high load, the HOT topology offers a significant advantage over a PA network, which is characterized by hubs at which communication bottlenecks form.
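The Q-routing approach referred to above maintains, at every node, a table of estimated delivery times per destination and neighbour, and refines it from feedback returned by the chosen next hop (Boyan and Littman, 1994). The following is a minimal Python sketch of that per-node update rule; the class and parameter names (QRouter, learning_rate, queue_delay, and so on) are illustrative and are not taken from the paper.

    from collections import defaultdict

    class QRouter:
        """Per-node Q-routing table: Q[dest][neighbour] estimates the remaining
        delivery time to `dest` if the packet is forwarded via `neighbour`."""

        def __init__(self, neighbours, learning_rate=0.7):
            self.neighbours = list(neighbours)
            self.eta = learning_rate
            # Optimistic initialisation: all delay estimates start at zero.
            self.Q = defaultdict(lambda: {n: 0.0 for n in self.neighbours})

        def choose_next_hop(self, dest):
            # Greedy policy: forward to the neighbour with the lowest estimate.
            return min(self.Q[dest], key=self.Q[dest].get)

        def update(self, dest, next_hop, queue_delay, transit_delay, neighbour_estimate):
            # Standard Q-routing update: the target is the time spent in this
            # node's queue, plus the link transit time, plus the neighbour's
            # own best remaining estimate for reaching `dest`.
            target = queue_delay + transit_delay + neighbour_estimate
            old = self.Q[dest][next_hop]
            self.Q[dest][next_hop] = old + self.eta * (target - old)

    # Example: node A with neighbours B and C routes a packet destined for D.
    router = QRouter(neighbours=["B", "C"])
    hop = router.choose_next_hop("D")
    # After forwarding, the chosen neighbour reports its best estimate for D.
    router.update("D", hop, queue_delay=0.4, transit_delay=1.0, neighbour_estimate=2.5)

Each node keeps only local information and learns from per-packet feedback, which is what allows the scheme to discover alternate paths when the shortest routes become congested.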

Keywords

Adaptive routing · Preferential attachment · Highly optimized topology · Reinforcement learning


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  • Tanyaluk Deeka (1)
  • Boriboon Deeka (1)
  • Surajate On-rit (1)

  1. Ubon Ratchathani Rajabhat University, Ubon Ratchathani, Thailand
