The Strategic Control of an Ant-Based Routing System Using Neural Net Q-Learning Agents

  • David Legge
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3394)


Agents have been employed to improve the performance of an Ant-Based Routing System on a communications network. The Agents use a Neural Net based Q-Learning approach to adapt their strategy to prevailing conditions and to learn autonomously. They manipulate parameters that affect the behaviour of the Ant-System. The Ant-System is able to find the optimum routing configuration under static traffic conditions; however, under fast-changing dynamic conditions, such as congestion, it is slow to react because of the inertia built up by the best routes. The Agents reduce this inertia by changing the speed of response of the Ant-System. For the system to be effective, the Agents must cooperate, forming an implicit society across the network.
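The core idea, that a strategic agent can trade route stability for responsiveness by tuning a single parameter, can be sketched with a probabilistic routing table of the kind used in ant-based routing (cf. Schoonderwoerd et al.). This is a minimal illustration, not the paper's implementation; the class name, the `learning_rate` parameter, and the update rule are assumptions for the sketch.

```python
class AntRoutingTable:
    """Per-node probabilistic routing table (illustrative sketch).

    Each neighbour holds a probability of being chosen as next hop.
    Ants arriving via a neighbour reinforce that entry; all entries
    are renormalised, so the others decay.
    """

    def __init__(self, neighbours, learning_rate=0.1):
        # Start with uniform routing probabilities over neighbours.
        p = 1.0 / len(neighbours)
        self.probs = {n: p for n in neighbours}
        # A strategic agent could raise this value to speed up
        # adaptation (reducing route inertia) or lower it to
        # stabilise routing -- the "speed of response" knob.
        self.learning_rate = learning_rate

    def reinforce(self, neighbour):
        """Reinforce `neighbour` and renormalise the table."""
        r = self.learning_rate
        for n in self.probs:
            if n == neighbour:
                self.probs[n] = (self.probs[n] + r) / (1.0 + r)
            else:
                self.probs[n] = self.probs[n] / (1.0 + r)


def steps_to_flip(rate):
    """How many reinforcements of B are needed to overturn the
    inertia built up by sustained traffic favouring A."""
    table = AntRoutingTable(["A", "B"], learning_rate=rate)
    for _ in range(20):          # steady traffic builds inertia on A
        table.reinforce("A")
    steps = 0
    while table.probs["B"] <= table.probs["A"]:
        table.reinforce("B")     # conditions change: ants now favour B
        steps += 1
    return steps
```

With a low learning rate the table needs many reinforcements before the preferred route flips, while a higher rate flips it in a handful of steps; this is the inertia the strategic agents act on.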







Copyright information

© Springer-Verlag Berlin Heidelberg 2005

Authors and Affiliations

  • David Legge
  1. Centre for Telecommunication Networks, School of Engineering, University of Durham
