This chapter presents a new framework for solving distributed control problems cooperatively through the concept of dynamic team building. The distributed control problem is modeled as a set of sub-problems using a directed graph, where each node represents a sub-problem and each link represents the relationship between two nodes. A cooperative ensemble (CE) of agents is used to solve this problem: agents are assigned to the nodes of the graph, and each agent maintains a table of its link relationships with all the other nodes of the problem. Within the cooperative ensemble, each agent iteratively generates three sets of outputs based on the input variables it receives: the need for cooperation, the level of cooperation, and the control directives. These outputs drive dynamic team building within the cooperative ensemble. Agents within each team can issue a collaborative control directive that takes into account the mistakes of all the members in the team. In addition, each agent has a neuro-biologically inspired memory structure containing an additively decaying value of all its previous errors, which is used to facilitate the dynamic update of the agent's control parameters. The cooperative ensemble has been implemented in the form of distributed neural traffic signal controllers for distributed real-time traffic signal control, and evaluated in a large simulated traffic network alongside several existing algorithms. The experiments have yielded promising results, and the cooperative ensemble is seen as a potential framework for similar distributed control problems.
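The memory structure described above can be illustrated with a minimal sketch. Assuming the "additively decaying value of previous errors" is an exponentially decaying running sum (the class name, `decay` parameter, and exact update rule below are assumptions for illustration, not the chapter's actual formulation):

```python
class AgentErrorMemory:
    """Sketch of an agent memory holding an additively decaying
    accumulation of past errors. The decay factor is a hypothetical
    parameter; the chapter's exact update rule may differ."""

    def __init__(self, decay: float = 0.9):
        self.decay = decay   # how strongly older errors fade per step
        self.value = 0.0     # accumulated decayed error

    def update(self, error: float) -> float:
        # Older errors shrink geometrically; the newest error is added in full.
        # Such a summary could then drive updates to the agent's control parameters.
        self.value = self.decay * self.value + abs(error)
        return self.value


memory = AgentErrorMemory(decay=0.5)
for e in [1.0, 0.0, 2.0]:
    memory.update(e)
print(memory.value)  # (1.0 * 0.5 + 0.0) * 0.5 + 2.0 = 2.25
```

Under this reading, a large memory value signals persistent recent mistakes, which could prompt a stronger adjustment of the agent's control parameters than any single error would.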
© 2007 Springer-Verlag Berlin Heidelberg
Srinivasan, D., Choy, M.C. (2007). Distributed Problem Solving using Evolutionary Learning in Multi-Agent Systems. In: Jain, L.C., Palade, V., Srinivasan, D. (eds) Advances in Evolutionary Computing for System Design. Studies in Computational Intelligence, vol 66. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-72377-6_9
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-72376-9
Online ISBN: 978-3-540-72377-6