
Collective Behavior Evolution in a Group of Cooperating Agents

  • J. Liu
  • J. Wu
Part of the Studies in Fuzziness and Soft Computing book series (STUDFUZZ, volume 98)

Abstract

This work addresses the question of how a group of distributed agents can acquire a collective, goal-directed, task-driven behavior. The specific problem we consider is how simulated ants can perform coordinated movements to collectively transport an object toward a desired goal location. We propose an evolutionary computation mechanism that involves no centralized modeling or control, apart from a high-level criterion that measures the quality of collective task performance. The evolutionary learning approach is based on a fittest-preserved (elitist) genetic algorithm. In addition to the formulation and algorithm of collective behavior learning, we present the results of several computer simulations that illustrate and validate the effectiveness of the proposed approach.
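The sketch below is a minimal illustration of the kind of fittest-preserved (elitist) genetic algorithm loop the abstract describes, in which one chromosome encodes the behavior parameters of the whole ant group and fitness is derived from a high-level measure of how close the transported object ends up to the goal. The encoding, group size, selection scheme, and the simulate_transport placeholder are illustrative assumptions, not the authors' exact formulation.

```python
# Hedged sketch of an elitist GA for evolving collective transport behavior.
# All parameters and the simulation stub are assumptions for illustration only.
import random

GROUP_SIZE = 8        # number of simulated ants (assumed)
GENES_PER_ANT = 4     # per-ant behavior parameters, e.g. pushing-direction weights (assumed)
POP_SIZE = 30
GENERATIONS = 50
MUTATION_RATE = 0.05


def random_chromosome():
    """One chromosome encodes the behavior parameters of the entire ant group."""
    return [random.uniform(-1.0, 1.0) for _ in range(GROUP_SIZE * GENES_PER_ANT)]


def simulate_transport(chromosome):
    """Placeholder for the collective-transport simulation.

    In the paper, the agents push the object for some number of steps and the
    resulting object position is observed; here we merely fake a final
    object-to-goal distance so that the GA loop runs end to end.
    """
    coherence = sum(chromosome) / len(chromosome)
    return abs(1.0 - coherence)  # pretend distance from object to goal after the trial


def fitness(chromosome):
    """High-level criterion: reward chromosomes that bring the object close to the goal."""
    return 1.0 / (1.0 + simulate_transport(chromosome))


def crossover(a, b):
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]


def mutate(c):
    return [g + random.gauss(0, 0.1) if random.random() < MUTATION_RATE else g
            for g in c]


population = [random_chromosome() for _ in range(POP_SIZE)]
for generation in range(GENERATIONS):
    scored = sorted(population, key=fitness, reverse=True)
    elite = scored[0]                      # fittest individual is preserved unchanged
    parents = scored[: POP_SIZE // 2]      # truncation selection (assumed)
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - 1)]
    population = [elite] + children

print("best fitness:", fitness(max(population, key=fitness)))
```

Because the only task-level signal is the scalar fitness value, no agent needs an explicit model of its teammates; preserving the fittest group encoding each generation is what the abstract refers to as the fittest-preserved mechanism.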

Keywords

Genetic Algorithm, Fitness Function, Collective Movement, Group Behavior, Goal Location



Copyright information

© Springer-Verlag Berlin Heidelberg 2002

