
Part of the book series: Studies in Fuzziness and Soft Computing (STUDFUZZ)

Abstract

Can we devise simple solutions to complex problems? Is it possible to do so using elemental modules which, when collaborating, give rise to emergent intelligence? The answer is yes: no complex mathematical models are required. Nature offers a variety of techniques that lend themselves well to solving complex problems with simpler atomic entities. Insects, for instance, are very simple as individuals, but as a collective they form powerful systems able to solve very complex tasks. This chapter describes how the insect world can inspire engineers and computer scientists to devise simple solutions to complex problems. After all, the simplest solution is always the best.
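The kind of emergent collective behaviour the abstract describes can be illustrated with the classic "double bridge" experiment, in which ants offered a short and a long path between nest and food converge on the short one purely through pheromone feedback. The sketch below is a hypothetical toy simulation written for this page, not code from the chapter; the function name, parameter values, and path costs are all illustrative assumptions.

```python
import random

def double_bridge(n_ants=100, n_steps=200, evaporation=0.05, seed=0):
    """Toy model of the double-bridge experiment: each simple agent only
    follows pheromone, yet the colony collectively finds the short path."""
    rng = random.Random(seed)
    lengths = {"short": 1.0, "long": 2.0}    # assumed path costs
    pheromone = {"short": 1.0, "long": 1.0}  # start unbiased
    for _ in range(n_steps):
        for _ in range(n_ants):
            total = pheromone["short"] + pheromone["long"]
            path = "short" if rng.random() < pheromone["short"] / total else "long"
            # shorter trips are completed sooner, so the short path
            # accumulates pheromone faster (deposit inversely
            # proportional to path length)
            pheromone[path] += 1.0 / lengths[path]
        for p in pheromone:  # evaporation keeps the system adaptive
            pheromone[p] *= 1.0 - evaporation
    total = pheromone["short"] + pheromone["long"]
    return pheromone["short"] / total  # colony's preference for the short path

print(double_bridge())  # a value close to 1: the colony picks the short path
```

No individual agent compares the two paths; the preference emerges from positive feedback (deposits) balanced by negative feedback (evaporation), which is the essence of the stigmergic mechanisms the chapter surveys.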




Copyright information

© 2005 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Monekosso, N.D., Remagnino, P. (2005). A Collective Can Do Better. In: Design of Intelligent Multi-Agent Systems. Studies in Fuzziness and Soft Computing. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-44516-6_7


  • DOI: https://doi.org/10.1007/978-3-540-44516-6_7

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-06177-6

  • Online ISBN: 978-3-540-44516-6

  • eBook Packages: Engineering (R0)
