Debugging and Testing of Multi-Agent Systems using Design Artefacts

  • David Poutakidis
  • Michael Winikoff†
  • Lin Padgham
  • Zhiyong Zhang
Chapter

Abstract

Agents are a promising technology for dealing with increasingly complex system development. An agent may have many ways of achieving a given task, and it selects the most appropriate one based on the context at hand. Although this makes agents flexible and robust, it also makes agent systems challenging to test and debug. This chapter presents two tools: one for generating test cases for unit testing agent systems, and one for debugging agent systems by monitoring a running system. Both tools are based on the thesis that design artefacts can be valuable resources in testing and debugging. An empirical evaluation of the debugging tool showed that it was useful to developers, yielding a significant improvement both in the number of bugs fixed and in the time taken to fix them.
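
To make the central idea concrete, the following is a minimal, hypothetical sketch (not the authors' implementation or the chapter's actual tool) of how a design-time interaction protocol could be encoded as a finite state machine and used to flag unexpected messages while the system runs. All names here (ProtocolMonitor, allow, observe) are illustrative assumptions.

```java
import java.util.*;

// Sketch only: a design artefact (an interaction protocol) encoded as a
// finite state machine; a monitor checks each observed message against it
// and reports messages the protocol does not allow in the current state.
public class ProtocolMonitor {
    // transitions: current state -> (message type -> next state)
    private final Map<String, Map<String, String>> transitions = new HashMap<>();
    private String state;

    public ProtocolMonitor(String initialState) {
        this.state = initialState;
    }

    // Declare that messageType is legal in state 'from' and leads to state 'to'.
    public void allow(String from, String messageType, String to) {
        transitions.computeIfAbsent(from, k -> new HashMap<>()).put(messageType, to);
    }

    // Check an observed message; returns false (and reports) if it is unexpected.
    public boolean observe(String messageType) {
        Map<String, String> out = transitions.getOrDefault(state, Collections.emptyMap());
        String next = out.get(messageType);
        if (next == null) {
            System.out.println("Possible bug: unexpected message '" + messageType
                    + "' in protocol state '" + state + "'");
            return false;
        }
        state = next;
        return true;
    }

    public static void main(String[] args) {
        // Toy request/response protocol, as it might appear in a design artefact.
        ProtocolMonitor monitor = new ProtocolMonitor("start");
        monitor.allow("start", "request-quote", "awaiting-quote");
        monitor.allow("awaiting-quote", "quote", "done");

        monitor.observe("request-quote"); // legal: moves to awaiting-quote
        monitor.observe("request-quote"); // flagged: the protocol expects a quote next
    }
}
```

The point of the sketch is only that a protocol drawn at design time already specifies which messages are legal when, so a monitor derived from it can flag deviations automatically instead of relying on the developer to spot them in a log.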

Keywords

Multiagent System, Agent System, Testing Tool, Interaction Protocol, Debug Tool
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.

References

  1. Apfelbaum, L., Doyle, J.: Model Based Testing. In: Proceedings of the 10th International Software Quality Week Conference, CA, USA (1997)
  2. Bates, P.: EBBA Modelling Tool a.k.a. Event Definition Language. Tech. rep., Department of Computer Science, University of Massachusetts, Amherst, MA, USA (1987)
  3. Bauer, B., Müller, J.P., Odell, J.: Agent UML: A Formalism for Specifying Multiagent Interaction. In: P. Ciancarini, M. Wooldridge (eds.) Agent-Oriented Software Engineering, pp. 91–103. Springer-Verlag, Berlin (2001)
  4. Benfield, S.S., Hendrickson, J., Galanti, D.: Making a strong business case for multiagent technology. In: P. Stone, G. Weiss (eds.) Autonomous Agents and Multi-Agent Systems (AAMAS), pp. 10–15. ACM Press (2006)
  5. Binder, R.V.: Testing Object-Oriented Systems: Models, Patterns, and Tools. Addison-Wesley Longman Publishing Co., Inc., Boston, MA, USA (1999)
  6. Binkley, D., Gold, N., Harman, M.: An empirical study of static program slice size. ACM Transactions on Software Engineering and Methodology 16(2), 8 (2007). DOI http://doi.acm.org/10.1145/1217295.1217297
  7. Bresciani, P., Giorgini, P., Giunchiglia, F., Mylopoulos, J., Perini, A.: Tropos: An agent-oriented software development methodology. Journal of Autonomous Agents and Multi-Agent Systems 8, 203–236 (2004)
  8. Bruegge, B., Gottschalk, T., Luo, B.: A framework for dynamic program analyzers. In: Object-Oriented Programming Systems, Languages, and Applications (OOPSLA), pp. 65–82. ACM Press, Washington (1993)
  9. Busetta, P., Howden, N., Rönnquist, R., Hodgson, A.: Structuring BDI agents in functional clusters. In: Agent Theories, Architectures, and Languages (ATAL-99), pp. 277–289. Springer-Verlag (2000). LNCS 1757
  10. Busetta, P., Rönnquist, R., Hodgson, A., Lucas, A.: JACK Intelligent Agents - Components for Intelligent Agents in Java. Tech. rep., Agent Oriented Software Pty. Ltd., Melbourne, Australia (1998)
  11. Caire, G., Cossentino, M., Negri, A., Poggi, A., Turci, P.: Multi-Agent Systems Implementation and Testing. In: Fourth International Symposium: From Agent Theory to Agent Implementation, Vienna, Austria (2004)
  12. Clarke, E.M., Grumberg, O., Long, D.E.: Model checking and abstraction. ACM Transactions on Programming Languages and Systems 16(5), 1512–1542 (1994). URL citeseer.ist.psu.edu/clarke92model.html
  13. Coelho, R., Kulesza, U., von Staa, A., Lucena, C.: Unit Testing in Multi-Agent Systems using Mock Agents and Aspects. In: Proceedings of the 2006 International Workshop on Software Engineering for Large-Scale Multi-Agent Systems, pp. 83–90 (2006)
  14. Cohen, B.: The use of bug in computing. IEEE Annals of the History of Computing 16(2) (1994)
  15. Cohen, D.M., Dalal, S.R., Fredman, M.L., Patton, G.C.: The AETG system: An Approach to Testing Based on Combinatorial Design. Software Engineering 23(7), 437–444 (1997). URL citeseer.ist.psu.edu/cohen97aetg.html
  16. Cost, R.S., Chen, Y., Finin, T., Labrou, Y., Peng, Y.: Using colored petri nets for conversation modeling. In: F. Dignum, M. Greaves (eds.) Issues in Agent Communication, pp. 178–192. Springer-Verlag, Heidelberg, Germany (2000). URL citeseer.ist.psu.edu/article/cost99using.html
  17. Dalal, S.R., Jain, A., Karunanithi, N., Leaton, J.M., Lott, C.M., Patton, G.C., Horowitz, B.M.: Model-based testing in practice. In: International Conference on Software Engineering (1999)
  18. DeLoach, S.A.: Analysis and design using MaSE and agentTool. In: Proceedings of the 12th Midwest Artificial Intelligence and Cognitive Science Conference (MAICS 2001) (2001)
  19. DeLoach, S.A.: Developing a multiagent conference management system using the O-MaSE process framework. In: Luck and Padgham [39], pp. 168–181
  20. DeLoach, S.A., Wood, M.F., Sparkman, C.H.: Multiagent systems engineering. International Journal of Software Engineering and Knowledge Engineering 11(3), 231–258 (2001)
  21. Dignum, F., Sierra, C. (eds.): Agent Mediated Electronic Commerce: The European AgentLink Perspective. Lecture Notes in Artificial Intelligence, vol. 1991. Springer-Verlag (2001)
  22. Doolan, E.P.: Experience with Fagan's inspection method. Software Practice and Experience 22(2), 173–182 (1992)
  23. Ducassé, M.: A pragmatic survey of automated debugging. In: Automated and Algorithmic Debugging, LNCS, vol. 749, pp. 1–15. Springer, Berlin/Heidelberg (1993). URL citeseer.ist.psu.edu/367030.html
  24. Ekinci, E.E., Tiryaki, A.M., Çetin, Ö.: Goal-oriented agent testing revisited. In: J.J. Gomez-Sanz, M. Luck (eds.) Ninth International Workshop on Agent-Oriented Software Engineering, pp. 85–96 (2008)
  25. El-Far, I.K., Whittaker, J.A.: Model-Based Software Testing, pp. 825–837. Wiley (2001)
  26. Fagan, M.E.: Advances in software inspections. IEEE Transactions on Software Engineering SE-12(7), 744–751 (1986)
  27. Flater, D.: Debugging agent interactions: a case study. In: Proceedings of the 16th ACM Symposium on Applied Computing (SAC 2001), pp. 107–114. ACM Press (2001)
  28. Gomez-Sanz, J.J., Botía, J., Serrano, E., Pavón, J.: Testing and debugging of MAS interactions with INGENIAS. In: J.J. Gomez-Sanz, M. Luck (eds.) Ninth International Workshop on Agent-Oriented Software Engineering, pp. 133–144 (2008)
  29. Hailpern, B., Santhanam, P.: Software debugging, testing, and verification. IBM Systems Journal 41(1), 1–12 (2002)
  30. Hall, C., Hammond, K., O'Donnell, J.: An algorithmic and semantic approach to debugging. In: Proceedings of the 1990 Glasgow Workshop on Functional Programming, pp. 44–53 (1990)
  31. Huber, M.J.: JAM: A BDI-theoretic mobile agent architecture. In: Proceedings of the Third International Conference on Autonomous Agents (Agents'99), pp. 236–243 (1999)
  32. Huget, M.P., Odell, J., Haugen, Ø., Nodine, M.M., Cranefield, S., Levy, R., Padgham, L.: FIPA modeling: Interaction diagrams. On www.auml.org under "Working Documents" (2003). FIPA Working Draft (version 2003-07-02)
  33. Johnson, M.S.: A software debugging glossary. ACM SIGPLAN Notices 17(2), 53–70 (1982)
  34. Jones, J.A.: Fault localization using visualization of test information. In: Proceedings of the 26th International Conference on Software Engineering, pp. 54–56. IEEE Computer Society, Washington, DC, USA (2004)
  35. Knublauch, H.: Extreme programming of multi-agent systems. In: Proceedings of the First International Joint Conference on Autonomous Agents and Multi-Agent Systems (AAMAS) (2002). URL citeseer.ist.psu.edu/knublauch02extreme.html
  36. LeBlanc, T., Mellor-Crummey, J., Fowler, R.: Analyzing parallel program executions using multiple views. Journal of Parallel and Distributed Computing 9(2), 203–217 (1990)
  37. Lind, J.: Specifying agent interaction protocols with standard UML. In: Agent-Oriented Software Engineering II: Second International Workshop, Montreal, Canada, LNCS, vol. 2222, pp. 136–147 (2001). URL citeseer.ist.psu.edu/lind01specifying.html
  38. Low, C.K., Chen, T.Y., Rönnquist, R.: Automated Test Case Generation for BDI agents. Autonomous Agents and Multi-Agent Systems 2(4), 311–332 (1999)
  39. Luck, M., Padgham, L. (eds.): Agent-Oriented Software Engineering VIII, 8th International Workshop, AOSE 2007, Honolulu, HI, USA, May 14, 2007, Revised Selected Papers. Lecture Notes in Computer Science, vol. 4951. Springer (2008)
  40. Madachy, R.: Process improvement analysis of a corporate inspection program. In: Seventh Software Engineering Process Group Conference, Boston, MA (1995)
  41. Mayer, W., Stumptner, M.: Model-based debugging - state of the art and future challenges. Electronic Notes in Theoretical Computer Science 174(4), 61–82 (2007). DOI http://dx.doi.org/10.1016/j.entcs.2006.12.030
  42. McDowell, C., Helmbold, D.: Debugging concurrent programs. ACM Computing Surveys 21(4), 593–622 (1989)
  43. Morandini, M., Nguyen, D.C., Perini, A., Siena, A., Susi, A.: Tool-supported development with Tropos: The conference management system case study. In: Luck and Padgham [39], pp. 182–196
  44. Munroe, S., Miller, T., Belecheanu, R., Pechoucek, M., McBurney, P., Luck, M.: Crossing the agent technology chasm: Experiences and challenges in commercial applications of agents. Knowledge Engineering Review 21(4), 345–392 (2006)
  45. Myers, B.A., Weitzman, D.A., Ko, A.J., Chau, D.H.: Answering why and why not questions in user interfaces. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 397–406. ACM, New York, NY, USA (2006)
  46. Mylopoulos, J., Castro, J., Kolp, M.: Tropos: Toward agent-oriented information systems engineering. In: Second International Bi-Conference Workshop on Agent-Oriented Information Systems (AOIS 2000) (2000)
  47. Naish, L.: A declarative debugging scheme. Journal of Functional and Logic Programming 1997(3), 1–27 (1997)
  48. Ndumu, D.T., Nwana, H.S., Lee, L.C., Collis, J.C.: Visualising and debugging distributed multi-agent systems. In: Proceedings of the Third Annual Conference on Autonomous Agents, pp. 326–333. ACM Press (1999). DOI http://doi.acm.org/10.1145/301136.301220
  49. Nguyen, C.D., Perini, A., Tonella, P.: eCAT: A tool for automating test cases generation and execution in testing multi-agent systems (demo paper). In: 7th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2008), Estoril, Portugal (2008)
  50. Nguyen, C.D., Perini, A., Tonella, P.: Ontology-based test generation for multiagent systems (short paper). In: 7th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2008), Estoril, Portugal (2008)
  51. O'Hare, G.M.P., Wooldridge, M.J.: A software engineering perspective on multi-agent system design: experience in the development of MADE. In: Distributed Artificial Intelligence: Theory and Praxis, pp. 109–127. Kluwer Academic Publishers (1992)
  52. Padgham, L., Thangarajah, J., Winikoff, M.: The Prometheus Design Tool - a conference management system case study. In: Luck and Padgham [39], pp. 197–211
  53. Padgham, L., Winikoff, M.: Developing Intelligent Agent Systems: A Practical Guide. John Wiley and Sons (2004). ISBN 0-470-86120-7
  54. Padgham, L., Winikoff, M., DeLoach, S., Cossentino, M.: A unified graphical notation for AOSE. In: Ninth International Workshop on Agent-Oriented Software Engineering (AOSE) (2008)
  55. Padgham, L., Winikoff, M., Poutakidis, D.: Adding debugging support to the Prometheus methodology. Engineering Applications of Artificial Intelligence, special issue on Agent-oriented Software Development 18(2), 173–190 (2005)
  56. Patton, R.: Software Testing, Second Edition. Sams, Indianapolis, IN, USA (2005)
  57. Paurobally, S., Cunningham, J., Jennings, N.R.: Developing agent interaction protocols graphically and logically. In: Programming Multi-Agent Systems, Lecture Notes in Artificial Intelligence, vol. 3067, pp. 149–168 (2004)
  58. Pokahr, A., Braubach, L., Lamersdorf, W.: Jadex: Implementing a BDI-Infrastructure for JADE Agents. EXP - In Search of Innovation (Special Issue on JADE) 3(3), 76–85 (2003)
  59. Poutakidis, D.: Debugging multi-agent systems with design documents. Ph.D. thesis, RMIT University, School of Computer Science and IT (2008)
  60. Poutakidis, D., Padgham, L., Winikoff, M.: Debugging multi-agent systems using design artifacts: The case of interaction protocols. In: Proceedings of the First International Joint Conference on Autonomous Agents and Multi-Agent Systems (AAMAS'02) (2002)
  61. Poutakidis, D., Padgham, L., Winikoff, M.: An exploration of bugs and debugging in multi-agent systems. In: Proceedings of the 14th International Symposium on Methodologies for Intelligent Systems (ISMIS), pp. 628–632, Maebashi City, Japan (2003)
  62. Purvis, M., Cranefield, S., Nowostawski, M., Purvis, M.: Multi-agent system interaction protocols in a dynamically changing environment. In: T. Wagner (ed.) An Application Science for Multi-Agent Systems, pp. 95–112. Kluwer Academic (2004)
  63. Reisig, W.: Petri Nets: An Introduction. EATCS Monographs on Theoretical Computer Science. Springer-Verlag (1985). ISBN 0-387-13723-8
  64. Rouff, C.: A Test Agent for Testing Agents and their Communities. In: IEEE Aerospace Conference Proceedings, vol. 5, p. 2638 (2002)
  65. Schwarz, R., Mattern, F.: Detecting causal relationships in distributed computations: In search of the holy grail. Distributed Computing 7(3), 149–174 (1994). URL citeseer.nj.nec.com/schwarz94detecting.html
  66. Shapiro, E.Y.: Algorithmic Program Debugging. MIT Press, Cambridge, MA, USA (1983)
  67. Sprinkle, J., van Buskirk, C.P., Karsai, G.: Modeling agent negotiation. In: Proceedings of the 2000 IEEE International Conference on Systems, Man, and Cybernetics, Nashville, TN, vol. 1, pp. 454–459 (2000)
  68. Vessey, I.: Expertise in debugging computer programs: A process analysis. International Journal of Man-Machine Studies 23(5), 459–494 (1985)
  69. Weiser, M.: Programmers use slices when debugging. Communications of the ACM 25(7), 446–452 (1982). DOI http://doi.acm.org/10.1145/358557.358577
  70. Yilmaz, C., Williams, C.: An automated model-based debugging approach. In: Proceedings of the Twenty-Second IEEE/ACM International Conference on Automated Software Engineering, pp. 174–183. ACM, New York, NY, USA (2007). DOI http://doi.acm.org/10.1145/1321631.1321659
  71. Zhang, Z., Thangarajah, J., Padgham, L.: Automated unit testing for agent systems. In: Second International Working Conference on Evaluation of Novel Approaches to Software Engineering (ENASE), pp. 10–18 (2007)

Copyright information

© Springer-Verlag US 2009

Authors and Affiliations

  • David Poutakidis (1)
  • Michael Winikoff† (2)
  • Lin Padgham (2)
  • Zhiyong Zhang (2)
  1. Adaptive Intelligent Systems, Melbourne, Australia
  2. School of Computer Science & IT, RMIT University, Melbourne, Australia
