Empirical Software Engineering

Volume 15, Issue 4, pp 346–379

Testing peer-to-peer systems

  • Eduardo Cunha de Almeida
  • Gerson Sunyé
  • Yves Le Traon
  • Patrick Valduriez


Peer-to-peer (P2P) offers good solutions for many applications, such as large-scale data sharing and collaboration in social networks. It has thus emerged as a powerful paradigm for developing scalable distributed applications, as reflected by the growing number of projects based on this technology. However, building trustworthy P2P applications is difficult because they must be deployed on a large number of autonomous nodes, which may refuse to answer some requests and may even leave the system unexpectedly. This volatility of nodes is normal behavior in P2P systems, yet during testing it may be misinterpreted as a fault (i.e., a failed node). In this work, we present a framework and a methodology for testing P2P applications. The framework is based on the individual control of nodes, allowing test cases to precisely control the volatility of nodes during their execution. We validated this framework through implementation and experimentation on an open-source P2P system. The experimentation tests the behavior of the system under different conditions of volatility and shows how the tests were able to detect complex implementation problems.
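To illustrate the kind of test case the abstract refers to, the following minimal sketch shows a distributed test that controls peers individually and makes some of them leave mid-scenario. All names here (TestNode, runScenario, the key/value data) are illustrative assumptions for the sketch, not the authors' actual framework API.

```java
import java.util.List;

// Hypothetical sketch of a volatility-aware P2P test case:
// peers are controlled individually, some leave mid-test, and an
// oracle checks that data inserted before the departures survives.
public class VolatilityTestSketch {

    /** Minimal abstraction of a peer that the tester can control individually. */
    interface TestNode {
        void join();                         // enter the overlay
        void leave();                        // depart (simulated volatility)
        void put(String key, String value);  // store a key/value pair in the DHT
        String get(String key);              // retrieve a value by key
    }

    /** One scenario: insert data, make half of the peers leave, check retrieval. */
    static void runScenario(List<TestNode> nodes) {
        // 1. All peers join the system.
        for (TestNode n : nodes) {
            n.join();
        }

        // 2. One peer inserts a key/value pair.
        nodes.get(0).put("movie:42", "checksum-abc");

        // 3. Simulate volatility: half of the peers leave during the test.
        for (int i = nodes.size() / 2; i < nodes.size(); i++) {
            nodes.get(i).leave();
        }

        // 4. Oracle: a remaining peer must still retrieve the value.
        String value = nodes.get(1).get("movie:42");
        if (!"checksum-abc".equals(value)) {
            throw new AssertionError("data lost after node departures: " + value);
        }
    }
}
```

In this style of test, the volatility step is part of the test case itself rather than background noise, so a retrieval failure after the departures points to an implementation problem instead of being dismissed as a failed node.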


Software testing · Peer-to-peer systems · Distributed hash tables (DHT) · Testing methodology · Experimental procedure


Copyright information

© Springer Science+Business Media, LLC 2009

Authors and Affiliations

  • Eduardo Cunha de Almeida (1)
  • Gerson Sunyé (2)
  • Yves Le Traon (3)
  • Patrick Valduriez (4)

  1. Federal University of Paraná, Paraná, Brazil
  2. LINA, University of Nantes, Nantes, France
  3. University of Luxembourg, Luxembourg City, Luxembourg
  4. INRIA & LIRMM, Montpellier, France