Empirical Software Engineering, Volume 18, Issue 3, pp 550–593

GPGPU test suite minimisation: search based software engineering performance improvement using graphics cards

  • Shin Yoo
  • Mark Harman
  • Shmuel Ur


It has often been claimed that SBSE uses so-called 'embarrassingly parallel' algorithms that will give SBSE applications easy routes to dramatic performance improvements. However, despite recent advances in multicore computation, this claim has remained largely theoretical; there are few reports of performance improvements using multicore SBSE. This paper shows how inexpensive General Purpose computing on Graphics Processing Units (GPGPU) can be used to massively parallelise suitably adapted SBSE algorithms, thereby making progress towards cheap, easy and useful SBSE parallelism. The paper presents results for three algorithms: NSGA2, SPEA2, and the Two Archive Evolutionary Algorithm, all three of which are adapted for multi-objective regression test selection and minimisation. The results show that all three algorithms achieved performance improvements of up to 25 times using widely available standard GPUs. We also found that the speed-up was strongly statistically correlated with the size of the problem instance: as the problem instance grows, so does the speed-up.
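The 'embarrassingly parallel' structure the abstract refers to arises because, in coverage-based test suite minimisation, the fitness of each candidate suite in the population can be evaluated independently, and within one candidate each requirement's coverage check is itself independent. The sketch below illustrates this with a simple bi-objective (coverage, cost) formulation; the function name, data, and formulation details are illustrative assumptions, not taken from the paper, and the nested loops stand in for what would be per-work-item GPU kernel invocations.

```python
# Illustrative sketch only: a bi-objective fitness evaluation for test
# suite minimisation. Each (individual, requirement) pair is independent,
# which is what makes the computation embarrassingly parallel on a GPU.

def evaluate_population(population, coverage):
    """population: list of bit-vectors, one per candidate suite
    (individual[t] == 1 iff test t is selected).
    coverage[t][r] == 1 iff test t covers requirement r."""
    num_tests = len(coverage)
    num_reqs = len(coverage[0])
    fitnesses = []
    for individual in population:
        # On a GPU, each (individual, requirement) pair would be one
        # work-item; here we loop sequentially for clarity.
        covered = sum(
            1 for r in range(num_reqs)
            if any(individual[t] and coverage[t][r] for t in range(num_tests))
        )
        cost = sum(individual)  # number of tests selected
        fitnesses.append((covered, cost))
    return fitnesses

# Toy example: 3 tests covering 4 requirements.
coverage = [
    [1, 1, 0, 0],  # test 0
    [0, 1, 1, 0],  # test 1
    [0, 0, 0, 1],  # test 2
]
print(evaluate_population([[1, 0, 1], [1, 1, 1]], coverage))
```

A multi-objective algorithm such as NSGA2 would then rank these (covered, cost) pairs by Pareto dominance; offloading only the evaluation loop to the GPU is enough to parallelise the dominant cost of each generation.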


Keywords: Search based software engineering, GPGPU, Test suite minimisation, Regression testing



Copyright information

© Springer Science+Business Media New York 2013

Authors and Affiliations

  1. University College London, London, UK
  2. University of Bristol, Bristol, UK
