CHOMK: Concurrent Higher-Order Mutants Killing Using Genetic Algorithm

Research Article - Computer Engineering and Computer Science

Abstract

Higher-order subtle mutants are faults that are hard to detect or kill: they survive the existing test set that kills all first-order mutants of a given program. Recently, techniques have been proposed to construct higher-order subtle concurrency mutants that are not represented by any first-order mutant. To the best of our knowledge, no test-input generation technique has been proposed to kill this type of mutant. This paper proposes a search-based technique for generating a set of test inputs that kills higher-order subtle concurrency mutants. The proposed technique uses a genetic algorithm to generate the test inputs. Its performance is evaluated and compared with that of random test-data generation. The results demonstrate the effectiveness of the proposed technique: it outperforms the random technique in both the killing ratio achieved on the generated set of subtle concurrency mutants and the size of the resulting test suite. On the evaluated set of subtle concurrency mutants, the proposed technique killed approximately 91.4% of all mutants using 79 test cases, compared with 82.8% using 128 test cases for the random technique.
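To make the search-based approach concrete, the following is a minimal, self-contained Java sketch of a GA-driven test-generation loop of the kind the abstract describes. It is not the paper's implementation: the helper executeKills (standing in for "run the original program and a live mutant on the candidate input and compare results"), the integer-vector test representation, and all constants are hypothetical stand-ins.

import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Minimal GA sketch for mutant-killing test-input generation (illustrative only).
public class GaTestGen {
    static final Random RNG = new Random(42);
    static final int POP = 30, GENES = 5, GENERATIONS = 100;

    // Fitness of a candidate input: the number of still-live mutants it kills.
    static int fitness(int[] input, List<Integer> liveMutants) {
        int killed = 0;
        for (int m : liveMutants) if (executeKills(input, m)) killed++;
        return killed;
    }

    // Toy stand-in; a real tool would execute mutant m on the input and
    // compare its output against the original program's output.
    static boolean executeKills(int[] input, int m) {
        return input[m % GENES] % (m + 2) == 0;
    }

    public static void main(String[] args) {
        List<Integer> live = new ArrayList<>();
        for (int i = 0; i < 20; i++) live.add(i);      // toy mutant ids
        List<int[]> suite = new ArrayList<>();         // the generated test suite

        int[][] pop = new int[POP][GENES];             // random initial population
        for (int[] ind : pop)
            for (int g = 0; g < GENES; g++) ind[g] = RNG.nextInt(1000);

        for (int gen = 0; gen < GENERATIONS && !live.isEmpty(); gen++) {
            int[] best = pop[0];                       // find the fittest individual
            int bestFit = fitness(best, live);
            for (int[] ind : pop) {
                int f = fitness(ind, live);
                if (f > bestFit) { best = ind; bestFit = f; }
            }
            if (bestFit > 0) {                         // it joins the suite and its
                suite.add(best.clone());               // victims leave the live pool
                final int[] b = best;
                live.removeIf(m -> executeKills(b, m));
            }
            int[][] next = new int[POP][GENES];        // random parents, one-point
            for (int i = 0; i < POP; i++) {            // crossover, point mutation
                int[] p1 = pop[RNG.nextInt(POP)], p2 = pop[RNG.nextInt(POP)];
                int cut = RNG.nextInt(GENES);
                for (int g = 0; g < GENES; g++) next[i][g] = g < cut ? p1[g] : p2[g];
                if (RNG.nextDouble() < 0.1) next[i][RNG.nextInt(GENES)] = RNG.nextInt(1000);
            }
            pop = next;
        }
        System.out.println("suite size = " + suite.size() + ", surviving mutants = " + live.size());
    }
}

For the concurrency mutants targeted by the paper, fitness evaluation would additionally have to control thread interleavings (e.g., via a noise-injecting scheduler), since a subtle concurrency mutant may only be killed under particular schedules.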

Keywords

Mutation testing · Higher-order mutation testing · Subtle concurrency mutants · Test generation · Genetic algorithms

Copyright information

© King Fahd University of Petroleum & Minerals 2018

Authors and Affiliations

  1. Department of Mathematics and Computer Science, Faculty of Science, Beni-Suef University, Beni Suef, Egypt
  2. College of Computers and Information Technology, Taif University, Ta’if, Saudi Arabia
  3. Department of Mathematics and Computer Science, Faculty of Science, Menoufia University, Al Minufya, Egypt
