Greenify: A Game with the Purpose of Test Data Generation for Unit Testing

  • Sharmin Moosavi
  • Hassan Haghighi
  • Hasti Sahabi
  • Farzam Vatanzade
  • Mojtaba Vahidi Asl
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11761)

Abstract

Test data generation is one of the most important, yet tedious and costly, tasks of the software testing process. Several methods for automating this task have been proposed, but due to their practical drawbacks, test data generation is still widely performed by humans in industry. In our previous work, we employed the notion of a Game With A Purpose (GWAP) and introduced Rings, a GWAP that reduces the time and cost of human-based test data generation and increases its appeal, engaging even nontechnical people. In this paper, we propose a new game with the purpose of test data generation, called Greenify, which resolves the main issues of Rings. The game environment is built from a program’s control flow graph. To evaluate the proposed approach, we designed several game levels based on six different C++ programs and gave them to volunteering players. The results show that, compared to both the conventional human-based approach and Rings, Greenify generates test data in less time for all feasible paths of the given benchmark programs. In addition, Greenify identifies a smaller set of likely infeasible paths.
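
To make the setting concrete, the sketch below shows the kind of target that path-oriented test data generation addresses: a small C++ function whose control flow graph contains both feasible and infeasible paths. The function `classify`, its branch conditions, and the sample inputs are hypothetical illustrations, not taken from the paper's benchmark programs.

```cpp
#include <iostream>

// Hypothetical test target (not from the paper's benchmarks). Its control
// flow graph has four acyclic paths through the two branches, but only
// three are feasible: no input can make branch A and branch B both true,
// since (x > 10) and (x < 5) contradict each other.
int classify(int x, int y) {
    int result = 0;
    if (x > 10)              // branch A
        result += 1;
    if (x < 5 && y > 0)      // branch B
        result += 2;
    return result;
}

int main() {
    // Test data covering the three feasible paths:
    std::cout << classify(20, 1) << '\n'; // A true,  B false -> 1
    std::cout << classify(3, 1)  << '\n'; // A false, B true  -> 2
    std::cout << classify(7, -1) << '\n'; // A false, B false -> 0
    // The fourth path (A true, B true) is infeasible and should be
    // reported as such rather than searched for indefinitely.
    return 0;
}
```

Generating inputs that drive execution down each feasible path, and recognizing the infeasible ones, is exactly the work that Greenify offloads to players.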

Keywords

Test data generation · Game With A Purpose · Human-based computation game · Human-based software testing


Copyright information

© IFIP International Federation for Information Processing 2019

Authors and Affiliations

  • Sharmin Moosavi (1)
  • Hassan Haghighi (1)
  • Hasti Sahabi (1)
  • Farzam Vatanzade (1)
  • Mojtaba Vahidi Asl (1)

  1. Faculty of Computer Science and Engineering, Shahid Beheshti University, G. C., Tehran, Iran