
Gap between academia and industry: a case of empirical evaluation of three software testing methods

  • Sheikh Umar Farooq
Original Article

Abstract

Doing the right kind of testing has always been one of the main challenges and a decisive task for industry. To choose the right software testing method(s), industry needs exact, objective knowledge of their effectiveness, efficiency, and applicability conditions. The most common way to obtain such knowledge is to evaluate testing methods through empirical studies. Reliable and comprehensive evidence can be obtained by aggregating the results of different empirical studies (a family of experiments), taking into account their findings and limitations. We conducted a study to investigate the current state of the empirical knowledge base for three testing methods. We found that although the empirical studies conducted so far to evaluate testing methods contain many important and interesting results, we still lack factual and generalizable knowledge about the performance and applicability conditions of testing methods, which prevents the results from being readily adopted by industry. Moreover, we tried to identify the major factors that limit academia from producing reliable results with an industrial impact. We believe that, besides effective and long-term academia-industry collaboration, there is a need for more systematic, quantifiable, and comprehensive empirical studies (which provide scope for aggregation using rigorous techniques), mainly replications, so as to create an effective and applicable knowledge base about testing methods that can potentially close the gap between academia and industry.
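
As a concrete illustration of the kind of aggregation referred to above, the sketch below pools per-study effect estimates from a hypothetical family of experiments using a simple inverse-variance (fixed-effect) model. The study labels, effect sizes, and standard errors are invented for illustration and are not taken from the paper.

# Minimal sketch, assuming hypothetical data: inverse-variance (fixed-effect)
# pooling of per-study differences in fault-detection effectiveness.
import math

# (study label, mean difference in % of faults detected, standard error)
studies = [
    ("Original experiment", 8.0, 4.0),
    ("Replication 1",       5.0, 3.0),
    ("Replication 2",      11.0, 5.0),
]

weights = [1.0 / se ** 2 for _, _, se in studies]   # inverse-variance weights
pooled = sum(w * d for (_, d, _), w in zip(studies, weights)) / sum(weights)
se_pooled = math.sqrt(1.0 / sum(weights))
ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)

print(f"Pooled difference: {pooled:.1f} percentage points "
      f"(95% CI {ci[0]:.1f} to {ci[1]:.1f})")

A more rigorous aggregation would also test for heterogeneity across studies and, where it is present, prefer a random-effects model; the fixed-effect version is shown only to make the idea of combining a family of experiments concrete.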

Keywords

Aggregation · Evaluation · Experimentation · Replication · Testing methods evaluation

Notes

Funding

University Grants Commission (UGC), BSR Start-Up Grant. Grant Number: F.30-114/2015 (BSR).

Copyright information

© The Society for Reliability Engineering, Quality and Operations Management (SREQOM), India and The Division of Operation and Maintenance, Lulea University of Technology, Sweden 2019

Authors and Affiliations

  1. Department of Computer Sciences, North Campus, University of Kashmir, Baramulla, India
