Research Progress in the Processing of Crowdsourced Test Reports

  • Conference paper

Abstract

In recent years, crowdsourced testing, which uses collective intelligence to solve complex software testing tasks, has gained widespread attention in academia and industry. However, because a large number of workers participate in crowdsourced testing tasks, the set of submitted test reports is often too large for developers to review. How to effectively process and integrate crowdsourced test reports has therefore become a significant challenge in the crowdsourced testing process. This paper focuses on the processing of crowdsourced test reports, surveys recent achievements in this field, and classifies, summarizes, and compares existing research along four directions: duplicate report detection, test report aggregation and classification, priority ranking, and report summarization. Finally, it explores possible research directions, opportunities, and challenges in processing crowdsourced test reports.
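The duplicate-report detection direction mentioned in the abstract is often approached with information-retrieval techniques that compare reports by the cosine similarity of their TF-IDF vectors. The sketch below is purely illustrative of that general idea, not any specific method surveyed in the paper; the sample reports, whitespace tokenization, and pairwise ranking are assumptions made for demonstration.

```python
# Illustrative sketch: duplicate-report detection via TF-IDF cosine similarity.
# The reports and tokenization below are hypothetical examples, not paper data.
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build a TF-IDF weight dict for each tokenized document in `docs`."""
    n = len(docs)
    # Document frequency: in how many reports does each term occur?
    df = Counter(term for doc in docs for term in set(doc))
    vecs = []
    for doc in docs:
        tf = Counter(doc)
        # Weight = term frequency * inverse document frequency.
        vecs.append({t: (c / len(doc)) * math.log(n / df[t]) for t, c in tf.items()})
    return vecs

def cosine(u, v):
    """Cosine similarity between two sparse weight dicts."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

reports = [
    "app crashes when tapping the login button",
    "crash on login button tap",
    "video playback stutters on slow networks",
]
vecs = tfidf_vectors([r.lower().split() for r in reports])

# Rank report pairs by similarity; in practice a threshold or top-k cut-off
# would flag the highest-scoring pairs as likely duplicates.
for i in range(len(reports)):
    for j in range(i + 1, len(reports)):
        print(i, j, round(cosine(vecs[i], vecs[j]), 3))
```

Real systems refine this baseline with stemming, stop-word removal, execution traces, or screenshots, but the core similarity computation takes this form.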



Acknowledgment

This work is funded by the National Key R&D Program of China (No. 2018YFB1403400).

Author information

Corresponding author

Correspondence to Mingang Chen.


Copyright information

© 2020 ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering

About this paper

Cite this paper

Wang, N., Cai, L., Chen, M., Zhang, C. (2020). Research Progress in the Processing of Crowdsourced Test Reports. In: Gao, H., Li, K., Yang, X., Yin, Y. (eds) Testbeds and Research Infrastructures for the Development of Networks and Communications. TridentCom 2019. Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, vol 309. Springer, Cham. https://doi.org/10.1007/978-3-030-43215-7_11

  • DOI: https://doi.org/10.1007/978-3-030-43215-7_11

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-43214-0

  • Online ISBN: 978-3-030-43215-7

  • eBook Packages: Computer Science (R0)
