Abstract
In recent years, crowdsourced testing, which uses collective intelligence to solve complex software testing tasks, has gained widespread attention in academia and industry. However, because a large number of workers participate in crowdsourced testing tasks, the set of submitted test reports is often too large for developers to review. Effectively processing and integrating crowdsourced test reports therefore remains a significant challenge in the crowdsourced testing process. This paper surveys the processing of crowdsourced test reports, reviews recent achievements in this field, and classifies, summarizes, and compares existing research along four directions: duplicate report detection, test report aggregation and classification, priority ranking, and report summarization. Finally, it explores possible research directions, opportunities, and challenges in the processing of crowdsourced test reports.
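As a concrete illustration of the first direction, duplicate report detection is commonly framed as a text-similarity problem over report descriptions. The sketch below is a minimal example of that idea, not a reproduction of any surveyed approach: it computes TF-IDF vectors with a smoothed IDF and compares reports by cosine similarity, using only the standard library. The report texts are invented for illustration.

```python
import math
from collections import Counter

def tokenize(text):
    # Lowercase word tokens; a real pipeline would add stemming and stop-word removal.
    return [w for w in text.lower().split() if w.isalpha()]

def tfidf_vectors(docs):
    """Return one sparse {term: weight} TF-IDF vector per document."""
    tokenized = [tokenize(d) for d in docs]
    n = len(tokenized)
    df = Counter()
    for toks in tokenized:
        df.update(set(toks))  # document frequency: count each term once per report
    vectors = []
    for toks in tokenized:
        tf = Counter(toks)
        vectors.append({
            # term frequency times smoothed IDF (always positive)
            t: (c / len(toks)) * (math.log((1 + n) / (1 + df[t])) + 1.0)
            for t, c in tf.items()
        })
    return vectors

def cosine(a, b):
    dot = sum(w * b[t] for t, w in a.items() if t in b)
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Invented crowdsourced report descriptions; reports 0 and 1 describe the same bug.
reports = [
    "app crashes when tapping the login button",
    "login button tap causes the app to crash",
    "screen flickers while scrolling the photo gallery",
]
vectors = tfidf_vectors(reports)
for i in range(len(reports)):
    for j in range(i + 1, len(reports)):
        # A pair would be flagged as likely duplicates when similarity exceeds a tuned threshold.
        print(f"report {i} vs report {j}: similarity = {cosine(vectors[i], vectors[j]):.2f}")
```

On this toy input, the duplicate pair (reports 0 and 1) scores higher than either pairing with the unrelated report 2. The surveyed work goes well beyond this baseline, e.g., by adding execution information, topic models, or screenshot features.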
Acknowledgment
This work is funded by the National Key R&D Program of China (No. 2018YFB1403400).
© 2020 ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering
Wang, N., Cai, L., Chen, M., Zhang, C. (2020). Research Progress in the Processing of Crowdsourced Test Reports. In: Gao, H., Li, K., Yang, X., Yin, Y. (eds) Testbeds and Research Infrastructures for the Development of Networks and Communications. TridentCom 2019. Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, vol 309. Springer, Cham. https://doi.org/10.1007/978-3-030-43215-7_11
Print ISBN: 978-3-030-43214-0
Online ISBN: 978-3-030-43215-7