
Clustering and Combinatorial Methods for Test Suite Prioritization of GUI and Web Applications

  • Conference paper
Information Technology - New Generations

Part of the book series: Advances in Intelligent Systems and Computing ((AISC,volume 738))

Abstract

This work introduces a novel test case prioritization method that combines clustering, dimensionality reduction techniques (DRTs), and combinatorial two-way prioritization for GUI and web applications. Clustering with interleaved cluster prioritization increases the diversity of the earliest selected test cases. The study applies four DRTs, four clustering algorithms, and three inter-cluster ranking methods to three GUI applications and one web application to determine the best combination of methods. We compare the proposed clustering and dimensionality reduction approaches to random and two-way inter-window prioritization techniques. The results indicate that the Principal Component Analysis (PCA) dimensionality reduction technique and the Mean Shift clustering method outperform the other techniques, and that there is no statistically significant difference between the three inter-cluster ranking criteria. Compared to two-way inter-window prioritization, the Mean Shift clustering algorithm combined with PCA or Fast Independent Component Analysis (FICA) generally produces faster rates of fault detection in our studies.
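The pipeline the abstract describes — reduce test-case feature vectors with a DRT, cluster the reduced vectors, then interleave clusters so the earliest tests are diverse — can be sketched with scikit-learn using the study's best-performing combination (PCA + Mean Shift). The feature matrix, the round-robin interleaving order, and all parameter values below are illustrative assumptions, not the paper's implementation; the three inter-cluster ranking criteria are not modeled here (clusters are visited in arbitrary order).

```python
from itertools import zip_longest

import numpy as np
from sklearn.cluster import MeanShift
from sklearn.decomposition import PCA


def prioritize(features, n_components=2):
    """Order test-case indices by interleaved cluster prioritization (a sketch)."""
    # Step 1: dimensionality reduction (PCA, the best performer in the study).
    reduced = PCA(n_components=n_components).fit_transform(features)
    # Step 2: clustering (Mean Shift, the best performer in the study).
    labels = MeanShift().fit_predict(reduced)

    # Group test-case indices by their cluster label.
    clusters = {}
    for idx, label in enumerate(labels):
        clusters.setdefault(label, []).append(idx)

    # Step 3: interleave — take one test from each cluster in turn (round-robin),
    # so the front of the suite draws from many different clusters.
    order = []
    for batch in zip_longest(*clusters.values()):
        order.extend(i for i in batch if i is not None)
    return order


# Toy data: 20 synthetic test-case feature vectors forming two loose groups.
rng = np.random.default_rng(0)
feats = np.vstack([rng.normal(0.0, 0.1, (10, 5)),
                   rng.normal(5.0, 0.1, (10, 5))])
print(prioritize(feats))  # a permutation of 0..19, diversified up front
```

In a real setting the feature vectors would encode test properties such as event or window coverage, and the resulting order would be evaluated with a fault-detection-rate metric such as APFD.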


Notes

  1. https://www.cs.umd.edu/users/atif/TerpOffice/.

  2. https://codeigniter.com/.

  3. https://www.smarty.net/.

  4. https://www.scipy.org/.


Author information

Correspondence to Dmitry Nurmuradov.


Copyright information

© 2018 Springer International Publishing AG, part of Springer Nature

About this paper


Cite this paper

Nurmuradov, D., Bryce, R., Piparia, S., Bryant, B. (2018). Clustering and Combinatorial Methods for Test Suite Prioritization of GUI and Web Applications. In: Latifi, S. (eds) Information Technology - New Generations. Advances in Intelligent Systems and Computing, vol 738. Springer, Cham. https://doi.org/10.1007/978-3-319-77028-4_60


  • DOI: https://doi.org/10.1007/978-3-319-77028-4_60

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-77027-7

  • Online ISBN: 978-3-319-77028-4

  • eBook Packages: Engineering (R0)
