Clustering and Combinatorial Methods for Test Suite Prioritization of GUI and Web Applications

  • Dmitry Nurmuradov
  • Renée Bryce
  • Shraddha Piparia
  • Barrett Bryant
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 738)

Abstract

This work introduces a novel test case prioritization method that combines clustering methods, dimensionality reduction techniques (DRTs), and combinatorial-based two-way prioritization for GUI and web applications. The use of clustering with interleaved cluster prioritization increases the diversity of the earliest selected test cases. The study applies four DRTs, four clustering algorithms, and three inter-cluster ranking methods to three GUI applications and one web application to determine the best combination of methods. We compare the proposed clustering and dimensionality reduction approaches to random and two-way inter-window prioritization techniques. The results indicate that the Principal Component Analysis (PCA) dimensionality reduction technique and the Mean Shift clustering method outperform the other techniques, while there is no statistically significant difference among the three inter-cluster ranking criteria. Compared to two-way inter-window prioritization, the Mean Shift clustering algorithm with PCA or Independent Component Analysis (FastICA) generally produces faster rates of fault detection in our studies.
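Below is a minimal sketch of the pipeline the abstract describes, using scikit-learn's PCA and MeanShift (the paper evaluates several other DRT and clustering combinations as well). It assumes test cases have already been encoded as numeric feature vectors (e.g., binary event or window coverage), and it interleaves selections across clusters so that the earliest selections are diverse; the within-cluster combinatorial two-way prioritization step is omitted for brevity. All function and variable names here are illustrative, not taken from the paper.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import MeanShift

def interleaved_prioritization(test_vectors, n_components=2):
    """Order test-case indices by interleaving across Mean Shift
    clusters found in a PCA-reduced feature space."""
    # Reduce the (tests x features) matrix to a low-dimensional space.
    reduced = PCA(n_components=n_components).fit_transform(test_vectors)
    # Mean Shift estimates the number of clusters on its own.
    labels = MeanShift().fit_predict(reduced)

    # Group test-case indices by cluster label.
    clusters = {}
    for idx, label in enumerate(labels):
        clusters.setdefault(label, []).append(idx)

    # Round-robin across clusters so the earliest picks are diverse.
    # (The paper additionally orders tests *within* each cluster by
    # two-way interaction coverage; that step is omitted here.)
    order, queues = [], [list(members) for members in clusters.values()]
    while any(queues):
        for q in queues:
            if q:
                order.append(q.pop(0))
    return order

# Tiny demo: 8 synthetic test cases over 5 binary coverage features.
rng = np.random.default_rng(0)
suite = rng.integers(0, 2, size=(8, 5)).astype(float)
print(interleaved_prioritization(suite))
```

Mean Shift is a natural fit here because, unlike k-means, it does not require the number of clusters to be fixed in advance, which matters when the structure of a user-session test suite is not known ahead of time.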

Keywords

Test suite prioritization · User session-based test · Cluster prioritization method · Inter-cluster ranking method · Graphical user interface · Dimensionality reduction approach

Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  • Dmitry Nurmuradov¹
  • Renée Bryce¹
  • Shraddha Piparia¹
  • Barrett Bryant¹

  1. University of North Texas, Denton, USA
