Maintainability of Automatic Acceptance Tests for Web Applications—A Case Study Comparing Two Approaches to Organizing Code of Test Cases

  • Conference paper
  • In: SOFSEM 2020: Theory and Practice of Computer Science (SOFSEM 2020)

Abstract

[Context] Agile software development calls for test automation, since it is critical for continuous development and delivery. However, automation is a challenging task, especially for tests of the user interface, which can be very expensive. [Problem] There are two contrasting approaches to structuring the code of test cases for web applications: linear scripting and the keyword-driven scripting technique employing the Page Object pattern. The goal of this research is to compare them with a focus on maintainability. [Method] We developed and maintained two automatic test suites implementing the same test cases for a mature open-source system using these two approaches. For each approach, we measured the size of the testing codebase and the number of lines of code that had to be modified to keep the test suites passing and valid through five releases of the system. [Results] We observed that the total number of physical lines of code was higher for the keyword-driven approach than for the linear-scripting one; however, the number of programmatic lines of code was smaller for the former. The number of lines of code that had to be modified to maintain the tests was lower for the keyword-driven test suite than for the linear-scripting one. We found the linear-scripting technique more difficult to maintain because its scripts consist only of low-level code directly interacting with a web browser, which makes it hard to understand the purpose and broader context of the interaction they implement. [Conclusions] We conclude that test suites created using the keyword-driven approach are easier to maintain and more suitable for most projects. However, the results show that linear scripting could be considered a less expensive alternative for small projects that are unlikely to be modified frequently in the future.



Author information

Correspondence to Mirosław Ochodek.


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Sadaj, A., Ochodek, M., Kopczyńska, S., Nawrocki, J. (2020). Maintainability of Automatic Acceptance Tests for Web Applications—A Case Study Comparing Two Approaches to Organizing Code of Test Cases. In: Chatzigeorgiou, A., et al. SOFSEM 2020: Theory and Practice of Computer Science. SOFSEM 2020. Lecture Notes in Computer Science, vol. 12011. Springer, Cham. https://doi.org/10.1007/978-3-030-38919-2_37

  • DOI: https://doi.org/10.1007/978-3-030-38919-2_37

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-38918-5

  • Online ISBN: 978-3-030-38919-2

  • eBook Packages: Computer Science, Computer Science (R0)
