Abstract
[Context] Agile software development calls for test automation, which is critical for continuous development and delivery. However, automation is a challenging task, especially for user interface tests, which can be very expensive. [Problem] There are two extreme approaches to structuring the code of automated tests for web applications: the linear scripting technique and the keyword-driven scripting technique employing the page object pattern. The goal of this research is to compare them with a focus on maintainability. [Method] We develop and maintain two automated test suites implementing the same test cases for a mature open-source system, one for each approach. For each approach, we measure the size of the testing codebase and the number of lines of code that need to be modified to keep the test suites passing and valid through five releases of the system. [Results] We observed that the total number of physical lines of code was higher for the keyword-driven approach than for the linear scripting one; however, the number of logical lines of code was smaller for the former. The number of lines of code that had to be modified to maintain the tests was lower for the keyword-driven test suite than for the linear-scripting one. We found the linear-scripting technique more difficult to maintain because its scripts consist only of low-level code directly interacting with a web browser, which makes it hard to understand the purpose and broader context of the interactions they implement. [Conclusions] We conclude that test suites created using the keyword-driven approach are easier to maintain and more suitable for most projects. However, the results suggest that the linear scripting approach could be considered a less expensive alternative for small projects that are unlikely to be modified frequently in the future.
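The two approaches contrasted in the abstract can be illustrated with a minimal sketch. The example below is not code from the study: the `FakeDriver` class stands in for a real Selenium WebDriver so the sketch is self-contained, and all names (`LoginPage`, the element IDs, the credentials) are illustrative assumptions. It shows the structural difference only: a linear script is a flat sequence of low-level browser calls, while the page object pattern hides locators behind intention-revealing keywords so that a locator change touches a single place.

```python
# Sketch contrasting linear scripting with the page object pattern.
# FakeDriver records interactions instead of driving a real browser;
# element IDs and credentials are illustrative, not taken from the study.

class FakeDriver:
    """Stand-in for a Selenium WebDriver that logs each interaction."""
    def __init__(self):
        self.log = []
    def find_element(self, element_id):
        self.log.append(f"find {element_id}")
        return self
    def send_keys(self, text):
        self.log.append(f"type {text}")
    def click(self):
        self.log.append("click")

# Linear scripting: the test case is a flat sequence of low-level calls,
# so the intent (logging in) must be inferred from the locators used.
def linear_login_test(driver):
    driver.find_element("username").send_keys("student")
    driver.find_element("password").send_keys("secret")
    driver.find_element("login-btn").click()

# Page object pattern: one class per page; locators live in one place and
# the test reads as a high-level keyword ("log_in").
class LoginPage:
    USERNAME, PASSWORD, SUBMIT = "username", "password", "login-btn"
    def __init__(self, driver):
        self.driver = driver
    def log_in(self, user, password):
        self.driver.find_element(self.USERNAME).send_keys(user)
        self.driver.find_element(self.PASSWORD).send_keys(password)
        self.driver.find_element(self.SUBMIT).click()

def keyword_driven_login_test(driver):
    LoginPage(driver).log_in("student", "secret")

d1, d2 = FakeDriver(), FakeDriver()
linear_login_test(d1)
keyword_driven_login_test(d2)
# Both suites perform identical browser interactions; only the code
# organization differs, which is what the maintainability comparison targets.
assert d1.log == d2.log
print("identical interactions:", len(d1.log), "steps")
```

If the "login-btn" locator changed in a new release, the linear script would need edits in every test that logs in, while the page object version would need a single-line change to `LoginPage.SUBMIT`, which is the mechanism behind the lower maintenance cost reported for the keyword-driven suite.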
© 2020 Springer Nature Switzerland AG
Sadaj, A., Ochodek, M., Kopczyńska, S., Nawrocki, J. (2020). Maintainability of Automatic Acceptance Tests for Web Applications—A Case Study Comparing Two Approaches to Organizing Code of Test Cases. In: Chatzigeorgiou, A., et al. SOFSEM 2020: Theory and Practice of Computer Science. SOFSEM 2020. Lecture Notes in Computer Science(), vol 12011. Springer, Cham. https://doi.org/10.1007/978-3-030-38919-2_37
Print ISBN: 978-3-030-38918-5
Online ISBN: 978-3-030-38919-2