Abstract
Test automation has been adopted by the majority of software and hardware producers because it speeds up the testing phase and makes it possible to design and run a large number of tests that would be hard to manage manually. When testing hardware instruments, different physical environments have to be created so that the instruments under test can be analyzed in different scenarios, involving disparate components and software configurations.
Creating a test case is a time-consuming activity, so test cases should be reused as much as possible. Unfortunately, when a physical test plant changes or a new one is created, understanding whether existing test cases can be executed over the updated or new test plant is extremely difficult.
In this paper we present our approach for checking the compliance of a test case w.r.t. a physical test plant characterized by its devices and their current configuration. The compliance check, which is fully automated and exploits a logic-based approach, answers the query “Can the test case A run over the physically configured test plant B?”
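To make the compliance query concrete, the following is a minimal, hypothetical sketch (not the authors' implementation, which is logic-based): a test case is modeled as a list of device requirements, a plant as a list of configured devices, and the check succeeds when every requirement is matched by a distinct plant device. All names and data structures here are illustrative assumptions.

```python
# Hypothetical sketch of a compliance check answering
# "Can test case A run on configured test plant B?"

def complies(test_case, plant):
    """Return True iff every requirement of the test case is satisfied
    by at least one distinct device of the plant.

    A requirement and a device share the same shape:
    (device_type, {setting: value, ...}).
    """
    available = list(plant)  # devices not yet assigned to a requirement
    for req_type, req_conf in test_case:
        for i, (dev_type, dev_conf) in enumerate(available):
            # A device matches if its type agrees and every required
            # setting is present with the required value.
            if dev_type == req_type and all(
                dev_conf.get(k) == v for k, v in req_conf.items()
            ):
                available.pop(i)  # each device serves one requirement
                break
        else:
            return False  # no matching device found for this requirement
    return True

# Illustrative plant with two configured devices
plant = [
    ("signal_generator", {"max_freq_mhz": 100}),
    ("oscilloscope", {"channels": 4}),
]
tc_ok = [("oscilloscope", {"channels": 4})]   # satisfiable
tc_ko = [("spectrum_analyzer", {})]           # no such device

print(complies(tc_ok, plant))  # True
print(complies(tc_ko, plant))  # False
```

In a logic-based setting such as the one the paper describes, the same check corresponds to posing the compliance query against a knowledge base of device facts and letting unification perform the matching.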
Notes
- 1.
- 2.
- 3. This manual translation was enough for the purpose of our work, but if we had to generalize and automate our approach, a logic-based language supporting a declarative representation of the structural aspects of object-oriented and frame-based languages, such as F-logic [15], might be taken into account in the modeling stage.
- 4.
- 5. Since TC is the actual test case and MCTP(TC) is its representation, in the sequel we prefer TC to MCTP(TC) to stress the fact that our experiments involved real, or fictional but realistic, TCs.
References
Ackermann, C., Cleaveland, R., Huang, S., Ray, A., Shelton, C., Latronico, E.: Automatic requirement extraction from test cases. In: Barringer, H., et al. (eds.) RV 2010. LNCS, vol. 6418, pp. 1–15. Springer, Heidelberg (2010). doi:10.1007/978-3-642-16612-9_1
Ancona, D., Drossopoulou, S., Mascardi, V.: Automatic generation of self-monitoring mass from multiparty global session types in Jason. In: Baldoni, M., Dennis, L., Mascardi, V., Vasconcelos, W. (eds.) DALT 2012. LNCS (LNAI), vol. 7784, pp. 76–95. Springer, Heidelberg (2013). doi:10.1007/978-3-642-37890-4_5
Asaithambi, S.P.R., Jarzabek, S.: Towards test case reuse: a study of redundancies in android platform test libraries. In: Favaro, J., Morisio, M. (eds.) ICSR 2013. LNCS, vol. 7925, pp. 49–64. Springer, Heidelberg (2013). doi:10.1007/978-3-642-38977-1_4
Briola, D., Mascardi, V., Ancona, D.: Distributed runtime verification of JADE and Jason multiagent systems with Prolog. In: Italian Conference on Computational Logic, CEUR Workshop Proceedings, vol. 1195, pp. 319–323 (2014)
Briola, D., Mascardi, V., Ancona, D.: Distributed runtime verification of JADE multiagent systems. In: Camacho, D., Braubach, L., Venticinque, S., Badica, C. (eds.) Intelligent Distributed Computing VIII. SCI, vol. 570, pp. 81–91. Springer, Cham (2015). doi:10.1007/978-3-319-10422-5_10
Cai, L., Tong, W., Liu, Z., Zhang, J.: Test case reuse based on ontology. In: 15th IEEE Pacific Rim International Symposium on Dependable Computing, 2009. PRDC 2009, pp. 103–108. IEEE (2009)
Dustin, E., Rashka, J., Paul, J.: Automated Software Testing: Introduction, Management, and Performance. Addison-Wesley, Boston (1999)
Fewster, M., Graham, D.: Software Test Automation. Addison-Wesley, Reading (1999)
Gorlick, M.M., Kesselman, C.F., Marotta, D.A., Parker, D.S.: Mockingbird: a logical methodology for testing. J. Log. Program. 8(1–2), 95–119 (1990)
Graham, D., Fewster, M.: Experiences of Test Automation: Case Studies of Software Test Automation. Addison-Wesley, Upper Saddle River (2012)
Hayes, L.G.: Automated Testing Handbook. Software Testing Inst, Dallas (2004)
Hoffman, D.: Cost benefits analysis of test automation. Report of Software Quality Methods, LLC (1999). https://www.agileconnection.com/sites/default/files/article/file/2014/Cost-Benefit
Jääskeläinen, A.: Towards model construction based on test cases and GUI extraction. In: Wotawa, F., Nica, M., Kushik, N. (eds.) ICTSS 2016. LNCS, vol. 9976, pp. 225–230. Springer, Cham (2016). doi:10.1007/978-3-319-47443-4_15
Jääskeläinen, A., Kervinen, A., Katara, M., Valmari, A., Virtanen, H.: Synthesizing test models from test cases. In: Chockler, H., Hu, A.J. (eds.) HVC 2008. LNCS, vol. 5394, pp. 179–193. Springer, Heidelberg (2009). doi:10.1007/978-3-642-01702-5_18
Kifer, M., Lausen, G., Wu, J.: Logical foundations of object-oriented and frame-based languages. J. ACM 42(4), 741–843 (1995)
Kumar, D., Mishra, K.K.: The impacts of test automation on software’s cost, quality and time to market. In: Proceedings of the 7th International Conference on Communication, Computing and Virtualization 2016. Procedia Computer Science, vol. 79, pp. 8–15 (2016)
Lucio, L., Pedro, L., Buchs, D.: A methodology and a framework for model-based testing. In: Guelfi, N. (ed.) RISE 2004. LNCS, vol. 3475, pp. 57–70. Springer, Heidelberg (2005). doi:10.1007/11423331_6
Mascardi, V., Ancona, D.: Attribute global types for dynamic checking of protocols in logic-based multiagent systems. TPLP 13(4-5-Online-Supplement) (2013)
Meudec, C.: Atgen: automatic test data generation using constraint logic programming and symbolic execution. Softw. Test. Verif. Reliab. 11(2), 81–96 (2001)
Mosley, D.J., Posey, B.A.: Just Enough Software Test Automation. Prentice Hall, Upper Saddle River (2002)
Pesch, H., Schnupp, P., Schaller, H., Spirk, A.P.: Test case generation using Prolog. In: 8th International Conference on Software Engineering, ICSE 1985, pp. 252–258. IEEE Computer Society Press (1985)
Philipps, J., Pretschner, A., Slotosch, O., Aiglstorfer, E., Kriebel, S., Scholl, K.: Model-based test case generation for smart cards. Electron. Notes Theoret. Comput. Sci. 80, 170–184 (2003)
Sterling, L., Shapiro, E.Y.: The Art of Prolog - Advanced Programming Techniques, 2nd edn. MIT Press, Cambridge (1994)
The Object Management Group: OMG Unified Modeling Language (OMG UML), Version 2.5. OMG Document Number formal/2015-03-01 (2015). http://www.omg.org/spec/UML/2.5/PDF/
The W3C OWL Working Group: OWL 2 Web Ontology Language Document Overview, 2nd Ed. W3C Recommendation, 11 December 2012. https://www.w3.org/TR/owl2-overview/
Gundecha, U.: Selenium Testing Tools Cookbook. Packt Publishing, Birmingham (2012)
Von Mayrhauser, A., Mraz, R., Walls, J., Ocken, P.: Domain based testing: increasing test case reuse. In: IEEE International Conference on Computer Design: VLSI in Computers and Processors. ICCD 1994, pp. 484–491. IEEE (1994)
Acknowledgements
We thank Vladimir Zaikin who contributed to the realization of some parts of this work, and the engineers from BlindedCompany’s test automation team involved in the project. We also thank the reviewers for their valuable comments.
Briola, D., Mascardi, V. (2017). Can My Test Case Run on Your Test Plant? A Logic-Based Compliance Check and Its Evaluation on Real Data. In: Costantini, S., Franconi, E., Van Woensel, W., Kontchakov, R., Sadri, F., Roman, D. (eds) Rules and Reasoning. RuleML+RR 2017. Lecture Notes in Computer Science(), vol 10364. Springer, Cham. https://doi.org/10.1007/978-3-319-61252-2_5