Can My Test Case Run on Your Test Plant? A Logic-Based Compliance Check and Its Evaluation on Real Data

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNPSE, volume 10364)

Abstract

Test automation is adopted by the majority of software and hardware producers, since it speeds up the testing phase and makes it possible to design and run large sets of tests that would be hard to manage manually. When testing hardware instruments, different physical environments have to be created so that the instruments under test can be analyzed in different scenarios, involving disparate components and software configurations.

Creating a test case is a time-consuming activity, so test cases should be reused as much as possible. Unfortunately, when a physical test plant changes or a new one is created, understanding whether existing test cases can be executed on the updated or new test plant is extremely difficult.

In this paper we present our approach for checking the compliance of a test case w.r.t. a physical test plant characterized by its devices and their current configuration. The compliance check, which is fully automated and exploits a logic-based approach, answers the query “Can test case A run on the configured physical test plant B?”
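
To give a flavor of what such a logic-based check can look like, the following is a minimal SWI-Prolog sketch; every predicate and fact name in it (device/2, configured/4, requires_device/2, requires_config/4, compliant/2) is an illustrative assumption of ours, not the model actually used in the paper:

    % Test plant model (assumed encoding): the devices installed on a
    % plant and their current configuration.
    device(plant_b, oscilloscope).
    device(plant_b, signal_generator).
    configured(plant_b, oscilloscope, bandwidth_mhz, 200).

    % Test case requirements (assumed encoding): the devices a test
    % case needs and the minimum configuration values it expects.
    requires_device(tc_a, oscilloscope).
    requires_device(tc_a, signal_generator).
    requires_config(tc_a, oscilloscope, bandwidth_mhz, 100).

    % compliant(TC, Plant) succeeds iff every device required by TC is
    % present on Plant and every configuration requirement of TC is met
    % by Plant's current configuration.
    compliant(TC, Plant) :-
        forall(requires_device(TC, Dev),
               device(Plant, Dev)),
        forall(requires_config(TC, Dev, Par, Min),
               ( configured(Plant, Dev, Par, Val), Val >= Min )).

With these facts loaded, the query ?- compliant(tc_a, plant_b). succeeds, while deleting the signal_generator fact makes it fail; the model described in the paper is, of course, richer than this two-relation sketch.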


Notes

  1. http://www.prweb.com/releases/2013/1/prweb10298185.htm.

  2. http://testingconferences.org/.

  3. This manual translation was sufficient for the purposes of our work, but if we were to generalize and automate our approach, a logic-based language supporting a declarative representation of the structural aspects of object-oriented and frame-based languages, such as F-logic [15], might be considered in the modeling stage.

  4. http://www.swi-prolog.org/packages/jpl.

  5. Since TC is the actual test case and MCTP(TC) is its representation, in the sequel we prefer TC to MCTP(TC) to stress the fact that our experiments involved real TCs, or fictional but realistic ones.

References

  1. Ackermann, C., Cleaveland, R., Huang, S., Ray, A., Shelton, C., Latronico, E.: Automatic requirement extraction from test cases. In: Barringer, H., et al. (eds.) RV 2010. LNCS, vol. 6418, pp. 1–15. Springer, Heidelberg (2010). doi:10.1007/978-3-642-16612-9_1

  2. Ancona, D., Drossopoulou, S., Mascardi, V.: Automatic generation of self-monitoring MASs from multiparty global session types in Jason. In: Baldoni, M., Dennis, L., Mascardi, V., Vasconcelos, W. (eds.) DALT 2012. LNCS (LNAI), vol. 7784, pp. 76–95. Springer, Heidelberg (2013). doi:10.1007/978-3-642-37890-4_5

  3. Asaithambi, S.P.R., Jarzabek, S.: Towards test case reuse: a study of redundancies in Android platform test libraries. In: Favaro, J., Morisio, M. (eds.) ICSR 2013. LNCS, vol. 7925, pp. 49–64. Springer, Heidelberg (2013). doi:10.1007/978-3-642-38977-1_4

  4. Briola, D., Mascardi, V., Ancona, D.: Distributed runtime verification of JADE and Jason multiagent systems with Prolog. In: Italian Conference on Computational Logic, CEUR Workshop Proceedings, vol. 1195, pp. 319–323 (2014)

  5. Briola, D., Mascardi, V., Ancona, D.: Distributed runtime verification of JADE multiagent systems. In: Camacho, D., Braubach, L., Venticinque, S., Badica, C. (eds.) Intelligent Distributed Computing VIII. SCI, vol. 570, pp. 81–91. Springer, Cham (2015). doi:10.1007/978-3-319-10422-5_10

  6. Cai, L., Tong, W., Liu, Z., Zhang, J.: Test case reuse based on ontology. In: 15th IEEE Pacific Rim International Symposium on Dependable Computing (PRDC 2009), pp. 103–108. IEEE (2009)

  7. Dustin, E., Rashka, J., Paul, J.: Automated Software Testing: Introduction, Management, and Performance. Addison-Wesley, Boston (1999)

  8. Fewster, M., Graham, D.: Software Test Automation. Addison-Wesley, Reading (1999)

  9. Gorlick, M.M., Kesselman, C.F., Marotta, D.A., Parker, D.S.: Mockingbird: a logical methodology for testing. J. Log. Program. 8(1–2), 95–119 (1990)

  10. Graham, D., Fewster, M.: Experiences of Test Automation: Case Studies of Software Test Automation. Addison-Wesley, Upper Saddle River (2012)

  11. Hayes, L.G.: Automated Testing Handbook. Software Testing Institute, Dallas (2004)

  12. Hoffman, D.: Cost benefits analysis of test automation. Report of Software Quality Methods, LLC (1999). https://www.agileconnection.com/sites/default/files/article/file/2014/Cost-Benefit

  13. Jääskeläinen, A.: Towards model construction based on test cases and GUI extraction. In: Wotawa, F., Nica, M., Kushik, N. (eds.) ICTSS 2016. LNCS, vol. 9976, pp. 225–230. Springer, Cham (2016). doi:10.1007/978-3-319-47443-4_15

  14. Jääskeläinen, A., Kervinen, A., Katara, M., Valmari, A., Virtanen, H.: Synthesizing test models from test cases. In: Chockler, H., Hu, A.J. (eds.) HVC 2008. LNCS, vol. 5394, pp. 179–193. Springer, Heidelberg (2009). doi:10.1007/978-3-642-01702-5_18

  15. Kifer, M., Lausen, G., Wu, J.: Logical foundations of object-oriented and frame-based languages. J. ACM 42(4), 741–843 (1995)

  16. Kumar, D., Mishra, K.K.: The impacts of test automation on software’s cost, quality and time to market. In: 7th International Conference on Communication, Computing and Virtualization 2016. Procedia Computer Science, vol. 79, pp. 8–15 (2016)

  17. Lucio, L., Pedro, L., Buchs, D.: A methodology and a framework for model-based testing. In: Guelfi, N. (ed.) RISE 2004. LNCS, vol. 3475, pp. 57–70. Springer, Heidelberg (2005). doi:10.1007/11423331_6

  18. Mascardi, V., Ancona, D.: Attribute global types for dynamic checking of protocols in logic-based multiagent systems. TPLP 13(4-5-Online-Supplement) (2013)

  19. Meudec, C.: ATGen: automatic test data generation using constraint logic programming and symbolic execution. Softw. Test. Verif. Reliab. 11(2), 81–96 (2001)

  20. Mosley, D.J., Posey, B.A.: Just Enough Software Test Automation. Prentice Hall, Upper Saddle River (2002)

  21. Pesch, H., Schnupp, P., Schaller, H., Spirk, A.P.: Test case generation using Prolog. In: 8th International Conference on Software Engineering, ICSE 1985, pp. 252–258. IEEE Computer Society Press (1985)

  22. Philipps, J., Pretschner, A., Slotosch, O., Aiglstorfer, E., Kriebel, S., Scholl, K.: Model-based test case generation for smart cards. Electron. Notes Theoret. Comput. Sci. 80, 170–184 (2003)

  23. Sterling, L., Shapiro, E.Y.: The Art of Prolog: Advanced Programming Techniques, 2nd edn. MIT Press, Cambridge (1994)

  24. The Object Management Group: OMG Unified Modeling Language™ (OMG UML), Version 2.5. OMG Document Number formal/2015-03-01 (2015). http://www.omg.org/spec/UML/2.5/PDF/

  25. The W3C OWL Working Group: OWL 2 Web Ontology Language Document Overview, 2nd edn. W3C Recommendation, 11 December 2012. https://www.w3.org/TR/owl2-overview/

  26. Gundecha, U.: Selenium Testing Tools Cookbook. Packt Publishing, Birmingham (2012)

  27. Von Mayrhauser, A., Mraz, R., Walls, J., Ocken, P.: Domain based testing: increasing test case reuse. In: IEEE International Conference on Computer Design: VLSI in Computers and Processors, ICCD 1994, pp. 484–491. IEEE (1994)

Acknowledgements

We thank Vladimir Zaikin, who contributed to the realization of some parts of this work, and the engineers from BlindedCompany’s test automation team involved in the project. We also thank the reviewers for their valuable comments.

Author information

Correspondence to Daniela Briola.


Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Briola, D., Mascardi, V. (2017). Can My Test Case Run on Your Test Plant? A Logic-Based Compliance Check and Its Evaluation on Real Data. In: Costantini, S., Franconi, E., Van Woensel, W., Kontchakov, R., Sadri, F., Roman, D. (eds) Rules and Reasoning. RuleML+RR 2017. Lecture Notes in Computer Science, vol. 10364. Springer, Cham. https://doi.org/10.1007/978-3-319-61252-2_5

  • DOI: https://doi.org/10.1007/978-3-319-61252-2_5

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-61251-5

  • Online ISBN: 978-3-319-61252-2
