Testing of Safety-Critical Systems – a Structural Approach to Test Case Design

  • Armin Beer
  • Bernhard Peischl
Conference paper


In the development of many safety-critical systems, test cases are still created on the basis of experience rather than systematic methods. As a consequence, many redundant test cases are created and many aspects remain untested. One of the most important questions in testing dependable systems is: which are the right test techniques for obtaining a test set that will detect critical errors in a complex system? In this paper, we provide an overview of the state of practice in designing test cases for dependable event-based systems regulated by the IEC 61508 and DO-178B standards. For example, the IEC 61508 standard stipulates model-based testing and systematic test-case design and generation techniques, such as transition-based testing and equivalence-class partitioning, for software verification. However, it often remains unclear in which situations these techniques should be applied and what information is needed to select the right technique to obtain the best set of test cases. We propose an approach that selects appropriate test techniques by considering issues such as specification techniques, failure taxonomies and quality risks. We illustrate our findings with a case study of an interlocking system for Siemens transportation systems.
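To make the technique named above concrete, the following is a minimal sketch of equivalence-class partitioning as the paper's cited standards recommend it. The validator, its valid range of 0–160 km/h, and all names are hypothetical illustrations, not taken from the paper or from the Siemens case study: the point is simply that one representative per equivalence class, plus the class boundaries, yields a small non-redundant test set.

```python
# Hedged sketch (hypothetical example, not from the paper): equivalence-class
# partitioning for an invented speed-command validator. The valid input range
# [0, 160] km/h is an assumption made for illustration only.

def accept_speed_command(speed_kmh: int) -> bool:
    """Hypothetical validator: accept speeds in the closed range [0, 160]."""
    return 0 <= speed_kmh <= 160

# One representative test case per equivalence class, plus the boundary
# values of the valid class. Each entry maps a descriptive name to
# (input, expected result).
test_cases = {
    "below_valid_range": (-5, False),   # invalid class: negative speed
    "lower_boundary":    (0, True),     # boundary of the valid class
    "mid_valid_range":   (80, True),    # representative of the valid class
    "upper_boundary":    (160, True),   # boundary of the valid class
    "above_valid_range": (200, False),  # invalid class: above the limit
}

for name, (speed, expected) in test_cases.items():
    assert accept_speed_command(speed) == expected, name
```

Five test cases cover every partition and both boundaries; adding further values from inside an already-covered class would be exactly the kind of redundancy the abstract warns against.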


Keywords: System Under Test · Method Guide · Message Sequence Chart · Safety Integrity Level · Automated Test Case Generation





The research work reported here was partially conducted within the Softnet Austria competence network and was funded by the Austrian Federal Ministry of Economics (bm:wa), the province of Styria, the Steirische Wirtschaftsförderungsgesellschaft mbH (SFG), and the city of Vienna within the scope of the Centre for Innovation and Technology (ZIT).



Copyright information

© Springer-Verlag London Limited 2011

Authors and Affiliations

  1. Armin Beer, Independent Consultant, Baden, Austria
  2. Bernhard Peischl, Institute for Software Technology, Technical University of Graz, Graz, Austria
