Safety Assessment for Safety-Critical Systems: A Review and Commentary of the Available Techniques

  • Andrea Carpignano
  • Maurizio Morisio
  • Emanuele Rambaudi
Conference paper


The all-pervasive nature of software questions our trust in many safety-critical software systems (SCSS): systems in which a software failure (or even, in some cases, correct software behaviour under unexpected circumstances) may pose a serious threat to humans, property, or the environment. Traditional hardware RAMS analysis has developed quantitative and qualitative methods to estimate the Reliability, Availability, Maintainability and Safety of systems. As far as safety is concerned, it is assessed in two main ways: 1. demonstrate that the probability of the top event is low enough, or 2. logically infer that a hazardous event is impossible or, at least, that all mitigation measures have been taken should it happen. Our aim with respect to safety-critical software systems has been to investigate which state-of-the-art methods, belonging to these two parallel “paths”, seem most effective in the assessment of safety. A historical analysis has been conducted on the basis of a series of past incidents and accidents in various fields of technology. The results are some considerations on the difficulty of historical analysis itself, and the indication that the most common cause of software failures is mistakes in requirements definition.
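The first assessment path mentioned above, showing that the probability of the top event is low enough, is classically carried out with a fault tree. The following is a minimal sketch, not taken from the paper: the gate structure, basic-event probabilities, and acceptance threshold are all illustrative assumptions, and the basic events are assumed independent.

```python
# Illustrative fault-tree sketch (not from the paper): estimate the
# top-event probability from independent basic-event probabilities.

def p_or(*probs):
    """OR gate: union of independent events, P = 1 - prod(1 - p_i)."""
    result = 1.0
    for p in probs:
        result *= (1.0 - p)
    return 1.0 - result

def p_and(*probs):
    """AND gate: intersection of independent events, P = prod(p_i)."""
    result = 1.0
    for p in probs:
        result *= p
    return result

# Hypothetical basic events: sensor fault, software defect, operator error.
p_sensor, p_software, p_operator = 1e-4, 1e-3, 1e-2

# Assumed tree: the top event fires if the software fails AND either the
# sensor or the operator also fails (i.e. a mitigation layer is lost).
p_top = p_and(p_software, p_or(p_sensor, p_operator))

# Illustrative acceptance criterion for "low enough".
assert p_top < 1e-4
print(f"P(top event) = {p_top:.3e}")
```

The second path, by contrast, is qualitative: one argues over the same tree that every minimal cut set is either impossible or mitigated, rather than computing its probability.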


Failure Mode · Software Reliability · Hardware Failure · Reliability Growth · Software Failure





Copyright information

© Springer-Verlag London 2004

Authors and Affiliations

  • Andrea Carpignano (1)
  • Maurizio Morisio (2)
  • Emanuele Rambaudi (1)
  1. Politecnico di Torino, DENER, Torino, Italy
  2. Politecnico di Torino, DAUIN, Italy
