
The Role of Measurement in Software Safety Assessment

  • Norman Fenton

Abstract

The primary objective of this paper is to highlight the important role of measurement in both developing and assessing safety-critical software systems. The ideal approach to measuring the safety of such systems is to record carefully the times of occurrence of safety-related failures during actual operation. Unfortunately, this is of little use to assessors who need to certify systems in advance of operation. Moreover, even this extremely onerous measurement obligation does not work when there are ultra-high reliability requirements; in such cases we are unlikely to observe sufficiently long failure-free operational periods (a simple calculation after the list below illustrates the scale of the problem). So when we have to assess the safety of either a system that is not yet operational, or a system with ultra-high reliability requirements, we have to try something else. In general, we try to make a ‘safety case’ that takes account of many different sources and types of evidence. This may include evidence from testing; evidence about the ‘quality’ of the internal structure of the software; or evidence about the ‘quality’ of the development process. Although many potential types of information could be based on rigorous measurement, more often than not safety assessments are primarily based on engineering judgement. After reviewing a range of measurement techniques that have recently been used in software safety assessment, we focus especially on two important areas:
  • Measures related to ‘defects’ and their resolution; even where developers and testers of safety-critical systems record this information carefully, there seem to be inevitable flaws in the data. Adherence to some simple principles, such as orthogonal fault classification, can significantly improve the quality of the data and consequently its potential use in safety assessment.

  • Rigorous, measurement-based approaches to combining different pieces of evidence; in particular, recent work on (a) the use of Bayesian Belief Networks and (b) the role of Multi-Criteria Decision Aid in dependability assessment (a minimal Bayesian network sketch follows below).
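
To make concrete the point above about ultra-high reliability requirements, the following minimal sketch (illustrative only, not taken from the paper; the figures are invented) shows the standard statistical argument: assuming failures arrive as a Poisson process with constant rate, it computes how many hours of failure-free operation would be needed before a given failure-rate bound could be claimed with a given confidence.

```python
import math

def failure_free_hours_needed(rate_bound: float, confidence: float) -> float:
    """Hours of failure-free operation needed so that observing zero failures
    is convincing evidence that the true failure rate is below `rate_bound`.

    Assumes a Poisson failure process with constant rate, so
    P(no failure in t hours | rate) = exp(-rate * t); we require this
    probability to be at most (1 - confidence) when rate == rate_bound.
    """
    alpha = 1.0 - confidence
    return math.log(1.0 / alpha) / rate_bound

# Illustrative figures only: a claimed bound of 1e-9 failures/hour at 99% confidence.
hours = failure_free_hours_needed(rate_bound=1e-9, confidence=0.99)
print(f"{hours:.3g} failure-free hours needed")              # ~4.6e+09 hours
print(f"about {hours / 8766:.0f} years of continuous operation")  # ~525,000 years
```

Even under these generous assumptions, the required failure-free period is far beyond anything observable in practice, which is why testing alone cannot support ultra-high reliability claims.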

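The second bullet above mentions Bayesian Belief Networks as a way to combine disparate evidence. The sketch below is a minimal, purely illustrative network (the node names and probability tables are invented assumptions, not the models used in the work the paper reviews): three binary nodes linking process quality, residual defects, and a test outcome, with the posterior belief about residual defects computed by direct enumeration once a test pass is observed.

```python
# Minimal Bayesian Belief Network sketch with three binary nodes.
# All priors and conditional probability tables are invented for illustration.

p_process_good = 0.7                       # P(development process is good)
p_defects_low = {True: 0.9, False: 0.3}    # P(few residual defects | process good?)
p_tests_pass = {True: 0.95, False: 0.4}    # P(tests pass | few residual defects?)

def posterior_defects_low_given_tests_pass() -> float:
    """P(few residual defects | tests passed), by enumerating the joint
    distribution over the unobserved nodes and conditioning on the evidence."""
    joint = {}
    for process_good in (True, False):
        p_pg = p_process_good if process_good else 1.0 - p_process_good
        for defects_low in (True, False):
            p_dl = (p_defects_low[process_good] if defects_low
                    else 1.0 - p_defects_low[process_good])
            p_evidence = p_tests_pass[defects_low]   # evidence: tests passed
            joint[(process_good, defects_low)] = p_pg * p_dl * p_evidence
    total = sum(joint.values())
    favourable = sum(v for (_, defects_low), v in joint.items() if defects_low)
    return favourable / total

print(f"P(few residual defects | tests passed) = "
      f"{posterior_defects_low_given_tests_pass():.3f}")
```

In a real safety case the network would be much larger and the probability tables would themselves be elicited or measured, but the mechanics of propagating evidence are the same.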
Keywords

Analytic Hierarchy Process · Software Quality · Preference Structure · Software Measurement · Bayesian Belief Network

Copyright information

© Springer-Verlag London Limited 1997

Authors and Affiliations

  • Norman Fenton
  1. Centre for Software Reliability, City University, UK
