
Ein Bündel heuristischer Methoden zur kostenoptimalen Bestimmung und Sicherung von Software-Zuverlässigkeit

  • F. Belli
Conference paper
Part of the Informatik-Fachberichte book series (INFORMATIK, volume 83)

Zusammenfassung

Starting from the methods of conventional quality assurance and the existing techniques of software engineering, this paper presents an approach that determines and assures the reliability of software products while taking economic aspects into account. The components of the method are: verification of the software product, determination of its reliability (measurement), and optimization of this verification and measurement procedure with respect to its costs and benefits.

Abstract

A collection of methods established in quality control and software engineering is presented for determining and assuring software quality and reliability. The elements of the approach are (i) product verification, (ii) reliability measurement, and (iii) cost optimization.
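The paper itself is not reproduced on this page, but the interplay of the abstract's elements (ii) and (iii) can be illustrated with a minimal sketch. Everything below is an assumption for illustration, not the author's method: reliability is "measured" naively as the mean observed inter-failure time, and testing continues only while the expected cost of exposing the next fault stays below the assumed benefit of removing it. The function names, cost figures, and failure data are all hypothetical.

```python
# Illustrative sketch only (NOT the method of the paper): a naive reliability
# measure combined with a cost-benefit stopping rule for the test procedure.

def mean_time_to_failure(interfailure_times):
    """Naive reliability measure: mean of the observed inter-failure times (hours)."""
    return sum(interfailure_times) / len(interfailure_times)

def should_continue_testing(interfailure_times, cost_per_hour, benefit_per_fault):
    """Continue testing while the expected cost of finding the next fault
    (roughly one MTTF of test time, priced at cost_per_hour) is below
    the assumed benefit of removing that fault."""
    mttf = mean_time_to_failure(interfailure_times)
    expected_cost_next_fault = mttf * cost_per_hour
    return expected_cost_next_fault < benefit_per_fault

# Hypothetical data: hours between successive observed failures during test.
times = [2.0, 5.0, 9.0, 21.0]
print(mean_time_to_failure(times))  # 9.25
print(should_continue_testing(times, cost_per_hour=100.0, benefit_per_fault=1000.0))  # True
```

As the product stabilizes, the inter-failure times grow, the expected cost of the next fault rises, and the rule eventually says to stop — the cost-optimal termination of the test-and-measure procedure that the abstract alludes to.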


References

  1. /AVIZ/ Avizienis, A., “The Four-Universe Information System Model for the Study of Fault-Tolerance”, Digest of Papers 12th Internat’l. Fault-Tolerant Comp. Symposium, Internat’l. Comp. Press (IEEE), New York (1982), pp. 6–13
  2. /BHAR/ Bhargard, B., “Software Reliability in Real-Time Systems”, Proc. of IFIP Internat’l. Comp. Conf. (1981), pp. 297–315
  3. /BEL1/ Belli, F., “Kritik an Entwurfsverfahren im Hinblick auf Qualitätsanforderungen”, Tagungsband German Chapter of ACM “Software-Engineering: Entwurf und Spezifikation”, Teubner Verlag, Stuttgart (1981), pp. 354–356
  4. /BEL2/ Belli, F., Großpietsch, K.-E., “A Strategy for the Development of Communication Fault-Tolerant Systems”, Digest of Papers 13th Internat’l. Fault-Tolerant Comp. Symposium, Internat’l. Comp. Press (IEEE), New York (1983), pp. 66–73
  5. /BENS/ Benson, J.P., et al., “A Software Quality Assurance Experiment”, Proc. of ACM SIGMETRICS/SIGSOFT Workshop on Software Quality and Assurance, San Diego (1978), pp. 87–91
  6. /BOEH/ Boehm, B.W., et al., “Characteristics of Software Quality”, North Holland (1978)
  7. /BOYE/ Boyer, R.S., Strother-Moore, J. (eds.), “The Correctness Problem in Computer Science”, Academic Press, London, New York etc. (1981)
  8. /BOUL/ Boulton, P.I.P., Kittler, M.A.R., “Estimating Program Reliability”, The Computer Journal, 22/4 (1979), pp. 328–331
  9. /BUDD/ Budd, T., DeMillo, R., Lipton, R., Sayward, F., “The Design of a Prototype Mutation System for Program Testing”, Proc. ACM National Comp. Conf. (1978)
  10. /BURF/ Burford, M.A.J., Belli, F., “CADAS: A Tool for Rapid Prototyping and Testing of Embedded Software”, Proc. of IEEE/ACM SIGSOFT Symposium on Application and Assessment of Automated Tools for Software Development, IEEE Computer Society Press (1983)
  11. /CHAN/ Chandrasekaran, B., Radicchi, S. (eds.), “Computer Program Testing”, North Holland, Amsterdam etc. (1981)
  12. /COMP/ IEEE Computer, Special Issue on Software Testing (April 1978)
  13. /COOP/ Cooper, J.D., Fisher, M.J. (eds.), “Software Quality Management”, Petrocelli Books (1978)
  14. /DAC1/ DACS/Rome Air Development Center, “The DACS Glossary - A Bibliography of Software Engineering Terms” (1979)
  15. /DAC2/ DACS/Rome Air Development Center, “Quantitative Software Models” (1979)
  16. /DEUT/ Deutsch, M.S., “Software Verification and Validation - Realistic Project Approaches”, Prentice Hall, Englewood Cliffs, NJ (1982)
  17. /DIJ1/ Dijkstra, E.W., “Notes on Structured Programming”, in “Structured Programming”, Dahl, O.J., et al. (eds.), Academic Press, London etc. (1972)
  18. /DIJ2/ Dijkstra, E.W., “Why Correctness Must be a Mathematical Concern”, in /BOYE/
  19. /DIN1/ DIN 40041, 40042, “Zuverlässigkeit...; Begriffe/Kenngrößen”
  20. /DIN2/ DIN 55350, “Begriffe der Qualitätssicherung”
  21. /EHR1/ Ehrenberger, W.D., “Systematische und statistische Verfahren zur Gewinnung von Zuverlässigkeitskenngrößen für Programme”, in /VDI/, pp. 71–77
  22. /EHR2/ Ehrenberger, W.D., “Aspects of Development and Verification of Reliable Process Computer Software”, Proc. IFAC Comp. Appl. to Proc. Control (1981), pp. 35–48
  23. /FAGA/ Fagan, M.E., “Design and Code Inspections to Reduce Errors in Program Development”, IBM Systems Journal 15 (1976), pp. 182–211
  24. /FAIR/ Fairley, R.E., “Tutorial: Static Analysis and Dynamic Testing of Computer Software”, IEEE Computer (April 1978), pp. 350–357
  25. /GILB/ Gilb, T., “Software Metrics”, Winthrop (1977)
  26. /GLAS/ Glass, R.L., “Software Reliability Guidebook”, Prentice Hall (1979)
  27. /GOOD/ Goodenough, J.B., Gerhart, S.L., “Toward a Theory of Test Data Selection”, IEEE Trans. on Software Engineering (June 1975), pp. 156–173
  28. /GRIE/ Gries, D., “An Illustration of Current Ideas on the Derivation of Correctness Proofs and Correct Programs”, IEEE Trans. on Software Engineering (Dec. 1976), pp. 238–244
  29. /HALS/ Halstead, M., “Elements of Software Science”, Elsevier (1977)
  30. /HAML/ Hamlet, R.G., “Testing Programs with the Aid of a Compiler”, IEEE Trans. on Software Engineering (July 1977), pp. 279–290
  31. /HETZ/ Hetzel, W.C., “Program Test Methods”, Prentice Hall (1973)
  32. /HOWD/ Howden, W.E., “Theoretical and Empirical Studies of Program Testing”, IEEE Trans. on Software Engineering (July 1978), pp. 293–298
  33. /IEE1/ IEEE Trans. on Software Engineering, Special Issue on Software Testing (Sept. 1976)
  34. /IEE2/ IEEE Trans. on Software Engineering, Special Collection on Program Testing (May 1980)
  35. /INF1/ INFOTECH, “State of the Art Report: Software Reliability” (1977)
  36. /INF2/ INFOTECH, “State of the Art Report: Software Testing” (1979)
  37. /IRES/ Ireson, W.G. (ed.), “Reliability Handbook”, McGraw-Hill (1966)
  38. /KING/ King, J.C., “Symbolic Execution and Program Testing”, CACM (July 1976), pp. 385–394
  39. /KNUT/ Knuth, D.E., “The Art of Computer Programming”, planned: Vols. I to VII, Addison-Wesley Publishing Co., Reading, Mass. (1973 etc.)
  40. /LITT/ Littlewood, B., “How to Measure Software Reliability and How Not to”, IEEE Trans. on Reliability (June 1979), pp. 103–110
  41. /MICR/ Microelectronics and Reliability, Vol. 19 (1979), Special Issue on Reliability, Pergamon Press
  42. /MIL1/ MIL-STD-1679 (U.S. DoD/Navy), “Military Standard System Software Development” (Dec. 1978)
  43. /MIL2/ MIL-STD-721C (U.S. DoD), “Definitions and Terms for Reliability and Maintainability” (12 June 1981)
  44. /MLL1/ Miller, E., “Testing and Test Documentation (Workshop Report)”, IEEE Computer (March 1979)
  45. /MLL2/ Miller, E.F., et al., “Automated Generation of Test Case Data Sets”, Proc. of the Internat’l. Conference on Reliable Software (1975), pp. 52–58
  46. /MLLS/ Mills, H.D., “On the Statistical Validation of Computer Programs”, FSC-72-6015, IBM Federal Systems Dev., Gaithersburg, Md. (1972)
  47. /MYE1/ Myers, G.J., “Software Reliability, Principles & Practices”, J. Wiley & Sons, New York (1976)
  48. /MYE2/ Myers, G.J., “Composite/Structured Design”, Van Nostrand Reinhold, New York (1978)
  49. /MYE3/ Myers, G.J., “The Art of Software Testing”, Wiley-Interscience, New York (1979)
  50. /PETE/ Peterson, R.J., “TESTER/1: An Abstract Model for the Automatic Synthesis of Program Test Case Specifications”, in Proc. of Symposium on Computer Software Engineering, Polytechnic Inst. of New York (1976)
  51. /PDV/ PDV-Bericht Nr. 179, “Testen und Verifizieren von Prozeßrechner-Software”, KfK-PDV (Kernforschungszentrum Karlsruhe) (Dec. 1979)
  52. /PROC/ Proc. of the Annual Symposia on Fault-Tolerant Computing, held since 1970 by IEEE & IFIP
  53. /RAM1/ Ramamoorthy, C.V., “Testing Large Software with Automated Software Evaluation Systems”, IEEE Trans. on Software Engineering (March 1975), pp. 46–58
  54. /RAM2/ Ramamoorthy, C.V., “Techniques in Software Quality Assurance”, Tagungsband German Chapter of ACM “Software-Qualitätssicherung”, Teubner Verlag, Stuttgart (March 1982), pp. 11–34
  55. /STRA/ Strachey, C., “Towards a Formal Semantics”, in “Formal Language Description Languages for Computer Programming”, Steel, T.B. (ed.), North Holland (1966), pp. 198–220
  56. /VDI/ VDI-Bericht Nr. 307, “Zuverlässigkeit und Qualität in der Luft- und Raumfahrt” (1978)
  57. /VOGE/ Voges, U., et al., “SADAT - An Automated Testing Tool”, IEEE Trans. on Software Engineering (May 1980), pp. 286–290
  58. /WEGN/ Wegner, P. (ed.), “Research Directions in Software Technology”, esp. chap. “Program Verification”, The MIT Press, Cambridge, Mass. etc. (1979)
  59. /WEIG/ Weigel, P., “Qualitäts-Sicherung von EDV-Software”, in “Handbuch der Qualitäts-Sicherung”, Masing, W. (ed.), Carl Hanser Verlag (1980)

Copyright information

© Springer-Verlag Berlin Heidelberg 1984

Authors and Affiliations

  • F. Belli
  1. Hochschule Bremerhaven, Bremerhaven, Germany
