Klassifikation von Software-Testmethoden

  • Conference paper
Testen, Analysieren und Verifizieren von Software

Part of the book series: Informatik aktuell (INFORMAT)

Abstract

The dynamic functional test, in which the required functional behavior of the test object is checked, is a fundamental method for supporting testing that is application-oriented from the users' point of view [Howden 87, Grimm 88]. The dynamic aspect, i.e. the actual execution of the test object, also ensures that the real, execution-time behavior of the item under test is examined.
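
As a minimal, hedged illustration (not taken from the paper itself): the sketch below executes a small test object and compares its observed behavior with the behavior required of it, which is the essence of a dynamic functional test. The function leap_year and its test cases are hypothetical and chosen only for illustration.

    # Minimal sketch of a dynamic functional (black-box) test:
    # the test object is actually executed and its observed behavior is
    # compared with the required functional behavior.
    # The function under test and the test cases are hypothetical examples.

    def leap_year(year: int) -> bool:
        """Test object: required behavior is the Gregorian leap-year rule."""
        return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

    def run_functional_tests() -> None:
        # Each test case pairs an input with the output required by the specification.
        test_cases = [
            (1992, True),   # divisible by 4
            (1900, False),  # century year, not divisible by 400
            (2000, True),   # divisible by 400
            (1991, False),  # not divisible by 4
        ]
        for argument, expected in test_cases:
            actual = leap_year(argument)  # dynamic aspect: the test object is executed
            status = "PASS" if actual == expected else "FAIL"
            print(f"{status}: leap_year({argument}) = {actual}, expected {expected}")

    if __name__ == "__main__":
        run_functional_tests()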

References

  1. Bernot, G.; Gaudel, M.C.; Marre, B.: Software Testing based on Formal Specifications: a Theory and a Tool. Rapport de Recherche No. 410, Université de Paris-Sud, Centre d'Orsay, Laboratoire de Recherche en Informatique, Bât. 490, 91405 Orsay, France, June 1990.

  2. Chow, T.S.: Testing Software Design Modeled by Finite-State Machines. IEEE Transactions on Software Engineering, Vol. 4, No. 3, 1978, pp. 178–187.

  3. DeMillo, R.A.; Lipton, R.J.; Sayward, F.G.: Program Mutation: A New Approach to Program Testing. In Infotech Int. Ltd.: Infotech State of the Art Report on Software Testing, Vol. 2, Infotech International Limited, Maidenhead, England, 1979, pp. 107–127.

  4. Duran, J.W.; Ntafos, S.C.: A Report on Random Testing. Proceedings of the 5th International Conference on Software Engineering, San Diego, March 1981, pp. 179–183.

  5. Elmendorf, W.R.: Cause-Effect Graphs in Functional Testing. TR-00.2487, IBM Systems Development Div., Poughkeepsie, New York, 1973.

  6. Gmeiner, L.: Zur Testfallgenerierung in der Entwurfsphase. Bericht KfK 3538, Kernforschungszentrum Karlsruhe, 1983.

  7. Goodenough, J.B.; Gerhart, S.L.: Toward a Theory of Test Data Selection. IEEE Transactions on Software Engineering, Vol. 1, No. 2, 1975, pp. 156–173.

  8. Gourlay, J.S.: Theory of Testing Computer Programs. Ph.D. Dissertation, University of Michigan, 1981.

  9. Grimm, K.: Methoden und Verfahren zum systematischen Testen von Software. Automatisierungstechnische Praxis, Vol. 30, No. 6, 1988, pp. 271–280.

  10. Harrold, M.J.; Soffa, M.L.: Interprocedural Data Flow Testing. In [Kemmerer 89], pp. 158–167.

  11. Hecht, M.S.: Flow Analysis of Computer Programs. North-Holland, New York, 1977.

  12. Howden, W.E.: Functional Program Testing and Analysis. McGraw-Hill Book Company, New York, 1987.

  13. Jeng, B.; Weyuker, E.J.: Some Observations on Partition Testing. In [Kemmerer 89], pp. 38–47.

  14. Kemmerer, R.A. (ed.): Proceedings of the ACM SIGSOFT 89 - 3rd Symposium on Software Testing, Analysis, and Verification (TAV3), Key West, Florida, Dec. 13–15, 1989.

  15. Miller, E.: Program Testing: Art Meets Theory. IEEE Computer, July 1977, pp. 42–51.

  16. Myers, G.J.: Methodisches Testen von Programmen. 2nd edition, Oldenbourg Verlag, München/Wien, 1987.

  17. Riedemann, E.H.: PROST - Ein Programmsystem zum Software-Testen. GI-Softwaretechnik-Trends, No. 6-1, 1986, pp. 64–68.

  18. Sneed, H.M.: Software-Testen - State of the Art. In: Software-Entwicklungs-Systeme und -Werkzeuge, 2. Kolloquium, Technische Akademie Esslingen, 8.-10.9.1987. Verlag Technische Akademie Esslingen, 1987, pp. 10.3-1 to 10.3-6.

  19. Spillner, A.; Herrmann, J.; Franck, R.: Methods and Tools for Integration Testing of Large Softwaresystems. 2nd European Conference on Quality Assurance 1990, EOQC Software Committee, 30.5.-1.6.1990, Oslo, Norway, Norwegian Computer Society, 1990, approx. pp. 540–554.

  20. Wilson, G.; Osterweil, L.: OMEGA - A Data Flow Analysis Tool for the C Programming Language. Proceedings of COMPSAC 1982 - 6th International Computer Software & Applications Conference, 1982, pp. 9–18.

Copyright information

© 1992 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Herrmann, J., Grimm, K. (1992). Klassifikation von Software-Testmethoden. In: Liggesmeyer, P., Sneed, H.M., Spillner, A. (eds) Testen, Analysieren und Verifizieren von Software. Informatik aktuell. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-77747-9_2

  • DOI: https://doi.org/10.1007/978-3-642-77747-9_2

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-55860-6

  • Online ISBN: 978-3-642-77747-9

