
Safety Lifecycle for Developing Safety Critical Artificial Neural Networks

  • Conference paper
Computer Safety, Reliability, and Security (SAFECOMP 2003)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 2788)

Included in the conference series: SAFECOMP

Abstract

Artificial neural networks are employed in many industrial and safety-related domains, such as medicine and defence. Many techniques aim to improve the performance of neural networks for safety-critical systems; however, there is a complete absence of analytical certification methods for neural network paradigms. Consequently, their role in safety-critical applications, if any, is typically restricted to advisory systems. It is therefore desirable to enable neural networks to take on highly dependable roles. Existing safety lifecycles for neural networks do not provide safety processes suited to constructing analytical arguments. This paper presents a safety lifecycle for artificial neural networks. The lifecycle focuses on managing the behaviour represented by neural networks and on providing acceptable forms of safety assurance. A suitable neural network model, based upon representing knowledge in symbolic form, is outlined. Requirements for safety processes similar to those used for conventional systems are also established. Although developed specifically for decision theory applications, the safety lifecycle could apply to a wide range of application domains.
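To make the abstract's central idea concrete, representing a network's knowledge in symbolic form so that it can be analysed, the sketch below shows one established technique from the hybrid neuro-symbolic literature: compiling a propositional rule into the weights and bias of a single sigmoid unit, KBANN-style. This is an illustration under stated assumptions, not the paper's actual model; the rule_to_unit helper, the omega parameter, and the example rule are invented here.

```python
# Illustrative sketch only: compile a propositional rule into the
# weights and bias of one sigmoid unit (KBANN-style knowledge
# insertion). NOT the model from the paper; rule_to_unit, omega,
# and the example rule are hypothetical.
import math

def rule_to_unit(antecedents, omega=4.0):
    """Compile "IF a1 AND a2 AND ... THEN c" into one sigmoid unit.

    antecedents: dict mapping input name -> True (positive literal)
                 or False (negated literal).
    omega:       weight magnitude; larger values approximate the
                 crisp logical rule more sharply.
    """
    weights = {name: (omega if positive else -omega)
               for name, positive in antecedents.items()}
    # Place the threshold just below the count of positive literals,
    # so the unit fires only when every literal is satisfied.
    n_positive = sum(1 for p in antecedents.values() if p)
    bias = -omega * (n_positive - 0.5)
    return weights, bias

def activate(weights, bias, inputs):
    """Sigmoid unit: output is high iff the encoded rule holds."""
    z = bias + sum(w * inputs[name] for name, w in weights.items())
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical rule: IF symptom_a AND NOT symptom_b THEN raise_alarm
weights, bias = rule_to_unit({"symptom_a": True, "symptom_b": False})
print(activate(weights, bias, {"symptom_a": 1, "symptom_b": 0}))  # ~0.88 (fires)
print(activate(weights, bias, {"symptom_a": 1, "symptom_b": 1}))  # ~0.12 (suppressed)
```

Because the initial weights literally encode a readable rule, the network's behaviour before and after training can be inspected symbolically, which is the kind of analytical safety argument the lifecycle is intended to support.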




Copyright information

© 2003 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Kurd, Z., Kelly, T. (2003). Safety Lifecycle for Developing Safety Critical Artificial Neural Networks. In: Anderson, S., Felici, M., Littlewood, B. (eds) Computer Safety, Reliability, and Security. SAFECOMP 2003. Lecture Notes in Computer Science, vol 2788. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-39878-3_7


  • DOI: https://doi.org/10.1007/978-3-540-39878-3_7

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-20126-7

  • Online ISBN: 978-3-540-39878-3

  • eBook Packages: Springer Book Archive
