Abstract
Artificial neural networks are employed in many industrial sectors, including medicine and defence. Although many techniques aim to improve the performance of neural networks for safety-critical systems, analytical certification methods for neural network paradigms are entirely absent. Consequently, their role in safety-critical applications, if any, is typically restricted to advisory systems. It is therefore desirable to enable neural networks for highly-dependable roles. Existing safety lifecycles for neural networks do not focus on suitable safety processes for analytical arguments. This paper presents a safety lifecycle for artificial neural networks that focuses on managing the behaviour represented by the networks and on providing acceptable forms of safety assurance. A suitable neural network model, based upon representing knowledge in symbolic form, is outlined, and requirements for safety processes similar to those used for conventional systems are established. Although developed specifically for decision theory applications, the safety lifecycle could apply to a wide range of application domains.
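The abstract's central idea, a network model whose learned behaviour can be represented in symbolic form so that analytical safety arguments become possible, can be illustrated with a toy sketch. The weights, bias, and rule format below are illustrative assumptions, not taken from the paper: a single binary threshold unit stands in for a trained network, and exhaustive enumeration of its inputs recovers an equivalent set of symbolic IF-THEN rules that could, in principle, be inspected analytically.

```python
from itertools import product

# Hypothetical trained unit: these weights and bias are assumed values
# chosen for illustration, not parameters from the paper.
WEIGHTS = [0.6, 0.6, -0.4]
BIAS = -0.5

def neuron(inputs):
    """Binary threshold unit: fires when the weighted sum plus bias is positive."""
    total = sum(w * x for w, x in zip(WEIGHTS, inputs)) + BIAS
    return int(total > 0)

def extract_rules():
    """Miniature decompositional 'rule extraction': enumerate every binary
    input vector and keep those that make the unit fire, yielding an
    equivalent symbolic IF-THEN description of the unit's behaviour."""
    rules = []
    for inputs in product([0, 1], repeat=len(WEIGHTS)):
        if neuron(inputs) == 1:
            conditions = " AND ".join(f"x{i}={v}" for i, v in enumerate(inputs))
            rules.append(f"IF {conditions} THEN fire")
    return rules

for rule in extract_rules():
    print(rule)
```

Exhaustive enumeration is only feasible for tiny binary networks; the point of the sketch is that once behaviour is captured symbolically, conventional hazard analysis and safety-argument techniques have something concrete to examine.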
© 2003 Springer-Verlag Berlin Heidelberg
Cite this paper
Kurd, Z., Kelly, T. (2003). Safety Lifecycle for Developing Safety Critical Artificial Neural Networks. In: Anderson, S., Felici, M., Littlewood, B. (eds) Computer Safety, Reliability, and Security. SAFECOMP 2003. Lecture Notes in Computer Science, vol 2788. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-39878-3_7
Print ISBN: 978-3-540-20126-7
Online ISBN: 978-3-540-39878-3