
Multilayer Perceptrons Which Are Tolerant to Multiple Faults and Learnings to Realize Them

  • Tadayoshi Horita
  • Itsuo Takanami
  • Kazuhiro Nishimura
Chapter
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8360)

Abstract

We discuss the fault tolerance of multilayer perceptrons whose input and output learning examples are patterns consisting of 0s and 1s. The type of fault dealt with is a multiple neuron and/or weight fault, where the neurons are in the hidden layer and the weights are on the links between the hidden and output layers. We theoretically analyze the condition under which a multilayer perceptron is tolerant to multiple neuron and weight faults. Based on the analysis, we propose two value injection methods, denoted VIM-WN and VIM-N, which make multilayer perceptrons tolerant to all multiple neuron and/or weight faults whose values lie in a multi-dimensional interval. In VIM-WN, the extreme values specified by the fault ranges are injected simultaneously into the outputs of selected neurons and into selected weights of the links during the learning phase. In VIM-N, the extreme values specified by the fault ranges are likewise injected, but only into the outputs of selected neurons. First, we present an algorithm based on VIM-WN and prove that a multilayer perceptron which has successfully finished learning by VIM-WN is tolerant to all multiple neuron-and-weight faults whose values lie in the interval, under the condition that the multiplicity of the fault is within a certain number determined by the faulty neurons and weights. Next, we present the corresponding algorithm and proof for VIM-N. By simulation, we confirm the analytical results for VIM-WN and VIM-N. We also examine by simulation the degrees of tolerance to multiple neuron-and-weight faults for VIM-N and VIM-W, where VIM-W is the method proposed in [1], and show that VIM-N and VIM-W, as well as VIM-WN, are almost equally effective in coping with multiple neuron-and-weight faults. In addition, we report data on the learning time and the success rate of learning.
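The value-injection idea described in the abstract can be sketched in code. The following is an illustrative reconstruction in the spirit of VIM-N only, not the authors' exact algorithm: during each training step, every hidden neuron in turn is clamped ("stuck") to each extreme of an assumed one-dimensional fault interval, so the output layer learns weights that tolerate any single stuck neuron. All class names, hyper-parameters, and the XOR task are hypothetical choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class FaultInjectionMLP:
    """Two-layer perceptron trained with hidden-neuron value injection
    (a sketch of the VIM-N idea; names and defaults are illustrative)."""

    def __init__(self, n_in, n_hid, n_out, fault_lo=0.0, fault_hi=1.0):
        self.W1 = rng.normal(0.0, 0.5, (n_hid, n_in))   # input -> hidden
        self.W2 = rng.normal(0.0, 0.5, (n_out, n_hid))  # hidden -> output
        self.fault_lo, self.fault_hi = fault_lo, fault_hi

    def forward(self, x, stuck=None, stuck_val=None):
        h = sigmoid(self.W1 @ x)
        if stuck is not None:
            h[stuck] = stuck_val   # inject a stuck-at fault value
        return h, sigmoid(self.W2 @ h)

    def train_step(self, x, t, lr=0.5):
        # inject each extreme fault value into each hidden neuron in turn,
        # doing one backprop update per injected fault
        for j in range(self.W1.shape[0]):
            for v in (self.fault_lo, self.fault_hi):
                h, y = self.forward(x, stuck=j, stuck_val=v)
                d_out = (y - t) * y * (1.0 - y)          # squared-error delta
                d_hid = (self.W2.T @ d_out) * h * (1.0 - h)
                d_hid[j] = 0.0   # the clamped neuron's output is fixed,
                                 # so no gradient flows into its input weights
                self.W2 -= lr * np.outer(d_out, h)
                self.W1 -= lr * np.outer(d_hid, x)

# usage: learn XOR while every single stuck hidden neuron is rehearsed
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)
net = FaultInjectionMLP(2, 8, 1)
for _ in range(2000):
    for x, t in zip(X, T):
        net.train_step(x, t)
```

Because the redundant hidden neurons are trained under every single-fault scenario, the fault-free network's output degrades gracefully when one hidden output is later clamped to a value in the fault interval. VIM-WN would additionally clamp selected hidden-to-output weights to their fault extremes in the same step.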

Keywords

fault tolerance, multilayer perceptron, value injection, multiple fault, weight and neuron fault, learning method


References

  1. Takanami, I., Oyama, Y.: A novel learning algorithm which makes multilayer neural networks multiple-weight-fault tolerant. IEICE Trans. Inf. & Syst. E86-D(12), 2536–2543 (2003)
  2. Phatak, D.S., Koren, I.: Complete and partial fault tolerance of feedforward neural nets. IEEE Trans. Neural Networks 6(2), 446–456 (1995)
  3. Fahlman, S.E., et al.: Neural nets learning algorithms and benchmarks database. Maintained by S.E. Fahlman et al. at the Computer Science Dept., Carnegie Mellon University
  4. Nijhuis, J., Hoefflinger, B., van Schaik, A., Spaanenburg, L.: Limits to the fault-tolerance of a feedforward neural network with learning. In: Proc. Int’l Symp. on FTCS, pp. 228–235 (1990)
  5. Tan, Y., Nanya, T.: A fault-tolerant multi-layer neural network model and its properties. IEICE D-I J76-D-I(7), 380–389 (1993) (in Japanese)
  6. Clay, R.D., Séquin, C.H.: Fault tolerance training improves generalization and robustness. In: Proc. Int’l J. Conf. on Neural Networks, pp. I-769–I-774 (1992)
  7. Ito, T., Takanami, I.: On fault injection approaches for fault tolerance of feedforward neural networks. In: Proc. Int’l Symp. on ATS, pp. 88–93 (1997)
  8. Hammadi, N.C., Ito, H.: A learning algorithm for fault tolerant feedforward neural networks. IEICE Trans. Inf. & Syst. E80-D(1), 21–26 (1997)
  9. Hammadi, N.C., Ohmameuda, T., Kaneko, K., Ito, H.: Dynamic constructive fault tolerant algorithm for feedforward neural networks. IEICE Trans. Inf. & Syst. E81-D(1), 115–123 (1998)
  10. Cavalieri, S., Mirabella, O.: A novel learning algorithm which improves the partial fault tolerance of multilayer neural networks. Neural Networks (Pergamon) 12(1), 91–106 (1999)
  11. Kamiura, N., Hata, Y., Matsui, N.: Fault tolerant feedforward neural networks with learning algorithm based on synaptic weight limit. In: Proc. IEEE Int’l Workshop on On-Line Testing, pp. 222–226 (1999)
  12. Kamiura, N., Taniguchi, Y., Hata, Y., Matsui, N.: A learning algorithm with activation function manipulation for fault tolerant neural networks. IEICE Trans. Inf. & Syst. E84-D(7), 899–905 (2001)
  13. Takase, H., Kita, H., Hayashi, T.: Weight minimization approach for fault tolerant multi-layer neural networks. In: Proc. of Int’l J. Conf. on Neural Networks, pp. 2656–2660 (2001)
  14. Horita, T., Takanami, I., Mori, M.: Learning algorithms which make multilayer neural networks multiple-weight-and-neuron-fault tolerant. IEICE Trans. Inf. & Syst. E91-D(4), 1168–1175 (2008)
  15. Sum, J.P., Leung, C.S., Ho, K.I.J.: On-line node fault injection training algorithm for MLP networks: Objective function and convergence analysis. IEEE Trans. Neural Networks and Learning Systems 23(2), 211–222 (2012)
  16. Ho, K., Leung, C.S., Sum, J.: Objective functions of online weight noise injection training algorithms for MLPs. IEEE Trans. Neural Networks 22(2), 317–323 (2011)
  17. Ho, K.I.J., Leung, C.S., Sum, J.: Convergence and objective functions of some fault/noise-injection-based online learning algorithms for RBF networks. IEEE Trans. Neural Networks 21(6), 938–947 (2010)
  18. Sum, J.P.F., Leung, C.S., Ho, K.I.J.: On objective function, regularizer, and prediction error of a learning algorithm for dealing with multiplicative weight noise. IEEE Trans. Neural Networks 20(1), 124–138 (2009)
  19. Murray, A.F., Edwards, P.J.: Enhanced MLP performance and fault tolerance resulting from synaptic weight noise during training. IEEE Trans. Neural Networks 5(5), 792–802 (1994)
  20. Nishimura, K., Horita, T., Ootsu, M., Takanami, I.: Novel value injection learning methods which make multilayer neural networks multiple-weight-and-neuron-fault tolerant. In: Proc. CSREA Int’l Conf. on PDPTA, pp. 546–552 (July 2009)

Copyright information

© Springer-Verlag Berlin Heidelberg 2014

Authors and Affiliations

  • Tadayoshi Horita (1)
  • Itsuo Takanami (2)
  • Kazuhiro Nishimura (1)
  1. Polytechnic University, Kodaira-shi, Japan
  2. Formerly of Ichinoseki National College of Technology, Iwate-ken, Japan
