
Generalization Error of Faulty MLPs with Weight Decay Regularizer

  • Conference paper
Neural Information Processing. Models and Applications (ICONIP 2010)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 6444)


Abstract

Weight decay is a simple regularization method for improving the generalization ability of multilayer perceptrons (MLPs). In addition, weight decay can improve the fault tolerance of MLPs. However, most existing generalization error results for the weight decay method consider fault-free MLPs only. For faulty MLPs, using a test set to study the generalization ability is not practical, because a trained network has a huge number of possible faulty versions. This paper develops a prediction error formula for predicting the performance of faulty MLPs. Our prediction error results allow us to select an appropriate model for MLPs under the open node fault situation.
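The paper's analytic prediction error formula is not reproduced here, but the setting it addresses can be made concrete. Below is a minimal sketch (an illustration under stated assumptions, not the authors' method): a one-hidden-layer MLP is trained with a weight-decay regularizer on a toy regression task, and the generalization error under open node faults is then estimated by brute-force Monte Carlo sampling of faulty networks, the expensive procedure a closed-form prediction formula would avoid. The data, hyperparameters, and fault model (a hidden node's output stuck at zero) are all illustrative assumptions.

```python
# Minimal sketch: MLP with weight decay, plus an empirical estimate of the
# test error averaged over randomly sampled open-node-fault networks.
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression task (an assumption): y = sin(x) + noise.
X_train = rng.uniform(-3, 3, size=(100, 1))
y_train = np.sin(X_train) + 0.1 * rng.standard_normal((100, 1))
X_test = rng.uniform(-3, 3, size=(500, 1))
y_test = np.sin(X_test)

H = 20      # number of hidden nodes (assumed)
lam = 1e-3  # weight-decay coefficient (assumed)
lr = 0.05   # learning rate

# One hidden layer, tanh activation, linear output.
W1 = rng.standard_normal((1, H)) * 0.5
b1 = np.zeros(H)
W2 = rng.standard_normal((H, 1)) * 0.5
b2 = np.zeros(1)

for _ in range(5000):
    # Forward pass.
    h = np.tanh(X_train @ W1 + b1)
    err = (h @ W2 + b2) - y_train
    # Gradient descent on MSE; the "+ lam * W" terms are the
    # weight-decay regularizer's contribution to the gradient.
    gW2 = h.T @ err / len(X_train) + lam * W2
    dh = (err @ W2.T) * (1 - h ** 2)
    gW1 = X_train.T @ dh / len(X_train) + lam * W1
    W1 -= lr * gW1; b1 -= lr * dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * err.mean(axis=0)

def faulty_test_mse(p_fault, n_samples=200):
    """Average test MSE over sampled faulty networks in which each hidden
    node's output is independently stuck at zero with probability p_fault."""
    errs = []
    for _ in range(n_samples):
        mask = rng.random(H) >= p_fault          # surviving nodes
        h = np.tanh(X_test @ W1 + b1) * mask     # open node fault: output = 0
        errs.append(np.mean(((h @ W2 + b2) - y_test) ** 2))
    return np.mean(errs)

print("fault-free test MSE:        ", faulty_test_mse(0.0, n_samples=1))
print("faulty test MSE (p = 0.1):  ", faulty_test_mse(0.1))
```

With a suitably chosen decay coefficient, the faulty-network error typically degrades far more gracefully than with lam = 0. Sweeping lam and picking the value that minimizes the predicted faulty error is the kind of model selection the paper's formula enables analytically, without a sampling loop like the one above.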





Copyright information

© 2010 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Leung, C.S., Sum, J., Mak, S.K. (2010). Generalization Error of Faulty MLPs with Weight Decay Regularizer. In: Wong, K.W., Mendis, B.S.U., Bouzerdoum, A. (eds) Neural Information Processing. Models and Applications. ICONIP 2010. Lecture Notes in Computer Science, vol 6444. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-17534-3_20


  • DOI: https://doi.org/10.1007/978-3-642-17534-3_20

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-17533-6

  • Online ISBN: 978-3-642-17534-3

  • eBook Packages: Computer Science, Computer Science (R0)
