Abstract
Weight decay is a simple regularization method for improving the generalization ability of multilayer perceptrons (MLPs). In addition, the weight decay method can improve the fault tolerance of MLPs. However, most existing generalization error results for the weight decay method consider fault-free MLPs only. For faulty MLPs, using a test set to study the generalization ability is not practical because a trained network has a huge number of possible faulty realizations. This paper develops a prediction error formula for predicting the performance of faulty MLPs. Our prediction error result allows us to select an appropriate MLP model under the open node fault situation.
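To make the setting concrete, the sketch below (not from the paper; all names, the toy data, and the hyperparameter values are illustrative assumptions) trains a one-hidden-layer MLP with a weight decay penalty and then estimates the fault-averaged error by Monte Carlo sampling of open node faults, i.e., hidden nodes whose outputs are forced to zero. The brute-force sampling loop at the end is exactly the expensive evaluation that the paper's closed-form prediction error formula is intended to avoid; the formula itself is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: y = sin(x) + noise (illustrative only).
X = rng.uniform(-np.pi, np.pi, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)

H = 20        # number of hidden nodes (assumed)
lam = 1e-3    # weight decay strength (assumed)
lr = 0.05     # learning rate (assumed)

# One-hidden-layer MLP with tanh hidden units.
W1 = 0.5 * rng.standard_normal((1, H))
b1 = np.zeros(H)
w2 = 0.5 * rng.standard_normal(H)
b2 = 0.0

def forward(X, node_mask=None):
    h = np.tanh(X @ W1 + b1)
    if node_mask is not None:   # open node fault: masked outputs stuck at zero
        h = h * node_mask
    return h @ w2 + b2, h

# Gradient descent on MSE + lam * (||W1||^2 + ||w2||^2), i.e. weight decay.
for _ in range(2000):
    out, h = forward(X)
    err = out - y
    g_w2 = h.T @ err / len(y) + 2 * lam * w2
    g_b2 = err.mean()
    dh = np.outer(err, w2) * (1 - h**2)
    g_W1 = X.T @ dh / len(y) + 2 * lam * W1
    g_b1 = dh.mean(axis=0)
    w2 -= lr * g_w2; b2 -= lr * g_b2
    W1 -= lr * g_W1; b1 -= lr * g_b1

# Monte Carlo estimate of the fault-averaged error: each hidden node
# fails (output forced to zero) independently with probability p.
p = 0.05
mse = []
for _ in range(1000):
    mask = (rng.random(H) >= p).astype(float)
    out, _ = forward(X, node_mask=mask)
    mse.append(np.mean((out - y) ** 2))

print("fault-free MSE:", np.mean((forward(X)[0] - y) ** 2))
print("mean faulty MSE (p=%.2f): %.4f" % (p, np.mean(mse)))
```

Even in this toy setting, a reliable fault-averaged estimate needs many sampled fault patterns per candidate model, which is why a closed-form prediction of the faulty-network error is attractive for model selection.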
© 2010 Springer-Verlag Berlin Heidelberg
Cite this paper
Leung, C.S., Sum, J., Mak, S.K. (2010). Generalization Error of Faulty MLPs with Weight Decay Regularizer. In: Wong, K.W., Mendis, B.S.U., Bouzerdoum, A. (eds) Neural Information Processing. Models and Applications. ICONIP 2010. Lecture Notes in Computer Science, vol 6444. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-17534-3_20
Print ISBN: 978-3-642-17533-6
Online ISBN: 978-3-642-17534-3