Abstract
The Adaline network [1] is a classic neural architecture whose learning rule is the famous least mean squares (LMS) algorithm (also known as the delta rule or Widrow-Hoff rule). The LMS algorithm has been shown to be optimal in the H∞ sense, since it tolerates disturbances of small energy, such as measurement noise, parameter drift and modelling errors [2,3]. This optimality, however, has been demonstrated only for regression-like problems, not for pattern classification. Bearing this in mind, we first show that the performance of the LMS algorithm and its variants (including the recent kernel LMS algorithm) in pattern classification tasks deteriorates considerably in the presence of labelling errors, and then introduce robust extensions of the Adaline network that handle such errors efficiently. Comprehensive computer simulations show that the proposed extensions consistently outperform the original version.
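As a concrete illustration (not code from the paper), the following minimal Python sketch contrasts the standard LMS/delta rule used to train an Adaline with a Huber-style robust variant in the spirit of the least mean M-estimate algorithms [18,19], which bound the influence of large errors such as those caused by mislabelled samples. The function names lms_fit and huber_lms_fit, the learning rate lr and the clipping threshold delta are illustrative assumptions, not the paper's notation.

import numpy as np

def lms_fit(X, y, lr=0.01, epochs=50):
    """Standard Adaline/LMS (Widrow-Hoff) rule: w <- w + lr * e * x,
    where e is the error of the *linear* (unthresholded) output."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for x, target in zip(X, y):
            e = target - (w @ x + b)   # linear output error
            w += lr * e * x
            b += lr * e
    return w, b

def huber_lms_fit(X, y, lr=0.01, epochs=50, delta=1.0):
    """Robust variant: pass the error through a Huber-type influence
    function (here, clipping), so a mislabelled sample with large |e|
    cannot dominate the weight update."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for x, target in zip(X, y):
            e = target - (w @ x + b)
            psi = np.clip(e, -delta, delta)   # bounded influence
            w += lr * psi * x
            b += lr * psi
    return w, b

In both cases the targets are assumed to be in {-1, +1}, and a test pattern x is classified with np.sign(w @ x + b); only the update's sensitivity to large errors differs between the two rules.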
References
Widrow, B.: Thinking about thinking: The discovery of the LMS algorithm. IEEE Signal Processing Magazine 22(1), 100–106 (2005)
Hassibi, B., Sayed, A.H., Kailath, T.: H∞ optimality of the LMS algorithm. IEEE Transactions on Signal Processing 44(2), 267–280 (1996)
Bolzern, P., Colaneri, P., De Nicolao, G.: H∞-robustness of adaptive filters against measurement noise and parameter drift. Automatica 35(9), 1509–1520 (1999)
Poggio, T., Girosi, F.: Networks for approximation and learning. Proceedings of the IEEE 78(9), 1481–1497 (1990)
Widrow, B., Greenblatt, A., Kim, Y., Park, D.: The No-Prop algorithm: A new learning algorithm for multilayer neural networks. Neural Networks 37, 182–188 (2013)
Jaeger, H.: Optimization and applications of echo state networks with leaky-integrator neurons. Neural Networks 20(3), 335–352 (2007)
Chan, S.C., Zhou, Y.: On the performance analysis of the least mean M-estimate and normalized least mean M-estimate algorithms with Gaussian inputs and additive Gaussian and contaminated Gaussian noises. Journal of Signal Processing Systems 80(1), 81–103 (2010)
Liu, W., Pokharel, P., Principe, J.: The kernel least-mean-square algorithm. IEEE Transactions on Signal Processing 56(2), 543–554 (2008)
Friess, T.T., Cristianini, N., Campbell, C.: The kernel Adatron algorithm: A fast and simple learning procedure for support vector machines. In: Proceedings of the 15th International Conference on Machine Learning (ICML 1998), pp. 188–196 (1998)
Zou, Y., Chan, S.C., Ng, T.S.: Least mean M-estimate algorithms for robust adaptive filtering in impulsive noise. IEEE Transactions on Circuits and Systems II 47(12), 1564–1569 (2000)
Huber, P.J.: Robust estimation of a location parameter. Annals of Mathematical Statistics 35(1), 73–101 (1964)
Modaghegh, H., Khosravi, R., Manesh, S.A., Yazdi, H.S.: A new modeling algorithm: Normalized kernel least mean square. In: Proceedings of the International Conference on Innovations in Information Technology (IIT 2009), pp. 120–124 (2009)
Bache, K., Lichman, M.: UCI machine learning repository (2014), http://archive.ics.uci.edu/ml
Copyright information
© 2014 Springer International Publishing Switzerland
About this paper
Cite this paper
Mattos, C.L.C., Santos, J.D.A., Barreto, G.A. (2014). Improved Adaline Networks for Robust Pattern Classification. In: Wermter, S., et al. Artificial Neural Networks and Machine Learning – ICANN 2014. ICANN 2014. Lecture Notes in Computer Science, vol 8681. Springer, Cham. https://doi.org/10.1007/978-3-319-11179-7_73
DOI: https://doi.org/10.1007/978-3-319-11179-7_73
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-11178-0
Online ISBN: 978-3-319-11179-7