Improved Adaline Networks for Robust Pattern Classification

  • Conference paper
Artificial Neural Networks and Machine Learning – ICANN 2014 (ICANN 2014)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 8681)


Abstract

The Adaline network [1] is a classic neural architecture whose learning rule is the famous least mean squares (LMS) algorithm (a.k.a. the delta rule or Widrow-Hoff rule). It has been demonstrated that the LMS algorithm is optimal in the H∞ sense, since it tolerates small (in energy) disturbances such as measurement noise, parameter drift and modelling errors [2,3]. This optimality of the LMS algorithm, however, has been demonstrated for regression-like problems only, not for pattern classification. Bearing this in mind, we first show that the performance of the LMS algorithm and its variants (including the recent kernel LMS algorithm) in pattern classification tasks deteriorates considerably in the presence of labelling errors, and then introduce robust extensions of the Adaline network that can deal efficiently with such errors. Comprehensive computer simulations show that the proposed extensions consistently outperform the original version.
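To make the setting concrete, the following is a minimal sketch (not taken from the paper) of the classic LMS/delta-rule update for a single Adaline unit, alongside an M-estimate variant in the spirit of refs. [10, 11], where the raw error in the update is passed through Huber's influence function so that large (outlier-like) errors are clipped. All function and parameter names (`mu`, `delta`, `huber_psi`) are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def lms_train(X, d, mu=0.01, epochs=20):
    """Classic LMS (Widrow-Hoff / delta rule) training of an Adaline unit.

    X: (n_samples, n_features) inputs; d: desired outputs.
    """
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x, target in zip(X, d):
            e = target - w @ x      # instantaneous prediction error
            w += mu * e * x         # delta-rule weight update
    return w

def huber_psi(e, delta=1.0):
    """Huber influence function: linear for small errors, clipped beyond delta."""
    return e if abs(e) <= delta else delta * np.sign(e)

def robust_lms_train(X, d, mu=0.01, epochs=20, delta=1.0):
    """M-estimate LMS: outlier errors (e.g. from label noise) are down-weighted."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x, target in zip(X, d):
            e = target - w @ x
            w += mu * huber_psi(e, delta) * x   # clipped-error update
    return w
```

With clean targets both rules converge to the same weights; the robust variant differs only when large errors occur, which is precisely the label-noise scenario the paper studies.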



References

  1. Widrow, B.: Thinking about thinking: The discovery of the LMS algorithm. IEEE Signal Processing Magazine 22(1), 100–106 (2005)

  2. Hassibi, B., Sayed, A.H., Kailath, T.: H∞ optimality of the LMS algorithm. IEEE Transactions on Signal Processing 44(2), 267–280 (1996)

  3. Bolzern, P., Colaneri, P., De Nicolao, G.: H ∞ -robustness of adaptive filters against measurement noise and parameter drift. Automatica 35(9), 1509–1520 (1999)

  4. Poggio, T., Girosi, F.: Networks for approximation and learning. Proceedings of the IEEE 78(9), 1481–1497 (1990)

  5. Widrow, B., Greenblatt, A., Kim, Y., Park, D.: The No-Prop algorithm: A new learning algorithm for multilayer neural networks. Neural Networks 37, 182–188 (2013)

  6. Jaeger, H.: Optimization and applications of echo state networks with leaky-integrator neurons. Neural Networks 20(3), 335–352 (2007)

  7. Chan, S.C., Zhou, Y.: On the performance analysis of the least mean M-estimate and normalized least mean M-estimate algorithms with Gaussian inputs and additive Gaussian and contaminated Gaussian noises. Journal of Signal Processing Systems 80(1), 81–103 (2010)

  8. Liu, W., Pokharel, P., Principe, J.: The kernel least-mean-square algorithm. IEEE Transactions on Signal Processing 56(2), 543–554 (2008)

  9. Friess, T.T., Cristianini, N., Campbell, C.: The kernel Adatron algorithm: A fast and simple learning procedure for support vector machines. In: Proceedings of the 15th International Conference on Machine Learning (ICML 1998), pp. 188–196 (1998)

  10. Zou, Y., Chan, S.C., Ng, T.S.: Least mean M-estimate algorithms for robust adaptive filtering in impulsive noise. IEEE Transactions on Circuits and Systems II 47(12), 1564–1569 (2000)

  11. Huber, P.J.: Robust estimation of a location parameter. Annals of Mathematical Statistics 35(1), 73–101 (1964)

  12. Modaghegh, H., Khosravi, R., Manesh, S.A., Yazdi, H.S.: A new modeling algorithm-normalized kernel least mean square. In: Proceedings of the International Conference on Innovations in Information Technology (IIT 2009), pp. 120–124 (2009)

  13. Bache, K., Lichman, M.: UCI machine learning repository (2014), http://archive.ics.uci.edu/ml

Copyright information

© 2014 Springer International Publishing Switzerland

About this paper

Cite this paper

Mattos, C.L.C., Santos, J.D.A., Barreto, G.A. (2014). Improved Adaline Networks for Robust Pattern Classification. In: Wermter, S., et al. Artificial Neural Networks and Machine Learning – ICANN 2014. ICANN 2014. Lecture Notes in Computer Science, vol 8681. Springer, Cham. https://doi.org/10.1007/978-3-319-11179-7_73

  • DOI: https://doi.org/10.1007/978-3-319-11179-7_73

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-11178-0

  • Online ISBN: 978-3-319-11179-7
