
On robust randomized neural networks for regression: a comprehensive review and evaluation

  • Review
  • Published in: Neural Computing and Applications

Abstract

Data from real-world regression problems are quite often contaminated with outliers. To handle such undesirable samples efficiently, robust parameter estimation methods have been incorporated into randomized neural network (RNN) models, usually replacing the ordinary least squares (OLS) method. Despite recent successful applications to outlier-contaminated scenarios, significant issues remain unaddressed in the design of reliable outlier-robust RNN models for regression tasks. For example, the number of hidden neurons directly impacts the norm of the estimated output weights, since the OLS method then relies on an ill-conditioned hidden-layer output matrix. Another design concern involves the high sensitivity of RNNs to the randomization of the hidden-layer weights, an issue that can be suitably handled, e.g., by intrinsic plasticity techniques. Bearing these concerns in mind, we describe several ideas introduced in previous works concerning the design of RNN models that are both robust to outliers and numerically stable. A comprehensive evaluation of their performance is carried out across several benchmark regression datasets, taking accuracy, weight norms, and training time as figures of merit.
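
To make the setting concrete, the following sketch contrasts the plain OLS solve for the output weights of an RNN with a ridge-regularized solve on outlier-contaminated data. It is a minimal illustration assuming a single logistic hidden layer with fixed random weights; the dimensions, the regularization value, and the synthetic data are hypothetical choices for exposition, not the models or settings evaluated in the paper.

    import numpy as np

    rng = np.random.default_rng(42)
    N, d, q = 200, 5, 100                     # samples, inputs, hidden neurons

    # Synthetic regression data with a few injected outliers
    U = rng.standard_normal((N, d))
    y = np.sin(U[:, 0]) + 0.1 * rng.standard_normal(N)
    y[::20] += 5.0                            # contaminate every 20th target

    # Random (untrained) hidden layer: logistic activation of a random projection
    W = rng.standard_normal((d, q))
    b = rng.standard_normal(q)
    H = 1.0 / (1.0 + np.exp(-(U @ W + b)))    # hidden-layer output matrix

    # OLS output weights: sensitive to the conditioning of H
    beta_ols = np.linalg.lstsq(H, y, rcond=None)[0]

    # Ridge-regularized solve: keeps the output-weight norm under control
    lam = 1e-2
    beta_ridge = np.linalg.solve(H.T @ H + lam * np.eye(q), H.T @ y)

    print("cond(H):", np.linalg.cond(H))
    print("||beta_ols||:", np.linalg.norm(beta_ols))
    print("||beta_ridge||:", np.linalg.norm(beta_ridge))

As the abstract notes, growing q tends to inflate the conditioning of H and, with it, the norm of the OLS solution, which is one motivation for regularized and robust estimators.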


Notes

  1. See Fig. 1 in this paper.

  2. Recall that the index n denotes the n-th input vector $\mathbf{u}_n$. The index k denotes the iteration within the IRLS algorithm; a minimal IRLS step is sketched below.
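
As a concrete, hypothetical illustration of the indices fixed in note 2, the sketch below implements a common IRLS variant with Huber weights and a MAD-based scale estimate: at iteration k, each sample n is down-weighted according to its current residual before a weighted least-squares solve. The tuning constant delta = 1.345 and the small stabilizing ridge term are illustrative choices, not settings from the paper.

    import numpy as np

    def irls_huber(H, y, delta=1.345, n_iter=20, tol=1e-6):
        """Robust output weights via IRLS with Huber weights."""
        beta = np.linalg.lstsq(H, y, rcond=None)[0]        # k = 0: plain OLS
        for _ in range(n_iter):                            # iterations k = 1, 2, ...
            r = y - H @ beta                               # residual of sample n
            s = np.median(np.abs(r - np.median(r))) / 0.6745 + 1e-12  # MAD scale
            a = np.abs(r) / s
            w = np.where(a <= delta, 1.0, delta / a)       # Huber weight per sample
            HW = H * w[:, None]                            # row-weighted design matrix
            beta_new = np.linalg.solve(
                HW.T @ H + 1e-8 * np.eye(H.shape[1]),      # tiny ridge for stability
                HW.T @ y,
            )
            if np.linalg.norm(beta_new - beta) < tol:      # stop when updates stall
                return beta_new
            beta = beta_new
        return beta

Applied to the H and y of the earlier sketch, irls_huber(H, y) down-weights the injected outliers that the plain OLS solution fits at face value.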


Acknowledgements

The first author thanks CAPES for supporting this work through a PNPD (National Postdoctoral Program) grant. The second and third authors thank CNPq for grants 311211/2017-8 and 309379/2019-9, respectively.

Author information

Corresponding author

Correspondence to Guilherme A. Barreto.

Ethics declarations

Human participants or animals

This article does not contain any studies with human participants or animals performed by any of the authors, and the research complies with the ethical standards of the journal.

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article

Cite this article

Freire, A.L., Rocha-Neto, A.R. & Barreto, G.A. On robust randomized neural networks for regression: a comprehensive review and evaluation. Neural Comput & Applic 32, 16931–16950 (2020). https://doi.org/10.1007/s00521-020-04994-5

