
Improved Adaptive Incremental Error-Minimization-Based Extreme Learning Machine with Localized Generalization Error Model

  • Wen-wen Han
  • Peng Zheng
  • Zhong-Qiu Zhao
  • Wei-dong Tian
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10956)

Abstract

Extreme learning machine (ELM) is a learning algorithm for single-hidden-layer feed-forward neural networks (SLFNs). The adaptive incremental error-minimization-based ELM (AIE-ELM) adaptively chooses the number of hidden-layer nodes for different data sets: it is an incremental extreme learning machine that grows hidden nodes adaptively and incrementally updates the output weights by minimizing the training error. To enhance the generalization ability of AIE-ELM, this paper extends it with the localized generalization error model (the resulting algorithm is referred to as AIEL-ELM), which takes the sensitivity of the output to input perturbations into account. Experimental results on several benchmark data sets verify that our proposed method can obtain the optimal number of hidden-layer nodes and achieves a significant improvement in classification/regression performance and generalization ability compared with previous works.
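For readers unfamiliar with incremental ELM variants, the sketch below illustrates the general idea of growing random hidden nodes while driving down the training error. It is a simplified illustration only, not the paper's AIE-ELM or AIEL-ELM: it refits the output weights by least squares after each new node rather than using the paper's incremental update, it uses a sigmoid activation, and the function and parameter names (incremental_elm, max_nodes, tol) are our own.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def incremental_elm(X, T, max_nodes=100, tol=1e-3, rng=None):
    """Simplified incremental ELM sketch: add random hidden nodes one at a
    time and refit the output weights by least squares until the training
    RMSE drops below `tol` or `max_nodes` is reached. Illustrative only;
    AIE-ELM updates the output weights incrementally instead of refitting."""
    rng = np.random.default_rng() if rng is None else rng
    n_samples, n_features = X.shape
    W = np.empty((0, n_features))   # input weights, one row per hidden node
    b = np.empty(0)                 # hidden-node biases
    beta = None                     # output weights
    for _ in range(max_nodes):
        # Draw one new random hidden node and append it.
        W = np.vstack([W, rng.standard_normal((1, n_features))])
        b = np.append(b, rng.standard_normal())
        H = sigmoid(X @ W.T + b)            # hidden-layer output matrix
        beta = np.linalg.pinv(H) @ T        # least-squares output weights
        rmse = np.sqrt(np.mean((H @ beta - T) ** 2))
        if rmse < tol:
            break
    return W, b, beta
```

Predictions for new inputs would then be sigmoid(X_new @ W.T + b) @ beta. The AIEL-ELM extension described in the abstract would additionally take a localized generalization error measure (the output's sensitivity to small input perturbations) into account when choosing the final architecture.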

Keywords

Extreme learning machine · Error minimization · Localized generalization error · Hidden layer nodes

References

  1. Feng, G., Huang, G.B., Lin, Q., Gay, R.: Error minimized extreme learning machine with growth of hidden nodes and incremental learning. IEEE Trans. Neural Netw. 20(8), 1352–1357 (2009)
  2. Huang, G.B., Chen, L.: Convex incremental extreme learning machine. Neurocomputing 70(16), 3056–3062 (2007)
  3. Huang, G.B., Chen, L.: Enhanced random search based incremental extreme learning machine. Neurocomputing 71(16–18), 3460–3468 (2008)
  4. Huang, G.B., Chen, L., Siew, C.K.: Universal approximation using incremental constructive feedforward networks with random hidden nodes. IEEE Trans. Neural Netw. 17(4), 879 (2006)
  5. Huang, G.B., Zhu, Q.Y., Siew, C.K.: Extreme learning machine: a new learning scheme of feedforward neural networks. In: Proceedings of the IEEE International Joint Conference on Neural Networks, vol. 2, pp. 985–990 (2005)
  6. Huang, G.B., Zhu, Q.Y., Siew, C.K.: Extreme learning machine: theory and applications. Neurocomputing 70(1), 489–501 (2006)
  7. Lan, Y., Soh, Y.C., Huang, G.B.: Constructive hidden nodes selection of extreme learning machine for regression. Neurocomputing 73(16–18), 3191–3199 (2010)
  8. Liao, S., Feng, C.: Meta-ELM: ELM with ELM hidden nodes. Neurocomputing 128(5), 81–87 (2014)
  9. Miche, Y., Sorjamaa, A., Bas, P., Simula, O., Jutten, C., Lendasse, A.: OP-ELM: optimally pruned extreme learning machine. IEEE Trans. Neural Netw. 21(1), 158–162 (2010)
  10. Mozaffari, A., Azad, N.L.: Optimally pruned extreme learning machine with ensemble of regularization techniques and negative correlation penalty applied to automotive engine coldstart hydrocarbon emission identification. Neurocomputing 131(7), 143–156 (2014)
  11. Rong, H.J., Ong, Y.S., Tan, A.H., Zhu, Z.: A fast pruned-extreme learning machine for classification problem. Neurocomputing 72(1–3), 359–366 (2008)
  12. Wang, X.Z., Shao, Q.Y., Miao, Q., Zhai, J.H.: Architecture selection for networks trained with extreme learning machine using localized generalization error model. Neurocomputing 102(2), 3–9 (2013)
  13. Hoeffding, W.: Probability inequalities for sums of bounded random variables. J. Am. Stat. Assoc. 58(301), 13–30 (1963)
  14. Yeung, D.S., Ng, W.W.Y., Wang, D., Tsang, E.C.C., Wang, X.Z.: Localized generalization error model and its application to architecture selection for radial basis function neural network. IEEE Trans. Neural Netw. 18(5), 1294–1305 (2007)
  15. Ying, L.: Orthogonal incremental extreme learning machine for regression and multiclass classification. Neural Comput. Appl. 27(1), 111–120 (2016)
  16. Zhang, R., Lan, Y., Huang, G.-B., Soh, Y.C.: Extreme learning machine with adaptive growth of hidden nodes and incremental updating of output weights. In: Kamel, M., Karray, F., Gueaieb, W., Khamis, A. (eds.) AIS 2011. LNCS (LNAI), vol. 6752, pp. 253–262. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-21538-4_25
  17. Zou, W., Yao, F., Zhang, B., Guan, Z.: Improved meta-ELM with error feedback incremental ELM as hidden nodes. Neural Comput. Appl. 8, 1–8 (2017)

Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  • Wen-wen Han (1)
  • Peng Zheng (1)
  • Zhong-Qiu Zhao (1)
  • Wei-dong Tian (1)
  1. College of Computer and Information, Hefei University of Technology, Hefei, China
