Two swarm intelligence approaches for tuning extreme learning machine

  • Abobakr Khalil Alshamiri
  • Alok Singh
  • Bapi Raju Surampudi
Original Article

Abstract

Extreme learning machine (ELM) is a recent algorithm for training single-hidden-layer feedforward neural networks that provides good performance as well as fast learning speed. ELM tends to require a large number of hidden neurons to produce good generalization performance, since the input weights and hidden-neuron biases are randomly initialized and remain unchanged during the learning process, while the output weights are determined analytically. In this paper, two swarm intelligence based metaheuristic techniques, viz. Artificial Bee Colony (ABC) and Invasive Weed Optimization (IWO), are proposed for tuning the input weights and hidden biases. The resulting approaches, called ABC-ELM and IWO-ELM, select the input weights and hidden biases using ABC and IWO, respectively, and compute the output weights using the Moore-Penrose (MP) generalized inverse. The proposed approaches are tested on several benchmark classification data sets, and simulations show that they obtain good generalization performance in comparison with other techniques available in the literature.
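To make the two computations described above concrete, the sketch below shows (i) the analytic ELM step, where the output weights follow from the Moore-Penrose generalized inverse of the hidden-layer output matrix, and (ii) how a candidate set of input weights and hidden biases could be scored by a metaheuristic such as ABC or IWO. This is a minimal illustration under stated assumptions, not the authors' implementation: the sigmoid activation, the validation-accuracy fitness, and all function names (`elm_output_weights`, `elm_fitness`, etc.) are assumptions made for the example.

```python
# Minimal ELM sketch plus the fitness evaluation a metaheuristic (ABC/IWO)
# could use when tuning the input weights and hidden biases.
# Assumptions: sigmoid activation, validation accuracy as fitness.
import numpy as np


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


def elm_output_weights(X, T, W, b):
    """Analytic ELM solution: H = g(X W + b), beta = pinv(H) @ T."""
    H = sigmoid(X @ W + b)            # hidden-layer output matrix
    return np.linalg.pinv(H) @ T      # Moore-Penrose generalized inverse


def elm_predict(X, W, b, beta):
    return sigmoid(X @ W + b) @ beta


def elm_fitness(candidate, X_tr, T_tr, X_val, y_val, n_in, n_hidden):
    """Score one candidate (a flat vector encoding W and b):
    train the output weights analytically, return validation accuracy."""
    W = candidate[: n_in * n_hidden].reshape(n_in, n_hidden)
    b = candidate[n_in * n_hidden:]
    beta = elm_output_weights(X_tr, T_tr, W, b)
    pred = elm_predict(X_val, W, b, beta).argmax(axis=1)
    return (pred == y_val).mean()


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_in, n_hidden, n_classes = 4, 20, 3
    X_tr, X_val = rng.normal(size=(100, n_in)), rng.normal(size=(30, n_in))
    y_tr, y_val = rng.integers(0, n_classes, 100), rng.integers(0, n_classes, 30)
    T_tr = np.eye(n_classes)[y_tr]    # one-hot targets

    # Plain ELM: input weights and biases are random and stay fixed.
    W = rng.uniform(-1, 1, size=(n_in, n_hidden))
    b = rng.uniform(-1, 1, size=n_hidden)
    candidate = np.concatenate([W.ravel(), b])
    print("ELM validation accuracy:",
          elm_fitness(candidate, X_tr, T_tr, X_val, y_val, n_in, n_hidden))
```

In ABC-ELM and IWO-ELM, a function of the form of `elm_fitness` would presumably serve as the objective that the bee colony or weed population maximizes; only the input-side parameters are searched, since the output weights always follow analytically from the MP generalized inverse.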

Keywords

Artificial bee colony algorithm · Classification · Extreme learning machine · Invasive weed optimization · Swarm intelligence

Copyright information

© Springer-Verlag Berlin Heidelberg 2017

Authors and Affiliations

  • Abobakr Khalil Alshamiri (1)
  • Alok Singh (1)
  • Bapi Raju Surampudi (1, 2)
  1. School of Computer and Information Sciences, University of Hyderabad, Hyderabad, India
  2. Cognitive Science Lab, International Institute of Information Technology, Hyderabad, India