
Performance Analysis of Extreme Learning Machine Variants with Varying Intermediate Nodes and Different Activation Functions

  • Harshit Kumar Lohani
  • S. Dhanalakshmi
  • V. Hemalatha
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 768)

Abstract

Feedforward neural networks are a class of artificial neural networks in which data follows a unidirectional path: the input nodes are connected to the intermediate layers, and the intermediate layers are connected to the output layer. There are no feedback connections to the input or intermediate layers, which distinguishes them from recurrent neural networks. The Extreme Learning Machine (ELM) is an algorithm of this kind, with no feedback path and data flowing in a single direction, i.e., from input to output. ELM is an emerging algorithm and is widely used for, but not limited to, classification, clustering, regression, sparse approximation, feature learning, and compression, with a single layer or multiple layers of intermediate nodes. The principal advantage of ELM is that the intermediate-layer parameters do not need to be tuned: the intermediate layer is randomly generated and never updated thereafter.
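
The training procedure this property implies is simple: generate the intermediate-layer weights once at random, then solve for the output weights in closed form. The following is a minimal sketch in Python/NumPy, assuming a single intermediate layer with a sigmoid activation and a least-squares (Moore-Penrose pseudoinverse) solution for the output weights; the function names and parameters are illustrative, not taken from the paper.

```python
import numpy as np

def elm_train(X, T, n_hidden, seed=None):
    """Train a single-intermediate-layer ELM (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    # Intermediate-layer weights and biases are drawn randomly
    # and, per the ELM scheme, are never updated afterwards.
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))  # sigmoid activation
    # Output weights via the Moore-Penrose pseudoinverse (least squares).
    beta = np.linalg.pinv(H) @ T
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta
```

Because only the output weights are solved for, training reduces to a single linear least-squares problem, which is what makes ELM fast compared with gradient-based tuning of all layers.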

Keywords

ELM · Clustering · Regression · Sparse approximation · Neural networks

Copyright information

© Springer Nature Singapore Pte Ltd. 2019

Authors and Affiliations

  • Harshit Kumar Lohani¹
  • S. Dhanalakshmi¹
  • V. Hemalatha¹
  1. Faculty of Electronics and Communication Engineering, Kattankulathur Campus, SRM University, Chennai, India
