Region-Enhanced Multi-layer Extreme Learning Machine

Published in: Cognitive Computation

Abstract

Deep neural networks have achieved significant success in learning representations that replace traditionally hand-crafted features, especially for complex objects. Over the past decades, this learning paradigm has attracted thousands of researchers and has been widely applied in speech, visual, and text recognition. One deep network, the multi-layer extreme learning machine (ML-ELM), performs well in representation learning while inheriting the fast training and approximation capability of the extreme learning machine (ELM). However, as with most deep networks, the performance of the ML-ELM depends largely on the probability distribution of the training data. In this paper, we propose an improved ML-ELM that enhances the contributions of locally significant regions at the input end, following the idea of the selective attention mechanism. To avoid modeling the complex principles of the full attention system and to keep the focus on our local region-enhancement idea, the paper considers only two typical attention regions. One is the geometric central region, which normally attracts human attention because of the focal attention mechanism. The other is the task-driven region of interest, illustrated here with face recognition. Comprehensive experiments are conducted on three public datasets: MNIST, NORB, and ORL. The comparison results demonstrate that the proposed region-enhanced ML-ELM (RE-ML-ELM) improves the learning of important features by utilizing a priori knowledge of attention and achieves a higher recognition rate than both the standard ML-ELM and the basic ELM. Moreover, because it retains the non-iterative parameter training shared by ELM-family methods, the proposed algorithm outperforms most state-of-the-art deep networks, such as the deep belief network (DBN), in training efficiency.
Furthermore, owing to its deep structure with fewer hidden nodes at each layer, the proposed RE-ML-ELM achieves a training efficiency comparable to that of the ML-ELM and trains faster than the basic ELM, which, as a wide single-layer network, requires many more hidden nodes to reach a recognition accuracy similar to that of deep networks. By combining a priori knowledge of the human selective attention system with data-driven learning, the proposed region-enhanced ML-ELM improves image classification performance. We believe that deliberately combining psychological knowledge with data-driven learning algorithms has the potential to improve their cognitive computing ability.
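The two ingredients named in the abstract, non-iterative ELM training and enhancement of an attention region at the input end, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the names (`enhance_center`, `ELM`), the enhancement factor, and the toy data are all assumptions for demonstration; only the general recipe (random hidden weights, sigmoid activation, closed-form output weights via the pseudo-inverse, and a scaled central input region) follows the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

def enhance_center(X, side, factor=2.0):
    """Scale up pixels in the geometric central region of each square image.

    X: (n_samples, side*side) flattened images. The central window is
    boosted by `factor`, mimicking focal-attention enhancement at the input.
    """
    mask = np.ones((side, side))
    lo, hi = side // 4, 3 * side // 4          # central region bounds
    mask[lo:hi, lo:hi] = factor
    return X * mask.ravel()

class ELM:
    """Basic extreme learning machine: random hidden layer, closed-form output."""
    def __init__(self, n_hidden=64):
        self.n_hidden = n_hidden

    def fit(self, X, T):
        n_features = X.shape[1]
        # Hidden-layer weights and biases are random and never trained.
        self.W = rng.normal(size=(n_features, self.n_hidden))
        self.b = rng.normal(size=self.n_hidden)
        H = 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))  # sigmoid features
        # Non-iterative training: output weights from the Moore-Penrose inverse.
        self.beta = np.linalg.pinv(H) @ T
        return self

    def predict(self, X):
        H = 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))
        return H @ self.beta

# Toy usage: 8x8 "images" whose class depends only on the central region.
side = 8
X = rng.normal(size=(200, side * side))
labels = (X.reshape(-1, side, side)[:, 2:6, 2:6].mean(axis=(1, 2)) > 0).astype(int)
T = np.eye(2)[labels]                          # one-hot targets

Xe = enhance_center(X, side)                   # region-enhanced input
model = ELM(n_hidden=64).fit(Xe, T)
pred = model.predict(Xe).argmax(axis=1)
accuracy = (pred == labels).mean()
```

In the ML-ELM, several such randomly initialized ELM autoencoders are stacked; the region enhancement above would be applied once, at the first input layer.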



Funding

This research was partially sponsored by the National Natural Science Foundation of China (Nos. 61871276, 61672070, and 61672071), the Beijing Municipal Natural Science Foundation (Nos. 7184199 and 4162058), the Research Fund from the Beijing Innovation Center for Future Chips (No. KYJJ2018004), and the 2018 Talent-Development Quality Enhancement Project of BISTU (No. 5111823402).

Author information


Corresponding author

Correspondence to Jun Miao.

Ethics declarations

Conflict of Interest

The authors declare that they have no conflict of interest.

Informed Consent

Informed consent was not required, as no humans or animals were involved.

Human and Animal Rights

This article does not contain any studies with human or animal subjects performed by any of the authors.


About this article


Cite this article

Jia, X., Li, X., Jin, Y. et al. Region-Enhanced Multi-layer Extreme Learning Machine. Cogn Comput 11, 101–109 (2019). https://doi.org/10.1007/s12559-018-9596-3
