Reduced Kernel Extreme Learning Machine

  • Wanyu Deng
  • Qinghua Zheng
  • Kai Zhang
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 226)

Abstract

We present a fast and accurate algorithm, the reduced kernel extreme learning machine (Reduced-KELM). It randomly selects a subset \(\tilde{X}\) from the given dataset and uses \(\mathcal{K}(X,\tilde{X})\) in place of \(\mathcal{K}(X,X)\). The large-scale kernel matrix of size \(n\times n\) is thereby reduced to \(n\times \tilde{n}\), and the time-consuming inversion of the kernel matrix is reduced from \(O(n^3)\) to \(O(\tilde{n}^3)\), where \(\tilde{n} \ll n\). Experimental results show that Reduced-KELM achieves accuracy similar to KELM while being significantly faster.
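The reduction described in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' reference implementation: the RBF kernel, the ridge parameter \(C\), and the function names are assumptions chosen for the sketch. The key point is that the rectangular kernel \(\mathcal{K}(X,\tilde{X})\) is \(n\times\tilde{n}\), so the only matrix inverted is \(\tilde{n}\times\tilde{n}\).

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    # Pairwise RBF kernel matrix between the rows of A and B
    # (kernel choice is an assumption; any Mercer kernel works).
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def reduced_kelm_fit(X, T, n_tilde=50, C=100.0, gamma=0.5, seed=0):
    # Randomly select n_tilde landmark rows X_tilde from X, build the
    # rectangular kernel K = K(X, X_tilde) of size n x n_tilde, and solve
    # a ridge-regularised least-squares problem. The matrix inverted is
    # only n_tilde x n_tilde, so the cost is O(n_tilde^3), not O(n^3).
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=n_tilde, replace=False)
    X_tilde = X[idx]
    K = rbf_kernel(X, X_tilde, gamma)                # n x n_tilde
    A = K.T @ K + np.eye(n_tilde) / C                # n_tilde x n_tilde
    alpha = np.linalg.solve(A, K.T @ T)              # output weights
    return X_tilde, alpha

def reduced_kelm_predict(X_new, X_tilde, alpha, gamma=0.5):
    # Prediction only needs kernels against the n_tilde landmarks.
    return rbf_kernel(X_new, X_tilde, gamma) @ alpha
```

A usage sketch: fitting a smooth regression target with 200 samples and 50 landmarks trains in one small linear solve, and prediction cost scales with \(\tilde{n}\) rather than \(n\).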



Copyright information

© Springer International Publishing Switzerland 2013

Authors and Affiliations

  1. Xi’an University of Posts & Telecommunications, Xi’an, China
  2. Xi’an Jiaotong University, Xi’an, China
