Sparsity Based Feature Extraction for Kernel Minimum Squared Error

  • Conference paper

Part of the book series: Communications in Computer and Information Science (CCIS, volume 483)

Abstract

Kernel minimum squared error (KMSE) is well known for its effectiveness and simplicity, yet its efficiency suffers when the number of training examples is large. In addition, most previous fast algorithms based on KMSE consider only classification problems with balanced data, whereas imbalanced data are common in the real world. In this paper, we propose a weighted model based on sparsity for feature selection in KMSE. With this model, the computational burden of feature extraction is largely alleviated. Moreover, the model can cope with the class imbalance problem. Experimental results on several benchmark datasets demonstrate the effectiveness and efficiency of our method.
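
To make the abstract's idea concrete, here is a minimal sketch (not the authors' exact algorithm) of a sparsity-regularised, class-weighted KMSE objective: the discriminant vector is fit by weighted least squares on the kernel matrix with an L1 penalty, so that only a few training samples receive nonzero coefficients and feature extraction for new data touches only those samples. The RBF kernel, inverse-frequency class weights, and ISTA solver below are illustrative assumptions, not details taken from the paper.

```python
# Sketch of a weighted, L1-regularised KMSE objective solved by ISTA:
#   min_a  (1/2) * sum_i w_i * (y_i - K[i, :] @ a)^2  +  lam * ||a||_1
# K is the kernel (Gram) matrix, y holds +/-1 class labels, and w_i
# up-weights the minority class to counter class imbalance. The nonzero
# entries of `a` select the training samples used for fast extraction.

import numpy as np

def rbf_kernel(X, Z, gamma=0.5):
    """Gaussian (RBF) kernel matrix between the rows of X and Z."""
    sq = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def soft_threshold(v, t):
    """Proximal operator of the L1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_weighted_kmse(K, y, w, lam=1e-2, n_iter=500):
    """ISTA for the weighted L1-regularised least-squares problem above."""
    # Lipschitz constant of the smooth part's gradient: ||K^T W K||_2
    L = np.linalg.norm(K.T @ np.diag(w) @ K, 2)
    a = np.zeros(K.shape[1])
    for _ in range(n_iter):
        grad = K.T @ (w * (K @ a - y))        # gradient of the weighted loss
        a = soft_threshold(a - grad / L, lam / L)
    return a

# Toy imbalanced two-class problem: 90 negatives, 10 positives.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (90, 2)), rng.normal(3, 1, (10, 2))])
y = np.hstack([-np.ones(90), np.ones(10)])
w = np.where(y > 0, len(y) / (2 * 10), len(y) / (2 * 90))  # inverse-frequency weights

K = rbf_kernel(X, X)
a = sparse_weighted_kmse(K, y, w, lam=1.0)
nodes = np.flatnonzero(a)          # selected "significant" training samples
print(f"{len(nodes)} of {len(y)} samples kept:", nodes)

# New data are projected using only the selected samples, which is what
# reduces the feature-extraction cost at test time:
X_new = rng.normal(1.5, 1, (5, 2))
features = rbf_kernel(X_new, X[nodes]) @ a[nodes]
```

In this sketch the sparsity level is controlled by `lam`: a larger penalty keeps fewer training samples, trading some accuracy for cheaper feature extraction, which is the trade-off the paper's experiments evaluate.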

Copyright information

© 2014 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Jiang, J., Chen, X., Gan, H., Sang, N. (2014). Sparsity Based Feature Extraction for Kernel Minimum Squared Error. In: Li, S., Liu, C., Wang, Y. (eds) Pattern Recognition. CCPR 2014. Communications in Computer and Information Science, vol 483. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-662-45646-0_28

  • DOI: https://doi.org/10.1007/978-3-662-45646-0_28

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-662-45645-3

  • Online ISBN: 978-3-662-45646-0

  • eBook Packages: Computer Science, Computer Science (R0)
