
Using Kernel Basis with Relevance Vector Machine for Feature Selection

  • Conference paper
Artificial Neural Networks – ICANN 2009 (ICANN 2009)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 5769)


Abstract

This paper presents an application of multiple kernels, in the form of a Kernel Basis, to the Relevance Vector Machine (RVM) algorithm. Within the framework of kernel machines, much work has addressed combining several kernels to build the solution. Among these approaches, Kernel Basis is able to combine both local and global kernels. The interest of such an approach lies in its ability to handle a wide range of model selection tasks, for example feature selection. We propose here an application of RVM-KB to a feature selection problem, in which the data are decomposed into a set of kernels so that each point of the learning set corresponds to a single feature of one sample. The main features are then selected through the selection of the relevance vectors.
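To make the pipeline described in the abstract more concrete, the following is a minimal sketch, not the authors' implementation: it builds one Gaussian-kernel column per (training sample, feature) pair and fits a sparse Bayesian linear model, using scikit-learn's ARDRegression as a stand-in for the RVM's sparse Bayesian estimation. The kernel width gamma, the threshold tol, and the helper names are illustrative assumptions.

```python
# Sketch of the RVM-KB feature-selection idea (assumptions noted above):
# one kernel column per (training point, feature) pair, a sparse Bayesian
# fit, then features read off the columns that keep non-negligible weight.
import numpy as np
from sklearn.linear_model import ARDRegression  # stand-in for the RVM fit


def feature_kernel_dictionary(X, gamma=1.0):
    """Build an (n_samples, n_samples * n_features) dictionary whose column
    (i, d) holds the Gaussian kernel between feature d of every sample and
    feature d of sample i."""
    n, d = X.shape
    cols = []
    for j in range(d):                      # one block of n columns per feature
        diff = X[:, [j]] - X[:, [j]].T      # pairwise differences on feature j
        cols.append(np.exp(-gamma * diff ** 2))
    return np.hstack(cols)                  # shape (n, n * d)


def select_features(X, y, gamma=1.0, tol=1e-3):
    n, d = X.shape
    Phi = feature_kernel_dictionary(X, gamma)
    model = ARDRegression().fit(Phi, y)     # sparse weights ~ relevance vectors
    w = np.abs(model.coef_).reshape(d, n)   # row j <-> kernel block of feature j
    scores = w.sum(axis=1)                  # weight mass assigned to each feature
    return np.where(scores > tol * scores.max())[0], scores


# Toy usage: only features 0 and 1 carry signal.
rng = np.random.default_rng(0)
X = rng.normal(size=(80, 5))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.05 * rng.normal(size=80)
selected, scores = select_features(X, y)
print("selected features:", selected)
```

On this toy data, the kernel columns built from features 0 and 1 should receive most of the posterior weight, mirroring how, in the paper's setting, selecting relevance vectors translates into selecting features.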





Copyright information

© 2009 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Suard, F., Mercier, D. (2009). Using Kernel Basis with Relevance Vector Machine for Feature Selection. In: Alippi, C., Polycarpou, M., Panayiotou, C., Ellinas, G. (eds) Artificial Neural Networks – ICANN 2009. ICANN 2009. Lecture Notes in Computer Science, vol 5769. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-04277-5_26


  • DOI: https://doi.org/10.1007/978-3-642-04277-5_26

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-04276-8

  • Online ISBN: 978-3-642-04277-5

  • eBook Packages: Computer Science, Computer Science (R0)
