Nonlinear Feature Selection by Relevance Feature Vector Machine

  • Conference paper
Machine Learning and Data Mining in Pattern Recognition (MLDM 2007)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 4571)

Abstract

The support vector machine (SVM) has recently received much attention in feature selection because of its ability to incorporate kernels to discover nonlinear dependencies among features. However, it is known that the number of support vectors required by an SVM typically grows linearly with the size of the training data set. This limitation of the SVM becomes more critical when we need to select a small subset of relevant features from a very large number of candidates. To address this issue, this paper proposes a novel algorithm, called the ‘relevance feature vector machine’ (RFVM), for nonlinear feature selection. The RFVM algorithm utilizes a highly sparse learning algorithm, the relevance vector machine (RVM), and incorporates kernels to extract important features with both linear and nonlinear relationships. As a result, our proposed approach reduces false alarms, i.e., the inclusion of irrelevant features, while still maintaining good selection performance. In our experiments we compare the performance of RFVM with other state-of-the-art nonlinear feature selection algorithms. The results confirm our conclusions.

The work was performed when the first author worked as a summer intern at NEC Laboratories America, Inc.
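The sparsity mechanism the abstract relies on can be illustrated with a minimal sketch. This is not the authors' RFVM: it uses scikit-learn's `ARDRegression`, whose automatic-relevance-determination prior is the same sparse Bayesian device underlying the RVM, applied directly to the feature columns (the paper additionally works in a kernel-induced space to capture nonlinear dependencies). The data, threshold, and variable names below are illustrative assumptions.

```python
# Sketch of sparse Bayesian (ARD) feature selection, the mechanism behind
# the RVM. NOT the authors' RFVM: linear features only, no kernels.
import numpy as np
from sklearn.linear_model import ARDRegression

rng = np.random.default_rng(0)
n, d = 200, 20
X = rng.normal(size=(n, d))
# The target depends only on features 0 and 3; the other 18 are irrelevant.
y = 2.0 * X[:, 0] - 3.0 * X[:, 3] + 0.1 * rng.normal(size=n)

model = ARDRegression()      # one precision hyperparameter per weight
model.fit(X, y)

# The ARD prior drives the posterior weights of irrelevant features toward
# zero; features whose weights survive a small threshold are kept.
selected = np.flatnonzero(np.abs(model.coef_) > 0.1)
print(selected)  # the two truly relevant features, [0 3]
```

Because each weight gets its own precision hyperparameter, irrelevant features are pruned individually rather than merely shrunk, which is why the RVM-style model stays far sparser than an SVM whose support-vector count scales with the training set.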



Editor information

Petra Perner


Copyright information

© 2007 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Cheng, H., Chen, H., Jiang, G., Yoshihira, K. (2007). Nonlinear Feature Selection by Relevance Feature Vector Machine. In: Perner, P. (ed.) Machine Learning and Data Mining in Pattern Recognition. MLDM 2007. Lecture Notes in Computer Science (LNAI), vol. 4571. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-73499-4_12

  • DOI: https://doi.org/10.1007/978-3-540-73499-4_12

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-73498-7

  • Online ISBN: 978-3-540-73499-4

  • eBook Packages: Computer Science, Computer Science (R0)
