Incremental and Decremental Learning for Linear Support Vector Machines

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 4668)

Abstract

We present a method to find the exact maximal-margin hyperplane for linear Support Vector Machines when a new component is added to, or an existing component is removed from, the inner product. The maximal-margin hyperplane under the new inner product is obtained in terms of the one for the old inner product, without re-computing it from scratch, and the procedure is reversible. We also present an algorithm implementing the proposed method that avoids matrix inversions from scratch. Possible applications include feature selection and the design of kernels from similarity measures.
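
A minimal sketch of the linear-algebra idea behind such updates follows. This is not the authors' algorithm; it only illustrates, with assumed names and an assumed small ridge term added for invertibility, why adding or removing one feature need not trigger a full re-computation: with a linear kernel, the Gram matrix changes by a rank-one term, so inverse-dependent quantities can be refreshed with the Sherman-Morrison formula, and the update is reversible.

```python
import numpy as np

def sherman_morrison_update(A_inv, v, sign=+1.0):
    """Return (A + sign * v v^T)^{-1} given A^{-1}, in O(n^2) time."""
    Av = A_inv @ v
    denom = 1.0 + sign * (v @ Av)
    return A_inv - sign * np.outer(Av, Av) / denom

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 6))        # toy data: 50 points, 6 features
ridge = 0.1                         # assumed ridge term; keeps the Gram matrix invertible

# Linear-kernel Gram matrix built from the first 5 features only.
K_old = X[:, :5] @ X[:, :5].T + ridge * np.eye(50)
K_old_inv = np.linalg.inv(K_old)

# Incremental step: adding feature 5 to the inner product is a rank-one update.
v = X[:, 5]
K_new_inv = sherman_morrison_update(K_old_inv, v, sign=+1.0)
print(np.allclose(K_new_inv, np.linalg.inv(X @ X.T + ridge * np.eye(50))))  # True

# Decremental step: removing the same feature recovers the old inverse.
K_back_inv = sherman_morrison_update(K_new_inv, v, sign=-1.0)
print(np.allclose(K_back_inv, K_old_inv))                                   # True
```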




Editor information

Joaquim Marques de Sá, Luís A. Alexandre, Włodzisław Duch, Danilo Mandic


Copyright information

© 2007 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Romero, E., Barrio, I., Belanche, L. (2007). Incremental and Decremental Learning for Linear Support Vector Machines. In: de Sá, J.M., Alexandre, L.A., Duch, W., Mandic, D. (eds) Artificial Neural Networks – ICANN 2007. ICANN 2007. Lecture Notes in Computer Science, vol 4668. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-74690-4_22

  • DOI: https://doi.org/10.1007/978-3-540-74690-4_22

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-74689-8

  • Online ISBN: 978-3-540-74690-4

  • eBook Packages: Computer Science, Computer Science (R0)
