
Support Vector Machine Ensemble with Bagging

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 2388)

Abstract

Although the support vector machine (SVM) has been shown to provide good generalization performance, the classification results of practically implemented SVMs are often far from the theoretically expected level, because implementations rely on approximate algorithms that trade accuracy for manageable time and space complexity. To improve the limited classification performance of a real SVM, we propose to use SVM ensembles with bagging (bootstrap aggregating). Each individual SVM is trained independently on training samples drawn randomly via a bootstrap technique, and the trained SVMs are then aggregated to make a collective decision in one of several ways: majority voting, LSE (least-squares estimation)-based weighting, or double-layer hierarchical combining. Simulation results on IRIS data classification and hand-written digit recognition show that the proposed SVM ensembles with bagging greatly outperform a single SVM in classification accuracy.
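The bagging scheme described in the abstract can be sketched in a few lines. The following is a minimal illustration, assuming scikit-learn's SVC as the base classifier and majority voting as the combiner on the Iris data; the ensemble size, RBF kernel, and C value are illustrative assumptions, not the authors' configuration, and the paper's other combiners (LSE-based weighting and double-layer hierarchical combining) are not shown.

    # Minimal sketch of a bagged SVM ensemble with majority voting.
    # Assumptions: scikit-learn SVC base learner, 15 members, RBF kernel.
    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=0, stratify=y)

    n_estimators = 15  # assumed ensemble size
    ensemble = []
    for _ in range(n_estimators):
        # Bootstrap: resample the training set with replacement.
        idx = rng.integers(0, len(X_train), size=len(X_train))
        svm = SVC(kernel="rbf", C=1.0)  # assumed kernel and parameters
        svm.fit(X_train[idx], y_train[idx])
        ensemble.append(svm)

    # Majority voting: each member predicts a label; take the per-sample mode.
    votes = np.stack([svm.predict(X_test) for svm in ensemble])
    majority = np.apply_along_axis(
        lambda v: np.bincount(v).argmax(), axis=0, arr=votes)

    print("ensemble accuracy:", (majority == y_test).mean())

An LSE-based combiner would instead fit weights over the members' outputs on the training set, and a hierarchical combiner would feed the members' decisions into a second-layer SVM; both reuse the bootstrap training loop above.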







Copyright information

© 2002 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Kim, HC., Pang, S., Je, HM., Kim, D., Bang, SY. (2002). Support Vector Machine Ensemble with Bagging. In: Lee, SW., Verri, A. (eds) Pattern Recognition with Support Vector Machines. SVM 2002. Lecture Notes in Computer Science, vol 2388. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-45665-1_31


  • DOI: https://doi.org/10.1007/3-540-45665-1_31


  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-44016-1

  • Online ISBN: 978-3-540-45665-0

  • eBook Packages: Springer Book Archive
