
Binary Coded Output Support Vector Machine

  • Tao Ye
  • Xuefeng Zhu
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7996)

Abstract

To solve multi-class classification problems on large-scale datasets, we propose a coded output support vector machine (COSVM) that introduces the idea of information coding. The COSVM is built on the support vector regression (SVR) machine and implemented with the sequential minimal optimization (SMO) algorithm. The paper first introduces the basic principles of the soft ε-tube SVR, then presents the idea and procedure of the SMO algorithm, and finally illustrates the COSVM's topology. To study how parameters affect the binary COSVM's performance, we perform two experiments on the Character Trajectories dataset, in which the output labels are coded in the binary number system, and several useful results are obtained. The final section gives conclusions and directions for further research.
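
The binary coding idea can be sketched roughly as follows: each of the K class labels is written as a ⌈log₂K⌉-bit binary string, one ε-SVR is trained per output bit, and a prediction is decoded by rounding each regressed bit back to 0 or 1. The sketch below is a minimal illustration under assumed settings, not the authors' implementation: scikit-learn's SVR (whose LIBSVM backend uses an SMO-type solver) stands in for the paper's own SMO-implemented SVR, and the class name, helper functions, and parameter defaults are hypothetical.

    # Minimal sketch of a binary coded output classifier built from independent
    # epsilon-SVRs. Dataset, kernel, and parameter choices are illustrative
    # assumptions, not the paper's settings.
    import numpy as np
    from sklearn.svm import SVR

    def encode_labels(y, n_bits):
        """Map integer class labels to rows of 0/1 bits (most significant bit first)."""
        return np.array([[(label >> b) & 1 for b in reversed(range(n_bits))] for label in y])

    def decode_bits(bits):
        """Map rows of 0/1 bits back to integer class labels."""
        n_bits = bits.shape[1]
        weights = 2 ** np.arange(n_bits - 1, -1, -1)
        return bits.dot(weights)

    class BinaryCodedOutputSVM:
        """Illustrative binary coded output classifier: one SVR per output bit."""
        def __init__(self, n_classes, C=1.0, epsilon=0.1, gamma="scale"):
            self.n_bits = int(np.ceil(np.log2(n_classes)))
            self.models = [SVR(C=C, epsilon=epsilon, gamma=gamma) for _ in range(self.n_bits)]

        def fit(self, X, y):
            Y_bits = encode_labels(y, self.n_bits)
            for b, model in enumerate(self.models):
                model.fit(X, Y_bits[:, b])   # each SVR regresses one output bit
            return self

        def predict(self, X):
            # Round each regressed bit to {0, 1}, then decode to an integer label.
            bits = np.column_stack([np.clip(np.rint(m.predict(X)), 0, 1) for m in self.models])
            return decode_bits(bits.astype(int))

With the 20-class Character Trajectories dataset used in the paper's experiments, five output bits suffice, so a construction of this form trains five regressors rather than the 20 one-versus-rest or 190 one-versus-one binary machines of conventional decompositions.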

Keywords

Support vector machine (SVM) · Binary coded output · Classification · Regression · Number system


Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Tao Ye (1)
  • Xuefeng Zhu (1)
  1. College of Automation Science and Engineering, South China University of Technology, Guangzhou, P.R. China