
Efficient Approach One-Versus-All Binary Tree for Multiclass SVM

  • Conference paper
Transactions on Engineering Technologies

Part of the book series: Lecture Notes in Electrical Engineering (LNEE, volume 275)


Abstract

In this paper we propose, and examine the performance of, a framework for solving multiclass problems with Support Vector Machines (SVM). Our method is based on the binary-tree principle, which leads to much faster convergence; we compare it with popular methods from the literature, both in terms of the computational cost of the feedforward phase and in terms of classification accuracy. The proposed paradigm builds a binary tree for multiclass SVM using a partitioning technique driven by two natural classification criteria, separation and homogeneity, with the aim of obtaining an optimal tree. The main result is the mapping of the multiclass problem onto several two-class sub-problems, which eases the resolution of real, complex problems. Our approach is more accurate in the construction of the tree. Furthermore, in the test phase the OVA Tree Multiclass method, owing to its logarithmic complexity, is much faster than other methods on problems with a large number of classes. Two corpora are used to evaluate our framework: the TIMIT dataset for vowel classification and MNIST for handwritten-digit recognition. Recognition rates of 57 % on the 20 vowels of the TIMIT corpus and 97.73 % on the 10 digits of MNIST were achieved; these results are comparable with the state of the art. In addition, training time and the number of support vectors, which determine the duration of the tests, are reduced compared to other methods.
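The tree construction and the logarithmic test phase described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' algorithm: a nearest-group-centroid rule stands in for the binary SVM trained at each internal node, and a most-separated-pair split stands in for the paper's separation and homogeneity criteria.

```python
import math

def centroid(points):
    # Component-wise mean of a list of feature vectors.
    dim = len(points[0])
    return [sum(p[i] for p in points) / len(points) for i in range(dim)]

def build_tree(data, classes):
    """data: dict mapping class label -> list of feature vectors.
    Recursively splits the class set in two until each leaf holds one class."""
    if len(classes) == 1:
        return {"leaf": classes[0]}
    cents = {c: centroid(data[c]) for c in classes}
    # Seed the two groups with the most separated pair of class centroids,
    # then assign every class to the nearer seed (split criterion stand-in).
    a, b = max(((p, q) for p in classes for q in classes if p != q),
               key=lambda pq: math.dist(cents[pq[0]], cents[pq[1]]))
    g0 = [c for c in classes
          if math.dist(cents[c], cents[a]) <= math.dist(cents[c], cents[b])]
    g1 = [c for c in classes if c not in g0]
    return {
        # In the paper a binary SVM is trained here; a group centroid
        # stands in for that decision function in this sketch.
        "c0": centroid([v for c in g0 for v in data[c]]),
        "c1": centroid([v for c in g1 for v in data[c]]),
        "left": build_tree(data, g0),
        "right": build_tree(data, g1),
    }

def classify(node, x):
    # Descend the tree: about log2(K) binary decisions for K balanced
    # classes, which is the source of the log-complexity test phase.
    while "leaf" not in node:
        nearer_g0 = math.dist(x, node["c0"]) <= math.dist(x, node["c1"])
        node = node["left"] if nearer_g0 else node["right"]
    return node["leaf"]
```

With 4 classes this yields 2 binary decisions per test sample instead of the 4 evaluations a flat one-versus-all scheme would need, which is the advantage the abstract claims for problems with many classes.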



Author information


Correspondence to Boutkhil Sidaoui.


Copyright information

© 2014 Springer Science+Business Media Dordrecht

About this paper

Cite this paper

Sidaoui, B., Sadouni, K. (2014). Efficient Approach One-Versus-All Binary Tree for Multiclass SVM. In: Yang, GC., Ao, SI., Huang, X., Castillo, O. (eds) Transactions on Engineering Technologies. Lecture Notes in Electrical Engineering, vol 275. Springer, Dordrecht. https://doi.org/10.1007/978-94-007-7684-5_15

  • DOI: https://doi.org/10.1007/978-94-007-7684-5_15

  • Publisher Name: Springer, Dordrecht

  • Print ISBN: 978-94-007-7683-8

  • Online ISBN: 978-94-007-7684-5

  • eBook Packages: Engineering, Engineering (R0)
