Abstract
In this paper we propose a framework for solving multiclass problems with Support Vector Machines (SVM) and examine its performance. Our method is based on the binary-tree principle, which leads to much faster convergence; we compare it with popular methods from the literature, both in terms of the computational cost of the test phase and of classification accuracy. The proposed paradigm builds a binary tree for multiclass SVM by partitioning the classes according to two natural criteria, separation and homogeneity, with the aim of obtaining an optimal tree. The main result is the mapping of the multiclass problem onto several two-class sub-problems, which eases the resolution of real, complex problems. Our approach is more accurate in the construction of the tree. Furthermore, in the test phase the OVA Tree Multiclass method, owing to its logarithmic complexity, is much faster than other methods on problems with a large number of classes. Two corpora are used to evaluate our framework: the TIMIT dataset for vowel classification and MNIST for the recognition of handwritten digits. A recognition rate of 57 % on the 20 vowels of the TIMIT corpus and of 97.73 % on the 10 digits of MNIST was achieved. These results are comparable with the state of the art. In addition, training time and the number of support vectors, which determine the duration of the test phase, are also reduced compared to other methods.
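The tree-routing idea in the abstract can be sketched in a few lines. In this minimal illustration (a sketch under stated assumptions, not the authors' implementation), the class set is recursively split into two groups and a binary decision rule is placed at each internal node; a nearest-group-centroid rule stands in for the trained binary SVM, and the simple half-split stands in for the paper's separation/homogeneity partitioning criteria. A test sample is then routed down the tree with one binary decision per level, so K classes need only about log2(K) evaluations instead of K one-versus-all scores.

```python
def centroid(points):
    # Mean vector of a list of equal-length feature vectors.
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def dist2(a, b):
    # Squared Euclidean distance (square root not needed for comparison).
    return sum((x - y) ** 2 for x, y in zip(a, b))

def build_tree(data, classes):
    # data: dict mapping class label -> list of feature vectors.
    # Leaf: a single class remains.
    if len(classes) == 1:
        return {"label": classes[0]}
    # Split the class set into two halves; the paper instead chooses the
    # partition by separation/homogeneity criteria (assumption here).
    mid = len(classes) // 2
    left, right = classes[:mid], classes[mid:]
    # A nearest-centroid rule stands in for the binary SVM at this node.
    c_left = centroid([p for c in left for p in data[c]])
    c_right = centroid([p for c in right for p in data[c]])
    return {
        "c_left": c_left, "c_right": c_right,
        "left": build_tree(data, left),
        "right": build_tree(data, right),
    }

def classify(tree, x):
    # Route the sample down the tree: one binary decision per level.
    node = tree
    while "label" not in node:
        go_left = dist2(x, node["c_left"]) <= dist2(x, node["c_right"])
        node = node["left"] if go_left else node["right"]
    return node["label"]

if __name__ == "__main__":
    # Four well-separated 2-D classes.
    data = {
        "a": [[0, 0], [0, 1]],
        "b": [[5, 0], [5, 1]],
        "c": [[0, 9], [1, 9]],
        "d": [[5, 9], [6, 9]],
    }
    tree = build_tree(data, sorted(data))
    print(classify(tree, [0.2, 0.5]))   # near class "a"
    print(classify(tree, [5.5, 8.8]))   # near class "d"
```

Because each internal node discards roughly half of the remaining classes, a balanced tree over 20 vowel classes needs only about 5 binary evaluations per test sample, which is the source of the log-complexity advantage claimed for the test phase.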
References
Boser B, Guyon I, Vapnik V (1992) A training algorithm for optimal margin classifiers. In: Proceedings of the 5th annual workshop on computational learning theory. ACM Press, Pittsburgh, pp 144–152
Cha SH, Tappert C (2009) A genetic algorithm for constructing compact binary decision trees. J Pattern Recogn Res 4(1):1–13
Chang C-C, Lin C-J (2013) LIBSVM toolkit: a library for support vector machines. Software available at: http://www.csie.ntu.edu.tw/cjlin/libsvm/
Erdogan H (2005) Regularizing linear discriminant analysis for speech recognition. Interspeech’2005, Lisbon, Portugal (4–8 Sep 2005)
Fei B, Liu J (2006) Binary tree of SVM: a new fast multiclass training and classification algorithm. IEEE Trans Neural Netw 17(3):696–704
Furui S (1986) Speaker-independent isolated word recognition using dynamic features of speech spectrum. IEEE Trans Acoust Speech Signal Process 34:52–59
Choueiter GF, Glass JR (2005) A wavelet and filter bank framework for phonetic classification. In: Proceedings of ICASSP
Graves A, Schmidhuber J (2005) Framewise phoneme classification with bidirectional LSTM networks. In: Proceedings of IJCNN, vol 4, pp 2047–2052
Guyon I, Boser B, Vapnik V (1993) Automatic capacity tuning of very large VC-dimension classifiers. Adv Neural Inf Process Sys 5:147
Hong D (2007) Speech recognition technology: moving beyond customer service. Comput Bus (Online 1st Mar 2007)
Hsu CW, Lin CJ (2002) A comparison of methods for multiclass support vector machines. IEEE Trans Neural Netw 13(2):415–425
Jain AK, Dubes R (1988) Algorithms for clustering data. Prentice Hall, NJ
Joachims T (2001) Making large-scale SVM learning practical. Software available at: http://svmlight.joachims.org/
Joachims T (1998) Making large-scale support vector machine learning practical. In: Schölkopf B, Burges C, Smola A (eds) Advances in kernel methods. MIT Press, Cambridge
Kamal M, Mark H-J (2003) Non-linear independent component analysis for speech recognition. International conference on computer communication and control technologies, Orlando
Lee K-F, Hon H-W (1989) Speaker-independent phone recognition using Hidden Markov Models. IEEE Trans Acoust Speech Signal Process 37(11):1641–1648
Lei H, Govindaraju V (2005) Half-against-half multi-class support vector machines. In: Oza NC, Polikar R, Kittler J, Roli F (eds) Multiple classifier system, vol 3541. Springer, Berlin, pp 156–164
LibCVM toolkit: an implementation of the improved core vector machine (CVM), a fast support vector machine (SVM) training algorithm using core-set approximation on very large scale data sets. Available at: http://c2inet.sce.ntu.edu.sg/ivor/cvm.html
Madzarov G, Gjorgjevikj D, Chorbev I (2009) A multi-class SVM classifier utilizing binary decision tree. Informatica 33:233–241
Moreno P (1999) On the use of support vector machines for phonetic classification. In: Proceedings of ICCASP, vol 2. Phoenix, AZ, pp 585–588
Morris J, Fosler-Lussier E (2006) Discriminative phonetic recognition with conditional random fields. HLTNAACL
Naomi H, Saeed V, Paul MC (1998) A novel model for phoneme recognition using phonetically derived features. In: Proceeding EUSIPCO
Osuna E et al (1997) Training support vector machines: an application to face detection. In: Proceedings of the IEEE computer society conference on computer vision and pattern recognition, pp 130–136
Platt J (1999) Fast training of support vector machines using sequential minimal optimization. In: Schölkopf B, Burges CJC, Smola AJ (eds) Advances in kernel methods: support vector learning
Platt JC, Cristianini N, Shawe-Taylor J (2000) Large margin DAGs for multiclass classification. Adv Neural Inf Process Sys 12:547–553
Rifkin R et al (2007) Noise robust phonetic classification with linear regularized least squares and second order features. In: Proceedings of ICASSP
Rifkin R, Klautau A (2004) In defense of one-vs-all classification. J Mach Learn Res 5:101–141
Salomon J, King S, Osborne M (2002) Framewise phone classification using support vector machines. ICSLP
Schölkopf B, Smola AJ (2002) Learning with kernels. MIT Press, Cambridge
Sha F et al (2007) Comparison of large margin training to other discriminative methods for phonetic recognition by hidden Markov models. In: IEEE international conference on acoustics, speech and signal processing, vol 4. Honolulu, USA
Sha F, Saul LK (2006) Large margin hidden Markov models for automatic speech recognition. In: Proceedings of NIPS
Sidaoui B, Sadouni K (2013) Approach multiclass SVM utilizing genetic algorithms. IMECS 2013 conference, Hong Kong
Slaney M (1998) Auditory toolbox version 2. Tech. Report #010, Interval Research Corporation
The MNIST database of handwritten digits: a training set of 60,000 examples and a test set of 10,000 examples, with digits size-normalized and centred in a fixed-size image. Available at: http://yann.lecun.com/exdb/mnist/
Vapnik V (1982) Estimation of dependences based on empirical data. Springer, NY. (Russian original: Nauka, Moscow, 1979)
Vapnik V (2000) The nature of statistical learning theory. Springer, NY
Vapnik V (1998) Statistical learning theory. Wiley, NY
Vapnik V, Chervonenkis AJ (1974) Theory of pattern recognition. Nauka, Moscow. (Theorie der Zeichenerkennung, 1979, Akademie-Verlag, Berlin)
Copyright information
© 2014 Springer Science+Business Media Dordrecht
Cite this paper
Sidaoui, B., Sadouni, K. (2014). Efficient Approach One-Versus-All Binary Tree for Multiclass SVM. In: Yang, GC., Ao, SI., Huang, X., Castillo, O. (eds) Transactions on Engineering Technologies. Lecture Notes in Electrical Engineering, vol 275. Springer, Dordrecht. https://doi.org/10.1007/978-94-007-7684-5_15
DOI: https://doi.org/10.1007/978-94-007-7684-5_15
Publisher Name: Springer, Dordrecht
Print ISBN: 978-94-007-7683-8
Online ISBN: 978-94-007-7684-5
eBook Packages: Engineering (R0)