On the Design of a Hardware-Software Architecture for Acceleration of SVM’s Training Phase

  • Lázaro Bustio-Martínez
  • René Cumplido
  • José Hernández-Palancar
  • Claudia Feregrino-Uribe
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6256)

Abstract

Support Vector Machines (SVMs) are a family of Machine Learning techniques that have been applied in many areas with remarkable results. Since SVM training scales quadratically (or worse) with data size, it is worthwhile to explore novel implementation approaches to speed up the execution of this type of algorithm. In this paper, a hardware-software architecture to accelerate the SVM training phase is proposed. The algorithm selected to implement the architecture is Sequential Minimal Optimization (SMO), which was partitioned so that a General Purpose Processor (GPP) executes operations and control flow while the coprocessor executes tasks that can be performed in parallel. Experiments demonstrate that the proposed architecture can speed up the SVM training phase by a factor of 178.7 compared with a software-only implementation of the algorithm.
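The abstract does not include the algorithm's details, but the flavor of the SMO training loop the architecture partitions can be illustrated with the well-known simplified SMO variant (after Platt's report cited below). This is a minimal software sketch, not the paper's hardware-software implementation: the kernel evaluations and error computations (the dot products over all training samples) are the data-parallel work that a coprocessor would offload, while the branching update logic stays on the GPP. All function and variable names here are illustrative.

```python
import random
import numpy as np

def simplified_smo(X, y, C=1.0, tol=1e-3, max_passes=20):
    """Simplified SMO for a linear-kernel SVM.

    X: (n, d) training data; y: labels in {-1, +1}.
    Returns the dual variables `alphas` and the bias `b`.
    The kernel matrix products below are the parallelizable part.
    """
    n = X.shape[0]
    K = X @ X.T                     # linear kernel matrix (parallel-friendly)
    alphas = np.zeros(n)
    b = 0.0
    passes = 0
    while passes < max_passes:
        changed = 0
        for i in range(n):
            # Prediction error for sample i (a full dot product over samples)
            Ei = (alphas * y) @ K[:, i] + b - y[i]
            if (y[i] * Ei < -tol and alphas[i] < C) or \
               (y[i] * Ei > tol and alphas[i] > 0):
                # Pick a second multiplier at random (the simplified heuristic)
                j = random.choice([k for k in range(n) if k != i])
                Ej = (alphas * y) @ K[:, j] + b - y[j]
                ai_old, aj_old = alphas[i], alphas[j]
                # Bounds keeping both multipliers inside the box [0, C]
                if y[i] != y[j]:
                    L, H = max(0.0, aj_old - ai_old), min(C, C + aj_old - ai_old)
                else:
                    L, H = max(0.0, ai_old + aj_old - C), min(C, ai_old + aj_old)
                if L == H:
                    continue
                eta = 2 * K[i, j] - K[i, i] - K[j, j]
                if eta >= 0:
                    continue
                # Analytic update of the two chosen multipliers
                alphas[j] = float(np.clip(aj_old - y[j] * (Ei - Ej) / eta, L, H))
                if abs(alphas[j] - aj_old) < 1e-5:
                    continue
                alphas[i] = ai_old + y[i] * y[j] * (aj_old - alphas[j])
                # Recompute the bias so KKT conditions hold for i and j
                b1 = b - Ei - y[i] * (alphas[i] - ai_old) * K[i, i] \
                           - y[j] * (alphas[j] - aj_old) * K[i, j]
                b2 = b - Ej - y[i] * (alphas[i] - ai_old) * K[i, j] \
                           - y[j] * (alphas[j] - aj_old) * K[j, j]
                if 0 < alphas[i] < C:
                    b = b1
                elif 0 < alphas[j] < C:
                    b = b2
                else:
                    b = (b1 + b2) / 2
                changed += 1
        passes = passes + 1 if changed == 0 else 0
    return alphas, b
```

On a tiny linearly separable set, the recovered weight vector `w = sum(alpha_i * y_i * x_i)` separates the classes; in the paper's architecture it is precisely these per-sample kernel and error evaluations that are mapped to parallel hardware.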

Keywords

SVM, SMO, FPGA, Parallel hardware-software architectures


Copyright information

© Springer-Verlag Berlin Heidelberg 2010

Authors and Affiliations

  • Lázaro Bustio-Martínez (1, 2)
  • René Cumplido (2)
  • José Hernández-Palancar (1)
  • Claudia Feregrino-Uribe (2)

  1. Advanced Technologies Application Center, Havana, Cuba
  2. National Institute for Astrophysics, Optics and Electronics, Tonantzintla, México
