
An Efficient Many-Core Implementation for Semi-Supervised Support Vector Machines

  • Conference paper

Machine Learning, Optimization, and Big Data (MOD 2015)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 9432)

Abstract

The concept of semi-supervised support vector machines extends classical support vector machines to learning scenarios in which both labeled and unlabeled patterns are given. In recent years, such semi-supervised extensions have gained considerable attention due to their huge potential for real-world applications with only small amounts of labeled data. While appealing from a practical point of view, semi-supervised support vector machines lead to a combinatorial optimization problem that is difficult to address. Many optimization approaches have been proposed that aim at tackling this task. However, the computational requirements can still be very high, especially when large data sets are considered and many model parameters need to be tuned. A recent trend in the field of big data analytics is to make use of graphics processing units to speed up computationally intensive tasks. In this work, such a massively-parallel implementation is developed for semi-supervised support vector machines. The experimental evaluation, conducted on commodity hardware, shows that speed-ups of up to two orders of magnitude can be achieved over a standard single-core CPU execution.
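
As a hedged illustration of the kind of computation that benefits from such GPU offloading, the sketch below evaluates a dense RBF kernel matrix with CuPy when a GPU is available and falls back to NumPy otherwise. It is a minimal sketch of the general idea only, not the author's PyCUDA-based implementation [16]:

```python
import numpy as np

try:
    import cupy as xp  # GPU backend (assumption: CuPy is installed)
except ImportError:
    xp = np            # fall back to single-core NumPy execution

def rbf_kernel_matrix(X, Z, sigma=1.0):
    """Dense RBF kernel matrix K[i, j] = exp(-||x_i - z_j||^2 / (2 sigma^2)).

    The work is pure batched matrix arithmetic, so the same code runs on
    the CPU (NumPy) or on the GPU (CuPy), depending on the backend `xp`.
    """
    X, Z = xp.asarray(X), xp.asarray(Z)
    # Squared distances via ||x - z||^2 = ||x||^2 - 2 x.z + ||z||^2.
    sq = ((X ** 2).sum(axis=1)[:, None]
          - 2.0 * X @ Z.T
          + (Z ** 2).sum(axis=1)[None, :])
    return xp.exp(-sq / (2.0 * sigma ** 2))

# Example: 5,000 patterns with 50 features each (float32 to save memory).
rng = np.random.default_rng(0)
X = rng.standard_normal((5_000, 50)).astype(np.float32)
K = rbf_kernel_matrix(X, X)
```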


Notes

  1. More precisely, we assume \(\sum_{i=1}^{u} \varPhi(\mathbf{x}_{l+i}) = \mathbf{0}\), where \(\varPhi(\mathbf{x}) = k(\mathbf{x}, \cdot)\) is the feature mapping induced by the kernel \(k\). Centering the data can be achieved by adapting the kernel matrices in the preprocessing phase; see, e.g., Schölkopf and Smola [20] and the sketch after these notes.

  2. A linear-time operation on, e.g., \(n = 10000\) elements does not yield sufficient parallelism for a modern GPU with thousands of compute units (see the arithmetic sketch after these notes).

  3. Nocedal and Wright [17] point out that small values for the parameter \(m\) are usually sufficient to achieve a satisfying convergence rate in practice (e.g., \(m = 3\) to \(m = 50\)); see the SciPy sketch after these notes.

  4. Our CPU version is a manually tuned variant of the publicly available code [12] (version 0.1) and is faster than the original version by a factor of two.

  5. The most significant part of the runtime of cpu-qn-s3vm is spent on matrix operations (see below), which are efficiently supported by the NumPy package; we therefore do not expect significant performance gains from, e.g., a pure C implementation. The timing sketch after these notes illustrates this point.

  6. Available at http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/.
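
The following sketches illustrate some of the points raised in the notes above. They are minimal, hedged illustrations under the assumptions stated in the comments, not excerpts from the paper's implementation.

Note 1 assumes that the feature maps of the unlabeled patterns sum to zero, which can be enforced by centering the kernel matrix in the preprocessing phase. A minimal NumPy sketch of this standard construction (following Schölkopf and Smola [20]; the function name and signature are ours):

```python
import numpy as np

def center_kernel(K, unlab):
    """Adapt a kernel matrix so that the implicit feature maps of the
    unlabeled patterns sum to zero, i.e., sum_i Phi(x_{l+i}) = 0.

    K     : (n, n) symmetric kernel matrix over all n patterns
    unlab : indices of the u unlabeled patterns
    """
    # Mean kernel value of each pattern against the unlabeled patterns.
    col_mean = K[:, unlab].mean(axis=1)
    # Overall mean over all unlabeled/unlabeled pairs.
    grand_mean = K[np.ix_(unlab, unlab)].mean()
    # k'(x, y) = k(x, y) - m(x) - m(y) + m, with m(.) the mean against
    # the unlabeled patterns and m the unlabeled/unlabeled mean.
    return K - col_mean[:, None] - col_mean[None, :] + grand_mean
```

Note 2's point about insufficient parallelism can be made concrete with back-of-the-envelope arithmetic (the core count below is illustrative, e.g., a Kepler-class card):

```python
n = 10_000          # elements touched by a single linear-time pass
cuda_cores = 2_688  # e.g., an NVIDIA GTX Titan (Kepler)
# Roughly 3.7 elements per core: kernel launch overhead and memory
# latency dominate, so such small passes gain little from the GPU.
print(n / cuda_cores)
```

For Note 3, the history size \(m\) of L-BFGS corresponds to the `maxcor` option of SciPy's L-BFGS-B solver (the paper's CPU baseline builds on SciPy [15]; the toy objective below is ours, not the S3VM objective):

```python
import numpy as np
from scipy.optimize import minimize

def objective(w):
    # Simple smooth test function: value and gradient of ||w||^2.
    return float(np.dot(w, w)), 2.0 * w

w0 = np.random.default_rng(0).standard_normal(100)
res = minimize(objective, w0, jac=True, method="L-BFGS-B",
               options={"maxcor": 10})  # m = 10 correction pairs
print(res.nit, res.fun)
```

Note 5 rests on the fact that NumPy dispatches dense matrix products to an optimized BLAS, so little time is spent in the Python layer; a quick (machine-dependent) check:

```python
import time
import numpy as np

A = np.random.default_rng(0).standard_normal((2_000, 2_000))
t0 = time.perf_counter()
B = A @ A  # delegated to the underlying BLAS library
print(f"{time.perf_counter() - t0:.3f} s for a 2000x2000 matrix product")
```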

References

  1. Adankon, M., Cheriet, M., Biem, A.: Semisupervised least squares support vector machine. IEEE Trans. Neural Netw. 20(12), 1858–1870 (2009)

  2. Bennett, K.P., Demiriz, A.: Semi-supervised support vector machines. In: Advances in Neural Information Processing Systems, vol. 11, pp. 368–374. MIT Press (1999)

  3. Bie, T.D., Cristianini, N.: Convex methods for transduction. In: Advances in Neural Information Processing Systems, vol. 16, pp. 73–80. MIT Press (2004)

  4. Boser, B.E., Guyon, I.M., Vapnik, V.N.: A training algorithm for optimal margin classifiers. In: Haussler, D. (ed.) Proceedings of the 5th Annual Workshop on Computational Learning Theory, pp. 144–152. ACM, New York (1992)

  5. Catanzaro, B., Sundaram, N., Keutzer, K.: Fast support vector machine training and classification on graphics processors. In: Proceedings of the 25th International Conference on Machine Learning, pp. 104–111. ACM, New York (2008)

  6. Chapelle, O., Schölkopf, B., Zien, A. (eds.): Semi-Supervised Learning. MIT Press, Cambridge (2006)

  7. Chapelle, O., Sindhwani, V., Keerthi, S.S.: Branch and bound for semi-supervised support vector machines. In: Advances in Neural Information Processing Systems, vol. 19, pp. 217–224. MIT Press (2007)

  8. Chapelle, O., Sindhwani, V., Keerthi, S.S.: Optimization techniques for semi-supervised support vector machines. J. Mach. Learn. Res. 9, 203–233 (2008)

  9. Chapelle, O., Zien, A.: Semi-supervised classification by low density separation. In: Proceedings of the 10th International Workshop on Artificial Intelligence and Statistics, pp. 57–64 (2005)

  10. Cheng, J., Grossman, M., McKercher, T.: Professional CUDA C Programming. Wiley, New Jersey (2014)

  11. Coates, A., Huval, B., Wang, T., Wu, D.J., Catanzaro, B.C., Ng, A.Y.: Deep learning with COTS HPC systems. In: Proceedings of the 30th International Conference on Machine Learning, pp. 1337–1345. JMLR.org (2013)

  12. Gieseke, F., Airola, A., Pahikkala, T., Kramer, O.: Fast and simple gradient-based optimization for semi-supervised support vector machines. Neurocomputing 123, 23–32 (2014)

  13. Gieseke, F., Heinermann, J., Oancea, C., Igel, C.: Buffer k-d trees: processing massive nearest neighbor queries on GPUs. In: Proceedings of the 31st International Conference on Machine Learning, JMLR W&CP, vol. 32, pp. 172–180. JMLR.org (2014)

  14. Joachims, T.: Transductive inference for text classification using support vector machines. In: Proceedings of the International Conference on Machine Learning, pp. 200–209 (1999)

  15. Jones, E., Oliphant, T., Peterson, P., et al.: SciPy: open source scientific tools for Python (2001–2015). http://www.scipy.org/

  16. Klöckner, A., Pinto, N., Lee, Y., Catanzaro, B., Ivanov, P., Fasih, A.: PyCUDA and PyOpenCL: a scripting-based approach to GPU run-time code generation. Parallel Comput. 38(3), 157–174 (2012)

  17. Nocedal, J., Wright, S.J.: Numerical Optimization, 1st edn. Springer, New York (2000)

  18. Rifkin, R., Yeo, G., Poggio, T.: Regularized least-squares classification. In: Advances in Learning Theory: Methods, Models and Applications. IOS Press (2003)

  19. Schölkopf, B., Herbrich, R., Smola, A.J.: A generalized representer theorem. In: Helmbold, D.P., Williamson, B. (eds.) COLT 2001 and EuroCOLT 2001. LNCS (LNAI), vol. 2111, pp. 416–426. Springer, Heidelberg (2001)

  20. Schölkopf, B., Smola, A.J.: Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, Cambridge (2001)

  21. Sindhwani, V., Keerthi, S.S.: Large scale semi-supervised linear SVMs. In: Proceedings of the 29th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 477–484. ACM, New York (2006)

  22. Steinwart, I., Christmann, A.: Support Vector Machines. Springer, New York (2008)

  23. Vapnik, V., Sterin, A.: On structural risk minimization or overall risk in a problem of pattern recognition. Autom. Remote Control 10(3), 1495–1503 (1977)

  24. Wen, Z., Zhang, R., Ramamohanarao, K.: Enabling precision/recall preferences for semi-supervised SVM training. In: Proceedings of the 23rd ACM International Conference on Information and Knowledge Management, pp. 421–430. ACM, New York (2014)

  25. Wen, Z., Zhang, R., Ramamohanarao, K., Qi, J., Taylor, K.: MASCOT: fast and highly scalable SVM cross-validation using GPUs and SSDs. In: Proceedings of the 2014 IEEE International Conference on Data Mining (2014)

  26. Xu, L., Schuurmans, D.: Unsupervised and semi-supervised multi-class support vector machines. In: Proceedings of the National Conference on Artificial Intelligence, pp. 904–910 (2005)

  27. Zhang, T., Oles, F.J.: Text categorization based on regularized linear classification methods. Inf. Retr. 4, 5–31 (2001)

  28. Zhu, X., Goldberg, A.B.: Introduction to Semi-Supervised Learning. Morgan & Claypool, San Rafael (2009)


Acknowledgements

The author would like to thank the anonymous reviewers for their careful reading and detailed comments. This work has been supported by the Radboud Excellence Initiative of Radboud University Nijmegen. The author would also like to thank NVIDIA for generous hardware donations.

Author information

Correspondence to Fabian Gieseke.


Copyright information

© 2015 Springer International Publishing Switzerland

About this paper

Cite this paper

Gieseke, F. (2015). An Efficient Many-Core Implementation for Semi-Supervised Support Vector Machines. In: Pardalos, P., Pavone, M., Farinella, G., Cutello, V. (eds) Machine Learning, Optimization, and Big Data. MOD 2015. Lecture Notes in Computer Science, vol. 9432. Springer, Cham. https://doi.org/10.1007/978-3-319-27926-8_13

  • DOI: https://doi.org/10.1007/978-3-319-27926-8_13

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-27925-1

  • Online ISBN: 978-3-319-27926-8
