Abstract
New parallel algorithms for local support vector regression (local SVR), called kSVR and krSVR, are proposed in this paper to efficiently handle the prediction task on large datasets. The kSVR learning strategy performs the regression task in two main steps: the first partitions the training data into k clusters; the second learns an SVR model from each cluster, so that the data are predicted locally and in parallel on multi-core computers. The krSVR learning algorithm trains an ensemble of T random kSVR models to improve on the generalization capacity of a single kSVR. A performance analysis in terms of algorithmic complexity and generalization capacity shows that our kSVR and krSVR algorithms are faster than the standard SVR for non-linear regression on large datasets while maintaining high prediction correctness. Numerical results on five large datasets from the UCI repository show that the proposed kSVR and krSVR algorithms are efficient compared to the standard SVR: on average, kSVR and krSVR train 183.5 and 43.3 times faster than the standard SVR, respectively, and improve the relative prediction correctness over the standard SVR by 62.10% and 63.70%.
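To make the two-step strategy concrete, here is a minimal sketch of the kSVR/krSVR idea using scikit-learn and joblib as stand-ins for the authors' LIBSVM/OpenMP implementation. The function names, the cluster count k, the SVR hyper-parameters, and the random-sampling scheme used for krSVR are illustrative assumptions, not the paper's actual code or settings.

```python
# kSVR/krSVR sketch: k-means partition + one local SVR per cluster, in parallel.
import numpy as np
from joblib import Parallel, delayed
from sklearn.cluster import KMeans
from sklearn.svm import SVR

def train_ksvr(X, y, k=10, n_jobs=-1, seed=0, **svr_params):
    """Step 1: partition X into k clusters; step 2: fit one SVR per cluster in parallel."""
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X)

    def fit_local(c):
        mask = km.labels_ == c
        return SVR(kernel='rbf', **svr_params).fit(X[mask], y[mask])

    models = Parallel(n_jobs=n_jobs)(delayed(fit_local)(c) for c in range(k))
    return km, models

def predict_ksvr(km, models, X):
    """Predict each query point with the local SVR of its nearest cluster centroid."""
    labels = km.predict(X)
    y_hat = np.empty(len(X))
    for c, model in enumerate(models):
        mask = labels == c
        if mask.any():
            y_hat[mask] = model.predict(X[mask])
    return y_hat

def train_krsvr(X, y, T=20, sample_frac=0.7, k=10, seed=0, **svr_params):
    """krSVR: an ensemble of T kSVR models, each trained on a random subsample
    (the subsampling fraction here is an assumption for illustration)."""
    rng = np.random.default_rng(seed)
    ensemble = []
    for t in range(T):
        idx = rng.choice(len(X), size=int(sample_frac * len(X)), replace=False)
        ensemble.append(train_ksvr(X[idx], y[idx], k=k, seed=seed + t, **svr_params))
    return ensemble

def predict_krsvr(ensemble, X):
    """Average the predictions of the T kSVR models."""
    return np.mean([predict_ksvr(km, ms, X) for km, ms in ensemble], axis=0)
```

Usage follows directly: `y_pred = predict_ksvr(*train_ksvr(X_train, y_train, k=10, C=1e3), X_test)`. Because each local SVR sees only about n/k points and the k fits are independent, the expensive quadratic-programming work both shrinks and parallelizes, which is the source of the speed-ups reported above.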
Notes
- 1. Note that the stated complexity of the kSVR approach does not include the k-means clustering used to partition the full dataset; however, this step requires negligible time compared with solving the quadratic programming problem.
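A rough cost comparison makes this claim plausible (our own back-of-envelope estimate, under the common assumption that the SVR quadratic program scales at least quadratically in the number of training points $n$):

$$
\underbrace{O(n\,k\,d\,i)}_{k\text{-means, linear in } n}
\;\ll\;
\underbrace{O(n^{2})}_{\text{global SVR QP}},
\qquad
\underbrace{k \cdot O\!\big((n/k)^{2}\big) = O\!\big(n^{2}/k\big)}_{k \text{ local SVR QPs}},
$$

where $d$ is the dimensionality and $i$ the number of k-means iterations. Clustering is therefore asymptotically negligible next to the quadratic program, and training k local models cuts the QP cost by a factor of about $k$, further divided by the number of cores when the local models are trained in parallel.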