Improved Sublinear Primal-Dual Algorithm for Support Vector Machines

  • Ming Gu
  • Shizhong Liao
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11062)

Abstract

The sublinear primal-dual algorithm (SUPDA) is a well-established sublinear-time algorithm. However, SUPDA performs the primal step in every iteration, which is unnecessary because its overall regret is dominated by the dual step. To improve the efficiency of SUPDA, we propose an improved SUPDA (ISUPDA) and apply it to linear support vector machines, yielding an improved sublinear primal-dual algorithm for linear support vector machines (ISUPDA-SVM). Unlike SUPDA, which conducts the primal step in every iteration, ISUPDA executes the primal step only with a certain probability at each iteration, which reduces the time complexity of SUPDA. We prove that the expected regret of ISUPDA is still dominated by the dual step, and hence ISUPDA retains the convergence guarantee. To apply ISUPDA to linear support vector machines, we further convert them into a saddle-point form, and we provide theoretical guarantees on both the solution quality and the efficiency of ISUPDA-SVM. Comparison experiments on multiple datasets demonstrate that ISUPDA outperforms SUPDA and that ISUPDA-SVM is an efficient algorithm for linear support vector machines.
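The core modification described above, running the dual step in every iteration while executing the primal step only with some probability, can be illustrated with a short sketch. The following Python snippet is a minimal, hypothetical illustration and not the paper's exact ISUPDA-SVM: the function name isupda_svm_sketch, the step size eta, the primal-step probability q, and the use of exact margins in place of the sublinear sampling-based estimators are all assumptions made for readability.

    import numpy as np

    def isupda_svm_sketch(X, y, T=1000, q=0.5, eta=0.1, seed=0):
        # Simplified, hypothetical primal-dual loop: the dual step runs every
        # iteration, the primal step runs with probability q (q = 1 corresponds
        # to a primal step in every iteration, as in SUPDA). Exact margins are
        # used instead of the sublinear sampling-based estimators.
        rng = np.random.default_rng(seed)
        n, d = X.shape
        w = np.zeros(d)          # primal iterate, kept inside the unit ball
        log_p = np.zeros(n)      # log-weights of the dual distribution over examples
        w_sum = np.zeros(d)

        for _ in range(T):
            # Dual step (every iteration): multiplicative-weights update that
            # shifts mass toward examples with small margins.
            margins = y * (X @ w)
            log_p -= eta * margins
            p = np.exp(log_p - log_p.max())
            p /= p.sum()

            # Primal step (only with probability q): a projected gradient step
            # on an example sampled from the dual distribution. Rescaling by 1/q
            # is one natural way to keep the expected update unchanged.
            if rng.random() < q:
                i = rng.choice(n, p=p)
                w = w + (eta / q) * y[i] * X[i]
                norm = np.linalg.norm(w)
                if norm > 1.0:
                    w /= norm

            w_sum += w

        return w_sum / T         # averaged primal solution

    # Toy usage: two Gaussian blobs with labels in {-1, +1}.
    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        X = np.vstack([rng.normal(1.0, 1.0, (50, 5)), rng.normal(-1.0, 1.0, (50, 5))])
        y = np.concatenate([np.ones(50), -np.ones(50)])
        w = isupda_svm_sketch(X, y, T=500, q=0.5)
        print("training accuracy:", np.mean(np.sign(X @ w) == y))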

Keywords

Sublinear primal-dual algorithm · Regret analysis · Randomized algorithm · Linear support vector machines

Acknowledgments

The work was supported in part by the National Natural Science Foundation of China under grant No. 61673293.


Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  1. School of Computer Science and Technology, Tianjin University, Tianjin, China
