Weighted Coordinate-Wise Pegasos
Pegasos is a popular and reliable machine learning algorithm for training linear Support Vector Machines at large scale. It benefits from a strongly convex optimization objective, fast convergence rates, and low computational and memory costs. In this paper we devise a new weighted formulation of the Pegasos algorithm that benefits from distinct coordinate-wise regularization parameters λ_i. Together with the proposed extension, we give a brief theoretical justification of its convergence to an optimal solution and briefly analyze its computational costs. We conclude the paper with numerical results obtained on UCI datasets and demonstrate the merits of our approach for achieving better classification accuracy and convergence rates in the partially or fully stochastic setting.
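To make the coordinate-wise idea concrete, the following is a minimal sketch of a Pegasos-style stochastic sub-gradient solver in which the scalar regularization parameter λ is replaced by a vector of per-coordinate parameters λ_i (so the step size becomes coordinate-wise as well). This is an illustrative assumption-laden sketch, not the paper's exact update rule or projection step.

```python
import numpy as np

def weighted_pegasos(X, y, lam, n_iters=2000, seed=0):
    """Stochastic sub-gradient Pegasos with a per-coordinate
    regularization vector `lam` (a sketch; the paper's exact
    update and optional projection step may differ)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for t in range(1, n_iters + 1):
        i = rng.integers(n)
        # Coordinate-wise step sizes eta_j = 1 / (lam_j * t),
        # mirroring the classical Pegasos choice eta = 1 / (lam * t).
        eta = 1.0 / (lam * t)
        if y[i] * (X[i] @ w) < 1.0:
            # Hinge loss is violated: shrink and take a sub-gradient step.
            w = (1.0 - eta * lam) * w + eta * y[i] * X[i]
        else:
            # Only the regularization term contributes.
            w = (1.0 - eta * lam) * w
    return w

# Toy usage on a linearly separable 2-D problem (hypothetical data).
X = np.array([[2.0, 0.0], [1.5, 0.5], [-2.0, 0.0], [-1.0, -1.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
lam = np.array([0.1, 1.0])  # weaker regularization on the first coordinate
w = weighted_pegasos(X, y, lam)
```

Note that with `eta = 1 / (lam * t)` the shrinkage factor `1 - eta * lam` reduces to the familiar `1 - 1/t` of standard Pegasos, while the sub-gradient step is scaled per coordinate by `1 / (lam_j * t)`.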
Keywords: Support Vector Machine · Relevance Vector Machine · Linear Support Vector Machine · Hinge Loss · Projection Step