Empirical Study of Matrix Factorization Methods for Collaborative Filtering
Matrix factorization methods have proved highly effective in collaborative filtering tasks. One of the most widely used approaches is regularized empirical risk minimization with the squared error loss function and L2 regularization, optimized via stochastic gradient descent (SGD).
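The baseline approach can be sketched as follows: each observed rating r_ui is approximated by the inner product of a user factor and an item factor, and both factors are updated with SGD on the L2-regularized squared error. This is a minimal illustration, not the paper's implementation; the function name `sgd_mf` and all hyperparameter values are assumptions chosen for the example.

```python
import numpy as np

def sgd_mf(ratings, n_users, n_items, rank=4, lr=0.05, reg=0.02,
           epochs=200, seed=0):
    """SGD matrix factorization with squared error loss and L2 regularization.

    ratings: iterable of (user, item, rating) triples.
    Returns factor matrices P (users x rank) and Q (items x rank),
    so that P[u] @ Q[i] predicts the rating of user u for item i.
    (All hyperparameter defaults here are illustrative assumptions.)
    """
    rng = np.random.default_rng(seed)
    P = 0.1 * rng.standard_normal((n_users, rank))
    Q = 0.1 * rng.standard_normal((n_items, rank))
    for _ in range(epochs):
        for u, i, r in ratings:
            err = r - P[u] @ Q[i]  # residual of the current prediction
            # Simultaneous update of both factors (the right-hand side is
            # evaluated before assignment, so both use the old values).
            P[u], Q[i] = (P[u] + lr * (err * Q[i] - reg * P[u]),
                          Q[i] + lr * (err * P[u] - reg * Q[i]))
    return P, Q
```

On a toy rating matrix this recovers the observed entries to within a small training error; in practice the rank, learning rate, and regularization strength are tuned on held-out data.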
The aim of this paper is to experimentally compare several modifications of this approach. Namely, we compare Huber's, smooth ε-insensitive, and squared error loss functions. Moreover, we investigate whether the results can be improved by applying a more sophisticated optimization technique, stochastic meta-descent (SMD), instead of SGD.
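For concreteness, the two alternative losses can be written out on a residual r (a sketch with commonly used parameterizations; the threshold values δ and ε below are illustrative assumptions, not the paper's choices):

```python
import math

def huber(r, delta=1.0):
    """Huber loss: quadratic for |r| <= delta, linear beyond,
    so large residuals are penalized less aggressively than by squared error.
    (delta=1.0 is an illustrative default.)"""
    a = abs(r)
    return 0.5 * r * r if a <= delta else delta * (a - 0.5 * delta)

def smooth_eps_insensitive(r, eps=0.5):
    """Smooth ε-insensitive loss in the symmetrized log-loss form,
    log(1 + e^(r - eps)) + log(1 + e^(-r - eps)):
    nearly flat inside the ε-tube, asymptotically linear outside.
    (eps=0.5 is an illustrative default.)"""
    return math.log1p(math.exp(r - eps)) + math.log1p(math.exp(-r - eps))
```

Both losses are differentiable everywhere, so they drop into the same SGD update as the squared error by replacing the residual term with the loss derivative.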
Keywords: collaborative filtering, matrix factorization, loss functions