Mathematical Programming

Volume 173, Issue 1–2, pp 509–536

Linear convergence of the randomized sparse Kaczmarz method

  • Frank Schöpfer
  • Dirk A. Lorenz
Full Length Paper, Series A


The randomized version of the Kaczmarz method for the solution of consistent linear systems is known to converge linearly in expectation. Even in the possibly inconsistent case, when only noisy data are given, the iterates are expected to reach an error threshold on the order of the noise level with the same rate as in the noiseless case. In this work we show that the same also holds for the iterates of the recently proposed randomized sparse Kaczmarz method for the recovery of sparse solutions. Furthermore, we consider the more general setting of convex feasibility problems and their solution by the method of randomized Bregman projections. This is motivated by the observation that, like the Kaczmarz method, the sparse Kaczmarz method can also be interpreted as an iterative Bregman projection method for solving a convex feasibility problem. We obtain expected sublinear rates for Bregman projections with respect to a general strongly convex function. Moreover, we obtain expected linear rates for Bregman projections with respect to smooth or piecewise linear-quadratic functions, as well as for the regularized nuclear norm, which is used in low-rank matrix problems.
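To make the iteration discussed in the abstract concrete, the following is a minimal sketch of a randomized sparse Kaczmarz step: an auxiliary variable z is updated by an ordinary Kaczmarz projection step on a randomly sampled row, and the primal iterate is obtained by componentwise soft thresholding (the Bregman-projection view with respect to f(x) = λ‖x‖₁ + ½‖x‖²). This is an illustrative sketch in Python rather than the paper's reference implementation (the supplementary files are MATLAB); the function names, the fixed choice λ = 1, and the row-sampling probabilities proportional to squared row norms are assumptions for the example.

```python
import numpy as np

def soft_threshold(z, lam):
    # Componentwise soft shrinkage: S_lam(z) = sign(z) * max(|z| - lam, 0).
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def randomized_sparse_kaczmarz(A, b, lam=1.0, iters=5000, seed=0):
    """Sketch of a randomized sparse Kaczmarz iteration for A x = b.

    Rows are sampled with probability proportional to ||a_i||^2
    (the sampling scheme common in the randomized Kaczmarz literature;
    the paper's scheme may differ).
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_norms2 = np.sum(A ** 2, axis=1)
    probs = row_norms2 / row_norms2.sum()
    z = np.zeros(n)   # auxiliary (dual-like) variable
    x = np.zeros(n)   # primal iterate, x = S_lam(z)
    for _ in range(iters):
        i = rng.choice(m, p=probs)
        # Kaczmarz-type step on the sampled row, applied to z ...
        z = z - (A[i] @ x - b[i]) / row_norms2[i] * A[i]
        # ... followed by soft thresholding to promote sparsity.
        x = soft_threshold(z, lam)
    return x
```

On a consistent system the limit is feasible, so the residual ‖Ax − b‖ tends to zero; with λ > 0 the limit additionally favors sparse solutions, in contrast to the plain Kaczmarz method, which converges to a least-norm solution.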


Keywords

Randomized Kaczmarz method · Linear convergence · Bregman projections · Sparse solutions · Split feasibility problem · Error bounds

Mathematics Subject Classification

65F10 · 68W20 · 90C25

Supplementary material

The article is accompanied by 28 supplementary MATLAB (.m) files (10107_2017_1229_MOESM1_ESM.m through 10107_2017_1229_MOESM28_ESM.m).



Copyright information

© Springer-Verlag GmbH Germany, part of Springer Nature and Mathematical Optimization Society 2018

Authors and Affiliations

  1. Institut für Mathematik, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
  2. Institute for Analysis and Algebra, TU Braunschweig, Braunschweig, Germany
