# Remarks on Kaczmarz Algorithm for Solving Consistent and Inconsistent System of Linear Equations

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12138)

## Abstract

In this paper we consider the classical Kaczmarz algorithm for solving systems of linear equations. Based on the geometric relationship between the error vector and the rows of the coefficient matrix, we derive the optimal strategy for selecting rows at each step of the algorithm for solving consistent systems of linear equations. For solving perturbed systems of linear equations, a new upper bound on the convergence rate of the randomized Kaczmarz algorithm is obtained.

## Keywords

Iterative methods · Kaczmarz method · Convergence rate · Orthogonal projection · Linear systems

## 1 Introduction

The Kaczmarz algorithm is an iterative method for solving systems of linear equations of the form
\begin{aligned} Ax=b, \end{aligned}
(1)
where $$A\in \mathcal{R}^{m\times n}$$ has full column rank, $$m\ge n$$ and $$b\in {\mathcal {R}}^m$$. In the consistent case, the solution of (1) can be regarded as the common point of the hyperplanes defined by the individual equations in (1):
\begin{aligned} \mathcal {P}_i=\{x| a_i^Tx=b_i\}, \end{aligned}
(2)
where $$a_i^T$$, $$i=1,2,\cdots , m$$, denotes the ith row of A and $$b_i$$ is the ith element of vector b.
The idea of Kaczmarz-type algorithms is to exploit the geometric structure of problem (1) and use a sequence of projections to seek the solution. The recursive process can be formulated as follows. Let $$x_{0}$$ be an initial guess to the solution of (1); then the classical Kaczmarz algorithm iteratively generates a sequence of approximate solutions $$x_k$$ by the recursive formula:
\begin{aligned} x_{k+1}=x_{k}+\frac{b_i-a_i^Tx_k}{||a_i||_2^2}a_i, \end{aligned}
(3)
where $$i=mod(k,m)+1$$. For a given $$x_k$$, from (3) we can see that $$x_{k+1}$$ satisfies the ith equation in (1), i.e., $$a_i^Tx_{k+1}=b_i$$. The updating formula (3) implicitly produces the solution to the following constrained optimization problem [21, 37]
$$\min _{\{x\,|\,a_i^Tx=b_i\}}||x-x_k||_2,$$
which is equivalent to projecting $$x_k$$ onto the hyperplane $$\mathcal {P}_i$$. Two geometric illustrations of this process are given in Fig. 1.

Fig. 1. Geometric illustrations of the classical Kaczmarz iterations with $$m=4$$.
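The recursion (3) takes only a few lines of code. Below is a minimal numpy sketch of the classical cyclic Kaczmarz iteration; the $$4\times 2$$ test system is purely illustrative.

```python
import numpy as np

def kaczmarz(A, b, x0, sweeps=100):
    """Classical cyclic Kaczmarz: at step k, project the iterate onto the
    hyperplane a_i^T x = b_i with i = mod(k, m), as in (3)."""
    x = x0.astype(float).copy()
    m = A.shape[0]
    for k in range(sweeps * m):
        a = A[k % m]                            # cyclic row selection
        x += (b[k % m] - a @ x) / (a @ a) * a   # orthogonal projection onto P_i
    return x

# illustrative consistent 4x2 system (m > n) with solution x = (1, 2)
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, -1.0]])
x_true = np.array([1.0, 2.0])
b = A @ x_true
x = kaczmarz(A, b, np.zeros(2))
```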

By comparing the projection processes displayed in Fig. 1, it is natural to surmise that the convergence of the classical Kaczmarz algorithm depends strongly on the geometric positions of the associated hyperplanes. If the normal vectors of every two successive hyperplanes keep reasonably large angles, the convergence of the classical Kaczmarz algorithm will be fast, whereas two nearly parallel consecutive hyperplanes will slow the convergence down. The Kaczmarz algorithm can be regarded as a special application of von Neumann's famous alternating projection method, whose lecture notes were first distributed in 1933. The fundamental idea can even be traced back to Schwarz in the 1870s.

In the past few years, the Kaczmarz algorithm has been interpreted as a successive projection method [4, 7, 8, 11, 12, 13]; such methods are also known as projection onto convex sets (POCS) [9, 17, 18, 42, 43, 44] in the optimization community. Since each iteration of the Kaczmarz algorithm needs only $$\mathcal{O}(n)$$ flops, a cost independent of the number of equations, this type of algorithm is well suited to problems with $$m\gg n$$. Due to its simplicity and generality, Kaczmarz algorithms find viable applications in image processing and signal processing [19, 20, 24, 25, 26, 30, 36] under the name of algebraic reconstruction techniques (ART). Since the 1980s, relaxation variants [11, 25, 41]
\begin{aligned} x_{k+1}=x_{k}+\lambda _k\frac{b_i-a_i^Tx_k}{||a_i||_2^2}a_i, \end{aligned}
(4)
and the block versions [3, 33, 34]
\begin{aligned} x_{k+1}=x_{k}+A_{\tau }^{\dag }(b_{\tau }-A_{\tau }x_k),\ with \ A= \left( \begin{array}{c} A_1 \\ A_2 \\ \vdots \\ A_M \\ \end{array} \right) , b= \left( \begin{array}{c} b_1 \\ b_2 \\ \vdots \\ b_M \\ \end{array} \right) , \tau \in \{1, 2, \cdots , M\}, \end{aligned}
(5)
of the Kaczmarz algorithm have been widely investigated, and some fruitful theoretical results have been obtained. In particular, for consistent linear systems, it is shown [5, 21, 31, 39] that the Kaczmarz iterations converge to the minimum-norm least squares solution $$x=A^{\dag }b$$ for any starting vector $$x_0$$ in the column space of $$A^T$$. For inconsistent linear systems, the cyclic subsequences generated by the Kaczmarz algorithm converge to a weighted least squares solution as the relaxation parameter $$\lambda _k$$ goes to zero.
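The block update (5) can be sketched in the same way; in the sketch below $$A_{\tau }^{\dag }$$ is applied through a least-squares solve rather than an explicit pseudoinverse, and the row partition and test system are illustrative assumptions.

```python
import numpy as np

def block_kaczmarz(A, b, x0, blocks, sweeps=100):
    """Block Kaczmarz (5): x <- x + A_tau^+ (b_tau - A_tau x), cycling over row blocks."""
    x = x0.astype(float).copy()
    for _ in range(sweeps):
        for rows in blocks:
            A_t, b_t = A[rows], b[rows]
            # apply the pseudoinverse A_tau^dagger via a least-squares solve
            x += np.linalg.lstsq(A_t, b_t - A_t @ x, rcond=None)[0]
    return x

# illustrative consistent 4x2 system, rows split into two blocks
A = np.array([[2.0, 1.0], [1.0, 3.0], [1.0, -1.0], [4.0, 1.0]])
x_true = np.array([0.5, -1.5])
b = A @ x_true
x = block_kaczmarz(A, b, np.zeros(2), blocks=[[0, 1], [2, 3]])
```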
As indicated in Fig. 1, convergence of the classical Kaczmarz algorithm depends on the sequence of successive projections, which relies upon the ordering of the rows of the matrix A. In some real applications, it is observed [25, 30] that selecting rows of A randomly, rather than sequentially, at each step of the Kaczmarz algorithm can often improve convergence. Recently, in a remarkable paper, Strohmer and Vershynin proved the rate of convergence of the following randomized Kaczmarz algorithm
$$x_{k+1}=x_{k}+\frac{b_{r(i)}-a_{r(i)}^Tx_k}{||a_{r(i)}||_2^2}a_{r(i)}$$
where r(i) is chosen from $$\{1, 2, \cdots , m\}$$ with probability $$\frac{||a_{r(i)}||_2^2}{||A||_F^2}$$. In particular, the following bound on the expected rate of convergence of the randomized Kaczmarz method is proved:
\begin{aligned} \mathbb {E}||x_k-x||_2^2\le (1-\frac{1}{\kappa (A)^2})^k||x_0-x||^2_2, \end{aligned}
(6)
where $$\kappa (A)=||A||_F||A^{-1}||_2$$ is the scaled condition number of A introduced by J. Demmel, with $$||A^{-1}||_2=\inf \{M: M||Ax||_2\ge ||x||_2 \ \text{for all}\ x\}$$. This pioneering work characterized the convergence rate of the randomized Kaczmarz algorithm and stimulated considerable interest in the area; various investigations [1, 2, 6, 10, 15] have been performed recently. In particular, some acceleration strategies have been proposed [6, 16, 22] and convergence analyses were performed in [21, 23, 27, 29, 31, 32]. See also [21, 23] for some comments on equivalent interpretations of the randomized Kaczmarz algorithms.
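The Strohmer–Vershynin sampling rule is straightforward to implement: row i is drawn with probability $$||a_i||_2^2/||A||_F^2$$. A minimal sketch follows; the random test problem is an illustrative assumption.

```python
import numpy as np

def randomized_kaczmarz(A, b, x0, iters=2000, seed=0):
    """Randomized Kaczmarz: sample row i with probability ||a_i||_2^2 / ||A||_F^2."""
    rng = np.random.default_rng(seed)
    x = x0.astype(float).copy()
    row_norms2 = np.sum(A * A, axis=1)
    probs = row_norms2 / row_norms2.sum()      # ||a_i||_2^2 / ||A||_F^2
    for _ in range(iters):
        i = rng.choice(len(b), p=probs)
        x += (b[i] - A[i] @ x) / row_norms2[i] * A[i]
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 5))               # overdetermined, full column rank
x_true = rng.standard_normal(5)
b = A @ x_true                                  # consistent right-hand side
x = randomized_kaczmarz(A, b, np.zeros(5))
```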

## 2 Optimal Row Selecting Strategy of the Kaczmarz Algorithm for Solving Consistent System of Linear Equations

In this section, we consider the case where the system of linear equations (1) is consistent and x is a solution. If the ith row is selected at the $$(k+1)$$th iteration of the Kaczmarz algorithm, i.e.,
$$x_{k+1}=x_{k}+\frac{b_i-a_i^Tx_k}{||a_i||_2^2}a_i,$$
then $$x_{k+1}$$ can be reformulated as
$$\begin{aligned} x_{k+1}&=x_{k}+\frac{b_i-a_i^Tx_k}{||a_i||_2^2}a_i\\ &=x_k+\frac{b_i}{||a_i||_2^2}a_i-\frac{a_i^Tx_k}{||a_i||_2^2}a_i\\ &=x_k+\frac{a_i^Tx}{||a_i||_2^2}a_i-\frac{a_i^Tx_k}{||a_i||_2^2}a_i\\ &=x_k+\frac{a_i^T(x-x_k)}{||a_i||_2^2}a_i\\ &=x_k+\frac{a_ia_i^T}{||a_i||_2^2}(x-x_k). \end{aligned}$$
It follows that
\begin{aligned} x-x_{k+1}&=x-x_k-\frac{a_ia_i^T}{||a_i||_2^2}(x-x_k)\\ &=(I-\frac{a_ia_i^T}{||a_i||_2^2})(x-x_k) \end{aligned}
(7)
and thus
\begin{aligned} x_{k+1}-x_k=\frac{a_ia_i^T}{||a_i||_2^2}(x-x_k). \end{aligned}
(8)
From (7) and (8), we can see that
\begin{aligned} x-x_{k+1}\perp x_{k+1}-x_k, \end{aligned}
(9)
i.e.,
\begin{aligned} x-x_{k+1}\perp a_i. \end{aligned}
(10)
Now let us make the following orthogonal direct sum decomposition of $$x-x_k$$:
\begin{aligned} x-x_{k}=\alpha \hat{a}_i +\beta \hat{a}_i^{\perp }, \end{aligned}
(11)
where $$\hat{a}_i=\frac{a_i}{||a_i||_2}$$ and $$\hat{a}_i^{\perp }$$ is a normalized vector orthogonal to $$a_i$$. Then the coefficients $$\alpha$$ and $$\beta$$ can be written as
$$\alpha =||x-x_k||_2\cos \theta _{k_i},$$
$$\beta =||x-x_k||_2\sin \theta _{k_i},$$
where $$\theta _{k_i}=\angle (x-x_k, a_i)$$ is the angle between the vectors $$(x-x_{k})$$ and $$a_i$$.
Substituting the above decomposition (11) into (7) gives
\begin{aligned} x-x_{k+1}&=(I-\frac{a_ia_i^T}{||a_i||_2^2})(\alpha \hat{a}_i +\beta \hat{a}_i^{\perp })\\ &=\beta \hat{a}_i^{\perp }\\ &=||x-x_k||_2\sin \theta _{k_i}\, \hat{a}_i^{\perp }. \end{aligned}
(12)
It follows that
\begin{aligned} ||x-x_{k+1}||_2=||x-x_k||_2\cdot |\sin \theta _{k_i}|. \end{aligned}
(13)
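Relation (13) can be verified numerically for a single projection step; the random data below is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal(3)        # one row a_i of A
x = rng.standard_normal(3)        # the (unknown) solution
x_k = rng.standard_normal(3)      # current iterate
b_i = a @ x                        # consistency: b_i = a_i^T x

# one Kaczmarz projection step (3)
x_k1 = x_k + (b_i - a @ x_k) / (a @ a) * a

# sine of the angle between x - x_k and a_i
cos_t = (a @ (x - x_k)) / (np.linalg.norm(a) * np.linalg.norm(x - x_k))
sin_t = np.sqrt(1.0 - cos_t ** 2)

lhs = np.linalg.norm(x - x_k1)              # ||x - x_{k+1}||_2
rhs = np.linalg.norm(x - x_k) * sin_t       # ||x - x_k||_2 * |sin theta|
```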
From (13) we can see that the error norms generated by the Kaczmarz algorithm are monotonically nonincreasing. Moreover, the convergence can be optimized if $$|\sin \theta _{k_i}|$$ is minimized at every iteration, which is equivalent to selecting the row $$a_{i}$$ that solves the optimization problem
$$|\sin \angle (x-x_k, a_{i})|=\min _{j} |\sin \angle (x-x_k, a_j)|.$$
As x is the unknown solution, the above minimization problem seems unsolvable. However, the consistency of the linear system (1) implies
$$a_j^Tx=b_j, \ j=1,2, \cdots , m,$$
and $$x_k$$ is fixed at the $$(k+1)$$th iteration. Hence the minimization problem can be tackled by maximizing $$|\cos \angle (x-x_k, a_j)|$$, i.e.,
\begin{aligned} |\cos \angle (x-x_k, a_j)|&=\frac{|a_j^T(x-x_k)|}{||x-x_k||_2||a_j||_2}\\ &=\frac{|b_j-a_j^Tx_k|}{||x-x_k||_2||a_j||_2}\\ &=\frac{|r_k(j)|}{||x-x_k||_2||a_j||_2}, \end{aligned}
(14)
where $$r_k=b-Ax_k=\left( \begin{array}{cccc} r_k(1), &{} r_k(2), &{} \cdots , &{} r_k(m) \\ \end{array} \right) ^T.$$
It is clear from (14) that the optimal updating strategy for the Kaczmarz algorithm is to select the row $$\hat{i}$$ that satisfies
$$\frac{|b_{\hat{i}}-a_{\hat{i}}^Tx_k|}{||a_{\hat{i}}||_2}=\max _j\frac{|b_j-a_j^Tx_k|}{||a_j||_2},$$
i.e., the index at which the row-normalized residual $$r_k=b-Ax_k$$ has the largest entry in absolute value; when the rows of A are normalized this is simply the index attaining $$||b-Ax_k||_{\infty }$$. We refer to the above row selection method as the optimal selecting strategy, and call the Kaczmarz algorithm with the optimal selecting strategy the optimal Kaczmarz algorithm.
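Following (14), the optimal (greedy) selecting strategy is easy to sketch: at each step project onto the row maximizing $$|b_j-a_j^Tx_k|/||a_j||_2$$, which reduces to the largest residual entry when the rows are normalized. The random test problem is an illustrative assumption.

```python
import numpy as np

def greedy_kaczmarz(A, b, x0, iters=500):
    """Kaczmarz with the optimal selecting strategy: project onto the hyperplane
    whose row-normalized residual |b_j - a_j^T x| / ||a_j||_2 is largest."""
    x = x0.astype(float).copy()
    row_norms2 = np.sum(A * A, axis=1)
    row_norms = np.sqrt(row_norms2)
    for _ in range(iters):
        r = b - A @ x
        i = np.argmax(np.abs(r) / row_norms)   # optimal row index i-hat
        x += r[i] / row_norms2[i] * A[i]
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((30, 4))
x_true = rng.standard_normal(4)
b = A @ x_true
x = greedy_kaczmarz(A, b, np.zeros(4))
```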
Next, we analyze the convergence of the optimal Kaczmarz algorithm for solving consistent systems of linear equations. To simplify the analysis, we introduce two notations:
$$\theta _{k}^{\hat{i}}=\min _{j}\angle (x_{k}-x, a_j),$$
the angle attained by the optimally selected row at step k, and
$$\theta _{p}^{\hat{i}}=\max _{0\le k'\le k}\theta _{k'}^{\hat{i}},$$
where $$1\le \hat{i}\le m$$ and p ($$0\le p \le k$$) is an index at which the maximum is attained.
Based on (13), the $$(k+1)$$th error can be bounded as follows:
\begin{aligned} ||x-x_{k+1}||_2&=||x-x_{k}||_2\cdot |\sin \theta _{k}^{\hat{i}}|\\ &=||x-x_0||_2\cdot |\sin \theta _{k}^{\hat{i}}|\cdot |\sin \theta _{k-1}^{\hat{i}}|\cdots |\sin \theta _{0}^{\hat{i}}|\\ &\le ||x-x_0||_2\cdot |\sin \theta _{p}^{\hat{i}}|^{k+1}, \end{aligned}
(15)
where $$0\le p \le k$$.
Since
$$0\le \sin \theta _{p}^{\hat{i}}\le 1,$$
we can theoretically divide the convergence history of the Kaczmarz algorithm into two regimes:
• when $$\sin \theta _{p}^{\hat{i}}<1$$, the algorithm converges exponentially,

• when $$\sin \theta _{p}^{\hat{i}}= 1$$, we have
$$\max _{j}|a_j^T(x_{p}-x)|=0$$
and thus
$$a_j^T(x_{p}-x)=0,\ j=1,2,\cdots , m.$$
This implies that $$Ax_{p}=b$$, i.e., $$x_{p}$$ solves the system of linear equations (1).
In summary, for solving a consistent system of linear equations (1), there exists a theoretically optimal selecting strategy (or optimal randomization strategy) for the Kaczmarz algorithm. With this strategy, the algorithm converges exponentially, and convergence is achieved exactly when
$$\max _{k}\min _{1\le j\le m}\angle (x_{k}-x, a_j)=\frac{\pi }{2}.$$

## 3 Randomized Kaczmarz Algorithm for Solving Inconsistent System of Linear Equations

Suppose (1) is a consistent system of linear equations and its right hand side is perturbed with a noise vector r as follows:
\begin{aligned} Ax\simeq b+r, \end{aligned}
(16)
where (16) can be either consistent or inconsistent. In this section, we give some remarks on the convergence of the randomized Kaczmarz algorithm for solving (16), which was investigated by D. Needell.

First, we recall Lemma 2.2 of Needell's analysis.

### Lemma 1

Let $$H_i$$ be the affine subspaces of $$\mathcal {R}^n$$ consisting of the solutions to unperturbed equations, $$H_i=\{x \mid \langle a_i,x\rangle =b_i\}$$. Let $$\tilde{H}_i$$ be the solution spaces of the noisy equations, $$\tilde{H}_i=\{x \mid \langle a_i,x\rangle =b_i+r_i\}$$. Then
$$\tilde{H}_i=\{w+\alpha _ia_i\mid w\in H_i\}$$
where $$\alpha _i=\frac{r_i}{||a_i||^2_2}$$.
Remarks: If Lemma 1 is used to interpret the Kaczmarz algorithm for solving the perturbed and unperturbed equations, we need to introduce into the decomposition a vector from the orthogonal complement of $$a_i$$, and write $$\tilde{x}_i\in \tilde{H}_i$$ as
$$\tilde{x}_i=x_i+\alpha _ia_i+\beta v_i$$
where $$x_i$$ is a solution generated by Kaczmarz algorithm for solving the unperturbed equations, and $$v_i$$ is a vector in the orthogonal complement of $$a_i$$.
Example 1. Consider the $$2\times 2$$ system of linear equations
$$\left\{ \begin{array}{ll} x_1+x_2=1, \\ x_1-x_2=1, \end{array} \right.$$
and the perturbed equations
$$\left\{ \begin{array}{ll} x_1+x_2=1.5, \\ x_1-x_2=1.5, \end{array} \right.$$
i.e., $$A=\left( \begin{array}{cc} 1 &{} 1 \\ 1 &{} -1 \\ \end{array} \right)$$, $$b=\left( \begin{array}{c} 1 \\ 1 \\ \end{array} \right)$$ and $$r=\left( \begin{array}{c} 0.5 \\ 0.5 \\ \end{array} \right)$$.
Let
$$H_i\doteq \{x\mid \langle a_i, x\rangle =b_i\}$$
and
$$\tilde{H}_i\doteq \{\tilde{x}\mid \langle a_i, \tilde{x}\rangle =b_i+r_i\}.$$
If we use $$x_0=\left( \begin{array}{c} 1 \\ 0\\ \end{array} \right)$$ as the same initial guess for the perturbed and unperturbed linear system, then
$$H_1= \{\left( \begin{array}{c} 1 \\ 0\\ \end{array} \right) +\xi \left( \begin{array}{c} -1 \\ 1\\ \end{array} \right) \mid \xi \in \mathcal {R} \}$$
and
$$\tilde{H}_1= \{\left( \begin{array}{c} 1.5 \\ 0\\ \end{array} \right) +\xi \left( \begin{array}{c} -1 \\ 1\\ \end{array} \right) \mid \xi \in \mathcal {R} \}$$
Note that $$a_1=\left( \begin{array}{c} 1 \\ 1 \\ \end{array} \right) ,$$ $$||a_1||_2^2=2$$ and $$r_1=\frac{1}{2}$$. We have $$\alpha _1=\frac{r_1}{||a_1||_2^2}=\frac{1}{4}$$, i.e., $$\tilde{H}_1=\{w+\frac{1}{4}a_1 \mid w\in H_1\}$$, exactly as predicted by Lemma 1.

In order to derive the convergence rate of the randomized Kaczmarz algorithm for solving the perturbed linear equations (16), we need the established convergence results for the unperturbed linear system (1), together with the relationship between the approximate solutions generated by the Kaczmarz algorithm for the perturbed and unperturbed linear equations. D. Needell analyzed the convergence rate and error bound of the randomized Kaczmarz algorithm for solving the perturbed linear equations, taking the approximate solution of the perturbed linear equations as the iterate for the unperturbed system, which simplifies the derivation. However, the approximate solutions generated by applying the randomized Kaczmarz algorithm to the perturbed linear system may not converge to the solution of the unperturbed linear system.
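The algebra of Example 1 can be checked numerically; the parameter value $$\xi =0.7$$ below is an arbitrary illustrative choice.

```python
import numpy as np

# Example 1: first equation of the system, with noise r_1 = 0.5
a1 = np.array([1.0, 1.0])           # first row of A
b1, r1 = 1.0, 0.5
alpha1 = r1 / (a1 @ a1)             # alpha_1 = r_1 / ||a_1||_2^2 = 1/4

# take a point w on H_1 (x_1 + x_2 = 1) and shift it by alpha_1 * a_1
xi = 0.7                             # arbitrary parameter along H_1
w = np.array([1.0, 0.0]) + xi * np.array([-1.0, 1.0])
x_tilde = w + alpha1 * a1            # should lie on the noisy hyperplane
```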

In what follows, we will consider the convergence rate of the randomized Kaczmarz algorithm for solving (16) from a different perspective. We try to bound the difference between the solution for the unperturbed linear system (1) and approximate solutions generated by applying the randomized Kaczmarz algorithm to the perturbed linear system.

In the following discussion, we use $$x_k$$ and $$\tilde{x}_k$$ to denote the approximate solutions generated by applying the randomized Kaczmarz algorithm to (1) and (16), respectively. The recursive formulas can be written as
\begin{aligned} x_{k+1}=x_k+\frac{b_{i{_k}}-x_k^Ta_{i{_k}}}{||a_{i{_k}}||_2^2}a_{i{_k}} \end{aligned}
(17)
and
\begin{aligned} \tilde{x}_{k+1}=\tilde{x}_k+\frac{b_{i{_k}}+r_{i{_k}}-\tilde{x}_k^Ta_{i{_k}}}{||a_{i{_k}}||_2^2}a_{i{_k}}, \end{aligned}
(18)
where the subscript $$i{_k}\in \{1, 2, \cdots , m\}$$ is used to denote that the $$i{_k}$$th row is selected with probability $$\frac{||a_{i{_k}}||^2_2}{||A||^2_F}$$ at the kth iteration.
Suppose the same initial guess $$x_0=\tilde{x}_0$$ is used as the starting vector. Then
$$\tilde{x}_{1}=\tilde{x}_0+\frac{b_{i{_0}}+r_{i{_0}}-\tilde{x}_0^Ta_{i{_0}}}{||a_{i{_0}}||_2^2}a_{i{_0}}$$
and
$$x_{1}=x_0+\frac{b_{i{_0}}-x_0^Ta_{i{_0}}}{||a_{i{_0}}||_2^2}a_{i{_0}}.$$
It follows that
\begin{aligned} \tilde{x}_1=x_1+\frac{r_{i_{0}}a_{i_0}}{||a_{i_0}||_2^2}. \end{aligned}
(19)
In the next iteration, we have
$$\begin{aligned} \tilde{x}_2&=\tilde{x}_1+\frac{b_{i{_1}}+r_{i{_1}}-\tilde{x}_1^Ta_{i{_1}}}{||a_{i{_1}}||_2^2}a_{i{_1}}\\ &=x_1+\frac{r_{i_{0}}a_{i_0}}{||a_{i_0}||_2^2}+\frac{b_{i_1}-(x_1+\frac{r_{i_{0}}a_{i_0}}{||a_{i_0}||_2^2})^Ta_{i_1}}{||a_{i_1}||_2^2}a_{i_1}+\frac{r_{i_{1}}a_{i_1}}{||a_{i_1}||_2^2}\\ &=\underbrace{x_1+\frac{b_{i{_1}}-x_1^Ta_{i{_1}}}{||a_{i{_1}}||_2^2}a_{i{_1}}}_{x_2} + \frac{r_{i_{1}}a_{i_1}}{||a_{i_1}||_2^2}+ \underbrace{(I-\frac{a_{i_1}a_{i_1}^T}{||a_{i_1}||_2^2})\frac{r_{i_0}a_{i_0}}{||a_{i_0}||_2^2}}_{v_{i_1}}\\ &=x_2+\frac{r_{i_{1}}a_{i_1}}{||a_{i_1}||_2^2}+v_{i_1}, \end{aligned}$$
where $$v_{i_1}=(I-\frac{a_{i_1}a_{i_1}^T}{||a_{i_1}||_2^2})\frac{r_{i_0}a_{i_0}}{||a_{i_0}||_2^2}\in span\{a_{i_1}\}^{\bot }$$ with $$||v_{i_1}||_2=\frac{|r_{i_0}|}{||a_{i_0}||_2}$$.
Continuing the above process, we obtain
\begin{aligned} \tilde{x}_{k}=x_{k}+\frac{r_{i_{k-1}}}{||a_{i_{k-1}}||_2^2}a_{i_{k-1}}+\sum _{j=1}^{k-1}v_{i_j}, \end{aligned}
(20)
where $$v_{i_j}=(I-\frac{a_{i_j}a_{i_j}^T}{||a_{i_j}||_2^2})\frac{r_{i_{j-1}}a_{i_{j-1}}}{||a_{i_{j-1}}||_2^2}\in span\{a_{i_j}\}^{\bot }$$ and $$||v_{i_j}||_2=\frac{|r_{i_{j-1}}|}{||a_{i_{j-1}}||_2}$$.
Subtracting x from both sides of (20) gives
\begin{aligned} \tilde{x}_{k}-x=x_{k}-x+\frac{r_{i_{k-1}}a_{i_{k-1}}}{||a_{i_{k-1}}||_2^2}+\sum _{j=1}^{k-1}v_{i_j}. \end{aligned}
(21)
Based on Jensen’s inequality and (6), we have
\begin{aligned} \mathbb {E}||x_k-x||_2 \le (1-\frac{1}{\kappa (A)^2})^\frac{k}{2}||x_0-x||_2, \end{aligned}
(22)
where $$\kappa (A)=||A||_F||A^{-1}||_2$$, with $$||A^{-1}||_2=\inf \{M: M||Ax||_2\ge ||x||_2\}$$.
Taking norms on both sides of (21) and using the triangle inequality, we have
$$\begin{aligned} \mathbb {E}(||\tilde{x}_{k}-x||_2)&\le \mathbb {E}(||x_{k}-x||_2)+\Big|\Big|\frac{r_{i_{k-1}}a_{i_{k-1}}}{||a_{i_{k-1}}||_2^2}\Big|\Big|_2+\sum \limits _{j=1}^{k-1}||v_{i_j}||_2\\ &\le (1-\frac{1}{\kappa (A)^2})^\frac{k}{2}||x_0-x||_2+\frac{|r_{i_{k-1}}|}{||a_{i_{k-1}}||_2}+\sum \limits _{j=1}^{k-1}\frac{|r_{i_{j-1}}|}{||a_{i_{j-1}}||_2}\\ &\le (1-\frac{1}{\kappa (A)^2})^\frac{k}{2}||x_0-x||_2+k\gamma , \end{aligned}$$
where $$\gamma =\max \limits _{1\le i\le m}\frac{|r_i|}{||a_i||_2}$$.

In conclusion, we have derived the following theorem.

### Theorem 1

Let A be a matrix with full column rank and assume the system $$Ax = b$$ is consistent. Let $$\tilde{x}_{k}$$ be the kth iterate of the noisy randomized Kaczmarz method run with $$Ax \simeq b +r$$, and let $$a_1,\cdots , a_m$$ denote the rows of A. Then we have
$$\mathbb {E}||\tilde{x}_{k}-x||_2\le (1-\frac{1}{\kappa (A)^2})^\frac{k}{2}||x_0-x||_2+k\gamma ,$$
where $$\kappa (A)=||A||_F||A^{-1}||_2$$ and $$\gamma =\max \limits _{1\le i\le m}\frac{|r_i|}{||a_i||_2}$$.
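As a sanity check, the sketch below runs the randomized Kaczmarz iteration on a noisy system and compares the error of a single run against an additive bound of the form in Theorem 1, with the additive term taken as $$k\gamma$$ (which dominates the theorem's additive term). The random test problem and noise level are illustrative assumptions, and since the theorem bounds the expectation, a single run is only indicative.

```python
import numpy as np

def randomized_kaczmarz(A, rhs, x0, iters, seed=0):
    """Randomized Kaczmarz with Strohmer-Vershynin row sampling."""
    rng = np.random.default_rng(seed)
    x = x0.astype(float).copy()
    row_norms2 = np.sum(A * A, axis=1)
    probs = row_norms2 / row_norms2.sum()
    for _ in range(iters):
        i = rng.choice(len(rhs), p=probs)
        x += (rhs[i] - A[i] @ x) / row_norms2[i] * A[i]
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((40, 4))
x_true = rng.standard_normal(4)
b = A @ x_true
r = 1e-3 * rng.standard_normal(40)            # noise vector

k = 500
x_noisy = randomized_kaczmarz(A, b + r, np.zeros(4), iters=k)

gamma = np.max(np.abs(r) / np.linalg.norm(A, axis=1))      # max_i |r_i| / ||a_i||_2
kappa2 = np.linalg.norm(A, 'fro') ** 2 * np.linalg.norm(np.linalg.pinv(A), 2) ** 2
bound = (1 - 1 / kappa2) ** (k / 2) * np.linalg.norm(x_true) + k * gamma
err = np.linalg.norm(x_noisy - x_true)        # error relative to the noise-free solution
```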

## 4 Conclusions

In this paper, we provide a new look at the Kaczmarz algorithm for solving systems of linear equations. The optimal row selecting strategy of the Kaczmarz algorithm for solving consistent systems of linear equations is derived. The convergence of the randomized Kaczmarz algorithm for solving perturbed systems of linear equations is analyzed, and a new bound on the convergence rate is obtained from a new perspective.

## References

1. Agaskar, A., Wang, C., Lu, Y.M.: Randomized Kaczmarz algorithms: exact MSE analysis and optimal sampling probabilities. In: Proceedings of the 2014 IEEE Global Conference on Signal and Information Processing (GlobalSIP), Atlanta, GA, 3–5 December, pp. 389–393 (2014)
2. Ivanov, A.A., Zhdanov, A.I.: Kaczmarz algorithm for Tikhonov regularization problem. Appl. Math. E-Notes 13, 270–276 (2013)
3. Ivanov, A.A., Zhdanov, A.I.: The block Kaczmarz algorithm based on solving linear systems with arrowhead matrices. Appl. Math. E-Notes 17, 142–156 (2017)
4. Bai, Z.-Z., Liu, X.-G.: On the Meany inequality with applications to convergence analysis of several row-action methods. Numer. Math. 124, 215–236 (2013)
5. Bai, Z.-Z., Wu, W.-T.: On greedy randomized Kaczmarz method for solving large sparse linear systems. SIAM J. Sci. Comput. 40, A592–A606 (2018)
6. Bai, Z.-Z., Wu, W.-T.: On relaxed greedy randomized Kaczmarz methods for solving large sparse linear systems. Appl. Math. Lett. 83, 21–26 (2018)
7. Benzi, M., Meyer, C.D.: A direct projection method for sparse linear systems. SIAM J. Sci. Comput. 16, 1159–1176 (1995)
8. Brezinski, C.: Projection Methods for Systems of Equations. Elsevier Science B.V., Amsterdam (1997)
9. Bregman, L.M.: The relaxation method of finding the common point of convex sets and its application to the solution of problems in convex programming. USSR Comput. Math. Math. Phys. 7, 200–217 (1967)
10. Cai, J., Tang, Y.: A new randomized Kaczmarz based kernel canonical correlation analysis algorithm with applications to information retrieval. Neural Netw. 98, 178–191 (2018)
11. Censor, Y.: Row-action methods for huge and sparse systems and their applications. SIAM Rev. 23, 444–466 (1981)
12. Censor, Y., Eggermont, P.P.B., Gordon, D.: Strong underrelaxation in Kaczmarz's method for inconsistent systems. Numer. Math. 41, 83–92 (1983)
13. Censor, Y., Herman, G.T., Jiang, M.: A note on the behaviour of the randomized Kaczmarz algorithm of Strohmer and Vershynin. J. Fourier Anal. Appl. 15, 431–436 (2009)
14. Demmel, J.: The probability that a numerical analysis problem is difficult. Math. Comput. 50, 449–480 (1988)
15. De Loera, J.A., Haddock, J., Needell, D.: A sampling Kaczmarz-Motzkin algorithm for linear feasibility. SIAM J. Sci. Comput. 39, S66–S87 (2017)
16. Eldar, Y.C., Needell, D.: Acceleration of randomized Kaczmarz method via the Johnson-Lindenstrauss lemma. Numer. Algorithms 58, 163–177 (2011)
17. Feichtinger, H.G., Cenker, C., Mayer, M., Steier, H., Strohmer, T.: New variants of the POCS method using affine subspaces of finite codimension with applications to irregular sampling. In: VCIP, SPIE, pp. 299–310 (1992)
18. Galántai, A.: On the rate of convergence of the alternating projection method in finite dimensional spaces. J. Math. Anal. Appl. 310, 30–44 (2005)
19. Gordon, R., Bender, R., Herman, G.T.: Algebraic reconstruction techniques (ART) for three-dimensional electron microscopy and X-ray photography. J. Theor. Biol. 29, 471–481 (1970)
20. Gordon, R., Herman, G.T., Johnson, S.A.: Image reconstruction from projections. Sci. Am. 233, 56–71 (1975)
21. Gower, R.M., Richtárik, P.: Randomized iterative methods for linear systems. SIAM J. Matrix Anal. Appl. 36, 1660–1690 (2015)
22. Hanke, M., Niethammer, W.: On the acceleration of Kaczmarz's method for inconsistent linear systems. Linear Algebra Appl. 130, 83–98 (1990)
23. Hefny, A., Needell, D., Ramdas, A.: Rows vs. columns: randomized Kaczmarz or Gauss-Seidel for ridge regression. SIAM J. Sci. Comput. 39, S528–S542 (2016)
24. Herman, G.T.: Image Reconstruction from Projections: The Fundamentals of Computerized Tomography. Academic Press, New York (1980)
25. Herman, G.T., Lent, A., Lutz, P.H.: Relaxation methods for image reconstruction. Commun. ACM 21, 152–158 (1978)
26. Herman, G.T., Meyer, L.B.: Algebraic reconstruction techniques can be made computationally efficient. IEEE Trans. Med. Imaging 12, 600–609 (1993)
27. Jiao, Y.-L., Jin, B.-T., Lu, X.-L.: Preasymptotic convergence of randomized Kaczmarz method. Inverse Prob. 33, 125012 (2017)
28. Kaczmarz, S.: Angenäherte Auflösung von Systemen linearer Gleichungen. Bull. Acad. Polon. Sci. Lett. A 35, 355–357 (1937)
29. Leventhal, D., Lewis, A.S.: Randomized methods for linear constraints: convergence rates and conditioning. Math. Oper. Res. 35, 641–654 (2010)
30. Natterer, F.: The Mathematics of Computerized Tomography. SIAM, Philadelphia (2001)
31. Ma, A., Needell, D., Ramdas, A.: Convergence properties of the randomized extended Gauss-Seidel and Kaczmarz methods. SIAM J. Matrix Anal. Appl. 36, 1590–1604 (2015)
32. Needell, D.: Randomized Kaczmarz solver for noisy linear systems. BIT 50, 395–403 (2010)
33. Needell, D., Zhao, R., Zouzias, A.: Randomized block Kaczmarz method with projection for solving least squares. Linear Algebra Appl. 484, 322–343 (2015)
34. Needell, D., Tropp, J.A.: Paved with good intentions: analysis of a randomized block Kaczmarz method. Linear Algebra Appl. 441, 199–221 (2014)
35. von Neumann, J.: The Geometry of Orthogonal Spaces, vol. 2. Princeton University Press, Princeton (1950). Mimeographed lecture notes, first distributed in 1933
36. Nutini, J., Sepehry, B., Laradji, I., Schmidt, M., Koepke, H., Virani, A.: Convergence rates for greedy Kaczmarz algorithms, and faster randomized Kaczmarz rules using the orthogonality graph. In: UAI (2016)
37. Schmidt, M.: Notes on randomized Kaczmarz. Lecture notes, 9 April 2015
38. Schwarz, H.A.: Ueber einen Grenzübergang durch alternirendes Verfahren. Vierteljahrsschrift der Naturforschenden Gesellschaft in Zürich 15, 272–286 (1870)
39. Strohmer, T., Vershynin, R.: A randomized Kaczmarz algorithm with exponential convergence. J. Fourier Anal. Appl. 15, 262–278 (2009)
40. Strohmer, T.: Comments on the randomized Kaczmarz method. Unpublished manuscript (2009)
41. Tanabe, K.: Projection method for solving a singular system of linear equations and its applications. Numer. Math. 17, 203–214 (1971)
42. Trussell, H., Civanlar, M.: Signal deconvolution by projection onto convex sets. In: IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 1984, vol. 9, pp. 496–499 (1984)
43. Youla, D.C., Webb, H.: Image restoration by the method of convex projections: part 1, theory. IEEE Trans. Med. Imaging 1, 81–94 (1982)
44. Youla, D.C., Webb, H.: Image restoration by the method of convex projections: part 2, applications and numerical results. IEEE Trans. Med. Imaging 1, 95–101 (1982)
45. Zouzias, A., Freris, N.M.: Randomized extended Kaczmarz for solving least squares. SIAM J. Matrix Anal. Appl. 34, 773–793 (2013)