
Correction to: convergence rates for Kaczmarz-type algorithms

Constantin Popa

Correction to: Numerical Algorithms 79(1) (2018), 1–17

 https://doi.org/10.1007/s11075-017-0425-7

1 Comments and notations

We made corrections only to Theorem 7 from Section “4.2 Extended Kaczmarz single-projection algorithm” of the original paper. Equations, results, and references from the original paper are indicated by the sign (*); all others refer to this Erratum.

2 Erratum to Theorem *7

Theorem 1

The algorithm MREK has linear convergence.

Proof

Let \((x^{k})_{k \geq 0}\) be the sequence generated by the MREK algorithm. According to the selection procedure (*44) of the projection index \(i_{k}\) and (*9), we successively obtain (see also Section 1 of the paper [*1])
$$ \begin{array}{@{}rcl@{}} m |\langle A_{i_{k}}, x^{k-1} \rangle - {b}_{i_{k}}^{k}|^{2} &\geq& \sum\limits_{1 \leq i \leq m} |\langle A_{i}, x^{k-1} \rangle - {{b}_{i}^{k}}|^{2} = \parallel A x^{k-1} - b^{k} {\parallel}^{2}\\ &=& \parallel (A {x}^{k-1} - b) + (r - {y}^{k}) {\parallel}^{2}. \end{array} $$
(1)
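The maximal-residual selection in (*44), which produces the factor m in the first inequality of (1), can be sketched in a few lines of NumPy. This is an illustrative sketch only: the names and shapes are assumptions, and the full MREK update is as in the original paper.

```python
import numpy as np

def select_row_greedy(A, b_k, x):
    """Pick the row index with the largest absolute residual,
    in the spirit of the maximal-residual selection rule (*44)."""
    residuals = A @ x - b_k          # component i is <A_i, x> - b_i^k
    return int(np.argmax(np.abs(residuals)))

# For this choice of i_k, m * |residual_{i_k}|^2 >= sum_i |residual_i|^2
# = ||A x - b^k||^2, which is exactly the first inequality in (1).
```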
We have the following elementary inequality.

Lemma 1

Let \(\alpha, \beta\) be real numbers such that
$$ \alpha \in [0, 1], \beta \geq -1 \text{ and } \beta - \alpha = \alpha \beta. $$
(2)
Then
$$ (r_1 + r_2)^2 \geq \alpha {r}_{1}^{2} - \beta {r}_{2}^{2}, \forall r_{1}, r_{2} \in \mathbb{R}. $$
(3)

This gives us the following result.

Corollary 1

Let \(\alpha, \beta\) be as in (2). Then
$$ \parallel x + y \parallel^{2} \geq \alpha \parallel x \parallel^{2} - \beta \parallel y \parallel^{2}, \forall x, y \in \mathbb{R}^{n}. $$
(4)

Proof

Indeed, we observe that, under the hypothesis (2) (which gives \((1-\alpha)(1+\beta) = 1 - \alpha + \beta - \alpha\beta = 1\)), we have
$$ \parallel x + y \parallel^{2} - \alpha \parallel x \parallel^{2} + \beta \parallel y \parallel^{2} = \parallel \sqrt{1-\alpha}\, x + \sqrt{1+\beta}\, y \parallel^{2} \geq 0. $$
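As a sanity check of (3)–(4), one can verify the inequality numerically for a pair \((\alpha, \beta)\) satisfying (2), e.g. \(\alpha = 1/2\), \(\beta = 1\) (since \(\beta - 1/2 = \beta/2\) forces \(\beta = 1\)). A purely illustrative NumPy check:

```python
import numpy as np

# alpha = 1/2 and beta = 1 satisfy condition (2): beta - alpha = alpha * beta.
alpha, beta = 0.5, 1.0
assert abs((beta - alpha) - alpha * beta) < 1e-15

rng = np.random.default_rng(42)
for _ in range(1000):
    x, y = rng.standard_normal(3), rng.standard_normal(3)
    lhs = np.linalg.norm(x + y) ** 2
    rhs = alpha * np.linalg.norm(x) ** 2 - beta * np.linalg.norm(y) ** 2
    assert lhs >= rhs - 1e-12    # inequality (4)
```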
Therefore, from (1) and (4), we obtain
$$ - |\langle A_{i_{k}}, x^{k-1} \rangle - {b}_{i_{k}}^{k}|^{2} \leq - \frac{\alpha}{m} \parallel A x^{k-1} - b \parallel^{2} + \frac{\beta}{m} \parallel r - y^{k} \parallel^{2} . $$
(5)
In [*19], Proposition 1, Eq. (59) (for \(\omega = 1\)), the following equality is proved:
$$ \parallel {x}^{k} - x \parallel^{2} = \parallel x^{k-1} - x \parallel^{2} - \frac{\left( \langle A_{{i_{k}}}, x^{k-1}\rangle-b_{{i_{k}}}\right)^{2}}{\|A_{{i_{k}}}\|^{2}} + \parallel \gamma_{{i_{k}}} \parallel^{2}, $$
(6)
where
$$ \gamma_{i_k} = \frac{r_{i_{k}}-{y}_{i_{k}}^{k}}{\parallel A_{i_{k}} \parallel^{2}} A_{i_{k}}, $$
(7)
and \(x \in LSS(A; b)\) is such that \(P_{\mathcal {N}(A)}(x) = P_{\mathcal {N}(A)}(x^{0})\). Let \(\delta\) be the smallest nonzero singular value of \(A\) (hence also of \(A^{T}\)). Because \(P_{\mathcal {N}(A)}(x^{k}) = P_{\mathcal {N}(A)}(x^{0}), \forall k \geq 0\), it holds that \(x^{k} - x \in \mathcal {R}(A^{T})\) (see also [*1]), hence
$$ \parallel A x^{k-1} - b \parallel^2 \geq \delta^2 \parallel x^{k-1} - x \parallel^{2}. $$
(8)
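Inequality (8) is the standard bound \(\parallel A z \parallel \geq \delta \parallel z \parallel\) for \(z \in \mathcal{R}(A^{T})\); it can be checked numerically via the SVD. The matrix below is a random illustration, not one from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))
delta = np.linalg.svd(A, compute_uv=False).min()  # smallest (nonzero) singular value

# Any z in the row space R(A^T) satisfies ||A z||^2 >= delta^2 ||z||^2.
z = A.T @ rng.standard_normal(6)                  # z is in R(A^T) by construction
assert np.linalg.norm(A @ z) ** 2 >= delta ** 2 * np.linalg.norm(z) ** 2 - 1e-10
```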
Then, from (1), (6), and (5), the obvious inequality
$$ \parallel \gamma_{i_{k}} \parallel^{2} \leq \frac{\parallel r - y^{k} \parallel^{2}}{\parallel A_{i_{k}} \parallel^{2}}, $$
and (8), we get
$$ \begin{array}{@{}rcl@{}} \parallel x^{k} \!- x \parallel^{2} \!&\leq&\! \parallel x^{k-1} - x \parallel^{2} - \frac{\alpha}{m} \frac{\parallel {Ax}^{k-1} - b \parallel^{2}}{\parallel {A}_{i_{k}} \parallel^{2}} \!+ \frac{\beta}{m} \frac{\parallel r - y^{k} \parallel^{2}}{\parallel A_{i_{k}} \parallel^{2}} + \frac{\parallel r - y^{k} \parallel^{2}}{\parallel {A}_{i_{k}} \parallel^{2}}\\ \!&\leq&\! \left( 1 - \frac{\alpha \delta^2}{m \cdot M} \right) \parallel x^{k-1} - x \parallel^{2} + \frac{1}{\mu} \left( 1 + \frac{\beta}{m} \right) \parallel y^{k} - r \parallel^{2}, \end{array} $$
(9)
where
$$ M = \max\limits_{1 \leq i \leq m} \parallel A_{i} \parallel^{2}, \mu = \min\limits_{1 \leq i \leq m} \parallel A_{i} \parallel^{2}. $$
(10)
In [*19], Lemma 2, it is proved that
$$ \parallel y^{k} - r \parallel^{2} \leq \left( 1 - \frac{\delta^{2}}{n} \right)^{k} \parallel y^{0} - r \parallel^{2}, \forall k \geq 0. $$
(11)
Then, from (9) and (11), we obtain
$$ \parallel x^{k} - x \parallel^{2} \leq \left( 1 - \frac{\alpha \delta^{2}}{m M} \right) \parallel x^{k-1} - x \parallel^{2} + \frac{1}{\mu} \left( 1 + \frac{\beta}{m} \right) \left( 1 - \frac{\delta^{2}}{n} \right)^{k} \parallel y^{0} - r \parallel^{2}. $$
(12)
If we introduce the notations
$$ \tilde{\alpha} = 1 - \frac{\alpha \delta^{2}}{m \cdot M} \in [0, 1), \tilde{\beta} = 1 - \frac{\delta^{2}}{n} \in [0, 1), C = \frac{1}{\mu} \left( 1 + \frac{\beta}{m} \right) \parallel {y}^{0} - r \parallel^{2} $$
(13)
from (12) and (13), we obtain
$$ \parallel x^{k} - x \parallel^{2} \leq \tilde{\alpha} \parallel x^{k-1} - x \parallel^{2} + \tilde{\beta}^{k} C, \forall k \geq 1. $$
(14)
From (14), a recursive argument gives us
$$ \parallel x^{k} - x \parallel^{2} \leq \tilde{\alpha}^{k} \parallel x^{0} - x \parallel^{2} + \sum\limits_{j=0}^{k-1} \tilde{\alpha}^{j} \tilde{\beta}^{k-j} C $$
or, for \(\nu = \max \{ \tilde {\alpha }, \tilde {\beta } \} \in [0, 1)\)
$$ \parallel x^{k} - x \parallel^{2} \leq \nu^{k} \left( \parallel x^{0} - x \parallel^{2} + C k \right), \forall k \geq 1. $$
(15)
If we define \(\epsilon_{k} = \nu^{k} \left( \parallel x^{0} - x \parallel^{2} + Ck \right), \forall k \geq 1\), we obtain \(\lim _{k \rightarrow \infty } \frac {\epsilon _{k+1}}{\epsilon _{k}} = \nu \in [0, 1)\), which gives the linear convergence of the MREK algorithm and completes the proof. □
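The recursive argument leading from (14) to (15) can be verified numerically by iterating the recursion with equality and comparing against the bound \(\nu^{k} (\parallel x^{0} - x \parallel^{2} + Ck)\). A minimal sketch with illustrative constants (the values below are assumptions, not taken from any particular problem):

```python
# Illustrative constants; in the proof these come from (13).
alpha_t, beta_t, C, e0 = 0.9, 0.8, 2.0, 5.0
nu = max(alpha_t, beta_t)   # nu = max{alpha~, beta~} in [0, 1)

e = e0
for k in range(1, 51):
    e = alpha_t * e + beta_t ** k * C      # recursion (14), taken with equality
    bound = nu ** k * (e0 + C * k)         # bound (15)
    assert e <= bound + 1e-12
```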
3 Typographical errors
  1.
    On page 9, at the end of the proof of Corollary 1, replace the equation
    $$ \frac{\epsilon_{n \Gamma}}{\epsilon_{n {\Gamma} -1}} = \delta \in [0, 1), \forall n \geq 1. $$
    by the equation
    $$ \frac{\epsilon_{n \Gamma}}{\epsilon_{(n-1) \Gamma}} = \delta \in [0, 1), \forall n \geq 1. $$
     
  2.
    On page 11, in equation (45), instead of
    $$ \mathbb{E} \left[\|x^{k} - x_{LS} \|\right] \leq \left( 1 - \frac{1}{\hat{k}^{2}(A)}\right)^{\lfloor k/2\rfloor} (1 + 2 hat{k}^{2}(A)) \|x_{LS} \|^{2}, $$
    write
    $$ \mathbb{E} \left[\|x^{k} - x_{LS} \|\right] \leq \left( 1 - \frac{1}{\hat{k}^{2}(A)}\right)^{\lfloor k/2\rfloor} (1 + 2 \hat{k}^{2}(A)) \|x_{LS} \|^{2}, $$
     
  3.
    On page 10, after the equation (42), please write: Note. We used formula (40) to update the vector \(y^{k-1}\), instead of the formula
    $$ y^{k} = y^{k-1} - \frac{\langle y^{k-1}, A^{j_{k}} \rangle}{\parallel A^{j_{k}} \parallel^{2}} A^{j_{k}} $$
    because we supposed that \(\parallel A^{j} \parallel = 1, \forall j = 1, \dots, n\). This can be achieved by a scaling of \(A\) of the form
    $$ A \Longrightarrow A D, \text{ with } D = \text{diag}\left( \frac{1}{\parallel A^{1} \parallel}, \frac{1}{\parallel A^{2} \parallel}, \dots, \frac{1}{\parallel A^{n} \parallel}\right), $$
    which transforms the initial problem (10) into the equivalent one
    $$ \parallel (A D) (D^{-1} x) - \hat{b} \parallel = \min\limits_{z \in \mathbb{R}^{n}} \parallel (A D) (D^{-1} z) - \hat{b} \parallel. $$
     
  4.
    On page 14, second line from top, instead of the formula
    $$ x^{k+{\Gamma} - j} - x = P_{{i_{k+{\Gamma} - j-1}}}(x^{k+{\Gamma} - j -1} - x) + \gamma_{{i_{k+{\Gamma} - j-1}}} $$
    the formula
    $$ x^{k+{\Gamma} - j} - x = P_{{i_{k+{\Gamma} - j}}}(x^{k+{\Gamma} - j -1} - x) + \gamma_{{i_{k+{\Gamma} - j}}}. $$
     
  5.
    On page 14, the fourth line from top, instead of the formula
    $$ x^{k+\Gamma} = P_{k+{\Gamma} -1} \circ {\cdots} \circ P_{i_{k}}(x^{k} - x) + \sum\limits_{j=1}^{\Gamma} {\Pi}_{j} \gamma_{i{k+{\Gamma} - j}} $$
    please write
    $$ x^{k+\Gamma} = P_{i_{k+\Gamma}} \circ {\cdots} \circ P_{i_{k}}(x^{k} - x) + \sum\limits_{j=1}^{\Gamma} {\Pi}_{j} \gamma_{i_{k+{\Gamma} - j}}. $$
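The column scaling described in typo item 3 above can be checked with a few lines of NumPy. This is an illustrative sketch on a random matrix, not one from the paper: after the scaling \(A \Rightarrow AD\), every column has unit Euclidean norm.

```python
import numpy as np

# Column scaling A -> A D with D = diag(1/||A^1||, ..., 1/||A^n||),
# so that every column of A D has unit Euclidean norm, as assumed
# in the note after Eq. (42) of the original paper.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))
D = np.diag(1.0 / np.linalg.norm(A, axis=0))   # per-column reciprocal norms
AD = A @ D
assert np.allclose(np.linalg.norm(AD, axis=0), 1.0)
```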
     

Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2019

Authors and Affiliations

Faculty of Mathematics and Informatics, OVIDIUS University of Constanta, Constanţa, Romania
