Condition, pp. 77–100

Probabilistic Analysis of Rectangular Matrices

• Peter Bürgisser
• Felipe Cucker
Part of the Grundlehren der mathematischen Wissenschaften book series (GL, volume 349)

Abstract

We started Chap.  by stating a backward analysis for linear equation solving that was a particular case of a theorem of N.J. Higham. We may now quote this result in full.

Theorem 4.1Let $$A\in \mathbb {R}^{q\times n}$$ have full rank with $$q\geq n$$, let $$b\in \mathbb {R}^{q}$$, and suppose the least-squares problem $$\min_{x}\|b-Ax\|$$ is solved using the Householder QR factorization method. The computed solution $$\tilde{x}$$ is the exact solution to
$$\min_{x\in \mathbb {R}^n}\|\tilde{b}-\tilde{A}x\|,$$
where $$\tilde{A}$$ and $$\tilde{b}$$ satisfy the relative error bounds
$$\|\tilde{A}-A\|_F\leq n\gamma_{cq}\|A\|_F \quad\mbox{and}\quad \|\tilde{b}-b\|\leq n\gamma_{cq}\|b\|$$
where $$\gamma_{cq}:=\frac{cq\epsilon _{\mathsf {mach}}}{1- cq\epsilon _{\mathsf {mach}}}$$ for a small constant c.  □
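The QR route behind this theorem can be illustrated with a minimal NumPy sketch (not part of the chapter; the dimensions and random data are illustrative). NumPy's `qr` calls LAPACK, which builds the factorization from Householder reflections; the least-squares minimizer is then recovered from the triangular factor.

```python
import numpy as np

rng = np.random.default_rng(0)
q, n = 8, 3                        # full-rank q x n with q >= n
A = rng.standard_normal((q, n))
b = rng.standard_normal(q)

# Thin QR factorization A = QR (LAPACK uses Householder reflections).
Q, R = np.linalg.qr(A)
# The minimizer of ||b - Ax|| solves the triangular system R x = Q^T b.
x = np.linalg.solve(R, Q.T @ b)

# Agrees with the library's least-squares solver up to rounding.
x_ref, *_ = np.linalg.lstsq(A, b, rcond=None)
assert np.allclose(x, x_ref)
```

In floating point, the two computed solutions differ only at the level of the backward error bounds quoted above.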
Replacing the Frobenius norm by the spectral norm, it follows from this backward stability result that the relative error for the computed solution $$\tilde{x}$$ satisfies
$$\frac{\|\tilde{x}-x\|}{\|x\|} \leq cn^{3/2} q\,\epsilon _{\mathsf {mach}}\mathsf {cond}(A,b) +o(\epsilon _{\mathsf {mach}})$$
and the loss of precision is bounded by
$$\mathsf {LoP}(A^\dagger b) \leq \log \bigl(n^{3/2} q\bigr)+\log \mathsf {cond}(A,b) + \log c+o(1),$$
(**)
where cond(A,b) is the normwise condition number for linear least squares (with respect to the spectral norm), which is defined as
$$\mathsf {cond}(A,b)=\lim_{\delta\to0} \sup_{\max\{\mathsf {RelError}(A),\mathsf {RelError}(b)\}\leq\delta} \frac{\mathsf {RelError}(A^{\dagger}b)}{\delta}.$$
This condition number is bounded by a constant times κ(A)², where κ(A)=∥A∥ ∥A^†∥ and A^† denotes the Moore–Penrose pseudoinverse of A. Consequently, to obtain expected bounds (or a smoothed analysis) for the loss of precision LoP(A^†b) from equation (**), it is enough to perform the corresponding analysis for logκ(A).
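For a full-rank rectangular matrix, ∥A∥ is the largest singular value and ∥A^†∥ is the reciprocal of the smallest, so κ(A) is their ratio. The following sketch (not from the chapter; NumPy is assumed) checks this equivalence numerically against the pseudoinverse definition.

```python
import numpy as np

rng = np.random.default_rng(1)
q, n = 100, 20
A = rng.standard_normal((q, n))    # full rank almost surely

# kappa(A) = sigma_max / sigma_min for full-rank rectangular A.
s = np.linalg.svd(A, compute_uv=False)
kappa = s[0] / s[-1]

# Same value from the definition kappa(A) = ||A|| ||A^+|| (spectral norms).
kappa_def = np.linalg.norm(A, 2) * np.linalg.norm(np.linalg.pinv(A), 2)
assert np.isclose(kappa, kappa_def)
```

Working with the singular values directly avoids forming the pseudoinverse, which is both cheaper and more accurate.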

In this chapter we perform average and smoothed analyses of κ(A). It is worth noting that the bounds obtained are independent of n and depend only on an upper bound on the elongation n/q. Furthermore, and surprisingly, they are also independent of σ, the variance parameter of the perturbations in the smoothed analysis.

These results indicate that for large reasonably elongated matrices, one may expect the loss of precision in the solution of least-squares problems to derive mostly from the backward error bounds of the algorithm used.
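The dimension-independence claimed above can be probed empirically. The Monte Carlo sketch below (illustrative, not from the chapter; sample sizes are arbitrary) draws standard Gaussian matrices of growing size at a fixed elongation n/q and records the median of κ(A), which should stay bounded by a constant as n grows.

```python
import numpy as np

rng = np.random.default_rng(2)
lam = 0.5                          # elongation n/q held fixed

# Median kappa(A) over Gaussian samples, for growing n at fixed n/q.
for n in (20, 40, 80):
    q = int(n / lam)
    kappas = []
    for _ in range(50):
        A = rng.standard_normal((q, n))
        s = np.linalg.svd(A, compute_uv=False)
        kappas.append(s[0] / s[-1])
    print(n, np.median(kappas))
```

At elongation 1/2 the medians cluster around a small constant rather than growing with n, in line with the expected bounds of this chapter.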

Keywords

Frobenius Norm, Spectral Norm, Householder Transformation, Tail Bound, Gaussian Matrices