# Probabilistic Analysis of Rectangular Matrices


## Abstract

We started Chap. 1 by stating a backward analysis for linear equation solving that was a particular case of a theorem of N.J. Higham. We may now quote this result in full.

**Theorem 4.1**

*Let* \(A\in \mathbb {R}^{q\times n}\) *have full rank*, *q* ≥ *n*, \(b\in \mathbb {R}^{q}\), *and suppose the least-squares problem* \(\min_{x}\|b-Ax\|\) *is solved using the Householder QR factorization method. The computed solution* \(\tilde{x}\) *is the exact solution to*

$$\min_{x\in \mathbb {R}^n}\|\tilde{b}-\tilde{A}x\|, $$

*where* \(\tilde{A}\) *and* \(\tilde{b}\) *satisfy the relative error bounds*

$$\|\tilde{A}-A\|_F\leq n\gamma_{cq}\|A\|_F \quad\mbox{and}\quad \|\tilde{b}-b\|\leq n\gamma_{cq}\|b\|, $$

*where* \(\gamma_{cq}:=\frac{cq\epsilon _{\mathsf {mach}}}{1- cq\epsilon _{\mathsf {mach}}}\) *for a small constant* *c*. □
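The guarantee in Theorem 4.1 can be illustrated numerically. The sketch below (not part of the original text; matrix sizes and the random instance are assumptions for illustration) solves a least-squares problem via Householder QR, as in the theorem, and compares the result against an SVD-based reference solution; NumPy's `qr` is backed by LAPACK's Householder factorization.

```python
# Illustrative sketch of Theorem 4.1's setting: solve min_x ||b - Ax||
# by Householder QR and compare with a pseudoinverse reference solution.
# The dimensions q, n and the Gaussian test instance are assumptions.
import numpy as np

rng = np.random.default_rng(0)
q, n = 200, 50                       # q >= n, full rank with probability 1
A = rng.standard_normal((q, n))
b = rng.standard_normal(q)

# Householder QR (reduced): A = QR with Q of size q x n, R of size n x n.
Q, R = np.linalg.qr(A)
x_qr = np.linalg.solve(R, Q.T @ b)   # x = R^{-1} Q^T b

# SVD-based reference solution x = A^† b.
x_ref = np.linalg.pinv(A) @ b

eps = np.finfo(float).eps
rel_err = np.linalg.norm(x_qr - x_ref) / np.linalg.norm(x_ref)
# For this well-conditioned instance, rel_err is a modest multiple of eps.
print(rel_err, eps)
```

For a well-conditioned Gaussian instance such as this one, the observed relative error is a small multiple of \(\epsilon _{\mathsf {mach}}\), consistent with the backward error bounds of the theorem.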

Replacing the Frobenius norm by the spectral norm, it follows from this backward stability result that the relative error for the computed solution \(\tilde{x}\) satisfies

$$ \frac{\|\tilde{x}-x\|}{\|x\|} \leq cn^{3/2} q\,\epsilon _{\mathsf {mach}}\mathsf {cond}(A,b) +o(\epsilon _{\mathsf {mach}}) $$

and the loss of precision is bounded by

$$ \mathsf {LoP}(A^\dagger b) \leq \log n^{3/2} q+\log \mathsf {cond}(A,b) + \log c+o(1), $$

(**)

where cond(*A*,*b*) is the normwise condition number for linear least squares (with respect to the spectral norm), which is defined as

$$\mathsf {cond}(A,b)=\lim_{\delta\to0} \sup_{\max\{\mathsf {RelError}(A),\mathsf {RelError}(b)\}\leq\delta} \frac{\mathsf {RelError}(A^{\dagger}b)}{\delta}. $$

This condition number is bounded by a constant times *κ*(*A*)^{2}, where *κ*(*A*)=∥*A*∥ ∥*A*^{†}∥. Consequently, to obtain expected bounds (or a smoothed analysis) for the loss of precision LoP(*A*^{†}*b*) from equation (**), it is enough to perform the corresponding analysis for log *κ*(*A*).

In this chapter we perform average and smoothed analyses of *κ*(*A*). It is worth noting that the bounds obtained are independent of *n* and depend only on the upper bound on the elongation *n*/*q*. Furthermore, surprisingly, they are also independent of *σ*.
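For a full-rank rectangular matrix, the quantity *κ*(*A*) = ∥*A*∥ ∥*A*^{†}∥ in the spectral norm equals the ratio of the largest to the smallest singular value. The sketch below (sizes and the Gaussian instance are assumptions for illustration) computes it both ways as a cross-check.

```python
# Sketch: kappa(A) = ||A||_2 * ||A^†||_2 = s_max / s_min for a full-rank
# rectangular matrix. The dimensions and Gaussian instance are assumed.
import numpy as np

rng = np.random.default_rng(1)
q, n = 400, 100                          # elongation n/q = 0.25
A = rng.standard_normal((q, n))          # standard Gaussian matrix

s = np.linalg.svd(A, compute_uv=False)   # singular values, descending
kappa = s[0] / s[-1]                     # s_max / s_min

# Cross-check against the explicit spectral norms.
kappa_explicit = np.linalg.norm(A, 2) * np.linalg.norm(np.linalg.pinv(A), 2)
print(kappa, kappa_explicit)
```

For a fixed elongation bound, repeating this experiment with larger *q* and *n* keeps *κ*(*A*) of the same order, in line with the chapter's claim that the bounds depend only on the elongation.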

These results indicate that for large reasonably elongated matrices, one may expect the loss of precision in the solution of least-squares problems to derive mostly from the backward error bounds of the algorithm used.

## Keywords

Frobenius Norm Spectral Norm Householder Transformation Tail Bound Gaussian Matrices

## Copyright information

© Springer-Verlag Berlin Heidelberg 2013