
Condition, pp. 21–58

Probabilistic Analysis

  • Peter Bürgisser
  • Felipe Cucker
Part of the Grundlehren der mathematischen Wissenschaften book series (GL, volume 349)

Abstract

The loss of precision in linear equation solving (via QR Householder factorization) is bounded as
$$\mathsf{LoP}\bigl(A^{-1}b\bigr)\leq (2+C)\log n + \log\kappa(A) + \log c + o(1), $$
where \(c\), \(C\) are small constants. While the terms \((2+C)\log n+\log c\) point to a loss of approximately \((2+C)\log n\) figures of precision independently of the data \((A,b)\), the quantity \(\log\kappa(A)\), i.e., \(\log\|A\|+\log\|A^{-1}\|\), depends on \(A\) and does not appear to be a priori estimable.
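
A hedged numerical illustration (not from the book; NumPy's `qr` and `cond` serve as stand-ins for the analysis): solve a random system via Householder QR and compare the digits of precision actually lost with \(\log\kappa(A)\). The constants \(c\) and \(C\) are ignored, and double precision, with roughly 16 significant decimal digits, is assumed.

```python
# Minimal sketch: observed loss of precision vs. log10(kappa(A))
# when solving Ax = b via a Householder QR factorization.
import numpy as np

rng = np.random.default_rng(0)
n = 100
A = rng.standard_normal((n, n))
x_true = rng.standard_normal(n)
b = A @ x_true

Q, R = np.linalg.qr(A)            # Householder QR, as in the bound above
x = np.linalg.solve(R, Q.T @ b)   # solve the triangular system R x = Q^T b

kappa = np.linalg.cond(A)         # kappa(A) = ||A|| * ||A^{-1}|| (spectral norm)
rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)

print(f"log10 kappa(A)         = {np.log10(kappa):5.2f}")
# digits lost relative to full double precision (machine epsilon ~ 2.2e-16)
print(f"digits lost (observed) = {np.log10(rel_err / np.finfo(float).eps):5.2f}")
```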

We already discussed this problem in the Overture, where we pointed to a way out consisting in randomizing the data and analyzing the effects of such randomization on the condition number at hand (which now becomes a random variable). In this chapter we become more explicit and actually perform such an analysis for κ(A).

A cursory look at the current literature shows two different ideas of randomization for the underlying data. In the first one, which, lacking a better name, we will call classical or average, data are supposed to be drawn from “evenly spread” distributions. If the space M where data live is compact, a uniform measure is usually assumed. If, instead, data are taken from \(\mathbb{R}^{n}\), the most common choice is the multivariate isotropic Gaussian centered at the origin. In the case of condition numbers (which are almost invariably scale-invariant), this choice is essentially equivalent to the uniform measure on the sphere \(\mathbb{S}^{n-1}\) of dimension n−1. Data randomly drawn from these evenly spread distributions are meant to be “average” (whence the name), and the analysis performed for such a randomization is meant to describe the behavior of the analyzed quantity for such an “average Joe” inhabitant of M.
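
As a sketch of this average-case model (an illustration, not a result from the text), one can estimate \(\mathbb{E}(\log\kappa(A))\) for Gaussian matrices by Monte Carlo and watch it grow slowly with n:

```python
# Monte Carlo sketch: E(log kappa(A)) for n x n matrices with i.i.d.
# standard Gaussian entries. By scale invariance of kappa, this agrees
# with drawing A uniformly from the sphere of matrices of unit norm.
import numpy as np

rng = np.random.default_rng(1)
trials = 50
for n in (10, 20, 40, 80, 160):
    logs = [np.log(np.linalg.cond(rng.standard_normal((n, n))))
            for _ in range(trials)]
    print(f"n = {n:3d}   E(log kappa) ~ {np.mean(logs):.2f}")
```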

The second idea for randomization, known as smoothed analysis, replaces this average data by a small random perturbation of worst-case data. That is, it considers an arbitrary element \(\overline{x}\) in M (and thus, in particular, the instance at hand) and assumes that \(\overline{x}\) is affected by random noise. The distribution for this perturbed input is usually taken to be centered and isotropic around \(\overline{x}\), and with a small variance.
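
A minimal sketch of the smoothed model, under the assumptions of Gaussian noise and the spectral-norm condition number: perturb a singular worst-case matrix \(\overline{A}\) (so \(\kappa(\overline{A})=\infty\)) by noise of size σ and average \(\log\kappa(A)\) over the noise; the dependence on \(\log\frac{1}{\sigma}\) announced below becomes visible.

```python
# Smoothed-analysis sketch: Abar is a worst-case input (rank one, hence
# singular), normalized so that ||Abar|| = 1. We perturb it as
# A = Abar + sigma * G with Gaussian G and average log kappa(A).
import numpy as np

rng = np.random.default_rng(2)
n, trials = 50, 50
Abar = np.ones((n, n)) / n        # rank one, spectral norm 1, kappa = infinity
for sigma in (1e-1, 1e-2, 1e-3):
    logs = [np.log(np.linalg.cond(Abar + sigma * rng.standard_normal((n, n))))
            for _ in range(trials)]
    print(f"sigma = {sigma:.0e}   E(log kappa) ~ {np.mean(logs):6.2f}"
          f"   log(1/sigma) = {np.log(1 / sigma):5.2f}")
```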

An immediate advantage of smoothed analysis is its robustness with respect to the distribution governing the random noise. This is in contrast to the most common critique of average-case analysis: “A bound on the performance of an algorithm under one distribution says little about its performance under another distribution, and may say little about the inputs that occur in practice” (Spielman and Teng).

The main results of this chapter show bounds for both the classical and smoothed analysis of \(\log\kappa(A)\). In the first case we obtain \(\mathbb{E}(\log\kappa(A))=\mathcal{O}(\log n)\). In the second, that for all \(\overline{A}\in \mathbb{R}^{n\times n}\), \(\mathbb{E}(\log\kappa(A))=\mathcal{O}(\log n) +\log\frac{1}{\sigma}\), where \(A\) is randomly drawn from a distribution centered at \(\overline{A}\) with dispersion \(\sigma\). Therefore, the first result implies that for random data \((A,b)\) we have
$$\mathbb {E}\bigl(\mathsf {LoP}\bigl(A^{-1}b\bigr)\bigr)= \mathcal {O}(\log n), $$
and the second that for all data \((\overline{A},\overline{b})\) and random perturbations (A,b) of it,
$$\mathbb {E}\bigl(\mathsf {LoP}\bigl(A^{-1}b\bigr)\bigr)= \mathcal {O}(\log n) +\log \frac{1}{\sigma}. $$

Keywords

Condition number · Probabilistic analysis · Scale invariance · Data space · Conditional density

Bibliography

207. D.A. Spielman and S.-H. Teng. Smoothed analysis of algorithms. In Proceedings of the International Congress of Mathematicians, volume I, pages 597–606, 2002.

Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Peter Bürgisser, Institut für Mathematik, Technische Universität Berlin, Berlin, Germany
  • Felipe Cucker, Department of Mathematics, City University of Hong Kong, Hong Kong, Hong Kong SAR
