Probabilistic Analysis

Part of the book series: Grundlehren der mathematischen Wissenschaften (volume 349)

Abstract

The loss of precision in linear equation solving (via Householder QR factorization) is bounded as

$$\mathsf{LoP}\bigl(A^{-1}b\bigr)\leq (2+C)\log n + \log \kappa (A) + \log c + o(1), $$

where \(c,C\) are small constants. While the terms \((2+C)\log n+\log c\) point to a loss of approximately \((2+C)\log n\) figures of precision independently of the data \((A,b)\), the quantity \(\log \kappa (A)\), i.e., \(\log \|A\|+\log \|A^{-1}\|\), depends on \(A\) and does not appear to be a priori estimable.
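Concretely, the data-dependent term can be computed numerically. A minimal sketch in NumPy (our illustration, not part of the text), using the spectral condition number \(\kappa (A)=\|A\|\,\|A^{-1}\|\):

```python
import numpy as np

# Minimal sketch (our illustration): compute the spectral condition number
# kappa(A) = ||A|| * ||A^{-1}|| for a sample matrix, and the term
# log kappa(A) entering the loss-of-precision bound.
rng = np.random.default_rng(0)
n = 100
A = rng.standard_normal((n, n))

kappa = np.linalg.cond(A, 2)  # ratio of largest to smallest singular value
log_kappa = np.log2(kappa)    # precision lost, measured in bits

print(f"kappa(A)      = {kappa:.3e}")
print(f"log2 kappa(A) = {log_kappa:.2f}")
```

Since \(\kappa (A)\) requires (an estimate of) \(\|A^{-1}\|\), it is exactly the quantity one cannot know a priori; this is what motivates the probabilistic analyses below.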

We already discussed this problem in the Overture, where we pointed to a way out: randomizing the data and analyzing the effect of this randomization on the condition number at hand (which thereby becomes a random variable). In this chapter we make this idea explicit and actually perform such an analysis for \(\kappa (A)\).

A cursory look at the current literature shows two different ideas of randomization for the underlying data. In the first, which, lacking a better name, we will call classical or average, data are supposed to be drawn from “evenly spread” distributions. If the space \(M\) where the data live is compact, a uniform measure is usually assumed. If instead the data are taken from \(\mathbb {R}^{n}\), the most common choice is the multivariate isotropic Gaussian centered at the origin. In the case of condition numbers (which are almost invariably scale-invariant), this choice is essentially equivalent to the uniform measure on the sphere \(\mathbb {S}^{n-1}\). Data randomly drawn from these evenly spread distributions are meant to be “average” (whence the name), and the analysis performed under such a randomization describes the behavior of the analyzed quantity for an “average Joe” inhabitant of \(M\).
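The average-case picture can be probed empirically. The following Monte Carlo sketch (our experiment, not the book's proof) estimates \(\mathbb {E}(\log \kappa (A))\) for \(n\times n\) standard Gaussian matrices at a few sizes; the chapter's average-case result says this expectation grows like \(\mathcal {O}(\log n)\):

```python
import numpy as np

# Monte Carlo sketch (our experiment, not the book's proof): estimate
# E(log kappa(A)) for n x n standard Gaussian matrices. The chapter's
# average-case result says this expectation grows like O(log n).
rng = np.random.default_rng(1)

def mean_log_kappa(n, trials=20):
    """Average of log kappa(A) over `trials` random Gaussian matrices."""
    vals = [np.log(np.linalg.cond(rng.standard_normal((n, n)), 2))
            for _ in range(trials)]
    return float(np.mean(vals))

for n in (10, 40, 160):
    print(f"n = {n:4d}   estimated E(log kappa) = {mean_log_kappa(n):.2f}")
```

With each fourfold increase in \(n\), the estimate should increase by roughly an additive constant, consistent with logarithmic growth.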

The second idea for randomization, known as smoothed analysis, replaces this average data by a small random perturbation of worst-case data. That is, it considers an arbitrary element \(\overline{x}\) in M (and thus, in particular, the instance at hand) and assumes that \(\overline{x}\) is affected by random noise. The distribution for this perturbed input is usually taken to be centered and isotropic around \(\overline{x}\), and with a small variance.
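The smoothed setting can likewise be illustrated numerically. In this sketch (the setup is ours, not the book's) the worst-case matrix is taken to be rank-one, hence singular with \(\kappa =\infty\), and we add isotropic Gaussian noise of dispersion \(\sigma\):

```python
import numpy as np

# Smoothed-analysis sketch (illustrative; the setup is ours, not the
# book's): start from a worst-case matrix -- rank one, hence singular,
# with kappa = infinity -- add isotropic Gaussian noise of dispersion
# sigma, and watch log kappa of the perturbed matrix grow roughly like
# log(1/sigma) as sigma shrinks.
rng = np.random.default_rng(2)
n = 50
A_bar = np.outer(np.ones(n), np.ones(n))  # rank 1: worst case for kappa

log_kappas = []
for sigma in (1e-1, 1e-3, 1e-5):
    A = A_bar + sigma * rng.standard_normal((n, n))
    log_kappas.append(np.log10(np.linalg.cond(A, 2)))
    print(f"sigma = {sigma:.0e}   log10 kappa(A) = {log_kappas[-1]:.2f}")
```

The perturbed condition numbers are finite, and their logarithms track \(\log \frac{1}{\sigma}\) up to an additive term, matching the shape of the smoothed bound below.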

An immediate advantage of smoothed analysis is its robustness with respect to the distribution governing the random noise. This is in contrast to the most common critique of average-case analysis: “A bound on the performance of an algorithm under one distribution says little about its performance under another distribution, and may say little about the inputs that occur in practice” (Spielman and Teng).

The main results of this chapter show bounds for both the classical and the smoothed analysis of \(\log \kappa (A)\). In the first case we obtain \(\mathbb {E}(\log \kappa(A))=\mathcal {O}(\log n)\). In the second, we show that for all \(\overline{A}\in \mathbb {R}^{n\times n}\), \(\mathbb {E}(\log \kappa(A))=\mathcal {O}(\log n) +\log \frac{1}{\sigma}\), where \(A\) is randomly drawn from a distribution centered at \(\overline{A}\) with dispersion \(\sigma\). Therefore, the first result implies that for random data \((A,b)\) we have

$$\mathbb {E}\bigl(\mathsf {LoP}\bigl(A^{-1}b\bigr)\bigr)= \mathcal {O}(\log n), $$

and the second that for all data \((\overline{A},\overline{b})\) and random perturbations \((A,b)\) thereof,

$$\mathbb {E}\bigl(\mathsf {LoP}\bigl(A^{-1}b\bigr)\bigr)= \mathcal {O}(\log n) +\log \frac{1}{\sigma}. $$
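The expected loss of precision can also be observed directly. This experiment (our setup, not the book's) solves \(Ax=b\) in single precision against a double-precision reference; the bound predicts roughly \(\log \kappa (A)\) extra digits lost on top of the machine precision:

```python
import numpy as np

# Illustrative experiment (our setup, not the book's): solve A x = b in
# single precision and compare the observed relative error with kappa(A).
# The loss-of-precision bound predicts roughly log kappa(A) extra digits
# lost on top of the machine precision.
rng = np.random.default_rng(3)
n = 200
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

x64 = np.linalg.solve(A, b)  # double-precision reference solution
x32 = np.linalg.solve(A.astype(np.float32),
                      b.astype(np.float32)).astype(np.float64)

rel_err = np.linalg.norm(x32 - x64) / np.linalg.norm(x64)
print(f"log10 kappa(A)       = {np.log10(np.linalg.cond(A, 2)):.2f}")
print(f"log10 relative error = {np.log10(rel_err):.2f}")
```

For typical Gaussian data the gap between the observed error and the single-precision unit roundoff is on the order of \(\log \kappa (A)\) digits, in line with the expectations above.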

Bibliography

  1. D.A. Spielman and S.-H. Teng. Smoothed analysis of algorithms. In Proceedings of the International Congress of Mathematicians, volume I, pages 597–606, 2002.

Copyright information

© 2013 Springer-Verlag Berlin Heidelberg

Cite this chapter

Bürgisser, P., Cucker, F. (2013). Probabilistic Analysis. In: Condition. Grundlehren der mathematischen Wissenschaften, vol 349. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-38896-5_2
