Error Analysis of Least Squares Algorithms

  • Åke Björck
Part of the NATO ASI Series book series (volume 70)


A finite algebraic algorithm starts with a set of data \( d_1, \ldots, d_r \), from which it computes via fundamental arithmetic operations a solution \( f_1, \ldots, f_t \). In forward error analysis one attempts to bound \( \left| {{{\bar f}_j} - {f_j}} \right| \), where \( {\bar f_j} \) denotes the computed element. In backward error analysis, pioneered by J.H. Wilkinson in the late fifties, one attempts to determine a modified set of data \( {\bar d_i} \) such that the computed solution \( {\bar f_j} \) is the exact solution of the perturbed problem. When it applies, backward analysis tends to be markedly superior to forward analysis. To yield error bounds for the solution, the backward error analysis has to be complemented with a perturbation analysis, which naturally leads to the concept of the condition number of a problem.
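The distinction between the two kinds of analysis can be illustrated numerically. The sketch below, with a made-up mildly ill-conditioned system, computes the forward error directly and the normwise backward error from the residual (the matrix, right-hand side, and tolerances are illustrative assumptions, not taken from the text):

```python
import numpy as np

# Illustrative linear system A x = b with known exact solution.
A = np.array([[1.0, 0.99],
              [0.99, 0.98]])          # mildly ill-conditioned
x_true = np.array([1.0, 1.0])
b = A @ x_true

x_bar = np.linalg.solve(A, b)         # computed solution \bar{x}

# Forward error: distance between computed and exact solution.
forward_err = np.linalg.norm(x_bar - x_true)

# Normwise backward error: size of the smallest relative perturbation
# of (A, b) for which \bar{x} is an exact solution.
r = b - A @ x_bar
backward_err = np.linalg.norm(r) / (
    np.linalg.norm(A) * np.linalg.norm(x_bar) + np.linalg.norm(b))

# Perturbation analysis links the two: to first order,
# forward error <~ condition number * backward error.
cond = np.linalg.cond(A)
```

Here the backward error is always tiny for a backward-stable solver, while the forward error grows with the condition number, which is exactly why backward analysis must be paired with perturbation analysis.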

There are several possible definitions of the stability of an algorithm related to different types of error analysis. The concepts of forward and backward stability and of weak and strong stability are discussed.

Many of the common problems in signal processing can be formulated as solutions to (a sequence of) linear least squares problems of the form \( \min_x \left\| Ax - b \right\|_2 \). We review the perturbation theory of such problems and discuss methods for the estimation of the corresponding condition numbers. We survey stability results for the method of normal equations and methods based on orthogonal reductions.
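The practical difference between the normal equations and orthogonal reduction can be seen on a synthetic problem. The sketch below (a hypothetical test case; the dimensions and singular-value spectrum are assumptions) builds a matrix with condition number about \(10^6\) and solves the same least squares problem both ways; forming \(A^TA\) squares the condition number, so the normal-equations solution typically loses about twice as many digits:

```python
import numpy as np

# Construct a 20x5 matrix with prescribed singular values (cond ~ 1e6).
rng = np.random.default_rng(1)
m, n = 20, 5
U, _ = np.linalg.qr(rng.standard_normal((m, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
sing = np.logspace(0, -6, n)
A = U @ np.diag(sing) @ V.T
x_true = np.ones(n)
b = A @ x_true

# Method of normal equations: cond(A^T A) = cond(A)^2.
x_ne = np.linalg.solve(A.T @ A, A.T @ b)

# Orthogonal reduction (QR factorization): backward stable.
Q, R = np.linalg.qr(A)
x_qr = np.linalg.solve(R, Q.T @ b)

err_ne = np.linalg.norm(x_ne - x_true)
err_qr = np.linalg.norm(x_qr - x_true)
# Typically err_qr << err_ne for ill-conditioned A.
```

This is only a sketch of the phenomenon the survey analyzes; the text's stability results make the comparison precise.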

Very often it is required to recursively recalculate the solution x when equations are successively added to and/or deleted from the least squares problem. Many different algorithms have been proposed to effectuate this. Most of these involve updating or downdating the Cholesky factor R of A T A, which can be achieved using orthogonal and hyperbolic transformations. The numerical stability of such recursive algorithms has not yet been completely analyzed. A new method, using iterative refinement, is suggested as a means of increasing the reliability of downdating algorithms.
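The updating step can be sketched concretely. The routine below (a minimal illustration with made-up data, not the hyperbolic downdating or iterative refinement discussed in the text) uses Givens rotations to restore triangular form when a new observation row is appended, so that the updated factor satisfies \( R_{\rm new}^T R_{\rm new} = A_{\rm new}^T A_{\rm new} \):

```python
import numpy as np

def qr_update_row(R, a):
    """Triangular factor of [[R], [a]] computed via Givens rotations.

    Each step k zeroes a[k] against the diagonal entry R[k, k];
    the rotations are orthogonal, so R^T R + a a^T is preserved."""
    n = R.shape[0]
    R = R.copy()
    a = a.astype(float).copy()
    for k in range(n):
        r = np.hypot(R[k, k], a[k])
        c, s = R[k, k] / r, a[k] / r
        Rk = R[k, k:].copy()
        R[k, k:] = c * Rk + s * a[k:]
        a[k:] = -s * Rk + c * a[k:]
    return R

# Usage: start from the R factor of A, then append one equation.
A = np.array([[2.0, 1.0],
              [0.0, 3.0],
              [1.0, 1.0]])
_, R = np.linalg.qr(A, mode="reduced")
a = np.array([1.0, 2.0])
R_new = qr_update_row(R, a)
A_new = np.vstack([A, a])
```

Updating in this way is unconditionally stable because only orthogonal transformations are applied; it is the downdating case (removing a row), which requires hyperbolic rotations, whose stability the text flags as incompletely analyzed.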






  1. Alexander, S.T., Pan, C.-T., and Plemmons, R.J. [1988]: Analysis of a recursive least squares hyperbolic scheme for signal processing, Linear Algebra Appl. 98, 3–40.
  2. Arioli, M., Demmel, J.W., and Duff, I.S. [1988]: Solving sparse linear systems with sparse backward error, Report CSS 214, Harwell Laboratory.
  3. Björck, Å. [1987]: Stability analysis of the method of semi-normal equations for linear least squares problems, Linear Algebra Appl. 88/89, 31–48.
  4. Björck, Å. [1989]: Least squares methods, in Handbook of Numerical Analysis, Vol. II: Finite Difference Methods - Solution of Equations in R^n, Eds. P.G. Ciarlet and J.L. Lions, Elsevier/North-Holland.
  5. Bunch, J.R. [1987]: The weak and strong stability of algorithms in numerical linear algebra, Linear Algebra Appl. 88/89, 49–66.
  6. Chan, T.F. [1987]: Rank revealing QR-factorizations, Linear Algebra Appl. 88/89, 67–82.
  7. Cybenko, G. [1980]: The numerical stability of the Levinson-Durbin algorithm for Toeplitz systems of equations, SIAM J. Sci. Statist. Comput. 1, 303–319.
  8. Daniel, J.W., Gragg, W.B., Kaufman, L., and Stewart, G.W. [1976]: Reorthogonalization and stable algorithms for updating the Gram-Schmidt QR factorization, Math. Comput. 30, 772–795.
  9. Eldén, L. and Waldén, B. [1988]: Downdating QR decompositions with improved stability, Report LiTH-MAT-R-1988, Linköping, Sweden.
  10. Foster, L. [1988]: The probability of large diagonal entries in the QR factorizations, submitted to SIAM J. Sci. Statist. Comput.
  11. Golub, G.H. and Van Loan, C.F. [1983]: Matrix Computations, Johns Hopkins University Press.
  12. Hager, W.W. [1984]: Condition estimators, SIAM J. Sci. Statist. Comput. 5, 311–316.
  13. Heath, M.T., Laub, A.J., Paige, C.C., and Ward, R.C. [1986]: Computing the SVD of a product of two matrices, SIAM J. Sci. Statist. Comput. 7, 1147–1149.
  14. de Jong, L.S. [1977]: Towards a formal definition of numerical stability, Numer. Math. 28, 211–220.
  15. Reichel, L. and Gragg, W.B. [1988]: FORTRAN subroutines for updating the QR decomposition, ACM Trans. Math. Software, to appear.
  16. Skeel, R.D. [1979]: Scaling for numerical stability in Gaussian elimination, J. Assoc. Comput. Mach. 26, 494–526.
  17. Stewart, G.W. [1977]: Perturbation bounds for the QR factorization of a matrix, SIAM J. Numer. Anal. 14, 509–518.
  18. Stewart, G.W. [1979]: The effects of rounding errors on an algorithm for downdating a Cholesky factorization, J. Inst. Maths. Applics. 23, 203–213.
  19. Wedin, P.-Å. [1973]: Perturbation theory for pseudo-inverses, BIT 13, 217–232.
  20. Wilkinson, J.H. [1965]: The Algebraic Eigenvalue Problem, Oxford University Press, London.
  21. Wilkinson, J.H. [1986]: Error analysis revisited, IMA Bulletin 22, 192–200.

Copyright information

© Springer-Verlag Berlin Heidelberg 1991

Authors and Affiliations

  • Åke Björck, Department of Mathematics, University of Linköping, Linköping, Sweden
