# Computation of Eigenvectors and Eigenvalues and the Singular Value Decomposition

• James E. Gentle
Part of the Statistics and Computing book series (SCO)

## Abstract

Before we discuss methods for computing eigenvalues, we mention an interesting observation. Consider the monic polynomial f(λ),
$$f(\lambda) = {{\lambda }^{p}} + {{a}_{{p - 1}}}{{\lambda }^{{p - 1}}} + \cdots + {{a}_{1}}\lambda + {{a}_{0}}.$$
Now form the matrix, A,
$$A = \begin{bmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & & & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ -{{a}_{0}} & -{{a}_{1}} & -{{a}_{2}} & \cdots & -{{a}_{{p-1}}} \end{bmatrix}$$
The matrix A is called the companion matrix of the polynomial f. It is easy to see that the characteristic polynomial of A (equation (2.11) on page 68) is f(λ):
$$\det (A - \lambda I) = f(\lambda )$$
Thus, given a general polynomial f, we can form a matrix A whose eigenvalues are the roots of the polynomial. It is a well-known fact in the theory of equations (the Abel–Ruffini theorem) that there is no general formula in terms of radicals for the roots of a polynomial of degree greater than 4. This means that we cannot expect to have a direct method for calculating eigenvalues; rather, we will have to use an iterative method.
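The connection above can be illustrated numerically: build the companion matrix of a polynomial and apply an iterative eigenvalue method to it, recovering a root of the polynomial. The sketch below uses the power method (one of the simplest iterative schemes, which converges to the dominant eigenvalue when one exists); the example polynomial, the starting vector, and the iteration count are illustrative choices, not from the text.

```python
# Companion matrix of f(lambda) = lambda^p + a_{p-1} lambda^{p-1} + ... + a_1 lambda + a_0,
# followed by the power method to approximate the dominant root of f.
# Plain-Python sketch; coefficients are given low-order first: [a_0, ..., a_{p-1}].

def companion_matrix(coeffs):
    """Companion matrix of the monic polynomial with coefficients `coeffs`."""
    p = len(coeffs)
    A = [[0.0] * p for _ in range(p)]
    for i in range(p - 1):
        A[i][i + 1] = 1.0                # 1's on the superdiagonal
    A[p - 1] = [-c for c in coeffs]      # last row: -a_0, -a_1, ..., -a_{p-1}
    return A

def power_method(A, iters=200):
    """Estimate the dominant eigenvalue by repeated multiplication and rescaling."""
    p = len(A)
    x = [1.0] + [0.0] * (p - 1)          # deterministic start; a random start
                                         # avoids unlucky choices in general
    lam = 0.0
    for _ in range(iters):
        y = [sum(A[i][j] * x[j] for j in range(p)) for i in range(p)]
        lam = max(y, key=abs)            # largest-magnitude entry as the estimate
        x = [yi / lam for yi in y]       # rescale so that entry becomes 1
    return lam

# f(lambda) = lambda^2 - 5*lambda + 6 = (lambda - 2)(lambda - 3); roots 2 and 3
A = companion_matrix([6.0, -5.0])
print(round(power_method(A), 6))         # ≈ 3.0, the dominant root
```

Convergence of the power method is linear, at a rate governed by the ratio of the two largest eigenvalue magnitudes (here 2/3 per iteration); production eigenvalue solvers instead use the QR-iteration family of methods.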

## Keywords

Power Method · Similarity Transformation · Nonzero Entry · Main Diagonal · Dominant Eigenvalue