Fitting Linear Models, pp. 26–40

# The Conjugate Gradient Algorithm


## Abstract

Let φ(β) be a function mapping *R*^{r} into *R*. Here and in the next two sections, we suppose that φ(β) has a unique minimizer on *R*^{r}, which we will denote β̂. When a solution in closed form is impossible or impractical, iterative methods are usually used to find β̂. Starting with an initial approximation β^{(0)}, an iterative method attempts to construct a sequence β^{(0)}, β^{(1)}, β^{(2)}, … that converges to β̂. Given β^{(k)}, a particular approximation to β̂, a better approximation may be computed as

$$\beta^{(k+1)} = \beta^{(k)} + \alpha^{(k)} p^{(k)}, \qquad k = 0, 1, \ldots \tag{3.1.1}$$

where the *search direction* p^{(k)} is a non-zero vector in *R*^{r} and the *step length* α^{(k)} is chosen to produce a reasonable decrease in φ. The procedure of choosing α^{(k)} is called a *line search*; it is said to be *exact* if α^{(k)} minimizes

$$\bar\phi \left( \alpha \right) = \phi \left( \beta^{(k)} + \alpha p^{(k)} \right) \tag{3.1.2}$$

We will adopt the notation

$$g\left( \beta \right) = \left[ {{\partial \phi } \over {\partial \beta }} \right] = {\left[ {{\partial \phi } \over {\partial \beta_1}}, {{\partial \phi } \over {\partial \beta_2}}, \cdots, {{\partial \phi } \over {\partial \beta_r}} \right]^t},$$

g^{(k)} = g(β^{(k)}), and

$$G\left( \beta \right) = {\partial \over {\partial \beta }}{\left[ {{\partial \phi } \over {\partial \beta }} \right]^t} = {\left[ {{{\partial^2}\phi } \over {\partial \beta_i \partial \beta_j}} \right]_{r \times r}},$$

G^{(k)} = G(β^{(k)}), for the first and second partial derivatives of φ with respect to β. (We assume that these exist everywhere in *R*^{r}.)
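The general iteration (3.1.1) with an exact line search (3.1.2) can be sketched concretely. The sketch below uses a quadratic φ(β) = ½ βᵗAβ − bᵗβ, for which g(β) = Aβ − b, G(β) = A, and the exact step length has the closed form α = −gᵗp / (pᵗAp). The steepest-descent choice p = −g, the function name `iterate`, and the specific A and b are illustrative assumptions, not taken from the text; the conjugate gradient directions that give the chapter its title are developed later.

```python
import numpy as np

def iterate(A, b, beta0, n_steps=50):
    """Illustrative sketch of iteration (3.1.1) with an exact line
    search (3.1.2) on the quadratic phi(beta) = 0.5 b'Ab - b'beta.
    Uses the steepest-descent direction p = -g as an assumption."""
    beta = beta0.astype(float)
    for _ in range(n_steps):
        g = A @ beta - b            # gradient g^{(k)} of the quadratic phi
        p = -g                      # search direction p^{(k)} (assumed choice)
        denom = p @ A @ p
        if denom == 0.0:            # g = 0: beta already minimizes phi
            break
        alpha = -(g @ p) / denom    # exact line search: minimizes phi(beta + alpha p)
        beta = beta + alpha * p     # update (3.1.1)
    return beta

# Hypothetical example problem (not from the text)
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
beta_hat = iterate(A, b, np.zeros(2))
# beta_hat approximates the minimizer, i.e. the solution of A beta = b
```

For this quadratic the minimizer satisfies g(β̂) = Aβ̂ − b = 0, so the iterate can be checked against a direct linear solve.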


## Copyright information

© Springer-Verlag New York Inc. 1982