The Conjugate Gradient Algorithm

  • Allen McIntosh
Part of the Lecture Notes in Statistics book series (LNS, volume 10)

Abstract

Let φ(β) be a function mapping R^r into R. Here and in the next two sections, we suppose that φ(β) has a unique minimizer on R^r, which we will denote β̂. When a solution in closed form is impossible or impractical, iterative methods are usually used to find β̂. Starting with an initial approximation β^(0), an iterative method attempts to construct a sequence β^(0), β^(1), β^(2), … that converges to β̂. Given β^(k), a particular approximation to β̂, a better approximation may be computed as
$$\beta^{(k+1)} = \beta^{(k)} + \alpha^{(k)}p^{(k)}, \qquad k = 0,1, \ldots \qquad (3.1.1)$$
where the search direction p^(k) is a non-zero vector in R^r and the step length α^(k) is chosen to produce a reasonable decrease in φ. The procedure for choosing α^(k) is called a line search; it is said to be exact if α^(k) minimizes
$$\phi \left( \beta^{(k)} + \alpha p^{(k)} \right) \qquad (3.1.2)$$
as a function of α. We will adopt the notation
$$g\left( \beta \right) = \left[ {{{\partial \phi } \over {\partial \beta }}} \right] = {\left[ {{{\partial \phi } \over {\partial {\beta _1}}},{{\partial \phi } \over {\partial {\beta _2}}}, \cdots ,{{\partial \phi } \over {\partial {\beta _r}}}} \right]^t}$$
g^(k) = g(β^(k)) and
$$G\left( \beta \right) = {\partial \over {\partial \beta }}{\left[ {{{\partial \phi } \over {\partial \beta }}} \right]^t} = {\left[ {{{{\partial ^2}\phi } \over {\partial {\beta _i}\partial {\beta _j}}}} \right]_{r \times r}}$$
G^(k) = G(β^(k)) for the first and second partial derivatives of φ with respect to β. (We assume that these exist everywhere in R^r.)
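
As a concrete illustration of the iteration (3.1.1) and the exact line search (3.1.2), the following minimal sketch minimizes a small quadratic φ(β) = ½ βᵗAβ − bᵗβ. The matrix A, the vector b, and the steepest-descent choice p^(k) = −g^(k) are illustrative assumptions made only for this sketch; the conjugate gradient algorithm developed in this chapter chooses its search directions differently. For a quadratic, the exact line search has a closed-form solution.

```python
import numpy as np

# Sketch of the general iteration (3.1.1) with an exact line search (3.1.2),
# applied to the quadratic phi(beta) = 0.5 * beta' A beta - b' beta.
# A, b, and the steepest-descent direction p = -g are illustrative
# assumptions; they are not the conjugate gradient directions of this chapter.

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])         # positive definite, so the minimizer is unique
b = np.array([1.0, 2.0])

def phi(beta):
    return 0.5 * beta @ A @ beta - b @ beta

def g(beta):
    # g(beta) = A beta - b is the gradient of phi for this quadratic
    return A @ beta - b

beta = np.zeros(2)                  # beta^(0)
for k in range(100):
    gk = g(beta)
    if np.linalg.norm(gk) < 1e-10:
        break
    p = -gk                         # one simple choice of search direction p^(k)
    # Exact line search: minimize phi(beta + alpha * p) over alpha.
    # Setting the derivative g(beta)'p + alpha * p'A p to zero gives:
    alpha = -(gk @ p) / (p @ A @ p)
    beta = beta + alpha * p         # update (3.1.1)

print("iterate:", beta)
print("exact minimizer:", np.linalg.solve(A, b))
```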


Copyright information

© Springer-Verlag New York Inc. 1982

Authors and Affiliations

  • Allen McIntosh
  1. Bell Telephone Laboratories, Inc., New Jersey, USA
