
Affine Projection Algorithm

Kazuhiko Ozeki
Chapter
Part of the Mathematics for Industry book series (MFI, volume 22)

Abstract

The normalized least-mean-squares (NLMS) algorithm suffers from slow convergence when the input signal is correlated. The reason for this phenomenon is explained by viewing the algorithm from a geometrical point of view, and this observation motivates the affine projection algorithm (APA) as a natural generalization of the NLMS algorithm. The APA exploits multiple recent regressors, whereas the NLMS algorithm uses only the current, single regressor. In the APA, the coefficient vector is updated by orthogonally projecting the current coefficient vector onto the affine subspace defined by these regressors. Increasing the number of regressors, called the projection order, improves the convergence rate of the APA, especially for correlated input signals. The role of the step-size is also made clear. Examining the algorithm from the affine projection point of view yields deep insight into the properties of the APA, and alternative approaches to deriving the APA update equation are possible. To stabilize the numerical inversion of a matrix in the update equation, a regularization term is often added. This variant of the APA is called the regularized APA (R-APA), whereas the original APA is called the basic APA (B-APA). This chapter also explains that the B-APA with unity step-size has a decorrelating property, and that there are formal similarities between the recursive least-squares (RLS) algorithm and the R-APA.
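The update step summarized above can be sketched compactly. The following Python/NumPy fragment is a minimal illustration of the standard R-APA update, w <- w + mu * X (X'X + delta*I)^{-1} e, where the columns of X are the p most recent regressors and e is the error vector over those regressors. The helper name rapa_update, the parameter names step_size and reg, and the toy identification loop are illustrative assumptions, not notation taken from the chapter.

    import numpy as np

    def rapa_update(w, X, d, step_size=0.5, reg=1e-4):
        """One R-APA iteration (hypothetical helper).

        w : (N,)   current coefficient vector
        X : (N, p) columns are the p most recent regressors
        d : (p,)   desired responses paired with those regressors
        """
        e = d - X.T @ w                                  # error vector over the p regressors
        # Regularized inversion of the p-by-p Gram matrix; reg (the
        # regularization factor) stabilizes the numerical inversion.
        g = np.linalg.solve(X.T @ X + reg * np.eye(X.shape[1]), e)
        return w + step_size * (X @ g)                   # step toward the affine subspace

    # Toy system identification with a correlated AR(1) input, the case
    # where a higher projection order pays off.
    rng = np.random.default_rng(0)
    N, p, T = 8, 4, 2000
    h = rng.standard_normal(N)                           # unknown system
    u = np.zeros(T)
    for t in range(1, T):
        u[t] = 0.9 * u[t - 1] + rng.standard_normal()    # correlated input
    w = np.zeros(N)
    for k in range(N + p, T):
        X = np.column_stack([u[k - j - N + 1 : k - j + 1][::-1] for j in range(p)])
        w = rapa_update(w, X, X.T @ h)                   # noiseless desired outputs
    print(np.linalg.norm(w - h))                         # should approach zero

With projection order p = 1 the update reduces to a (regularized) NLMS recursion, which is the sense in which the APA generalizes the NLMS algorithm; setting reg = 0 recovers the B-APA, at the cost of possible ill-conditioning when the regressors are nearly linearly dependent.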

Keywords

Error Vector, Coefficient Vector, Numerical Inversion, Affine Subspace, Regularization Factor


Copyright information

© Springer Japan 2016

Authors and Affiliations

The University of Electro-Communications, Tokyo, Japan
