
A Data-Reusing Stochastic Approximation Algorithm for Neural Adaptive Filters

  • Danilo P. Mandic
  • Igor R. Krcmar
  • Warren Sherliker
  • George Smith
Conference paper

Abstract

A data-reusing stochastic approximation algorithm for the adaptation of a neural adaptive filter is derived. The proposed algorithm is of the gradient-descent (GD) type and combines the data-reusing technique with a learning-rate annealing schedule. The convergence analysis is undertaken using contraction mapping, and bounds on the learning-rate parameter η are provided. The algorithm outperforms the linear LMS and normalised LMS (NLMS) algorithms for the prediction of speech.
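The paper itself gives only the abstract here, but the combination it describes can be illustrated. The sketch below is an assumption-laden toy, not the authors' exact algorithm: a single tanh neuron acting as a nonlinear adaptive predictor, trained by gradient descent where each incoming sample is reused for several inner iterations (data reusing) and the learning rate follows a search-then-converge annealing schedule η(k) = η₀/(1 + k/τ). All names, the filter order, and the schedule constants are illustrative choices.

```python
import numpy as np

def data_reusing_annealed_filter(x, d, p=4, reuse=3, eta0=0.1, tau=500.0):
    """Toy sketch (not the paper's exact method): single-neuron (tanh)
    adaptive filter trained by gradient descent with data reusing
    (`reuse` passes per sample) and annealing eta(k) = eta0 / (1 + k/tau)."""
    w = np.zeros(p)                    # filter weights
    e_out = np.zeros(len(d))           # a posteriori error trace
    for k in range(p, len(d)):
        u = x[k - p:k][::-1]           # input regressor, most recent first
        eta = eta0 / (1.0 + k / tau)   # annealed learning rate
        for _ in range(reuse):         # data reusing: iterate on same sample
            y = np.tanh(w @ u)
            e = d[k] - y
            # gradient of the instantaneous squared error w.r.t. w
            w = w + eta * e * (1.0 - y**2) * u
        e_out[k] = d[k] - np.tanh(w @ u)
    return w, e_out

# Toy usage: one-step-ahead prediction of a noisy sinusoid.
rng = np.random.default_rng(0)
s = np.sin(0.05 * np.arange(2001)) + 0.01 * rng.standard_normal(2001)
w, e = data_reusing_annealed_filter(s[:-1], s[1:])
```

The inner reuse loop is what distinguishes this from plain stochastic gradient descent: each sample is exploited several times before the next one arrives, which (as the paper's contraction-mapping analysis formalises) tightens the a posteriori error at the cost of extra computation per sample.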

Keywords

Least mean square adaptive filter, least mean square algorithm, nonstationary signal, normalised least mean square


References

  1. D. P. Mandic and J. A. Chambers, "Relationship between the slope of the activation function and the learning rate for the RNN", Neural Computation, vol. 11, no. 5, pp. 1069–1077, 2000.
  2. V. J. Mathews and Z. Xie, "A stochastic gradient adaptive filter with gradient adaptive step size", IEEE Transactions on Signal Processing, vol. 41, no. 6, pp. 2075–2087, 1993.
  3. C. Darken, J. Chang, and J. Moody, "Learning schedules for faster stochastic gradient search", IEEE Workshop on NNSP '92, pp. 3–10, 1992.
  4. J. R. Treichler, C. R. Johnson, and M. G. Larimore, Theory and Design of Adaptive Filters, John Wiley & Sons, 1987.
  5. D. P. Mandic and J. A. Chambers, "Relationships between the a priori and a posteriori errors in nonlinear adaptive neural filters", Neural Computation, vol. 12, pp. 1285–1292, 2000.
  6. C. Darken and J. Moody, "Towards faster stochastic gradient search", Neural Information Processing Systems (J. E. Moody, S. J. Hanson, and R. P. Lippman, eds.), vol. 4, pp. 1009–1016, Morgan Kaufman, 1992.

Copyright information

© Springer-Verlag Wien 2001

Authors and Affiliations

  • Danilo P. Mandic (1)
  • Igor R. Krcmar (2)
  • Warren Sherliker (3)
  • George Smith (1)
  1. School of Information Systems, University of East Anglia, Norwich, UK
  2. Faculty of Electrical Engineering, University of Banjaluka, Banjaluka, Bosnia and Herzegovina
  3. Department of Electrical Engineering, Imperial College, London, UK
