A Data-Reusing Stochastic Approximation Algorithm for Neural Adaptive Filters
A data-reusing stochastic approximation algorithm for the adaptation of a neural adaptive filter is derived. The proposed algorithm is of the gradient-descent (GD) type and incorporates both the data-reusing technique and a learning-rate annealing schedule. The convergence analysis is based upon contraction mapping, and bounds on the learning-rate parameter η are provided. The algorithm outperforms the linear LMS and NLMS algorithms for the prediction of speech.
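To make the idea concrete, here is a minimal sketch of such a scheme: a single sigmoidal neuron used as a one-step predictor, updated by gradient descent, where each incoming sample is reused several times and the learning rate is annealed with a "search-then-converge" schedule of the kind discussed by Darken and Moody. The filter order, reuse count, and schedule constants below are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def data_reusing_gd(x, d, order=4, eta0=0.5, tau=500.0, reuses=3):
    """One-step neural predictor y(k) = sigmoid(w^T x_k).

    Each new sample is reused `reuses` times (data-reusing inner loop),
    with an annealed step size eta(k) = eta0 / (1 + k/tau).  All
    parameter choices here are illustrative assumptions.
    """
    w = np.zeros(order)
    y = np.zeros(len(d))
    for k in range(order, len(d)):
        xk = x[k - order:k][::-1]            # tap-delay input vector
        eta = eta0 / (1.0 + k / tau)         # annealed learning rate
        for _ in range(reuses):              # data-reusing inner loop
            yk = sigmoid(w @ xk)
            e = d[k] - yk
            # GD step on the instantaneous squared error e^2 / 2
            w += eta * e * yk * (1.0 - yk) * xk
        y[k] = sigmoid(w @ xk)
    return y, w

# Usage: one-step prediction of a toy quasi-periodic signal
s = 0.5 + 0.4 * np.sin(2 * np.pi * np.arange(2000) / 40)
y, w = data_reusing_gd(s, s)
mse = np.mean((s[1000:] - y[1000:]) ** 2)
```

Reusing each sample in the inner loop extracts more of the gradient information it carries, while the annealing schedule keeps early steps large (fast search) and late steps small (low misadjustment), which is the trade-off the abstract's convergence bounds on η address.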
Keywords: Least Mean Square, Adaptive Filter, Least Mean Square Algorithm, Nonstationary Signal, Normalised Least Mean Square