Circuits, Systems, and Signal Processing, Volume 38, Issue 7, pp 3133–3151

Kernel Least Mean Square Based on the Nyström Method

  • Shi-Yuan Wang
  • Wen-Yue Wang
  • Lu-Juan Dang
  • Yun-Xiang Jiang

Abstract

The kernel least mean square (KLMS) algorithm is the simplest of the kernel adaptive filters. However, the growing network size of KLMS remains an obstacle to its online applications, especially when the training data set is large. The Nyström method is an efficient way to curb this growth. In this paper, we apply the Nyström method to the KLMS algorithm, yielding a novel algorithm named kernel least mean square based on the Nyström method (NysKLMS). Compared with the KLMS algorithm, the proposed NysKLMS algorithm reduces the computational complexity significantly. The NysKLMS algorithm is proved to be convergent in the mean square sense when its step size satisfies certain conditions. In addition, the theoretical steady-state excess mean square error of NysKLMS, supported by simulations, is derived to evaluate the filtering accuracy. Simulations on system identification and nonlinear channel equalization show that the NysKLMS algorithm approaches the filtering performance of the KLMS algorithm at a much lower computational complexity, and outperforms the KLMS with the novelty criterion, the KLMS with the surprise criterion, the quantized KLMS (QKLMS), the fixed-budget QKLMS, and the random Fourier features KLMS.
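To make the idea concrete, here is a minimal sketch of a Nyström-based KLMS update in the spirit described above: sample m centers, eigendecompose the m × m kernel matrix over them, map each input x to the explicit feature vector z(x) = Λ^{-1/2} Uᵀ k_m(x), and run an ordinary LMS recursion on z(x). This is an illustrative reconstruction, not the paper's reference implementation; the function names (nystrom_klms, gaussian_kernel), the Gaussian kernel choice, and the parameter values (sigma, eta, the number of centers) are all assumptions.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    """Pairwise Gaussian kernel matrix between the rows of X and Y."""
    d2 = (np.sum(X**2, axis=1)[:, None]
          + np.sum(Y**2, axis=1)[None, :]
          - 2.0 * X @ Y.T)
    return np.exp(-d2 / (2.0 * sigma**2))

def nystrom_klms(X, d, centers, sigma=1.0, eta=0.5):
    """Run plain LMS in the m-dimensional Nystrom feature space
    built from the given centers (illustrative sketch)."""
    # Eigendecomposition of the m x m kernel matrix over the centers.
    K_mm = gaussian_kernel(centers, centers, sigma)
    lam, U = np.linalg.eigh(K_mm)
    lam = np.maximum(lam, 1e-12)      # guard against tiny/negative eigenvalues
    P = U / np.sqrt(lam)              # so that z(x) = P.T @ k_m(x) = Lambda^{-1/2} U^T k_m(x)

    w = np.zeros(centers.shape[0])    # fixed-size weight vector (m entries)
    err = np.empty(len(d))
    for n in range(len(d)):
        k = gaussian_kernel(X[n:n + 1], centers, sigma).ravel()  # k_m(x_n)
        z = P.T @ k                   # explicit Nystrom feature vector
        e = d[n] - w @ z              # a priori error
        w += eta * e * z              # LMS update in the feature space
        err[n] = e
    return w, err

# Toy usage: identify d = sin(x) + noise with 50 centers sampled from the inputs.
rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(2000, 1))
d = np.sin(X[:, 0]) + 0.01 * rng.standard_normal(2000)
centers = X[rng.choice(len(X), size=50, replace=False)]
w, err = nystrom_klms(X, d, centers, sigma=1.0, eta=0.5)
```

Because the feature dimension m is fixed in advance, each update in this sketch costs O(m) kernel evaluations plus an O(m²) matrix–vector product, independent of how many samples have been processed, whereas plain KLMS keeps every past input as a center and its per-update cost grows with time.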

Keywords

Kernel least mean square · Approximation · Nyström method · Random Fourier features

Acknowledgements

This work was supported by the National Natural Science Foundation of China (Grant No. 61671389), the China Postdoctoral Science Foundation Funded Project (Grant No. 2017M620779), the Chongqing Postdoctoral Science Foundation Special Funded Project (Grant No. Xm2017107), and the Fundamental Research Funds for the Central Universities (Grant No. XDJK2016C096).

Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2018

Authors and Affiliations

  1. College of Electronic and Information Engineering, Southwest University, Chongqing, China
  2. Chongqing Key Laboratory of Nonlinear Circuits and Intelligent Information Processing, Chongqing, China
  3. Graduate School at Shenzhen, Tsinghua University, Shenzhen, China
