A low computational complexity normalized subband adaptive filter algorithm employing signed regressor of input signal
Abstract
In this paper, the signed regressor normalized subband adaptive filter (SRNSAF) algorithm is proposed. The algorithm is derived from an L_{1}-norm minimization criterion. The SRNSAF has a fast convergence speed and a low steady-state error similar to the conventional NSAF. In addition, the proposed algorithm has lower computational complexity than the NSAF due to the signed regressor of the input signal at each subband. The theoretical mean-square performance of the proposed algorithm in the stationary and non-stationary environments is analyzed based on the energy conservation relation, and the steady-state, transient, and stability bounds of the SRNSAF are predicted by closed-form expressions. The good performance of the SRNSAF is demonstrated through several simulation results in system identification, acoustic echo cancelation (AEC), and line echo cancelation (LEC) applications. The theoretical relations are also verified by presenting various experimental results.
Keywords
Normalized subband adaptive filter (NSAF), mean-square performance, signed regressor (SR), L_{1}-norm
1 Introduction
A fast convergence rate and low computational complexity are important requirements for high data rate applications such as speech processing, echo cancelation, network echo cancelation, and channel equalization. The least-mean-squares (LMS) and normalized LMS (NLMS) algorithms are useful for a wide range of adaptive filter applications because of their low computational complexity. However, the performance of LMS-type algorithms degrades when the input signals are colored [1, 2].
To solve this problem, various approaches such as the affine projection algorithm (APA) [3, 4] and the subband adaptive filter (SAF) algorithm have been proposed [5, 6, 7]. In [8], a new version of the SAF was developed based on a constrained optimization problem, referred to as the normalized SAF (NSAF). The filter update equation in [8] is similar to the update equation in [9, 10], where the full-band filters are updated instead of the subfilters of the conventional SAF structure [5].
To reduce the computational complexity of the NSAF and APA, different methods have been proposed. In [11], the selective partial update NSAF (SPUNSAF) algorithm was presented, where a subset of the filter coefficients, rather than the entire filter, is updated at every adaptation. In [12], the dynamic selection NSAF (DSNSAF) algorithm was introduced, in which the number of subbands is optimally selected at each iteration. The fixed selection NSAF (FSNSAF) was also introduced in [13]; in this algorithm, a subset of the subbands is selected during the adaptation.
There are classes of adaptive filter algorithms that make use of the signum of the error signal, the input signal, or both. These approaches have been applied to the LMS algorithm for simplicity of implementation, enabling a significant reduction in computational complexity [14, 15, 16, 17, 18]. The sign algorithm (SA) takes the signum of the error signal. This algorithm is particularly robust against impulsive interferences [19, 20], but in other cases its convergence speed is slower than that of the conventional algorithm [21]. This approach was also successfully extended to the NSAF algorithm to establish the sign SAF (SSAF) algorithm [22, 23].
In the signed regressor LMS (SRLMS), the signum of the input regressors is utilized: only the polarity of the input signal is used to adjust the filter coefficients, which requires no multiplications. The SRLMS has a convergence speed and a steady-state error level that are only slightly inferior to those of the LMS algorithm for the same parameter setting [24]. To increase the convergence speed of the SRLMS, the signed regressor NLMS (SRNLMS) was first proposed in [14], and a modified version of this algorithm (MSRNLMS) was presented in [25]. Like the SRLMS, the SRNLMS enjoys advantages similar to those of the NLMS algorithm: due to the normalization factor, the steady-state error level does not depend on the input signal power [18]. Note that no multiplications are needed to calculate the normalization factor. However, for highly colored input signals, the convergence speed of the SRNLMS is still low. Moreover, the literature provides no cost function definition or optimization problem from which the signed regressor algorithms are established.
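As an illustration of the family just described (our own sketch, not code from [14, 25]; all variable names are ours), a sign-regressor NLMS iteration replaces the regressor x(n) with sgn[x(n)] and normalizes by the L_{1}-norm, so the correction term itself needs no multiplications beyond the scaling by the step size and the error:

```python
import numpy as np

def srnlms_update(w, x, d, mu=0.5, eps=1e-8):
    """One signed-regressor NLMS iteration (illustrative sketch).

    w : current tap-weight vector, shape (M,)
    x : current input regressor, shape (M,)
    d : desired sample
    Returns the a priori error and the updated weights.
    """
    e = d - x @ w                      # a priori error
    s = np.sign(x)                     # signed regressor: only the polarity of x is used
    # L1-norm normalization: a sum of |x| values, so no multiplications are needed
    w_new = w + mu * e * s / (np.abs(x).sum() + eps)
    return e, w_new

# toy noise-free system identification run
rng = np.random.default_rng(0)
M = 8
w_true = rng.standard_normal(M)
w = np.zeros(M)
for n in range(2000):
    x = rng.standard_normal(M)
    d = x @ w_true
    _, w = srnlms_update(w, x, d)
```

With a white input and no measurement noise, the misalignment decays geometrically toward zero, consistent with the convergence behavior reported for the SRNLMS.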
Due to the effective features of signed regressor adaptive algorithms (low computational complexity and a convergence speed close to that of the conventional algorithm), and to improve the performance of the SRNLMS algorithm, this paper proposes the signed regressor NSAF (SRNSAF) algorithm. The SRNSAF is established through an L_{1}-norm optimization. A constraint is imposed on the decimated filter output to force the a posteriori error to become zero; this constraint guarantees the convergence of the algorithm. The algorithm utilizes the signum of the input regressors at each subband during the adaptation, and again, no multiplications are required for the normalization factor at each subband. To improve the performance of the SRNSAF, the modified SRNSAF (MSRNSAF) is also established. The proposed SRNSAF and MSRNSAF algorithms have lower computational complexity than the NSAF, SPUNSAF, DSNSAF, and FSNSAF, while they have a convergence rate similar to that of the NSAF. In addition, their steady-state error level is close to that of the NSAF. For performance evaluation of any proposed adaptive algorithm, a theoretical analysis is essential [26]. Therefore, in the following, the energy conservation approach [27] is applied to the SRNSAF, and the mean-square performance of the proposed algorithms is analyzed in the stationary and non-stationary environments. This approach does not require a white or Gaussian assumption on the input regressors. On this basis, the transient and steady-state behaviors and the stability bounds of the SRNSAF and MSRNSAF are analyzed, and closed-form relations are derived.

The main contributions of this paper are as follows:

- The establishment of the SRNSAF from the proposed cost function. The algorithm utilizes the signum of the input regressors at each subband, and no multiplications are required for the normalization factor at each subband.
- Mean-square performance analysis of the SRNSAF algorithm in the stationary and non-stationary environments, with theoretical expressions for the transient and steady-state performances.
- Analysis of the mean and mean-square stability bounds of the SRNSAF and MSRNSAF algorithms.
- Comparison of the NSAF, SPUNSAF, DSNSAF, FSNSAF, SRNSAF, and MSRNSAF in terms of convergence speed, steady-state error, and computational complexity for system identification, acoustic echo cancelation, and line echo cancelation applications.
- Justification of the theoretical expressions for the transient, steady-state, and stability bounds with various experiments.
The current paper is organized as follows. In Section 2, the conventional NSAF is briefly reviewed. The proposed SRNSAF and MSRNSAF are presented in Section 3. Section 4 presents the mean-square performance analysis of the SRNSAF. The theoretical stability bound relations are given in Section 5. Next, the computational complexity of the proposed algorithm is discussed. Finally, before concluding the paper, the usefulness of the introduced algorithms is demonstrated by presenting several experimental results.
|.|  Norm of a scalar.
‖.‖^{2}  Squared Euclidean norm of a vector.
‖.‖_{1}  L_{1}-norm of a vector.
(.)^{T}  Transpose of a vector or a matrix.
E{.}  Expectation operator.
sgn(.)  Sign function.
Tr(.)  Trace of a matrix.
λ_{max}  The largest eigenvalue of a matrix.
ℜ^{+}  The set of positive real numbers.
A ⊗ B  Kronecker product of the matrices A and B.
\( {\left\Vert \mathbf{t}\right\Vert}_{\boldsymbol{\Phi}}^2 \)  Φ-weighted Euclidean norm of a column vector t, defined as t^{T}Φt.
diag(.)  Same meaning as the MATLAB operator of the same name: for a vector argument, it returns a diagonal matrix with the diagonal elements given by the vector; for a matrix argument, it extracts the diagonal into a vector.
vec(T)  Creates an M^{2} × 1 column vector t by stacking the columns of the M × M matrix T.
vec(t)  Creates an M × M matrix T from the M^{2} × 1 column vector t.
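The vec(·) operator and the Kronecker product interact through the identity vec(ABC) = (C^{T} ⊗ A)vec(B), which is the standard device for turning weighted-norm recursions of the kind used in mean-square analyses into linear vector recursions. A quick numerical check of the identity (our own illustration, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
M = 3
A, B, C = (rng.standard_normal((M, M)) for _ in range(3))

# vec stacks columns; NumPy flattens row-wise by default, so use order="F"
vec = lambda T: T.flatten(order="F")

lhs = vec(A @ B @ C)
rhs = np.kron(C.T, A) @ vec(B)   # identity: vec(ABC) = (C^T ⊗ A) vec(B)
```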
2 Background on NSAF
3 Sign regressor normalized subband adaptive filter (SRNSAF)
The SRNSAF
The MSRNSAF
4 Mean square performance analysis of SRNSAF in stationary environment
It is important to note that selecting F = I and N = K = 1 leads to the performance analysis of the SRNLMS and MSRNLMS algorithms, which was not presented in [14, 25]. This analysis can be successfully extended to the non-stationary environment. In Appendix 1, the mean-square performance analysis of the SRNSAF is presented in the non-stationary environment.
5 Mean and meansquare stability of the SRNSAF
6 Computational complexity
The computational complexity of NSAF and SRNSAF
Computation  Multiplications 

\( {x}_i(n)={\mathbf{f}}_i^T\mathbf{x}(n) \). The input signal, x(n), is K × 1  NK 
\( {d}_i(n)={\mathbf{f}}_i^T\mathbf{d}(n) \). The desired signal, d(n), is K × 1  NK 
\( e(n)={\sum}_{i=0}^{N-1}{\mathbf{g}}_i^T{\mathbf{e}}_i(n) \)  NK
\( {e}_{i,D}(k)={d}_{i,D}(k)-{\mathbf{x}}_i^T(k)\mathbf{w}(k) \)  M
\( \mathbf{w}\left(k+1\right)=\mathbf{w}(k)+\mu {\sum}_{i=0}^{N-1}\frac{{\mathbf{x}}_i(k)}{{\left\Vert {\mathbf{x}}_i(k)\right\Vert}^2}{e}_{i,D}(k) \)  2M + 1
\( \mathbf{w}\left(k+1\right)=\mathbf{w}(k)+\mu {\sum}_{i=0}^{N-1}\frac{\operatorname{sgn}\left[{\mathbf{x}}_i(k)\right]}{{\left\Vert {\mathbf{x}}_i(k)\right\Vert}_1}{e}_{i,D}(k) \)  1
Total Complexity for NSAF  3 M + 3NK + 1 
Total Complexity for SRNSAF  M + 3NK + 1 
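Combining the rows of the table, one decimated SRNSAF iteration can be sketched as follows. This is an illustrative reading of the update equations above with the analysis filter bank and decimation abstracted away; in the toy run, independent white regressors stand in for the subband signals, and all variable names are ours:

```python
import numpy as np

def srnsaf_update(w, X, dD, mu=0.5, eps=1e-8):
    """One decimated SRNSAF iteration (illustrative sketch).

    w  : full-band tap-weight vector, shape (M,)
    X  : subband regressors, shape (N, M); row i plays the role of x_i(k)
    dD : decimated subband desired samples, shape (N,)
    """
    eD = dD - X @ w                      # subband a priori errors e_{i,D}(k)
    S = np.sign(X)                       # signed regressors: no multiplications
    l1 = np.abs(X).sum(axis=1) + eps     # ||x_i(k)||_1, additions only
    # sum over subbands of sgn[x_i] * e_{i,D} / ||x_i||_1, scaled by mu
    w_new = w + mu * (S * (eD / l1)[:, None]).sum(axis=0)
    return eD, w_new

# toy noise-free run with N stand-in subband regressors
rng = np.random.default_rng(2)
M, N = 16, 4
w_true = rng.standard_normal(M)
w = np.zeros(M)
for k in range(3000):
    X = rng.standard_normal((N, M))
    dD = X @ w_true
    _, w = srnsaf_update(w, X, dD)
```

Note that the only multiplications per iteration are the M products in each subband error and the final scaling, matching the low complexity claimed in the table.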
7 Simulation results
7.1 System identification: AR(2) input signal
Number of multiplications for various NSAF algorithms until convergence in SI and AEC applications
Algorithm  Number of multiplications in SI  Number of multiplications in AEC

NLMS  15,380,000  – 
SRNLMS  5,654,000  – 
NSAF  1,537,000  184,440,000 
SPUNSAF  1,473,000  176,760,000 
DSNSAF  1,224,000  146,880,000 
FSNSAF  1,409,000  169,080,000 
Proposed SRNSAF  1,025,000  123,000,000 
7.2 Acoustic echo cancelation (AEC): speech input signal
For the AEC setup, we consider both the exact-modeling and under-modeling scenarios. In the under-modeling scenario, the NMSD is calculated by padding the tap-weight vector of the adaptive filter with M − J zeros, where J is the length of the adaptive filter, which is shorter than that of the unknown system in this case [31]. In the exact-modeling scenario, the echo path is truncated to the first 128 tap weights [before the dotted line in Fig. 3]; in the under-modeling scenario, the length of the echo path is set to 256. In both scenarios, the length of all the adaptive filters is set to 128. A speech signal is used as the input for the AEC setup [26].
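The under-modeling NMSD computation described above can be sketched as follows (our own helper with hypothetical names; w_hat holds the J adaptive taps and w_o the M unknown-system taps, M > J):

```python
import numpy as np

def nmsd_undermodeling(w_hat, w_o):
    """Normalized mean-square deviation (in dB) when the adaptive filter is
    shorter than the unknown system: w_hat is padded with M - J zeros
    before comparison, so the untracked tail contributes to the deviation."""
    M, J = len(w_o), len(w_hat)
    w_pad = np.concatenate([w_hat, np.zeros(M - J)])
    return 10 * np.log10(np.sum((w_o - w_pad) ** 2) / np.sum(w_o ** 2))
```

Because of the zero padding, the NMSD in the under-modeling scenario is floored by the energy of the untracked tail of the echo path, even when the J adapted taps match perfectly.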
7.3 Line echo cancelation
7.4 Performance in nonstationary environment
7.5 Theoretical performance analysis
7.5.1 Simulation results for transient performance
7.5.2 Simulation results for stability bounds
Stability bounds of the SRNSAF and MSRNSAF for different values of N
Algorithm  \( \frac{2}{\lambda_{\mathrm{max}}\left(E\left\{\operatorname{sgn}\left[\mathbf{X}(k)\mathbf{F}\right]\mathbf{W}(k){\mathbf{F}}^T{\mathbf{X}}^T(k)\right\}\right)} \)  \( \frac{1}{\lambda_{\mathrm{max}}\left({\mathbf{M}}^{1}\mathbf{N}\right)} \)  \( \frac{1}{\max \left(\lambda \left(\mathbf{H}\right)\in {\mathrm{\Re}}^{+}\right)} \)  μ _{max} 

SRNSAF (N = 2)  5.7899  1.3786  4.5400  1.3786 
SRNSAF (N = 4)  5.1303  1.3768  4.2693  1.3768 
SRNSAF (N = 8)  4.2324  1.3704  3.9672  1.3704 
MSRNSAF (N = 2)  7.4337  1.6387  5.2346  1.6387 
MSRNSAF (N = 4)  6.4078  1.6355  4.7916  1.6355 
MSRNSAF (N = 8)  4.9383  1.6302  4.1312  1.6302 
7.5.3 Simulation results for steadystate performance
7.5.4 Theoretical results in nonstationary environment
8 Conclusion
In this paper, the NSAF algorithm with a signed regressor of the input signal was established. The optimization problem was formulated as an L_{1}-norm minimization, and the result of this optimization leads to the sign operation on the input regressors at each subband. The computational complexity of the proposed SRNSAF is lower than that of the previous NSAF family, while its convergence performance is close to that of the NSAF. Therefore, the SRNSAF is a suitable candidate for many applications. To improve the performance of the SRNSAF, the MSRNSAF was introduced. The performance of the SRNSAF was confirmed by several computer simulations in SI, AEC, and LEC applications. Also, the theoretical mean-square performance analysis and the stability bounds of the proposed algorithms were studied and confirmed by different experiments.
Acknowledgements
The authors would like to thank Shahid Rajaee Teacher Training University (SRTTU) for financial support.
Funding
This work was financially supported by Shahid Rajaee Teacher Training University (SRTTU).
Competing interests
The authors declare that they have no competing interests.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
 1. S Haykin, Adaptive Filter Theory, 4th edn. (Prentice-Hall, 2002)
 2. AH Sayed, Adaptive Filters (Wiley, 2008)
 3. K Ozeki, T Umeda, An adaptive filtering algorithm using an orthogonal projection to an affine subspace and its properties. Electron Commun Jpn 67-A, 19–27 (1984)
 4. M Muneyasu, T Hinamoto, A realization of TD adaptive filters using affine projection algorithm. J Franklin Inst 335(7), 1185–1193 (1998)
 5. A Gilloire, M Vetterli, Adaptive filtering in subbands with critical sampling: analysis, experiments, and application to acoustic echo cancellation. IEEE Trans Signal Process 40, 1862–1875 (1992)
 6. KA Lee, WS Gan, SM Kuo, Subband Adaptive Filtering: Theory and Implementation (Wiley, Hoboken, 2009)
 7. MSE Abadi, S Kadkhodazadeh, A family of proportionate normalized subband adaptive filter algorithms. J Franklin Inst 348(2), 212–238 (2011)
 8. KA Lee, WS Gan, Improving convergence of the NLMS algorithm using constrained subband updates. IEEE Signal Process Lett 11, 736–739 (2004)
 9. M de Courville, P Duhamel, Adaptive filtering in subbands using a weighted criterion. IEEE Trans Signal Process 46, 2359–2371 (1998)
 10. SS Pradhan, VE Reddy, A new approach to subband adaptive filtering. IEEE Trans Signal Process 47, 655–664 (1999)
 11. MSE Abadi, JH Husøy, Selective partial update and set-membership subband adaptive filters. Signal Process 88, 2463–2471 (2008)
 12. SE Kim, YS Choi, MK Song, WJ Song, A subband adaptive filtering algorithm employing dynamic selection of subband filters. IEEE Signal Process Lett 17(3), 245–248 (2010)
 13. MK Song, SE Kim, YS Choi, WJ Song, Selective normalized subband adaptive filter with subband extension. IEEE Trans Circuits Syst II: Express Briefs 60(2), 101–105 (2013)
 14. J Nagumo, A Noda, A learning method for system identification. IEEE Trans Automat Contr 12, 282–287 (1967)
 15. DL Duttweiler, Adaptive filter performance with nonlinearities in the correlation multipliers. IEEE Trans Acoust Speech Signal Process 30(8), 578–586 (1982)
 16. A Gersho, Adaptive filtering with binary reinforcement. IEEE Trans Inform Theory 30(3), 191–199 (1984)
 17. WA Sethares, Adaptive algorithms with nonlinear data and error functions. IEEE Trans Signal Process 40(9), 2199–2206 (1992)
 18. S Koike, Analysis of adaptive filters using normalized signed regressor LMS algorithm. IEEE Trans Signal Process 47(10), 2710–2723 (1999)
 19. VJ Mathews, SH Cho, Improved convergence analysis of stochastic gradient adaptive filters using the sign algorithm. IEEE Trans Acoust Speech Signal Process 35(4), 450–454 (1987)
 20. P Wen, S Zhang, J Zhang, A novel subband adaptive filter algorithm against impulsive noise and its performance analysis. Signal Process 127(10), 282–287 (2016)
 21. TMCM Classen, WFG Mecklenbraeuker, Comparison of the convergence of two algorithms for adaptive FIR digital filters. IEEE Trans Acoust Speech Signal Process 29(6), 670–678 (1981)
 22. J Ni, F Li, Variable regularisation parameter sign subband adaptive filter. Electron Lett 64, 1605–1607 (2010)
 23. J Shin, J Yoo, P Park, Variable step-size sign subband adaptive filter. IEEE Signal Process Lett 20, 173–176 (2013)
 24. NJ Bershad, Comments on ‘Comparison of the convergence of two algorithms for adaptive FIR digital filters’. IEEE Trans Acoust Speech Signal Process 33(12), 1604–1606 (1985)
 25. K Takahashi, S Mori, A new normalized signed regressor LMS algorithm, in Proc. ICCS/ISITA (Singapore, 1992), pp. 1181–1185
 26. J Ni, F Li, A variable step-size matrix normalized subband adaptive filter. IEEE Trans Audio Speech Lang Process 18, 1290–1299 (2010)
 27. HC Shin, AH Sayed, Mean-square performance of a family of affine projection algorithms. IEEE Trans Signal Process 52(1), 90–102 (2004)
 28. JJ Jeong, SH Kim, G Koo, SW Kim, Mean-square deviation analysis of multiband-structured subband adaptive filter algorithm. IEEE Trans Signal Process 64(4), 985–994 (2016)
 29. K Dogancay, O Tanrikulu, Adaptive filtering algorithms with selective partial updates. IEEE Trans Circuits Syst II Analog Digit Signal Process 48, 762–769 (2001)
 30. H Malvar, Signal Processing with Lapped Transforms (Artech House, 1992)
 31. C Paleologu, J Benesty, S Ciochina, A variable step-size affine projection algorithm designed for acoustic echo cancellation. IEEE Trans Audio Speech Lang Process 16, 1466–1478 (2008)
 32. ITU-T Rec. G.168, Digital Network Echo Cancellers, 2007
 33. AH Sayed, Fundamentals of Adaptive Filtering (Wiley, 2003)
Copyright information
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.