# Fast basis search for adaptive Fourier decomposition


## Abstract

The adaptive Fourier decomposition (AFD) uses an adaptive basis instead of a fixed basis in the rational analytic function representation and thus achieves a fast energy convergence rate. At each decomposition level, an important step is to determine a new basis element from a dictionary to maximize the extracted energy. The existing basis search method, however, is the exhaustive search, which is rather inefficient. This paper proposes four methods to accelerate the AFD algorithm based on four typical optimization techniques: the unscented Kalman filter (UKF) method, the Nelder-Mead (NM) algorithm, the genetic algorithm (GA), and the particle swarm optimization (PSO) algorithm. In simulations decomposing four representative signals and real ECG signals, compared with the existing exhaustive search method, the proposed schemes achieve much higher computation speed with a fast energy convergence, which in particular makes the AFD possible for real-time applications.

## Keywords

Adaptive Fourier decomposition · Unscented Kalman filter · Nelder-Mead algorithm · Genetic algorithm · Particle swarm optimization algorithm

## Abbreviations

- AFD
Adaptive Fourier decomposition

- GA
Genetic algorithm

- MS
Maximal sifting

- NM
Nelder-Mead

- MSP
Maximal selection principle

- PSO
Particle swarm optimization

- STD
Standard deviation

- UKF
Unscented Kalman filter

## 1 Introduction

The adaptive Fourier decomposition (AFD), introduced by Qian et al., is a type of positive-frequency expansion algorithm based on a given basis search dictionary [1, 2, 3]. It offers fast energy decomposition via an adaptive basis, in contrast to the conventional Fourier decomposition and the other traditional fixed-basis methods, including the wavelet one. Accordingly, the AFD has been successfully applied to system identification, modeling, signal compression, and denoising [4, 5, 6, 7, 8, 9].

For a given signal *G*(*e*^{jt}) in the *H*^{2} space, the core AFD, as the basis of the other AFD methods and itself often abbreviated as AFD, expresses *G*(*e*^{jt}) as

$$ G\left(e^{jt}\right)=\sum_{n=1}^{N}\left\langle G_{n},e_{\left\{a_{n}\right\}}\right\rangle B_{n}\left(e^{jt}\right)+R_{N+1}\left(e^{jt}\right) $$(1)

where *R*_{N}(*e*^{jt}) denotes the standard remainder at the decomposition level *N*, *B*_{n}(*e*^{jt}) denotes the rational orthonormal basis element

$$ B_{n}\left(e^{jt}\right)=\frac{\sqrt{1-\left|a_{n}\right|^{2}}}{1-\overline{a_{n}}e^{jt}}\prod_{k=1}^{n-1}\frac{e^{jt}-a_{k}}{1-\overline{a_{k}}e^{jt}} $$(2)

and *G*_{n}(*e*^{jt}) denotes the reduced remainder at the decomposition level *n* that is defined as *G*_{1}=*G* and

$$ G_{n+1}\left(e^{jt}\right)=\left(G_{n}\left(e^{jt}\right)-\left\langle G_{n},e_{\left\{a_{n}\right\}}\right\rangle e_{\left\{a_{n}\right\}}\left(e^{jt}\right)\right)\frac{1-\overline{a_{n}}e^{jt}}{e^{jt}-a_{n}} $$(3)

where \(e_{\left \{a_{n}\right \}}\) denotes the evaluator parameterized by *a*_{n}, that is

$$ e_{\left\{a_{n}\right\}}\left(e^{jt}\right)=\frac{\sqrt{1-\left|a_{n}\right|^{2}}}{1-\overline{a_{n}}e^{jt}},\qquad a_{n}\in\mathbb{D}=\left\{z\in\mathbb{C}:\left|z\right|<1\right\} $$(4)

and ⟨·,·⟩ denotes the inner product of *G*_{n}(*e*^{jt}) and \(e_{\left \{a_{n}\right \}}\) in the *L*^{2} space. The most important step at each decomposition level is to determine a suitable *a*_{n} to achieve a fast energy convergence rate. In the AFD, the maximal selection principle (MSP) is applied to identify such *a*_{n} by solving the following optimization problem:

$$ a_{n}=\arg\max_{a\in\mathbb{D}}\left|\left\langle G_{n},e_{\left\{a\right\}}\right\rangle\right|^{2} $$(5)

This process to get *G*_{n+1}(*e*^{jt}) from *G*_{n}(*e*^{jt}) through the MSP is called maximal sifting [1]. The convergence, convergence rate, and robustness of the AFD have been theoretically proved in [1, 10, 11]. For convenience, *B*_{n}(*e*^{jt}), *G*(*e*^{jt}), *G*_{n}(*e*^{jt}), and *R*_{n}(*e*^{jt}) are abbreviated as *B*_{n}, *G*, *G*_{n}, and *R*_{n} in the following equations.

There are several versions of the AFD, including the core AFD, the unwinding AFD, and the cyclic AFD, proposed in the literature to determine the suitable *a*_{n} array [2, 12, 13]. The key decomposition strategy of the unwinding AFD coincides with the study of the nonlinear phase unwinding of functions by Coifman et al. [14, 15]. Although they adopt different decomposition processes or basis representations to improve the computation efficiency, these versions of the AFD all require implementing the MSP. Until now, the most common implementation of the MSP in the AFD is the exhaustive search method [1, 3, 4, 7, 8, 9, 12, 13]. In the exhaustive search, the parameters *a*_{1},⋯,*a*_{n}, as indices of the selected dictionary elements, are selected according to the MSP one by one.
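As a concrete illustration of the exhaustive strategy, the sketch below (Python with NumPy; the grid sizes and function names are illustrative, not the paper's implementation) evaluates |⟨*G*_{n}, *e*_{a}⟩|² over a polar grid of dictionary points and keeps the maximizer:

```python
import numpy as np

def evaluator(a, t):
    """Normalized evaluator e_{a}(e^{jt}) = sqrt(1-|a|^2) / (1 - conj(a) e^{jt})."""
    return np.sqrt(1 - abs(a) ** 2) / (1 - np.conj(a) * np.exp(1j * t))

def inner(f, g):
    """Riemann-sum approximation of (1/2*pi) * integral of f * conj(g) dt over
    one period, assuming uniform samples with the endpoint excluded."""
    return np.mean(f * np.conj(g))

def exhaustive_msp(G, t, n_rho=50, n_alpha=100):
    """Exhaustive MSP: scan a polar grid of a = rho*exp(j*alpha) in the unit
    disk and return the dictionary element maximizing |<G, e_a>|^2."""
    best_val, best_a = -np.inf, 0.0
    for rho in np.linspace(0, 0.99, n_rho):
        for alpha in np.linspace(0, 2 * np.pi, n_alpha, endpoint=False):
            a = rho * np.exp(1j * alpha)
            val = abs(inner(G, evaluator(a, t))) ** 2
            if val > best_val:
                best_val, best_a = val, a
    return best_a, best_val

# Toy check: a signal that is itself an evaluator with a = 0.5 should be
# matched by a dictionary point near 0.5.
t = np.linspace(0, 2 * np.pi, 500, endpoint=False)
a_n, energy = exhaustive_msp(evaluator(0.5, t), t)
```

Each decomposition level costs `n_rho * n_alpha` objective evaluations, which is exactly the workload the acceleration methods below aim to avoid.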

To make sure that the search result closely approximates the global optimum, the density of the search dictionary should be sufficiently high. Since the objective function in (5) is highly nonlinear and complicated, this exhaustive and inefficient search strategy is usually time-consuming, which seriously limits the practicability of the AFD. It turns out to be a crucial problem in the implementation of the AFD. Besides the above-mentioned versions of the AFD, Plonka et al. proposed a sparse approximation of exponential sums for the adaptive Fourier series, which estimates a nearly optimal basis and thus provides very good convergence behavior [16]. However, this paper does not focus on proposing a new algorithm for the decomposition based on the Takenaka-Malmquist system but on improving the computation efficiency of the AFD by improving the search strategy in the MS.

Normally, the objective function in (5) is highly nonlinear and contains an uncertain term *G*_{n} that varies for different input signals at different decomposition levels. Therefore, calculating the gradient information and objective function values of (5) is complicated and time consuming. In our previous work [6], a preliminary study of applying the NM method to improve the computation efficiency of the AFD shows that the NM method can reduce the computation time of the AFD to half of that based on the conventional exhaustive search method. However, in [6], the performance of the NM method is only verified on one kind of special signals, i.e., ECG signals, with non-optimal parameter selection, and is only compared with that of the exhaustive search. Besides our previous work, Kirkbas et al. proposed the Jaya-based AFD method for reducing the computation time [17]. Thanks to the advanced search strategy and remarkable convergence speed of the Jaya method, one of the novel population-based heuristic optimization methods, the Jaya-based AFD method can provide faster computation compared to the conventional method while keeping an accurate signal representation. As with our previous work, the performance of the Jaya-based AFD method is only verified on one cosine signal and one kind of specific signals, i.e., speech signals, and only compared with the conventional method. In this paper, four typical optimization algorithms that require neither the gradient information nor too many function evaluations are reviewed and adopted to determine each successive *a*_{n} in the AFD: the unscented Kalman filter (UKF) method, which is based on deterministic sampling; the NM algorithm, which is a simplex method; and the genetic algorithm (GA) as well as the particle swarm optimization (PSO) algorithm, which belong to the stochastic search family.
In order to apply these methods, the optimization problem in (5) is reformulated from a maximization problem with one complex-valued variable to a minimization problem with two real-valued variables. The performances of these proposed methods in the sifting of the AFD are compared with the conventional exhaustive search method in the decomposition of four representative signals, including the heavisine signal, the doppler signal, the block signal, and the bump signal. These signals are chosen because they caricature spatially variable functions arising in imaging, spectroscopy, and other scientific signal processing [18]. In addition, to verify the performance of the proposed methods for real signals, simulations are also carried out for real ECG signals from the MIT-BIH Arrhythmia Database [19, 20]. Simulation results show that, compared with the existing exhaustive search method, all these proposed four optimization methods can provide higher computation speed with a fast energy convergence rate. In addition, the UKF method performs best among all the tested algorithms.

The rest of this paper is organized as follows. In Section 2.1, the reformulated optimization problem and the method for determining the initial points are proposed. In addition, a brief review of the above-mentioned optimization methods, i.e., the NM method, the UKF method, the GA, and the PSO algorithm, is provided. Section 3 and Section 4 show the effects of optimization parameters and the comparison results of these acceleration methods in simulations, as well as detailed computation results of the UKF method. Finally, the conclusion is given in Section 5.

## 2 Proposed implementation method and simulation settings

### 2.1 Efficient implementation of basis search for AFD

#### 2.1.1 Maximal sifting problem reformulation

The optimization problem of the MSP in (5) is a maximization problem with a complex-valued variable. However, the selected optimization methods, including the NM algorithm, the UKF method, the GA, and the PSO algorithm, are all designed for minimization problems with real-valued variables. Therefore, the original optimization problem shown in (5) needs to be adjusted.

The optimization variable *a*_{n} can be represented by the magnitude *ρ*_{n} and the phase *α*_{n} of *a*_{n}, i.e., *a*_{n}=*ρ*_{n}*e*^{jα_{n}}, and thus the corresponding equivalent minimization problem with real-valued variables can be expressed as

$$ \left(\rho_{n},\alpha_{n}\right)=\arg\min_{0\le\rho<1,\,0\le\alpha<2\pi}Y\left(\rho,\alpha\right),\qquad Y\left(\rho,\alpha\right)=-\left|\left\langle G_{n},e_{\left\{\rho e^{j\alpha}\right\}}\right\rangle\right|^{2} $$(6)

Since *Y*(*ρ*,*α*)≥−∥*G*_{n}∥^{2}, the shifted objective *Y*(*ρ*,*α*)+∥*G*_{n}∥^{2} is nonnegative and shares the same minimizer; treating its value as the estimated observation with a true measurement of 0, the formulation can be applied for the UKF method.

#### 2.1.2 Determination of initial points

For the selected optimization methods, initial points are important for the optimization performance. In the NM algorithm and the UKF method, a coarse search step is applied to determine suitable initial points. First, a set of (*ρ*_{n,k},*α*_{n,k}) in the search range 0≤*ρ*_{n,k}<1 is selected randomly with the uniform distribution, where *k*=1,2,3,⋯,*N*_{rand} and *N*_{rand} denotes the total number of points in the dictionary for determining suitable initial points. Then, the objective function values at these points, *Y*(*ρ*_{n,k},*α*_{n,k}), are evaluated. Finally, points at which the objective function takes small values are selected as the initial points. Since the initial points are only required to approximate the point at which the objective function achieves the global optimum, the number of points to be evaluated can be much smaller than that in the conventional exhaustive search method. In the GA and the PSO algorithm, such a coarse initial-point search is already included in their stochastic searching process. Therefore, the dictionary for determining suitable initial points in the GA and the PSO algorithm is equivalent to the set of individuals, i.e., a set of (*ρ*_{n,k},*α*_{n,k}) where *k*=1,2,3,⋯,*N*_{ind}, and *N*_{ind} denotes the total number of individuals.
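The coarse search step can be sketched as follows (Python; `objective` stands in for the reformulated two-variable objective, and the helper name and defaults are hypothetical):

```python
import numpy as np

def coarse_initial_points(objective, n_rand=200, n_keep=3, seed=0):
    """Evaluate the objective at n_rand uniformly random (rho, alpha) points
    in [0,1) x [0,2*pi) and return the n_keep best as initial points for a
    subsequent local optimizer."""
    rng = np.random.default_rng(seed)
    pts = np.column_stack([rng.uniform(0.0, 1.0, n_rand),
                           rng.uniform(0.0, 2.0 * np.pi, n_rand)])
    vals = np.array([objective(p) for p in pts])
    order = np.argsort(vals)          # ascending: smallest objective first
    return pts[order[:n_keep]], vals[order[:n_keep]]

# Toy check with a bowl-shaped objective whose minimum sits at (0.5, 1.0)
pts, vals = coarse_initial_points(
    lambda p: (p[0] - 0.5) ** 2 + (p[1] - 1.0) ** 2)
```

Because only rough proximity to the global optimum is needed, `n_rand` can be orders of magnitude smaller than the exhaustive dictionary size.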

In some applications, the distribution ranges of *a*_{n} have already been recognized, which can be considered as the pre-knowledge for searching suitable *a*_{n} [6, 7]. Accordingly, the number of points in the coarse search process for initial points can be further reduced [7]. More specifically, suppose the distribution range of *a*_{n} is known, that is, the phase search range of *a*_{n} is limited to [*α*_{min},*α*_{max}) and the magnitude search range of *a*_{n} is limited to [*ρ*_{min},*ρ*_{max}); then the *k*th point in the dictionary for searching initial points can be computed as

$$ \rho_{n,k}=\rho_{\min}+u_{k}\left(\rho_{\max}-\rho_{\min}\right) $$(7)

$$ \alpha_{n,k}=\alpha_{\min}+v_{k}\left(\alpha_{\max}-\alpha_{\min}\right) $$(8)

where *u*_{k} and *v*_{k} are two random numbers in [0,1). The distributions of *u*_{k} and *v*_{k} follow the distribution of *a*_{n}. If only the search range of *a*_{n} is known, *u*_{k} and *v*_{k} can be assumed uniformly distributed to achieve the maximum entropy and thus cover most points in the search range [21]. Since the following simulations are carried out mainly for comparing the performances of optimization methods, this strategy is not applied here. However, for real applications, the distribution of *a*_{n} can be recognized first to further reduce the computation time of searching the initial points. Moreover, since the evaluations of the objective function at different points do not interfere with each other, parallel computing can be adopted to enhance the speed of searching the initial points. However, this paper mainly verifies the effects of parameters and the performances of the following optimization methods on the computation efficiency of the AFD. Therefore, in the following simulations, parallel computing is not adopted.

In the next section, the NM algorithm, the UKF method, the GA, and the PSO algorithm will be reviewed, which will be adopted to solve (6). The pseudocode of the AFD based on the four optimization algorithms is shown in Algorithm 1. Although this implementation is based on the core AFD, these optimization algorithms can also be applied for the unwinding AFD and the cyclic AFD.

### 2.2 Adopted optimization algorithms

#### 2.2.1 Nelder-Mead algorithm

The NM algorithm is known as one of the best simplex methods for finding a local minimum of a function [22]. For two variables, this method performs a pattern search based on the three vertices of a triangle [23]. At each stage, the worst vertex, at which the objective function achieves the largest value, is replaced by a new vertex that is generated by reflection, expansion, contraction, or shrinkage and leads to a smaller objective function value than the previous vertices [22]. This process is iterated until converging to a local minimum. The search strategy is shown in Algorithm 2 [22, 24].

The NM algorithm requires only differences of objective function values rather than directly calculating the gradients of the objective function. Owing to this search strategy, the NM algorithm needs much fewer function evaluations in most cases compared with the exhaustive search method.
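Applying NM to the reformulated two-variable objective can be sketched with SciPy's Nelder-Mead implementation (an illustrative sketch, not the paper's MATLAB implementation; the toy remainder, tolerances, and penalty for leaving the unit disk are assumptions):

```python
import numpy as np
from scipy.optimize import minimize

t = np.linspace(0, 2 * np.pi, 500, endpoint=False)
z = np.exp(1j * t)
Gn = np.sqrt(1 - 0.6 ** 2) / (1 - 0.6 * z)  # toy remainder: evaluator at a = 0.6

def Y(x):
    """Reformulated objective Y(rho, alpha) = -|<G_n, e_a>|^2, a = rho*e^{j*alpha}."""
    rho, alpha = x
    if not 0.0 <= rho < 1.0:
        return 0.0                      # penalty: reject points outside the unit disk
    a = rho * np.exp(1j * alpha)
    e_a = np.sqrt(1 - rho ** 2) / (1 - np.conj(a) * z)
    return -abs(np.mean(Gn * np.conj(e_a))) ** 2

# start from a coarse initial point and refine with the simplex search
res = minimize(Y, x0=[0.5, 0.1], method='Nelder-Mead',
               options={'xatol': 1e-6, 'fatol': 1e-9})
```

For this toy remainder the simplex converges to a point close to *a*=0.6 with far fewer evaluations than a dense polar grid.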

#### 2.2.2 Unscented Kalman filter method

The UKF is a nonlinear extension of the Kalman filter that has good performance for highly non-linear state transition and observation models [25]. Based on the deterministic sampling technique called the unscented transform, the UKF minimizes the absolute error between the estimated observation and the true measurement. In the optimization problem, by setting the true measurement as 0 and the estimated observation as the objective function value, the UKF can be considered a numerical optimization method that minimizes the absolute value of the objective function. The search strategy is shown in Algorithm 3 [26, 27]. In the following simulations, the parameters of the UKF method are set following the suggestions in [25], i.e., *β*=0.001 and *κ*=0.

Based on the unscented transform technique, the UKF does not require the gradient information and normally does not need many function evaluations.
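The UKF-as-minimizer idea can be sketched as follows (a minimal illustration, not Algorithm 3 itself: parameter names follow the standard unscented-transform notation, the noise terms and demo objective are assumptions, and the objective is taken nonnegative with minimum near 0 so that the zero "measurement" pulls the state toward the minimizer):

```python
import numpy as np

def ukf_minimize(f, x0, n_iter=40, alpha=1.0, beta=2.0, kappa=1.0,
                 q=1e-3, r=1e-3):
    """UKF used as a derivative-free minimizer: the objective value is the
    predicted observation and 0 is the 'true measurement', so each Kalman
    update drives f(x) toward 0."""
    n = len(x0)
    lam = alpha ** 2 * (n + kappa) - n
    wm = np.full(2 * n + 1, 0.5 / (n + lam))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1.0 - alpha ** 2 + beta)
    x = np.asarray(x0, dtype=float)
    P = 0.1 * np.eye(n)
    for _ in range(n_iter):
        # generate 2n+1 sigma points around the current estimate
        S = np.linalg.cholesky((n + lam) * P)
        X = np.vstack([x, x + S.T, x - S.T])
        Z = np.array([f(xi) for xi in X])     # predicted observations
        z_mean = wm @ Z
        Pzz = wc @ (Z - z_mean) ** 2 + r      # innovation variance
        Pxz = (wc * (Z - z_mean)) @ (X - x)   # state-observation covariance
        K = Pxz / Pzz                         # Kalman gain
        x = x + K * (0.0 - z_mean)            # update toward measurement 0
        P = P - np.outer(K, K) * Pzz + q * np.eye(n)
        P = 0.5 * (P + P.T)                   # keep the covariance symmetric
    return x

# Toy check: drive a nonnegative quadratic toward its minimum at (1, -2)
x_min = ukf_minimize(lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2,
                     [0.0, 0.0])
```

Only 2*n*+1 objective evaluations are spent per iteration, which is the source of the UKF method's speed advantage reported later.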

#### 2.2.3 Evolutionary algorithms

Evolutionary algorithms belong to stochastic search methods [28]. They are inspired by natural biological evolution and the social behavior of species [29]. In this paper, the GA and the PSO algorithm are studied.

The GA is inspired by the improved fitness of biological systems through the evolution, used in several research areas to find exact/approximate solutions to the optimization problems [30]. In the GA, a population of candidate solutions, called chromosomes, containing low objective function values are selected from a random population [31, 32]. These selected chromosomes change their elements, called genes, through crossover or mutation processes to produce offspring chromosomes [33]. Then, these offspring chromosomes are evaluated by the objective function and selected to evolve the population if they could provide better solutions than weak population members do [34]. In the crossover process, selected chromosomes containing better solutions exchange parts of their information to produce offspring chromosomes [35]. As opposed to the crossover process, the mutation process changes a piece of genes in one offspring chromosome randomly, which generates new genetic material to avoid the genetic algorithm converging to local minimum [32].
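The selection, crossover, and mutation steps described above can be sketched as a toy real-coded GA (an illustrative sketch: tournament selection, arithmetic crossover, and Gaussian mutation are common choices, not necessarily the paper's exact operators or parameter values):

```python
import numpy as np

def ga_minimize(f, bounds, n_ind=40, n_gen=60, p_mut=0.2, seed=0):
    """Toy real-coded GA with elitism: evolve a population of candidate
    solutions toward the minimum of f over the given box bounds."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, (n_ind, len(lo)))
    for _ in range(n_gen):
        fit = np.array([f(p) for p in pop])
        new = [pop[np.argmin(fit)]]                  # elitism: keep the best chromosome
        while len(new) < n_ind:
            i, j, k, l = rng.integers(0, n_ind, 4)   # binary tournament selection
            p1 = pop[i] if fit[i] < fit[j] else pop[j]
            p2 = pop[k] if fit[k] < fit[l] else pop[l]
            w = rng.uniform()                        # arithmetic crossover
            child = w * p1 + (1 - w) * p2
            if rng.uniform() < p_mut:                # Gaussian mutation
                child = child + rng.normal(0, 0.05 * (hi - lo))
            new.append(np.clip(child, lo, hi))
        pop = np.array(new)
    fit = np.array([f(p) for p in pop])
    return pop[np.argmin(fit)]

# Toy check on a bowl-shaped objective with minimum at (0.5, 1.0)
best = ga_minimize(lambda p: (p[0] - 0.5) ** 2 + (p[1] - 1.0) ** 2,
                   [[0.0, 1.0], [0.0, 2.0]])
```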

The PSO algorithm is a population-based search algorithm, inspired by the social behavior of a flock of migrating birds trying to reach an unknown destination [32, 36]. The optimization procedure initializes with a random generation of points in the search space, usually called particles [37]. As opposed to the GA, the PSO algorithm does not create new generations. The particles in the population only evolve their movement speed and position to achieve the desired position based on their own experience and also the experience of others [38]. In every search step, the position of the best particle who achieves the minimum objective function value is determined as the best fitness of all particles. Based on this position and its own previous best position, each particle updates its velocity to catch up with the best particle [39].

As evolutionary algorithms are based on stochastic search, they require neither the gradient information nor the coarse sifting of initial points in the computation. In addition, compared to the exhaustive search method, fewer function evaluations are normally needed.
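The velocity-and-position update described for the PSO algorithm can be sketched as follows (an illustrative global-best variant; the inertia weight and acceleration coefficients are assumed values, not the paper's settings):

```python
import numpy as np

def pso_minimize(f, bounds, n_ind=30, n_gen=50, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Toy global-best PSO: each particle moves toward its own best position
    (pbest) and the best position found by any particle (g)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    x = rng.uniform(lo, hi, (n_ind, len(lo)))
    v = np.zeros_like(x)
    pbest = x.copy()
    pval = np.array([f(p) for p in x])
    g = pbest[np.argmin(pval)].copy()
    for _ in range(n_gen):
        r1, r2 = rng.uniform(size=(2,) + x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([f(p) for p in x])
        improved = val < pval                      # update personal bests
        pbest[improved], pval[improved] = x[improved], val[improved]
        g = pbest[np.argmin(pval)].copy()          # update the global best
    return g

# Toy check on a bowl-shaped objective with minimum at (0.5, 1.0)
g_best = pso_minimize(lambda p: (p[0] - 0.5) ** 2 + (p[1] - 1.0) ** 2,
                      [[0.0, 1.0], [0.0, 2.0]])
```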

### 2.3 Evaluation indices

- 1. The reconstruction energy error at the maximum decomposition level *N*_{decom}, denoted as \(E_{N_{\text {decom}}}\), is defined as
  $$ E_{N_{\text{decom}}}=\frac{\left|\left\|s_{\text{ori}}\right\|^{2}-\left\|s_{N_{\text{decom}}}\right\|^{2}\right|}{\left\|s_{\text{ori}}\right\|^{2}}\times100\% $$(9)
  where *s*_{ori} and \(s_{N_{\text {decom}}}\) denote the original signal and the reconstructed signal at the maximum decomposition level, respectively. \(E_{N_{\text {decom}}}\) is assessed to verify whether the AFD based on the optimization algorithm converges;
- 2. The absolute difference between the reconstruction error in the sense of energy at the *N*th decomposition level, *E*_{N}, of the conventional exhaustive search method and that of the other optimization methods at each decomposition level, where *E*_{N} is defined as
  $$ E_{N}=\frac{\left|\left\|s_{\text{ori}}\right\|^{2}-\left\|s_{N}\right\|^{2}\right|}{\left\|s_{\text{ori}}\right\|^{2}}\times100\% $$(10)
  and *s*_{N} denotes the reconstructed signal at the *N*th decomposition level; this index verifies whether the energy convergence rate remains satisfactory, taking the search results of the conventional exhaustive search method as the reference;
- 3. The computation time, which evaluates the computation efficiency of the AFD. The unit of time for the following simulation results is the second.
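The energy-based indices (9) and (10) reduce to a one-line computation; a minimal sketch (the function name is hypothetical):

```python
import numpy as np

def reconstruction_energy_error(s_ori, s_rec):
    """Relative energy difference (in percent) between the original signal
    and a reconstruction, as in (9) and (10)."""
    e_ori = np.sum(np.abs(s_ori) ** 2)
    e_rec = np.sum(np.abs(s_rec) ** 2)
    return abs(e_ori - e_rec) / e_ori * 100.0
```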

All the following simulations are conducted in MATLAB R2014a on a PC equipped with an Intel(R) Core(TM) i7-4770 CPU @ 3.40 GHz and 12 GB RAM. Moreover, in the following simulations, all numerical integrations in Algorithm 1 are implemented based on the 6th-order Newton-Cotes formula. The lengths of the processed signals in the following simulations are all set as 2500 sample points.

## 3 Simulation results

### 3.1 Effects of optimization parameters

In this section, the effect of each optimization parameter in the decomposition of the complex-valued signal *G*(*z*) is shown in detail. For real-valued signals in Section 3.2 and real ECG signals in Section 3.3, the effects of the parameters are similar to the case for *G*(*z*).

The total number of points that need to be evaluated in the MS process strongly affects the optimization accuracy and the workload. The more points evaluated, the better the computation result but the longer the computation time. Therefore, there exists a trade-off between computation accuracy and speed. The control parameters for the NM algorithm and the UKF method are the number of points *N*_{rand} in the dictionary for searching initial points and the maximum iteration number *N*_{iter}. The control parameters for the GA and the PSO algorithm are the number of individuals *N*_{ind} and the maximum number of generations *N*_{gen}. For the UKF method, to get the best searching speed, *L* and *β* are set as 2 and 0.001, respectively.

The maximum decomposition level *N*_{decom} is set to 20 for the complex-valued signal *G*(*z*) since the first 20 decomposition components are enough to approximate *G*(*z*) according to Ref. [1]. According to the following simulation results, the suggested ranges of the parameters for the optimization algorithms are shown in Table 1, which can lead to relatively low computation time and high optimization accuracy at the same time.

**Table 1** Suggested selection ranges of parameters in NM, UKF, GA, and PSO algorithms

| Algorithm | *N*_{rand} | *N*_{iter} |
| --- | --- | --- |
| NM | [600, 1000] | [10, 200] |
| UKF | [200, 1000] | [1, 8] |

| Algorithm | *N*_{gen} | *N*_{ind} |
| --- | --- | --- |
| GA | [5, 200] | [10, 50] |
| PSO | [10, 40] | [10, 40] |

For the NM algorithm, *N*_{rand} and *N*_{iter} are selected from [100,2000] and [1,200], respectively, for evaluations. Simulation results of the *G*(*z*) signal are illustrated in Fig. 3. It can be seen that all values of \(E_{N_{\text {decom}}}\) are small no matter which values of the parameters are selected, as shown in Fig. 3b, which means that the NM algorithm can keep the convergence of the AFD. In addition, the simulation result shown in Fig. 3a indicates that the effect of *N*_{iter} in the given range on the computation speed is not very large. However, the absolute differences of *E*_{N} between the conventional method and the NM method are large and unstable when *N*_{iter} is smaller than 10 and *N*_{rand} is smaller than 600. A major reason is that, when the evaluated points are not enough, the NM algorithm may not reach the global optimum, which can deteriorate the convergence rate of the remainder energy. Although increasing *N*_{rand} and *N*_{iter} could increase the computation accuracy, the computation time would also be increased. In summary, for the NM algorithm, the suggested ranges of *N*_{rand} and *N*_{iter} are [600,1000] and [10,200], respectively.

For the UKF method, *N*_{rand} and *N*_{iter} are selected from [100,2000] and [1,20], respectively, for evaluations. Simulation results are illustrated in Fig. 4. It can be seen that the values of \(E_{N_{\text {decom}}}\) are small. However, for some *N*_{iter} values when *N*_{rand} is smaller than 200, the absolute differences of *E*_{N} between the conventional method and the UKF method cannot stay small and stable, and consequently the convergence rate of the remainder energy cannot stay high. In addition, the computation time increases considerably as *N*_{rand} and *N*_{iter} increase, as shown in Fig. 4a. Moreover, based on the simulation results, the effect of *N*_{iter} is larger than that of *N*_{rand}. In summary, the suggested ranges of *N*_{rand} and *N*_{iter} for the UKF method are [200,1000] and [1,8], respectively.

For the GA, *N*_{gen} and *N*_{ind} are both selected in [5,200] for evaluations. Simulation results are illustrated in Fig. 5. Although all values of \(E_{N_{\text {decom}}}\) are small, the differences between *E*_{N} of the conventional method and the GA become large and unstable when *N*_{ind} is smaller than 20. Therefore, to achieve a fast energy convergence rate, *N*_{ind} is required to be larger than 20. Moreover, *N*_{ind} affects the computation speed significantly: as *N*_{ind} increases, the computation time increases drastically. In summary, the ranges of *N*_{gen} and *N*_{ind} for the GA are suggested as [5,200] and [10,50], respectively.

For the PSO algorithm, *N*_{ind} and *N*_{gen} are both selected in [10,100] for evaluations. Simulation results are shown in Fig. 6. It can be seen that all values of \(E_{N_{\text {decom}}}\) are small. Although the absolute differences of *E*_{N} between the conventional method and the PSO algorithm are relatively large and unstable when *N*_{ind} and *N*_{gen} are small, they are still acceptable in comparison with the simulation results of the NM algorithm, the UKF method, and the GA. Moreover, since all individual states need to be updated one by one, the computation time increases considerably as *N*_{ind} and *N*_{gen} increase. In summary, the suggested ranges of *N*_{gen} and *N*_{ind} for the PSO algorithm are both [10,40].

### 3.2 Optimization performance comparison for typical signals

The maximum decomposition levels *N*_{decom} of the AFD for these four signals are set as 100 to make sure that the AFD can extract most of the energy from the original signal.

**Table 2** Selected parameters of NM, UKF, GA, and PSO algorithms for simulations

| Algorithm | *N*_{rand} | *N*_{iter} |
| --- | --- | --- |
| NM | 700 | 10 |
| UKF | 700 | 8 |

| Algorithm | *N*_{gen} | *N*_{ind} |
| --- | --- | --- |
| GA | 100 | 20 |
| PSO | 10 | 10 |

**Table 3** Computation time for five optimization algorithms in four typical real-valued signals

| | Heavisine signal | Doppler signal | Bump signal | Block signal | Mean | STD |
| --- | --- | --- | --- | --- | --- | --- |
| Conventional method | 5.496 | 5.451 | 5.647 | 5.501 | 5.524 | 0.085 |
| NM method | 1.286 | 1.172 | 1.316 | 1.331 | 1.276 | 0.072 |
| UKF method | | | | | | |
| GA method | 1.607 | 1.447 | 1.934 | 2.424 | 1.853 | 0.431 |
| PSO method | 5.094 | 8.047 | 5.031 | 4.953 | 5.781 | 1.512 |

**Table 4** Reconstruction energy error of different optimization algorithms for different signals

| | Heavisine signal | Doppler signal | Bump signal | Block signal |
| --- | --- | --- | --- | --- |
| Conventional method | 1.1×10 | 7.3×10 | 0.1×10 | 0.6×10 |
| NM method | 1.4×10 | 3.8×10 | 0.1×10 | 2.1×10 |
| UKF method | 8.0×10 | 10.0×10 | 1.2×10 | 4.4×10 |
| GA method | 9.5×10 | 6.8×10 | 2.4×10 | 1.7×10 |
| PSO method | 1.5×10 | 1.3×10 | 1.6×10 | 1.1×10 |

### 3.3 Optimization performance comparison for real ECG signals

**Table 5** Computation time of optimization algorithms for real ECG signals

| Real ECG records | Conventional method | NM method | UKF method | GA method | PSO method |
| --- | --- | --- | --- | --- | --- |
| 100 | 2.895 | 0.722 | | 1.156 | 2.211 |
| 101 | 2.982 | 0.618 | | 1.019 | 2.015 |
| 102 | 2.871 | 0.594 | | 1.022 | 2.071 |
| 103 | 2.798 | 0.554 | | 1.002 | 2.010 |
| 104 | 2.793 | | 0.578 | 0.928 | 1.994 |
| 105 | 2.824 | 0.546 | | 0.966 | 1.986 |
| 106 | 3.172 | | 0.989 | 1.348 | 2.148 |
| 107 | 3.293 | 0.595 | | 1.122 | 2.046 |
| 108 | 3.294 | 0.577 | | 1.013 | 2.059 |
| 109 | 2.863 | 0.563 | | 0.959 | 2.067 |
| 119 | 2.775 | | 0.546 | 0.945 | 1.994 |
| 201 | 3.029 | | 0.565 | 0.996 | 2.520 |
| 202 | 3.059 | 0.585 | | 0.981 | 2.049 |
| 203 | 2.942 | 0.831 | | 0.999 | 2.117 |
| 205 | 3.011 | 0.903 | | 1.113 | 2.366 |
| 207 | 2.907 | 0.607 | | 1.025 | 2.117 |
| 208 | 2.958 | | 0.606 | 1.033 | 2.067 |
| 209 | 2.888 | 0.573 | | 1.271 | 2.093 |
| 213 | 2.847 | 0.620 | | 0.961 | 2.077 |
| Mean | 2.958 | 0.611 | | 1.045 | 2.106 |
| STD | 0.155 | 0.101 | 0.110 | 0.111 | 0.134 |

## 4 Discussions

*a*

_{n}sequence. Such small number of iterations and small number of objective function evaluations will lead the UKF-based AFD achieve the short computation time.

Besides the parameters *N*_{rand} and *N*_{iter} mentioned in Section 3.1, there are two other parameters, i.e., the spread of the sigma points *β* and the secondary scaling parameter *κ*, as shown in Algorithm 3. These two parameters determine the scaling parameter *λ* defined as [25]

$$ \lambda=\beta^{2}\left(L+\kappa\right)-L $$

In the previous simulations, *β*=0.001 and *κ*=0, as suggested in [25]. These two parameters also affect the optimization results of the UKF method. Figure 10a and b illustrate the effects of *β* on the convergence of the UKF method, where \(\overline {\mathbf {x}}_{i}\) denotes the estimated observation in the *i*th iteration, which can be considered as the updated optimization result obtained from the *i*th iteration, and \(\mathbf {P}_{\mathbf {X}}^{i}\) and \(\mathbf {P}_{\mathbf {X}}^{\text {final}}\) denote the covariances of the sample points **X** in the *i*th and the final iterations, which can be considered as the updated descent steps obtained from the *i*th and the final iterations. It can be seen that the error between \(\overline {\mathbf {x}}_{i}\) and the optimum is small at the beginning iterations when *β* is small. In addition, \(\overline {\mathbf {X}}\) and **P**_{X} achieve small values faster when *β* is smaller. Besides the parameter *β*, Fig. 10c and d show the effects of *κ* on the convergence of \(\overline {\mathbf {X}}\) and **P**_{X}. It can be seen that, when *κ* is close to 0, \(\overline {\mathbf {X}}\) and **P**_{X} converge to small values faster, though the differences between \(\mathbf {P}_{\mathbf {X}}^{i}\) and \(\mathbf {P}_{\mathbf {X}}^{\text {final}}\) are not the smallest at the beginning iterations when *κ*=0. Based on these simulation results, the suggested selections of *κ* and *β* in [25] are also suitable for the AFD.

## 5 Conclusion

In order to improve the computation efficiency of the AFD, four typical optimization algorithms, including the UKF method, the NM algorithm, the GA, and the PSO algorithm, are adopted in the basis search and compared to the conventional exhaustive search method. The maximization problem with one complex-valued variable in the basis search of the AFD is reformulated as an equivalent minimization problem with two real-valued variables. Simulations are conducted on four typical signals, including the heavisine signal, the doppler signal, the bump signal, and the block signal, which represent spatially variable functions appearing in signal processing. To verify the performance for real signals, simulations are also carried out on real ECG signals. Comparative results show that the UKF method can achieve the highest computation speed with a fast energy convergence rate for these signals.

## Notes

### Availability of data and materials

Please contact the author for data requests.

### Funding

This work is supported in part by the Macau Science and Technology Development Fund (FDCT) under projects 036/2009/A, 142/2014/SB, 055/2015/A2 and 079/2016/A2, the University of Macau Research Committee under MYRG projects 069(Y1-L2)-FST13-WF, 2014-00174-FST, 2016-00240-FST, 2016-00053-FST and 2017-00207-FST.

### Authors’ contributions

All the authors have participated in writing the manuscript. All authors read and approved the manuscript.

### Ethics approval and consent to participate

Not applicable.

### Consent for publication

Not applicable.

### Competing interests

The authors declare that they have no competing interests.

### Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## References

1. T. Qian, L. Zhang, Z. Li, Algorithm of adaptive Fourier decomposition. IEEE Trans. Signal Process. **59**(12), 5899–5906 (2011). https://doi.org/10.1109/TSP.2011.2168520
2. T. Qian, Adaptive Fourier decompositions and rational approximations, part I: Theory. Int. J. Wavelets Multiresolut. Inf. Process. **12**(5), 1461008 (2014). https://doi.org/10.1142/S0219691314610086
3. L. Zhang, W. Hong, W. Mai, T. Qian, Adaptive Fourier decomposition and rational approximation – part II: Software system design and development. Int. J. Wavelets Multiresolut. Inf. Process. **12**(5), 1461009 (2014). https://doi.org/10.1142/S0219691314610098
4. W. Mi, T. Qian, F. Wan, A fast adaptive model reduction method based on Takenaka–Malmquist systems. Syst. Control Lett. **61**(1), 223–230 (2012). https://doi.org/10.1016/j.sysconle.2011.10.016
5. Z. Wang, F. Wan, C. M. Wong, L. Zhang, Adaptive Fourier decomposition based ECG denoising. Comput. Biol. Med. **77**, 195–205 (2016). https://doi.org/10.1016/j.compbiomed.2016.08.013
6. Z. Wang, L. Yang, C. M. Wong, F. Wan, Fast basis searching method of adaptive Fourier decomposition based on Nelder-Mead algorithm for ECG signals, in *12th Int. Symp. Neural Networks* (Springer, Jeju, South Korea, 2015), pp. 305–314. https://doi.org/10.1007/978-3-319-25393-0_34
7. J. Ma, T. Zhang, M. Dong, A novel ECG data compression method using adaptive Fourier decomposition with security guarantee in e-health applications. IEEE J. Biomed. Health Inform. **19**(3), 986–994 (2015). https://doi.org/10.1109/JBHI.2014.2357841
8. Q. Chen, T. Qian, Y. Li, W. Mai, X. Zhang, Adaptive Fourier tester for statistical estimation. Math. Methods Appl. Sci. **39**(12), 3478–3495 (2016). https://doi.org/10.1002/mma.3795
9. C. Tan, L. Zhang, H. T. Wu, A novel Blaschke unwinding adaptive Fourier decomposition based signal compression algorithm with application on ECG signals. IEEE J. Biomed. Health Inform., 1–11 (2018). https://doi.org/10.1109/JBHI.2018.2817192
10. T. Qian, Y.-B. Wang, Adaptive Fourier series—a variation of greedy algorithm. Adv. Comput. Math. **34**(3), 279–293 (2011). https://doi.org/10.1007/s10444-010-9153-4
11. T. Qian, Y. Wang, Remarks on adaptive Fourier decomposition. Int. J. Wavelets Multiresolut. Inf. Process. **11**(1), 1350007 (2013). https://doi.org/10.1142/S0219691313500070
12. T. Qian, Intrinsic mono-component decomposition of functions: an advance of Fourier theory. Math. Methods Appl. Sci. **33**(7), 880–891 (2010). https://doi.org/10.1002/mma.1214
13. T. Qian, Cyclic AFD algorithm for the best rational approximation. Math. Methods Appl. Sci. **37**(6), 846–859 (2014). https://doi.org/10.1002/mma.2843
14. R. R. Coifman, S. Steinerberger, H.-T. Wu, Carrier frequencies, holomorphy, and unwinding. SIAM J. Math. Anal. **49**(6), 4838–4864 (2017). https://doi.org/10.1137/16M1081087
15. R. R. Coifman, S. Steinerberger, Nonlinear phase unwinding of functions. J. Fourier Anal. Appl. **23**(4), 778–809 (2017). https://doi.org/10.1007/s00041-016-9489-3
16. G. Plonka, V. Pototskaia, Computation of adaptive Fourier series by sparse approximation of exponential sums. J. Fourier Anal. Appl., 1–29 (2018). https://doi.org/10.1007/s00041-018-9635-1
17. A. Kirkbas, A. Kizilkaya, E. Bogar, Optimal basis pursuit based on Jaya optimization for adaptive Fourier decomposition, in *2017 40th International Conference on Telecommunications and Signal Processing (TSP)* (IEEE, Barcelona, Spain, 2017), pp. 538–543. https://doi.org/10.1109/TSP.2017.8076045
18. D. L. Donoho, I. M. Johnstone, Ideal spatial adaptation by wavelet shrinkage. Biometrika **81**(3), 425–455 (1994). https://doi.org/10.1093/biomet/81.3.425
19. A. L. Goldberger, L. A. N. Amaral, L. Glass, J. M. Hausdorff, P. C. Ivanov, R. G. Mark, J. E. Mietus, G. B. Moody, C.-K. Peng, H. E. Stanley, PhysioBank, PhysioToolkit, and PhysioNet: components of a new research resource for complex physiologic signals. Circulation **101**(23), 215–220 (2000). https://doi.org/10.1161/01.CIR.101.23.e215
20. G. B. Moody, R. G. Mark, The impact of the MIT-BIH arrhythmia database. IEEE Eng. Med. Biol. Mag. **20**(3), 45–50 (2001). https://doi.org/10.1109/51.932724
21. S. Y. Park, A. K. Bera, Maximum entropy autoregressive conditional heteroskedasticity model. J. Econom. **150**(2), 219–230 (2009). https://doi.org/10.1016/j.jeconom.2008.12.014
22. K. Klein, J. Neira, Nelder-Mead simplex optimization routine for large-scale problems: a distributed memory implementation. Comput. Econ. **43**(4), 447–461 (2013). https://doi.org/10.1007/s10614-013-9377-8
23. J. Nocedal, S. J. Wright, *Numerical Optimization* (Springer, New York, USA, 1999)
24. J. A. Nelder, R. Mead, A simplex method for function minimization. Comput. J. **7**(4), 308–313 (1965). https://doi.org/10.1093/comjnl/7.4.308
25. E. A. Wan, R. Van Der Merwe, The unscented Kalman filter for nonlinear estimation, in *Proc. IEEE 2000 Adaptive Systems for Signal Processing, Communications, and Control Symposium (AS-SPCC)* (IEEE, Alberta, Canada, 2000), pp. 153–158. https://doi.org/10.1109/ASSPCC.2000.882463
26. S. J. Julier, J. K. Uhlmann, New extension of the Kalman filter to nonlinear systems, in *SPIE 3068, Signal Processing, Sensor Fusion, and Target Recognition VI* (SPIE, Orlando, FL, USA, 1997), pp. 182–193. https://doi.org/10.1117/12.280797
27. S. Lienhard, J. G. Malcolm, C.-F. Westin, Y. Rathi, A full bi-tensor neural tractography algorithm using the unscented Kalman filter. EURASIP J. Adv. Signal Process. **2011**(1), 77 (2011). https://doi.org/10.1186/1687-6180-2011-77
28. M. S. White, S. J. Flockton, A comparison of evolutionary algorithms for tracking time-varying recursive systems. EURASIP J. Adv. Signal Process. **2003**(8), 396340 (2003). https://doi.org/10.1155/S1110865703303117
29. R. Salvador, F. Moreno, T. Riesgo, L. Sekanina, Evolutionary approach to improve wavelet transforms for image compression in embedded systems. EURASIP J. Adv. Signal Process. **2011**(1), 973806 (2011). https://doi.org/10.1155/2011/973806
30. J. Riionheimo, V. Välimäki, Parameter estimation of a plucked string synthesis model using a genetic algorithm with perceptual fitness calculation. EURASIP J. Adv. Signal Process. **2003**(8), 758284 (2003). https://doi.org/10.1155/S1110865703302100
31. G. Pignalberi, R. Cucchiara, L. Cinque, S. Levialdi, Tuning range image segmentation by genetic algorithm. EURASIP J. Adv. Signal Process. **2003**(8), 683043 (2003). https://doi.org/10.1155/S1110865703303087
32. E. Elbeltagi, T. Hegazy, D. Grierson, Comparison among five evolutionary-based optimization algorithms. Adv. Eng. Inform. **19**(1), 43–53 (2005). https://doi.org/10.1016/j.aei.2005.01.004
33. L. M. Schmitt, Theory of genetic algorithms. Theor. Comput. Sci. **259**(1-2), 1–61 (2001). https://doi.org/10.1016/S0304-3975(00)00406-0
34. S. Panda, N. P. Padhy, Comparison of particle swarm optimization and genetic algorithm for FACTS-based controller design. Appl. Soft Comput. **8**(4), 1418–1427 (2008). https://doi.org/10.1016/j.asoc.2007.10.009
35. L. M. Schmitt, Theory of genetic algorithms II: models for genetic operators over the string-tensor representation of populations and convergence to global optima for arbitrary fitness function under scaling. Theor. Comput. Sci. **310**(1-3), 181–231 (2004). https://doi.org/10.1016/S0304-3975(03)00393-1
36. B. Li, Z. Zhou, W. Zou, W. Gao, Particle swarm optimization based noncoherent detector for ultra-wideband radio in intensive multipath environments. EURASIP J. Adv. Signal Process. **2011**(1), 341836 (2011). https://doi.org/10.1155/2011/341836
37. R. He, K. Wang, Q. Li, Y. Yuan, N. Zhao, Y. Liu, H. Zhang, A novel method for the detection of R-peaks in ECG based on K-nearest neighbors and particle swarm optimization. EURASIP J. Adv. Signal Process. **2017**(1), 82 (2017). https://doi.org/10.1186/s13634-017-0519-3
38. M. Clerc, J. Kennedy, The particle swarm – explosion, stability, and convergence in a multidimensional complex space. IEEE Trans. Evol. Comput. **6**(1), 58–73 (2002). https://doi.org/10.1109/4235.985692
39. I. C. Trelea, The particle swarm optimization algorithm: convergence analysis and parameter selection. Inf. Process. Lett. **85**(6), 317–325 (2003). https://doi.org/10.1016/S0020-0190(02)00447-7
40. T. Qian, H. Li, M. Stessin, Comparison of adaptive mono-component decompositions. Nonlinear Anal. Real World Appl. **14**(2), 1055–1074 (2013). https://doi.org/10.1016/j.nonrwa.2012.08.017

## Copyright information

**Open Access** This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.