Adaptive Rate Sampling and Filtering Based on Level Crossing Sampling

  • Saeed Mian Qaisar
  • Laurent Fesquet
  • Marc Renaudin
Open Access
Research Article

Abstract

The growing sophistication of mobile systems and sensor networks demands more and more processing resources. In order to maintain system autonomy, energy saving is becoming one of the most difficult industrial challenges in mobile computing. Most efforts to achieve this goal focus on improving the embedded systems design and the battery technology, but very few studies exploit the time-varying nature of the input signal. This paper aims to achieve power efficiency by intelligently adapting the processing activity to the input signal local characteristics. This is done by completely rethinking the processing chain, adopting a nonconventional sampling scheme and adaptive rate filtering. The proposed approach, based on the LCSS (Level Crossing Sampling Scheme), presents two filtering techniques able to adapt their sampling rate and filter order by online analyzing the input signal variations. Indeed, the principle is to intelligently exploit the signal local characteristics (which are usually never considered) in order to filter only the relevant signal parts, by employing filters of the relevant order. This idea leads to a drastic gain in computational efficiency and hence in processing power when compared to the classical techniques.

Keywords

Finite Impulse Response Filter; Filter Order; Adaptive Rate; Interpolation Order; Local Bandwidth

1. Introduction

This work is part of a larger project aimed at enhancing the signal processing chains implemented in mobile systems. The motivation is to reduce their size, cost, processing noise, electromagnetic emission and especially power consumption, as they are most often powered by batteries. This can be achieved by intelligently reorganizing their associated signal processing theory and architecture. The idea is to combine event driven signal processing with asynchronous circuit design, in order to reduce the system processing activity and energy cost.

Almost all natural signals, like speech, seismic, and biomedical signals, are time varying in nature. Moreover, man-made signals like Doppler, Amplitude Shift Keying (ASK), and Frequency Shift Keying (FSK) fall into the same category. The spectral contents of these signals vary with time, which is a direct consequence of the signal generation process [1].

Classical systems are based on the Nyquist signal processing architectures. These systems do not exploit the signal variations. Indeed, they sample the signal at a fixed rate without taking into account the intrinsic signal nature. Moreover, they are strongly constrained by the Shannon theory, especially in the case of low activity sporadic signals like electrocardiogram, phonocardiogram, seismic, and so forth. This causes a large number of samples without any relevant information to be captured and processed, a useless increase of the system activity, and an increase of its power consumption.

The power efficiency can be enhanced by intelligently adapting the system processing load to the signal local variations. To this end, a signal driven sampling scheme based on level-crossing is employed. The Level Crossing Sampling Scheme (LCSS) [2] adapts the sampling rate by following the local characteristics of the input signal [3, 4]. Hence, it drastically reduces the activity of the post-processing chain, because it only captures the relevant information [5, 6]. In this context, LCSS based Analog to Digital Converters (LCADCs) have been developed [7, 8, 9]. Algorithms for processing [6, 10, 11, 12] and analysis [3, 5, 13, 14] of the nonuniformly time-spaced sampled data obtained with the LCSS have also been developed.

Filtering is a basic operation required in almost every signal processing chain. Therefore, this paper focuses on the development of efficient Finite Impulse Response (FIR) filtering techniques. The idea is to pilot the system processing activity by the input signal variations. Following this idea, an efficient solution is proposed by intelligently combining the features of both nonuniform and uniform signal processing tools, which promises a drastic computational gain of the proposed techniques compared to the classical one.

Section 2 briefly reviews the nonuniform signal processing tools employed in the proposed approach. The complete functionality of the proposed filtering techniques is described in Section 3. Section 4 demonstrates the appealing features of the proposed techniques with the help of an illustrative example. The computational complexities of both proposed techniques are deduced and compared, with each other and with the classical case, in Section 5. Section 6 discusses the processing error. In Section 7, the performance of the proposed techniques is evaluated for a speech signal. Section 8 finally concludes the article.

2. Nonuniform Signal Processing Tools

2.1. LCSS (Level Crossing Sampling Scheme)

The LCSS belongs to the signal-dependent sampling schemes like zero-crossing sampling [15], Lebesgue sampling [16], and reference signal crossing sampling [17]. The concept of LCSS is not new and has been known at least since the 1950s [18]. It is also known as event-based sampling [19, 20]. In recent years, there has been considerable interest in the LCSS across a broad spectrum of technologies and applications. In [21, 22, 23, 24], authors have employed it for monitoring and control systems. It has also been suggested in the literature for compression [2], random processes [25], and band-limited Gaussian random processes [26].

The LCSS is a natural choice for sampling time-varying signals. It lets the signal dictate the sampling process [4]. The nonuniformity of the sampling process represents the signal local variations [3]. In the case of LCSS, a sample is captured only when the input analog signal x(t) crosses one of the predefined thresholds. The samples are not uniformly spaced in time because they depend on the x(t) variations, as is clear from Figure 1.
Figure 1

Level-crossing sampling scheme.

Let a set of levels which span the analog signal amplitude range be L. These levels are equally spaced by a quantum q. When x(t) crosses one of these predefined levels, a sample is taken [2]. This sample is the couple (x_n, t_n) of an amplitude x_n and a time t_n. x_n is clearly equal to one of the levels, while t_n can be computed by employing (1):

t_n = t_{n-1} + dt_n.    (1)

In (1), t_n is the current sampling instant, t_{n-1} is the previous one, and dt_n is the time elapsed between the current and the previous sampling instants.
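
To make the crossing-detection idea concrete, here is a minimal Python sketch that emulates the LCSS on a densely simulated signal. The test signal, the 3-bit level set, and the linear location of the crossing instants on the dense grid are illustrative assumptions, not part of the original scheme.

```python
import numpy as np

def level_crossing_sample(t, x, levels):
    """Capture a couple (t_n, x_n) each time x crosses one of the levels."""
    samples = []
    for n in range(1, len(x)):
        lo, hi = min(x[n - 1], x[n]), max(x[n - 1], x[n])
        for L in levels:
            if lo < L <= hi:
                # crossing instant located by linear interpolation on the dense grid
                frac = (L - x[n - 1]) / (x[n] - x[n - 1])
                samples.append((t[n - 1] + frac * (t[n] - t[n - 1]), L))
    samples.sort()
    return samples

# toy demonstration: a 5 Hz sine observed on a dense grid, 3-bit level set
t = np.linspace(0.0, 1.0, 100_000)
x = np.sin(2 * np.pi * 5 * t)
q = 2.0 / (2 ** 3 - 1)                      # quantum spanning the [-1, 1] range
levels = np.arange(-1.0, 1.0 + q / 2, q)
pairs = level_crossing_sample(t, x, levels)
t_n = np.array([p[0] for p in pairs])
dt_n = np.diff(t_n)                         # the nonuniform intervals used in (1)
print(len(pairs), dt_n.min(), dt_n.max())
```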

2.2. LCADC (LCSS-Based Analog to Digital Converter)

Classically, during an ideal A/D conversion process the sampling instants are exactly known, whereas the sample amplitudes are quantized at the ADC resolution [27], which is defined by the ADC number of bits. This error is characterized by the Signal to Noise Ratio (SNR) [27], which can be expressed by

SNR_dB = 6.02·M + 1.76 dB.    (2)

Here, M is the ADC number of bits. It follows that the SNR of an ideal ADC depends only on M and can be improved by 6.02 dB for each increment in M.

The A/D conversion process which occurs in the LCADCs [7, 8, 9] is dual in nature. Ideally, in this case the sample amplitudes are exactly known, since they are exactly equal to one of the predefined levels, while the sampling instants are quantized at the timer resolution T_timer. According to [7, 8], the SNR in this case is given by

SNR_dB = 10·log10( 3·P_x / (P_{x'}·T_timer^2) ).    (3)

Here, P_x and P_{x'} are the powers of x(t) and of its derivative, respectively. It shows that in this case the SNR does not depend on M any more, but on the x(t) characteristics and on T_timer. An improvement of about 6.02 dB in the SNR can be achieved by simply halving T_timer.

The choice of M is however crucial. It should be taken large enough to ensure a proper reconstruction of the signal. This problem has been addressed in [28, 29, 30, 31]. In particular, in [31], it is shown that a band-limited signal can be ideally reconstructed from nonuniformly spaced samples if the average number of samples satisfies the Nyquist criterion. In the case of LCADCs, the average sampling frequency depends on M and the signal characteristics [7, 8, 9]. Thus, for a given application an appropriate M should be chosen in order to respect the reconstruction criterion [31].
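
The reconstruction criterion of [31] reduces to a simple check on the average rate of the captured samples. The sketch below, with assumed helper names and a direct 2·f_max threshold, only illustrates that check on a nonuniform sample train.

```python
import numpy as np

def average_sampling_frequency(t_n):
    """Average rate (samples per second) of a nonuniform sample train."""
    t_n = np.asarray(t_n, dtype=float)
    return (len(t_n) - 1) / (t_n[-1] - t_n[0])

def respects_reconstruction_criterion(t_n, f_max):
    """True when the average rate is at least the Nyquist rate 2*f_max."""
    return average_sampling_frequency(t_n) >= 2.0 * f_max

# example: sample times of a window, checked against a 1 kHz signal bandwidth
print(respects_reconstruction_criterion([0.0, 0.0004, 0.0007, 0.0011, 0.0016], 1000.0))
```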

In [7, 8, 9], authors have shown advantages of the LCADCs over the classical ones. The major advantages are the reduced activity, the power saving, the reduced electromagnetic emission, and the processing noise reduction. Inspired by these interesting features, the Asynchronous Analog to Digital Converter (AADC) [7] is employed to digitize x(t) in the studied case. The characteristics of the filtering techniques described in the sequel are highly determined by the characteristics of the nonuniformly sampled signal produced by the AADC. We have already defined the AADC amplitude range ΔV_in, the number of bits M and the quantum q. They are linked by the following relation:

q = ΔV_in / (2^M − 1).    (4)

This quantum, together with the AADC processing delay τ for one sample, yields the upper limit on the input signal slope which can be captured properly:

|dx(t)/dt| ≤ q / τ.    (5)

In order to respect the reconstruction criterion [31] and the tracking condition [7], a band pass filter with pass-band [f_min; f_max] is employed at the AADC input. This together with a given M induces the AADC maximum and minimum sampling frequencies [6, 11], defined by

Fs_max = 2·f_max·(2^M − 1),    Fs_min = 2·f_min·(2^M − 1).    (6)

Here, f_max and f_min are the x(t) bandwidth and fundamental frequencies, and Fs_max and Fs_min are the AADC maximum and minimum sampling frequencies, respectively.
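
A small sketch collecting the AADC design relations as reconstructed above ((2), (4), (5), (6)). The exact algebraic forms, the symbol for the per-sample delay, and the numeric values in the example call are assumptions made for illustration; the original relations should be taken from [7, 8, 9].

```python
def aadc_parameters(delta_v_in, M, f_min, f_max, delay):
    """Design relations of Section 2.2 in the reconstructed forms (2), (4)-(6)."""
    snr_ideal_db = 6.02 * M + 1.76                 # ideal ADC SNR, (2)
    q = delta_v_in / (2 ** M - 1)                  # quantum between levels, (4)
    slope_max = q / delay                          # trackable input slope, (5)
    fs_min = 2 * f_min * (2 ** M - 1)              # minimum sampling frequency, (6)
    fs_max = 2 * f_max * (2 ** M - 1)              # maximum sampling frequency, (6)
    return snr_ideal_db, q, slope_max, fs_min, fs_max

# example: a 4-bit AADC in front of a [50 Hz; 5000 Hz] band-pass filter
print(aadc_parameters(delta_v_in=2.0, M=4, f_min=50.0, f_max=5000.0, delay=1e-6))
```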

2.3. ASA (Activity Selection Algorithm)

The nonuniformly sampled signal obtained with the AADC can be used for further nonuniform digital processing [3, 10, 13]. However, in the studied case, the nonuniformity of the sampling process, which yields information on the signal local features, is employed to select only the relevant signal parts. Furthermore, the characteristics of each selected signal part are analyzed and are employed later on to adapt the proposed system parameters accordingly. This selection and local-features extraction process is named the ASA.

For activity selection, the ASA exploits the information lying in the nonuniformity of the level-crossing sampled signal [5]. This selection process corresponds to an adaptive length rectangular windowing. It defines a series of selected windows within the whole signal length. The ability of activity selection is extremely important to reduce the proposed system processing activity and consequently its power consumption. Indeed, in the proposed case, no processing is performed during idle signal parts, which is one of the reasons for the achieved computational gain compared to the classical case. The ASA is defined by (7) and (8).

Here, dt_n is clear from (1). T_0 is the fundamental period of the band-limited signal x(t), and the comparison of dt_n with T_0 detects the parts of the nonuniformly sampled signal with activity. If the measured time delay dt_n is greater than T_0, x(t) is considered to be idle. The condition dt_n ≤ T_0 is chosen to ensure the Nyquist sampling criterion for x(t).

T_ref is the reference window length. Its choice depends on the input signal characteristics and the system resources. The upper bound on T_ref is posed by the maximum number of samples that the system can treat at once, whereas the lower bound on T_ref is posed by the condition that T_ref must cover at least one fundamental period T_0 of x(t), which should be respected in order to achieve a proper spectral representation [5].

L_i represents the length in seconds of the i-th selected window W_i. T_ref poses the upper bound on L_i. N_i represents the number of nonuniform samples lying in L_i, which corresponds to the i-th active part of the nonuniformly sampled signal. i and N_i both belong to the set of natural numbers. The x(t) signal activity can be longer than T_ref; in this case, it will be split into more than one selected window.

The above-described loop repeats for each selected window that occurs during the observation length of x(t). Every time before starting the next loop, i is incremented and L_i and N_i are initialized to zero.

The maximum number of samples N_max, which can take place within a chosen T_ref, can be calculated by employing (9):

N_max = T_ref · Fs_max.    (9)
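
Since the ASA is only described qualitatively here (activity whenever dt_n ≤ T_0, windows bounded by T_ref, longer activities split), the following sketch is one plausible implementation of that description, not the authors' exact algorithm; the function name and the per-window features it returns are assumptions.

```python
def activity_selection(t_n, T0, T_ref):
    """Group nonuniform sample times into selected windows W_i.

    A sample continues the current window when the elapsed time dt_n <= T0;
    a gap larger than T0 marks an idle part; a window is closed when its
    length would exceed T_ref (longer activities are split).
    """
    windows, current = [], [t_n[0]]
    for prev, cur in zip(t_n[:-1], t_n[1:]):
        dt = cur - prev
        if dt > T0:                       # idle gap: close the running window
            windows.append(current)
            current = [cur]
        elif cur - current[0] > T_ref:    # activity longer than T_ref: split it
            windows.append(current)
            current = [cur]
        else:
            current.append(cur)
    windows.append(current)
    # local features per window: length L_i (seconds) and sample count N_i
    return [(w[-1] - w[0], len(w)) for w in windows if len(w) > 1]

# toy usage: two bursts of samples separated by an idle gap
times = [0.00, 0.01, 0.02, 0.04, 0.05, 0.90, 0.91, 0.93, 0.95]
print(activity_selection(times, T0=0.1, T_ref=1.0))
```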

The ASA displays interesting features, which are not available in the classical case. It only selects the active parts of the nonuniformly sampled signal. Moreover, it correlates the length of the selected window with the input signal activity lying in it. In addition, it provides an efficient reduction of the phenomenon of spectral leakage in the case of transient signals. The leakage reduction is achieved by avoiding the signal truncation problem with a simple and efficient algorithm, instead of employing a smoothing (cosine) window function, as is done in the classical schemes [5]. These abilities make the ASA extremely effective in reducing the overall system processing activity, especially in the case of low activity sporadic signals [5, 6, 11, 12, 14].

3. Proposed Adaptive Rate Filtering

3.1. General Principle

Two techniques are described to filter the selected signal obtained at the ASA output. The signal processing chain common to both filtering techniques is shown in Figure 2.
Figure 2

Signal processing chain common to both filtering techniques.

The activity selection and the local features extraction are the bases of the proposed techniques. They make it possible to achieve adaptive rate sampling (only relevant samples are processed) along with adaptive rate filtering (only relevant operations are performed to deliver a filtered sample). Such an achievement assures a drastic computational gain of the proposed filtering techniques compared to the classical one. The steps for realizing these ideas are detailed in the following subsections.

3.1.1. Adaptive Rate Sampling

The AADC sampling frequency is correlated to the x(t) local variations [6, 11, 12, 14]. It follows that the local sampling frequency Fs_i can be specific for each selected window W_i. According to [5], Fs_i can be calculated by employing (10):

Fs_i = N_i / L_i.    (10)

The upper and the lower bounds on Fs_i are posed by Fs_max and Fs_min, respectively. In order to perform a classical filtering algorithm, the selected signal lying in W_i is uniformly resampled before proceeding to the filtering stage (cf. Figure 2). Characteristics of the selected signal part lying in W_i are employed to choose its resampling frequency Frs_i. Once the resampling is done, there are Nr_i samples in W_i. The choice of Frs_i is crucial and this procedure is detailed in the following subsection.
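
A minimal sketch of the frequency adaptation just described, assuming Fs_i = N_i/L_i as in (10) and the Frs_i choice explained in the next subsection. The example numbers are loosely based on Table 5, with the window length taken as 1 s purely for illustration.

```python
def choose_resampling_frequency(N_i, L_i, F_ref):
    """Local sampling frequency Fs_i = N_i / L_i and the Frs_i choice of
    Section 3.1.2: Frs_i = F_ref when Fs_i >= F_ref, otherwise Frs_i tracks
    Fs_i (its exact value then depends on the ARD/ARR branch)."""
    Fs_i = N_i / L_i
    if Fs_i >= F_ref:
        return Fs_i, F_ref
    return Fs_i, Fs_i

# roughly window 2 of Table 5: 1083 samples in an assumed 1 s window, F_ref = 2500 Hz
print(choose_resampling_frequency(N_i=1083, L_i=1.0, F_ref=2500.0))
```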

3.1.2. Adaptive Rate Filtering

It is known that, for fixed design parameters (cut-off frequency, transition-band width, pass-band, and stop-band ripples), the FIR filter order varies as a function of the operational sampling frequency. For a high sampling frequency, the order is high and vice versa. In the classical case, the sampling frequency and the filter order both remain fixed regardless of the input signal variations, so they have to be chosen for the worst case. This time-invariant nature of the classical filtering causes a useless increase of the computational load. This drawback has been resolved up to a certain extent by employing multirate filtering techniques [32, 33, 34].

The proposed filtering techniques of this paper are intelligent alternatives to the multirate filtering techniques. They achieve computational efficiency by adapting the sampling frequency and the filter order according to the input signal local variations. Both techniques share some common features, which are described in the following.

In both cases, a reference FIR filter is offline designed for a reference sampling frequency Fref. Its impulse response is h(k), where k indexes the reference filter coefficients. Fref is chosen in order to satisfy the Nyquist sampling criterion for x(t), namely Fref ≥ 2·f_max.

During online computation, Fref and the local sampling frequency Fs_i of window W_i are used to define the local resampling frequency Frs_i and a decimation factor d_i. Frs_i is employed to uniformly resample the selected signal lying in W_i, whereas d_i is employed to decimate h(k) for filtering W_i.

Frs_i can be specific depending upon Fs_i [11, 12]. For proper online filtering, Frs_i and the frequency for which the filter coefficients are designed should match. The approaches for keeping Frs_i and the filter coherent are explained below.

In the case when Fs_i ≥ Fref, Frs_i = Fref is chosen and h(k) remains unchanged. This case is treated similarly by both proposed techniques. This choice of Frs_i makes it possible to resample W_i closer to the Nyquist rate, avoiding unnecessary interpolations during the data resampling process. It thus further improves the computational efficiency of the proposed techniques. This case is included in the description (see flowcharts in Figures 3 and 4) of the following two filtering techniques.
Figure 3

Flowchart of the ARD.

Figure 4

Flowchart of the ARR.

In the opposite case, that is, Fs_i < Fref, Frs_i = Fs_i is chosen and h(k) is online decimated in order to reduce its rate from Fref to Frs_i. In this case, the reference filter order is reduced for W_i, which reduces the number of operations to deliver a filtered sample [6, 11]. Hence, it improves the computational efficiency of the proposed techniques. In this case, it appears that Frs_i may be lower than the Nyquist frequency of x(t) and so it could cause aliasing. According to [6, 11], if the local signal amplitude is of the order of the maximal range ΔV_in, then for a suitable choice of M (application-dependent) the signal crosses enough consecutive thresholds. Thus, it is locally oversampled with respect to its local bandwidth and so there is no aliasing problem. This statement is further illustrated with the results summarized in Table 3.

In order to decimate h(k), the decimation factor d_i for W_i is online calculated by employing (11):

d_i = Fref / Fs_i.    (11)

d_i can be specific for each selected window depending upon Fs_i. For an integral d_i, both techniques decimate h(k) in a similar way. Thus, a test on d_i is made by computing D_i = floor(d_i) and verifying whether d_i = D_i. Here, the floor operation delivers only the integral part of d_i. If the answer is yes, then h(k) is decimated with d_i; the process is clear from (12):

h_i(j) = h(j·d_i),  j = 0, 1, 2, ....    (12)

Equation (12) shows that the decimated filter impulse response h_i(j) for the i-th selected window W_i is obtained by picking every d_i-th coefficient from h(k). Here, j indexes the decimated filter coefficients. If the order of h(k) is P, then the order of h_i(j) is P_i = P/d_i.

A simple decimation causes a reduction of the decimated filter energy compared to the reference one. It would lead to an attenuated version of the filtered signal. d_i is a good approximation of the ratio between the energy of the reference filter and that of the decimated one. Thus, this effect of decimation is compensated by scaling h_i(j) with d_i. The process is clear from (13):

hs_i(j) = d_i · h_i(j).    (13)
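
The integral-d_i decimation and scaling of (12)-(13) can be sketched as follows. The windowed-sinc reference filter only stands in for the Parks-McClellan design of Table 2, and its cut-off value is an arbitrary assumption.

```python
import numpy as np

def decimate_and_scale(h, d_i):
    """Integral-d_i case of (12)-(13): keep every d_i-th coefficient of h(k)
    and rescale by d_i to compensate the lost filter energy."""
    d_i = int(d_i)
    h_i = np.asarray(h, dtype=float)[::d_i]
    return d_i * h_i

# toy reference filter: a windowed-sinc low-pass standing in for Table 2
F_ref, P = 2500.0, 127
k = np.arange(P + 1)
fc = 400.0 / F_ref                                   # normalized cut-off (assumption)
h = 2 * fc * np.sinc(2 * fc * (k - P / 2)) * np.hamming(P + 1)

hs = decimate_and_scale(h, 2)
print(len(h), len(hs), h.sum(), hs.sum())            # DC gains stay comparable after scaling
```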

The two techniques mainly differ in the way of decimating h(k) for a fractional d_i. The process is explained in the following sections.

3.2. ARD (Activity Reduction by Filter Decimation)

In the ARD technique, h(k) is decimated by employing the integral factor D_i. It calls for an adjustment of Frs_i, which is achieved as Frs_i = Fref/D_i. As in this case D_i ≤ d_i, it makes Frs_i ≥ Fs_i. For the ARD, the scaling of h_i(j) is performed with D_i. The complete procedure of obtaining Frs_i and hs_i(j) for the ARD is described in Figure 3.

3.3. ARR (Activity Reduction by Filter Resampling)

In the ARR technique, d_i is employed to decimate h(k). In this case, Frs_i is given as Frs_i = Fref/d_i, so it remains equal to Fs_i. The process of matching h(k) with Frs_i requires a fractional decimation of h(k), which is achieved by resampling h(k) with the factor d_i. Again, the NNRI (Nearest Neighbour Resampling Interpolation, cf. Section 5) is employed for the purpose of the h(k) resampling. For the ARR, the scaling of h_i(j) is performed with d_i. The complete procedure of obtaining Frs_i and hs_i(j) for the ARR is described in Figure 4.
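
A sketch contrasting the two branches under the stated rules (ARD: integral decimation with D_i and Frs_i = Fref/D_i; ARR: Frs_i = Fs_i with a fractional, nearest-neighbour resampling of the coefficients). The exact coefficient-index mapping used for the ARR resampling is an assumption, as are the placeholder coefficients in the example.

```python
import numpy as np

def ard_branch(h, F_ref, Fs_i):
    """ARD: decimate h(k) with the integral part D_i of d_i and raise the
    resampling frequency to Frs_i = F_ref / D_i so that both stay coherent."""
    D_i = int(np.floor(F_ref / Fs_i))
    Frs_i = F_ref / D_i
    hs_i = D_i * np.asarray(h, dtype=float)[::D_i]   # scaling with D_i
    return Frs_i, hs_i

def arr_branch(h, F_ref, Fs_i):
    """ARR: keep Frs_i = Fs_i and decimate h(k) fractionally by resampling its
    coefficients with a nearest-neighbour rule (the NNRI of Section 5)."""
    d_i = F_ref / Fs_i
    h = np.asarray(h, dtype=float)
    positions = np.arange(0.0, len(h), d_i)          # fractional coefficient grid
    nearest = np.clip(np.round(positions).astype(int), 0, len(h) - 1)
    hs_i = d_i * h[nearest]                          # scaling with d_i
    return Fs_i, hs_i

# window 2 of the example: Fs_2 = 1083 Hz against F_ref = 2500 Hz (d_2 ~ 2.3)
h = np.hamming(128)        # placeholder coefficients just to show the resulting lengths
print(len(ard_branch(h, 2500.0, 1083.0)[1]), len(arr_branch(h, 2500.0, 1083.0)[1]))
```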

4. Illustrative Example

In order to illustrate the ARD and the ARR filtering techniques, an input signal x(t), shown on the left part of Figure 5, is employed. Its total duration is 20 seconds and it consists of three active parts. A summary of the x(t) activities is given in Table 1.
Figure 5

The input signal (left) and the selected signal obtained with the ASA (right).

Table 1 shows that x(t) is band limited between the f_min and f_max reported there. In this case, x(t) is digitized by employing a 3-bit resolution AADC. Thus, for the given effective number of bits, the corresponding minimum and maximum sampling frequencies Fs_min and Fs_max follow from (6). The chosen AADC amplitude range ΔV_in results in a quantum q given by (4).

Each activity contains a low- and a high-frequency component (cf. Table 1). In order to filter out the high-frequency parts from each activity, a low pass reference FIR filter is implemented by employing the standard Parks-McClellan algorithm. The reference filter parameters are summarized in Table 2.
Table 2

Summary of the reference filter parameters.

For this example, a reference window length T_ref satisfying the boundary conditions discussed in Section 2.3 is chosen. The given T_ref delivers N_max samples in this case (cf. (9)). The ASA delivers three selected windows for the whole x(t) span of 20 seconds, which are shown on the right part of Figure 5. The selected windows parameters are displayed in Table 3.

Table 3 shows that the first window is an example of the Fs_i ≥ Fref case, so it is tackled similarly by both techniques. In the other windows, Fs_i < Fref holds, so the online h(k) decimation is employed. As d_2 and d_3, calculated by employing (11), are fractional, this case is tackled in a different way by the ARD and the ARR.

Values of Frs_i, Nr_i, d_i and P_i are calculated for the ARD and the ARR by employing the methods shown in Figures 3 and 4, respectively. The obtained results are summarized in Tables 4 and 5.
Table 5

Values of Frs_i, Nr_i, d_i and P_i for each selected window in the ARR.

  i    Frs_i (Hz)    Nr_i    d_i    P_i
  1    2500          1250    1      127
  2    1083          1083    2.3    54
  3    464           464     5.4    24
Tables 3, 4, and 5 jointly exhibit the interesting features of the proposed filtering techniques, which are achieved by an intelligent combination of the nonuniform and the uniform signal processing tools (cf. Figure 2). Fs_i represents the sampling frequency adaptation following the local variations of x(t); it shows that the relevant signal parts are locally over-sampled in time with respect to their local bandwidths [6, 11]. Frs_i shows the adaptation of the resampling frequency for each selected window; it further adds to the computational gain of the proposed techniques by avoiding unnecessary interpolations during the resampling process. Nr_i shows how the adjustment of Frs_i avoids the processing of unnecessary samples during the post-filtering process. P_i represents how the adaptation of h(k) for W_i avoids unnecessary operations to deliver the filtered signal. L_i exhibits the dynamic feature of the ASA, which is to correlate the window length with the signal activity lying in it [5].

These results have to be compared with what is done in the corresponding classical case. If Fref = 2500 Hz is chosen as the sampling frequency, then the total x(t) span is sampled at 2500 Hz. It makes 50 000 samples (20 s at 2500 Hz) to process with the 127th-order FIR filter. On the other hand, in both proposed techniques the total number of resampled data points is much lower, 3000 and 2794 for the ARD and the ARR, respectively. Moreover, the local filter orders in W_2 and W_3 are also lower than 127. It promises the computational efficiency of the proposed techniques compared to the classical one. A detailed complexity comparison is made in the following section.

5. Computational Complexity

In the classical case, with a P-th-order FIR filter, it is well known that P + 1 multiplications and P additions are required to compute each filtered sample. If N is the number of samples to filter, then the total computational complexity C_classical can be calculated by employing (14):

C_classical = N·(P + 1) multiplications + N·P additions.    (14)

In the adaptive techniques presented here, the adaptation process requires extra operations for each selected window. The computational complexities of both techniques, C_ARD and C_ARR, are deduced as follows.

The following steps are common to both the ARD and the ARR techniques. The choice of Frs_i is a common operation for both proposed techniques. It requires one comparison between Fs_i and Fref. The data resampling operation is also required in both techniques before filtering. In the studied case, the resampling process is performed by employing the Nearest Neighbour Resampling Interpolation (NNRI). The NNRI is chosen because of its simplicity, as it employs only one nonuniform observation for each resampled one. Moreover, it provides an unbiased estimate of the original signal variance. For this reason, it is also known as a robust interpolation method [35, 36]. The detailed reasons for the inclination toward the NNRI are discussed in [5, 35, 36]. The NNRI is performed as follows.

For each interpolation instant, the interval of nonuniform samples within which it lies is determined. Then the distances of the interpolation instant to the two samples bounding this interval are computed, and a comparison between the computed distances is performed to decide the smaller of them. For the Nr_i resampled observations of W_i, the first step requires a number of comparisons, and the second step a number of additions and comparisons, that grow linearly with Nr_i; their sum gives the NNRI total complexity for W_i.
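
A compact sketch of the NNRI as described above: each uniform resampling instant simply takes the value of the nearer of the two surrounding nonuniform samples. The function name and the array-based bookkeeping are assumptions.

```python
import numpy as np

def nnri_resample(t_n, x_n, t_uniform):
    """Nearest Neighbour Resampling Interpolation: each uniform resampling
    instant takes the value of the closest nonuniform observation."""
    t_n, x_n = np.asarray(t_n, dtype=float), np.asarray(x_n, dtype=float)
    out = np.empty(len(t_uniform))
    for m, t in enumerate(t_uniform):
        k = np.searchsorted(t_n, t)               # interval [t_{k-1}, t_k] containing t
        k = int(np.clip(k, 1, len(t_n) - 1))
        # compare the two distances and keep the nearer nonuniform sample
        out[m] = x_n[k - 1] if (t - t_n[k - 1]) <= (t_n[k] - t) else x_n[k]
    return out

# usage: resample a nonuniform train onto a 100 Hz uniform grid
t_nu = [0.000, 0.004, 0.013, 0.021, 0.040]
x_nu = [0.00, 0.25, 0.50, 0.25, 0.00]
print(nnri_resample(t_nu, x_nu, np.arange(0.0, 0.04, 0.01)))
```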

In the case when Fs_i < Fref, the decimation of h(k) is performed in both techniques. In order to do so, d_i is computed by performing a division between Fref and Fs_i. D_i is calculated by employing a floor operation on d_i. A comparison is made between d_i and D_i. In the case when d_i = D_i, the process of obtaining hs_i(j) is similar for both techniques (cf. Figures 3 and 4). In this case, the decimator simply picks every D_i-th coefficient from h(k). It has a negligible complexity compared to operations like addition and multiplication. This is the reason why its complexity is not taken into account during the complexity evaluation process. In both techniques, the decimated filter impulse response is scaled, which requires one multiplication per retained coefficient. The fractional d_i is tackled in a different way by each filtering technique and is detailed in the following subsections.

5.1. Complexity of the ARD Technique

Even if d_i is fractional, in the ARD technique the h(k) decimation is performed by employing D_i. Frs_i is modified in order to keep it coherent with the decimated filter, and this requires one division (cf. Figure 3). Finally, a P_i-order filter performs P_i + 1 multiplications and P_i additions for each of the Nr_i resampled samples. The combined computational complexity of the ARD technique, C_ARD, is given by (15).

5.2. Complexity of the ARR Technique

In the case of the ARR technique, d_i is employed as the decimation factor. The fractional decimation is achieved by resampling h(k) at Frs_i. The resampling is performed by employing the NNRI, which requires a number of comparisons and additions proportional to the decimated filter length in order to deliver h_i(j). The remaining operation cost is common between the ARD and the ARR. The combined computational complexity of the ARR technique, C_ARR, is given by (16).

In (15) and (16), i represents the selected windows index. The two multiplying factors act as indicator terms that switch the adaptation overhead on and off: the first is nonzero only when Fs_i < Fref (so that h(k) has to be decimated) and zero otherwise, and the second is nonzero only when d_i is fractional and zero otherwise.

5.3. Complexity Comparison of the ARD and the ARR with the Classical Filtering

From (14), (15), and (16), it is clear that there are uncommon operations between the classical and the proposed adaptive rate filtering techniques. In order to make them approximately comparable, it is assumed that a comparison has the same processing cost as an addition, and that a division or a floor operation has the same processing cost as a multiplication. By following these assumptions, comparisons are merged into the additions count, and divisions plus floors are merged into the multiplications count, during the complexity evaluation process. Now (15) and (16) can be rewritten as (17) and (18), respectively.

By employing the results of the example studied in the previous section, computational comparisons of the ARD and the ARR with the classical case are made in terms of additions and multiplications. The results are computed for different x(t) time spans and are summarized in Tables 6 and 7.
Table 6

Computational gain of the ARD over the classical one for different x(t) time spans.

Gains in additions and multiplications of the proposed techniques over the classical case are clear from the above results. In the case of W_1, where the resampling frequency and the filter order are the same as in the classical case (cf. Tables 4 and 5), a gain is still achieved by the proposed adaptive techniques. This is only due to the fact that the ASA correlates the window length to the activity (0.5 second), while the classical case computes during the whole reference window length T_ref. Gains are of course much larger in the other windows, since the proposed techniques benefit from processing fewer samples along with lower filter orders. When treating the whole x(t) span of 20 seconds, the proposed techniques also take advantage of the idle x(t) parts, which induces additional gains compared to the classical case.
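
As a rough, illustrative check of these gains, the snippet below counts only the filtering operations for the 20-second example, using the classical parameters quoted in Section 4 and the ARR values of Table 5. The per-sample cost model (P + 1 multiplications, P additions) and the neglect of the adaptation and resampling overheads accounted for in (15)-(18) are simplifying assumptions, so the printed ratios overestimate the true gains reported in Tables 6 and 7.

```python
# classical case: 20 s sampled at F_ref = 2500 Hz, filtered with a 127th-order FIR
F_ref, P_ref, span = 2500.0, 127, 20.0
N_classical = int(F_ref * span)                       # 50,000 samples
classical_mults = N_classical * (P_ref + 1)
classical_adds = N_classical * P_ref

# ARR case: per-window (Nr_i, P_i) pairs taken from Table 5
arr_windows = [(1250, 127), (1083, 54), (464, 24)]
arr_mults = sum(nr * (p + 1) for nr, p in arr_windows)
arr_adds = sum(nr * p for nr, p in arr_windows)

# approximate gain factors in multiplications and additions (filtering only)
print(classical_mults / arr_mults, classical_adds / arr_adds)
```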

The above results confirm that the proposed filtering techniques lead to a drastic reduction in the number of operations compared to the classical one. This reduction in operations is achieved due to the joint benefits of the AADC, the ASA and the resampling, as they enable the adaptation of the sampling frequency and the filter order to the input signal local variations.

5.4. Complexity Comparison between the ARD and the ARR

The main difference between both proposed techniques occurs in the case when Fs_i < Fref and d_i is fractional (cf. Section 3).

The ARD increments Frs_i (from Fs_i to Fref/D_i) in order to keep it coherent with the decimated filter. This increase in Frs_i causes an increase in Nr_i and also in P_i. Thus, in comparison to the ARR, this technique increases the computational load of the post-filtering operation, while keeping the decimation process of h(k) simple.

The ARR performs the h(k) resampling at Frs_i. Thus, in comparison to the ARD, this technique increases the complexity of the decimation process of h(k), while keeping the computational load of the post-filtering process lower.

In continuation of Section 5.3, a complexity comparison between the ARD and the ARR is made in terms of additions and multiplications by employing (17) and (18), respectively. It shows that the ARR remains computationally efficient compared to the ARD, in terms of additions and multiplications, as long as the conditions given by expressions (19) and (20) remain true. Please note that Nr_i and P_i can be different for the ARD and the ARR (cf. Tables 4 and 5).

For the studied example, d_2 and d_3 are fractional, thus the ARD and the ARR proceed differently. Conditions (19) and (20) remain true for both W_2 and W_3 (cf. Tables 4 and 5). Hence, the gains in additions and multiplications of the ARR are higher than those of the ARD for W_2 and W_3 (cf. Tables 6 and 7). It shows that, except for very specific situations, the ARR technique will always remain less expensive than the ARD. The ARR achieves this computational performance by employing the fractional decimation of h(k), which may lead to a quality compromise of the ARR compared to the ARD. This issue is addressed in the following section.

6. Processing Error

6.1. Approximation Error

In the proposed techniques, the approximation error occurs due to two effects: the time quantization error, which is caused by the AADC finite timer precision, and the interpolation error, which occurs in the course of the uniform resampling process. After these two operations, the mean approximation error for W_i can be computed by employing (21).

Here, x_r(t_m) is the m-th resampled observation, interpolated with respect to the time instant t_m, and x(t_m) is the original sample value which should be obtained by sampling x(t) at t_m. In the studied example discussed in Section 4, x(t) is analytically known, thus it is possible to compute its original sample value at any given time instant. It allows us to compute the approximation error introduced by the proposed adaptive rate techniques by employing (21).
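
A small helper illustrating how (21) can be evaluated when, as in Section 4, the analytic signal value at each resampling instant is available. The mean-absolute-error form and the symbol names are assumptions, since (21) itself is not reproduced here.

```python
import numpy as np

def mean_approximation_error(x_resampled, x_reference):
    """Mean absolute approximation error over the Nr_i resampled points of W_i,
    in the spirit of (21); the absolute-value form is an assumption."""
    x_resampled = np.asarray(x_resampled, dtype=float)
    x_reference = np.asarray(x_reference, dtype=float)
    return float(np.mean(np.abs(x_reference - x_resampled)))

# usage: compare resampled values against the analytically known ones x(t_m)
t_m = np.arange(0.0, 0.5, 0.01)
x_ref = np.sin(2 * np.pi * 2 * t_m)                # analytically known signal
x_res = x_ref + 0.01 * np.random.randn(len(t_m))   # stand-in for the resampled data
print(mean_approximation_error(x_res, x_ref))
```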

The results obtained for each selected window for both the ARD and the ARR are summarized in Table 8.
Table 8

Mean approximation error of each selected window for the ARD and the ARR.

Table 8 shows the approximation error introduced by the proposed techniques. This process is accurate enough for a 3-bit AADC. For the higher precision applications, the approximation accuracy can be improved by increasing the AADC resolution M and the interpolation order [6, 8, 37, 38]. Thus, an increased accuracy can be achieved at the cost of an increased computational load. Therefore, by making a suitable compromise between the accuracy level and the computational load, an appropriate solution can be devised for a specific application.

For a given M and interpolation order, the approximation accuracy can be further improved by employing symmetry during the interpolation process. It results in a reduced resampling error [38, 39]. The pros and cons of this approach are under investigation and a description is given in [40].

6.2. Filtering Error

In the proposed filtering techniques, a reference filter h(k) is employed and then online decimated for W_i, depending on the chosen Frs_i. This online decimation can degrade the filtering precision. In order to evaluate this phenomenon on our test signal, the following procedure is adopted.

A reference filtered signal is generated. In this case, instead of decimating h(k) to obtain hs_i(j), a specific filter is directly designed for Frs_i by using the Parks-McClellan algorithm, employing the same design parameters summarized in Table 2. The signal activity corresponding to W_i is sampled at Frs_i with a high precision classical ADC. This sampled signal is filtered by employing the specific filter. The filtered signal obtained in this way is used as a reference for W_i, and it is compared with the results obtained by the proposed techniques.

The m-th reference-filtered sample is compared with the m-th filtered sample obtained by one of the proposed filtering techniques. The mean filtering error for W_i is then calculated by employing (22).

The mean filtering error of both proposed techniques is calculated for each x(t) activity by employing (22). The results are summarized in Table 9.

Table 9 shows that the online decimation of h(k) in the proposed techniques causes a loss of the desired filtering quality. Indeed, the filtering error increases with the increase in d_i. The measure of this error can be used to decide an upper bound on d_i (by performing an offline calculation), for which the decimated and scaled filters provide results with an acceptable level of accuracy. The level of accuracy is application-dependent. Moreover, for high precision applications, an appropriate filter can be calculated online for each selected window at the cost of an increased computational load. The process is analogous to the generation of the reference filtered signal discussed above.

Table 9 shows that the filtering errors of the ARR for W_2 and W_3 are higher than those of the ARD. This is due to the h(k) resampling performed by the ARR to deliver the decimated filters for W_2 and W_3. It makes the ARR employ interpolated coefficients of h(k) for filtering the resampled data lying in W_2 and W_3, respectively, which results in an increased filtering error of the ARR compared to the ARD. Similar to Section 6.1, this resampling error can also be reduced to a certain extent by employing a higher order interpolator [37, 38]. In conclusion, a certain increase in accuracy can be achieved at a certain loss of processing efficiency.

7. Speech Signal as a Case Study

In order to evaluate the performances of the ARD and the ARR for real life signals, a speech signal x(t), shown in Figure 6(a), is employed. x(t) is a 1.6-second, [50 Hz; 5000 Hz] band-limited signal corresponding to a three-word sentence. The goal is to determine the pitch (fundamental frequency) of x(t) in order to determine the speaker's gender. For a male speaker, the pitch lies within the frequency range [100 Hz, 150 Hz], whereas for a female speaker, the pitch lies within the frequency range [200 Hz, 300 Hz] [41].
Figure 6

On the top, the input speech signal (a), the selected signal obtained with the ASA (b) and a zoom on the second window W_2 (c). On the bottom, a spectrum zoom of the filtered signal lying in W_2 obtained with the reference filtering (d), with the ARD (e) and with the ARR (f), respectively.

The reference frequency Fref is chosen as a common sampling frequency for speech. A 4-bit resolution AADC is used for digitizing x(t), and the corresponding Fs_min and Fs_max follow from (6). The amplitude range is again set to ΔV_in, which leads to a quantum q given by (4). The amplitude of x(t) is normalized to ΔV_in in order to avoid the AADC saturation.

The studied signal is part of a conversation, and during a dialog the speech activity represents only a fraction of the total dialog time [42]. A classical filtering system would remain active during the total dialog duration. The proposed LCSS-based filtering techniques remain active only during this fraction of the dialog time span, which reduces the system power consumption.

A speech signal mainly consists of vowels and consonants. Consonants are of lower amplitude compared to vowels [41, 43]. In order to determine the speaker's pitch, vowels are the relevant parts of x(t). With the chosen ΔV_in, consonants are largely ignored during the signal acquisition process and are considered as low amplitude noise. In contrast, vowels are locally over-sampled like any harmonic signal [6, 10, 11]. This intelligent signal acquisition further avoids the processing of useless samples within the x(t) activity, and so further improves the computational efficiency of the proposed techniques.

In order to apply the ASA, a reference window length T_ref is chosen. It results in N_max samples in this case (cf. (9)). The ASA delivers three selected windows, which are shown in Figure 6(b). The parameters of each selected window are summarized in Table 10.
Although the consonants are partially filtered out during the data acquisition process, for proper pitch estimation it is still required to filter out the remaining high frequencies present in x(t). To this aim, a reference low pass filter is designed with the standard Parks-McClellan algorithm. Its characteristics are summarized in Table 11.
To find the pitch, we now focus on W_2, which corresponds to the vowel "a". A zoom on this signal part is plotted in Figure 6(c). The condition Fs_2 < Fref is valid, and d_2 is fractional (cf. (11)). Thus, the filtering process of each proposed technique will differ, which makes it possible to compare their performances. The values of Frs_2, Nr_2, d_2, and P_2 for both techniques are given in Table 12.

Computational gains of the proposed filtering techniques compared to the classical one are computed by employing (14), (17), and (18). The results show 8.62 and 13.17 times gains in additions and 8.71 and 13.26 times gains in multiplications, respectively, for the ARD and the ARR, for W_2. It confirms the computational efficiency of the proposed techniques compared to the classical one. It is gained firstly by achieving an intelligent signal acquisition and secondly by adapting the sampling frequency and the filter order to the local variations of x(t).

Once more, the conditions (19) and (20) remain true for W_2, so the ARR technique remains computationally more efficient than the ARD one.

Spectra of the filtered signal lying in W_2, obtained with the reference filtering (cf. Section 6.2), with the ARD and with the ARR techniques, are plotted, respectively, in Figures 6(d), 6(e), and 6(f).

The spectra in Figure 6 show that the fundamental frequency is about 215 Hz. Thus, one can easily conclude that the analyzed sentence is pronounced by a female speaker. Although the reference filter has to be decimated 3 times and 3.7 times, respectively, for the ARD and the ARR, the spectra of the filtered signal obtained with the proposed techniques are quite comparable to the spectrum of the reference-filtered signal. It shows that even after such a level of decimation, the results delivered by the proposed techniques are of acceptable quality for the studied speech application.

The above discussion shows the suitability of the proposed techniques for low activity time-varying signals like electrocardiogram, phonocardiogram, seismic, and speech signals. Speech is a common and easily accessible signal; therefore, the performance of the proposed techniques is studied for a speech application, though they can be applied to other appropriate real signals like electrocardiogram, phonocardiogram, and seismic signals. The versatility of the devised approach lies in the appropriate choice of system parameters like the AADC resolution M, the distribution of the level-crossing thresholds, and the interpolation order. These parameters should be carefully chosen for a targeted application, so that they ensure an attractive tradeoff between the system computational complexity and the delivered output quality.

8. Conclusion

Two novel adaptive rate filtering techniques have been devised. These are well suited for low activity sporadic signals like electrocardiogram, phonocardiogram and seismic signals. For both filtering techniques, a reference filter is offline designed by taking into account the input signal statistical characteristics and the application requirements.

The complete procedure of obtaining the resampling frequency Frs_i and the decimated filter coefficients hs_i(j) for W_i is described for both proposed techniques. The computational complexities of the ARD and the ARR are deduced and compared with the classical case. It is shown that the proposed techniques result in a more than one order of magnitude gain in terms of additions and multiplications over the classical one. This is achieved due to the joint benefits of the AADC, the ASA and the resampling, as they allow the online adaptation of the parameters (Frs_i and P_i) by exploiting the input signal local variations. It drastically reduces the total number of operations and therefore the energy consumption compared to the classical case.

A complexity comparison between the ARD and the ARR is also made. It is shown that the ARR outperforms the ARD in most of the cases. Performances of the ARD and the ARR are also demonstrated for a speech application. The results obtained in this case are in coherence with those obtained for the illustrative example.

Methods to compute the approximation and the filtering errors for the proposed techniques are also devised. It is shown that the errors made by the proposed techniques are minor ones, in the studied case. A higher precision can be achieved by increasing the AADC resolution and the interpolation order. Thus, a suitable solution can be proposed for a given application by making an appropriate tradeoff between the accuracy level and the computational load.

A detailed study of the proposed filtering techniques computational complexities by taking into account the real processing cost at circuit level is in progress. Future works focus on the optimization of these filtering techniques and their further employment in real life applications.

References

  1. Sekhar SC, Sreenivas TV: Adaptive window zero-crossing-based instantaneous frequency estimation. EURASIP Journal on Applied Signal Processing 2004, 2004(12):1791-1806.
  2. Mark JW, Todd TD: A nonuniform sampling approach to data compression. IEEE Transactions on Communications 1981, 29:24-32.
  3. Gretains M: Time-frequency representation based chirp like signal analysis using multiple level crossings. Proceedings of the 15th European Signal Processing Conference (EUSIPCO '07), September 2007, Poznan, Poland, 2154-2158.
  4. Guan KM, Singer AC: Opportunistic sampling by level-crossing. Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '07), April 2007, Honolulu, Hawaii, USA, 3:1513-1516.
  5. Qaisar SM, Fesquet L, Renaudin M: Spectral analysis of a signal driven sampling scheme. Proceedings of the 14th European Signal Processing Conference (EUSIPCO '06), September 2006, Florence, Italy.
  6. Qaisar SM, Fesquet L, Renaudin M: Computationally efficient adaptive rate sampling and filtering. Proceedings of the 15th European Signal Processing Conference (EUSIPCO '07), September 2007, Poznan, Poland, 2139-2143.
  7. Allier E, Sicard G, Fesquet L, Renaudin M: A new class of asynchronous A/D converters based on time quantization. Proceedings of the 9th International Symposium on Asynchronous Circuits and Systems (ASYNC '03), May 2003, Vancouver, Canada, 197-205.
  8. Sayiner N, Sorensen HV, Viswanathan TR: A level-crossing sampling scheme for A/D conversion. IEEE Transactions on Circuits and Systems II 1996, 43(4):335-339.
  9. Akopyan F, Manohar R, Apsel AB: A level-crossing flash asynchronous analog-to-digital converter. Proceedings of the International Symposium on Asynchronous Circuits and Systems (ASYNC '06), March 2006, Grenoble, France, 12-22.
  10. Aeschlimann F, Allier E, Fesquet L, Renaudin M: Asynchronous FIR filters: towards a new digital processing chain. Proceedings of the International Symposium on Asynchronous Circuits and Systems (ASYNC '04), April 2004, Crete, Greece, 10:198-206.
  11. Qaisar SM, Fesquet L, Renaudin M: Adaptive rate filtering for a signal driven sampling scheme. Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '07), April 2007, Honolulu, Hawaii, USA, 3:1465-1468.
  12. Qaisar SM, Fesquet L, Renaudin M: Computationally efficient adaptive rate sampling and filtering for low power embedded systems. Proceedings of the International Conference on Sampling Theory and Applications (SampTA '07), June 2007, Thessaloniki, Greece.
  13. Aeschlimann F, Allier E, Fesquet L, Renaudin M: Spectral analysis of level crossing sampling scheme. Proceedings of the International Conference on Sampling Theory and Applications (SampTA '05), July 2005, Samsun, Turkey.
  14. Qaisar SM, Fesquet L, Renaudin M: An adaptive resolution computationally efficient short-time Fourier transform. Research Letters in Signal Processing 2008, 2008.
  15. Bond FE, Cahn CR: On sampling the zeros of bandwidth limited signals. IRE Transactions on Information Theory 1958, 4:110-113.
  16. Astrom KJ, Bernhardsson B: Comparison of Riemann and Lebesgue sampling for first order stochastic systems. Proceedings of the 41st IEEE Conference on Decision and Control (CDC '02), December 2002, Las Vegas, Nev, USA, 2:2011-2016.
  17. Bilinskis I: Digital Alias Free Signal Processing. John Wiley & Sons, New York, NY, USA; 2007.
  18. Ellis PH: Extension of phase plane analysis to quantized systems. IRE Transactions on Automatic Control 1959, 4:43-59.
  19. Lim M, Saloma C: Direct signal recovery from threshold crossings. Physical Review E 1998, 58(5B):6759-6765.
  20. Miskowicz M: Asymptotic effectiveness of the event-based sampling according to the integral criterion. Sensors 2007, 7(1):16-37.
  21. Astrom KJ, Bernhardsson B: Comparison of periodic and event based sampling for first-order stochastic systems. Proceedings of the IFAC World Congress, 1999, 301-306.
  22. Miskowicz M: Send-on-delta concept: an event-based data reporting strategy. Sensors 2006, 6(1):49-63.
  23. Otanez PG, Moyne JR, Tilbury DM: Using deadbands to reduce communication in networked control systems. Proceedings of the American Control Conference (ACC '02), May 2002, Anchorage, Alaska, USA, 4:3015-3020.
  24. Gupta SC: Increasing the sampling efficiency for a control system. IEEE Transactions on Automatic Control 1963, 263-264.
  25. Blake IF, Lindsey WC: Level-crossing problems for random processes. IEEE Transactions on Information Theory 1973, 295-315.
  26. Miskowicz M: Efficiency of level-crossing sampling for bandlimited Gaussian random processes. Proceedings of the IEEE International Workshop on Factory Communication Systems (WFCS '06), June 2006, Torino, Italy, 137-142.
  27. Walden RH: Analog-to-digital converter survey and analysis. IEEE Journal on Selected Areas in Communications 1999, 17(4):539-550.
  28. Nazario MA, Saloma C: Signal recovery in sinusoid-crossing sampling by use of the minimum-negative constraint. Applied Optics 1988, 37:2953-2963.
  29. Lim M, Saloma C: Direct signal recovery from threshold crossings. Physical Review E 1998, 58(5B):6759-6765.
  30. Beutler FJ: Error free recovery from irregularly spaced samples. SIAM Review 1996, 8:328-335.
  31. Marvasti F: Nonuniform Sampling Theory and Practice. Kluwer Academic/Plenum Publishers, New York, NY, USA; 2001.
  32. Vetterli M: A theory of multirate filter banks. IEEE Transactions on Acoustics, Speech, and Signal Processing 1987, 35(3):356-372.
  33. Chu S, Burrus CS: Multirate filter designs using comb filters. IEEE Transactions on Circuits and Systems 1984, 31(11):913-924.
  34. Crochiere RE, Rabiner LR: Multirate Digital Signal Processing. Prentice-Hall, Englewood Cliffs, NJ, USA; 1993.
  35. de Waele S, Broersen PMT: Time domain error measure for resampled irregular data. Proceedings of the 16th IEEE Instrumentation and Measurement Technology Conference (IMTC '99), May 1999, Venice, Italy, 2:1172-1177.
  36. de Waele S, Broersen PMT: Error measures for resampled irregular data. IEEE Transactions on Instrumentation and Measurement 2000, 49(2):216-222.
  37. Harris F: Multirate signal processing in communication systems. Proceedings of the 15th European Signal Processing Conference (EUSIPCO '07), September 2007, Poznan, Poland.
  38. Klamer DM, Masry E: Polynomial interpolation of randomly sampled bandlimited functions and processes. SIAM Journal on Applied Mathematics 1982, 42(5):1004-1019.
  39. Hildebrand FB: Introduction to Numerical Analysis. McGraw-Hill, Boston, Mass, USA; 1956.
  40. Qaisar SM, Fesquet L, Renaudin M: An improved quality adaptive rate filtering technique based on the level crossing sampling. Proceedings of the World Academy of Science, Engineering and Technology, July 2008, 31:79-84.
  41. Rabiner LR, Schafer RW: Digital Processing of Speech Signals. Prentice-Hall, Englewood Cliffs, NJ, USA; 1978.
  42. Fontolliet PG: Systèmes de Télécommunications. Dunod, Paris, France; 1983.
  43. Quatieri TF: Discrete-Time Speech Signal Processing: Principles and Practice. Prentice-Hall, Englewood Cliffs, NJ, USA; 2001.

Copyright information

© Saeed Mian Qaisar et al. 2009

This article is published under license to BioMed Central Ltd. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Authors and Affiliations

  • Saeed Mian Qaisar (1)
  • Laurent Fesquet (1)
  • Marc Renaudin (2)

  1. TIMA, CNRS UMR 5159, Grenoble Cedex, France
  2. Tiempo SAS, Montbonnot Saint Martin, France
