1 Introduction

In conventional methods, analog signals are sampled at the Nyquist rate, which is twice the maximum frequency fmax of the signal. This is uniform sampling: the samples are equally spaced in time. In most signals, however, content near fmax is present only for short durations, so the samples acquired from such sparse signals are largely redundant when the sampling frequency is fixed by fmax. Redundant samples increase the burden on the transmitter, storage, and battery of wireless sensors; sampling signals that do not change significantly is a waste of resources. Another drawback of conventional sampling is that the signal must be band-limited to half of the sampling frequency, so abrupt changes in the signal can be missed. Sampling should therefore be tailored to the signal and its application.

Usually, the algorithm used for data reduction is applied to all the uniformly acquired samples. This extra burden at the transmitter can be minimized if the acquired samples are just sufficient for reproducing the signal, which leads to lower computation, storage, communication, and power requirements. There are many situations in which the dynamic power has to be reduced for longer battery life and the data size must be kept small to reduce the expensive transmission bandwidth. The dynamic power consumption P of Complementary Metal Oxide Semiconductor (CMOS) circuits is given by Eq. (1).

$$P = \alpha CV^{2} f$$
(1)

α, C, V, and f are the activity factor, output capacitance, supply voltage, and clock frequency, respectively [1]. Efforts have been made to minimize the dynamic power by reducing all of these parameters except α. In synchronous circuits, an output is produced in synchronization with every clock cycle, which results in a high value of α. Dynamic activity in signal processing systems can be reduced with asynchronous circuits that use event-driven approaches combined with a level-crossing sampling (LCS) scheme [2, 3]. Non-uniform, event-driven sampling has advantages over uniform, synchronous sampling when processing low-activity signals: it exploits the local properties of the signal and avoids sampling at a high rate when the local bandwidth is low. For perfect reconstruction from non-uniform samples, the mean sampling rate must exceed the Nyquist rate [4, 5]. Several event-driven sampling methods have been developed that generate samples at non-uniform rates.

Vibration is an oscillating motion about an equilibrium point. In mechanical systems, vibrations arise from manufacturing tolerances and rubbing contact between parts, and small vibrations can trigger resonances in other mechanical parts that cause significant vibrations. Hence, vibration monitoring is necessary for the condition monitoring of mechanical systems with rotating parts [6]. Piezoelectric accelerometers are the most widely used transducers for vibration measurement. They are robust and offer a wide frequency range with excellent linearity throughout. Piezoelectric accelerometers do not require a power supply, since they are self-generating, and they suffer no wear due to the absence of moving parts.

The staging phase of a rocket is the shutdown of one stage followed by the separation and ignition of the next stage [7]. During a staging phase, signals exhibit wide variations and little compression is possible, whereas very high compression is possible between staging phases because of the quiescent nature of the parameters. The sampling frequency must be sufficient to capture the signals during staging, but a constant sampling frequency then captures redundant data between the staging phases. Hence, an event-driven sampling method is ideally suited to this situation. Since vibration signals vary approximately linearly between peak values, the extremum sampling method, which captures samples at the local extrema of the signal, is the right choice.

Vibration signals have the highest frequency among the telemetry signals from a launch vehicle; parameters such as altitude, attitude, velocity, and acceleration are sampled at much lower frequencies. The sampling frequency selected for vibration signals in Indian launch vehicles is 6000 Hz with 8-bit resolution, giving a data rate of 48 kbps per signal. With around ten vibration signals to be encoded, the total bit rate required is 480 kbps, while the total bit rate available for telemetry is reported to be around 1 Mbps. Other telemetry signals such as pressure and temperature are sampled at approximately 60 Hz, and velocity and acceleration signals at approximately 120 Hz, far below the rate used for vibration signals. Hence, the vibration signals must be compressed to reduce the total bit rate. In industry, vibration data has to be sent to remote servers for processing, so reducing the amount of data, and hence the transmission bandwidth, is a major requirement.

The Consultative Committee for Space Data Systems (CCSDS) has recommended standards [8] for lossless data compression in space missions with packetized telemetry. The encoder consists of a pre-processor followed by an adaptive entropy encoder. At the pre-processor stage, a reversible function is applied to the input samples to reduce their entropy, and hence the average number of bits needed to represent them. The adaptive entropy coder selects among a set of coding options based on the redundancy in the block of data output by the pre-processor. Rice's adaptive coding technique is used, which produces variable-length codes: the samples are encoded with fundamental sequence codes, words consisting of a run of '0's terminated by a '1', and the runs of '0's are further compressed with run-length coding.
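As a rough illustration of the fundamental-sequence option described above, the sketch below (in Python, with illustrative names such as fs_encode) encodes non-negative pre-processed samples as runs of '0's terminated by a '1'; it is not the complete CCSDS encoder, which also selects among several coding options and applies run-length coding to the zero runs.

```python
def fs_encode(samples):
    """Fundamental-sequence (unary) code: a non-negative sample of value k
    is written as k '0's followed by a terminating '1'."""
    return "".join("0" * k + "1" for k in samples)

def fs_decode(bits):
    """Recover the samples by counting the '0's before each '1'."""
    samples, run = [], 0
    for b in bits:
        if b == "0":
            run += 1
        else:
            samples.append(run)
            run = 0
    return samples

# A low-entropy block after pre-processing compresses well with this option.
block = [0, 1, 0, 3, 0, 0, 2]
code = fs_encode(block)          # 13 bits for these 7 samples
assert fs_decode(code) == block
```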

In the literature, various data compression methods applied to different types of signals and applications are available [9]. Data compression is applied to reduce the requirements for memory space, communication bandwidth, and power consumption. Compression algorithms are generally classified as lossless or lossy. Even though the signal is exactly reproduced with a lossless method, the compression ratio is small and the system complexity is high. Lossless methods are based on encoding the samples of the signal, as in run-length encoding, Huffman coding, and Lempel–Ziv coding. Lossy compression methods permit a certain degree of inaccuracy in the reconstructed signal but achieve much higher compression ratios. Methods used for lossy compression include prediction, fitting, and transform coding.

Lossless compression of an accelerometer signal is performed in [10] using time delay estimation with differential pulse code modulation; however, the compression gain is modest, approximately 70%. An adaptive compression scheme for vibration signals is proposed in [11], which applies the lifting discrete wavelet transform with set-partitioning of embedded blocks. Compression of vibration signals from wireless sensor nodes with a modified discrete cosine transform is proposed in [12]. Run-length encoding is applied to telemetry data in a multichannel acquisition system in [13]; however, the method involves oversampling and averaging before the run-length encoding. Compression of vibration signals collected from aeroplane engines is achieved in [14] using the discrete cosine transform, and the authors of [15] propose a compression method referred to as sub-band adaptive quantization for vibration signals from aircraft engines. In all the above methods, samples are acquired at the Nyquist rate based on fmax, and the compression algorithm has to operate on all of these samples.

There are two commonly used methods in which signals are represented as non-uniform samples: compressed sensing (CS) [16, 17] and level-crossing sampling [18,19,20]. CS can be employed if the signal is sparse in some domain, meaning that the number of non-zero coefficients in that domain is very small; the physical signal need not be sparse in the time domain itself. A sinusoidal signal, for example, has a dense representation in the time domain but a single non-zero coefficient in the Fourier domain. In CS, a sparse signal is multiplied by a suitable measurement matrix to obtain a compressed representation, and reconstruction is performed with optimization algorithms. CS has been employed for encoding vibration signals in [21] for high-speed rail monitoring in China, where the vibration signals are found to have a sparse representation in the discrete cosine transform (DCT) domain. A sparsity-adaptive subspace pursuit CS algorithm is applied in [22] to represent vibration signals from wireless sensors used for the health monitoring of structures. However, despite what the name "compressed sensing" suggests, compression is not achieved during sampling in these implementations. The vibration signals are first uniformly sampled and encoded, and are subsequently multiplied by a measurement matrix that exploits the sparsity of the signal in the DCT domain. The compressed set of samples is generated by multiplying the data samples with a large matrix, which increases the computational complexity at the transmitter, and at the receiver the execution of an iterative algorithm inhibits real-time reconstruction.

Vibration signals received from launch vehicles are generally used for condition monitoring. In this paper, extremum sampling, also known as peak sampling or mini-max sampling, is proposed for encoding vibration signals. The local maxima and minima of the signal are acquired along with the time gap between the peaks, producing compressed samples in real time [23, 24]; with this information, an oscillatory signal can be reconstructed. It is a non-uniform sampling method that achieves compression during the sampling process itself. Because it is activity-dependent, the number of samples is proportional to the frequency content of the signal. A significant reduction in data size is achieved, and the reconstructed signal preserves the essential features of the original. The advantages of the proposed method are reductions in dynamic power dissipation, transmission bandwidth, and storage requirements. Extremum sampling is implemented with LCS, which is itself an activity-dependent sampling scheme. LCS tolerates noise to a certain extent, since amplitude variations within the inter-level distance do not cause a level-crossing, and this tolerance is further improved by incorporating hysteresis in the updating of the reference levels.

This paper is organized as follows. The basics of LCS and the implementation of extremum sampling with LCS are explained in Sect. 2. The effect of noise in extremum sampling and the performance parameters used for the evaluation of the proposed method are also discussed. Section 3 covers the simulation results. The discussion is presented in Sect. 4, followed by the concluding remarks in Sect. 5.

2 Extremum sampling

In extremum sampling (ES), samples are acquired at the zero-crossings of the first derivative of the signal: a sample is captured whenever a local maximum or minimum of the input is detected. Extremum sampling is suitable for representing signals of an oscillatory nature, such as speech and vibration. If samples were generated at every peak of a noisy vibration signal, the sampling rate would exceed the Nyquist rate. In this paper, ES is therefore implemented with level-crossing sampling (LCS), which can tolerate noise within the limits of the inter-level distance [25]; signals that are not severely affected by noise then do not require filtering before sampling.

2.1 Level-crossing sampling (LCS)

In LCS, a sample is acquired if the input crosses any of the threshold levels. Rather than sampling at a fixed rate, LCS picks the samples whenever there is a significant change in the amplitude. The general block diagram of a level-crossing sampler is depicted in Fig. 1. The major blocks are (1) a level-crossing detector (LCD) and (2) a time-to-digital converter (TDC).

Fig. 1

The block diagram of a general level-crossing sampler

The LCD sets a number of reference voltage levels spanning the dynamic range of the input signal, as depicted in Fig. 2, where the horizontal dotted lines represent the reference voltage levels ln. The signal x(t) is compared with these pre-fixed reference levels, and a pulse is produced at the LCD output whenever the signal crosses any of them.

Fig. 2

Illustration of the level-crossing sampling scheme

The LCD output triggers the TDC to produce a digital count representing the interval dtn between level-crossing instants [26]. A clock signal of period TC is used for this measurement: dtn is expressed as the number of complete clock cycles whose total duration approximately equals dtn. Each sample is therefore represented by a pair (Ln, Dtn), the digital representations of the crossed threshold level ln and of dtn, respectively, and the signal is represented by an ordered sequence of amplitude-time pairs with non-uniformly spaced time values.
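A minimal behavioural model of the sampler in Fig. 1 is sketched below, assuming equally spaced reference levels of spacing Δv and a timer of period Tc; the dense input array stands in for the analog signal, and the variable names are illustrative rather than taken from the hardware of Fig. 1.

```python
import numpy as np

def lcs_encode(x, t, delta_v, Tc):
    """Emit one (Ln, Dtn) pair per level-crossing: Ln is the index of the
    crossed reference level and Dtn the number of whole clock cycles of
    period Tc elapsed since the previous crossing."""
    pairs = []
    level = int(np.floor(x[0] / delta_v))   # reference level just below the start value
    t_last = t[0]
    for xi, ti in zip(x[1:], t[1:]):
        new_level = int(np.floor(xi / delta_v))
        if new_level != level:               # the signal crossed at least one level
            dtn = int((ti - t_last) / Tc)    # TDC output in whole clock cycles
            pairs.append((new_level, dtn))
            level, t_last = new_level, ti
    return pairs

# The time axis is recovered at the receiver by accumulating Dtn * Tc.
```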

In classical ADCs, sampling is performed uniformly at the Nyquist rate and the amplitudes are quantized. The amplitude of each sample is represented by an N-bit word, where N is the resolution of the converter. Since the sampling interval is constant, no information about the sampling time is needed. The SNR of a classical ADC is given [27] by (2).

$$SNR_{\text{dB}} = 1.76 + 6.02N$$
(2)

In LCS, each sample is taken when the signal amplitude crosses one of the threshold levels. Assuming ideal levels, the amplitudes in LCS are precisely known, and the intervals between successive samples are quantized with the precision Tc of the timer used to represent Dtn. Since the time measurement is synchronized with the clock, there is a timing error δt, and the interval is measured as \(\overline{{Dt_{n} }}\) instead of Dtn; δt can take any value within the range [0, Tc]. To find an expression for the SNR, the error in the representation of Dtn is converted to an equivalent amplitude error δv [27]. The signal-to-noise ratio (SNR) due to time quantization is given [27] by (3).

$$SNR_{\text{dB}} = 10\log_{10} \left( {\frac{{3P\left( {V_{in} } \right)}}{{P\left( {\frac{{dV_{in} }}{dt}} \right)}}} \right) + 20\log_{10} \left( {\frac{1}{{T_{c} }}} \right)$$
(3)

In the first term, the numerator is the power of the input signal Vin and the denominator is the power of its derivative; both can be calculated from the probability density function of the input signal. Hence the SNR of LCS depends on the period of the clock and not on the number of bits used to represent a sample. While designing a level-crossing sampler, the power spectral density, bandwidth fmax, maximum amplitude Vmax, and probability density function p(x) of the signal must be known. Based on the required output SNR, Tc can be calculated from Eq. (3). Suppose the resolution required at the output is 16 bits; the corresponding SNR from Eq. (2) is 98.08 dB. The first term in Eq. (3), calculated using the speech signals from the database, is approximately 10 dB. Choosing Tc = \(10^{-5}\) s gives an SNR of 110 dB, which meets the requirement.
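The worked example can be reproduced with a short calculation; the 10 dB value used for the first term of Eq. (3) is the signal-dependent figure quoted above, not a universal constant.

```python
import math

N = 16                                   # required output resolution in bits
snr_required = 1.76 + 6.02 * N           # Eq. (2): 98.08 dB
first_term_db = 10.0                     # 10*log10(3*P(Vin)/P(dVin/dt)), from the signal statistics
Tc = 1e-5                                # candidate clock period in seconds
snr_lcs_db = first_term_db + 20 * math.log10(1.0 / Tc)   # Eq. (3): 110 dB
assert snr_lcs_db >= snr_required
```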

In synchronous sampling, the number of samples is selected according to the Nyquist-Shannon sampling theorem so that reconstruction is possible with sinc interpolation. In LCS, error-free recovery is possible if the average sampling rate \(\overline{{f_{s} }}\) is greater than twice the maximum frequency fmax present in the signal [3]. The inter-level distance Δv, equal to \(\frac{{V_{max} }}{{2^{M} }}\) where M is the resolution, can be determined either statistically from the known parameters of the input signal or empirically. There are various methods for estimating the average level-crossing rate of the signal, which equals the number of samples generated per unit time. To reduce the number of samples generated, Δv is selected so that it just satisfies the condition on \(\overline{{f_{s} }}\); this places an upper limit on Δv. A lower limit on Δv is set by the tracking condition [25]: while a sample conversion is in progress, the input signal must not cross any reference level until the conversion is over. If dmax is the maximum loop delay for a conversion, the slope the LCS system can track is Δv/dmax, and this must be greater than the maximum slope of the input signal, 2πfmaxVmax. If dmax is known, Δv must therefore be large enough to satisfy this slope condition.
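The lower bound on Δv imposed by the tracking condition follows directly from the slope inequality above; the numbers in the example below are arbitrary and only illustrate the calculation.

```python
import math

def delta_v_min(f_max, V_max, d_max):
    """Tracking condition: the sampler slope delta_v / d_max must exceed the
    maximum input slope 2*pi*f_max*V_max, so delta_v > 2*pi*f_max*V_max*d_max."""
    return 2 * math.pi * f_max * V_max * d_max

# Illustrative values: f_max = 3 kHz, V_max = 1 V, loop delay d_max = 1 us
dv_lower = delta_v_min(3e3, 1.0, 1e-6)   # ~0.019 V; the upper bound on delta_v comes
                                         # from keeping the average crossing rate above 2*f_max
```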

2.2 Extremum sampling based on LCS

The advantage of LCS is that the sampling density adapts to the local activity of the signal; if there is no activity, LCS does not acquire any sample. However, if Δv is very small, the time interval between successive samples becomes very small, while if Δv is large, information is lost and the reconstruction error increases. The sampling density in LCS therefore depends on both the input signal and the placement of the levels, so the advantages of LCS rely on the proper placement of the threshold levels within the dynamic range of the input. Extremum sampling (ES) is another non-uniform sampling method, for which the sampling density depends only on the input signal; the most significant points that characterize a signal are its local extrema. ES can be implemented with LCS: whenever the input signal crosses a threshold level, the sampler checks whether the slope of the signal has changed from positive to negative or vice versa, and if so, the amplitude corresponding to that threshold level is taken as a local extremum. The accuracy of peak detection therefore depends on the inter-level distance Δv. The idea of extremum sampling is illustrated in Fig. 3, where the input signal is compared with the threshold levels l1 to l10, equally spaced at Δv.

Fig. 3

Illustration of extremum sampling based on LCS

Samples acquired with ES are represented by a triplet (Sn, Pn, Dtn). The bit Sn is '1' if the present sample represents a positive peak and '0' if it represents a negative peak. Pn is the digital equivalent of the amplitude difference between successive peaks expressed in units of Δv: if the difference between two adjacent peaks is Vpp, then Pn is equal to \(\lceil V_{pp} / \Delta v\rceil\), where \(\lceil \cdot \rceil\) denotes the ceiling operation (rounding up to the nearest integer), and the number of bits required to represent Pn is \(\lceil \log_{2} (V_{pp} / \Delta v)\rceil\). Dtn represents the time gap dtn between successive peaks, measured as a number of clock cycles, as in LCS. The signal-to-noise ratio of the reconstructed signal depends on Δv and on the clock period Tc: if Δv is large, there can be a significant difference between the actual peak and the nearest threshold level, which causes a quantization error in amplitude, and the time-quantization error discussed for LCS applies equally to ES. The block diagram for extremum sampling based on LCS is shown in Fig. 4.

Fig. 4

Block diagram for extremum sampling based on LCS

The comparators and the logic circuit produce an approximation Ri of the input signal. VH and VL are the upper and lower reference levels with which the input is compared. Whenever the input crosses VH or VL, the values of Ri, VH, and VL are increased or decreased by Δv. Dtn is given by the output of the time counter, which is incremented by the clock and reset on a change of Sn, i.e., at the occurrence of a local extremum. The number of threshold levels between successive peaks is measured by the level counter, which increments at each level-crossing instant and is also reset on a change of Sn. During reconstruction, each sample is updated with the present Sn, Pn, and Dtn. With bn = +1 for Sn = 1 and bn = −1 for Sn = 0, the present sample value x(tn) is computed from the previous sample value x(tn−1) as

$$x\left( {t_{n} } \right) = x\left( {t_{n - 1} } \right) + b_{n} P_{n} \Delta v$$
(4)
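A behavioural sketch of the triplet stream (Sn, Pn, Dtn) and of the reconstruction in Eq. (4) is given below. It detects slope reversals directly on dense samples standing in for the analog input, so the noise tolerance of the LCS-based hardware in Fig. 4 is not modelled here; all names are illustrative.

```python
import numpy as np

def es_encode(x, t, delta_v, Tc):
    """Emit one triplet (Sn, Pn, Dtn) per local extremum: Sn = 1 for a positive
    peak and 0 for a negative one, Pn the peak-to-peak swing in units of delta_v
    (ceiling), Dtn the gap to the previous extremum in whole clock cycles."""
    triplets = []
    prev_val, prev_t = x[0], t[0]
    rising = x[1] > x[0]
    for i in range(1, len(x) - 1):
        peak = rising and x[i + 1] < x[i]          # slope changed + to -
        trough = (not rising) and x[i + 1] > x[i]  # slope changed - to +
        if peak or trough:
            pn = int(np.ceil(abs(x[i] - prev_val) / delta_v))
            dtn = int((t[i] - prev_t) / Tc)
            triplets.append((1 if peak else 0, pn, dtn))
            prev_val, prev_t, rising = x[i], t[i], trough
    return triplets

def es_reconstruct(triplets, x0, delta_v, Tc):
    """Rebuild the non-uniform samples with Eq. (4): x(tn) = x(tn-1) + bn*Pn*delta_v."""
    xs, ts, x_prev, t_prev = [], [], x0, 0.0
    for sn, pn, dtn in triplets:
        bn = 1 if sn == 1 else -1
        x_prev += bn * pn * delta_v
        t_prev += dtn * Tc
        xs.append(x_prev)
        ts.append(t_prev)
    return np.array(ts), np.array(xs)
```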

2.3 Effect of noise in extremum sampling

If the input signal is corrupted with high-frequency noise, the signal envelope will have abrupt variations. Suppose extremum sampling is implemented so that it generates a sample at every peak of the input signal; applied to a noisy signal, such a sampler takes a sample at the peak of every noise cycle, and the number of samples generated is very high compared to the Nyquist rate.

Since extremum sampling is implemented with LCS in the proposed method, amplitude variations within Δv do not cross any reference level and do not generate a sample. To illustrate the effect of noise, a 100 Hz sine wave is used as the signal. Initially, the sinusoidal signal alone is applied to the system. The signal and the variations in the reference levels VH and VL are shown in Fig. 5a; the sine wave is drawn in black, and VH and VL in red and blue. There are 20 level-crossings, and at each level-crossing VH and VL change by Δv. Only two samples, the peak values, are taken by ES implemented with LCS. To illustrate the noise tolerance of LCS-based ES, a triangular signal of peak-to-peak amplitude (Vpp) slightly less than Δv is used to represent noise. When the triangular wave alone is applied to the system, its amplitude variations remain within Δv, no level-crossing takes place, and no samples are acquired. The triangular wave is then added to the sine wave to generate a noisy signal, which is applied to the system. As illustrated in Fig. 5b, VH and VL are updated by Δv at each level-crossing. Near the positive and negative peaks of the sine wave, the amplitude variations are within Δv and no level-crossing occurs. In 150 of the total 180 level-crossings, the slope of the input changes from positive to negative or vice versa, and a sample is generated. Figure 5b also shows that level-crossings occur even for very small amplitude variations in the signal. One such region, near the negative peak of the sine wave, is encircled in Fig. 5b and shown magnified in Fig. 5c. At instant A, a small increase in the input amplitude crosses VH, both reference levels increase by Δv, and a sample is acquired. Soon afterwards, the peak of the triangular wave falls very slightly, but this causes the signal to cross VL; VH and VL are decreased by Δv, and another sample is generated. This happens because, when VH and VL are increased by Δv, the updated VL occupies the level of the previous VH; if the signal then decreases by even much less than Δv, it crosses the present VL and a sample is generated. This can be avoided by incorporating hysteresis in LCS.

Fig. 5

a Input and the reference levels while sine wave is applied to ES, b Noisy input applied to ES without hysteresis, c encircled portion is magnified, d Noisy input applied to ES with hysteresis

To incorporate hysteresis, a finite offset δ is maintained between the present VL and the previous VH when the slope of the signal is positive; similarly, the same offset is maintained between the present VH and the previous VL when the slope is negative. The same input signal is applied to LCS-based ES with hysteresis, and the corresponding VH and VL are shown in Fig. 5d, where the offset δ is chosen equal to Δv. As with the noiseless input of Fig. 5a, there are only 20 level-crossings, and only two samples are generated. The disadvantage of ES with hysteresis is that the maximum amplitude error becomes Δv + δ, compared to Δv for ES without hysteresis. The value of δ can be selected according to the estimated noise level. If the peak-to-peak value of the noise is greater than Δv, additional filtering must be performed before the signal is applied to ES.
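A sketch of the reference-level update with hysteresis, as we read the rule above: after an upward crossing the new VL is kept an offset δ below the previous VH, and symmetrically after a downward crossing. With δ = 0 the behaviour of Fig. 5b is recovered, and δ = Δv corresponds to the setting used for Fig. 5d.

```python
def update_on_up_crossing(vh, delta_v, delta):
    """The rising signal crossed VH: shift the window up by delta_v, but keep the
    new VL an offset delta below the previous VH, so a dip smaller than delta
    does not immediately trigger a downward crossing."""
    return vh + delta_v, vh - delta      # new (VH, VL)

def update_on_down_crossing(vl, delta_v, delta):
    """Mirror case: the falling signal crossed VL; keep the new VH an offset
    delta above the previous VL."""
    return vl + delta, vl - delta_v      # new (VH, VL)
```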

2.4 Performance measures

As an alternative to the conventional methods in which uniform sampling is followed by a compression process, this paper proposes extremum sampling for vibration signals. The effectiveness of compression on the signal is measured by the parameters mean square error (MSE), peak signal-to-noise ratio (PSNR), R-squared value (R2), and the compression ratio (CR). Since the number of bits per sample is different for the input and the output, CR is calculated on the basis of total bits in the input and the output.

$$CR = \frac{{{\text{Total}}\,{\text{number}}\,{\text{of}}\,{\text{bits}}\,{\text{in}}\,{\text{the}}\,{\text{input}}}}{{{\text{Total}}\,{\text{number}}\,{\text{of}}\,{\text{bits}}\,{\text{in}}\,{\text{the}}\,{\text{output}}}}$$
(5)

MSE, PSNR, and R2 evaluate the distortion caused by the sampling process, while CR measures the performance of size reduction. Let yi be the ith value of the reconstructed signal and xi the value of the input at the same instant. MSE is the average of the squared error between the input and the reconstructed signal; the reconstruction is better the closer MSE is to zero. For a group of n samples, MSE is defined as shown in (6).

$$MSE = \frac{1}{n}\mathop \sum \limits_{i = 1}^{n} \left( {x_{i} - y_{i} } \right)^{2}$$
(6)

PSNR compares the maximum level of the signal power to the level of the noise power, noise being the difference between the original and reconstructed signals. PSNR is expressed in (7).

$$PSNR_{\text{dB}} = 10\log_{10} \frac{{ \left( {X_{max} } \right)^{2} }}{MSE}$$
(7)

Xmax is the maximum amplitude of the input signal. R-squared (R2) is a statistical measure with a value between 0 and 1, representing the proportion of the variance of the input that is explained by the reconstruction. An R2 value close to 1 indicates that the reconstructed signal closely matches the input signal. R2 is calculated as

$$R^{2} = 1 - \frac{{\mathop \sum \nolimits_{i = 1}^{n} \left( {x_{i} - y_{i} } \right)^{2} }}{{\mathop \sum \nolimits_{i = 1}^{n} \left( {x_{i} - \bar{x}} \right)^{2} }}$$
(8)

\(\bar{x}\) is the mean of the input signal. As an illustration, a short segment of the vibration signal 'X097_DE' subjected to extremum sampling and the corresponding reconstructed output are shown in Fig. 6; the input signal and the reconstructed output are drawn with dotted and bold lines, respectively.

Fig. 6

Input signal, and the reconstructed output
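The measures of Eqs. (5)–(8) translate directly into code; the sketch below assumes the reconstructed signal has been re-interpolated onto the input time grid, so that xi and yi refer to the same instants, and the bit totals for CR are counted from the representations described in Sects. 1 and 2.2.

```python
import numpy as np

def compression_ratio(bits_in, bits_out):
    """Eq. (5): total input bits divided by total output bits."""
    return bits_in / bits_out

def mse(x, y):
    """Eq. (6): mean squared error between input x and reconstruction y."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.mean((x - y) ** 2)

def psnr_db(x, y):
    """Eq. (7): peak signal-to-noise ratio in dB, with Xmax the peak input amplitude."""
    x_max = np.max(np.abs(x))
    return 10 * np.log10(x_max ** 2 / mse(x, y))

def r_squared(x, y):
    """Eq. (8): coefficient of determination; values near 1 indicate close similarity."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    ss_res = np.sum((x - y) ** 2)
    ss_tot = np.sum((x - np.mean(x)) ** 2)
    return 1.0 - ss_res / ss_tot
```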

3 Simulation results

Bearing vibration data from the data set provided by Case Western Reserve University is used for the simulation. It is available as uniform samples at a rate of 12,000 samples per second. These samples are interpolated to approximate the analog waveform, and extremum sampling is performed on selected input signals. The simulation is carried out in Matlab/Simulink. Details of the input signals subjected to extremum sampling and the corresponding values of CR, MSE, PSNR, and R2 are listed in Table 1. The table shows that the signals are reproduced with minimal error; high compression ratios are achieved with an excellent correlation between the input and output signals. As discussed in Sect. 2.3, the maximum amplitude error for ES with hysteresis is Δv + δ. To keep this error small, Δv and δ are selected as 5 mV and 2.5 mV, respectively, and the frequency of the timer clock fclk is selected as 100 kHz.

Table 1 Compression ratio (CR), Mean square error (MSE), Peak signal-to-noise ratio (PSNR) and R2 value

The signal-to-noise ratio of a level-crossing sampler depends only on the frequency of the clock used. In extremum sampling, a local peak is detected by a change in the direction of the slope at a level-crossing instant, so extremum sampling also suffers an error in amplitude. The variation of the CR and R2 values obtained for the signal 'X097_DE' with respect to Δv is plotted in Fig. 7. Although CR increases with Δv, as shown in Fig. 7a, the error also increases, as indicated by the fall in R2 in Fig. 7b.

Fig. 7

a Compression ratio (CR) versus Δv, b R2 value versus Δv

The reproduced signals exhibit close similarity to the input signals. As an example, the vibration signal 'X097_DE' and its spectrum are shown in Fig. 8a and 8b, respectively, and the reconstructed output in the time and spectral domains is shown in Fig. 8c and 8d. All the peak values in the time domain and all the significant frequency components of the input are preserved.

Fig. 8

a Input signal, b spectrum of the input signal, c output signal, d spectrum of the output signal

Vibration signals acquired with normal and faulty bearings are available in the database. The analysis shows that signals from faulty bearings have higher values of Pn and lower values of Dtn. The compression ratio achieved for a signal from a faulty bearing is slightly lower than that for a normal one, but the reconstruction accuracy is maintained. The reconstruction accuracy improves with increasing fclk and with decreasing Δv.

4 Discussion

Most of the compression algorithms used for vibration signals are based on uniform sampling: an excess number of samples is generated first and then subjected to a compression algorithm, and the sparsity of the signals is not exploited. Methods based on compressed sensing make use of the sparsity of the signal in a transform domain. CS has been applied to compress vibration signals in [21, 22]; the maximum CR attained is 2.5 in [21] and 4 in [22]. However, instead of sensing in a compressed form, the signals are first sampled at the Nyquist rate, and the digitized samples are then multiplied by a suitable measurement matrix to convert them into compressed form. These methods require iterative optimization algorithms for reconstruction, which makes real-time reconstruction difficult. In the proposed system, ES acquires the samples in compressed form during the sampling process itself, giving a CR of 4.89, which is better than the values reported in [21, 22]. The sampling rate adapts to the signal statistics and produces compressed data directly from the analog input signal, enabling real-time encoding and reconstruction. The results indicate that very good compression with minimal reconstruction error is possible when ES is applied to vibration signals. In this paper, ES is implemented with LCS, so it has inherent noise suppression determined by the inter-level distance Δv; however, analog filters may be required before sampling if the noise level is high.

5 Conclusion

In this paper, extremum sampling has been proposed for vibration signals as an alternative to conventional sampling methods and CS-based methods. Samples are acquired at the local extrema of the input signal, from which the vibration signal is reconstructed with sufficient accuracy. Although the proposed method falls in the category of lossy compression, the peak values of the input and the time gaps between them are retained in the reconstructed output. The method offers high compression ratios with minimal error. Extremum sampling is implemented with LCS, which is aliasing-free, adapts to changes in the input signal, and tolerates noise. The proposed method is suitable for launch vehicle telemetry systems and other wireless sensor networks. Even though the samples are non-uniformly spaced, the number of bits per sample remains the same, which enables easy multiplexing. The compression ratio achieved with the proposed method is not the outcome of a compression algorithm but the result of a non-uniform sampling method. Additional filtering is required if the noise amplitude is greater than Δv. The possibility of achieving further compression from these samples remains to be explored.