An efficient solution to sparse linear prediction analysis of speech
Abstract
We propose an efficient solution to the problem of sparse linear prediction analysis of the speech signal. Our method is based on minimization of a weighted l2-norm of the prediction error. The weighting function is constructed such that less emphasis is given to the error around the points where we expect the largest prediction errors to occur (the glottal closure instants), and hence the resulting cost function approaches the ideal l0-norm cost function for sparse residual recovery. We show that the efficient minimization of this objective function (by solving the normal equations of a linear least-squares problem) yields a higher sparsity level of the residuals than the l1-norm minimization approach, which relies on computationally demanding convex optimization methods. Indeed, the computational complexity of the proposed method is roughly the same as that of the classic minimum variance linear prediction analysis. Moreover, to show a potential application of such a sparse representation, we use the resulting linear prediction coefficients inside a multi-pulse synthesizer and show that the corresponding multi-pulse estimate of the excitation source results in slightly better synthesis quality than the classical technique, which uses the traditional non-sparse minimum variance synthesizer.
Keywords
Speech Signal; Sparse Representation; Vocal Tract; Linear Predictive Code; Linear Prediction Coefficient

1 Introduction
Linear prediction (LP) analysis is a ubiquitous analysis technique in current speech technology. The basis of LP analysis is the source-filter production model of speech. For voiced sounds in particular, the filter is assumed to be an all-pole linear filter and the source is considered to be a semi-periodic impulse train which is zero most of the time, i.e., the source is a sparse time series. LP analysis results in the estimation of the all-pole filter parameters representing the spectral shape of the vocal tract. The accuracy of this estimation can be evaluated by observing the extent to which the residuals (the prediction errors) of the corresponding prediction filter resemble the hypothesized source of excitation [1] (a perfect impulse train in the case of voiced speech). However, it is shown in [1] that even when the vocal tract filter follows an actual all-pole model, this criterion of goodness is not fulfilled by the classical minimum variance predictor. Beyond its theoretical physical significance, such a sparse representation forms the basis for many applications in speech technology. For instance, a class of efficient parametric speech coders is based on the search for a sparse excitation sequence feeding the LP synthesizer [2].
It is argued in [3] that the reason behind the failure of the classical method to provide such a sparse representation is its reliance on minimization of the l2-norm of the prediction error. The l2-norm criterion is known to be highly sensitive to outliers [4], i.e., the points having considerably larger norms of error. Hence, l2-norm error minimization favors solutions with many small non-zero entries rather than sparse solutions having the fewest possible non-zero entries [4], and it is therefore not an appropriate objective function for problems that incorporate sparseness constraints. Indeed, the ideal solution for sparse residual recovery is to directly minimize the cardinality of this vector, i.e., the l0-norm of the prediction error, which yields a combinatorial optimization problem. Instead, to alleviate the exaggerative effect of the l2-norm criterion at points with large norms of error, it is usual to consider minimization of the l1-norm, as it puts less emphasis on outliers. The l1-norm can be regarded as a convex relaxation of the l0-norm, and its minimization problem can be recast as a linear program and solved by convex programming techniques [5].
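To make the relaxation explicit (our restatement, anticipating the notation introduced in Section 2):

```latex
% Ideal sparse residual recovery: minimize the cardinality of the error
\hat{\mathbf{a}} \;=\; \arg\min_{\mathbf{a}} \;\|\mathbf{x}-\mathbf{X}\mathbf{a}\|_{0}
\qquad \text{(combinatorial, NP-hard)}

% Convex relaxation: replace the cardinality by the l1-norm
\hat{\mathbf{a}} \;=\; \arg\min_{\mathbf{a}} \;\|\mathbf{x}-\mathbf{X}\mathbf{a}\|_{1}
\qquad \text{(recast as a linear program, solvable by convex optimization [5])}
```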
The l1-norm minimization of residuals has already proven beneficial for speech processing [6, 7, 8]. In [6], the stability issue of l1-norm linear programming is addressed and a method is introduced that both yields an intrinsically stable solution and keeps the computational cost down. The approach is based on the Burg method for autoregressive parameter estimation using the least absolute forward-backward error.
In [7], the authors compared the Burg method with their l1-norm minimization method using the modern interior-point method and showed that sparseness is not preserved by the Burg method. Later, they proposed a re-weighted l1-norm minimization approach in [8] to enhance the sparsity of the residuals and to overcome the mismatch between l0-norm and l1-norm minimization, while keeping the problem solvable with convex programming tools. Initially, the l1-norm minimization problem is solved using the interior-point method; the resulting residuals are then used iteratively to re-weight the l1-norm objective function such that less weight is given to the points having larger residual norms. The optimization problem thus iteratively approaches the solution for the ideal l0-norm objective function. We also mention the interesting review in [9, 10] of several solvers for the general problem of mixed lp-l0-norm minimization in the context of piecewise-constant function approximation; their adaptation to the problem of sparse linear prediction analysis could be beneficial (particularly the stepwise jump penalization algorithm, which is shown to be highly efficient and reliable in the detection of sparse events).
In this article, we propose a new and efficient solution to sparse LP analysis which is based on weighting of the l2-norm objective function, so as to maintain the computational tractability of the final optimization problem and to avoid the computational burden of convex programming. The weighting function plays the most important role in our solution in maintaining the sparsity of the resulting residuals. We first extract from the speech signal itself the points having the potential of attaining the largest norms of residuals (the glottal closure instants), and we then construct the weighting function such that the prediction error is relaxed at these points. Consequently, the weighted l2-norm objective function can be minimized by solving the normal equations of a linear least-squares problem. We show that our closed-form solution provides better sparseness properties than l1-norm minimization using the interior-point method. Also, to show the usefulness of such a sparse representation, we use the resulting prediction coefficients inside a multi-pulse excitation (MPE) coder and show that the corresponding multi-pulse excitation source provides slightly better synthesis quality than the estimated excitation of the classical minimum variance synthesizer.
The article is organized as follows. In Section 2, we provide the general formulation of the LP analysis problem. In Section 3, we briefly review previous studies on sparse LP analysis and the numerical motivations behind them. We present our efficient solution in Section 4. In Section 5, the experimental results are presented, and finally, in Section 6, we draw our conclusions and perspectives.
2 Problem formulation
Let x(n), n = 1, …, N, be a frame of the speech signal and K the prediction order. The LP model writes each sample as a linear combination of the K preceding samples plus a residual r(n):

$$x(n) = \sum_{k=1}^{K} a_k\, x(n-k) + r(n). \tag{1}$$

Collecting the prediction coefficients in $\mathbf{a} = [a_1, \ldots, a_K]^T$, the LP analysis problem is

$$\hat{\mathbf{a}} = \arg\min_{\mathbf{a}} \big\| \mathbf{x} - \mathbf{X}\mathbf{a} \big\|_p^p, \tag{2}$$

where $\mathbf{x} = [x(N_1), \ldots, x(N_2)]^T$, the n-th row of $\mathbf{X}$ is $[x(n-1), \ldots, x(n-K)]$, and N1 = 1 and N2 = N + K (for n < 1 and n > N, we put x(n) = 0). The l_p-norm is defined as $\|\mathbf{e}\|_p = \left( \sum_n |e(n)|^p \right)^{1/p}$. Depending on the choice of p in Equation 2, the estimated linear prediction coefficients and the resulting residuals possess different properties.
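For concreteness, here is a minimal NumPy sketch of the classical p = 2 case under this zero-padded (autocorrelation) construction of X; the function and variable names are ours, not from the paper.

```python
import numpy as np

def lp_analysis_l2(x, K):
    """Classical l2-norm LP analysis (autocorrelation formulation).

    x : speech frame of length N (1-D array); K : prediction order.
    Returns the coefficients a and the residual r = x_ext - X @ a,
    evaluated over n = 1 .. N + K with x(n) = 0 outside the frame.
    """
    N = len(x)
    M = N + K                                   # N1 = 1, N2 = N + K
    x_ext = np.concatenate([x, np.zeros(K)])    # zero-padded target vector
    X = np.zeros((M, K))
    for k in range(1, K + 1):                   # column k - 1 holds x(n - k)
        X[k:k + N, k - 1] = x
    a = np.linalg.solve(X.T @ X, X.T @ x_ext)   # normal equations
    r = x_ext - X @ a
    return a, r
```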
3 Approaching the l0-norm
The ideal solution to the LP analysis problem of Equation 2, so as to retrieve the sparse excitation source of voiced sounds, is to directly minimize the number of non-zero elements of the residual vector, i.e., its cardinality or so-called l0-norm [11]. As this is an NP-hard optimization problem [8], its relaxed but more tractable versions (p = 1, 2) are the most widely used.
Setting p = 2 results in the classical minimum variance LP analysis problem. Although the latter offers the highest computational efficiency, it is known that this solution cannot provide the desired level of sparsity, even when the vocal tract filter is truly an all-pole filter [1]. The l2-norm has an exaggerative effect on the points having larger values of prediction error (the so-called outliers). Consequently, the minimizer puts much effort into forcing down the value of these outliers, at the cost of more non-zero elements. Hence, the resulting residuals are not as sparse as desired.
This exaggerative effect on the outliers is reduced with the use of the l1-norm, so its minimization can improve on the minimum variance solution in that the errors on the outliers are less penalized [11]. The l1-norm minimization problem is not as easy as the classical minimum variance LP analysis problem, but it can be solved by recasting it as a linear program [12] and using convex optimization tools [5]. However, it is argued in [6] that linear-programming l1-norm minimization suffers from stability and computational issues; instead, an efficient algorithm is introduced there, based on a lattice filter structure in which the reflection coefficients are obtained using a Burg method with an l1 criterion, and the robustness of the method is shown to be interesting for voiced sound analysis. It is shown in [7], however, that the l1-norm Burg algorithm behaves somewhere in between l2-norm and l1-norm minimization. Instead, the authors showed that an enhanced sparsity level can be achieved using the modern interior-point method [5] for solving the linear program. They presented interesting results of such analysis and argued that the added computational burden is negligible considering the consequent simplifications (granted by such a sparse representation) in applications such as open- and closed-loop pitch analysis and algebraic excitation search.
An iteratively re-weighted l1-norm minimization approach was consequently proposed by the same authors in [8] to enhance the sparsity of the residuals while keeping the problem solvable by convex techniques. The algorithm starts with plain l1-norm minimization; then, iteratively, the resulting residuals are used to re-weight the l1-norm cost function such that the points having larger residuals (outliers) are less penalized and the points having smaller residuals are penalized more heavily. Hence, the optimizer encourages small values to become smaller while augmenting the amplitude of the outliers [13].
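As a point of reference for what the convex-programming route involves, the sketch below recasts one (re)weighted l1-norm pass as a linear program in the spirit of [8, 12]. The SciPy-based recast, the reweighting rule, and all names are our illustrative choices, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import linprog

def l1_lp(x, X, w=None):
    """One (re)weighted l1-norm LP analysis step.

    Solves  min_a  sum_n w(n) |x(n) - (Xa)(n)|  via the standard
    linear-program recast:  min sum(w * t)  s.t.  -t <= x - Xa <= t.
    """
    M, K = X.shape
    w = np.ones(M) if w is None else w
    c = np.concatenate([np.zeros(K), w])    # objective acts on slacks t only
    A = np.block([[-X, -np.eye(M)],         #   x - Xa <= t
                  [ X, -np.eye(M)]])        # -(x - Xa) <= t
    b = np.concatenate([-x, x])
    bounds = [(None, None)] * K + [(0, None)] * M
    res = linprog(c, A_ub=A, b_ub=b, bounds=bounds, method="highs")
    a = res.x[:K]
    return a, x - X @ a

def reweighted_l1(x, X, n_iter=3, eps=1e-2):
    """Iteratively reweighted l1 minimization in the spirit of [8]."""
    w = None
    for _ in range(n_iter):
        a, r = l1_lp(x, X, w)
        w = 1.0 / (np.abs(r) + eps)         # large residuals get small weight
    return a, r
```

Each pass solves a linear program over K + M variables, which is what makes this route markedly more expensive than a K-by-K normal-equations solve.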
Figure: graphical representation of different cost functions, comparing l_p-norm cost functions for p ≤ 2. The "democratic" l0-norm cost function is approached as p → 0; the term "democratic" refers to the fact that the l0-norm weights all the nonzero coefficients equally [11].
4 The weighted l2-norm solution
We aim to develop an alternative and efficient optimization strategy that approximates the desired sparsity of the residuals. Our approach is based on the minimization of a weighted version of the l2-norm criterion. The weighting function plays the key role in maintaining the sparsity of the residuals. Beyond the purely numerical motivation of de-emphasizing the exaggerative effect of the l2-norm on outliers (as discussed in Section 3), the design of this function is motivated by the physical production process of the speech signal. We extract from the speech signal itself the points which are physically susceptible of attaining larger values of residuals, and we construct the weighting function such that the error at those outliers is less penalized.
The outliers of LP residuals have an important physical interpretation, as their time-pattern follows the pitch period of the speech signal. In other words, they follow the physical excitation source of the vocal tract system, which is a sparse sequence of glottal pulses separated by the pitch period. Indeed, the impulse-like nature of this excitation source is reflected as effective discontinuities in the residual signal [14]: when no significant excitation is present at the input of the vocal tract system, its output resonates freely according to the hypothesized all-pole model, and hence it is maximally predictable from the parameters of the filter. On the other hand, predictability is lowest when a significant excitation takes place, since the output signal is then under the influence of both the excitation source and the vocal tract filter. Consequently, the LP residual contains clear peaks (outliers) around the instants of significant excitation of the vocal tract system. Hence, if we have a priori knowledge of these instants, we can use it to impose constraints on the LP analysis problem so as to relax the prediction error at those points. By doing so, we ensure that any enhancement achieved in the sparsity level of the residuals also corresponds to the physical sparse source of excitation.
The instants of significant excitation of the vocal tract are called glottal closure instants (GCIs) [14]. The detection of GCIs has gained significant attention recently, as it finds many interesting applications in pitch-synchronous speech analysis. Many methods have been developed for GCI detection in adverse environments (a comparative study is provided in [15]), and the physical significance of the detected GCIs has been validated by comparison with the electroglottographic signal. In this article, we use the recent and robust SEDREAMS algorithm [15]a. The weighting function is then constructed such that less emphasis is given to the GCI points, and hence the exaggerative effect on the outliers of the residuals is canceled. We can now proceed to formalize the proposed solution.
4.1 Optimization algorithm
Given the weighting function w(·) constructed in Section 4.2, we minimize the weighted l2-norm of the prediction error, $\hat{\mathbf{a}} = \arg\min_{\mathbf{a}} \sum_n w(n)\,\big(x(n) - (\mathbf{X}\mathbf{a})(n)\big)^2$; the minimizer is obtained in closed form by solving the weighted normal equations $(\mathbf{X}^T \mathbf{W} \mathbf{X})\,\mathbf{a} = \mathbf{X}^T \mathbf{W} \mathbf{x}$ with $\mathbf{W} = \mathrm{diag}(w)$. It is interesting to mention that our experiments show that, as long as the smoothness of w(·) is maintained, the stability of the solution is preserved. Indeed, the special form of the input vector X in Equation 2 is the one used in the autocorrelation formulation of LP analysis with l2-norm minimization. It is proven that the autocorrelation formulation always results in a minimum-phase estimate of the all-pole filter, even if the real vocal tract filter is not minimum phase [1]. As our formulation is similar to the autocorrelation formulation, we can fairly expect the same behavior (though we do not have a theoretical proof). This is indeed beneficial, as a non-minimum-phase spectral estimate results in saturations during synthesis applications. Our experiments show that such saturation indeed never happens. This is an interesting advantage of our method over l1-norm minimization methods, which do not guarantee a minimum-phase solution unless additional constraints are imposed on the problem [7].
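A minimal sketch of this closed-form solve, assuming the weighted normal equations stated above (the names are ours):

```python
import numpy as np

def lp_analysis_weighted_l2(x, X, w):
    """Weighted l2-norm LP analysis via the normal equations.

    Solves  min_a sum_n w(n) * (x(n) - (Xa)(n))**2  in closed form:
    (X^T W X) a = X^T W x,  with W = diag(w).
    """
    Xw = X * w[:, None]               # row-wise scaling; avoids forming diag(w)
    a = np.linalg.solve(Xw.T @ X, Xw.T @ x)
    residual = x - X @ a
    return a, residual
```

The linear system is only K-by-K, so the cost is essentially that of the classical minimum variance solve, consistent with the complexity claim made in the abstract.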
4.2 The weighting function
Figure: the weighting function. A frame of a voiced sound along with the detected GCI locations and the constructed weighting function (with σ = 50 and κ = 1).
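The exact closed form of w(·) is not reproduced here. Based on the Gaussian shape mentioned in Section 5.3 and the parameters κ and σ in the caption above, one plausible construction places an inverted Gaussian dip of depth κ and width σ at each detected GCI; the sketch below implements that assumed form and is an illustration, not the authors' exact definition.

```python
import numpy as np

def build_weighting_function(M, gci, sigma=50.0, kappa=0.9):
    """Assumed form of w(n): close to 1 away from GCIs, dipping near them.

    M     : number of error samples (n = N1 .. N2)
    gci   : indices of detected glottal closure instants (e.g., SEDREAMS)
    sigma : width of each Gaussian dip
    kappa : depth of each dip (0 < kappa <= 1)
    """
    n = np.arange(M, dtype=float)
    w = np.ones(M)
    for t in gci:
        w -= kappa * np.exp(-((n - t) ** 2) / (2.0 * sigma ** 2))
    return np.clip(w, 0.0, 1.0)   # keep the weights non-negative
```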
5 Experimental results
We first show the ability of our approach to retrieve sparse residuals for stationary voiced signals, and we also show that it can provide a better estimation of the all-pole vocal-tract filter parameters. We then show how such sparse modeling can enhance the performance of multi-pulse excitation estimation. All the results presented in this section are obtained using the following set of parameters for w(·): κ = 0.9 and σ = 50. This choice of parameters was obtained using a small development set (of a few voiced frames) taken from the TIMIT database [18].
5.1 Sparsity of residuals for voiced sounds
Figure: comparison of the sparsity of residuals, showing the residuals of the LP analysis obtained from the different optimization strategies. The prediction order is K = 13 and the frame length is N = 160.
Comparison of the sparsity levels
Method | Kurtosis on the whole sentence | Kurtosis on voiced parts
---|---|---
l2-norm minimization | 51.7 | 39.7
l1-norm minimization | 81.9 | 65.9
weighted-l2-norm minimization | 86.6 | 69.7
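The sparsity levels above are quantified by kurtosis (see [19] for a comparison of sparsity measures); a small SciPy snippet of our own for computing this figure of merit on a residual vector:

```python
from scipy.stats import kurtosis

def residual_sparsity(r):
    """Kurtosis of the residuals: higher values indicate a peakier,
    sparser distribution. fisher=False returns the raw fourth-moment
    ratio rather than the excess kurtosis."""
    return kurtosis(r, fisher=False)
```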
5.2 Estimation of the all-pole vocal-tract filter
Figure: estimation of the all-pole filter. (top) Synthetic speech signal; (bottom) frequency response of the filters obtained with l2-norm and weighted-l2-norm minimization (prediction order K = 13). Only the first half of the frequency axis is shown so as to enhance the presentation.
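A comparison of this kind can be reproduced with a few lines of SciPy; the sketch below (our own, reusing the hypothetical helper names from the earlier sketches) plots the frequency responses of the two estimated all-pole filters.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import freqz

def plot_responses(a_l2, a_w, fs=16000):
    """Overlay the synthesis-filter responses 1 / (1 - sum_k a_k z^-k)
    for the l2-norm and weighted-l2-norm coefficient vectors."""
    for a, label in [(a_l2, "l2-norm"), (a_w, "weighted l2-norm")]:
        w, H = freqz(b=[1.0], a=np.concatenate([[1.0], -a]), fs=fs)
        plt.plot(w, 20 * np.log10(np.abs(H)), label=label)
    plt.xlabel("Frequency (Hz)")
    plt.ylabel("Magnitude (dB)")
    plt.legend()
    plt.show()
```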
5.3 Multi-pulse excitation estimation
The sparseness of the excitation source is a fundamental assumption in the framework of linear predictive coding (LPC), where speech is synthesized by feeding the estimated all-pole filter with an estimate of the excitation source. The coding gain is achieved by considering a sparse representation for the excitation source. In the popular multi-pulse excitation (MPE) method [20, 21], the synthesis filter is estimated through the classic l2-norm minimization, and a sparse multi-pulse excitation sequence is then extracted through an iterative analysis-by-synthesis procedure. However, as discussed in the previous sections, this synthesizer is not intrinsically a sparse one. Hence, it is logical to expect that employing an intrinsically sparse synthesis filter, such as the one developed in this article, would enhance the quality of the speech synthesized from the corresponding multi-pulse estimate. Consequently, we compare the performance of the classical MPE synthesizer, which uses the minimum variance LPC synthesis filter, with one whose synthesis filter is obtained through our weighted l2-norm minimization procedure. We emphasize that we follow exactly the same procedure for the estimation of the multi-pulse excitation for both synthesizers, as in the classical MPE implementation in [21] (iterative minimization of the perceptually weighted error of reconstruction).
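To fix ideas, here is a stripped-down sketch of the greedy multi-pulse search loop in the spirit of [20, 21], omitting the perceptual weighting filter and the joint amplitude re-optimization of [21] for brevity; all names and simplifications below are ours.

```python
import numpy as np

def mpe_search(x, h, n_pulses):
    """Greedy multi-pulse excitation search (simplified, unweighted).

    x        : target speech frame
    h        : truncated impulse response of the synthesis filter
               (classical l2-norm or the weighted-l2-norm filter of Section 4)
    n_pulses : number of excitation pulses per frame
    Returns pulse positions, amplitudes, and the synthetic frame.
    """
    N = len(x)
    target = x.astype(float).copy()
    positions, amplitudes = [], []
    e_h = float(np.sum(h ** 2))                 # impulse-response energy
    for _ in range(n_pulses):
        # Correlate the remaining target with h at every candidate position;
        # slicing keeps only the lags where the pulse lies inside the frame.
        c = np.correlate(target, h, mode="full")[len(h) - 1:]
        pos = int(np.argmax(np.abs(c)))
        amp = c[pos] / e_h                      # optimal amplitude at pos
        tail = min(len(h), N - pos)             # subtract this pulse's effect
        target[pos:pos + tail] -= amp * h[:tail]
        positions.append(pos)
        amplitudes.append(amp)
    synth = x - target                          # accumulated synthetic signal
    return positions, amplitudes, synth
```

In the experiment reported below, only h changes between the two synthesizers: the impulse response of the classical l2-norm filter versus that of the weighted-l2-norm filter.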
Multi-pulse excitation coding
Method | PESQ | SNR (dB)
---|---|---
MPE + l2-norm | 3.3 | 9.5
MPE + weighted-l2-norm | 3.4 | 10.2
Finally, we emphasize that the superior performance of our weighted-l2-norm solution in retrieving sparse residuals (Section 5.1), plus the slight improvement in coding quality (Section 5.3), is achieved with roughly the same computational complexity as classical l2-norm minimization (if we neglect the computational burden of the GCI detector). This is a great advantage over the computationally demanding l1-norm minimization via convex programming (as in [7], or in [8] where multiple re-weighted l1-norm problems are solved), which also suffers from instability issues. Moreover, another important feature of our solution is that, during the coding experiment, we observed that with the Gaussian shape of the weighting function the solution is always stable and does not run into the instability issues of l1-norm minimization.
6 Conclusion
We introduced a simple and efficient solution to the problem of sparse residual recovery of the speech signal. Our approach is based on the minimization of a weighted l2-norm of the residuals. The l2-norm was used to preserve the simplicity and efficiency of the solution, while the weighting function was designed to circumvent the l2-norm's exaggerative effect on larger residuals (the outliers). This is done by de-emphasizing the error on the GCIs where, considering the physical production mechanism of speech, we expect the outliers to occur. We showed that our methodology provides better sparsity properties than the complex and computationally demanding l1-norm minimization via linear programming. Moreover, the method is interestingly immune to the instability problems of l1-norm minimization. We also showed that such an intrinsically sparse representation can result in slightly better synthesis quality through sparse multi-pulse excitation of the synthesis filter in the MPE coding framework. The performance, efficiency and stability of the proposed solution show a promising potential, and its application can be further investigated in a variety of settings within the general framework of speech synthesis. This will be the subject of our future communications.
Endnote
a. We opted for SEDREAMS so as to benefit from its proven reliability, in order to keep the focus on proof of concept.
Acknowledgements
Vahid Khanagha is funded by the INRIA CORDIS doctoral program.
References
- 1. Quatieri TF: Discrete-Time Speech Signal Processing: Principles and Practice. Prentice-Hall, Upper Saddle River, New Jersey, USA; 2001.
- 2. Chu WC: Speech Coding Algorithms: Foundation and Evolution of Standardized Coders. Wiley-Interscience, John Wiley & Sons, Inc., Hoboken, New Jersey, USA; 2003.
- 3. Giacobello D: Sparsity in Linear Predictive Coding of Speech. PhD thesis, Multimedia Information and Signal Processing, Department of Electronic Systems, Aalborg University; 2010.
- 4. Meng D, Zhao Q, Xu Z: Improved robustness of sparse PCA by L1-norm maximization. Pattern Recogn 2012, 45: 487-497. 10.1016/j.patcog.2011.07.009
- 5. Boyd S, Vandenberghe L: Convex Optimization. Cambridge University Press, Cambridge, United Kingdom; 2004.
- 6. Denoel E, Solvay JP: Linear prediction of speech with a least absolute error criterion. IEEE Trans. Acoustics Speech Signal Process 1985, 33: 1397-1403. 10.1109/TASSP.1985.1164759
- 7. Giacobello D, Christensen MG, Dahl J, Jensen SH, Moonen M: Sparse linear predictors for speech processing. In Proceedings of INTERSPEECH. Brisbane, Australia; 2008: 1353-1356.
- 8. Giacobello D, Christensen MG, Murthi MN, Jensen SH, Moonen M: Enhancing sparsity in linear prediction of speech by iteratively reweighted l1-norm minimization. In Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP). Dallas, Texas, USA; 2010: 4650-4653.
- 9. Little MA, Jones N: Generalized methods and solvers for noise removal from piecewise constant signals: Part I. Background theory. Proceedings of the Royal Society A 2011, 467(2135): 3088-3114. 10.1098/rspa.2010.0671
- 10. Little MA, Jones N: Generalized methods and solvers for noise removal from piecewise constant signals: Part II. New methods. Proceedings of the Royal Society A 2011, 467(2135): 3115-3140. 10.1098/rspa.2010.0674
- 11. Candès EJ, Wakin MB, Boyd SP: Enhancing sparsity by reweighted l1 minimization. J. Fourier Anal. Appl 2008, 14: 877-905. 10.1007/s00041-008-9045-x
- 12. Candès E, Romberg J: l1-MAGIC: Recovery of Sparse Signals via Convex Programming. California Institute of Technology, Pasadena; 2005.
- 13. Giacobello D, Christensen MG, Murthi MN, Jensen SH, Moonen M: Sparse linear prediction and its applications to speech processing. IEEE Trans. Audio Speech Lang. Process 2012, 20: 1644-1657.
- 14. Murty K, Yegnanarayana B: Epoch extraction from speech signals. IEEE Trans. Audio Speech Lang. Process 2008, 16: 1602-1613.
- 15. Drugman T, Thomas M, Gudnason J, Naylor P, Dutoit T: Detection of glottal closure instants from speech signals: a quantitative review. IEEE Trans. Audio Speech Lang. Process 2012, 20(3): 994-1006.
- 16. Thomas M, Gudnason J, Naylor P: Estimation of glottal closing and opening instants in voiced speech using the YAGA algorithm. IEEE Trans. Audio Speech Lang. Process 2012, 20(1): 82-97.
- 17. Wong D, Markel J, Gray JA: Least squares glottal inverse filtering from the acoustic speech waveform. IEEE Trans. Acoustics Speech Signal Process 1979, 27(4): 350-355. 10.1109/TASSP.1979.1163260
- 18. Garofolo JS, Lamel LF, Fisher WM, Fiscus JG, Pallett DS, Dahlgren NL, Zue V: DARPA TIMIT Acoustic-Phonetic Continuous Speech Corpus. Tech. rep., U.S. Dept. of Commerce, NIST, Gaithersburg, MD; 1993.
- 19. Hurley N, Rickard S: Comparing measures of sparsity. IEEE Trans. Inf. Theory 2009, 55: 4723-4740.
- 20. Atal BS, Remde JR: A new model of LPC excitation for producing natural-sounding speech at low bit rates. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP). 1982: 614-617.
- 21. Singhal S, Atal B: Amplitude optimization and pitch prediction in multipulse coders. IEEE Trans. Acoustics Speech Signal Process 1989, 37: 317-327. 10.1109/29.21700
- 22. Giacobello D, Christensen M, Murthi M, Jensen S, Moonen M: Retrieving sparse patterns using a compressed sensing framework: applications to speech coding based on sparse linear prediction. IEEE Signal Process. Lett 2010, 17(1): 103-106.
- 23. International Telecommunication Union: ITU-T Recommendation P.862: Perceptual evaluation of speech quality (PESQ), an objective method for end-to-end speech quality assessment of narrowband telephone networks and speech codecs.
Copyright information
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.