Single microphone speech separation by diffusion-based HMM estimation
Abstract
We present a novel non-iterative and rigorously motivated approach for estimating hidden Markov models (HMMs) and factorial hidden Markov models (FHMMs) of high-dimensional signals. Our approach utilizes the asymptotic properties of a spectral, graph-based approach for dimensionality reduction and manifold learning, namely the diffusion framework. We exemplify our approach by applying it to the problem of single microphone speech separation, where the log-spectra of two unmixed speakers are modeled as HMMs, while their mixture is modeled as an FHMM. We derive two diffusion-based FHMM estimation schemes: the first is experimentally shown to provide separation results that compare with contemporary HMM-based speech separation approaches, while the second reduces the computational burden.
Keywords
Single microphone speech separation · Manifold learning · Diffusion maps · Factorial hidden Markov models

Abbreviations
BSS: Blind source separation
CASA: Computational auditory scene analysis
DFHMM: Dual FHMM
DFHMM-H: DFHMM with hard mask
DFHMM-S: DFHMM with soft mask
EM: Expectation-maximization
FHMM: Factorial hidden Markov model
FSHMM: Factorial scaled hidden Markov model
GMM: Gaussian mixture model
HFHMM: Hybrid FHMM
HFHMM-HE: HFHMM with hard mask, using exact embedding
HFHMM-HN: HFHMM with hard mask, using the Nyström extension
HFHMM-SE: HFHMM with soft mask, using exact embedding
HFHMM-SN: HFHMM with soft mask, using the Nyström extension
HMM: Hidden Markov model
ICA: Independent component analysis
IdBM: Ideal binary mask
iDFHMM: Ideal DFHMM
JADE: Joint approximate diagonalization of eigenmatrices
kNN: k-nearest neighbors
MAP: Maximum a posteriori
MFCC: Mel-frequency cepstral coefficients
ML: Maximum likelihood
MMSE: Minimum mean square error
NHMM: Non-negative hidden Markov model
NMF: Non-negative matrix factorization
p.d.f.: Probability density function
PSD: Power spectral density
RBF: Radial basis function
Roweis-H: Roweis with hard mask
Roweis-S: Roweis with soft mask
SAR: Source-to-artifacts ratio
SCSS: Single-channel speech separation
SDR: Source-to-distortion ratio
SIR: Source-to-interference ratio
SNR: Source-to-noise ratio
SOBI: Second-order blind identification
STFT: Short-time Fourier transform
1 Introduction
Single-channel speech separation (SCSS) is one of the most challenging tasks in speech processing, where the aim is to unmix two or more concurrently speaking subjects whose audio mixture is acquired by a single microphone. The goal is therefore to decompose the single input signal into multiple output channels, each dominated by a single speaker. The core obstacles in such tasks are the lack of spatial information and the common statistical characteristics of the mixed signals.
Single-channel speech separation was studied by several schools of thought, where computational auditory scene analysis (CASA) proved to be among the most effective. CASA-based methods are motivated by the ability of the human auditory system to separate acoustic events, even when using a single ear (although binaural hearing is advantageous). CASA techniques imitate the human auditory filtering known as cochlear filtering, where time-frequency bins of the speech mixture are clustered using psychoacoustic cues such as the pitch period, temporal continuity, onsets and offsets, etc. The clustering associates each time-frequency bin with a particular source. The time-frequency bins associated with the desired source are retained, while those associated with the interference are attenuated. Such approaches were studied by Weintraub [1], Parsons [2], and Brown and Cooke [3]. Contemporary CASA schemes utilize oscillatory correlations [4] and common amplitude modulation [5], but do not utilize prior information regarding the source signals and their number. All they require is that each time-frequency bin is dominated by a single speaker. A probabilistic interpretation of CASA was proposed by Wang and Brown [6], and applied by Vincent and Plumbley [7], who proposed a Bayesian formulation of the separation based on a harmonic model of the sources.
The association of each time-frequency bin with a particular speaker is usually referred to as binary or hard masking. In the ideal case, where the time-frequency bin association of each source is perfectly known, the mask is usually referred to as the ideal binary mask (IdBM), and it was shown by Li and Wang [8] to be optimal in terms of source-to-noise ratio (SNR). Yilmaz and Rickard [9] showed that (ideal) binary masking enables the separation of up to ten sources from a single mixture.
Alternatively, a soft mask can be used, where each time-frequency bin is assumed to be associated with multiple signals (with different weights), and their relative spectral content in each time-frequency bin has to be estimated.
Blind source separation (BSS)-based approaches are commonly implemented via independent component analysis (ICA) and are widely used in multi-microphone speech separation. In the SCSS context, they are formulated as an underdetermined BSS problem [10, 11]. Zibulevsky and Pearlmutter [12] used the Fourier coefficients to represent speech signals and utilized their sparseness for separation.
Van der Kouwe et al. [13] conducted an experimental study that compared CASA [4] to multi-microphone BSS approaches (the joint approximate diagonalization of eigenmatrices (JADE) [14] and second-order blind identification (SOBI) [15] algorithms), and showed the advantage of the latter approaches, which utilize spatial information. However, in many important applications, such spatial information is unavailable.
Recent separation schemes applied non-negative matrix factorization (NMF), where the magnitude of the Fourier-transformed frames is factorized as the product of two non-negative matrices: the first comprises the basis functions, and the second encodes the weights of the corresponding basis functions. The matrix of basis functions is speaker-adaptive and is learned in a training phase. In the separation stage, the power spectral density (PSD) of the mixture is modeled by a linear combination of the basis functions of both speakers. The corresponding weight matrices are estimated and utilized to estimate the underlying sources. Virtanen [16] proposed an NMF approach that encourages PSD continuity and sparseness. Smaragdis [17] proposed a convolutive form of the NMF to model the time dependencies of the PSD. A semi-supervised real-time NMF algorithm was proposed by Joder et al. [18], where only one source is learned from training data and the other source is estimated based on the recent past of real-time data.
Benaroya et al. [19] expressed each source as a weighted sum of temporal Gaussian stationary processes, with positive, slowly time-varying weights. The PSD is approximated by the weighted sum of the variances of the Gaussian processes, yielding a non-negative representation, and the sources are recovered using the Wiener filter. Blouet and Cohen [20] extended this factorization [19] to separate speech from speech-music mixtures, where the weighted sum of processes approximates the short-time Fourier transform (STFT) of each source as complex Gaussian stationary processes. A sinusoidal modeling of the time domain was proposed by Mowlaee et al. [21], where a codebook is trained for each source and utilized in the separation stage. This work was extended in [22] to include a preceding stage that detects double-talk or single-talk frames, as well as a speaker identification system.
Machine learning approaches were applied to speech separation by Bach and Jordan [23]. They proposed to treat the separation problem in the time-frequency domain as a segmentation problem and to apply the relevant segmentation tools to the audio features extracted from the speech spectrogram.
Deep learning techniques are gaining popularity following their success in single-talker automatic speech recognition tasks. Essentially, the networks are trained on parallel sets of mixtures and their constituent target sources, and are optimized to predict the source of the target class, usually for each time-frequency bin. For example, in [24], the speakers are estimated by jointly optimizing a soft time-frequency mask layer with deep recurrent neural networks. However, these works often assume speaker-dependent models with few target speakers that are known during training. In addition, they usually work with a limited vocabulary and grammar. Yu et al. [25] proposed a speaker-independent method for multi-talker speech separation using permutation invariant training: it first determines the best output-target assignment and then optimizes the separation regression error given that assignment. Another speaker-independent technique is proposed in [26], where contrastive embedding vectors are assigned to each time-frequency region of the spectrogram, resulting in an implicit prediction of the segmentation labels of the target spectrogram from the input mixtures. Separation is obtained by optimizing K-means with respect to the unknown assignment. In [27], the authors propose to use an ensemble of deep neural networks and demonstrate the superiority of this structure over speech separation algorithms based on a single network.
A plethora of approaches utilize statistical models of speech signals. In [28], the PSD of each speech frame is computed by iterating between randomly drawing frequency bins from a mixture of multinomial distributions and scaling the histogram of the drawings. Given the probabilistic models, the minimum mean square error (MMSE) estimate of the desired source is derived. This method is essentially indistinguishable from methods applying NMF.
Gaussian mixture models (GMMs) and HMMs are extensively utilized in speech separation tasks. Kristjansson et al. [29] modeled the log-spectrum of each speaker by a GMM. They approximated the joint probability density function (p.d.f.) of the log-spectra of the speakers, given the log-spectrum of the mixture, by a normal distribution. Using this approximation, the posterior distributions of the log-spectrum of each speaker are computed, and the MMSE estimator is derived. GMM-based representations can be modified to improve the temporal modeling. Such an approach was proposed by Benaroya et al. [30], where the variances in the GMM are scaled by time-varying factors to incorporate source dynamics.
It is common to apply the GMM and HMM framework using the log-max approximation, which was first proposed by Nádas et al. [31] in the context of speech recognition and denoted MIXMAX. Burshtein and Gannot [32] reformulated MIXMAX by assuming that the log-spectrum of the clean speech segments can be modeled by GMMs, while the noise log-spectrum is normally distributed. In particular, the log-spectrum of the noisy speech is approximated by the maximum of the log-spectra of the speech and the noise. This result was extended by Yeminy et al. [33], who proposed a generalized formulation incorporating the correlation between adjacent frequency bins. Reddy and Raj [34] applied the MIXMAX model to the speech separation task, where both input signals were modeled by GMMs, and derived two estimators: the first optimizes the MMSE, and the second uses a soft mask, computed as the posterior probability of the observed log-spectrum matching the log-spectrum of the desired speech. Radfar and Dansereau [35] used a model similar to [34] with an additive error model, assuming that the error is zero-mean and normally distributed. The log-spectra of the speakers are recovered by optimizing the MMSE.
Roweis [36] modeled the log-spectra of the two speakers as the outputs of HMM processes. Using the log-max approximation, the log-spectrum of the mixture signal is approximated by the maximum of the corresponding outputs of the two underlying HMMs, and is modeled by an FHMM. The compound state of the log-spectral vector of the mixed signal consists of two states, corresponding to the two speakers. A variant of the Viterbi algorithm, denoted the factorial Viterbi algorithm, is used to reconstruct the log-spectra of the speakers, while a binary mask is applied to recover the sources. This approach was extended by Radfar et al. [37] to the case where the mixture of the two sources is a weighted sum of the noise-free speech signals. The gain factors are recovered by an iterative formulation of the FHMM, resulting in improved experimental results. However, in [37], the complexity of the algorithm is quadratic in the number of states, and the speaker-to-speaker ratio range must be an input of the algorithm. Hu and Wang [38] proposed an iterative separation scheme that estimates the gain factor without any prior knowledge. An HMM is trained for each source, where the utterances of the sources are scaled to have a known, equal energy. The mixing model is the log-max approximation, and the compound states are inferred using the factorial Viterbi algorithm. Each iteration of the algorithm consists of several steps: a separation phase, in which the sources are estimated; an assessment of the speaker-to-speaker ratio using the estimated sources; and, finally, an adaptation of the pre-trained models of the speakers and the mixture to the estimated gain factors. It was reported in [38] that the best results were obtained when the separation was based on MAP estimation.
Hershey et al. [39] also proposed to utilize both the FHMM and the log-max approximation. The FHMM encodes the grammar dynamics of the sources, when the structure and vocabulary of each unmixed speech signal are known in advance. At each grammar state, the acoustic dynamics of each source are encoded by a GMM based on the log-max approximation. The grammar dynamics capture long-term temporal components of the speech, relative to the acoustic dynamics, yielding improved experimental results that in some scenarios even outperform human listeners. Weiss and Ellis [40] proposed a similar model, where the speech characteristics are unknown a priori but are adapted iteratively. Ming et al. [41] proposed a data-driven technique that is also based on long-term temporal dynamics, intended for the general scenario in which the vocabulary and grammar are unknown. A combined FHMM and NMF approach was presented by Mysore et al. [42] and denoted the non-negative hidden Markov model (NHMM). For each source, several small spectral dictionaries are learned, and their temporal evolution is also learned via an HMM. The composite signal of the two sources is separated by applying a soft mask, generated by an expectation-maximization (EM) procedure that estimates the contribution of each source at each time-frequency bin. Good separation performance is reported for this NHMM technique. A related work is [43], where a model called the factorial scaled hidden Markov model (FSHMM) combines a Gaussian scaled mixture model and NMF. The FSHMM is utilized in [43] for speech separation and polyphonic audio representation.
The high dimensionality of the log-spectral vectors and the large number of states of the factorial model impose a high computational burden on the FHMM inference. Roweis proposed to mitigate this burden by detecting the pairs of states with the highest observation likelihood and limiting the factorial Viterbi calculations to their corresponding paths. In [38], beam search [44] is used to speed up the inference process. A band quantization approach that reduces the number of HMM states was proposed by Rennie et al. [45] to reduce the computational complexity.
An efficient belief propagation technique, rather than the exact factorial Viterbi algorithm, was proposed by Hershey et al. [39] for the temporal inference. They also presented two methods that together efficiently compute the acoustic likelihood of the observed mixed signal, which is required for the temporal inference. The first is called band quantization, and it approximates the acoustic GMM states that differ only in a few features: each Gaussian is approximated using a shared set of a smaller number of Gaussians in each frequency bin. The second technique is joint-state pruning, which utilizes the fact that several state pairs have significantly larger probabilities than the rest of the joint states. This feature stems from the sparseness of the model; those states can be used to explain the observations, rather than using all of the possible pairs of states. Rennie et al. [46] applied a similar model, able to separate up to four speakers, using loopy belief propagation and variational inference to reduce the inference complexity. The max-sum algorithm, which is also a belief propagation technique, is employed in [47] to track multiple pitch trajectories described by an FHMM. Reyes-Gomez et al. [48] proposed to group several frequency bins into frequency bands, such that each frequency band is modeled by an HMM.
Dimensionality reduction schemes were also applied to speech separation. Michalevsky et al. [49] applied the diffusion framework [50] to speaker identification, where a feature vector consisting of the mel-frequency cepstral coefficients (MFCC) and their first temporal derivative is used to parameterize the manifold of each speaker, by embedding the vectors into a lower-dimensional space. Samples are classified by a k-nearest neighbors (kNN) classifier applied to the embedding of the corresponding feature vector.
The contributions of this work are threefold.
First, we derive a novel non-iterative speech separation approach based on the diffusion framework [50] to compute HMM and FHMM models. A comprehensive set of experimental results exemplifies the applicability of the proposed method. It is shown that the proposed scheme provides separation results that compare with contemporary speech separation approaches.

Second, by analyzing the asymptotics of Markov random walks, we show that the proposed scheme allows us to directly estimate the states of the HMMs and FHMMs, without having to assume any underlying observation model or to apply EM-based iterative training. Hence, the estimation of the Markov states and transitions is decoupled from the estimation of the emission p.d.f.s and their corresponding parametric model. Thus, we propose two FHMM-based approaches that estimate the underlying HMM in the diffusion domain. The first provides a direct extension of the FHMM, where the underlying HMMs are computed in the diffusion domain, and the Gaussian observation models utilize the log-max approach applied in the original domain. We denote this approach the hybrid FHMM (HFHMM), as it utilizes both the temporal and diffusion domains. In the second approach, denoted the dual FHMM (DFHMM), we estimate the emission p.d.f.s in the diffusion domain, without assuming an explicit emission p.d.f. model. The underlying HMMs are computed similarly to the HFHMM approach.

Last, we propose to utilize the diffusion embedding as a nonlinear projection of the input mixture onto the manifolds spanning each of the speakers. Thus, we aim to utilize the diffusion embedding as a manifold-adaptive projection operator, where the states of each speaker are detected by an FHMM in its manifold. The HFHMM is experimentally shown to compare with previous results [36] and to outperform the DFHMM. The latter incurs a low computational cost and can be applied alongside other approaches [39] in the low-dimensional space to further reduce the computational complexity.
The remainder of this paper is organized as follows. Section 2 formulates the monaural speech separation problem of two equipower and pre-trained sources. The diffusion framework, the Nyström extension [51], and the HMM inference using diffusion maps are surveyed in Section 3. The proposed diffusion-domain speech separation schemes are presented in Section 4, where we detail the separation and training procedures and propose two alternative mask functions. The proposed approaches are experimentally verified in Section 5, and their performance is compared with contemporary schemes. The computational complexity of the HFHMM and DFHMM schemes is discussed in Section 6, while conclusions and future directions are discussed in Section 7.
2 Problem formulation
Let a[n] and b[n] denote the time-domain signals of the two speakers. Hence, \(\mathbf{a}_{\ell}\) is the log-spectrum of a[n], and \(\mathbf{b}_{\ell}\) is the respective log-spectrum of b[n].
The core assumption of our approach is that \(\mathbf{a}_{\ell}\) and \(\mathbf{b}_{\ell}\) are the observed outputs of two separate HMMs, one per speaker, denoted \(\text{HMM}_{a}\) and \(\text{HMM}_{b}\), respectively. Let \(S^{a}\) be the number of states in \(\text{HMM}_{a}\), and \(s_{\ell}^{a}\) the state of \(\text{HMM}_{a}\) at time frame ℓ. The probabilistic attributes of \(\text{HMM}_{a}\) are given by the initial probabilities \(\text{Pr}\left(s_{1}^{a}=s^{i,a}\right)=\pi_{i}^{a},\; i=1,\dots,S^{a}\), where \(s^{i,a}\) is the ith state of the first speaker. The transition probabilities are given by \(\text{Pr}\left(s_{\ell}^{a}=s^{j,a}\mid s_{\ell-1}^{a}=s^{i,a}\right)=p_{ij}^{a}\), and the emission p.d.f. is \(p\left(\mathbf{a}_{\ell}\mid s_{\ell}^{a}=s^{i,a}\right)=\mathcal{N}\left(\mathbf{a}_{\ell};\,\boldsymbol{\mu}_{i}^{a},\mathbf{Q}_{i}^{a}\right)\), i.e., normally distributed with mean vector \(\boldsymbol{\mu}_{i}^{a}\) and covariance matrix \(\mathbf{Q}_{i}^{a}\).
\(\text{HMM}_{b}\) is defined mutatis mutandis, such that there are \(S^{b}\) states with \(\text{Pr}\left(s_{1}^{b}=s^{i,b}\right)=\pi_{i}^{b}\), \(\text{Pr}\left(s_{\ell}^{b}=s^{j,b}\mid s_{\ell-1}^{b}=s^{i,b}\right)=p_{ij}^{b}\), and \(p\left(\mathbf{b}_{\ell}\mid s_{\ell}^{b}=s^{i,b}\right)=\mathcal{N}\left(\mathbf{b}_{\ell};\,\boldsymbol{\mu}_{i}^{b},\mathbf{Q}_{i}^{b}\right)\), where we assume \(S^{a}=S^{b}\) for the sake of simplicity.
The mixture process can be modeled by the FHMM [36], which comprises two underlying HMMs evolving independently over time, each corresponding to a single speaker. At each time instant ℓ, the observed sample \(\mathbf{z}_{\ell}\) depends on the states of \(\text{HMM}_{a}\) and \(\text{HMM}_{b}\), emitting the latent outputs \(\{\mathbf{a}_{\ell},\mathbf{b}_{\ell}\}\), respectively, such that \(\mathbf{z}_{\ell}=\xi(\mathbf{a}_{\ell},\mathbf{b}_{\ell})\), where ξ(·,·) is the mixing function.
where \(\boldsymbol {\mu }_{ij}=\max (\boldsymbol {\mu }_{i}^{a},\boldsymbol {\mu }_{j}^{b})\) and Q is the covariance matrix of the observation, assuming that \(\forall i,j\,\mathbf {Q}_{i}^{a}=\mathbf {Q}_{j}^{b}=\mathbf {Q}\).
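The referenced observation model (4) is not reproduced in this excerpt; under the log-max approximation it is a Gaussian whose mean is the elementwise maximum of the two state means. The following minimal Python sketch (all names are ours, for illustration only) evaluates these quantities:

```python
import numpy as np

def logmax_means(mu_a, mu_b):
    """Compound emission means mu_ij = max(mu_i^a, mu_j^b), taken
    elementwise per frequency bin; mu_a, mu_b are S x d state means."""
    return np.maximum(mu_a[:, None, :], mu_b[None, :, :])   # S x S x d

def log_emission(z, mu_ij, Q_inv):
    """log N(z; mu_ij, Q) up to an additive constant, for all S^2
    compound states at once; Q is shared across states as assumed above."""
    diff = z - mu_ij                                         # S x S x d
    return -0.5 * np.einsum('ijd,de,ije->ij', diff, Q_inv, diff)
```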
3 The diffusion framework and the Nyström extension
The diffusion framework is the core computational tool used in our work. The fundamentals of the diffusion framework [50] are detailed in Section 3.1, and the Nyström extension is described in Section 3.2. The asymptotic properties of random walks, which pave the way for a novel approach for HMM and FHMM estimation, are discussed in Section 3.3. A systematic approach for estimating the HMM parameters based on the diffusion framework is presented in Section 3.4.
3.1 Diffusion maps
where \(d_{ij}\triangleq \Vert \mathbf{x}_{i}-\mathbf{x}_{j}\Vert^{2}\). In addition, ε>0 is a scale factor such that if \(\mathbf{x}_{i}\) and \(\mathbf{x}_{j}\) are similar, \(w(\mathbf{x}_{i},\mathbf{x}_{j})\sim 1\), while \(w(\mathbf{x}_{i},\mathbf{x}_{j})\sim 0\) if they are dissimilar, implying that the corresponding affinity graph nodes are disconnected. The bandwidth ε quantifies the notion of similarity, as \(\mathbf{x}_{i}\) and \(\mathbf{x}_{j}\) have nonzero affinity for \(d_{ij}<3\varepsilon\), approximately.
where 1=λ _{0}≥λ _{1}≥λ _{2}≥… are the (right) eigenvalues of P, and {ψ _{ l }},{ϕ _{ l }} are the corresponding right and left eigenvectors, respectively. Due to the spectrum decay, the term p _{ t }(x _{ i },x _{ j }) in (7) can be well approximated by summing only a few elements.
for x _{ i }∈Ω. Note that ψ _{ l }(x _{ i }) for 1≤l≤m(t) is the ith coordinate of ψ _{ l }, the lth eigenvector of P.
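As a concrete illustration of this construction, the following Python sketch (our own, assuming a dense in-memory data set; all function names are illustrative) computes the affinity kernel, the row-stochastic Markov matrix P, and the diffusion coordinates from its leading non-trivial eigenvectors:

```python
import numpy as np

def diffusion_maps(X, eps, n_dims):
    """Embed the rows of X (L x d) with diffusion maps: RBF affinities,
    row normalization into a Markov matrix P, and coordinates given by
    the leading non-trivial right eigenvectors of P, weighted by their
    eigenvalues (diffusion time t = 1)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # d_ij = ||x_i - x_j||^2
    W = np.exp(-d2 / eps)                                # affinities w(x_i, x_j)
    deg = W.sum(axis=1)
    # P = D^{-1} W shares its spectrum with the symmetric D^{-1/2} W D^{-1/2},
    # so we diagonalize the symmetric form for numerical stability.
    Sym = W / np.sqrt(np.outer(deg, deg))
    lam, u = np.linalg.eigh(Sym)
    lam, u = lam[::-1], u[:, ::-1]                       # decreasing eigenvalues
    psi = u / np.sqrt(deg)[:, None]                      # right eigenvectors of P
    emb = lam[1:n_dims + 1] * psi[:, 1:n_dims + 1]       # skip trivial psi_0
    return emb, lam, psi
```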
3.2 Nyström extension
The weighting by 1/λ_{ l } implies that, due to the decaying spectrum of the Markov matrix, the Nyström extension should be applied only to a limited number of leading eigenvectors to ensure numerical stability.
Since our models are trained on a massive amount of data, the Nyström extension helps reduce the dimensions of P, thus reducing the computational burden and mitigating the storage requirements.
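Continuing the sketch above, a minimal out-of-sample extension consistent with the referenced formulas (11) and (12) might look as follows (an illustrative sketch rather than the paper's exact implementation):

```python
import numpy as np

def nystrom_extend(x_new, X, psi, lam, eps, n_dims):
    """Embed a point not seen during training: compute its normalized
    affinities to the training set X and average the training
    eigenvectors, weighting coordinate l by 1/lambda_l."""
    w = np.exp(-((X - x_new) ** 2).sum(-1) / eps)    # affinities to training points
    p = w / w.sum()                                  # shared denominator for all bins
    return (p @ psi[:, 1:n_dims + 1]) / lam[1:n_dims + 1]
```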
3.3 The asymptotics of random walks and their convergence properties
The diffusion maps scheme utilizes numerically induced random walks to analyze data sets, by forming the diffusion kernel and the corresponding discrete eigenfunctions {ψ _{ l }} computed with respect to the discrete domain (set) \(\Omega =\{\mathbf {x}_{i}\}_{i=1}^{L}\). This approach aims to study the intrinsic continuous manifold \(\widehat {\Omega }\) of the data via its sampled finite manifestation Ω.
In the continuous domain, systems with potential wells are characterized by their eigenfunctions, where the stable states of the underlying Markov process, corresponding to the potential wells, are points of high density in the metric eigenfunction space \(\{\widehat {\mathbf {\psi }}_{l}\}\). Nadler et al. [56] extended these classical results asymptotically to the discrete domain, showing that as the discrete eigenfunctions {ψ _{ l }} approximate the continuous ones, the points of high density in the discrete domain estimate those of the corresponding continuous domain.
This implies that a diffusion embedding computed as in Section 3.1 with respect to a finite and discrete set of points Ω can be used to approximate the states of a latent Markov walk over \(\widehat {\Omega }\), given its discrete manifestation Ω. In particular, since the intrinsic representation of the data and the corresponding Markov system is assumed to be low dimensional, it can be represented by a few leading eigenvectors. In speech analysis, the low dimensionality of the system stems from the multiple constraints induced on the human speech process by physical attributes (the anatomical structure of the mouth, tongue, etc.), as well as by social conventions.
Lafon and Lee [57] studied the quantization of the corresponding Markov chain and graph, aiming to derive a computational approach for recovering the stable metastates, and showed that the optimal quantization with respect to the diffusion distance as in (9) is given by the centroids computed by the K-means quantization scheme with a Euclidean distance metric in the diffusion domain. Their result stems from the equivalence between the optimal L _{2} distances (optimized by the K-means scheme) and the diffusion distances. Hence, given a set of L points, their quantization in the diffusion space optimally approximates the metastates of the latent Markov system [57], in terms of diffusion distance distortion.
3.4 Learning an HMM with diffusion maps
We propose to apply the diffusion framework to estimate an HMM by computing a reduced-dimensionality representation of the training set and estimating the metastates. This emphasizes the gist of our HMM estimation approach, as the Markov states and transition probabilities can be estimated nonparametrically in the diffusion domain, based on the asymptotics of Markov random walks. Our approach decouples the estimation of the Markov states and the transition probabilities from the estimation of the emission p.d.f.s, and avoids the iterative training of the classical EM-based approach.
Let the set of samples \(\Omega =\{\mathbf {x}_{\ell }\}_{\ell =1}^{L}\in \mathbb {R}^{d}\) be a sequence of HMM emissions. The HMM has S states with transition probabilities \(\{p_{ij}\}_{i,j=1}^{S}\), initial probabilities \(\{\pi _{i}\}_{i=1}^{S}\), mean vectors \(\{\boldsymbol {\mu } _{i}\}_{i=1}^{S}\), and covariance matrices \(\{\mathbf {Q}_{i}\}_{i=1}^{S}\).
The diffusion embedding of \(\{\mathbf {x}_{\ell }\}_{\ell =1}^{L}\) is denoted by \(\{\bar {\mathbf {x}}_{\ell }\}_{\ell =1}^{L}\in \mathbb {R}^{D}\), where D≪d. In order to identify the metastable states of the random process manifested by \(\{\mathbf {x}_{\ell }\}_{\ell =1}^{L}\), we quantize the embedding coordinates \(\{\bar {\mathbf {x}}_{\ell }\}_{\ell =1}^{L}\) into S metastates, denoted \(\{s^{i}\}_{i=1}^{S}\), using the K-means algorithm. The L _{2} distance in the embedding domain corresponds to the diffusion distance, and allows us to coarsen the corresponding Markov chain.
The computed metastates are utilized to estimate the parameters of the HMM. The transition probabilities p _{ ij } are estimated by running the training samples through the metastates \(\left \{ s^{i}\right \}_{i=1}^{S}\) and accumulating the transitions in an S×S transition histogram. For that, each sample in \(\{\bar {\mathbf {x}}_{\ell }\}_{\ell =1}^{L}\) is associated with its closest metastate in \(\{s^{i}\}_{i=1}^{S}\), in terms of the L _{2} norm. μ _{ i } is the average of the high-dimensional vectors in \(\{\mathbf {x}_{\ell }\}_{\ell =1}^{L}\) belonging to state s ^{ i }. The initial probabilities and the covariance matrices are estimated similarly.
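A compact sketch of this estimation procedure is given below; the K-means call and the occupancy-based initial probabilities are our simplifications of the text:

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def estimate_hmm(X, X_emb, S):
    """Quantize the embedded training sequence X_emb (L x D) into S
    meta-states and estimate the HMM parameters by counting: transitions
    into an S x S histogram, means/covariances from the associated
    high-dimensional samples X (L x d)."""
    _, labels = kmeans2(X_emb, S, minit='++')
    A = np.zeros((S, S))
    for i, j in zip(labels[:-1], labels[1:]):
        A[i, j] += 1                                   # transition histogram
    A /= np.maximum(A.sum(axis=1, keepdims=True), 1)   # row-normalize
    pi = np.bincount(labels, minlength=S) / len(labels)
    mu = np.stack([X[labels == s].mean(axis=0) for s in range(S)])
    Q = [np.cov(X[labels == s], rowvar=False) for s in range(S)]
    return pi, A, mu, Q
```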
An example demonstrating the learning procedure and its results is given in the Appendix.
4 Speech separation by diffusion maps
In this section, we introduce two novel speech separation schemes based on the diffusion framework. In both, we derive data-driven speech models for recovering latent state-space models, where S _{ a } and S _{ b } denote the first and second speakers, respectively, and the FHMM models are trained with respect to them.
4.1 Hybrid FHMM
We propose to train an HMM per speaker using the diffusion framework, following Section 3.4. Given each speaker's estimated metastates and the corresponding probabilities found in the training step, the observation (emission) p.d.f.s are computed in the log-spectral domain, using the log-max formulation in (4). With these emission p.d.f.s, the factorial Viterbi algorithm is carried out in the log-spectral domain to infer the state sequences of the speakers, as suggested in [36]. Finally, a masking mechanism is applied to the mixed signal based on the decoded states. We denote this method the HFHMM, as it utilizes both the (original) log-spectral domain and the (embedded) diffusion domain. The training of the model is carried out in the low-dimensional space, and the inference in the high-dimensional log-spectral domain. By testing the HFHMM (and comparing it to [36]), it will be easy to demonstrate that the new training procedure, at the very least, does not come with a performance penalty.
4.1.1 Training phase
Let u[n] be the discrete temporal samples forming the training set of S _{ a }. We start by computing the log-spectral frames \(\Omega _{a} =\{\mathbf {u}_{\ell }\}_{\ell =1}^{M}\in \mathbb {R}^{d}\), where each frame \(\mathbf {u}_{\ell }\) comprises d=K/2+1 frequency bins. The diffusion embedding of \(\{\mathbf {u}_{\ell }\}_{\ell =1}^{M}\) is denoted by \(\{\bar {\mathbf {u}}_{\ell }\}_{\ell =1}^{M}\in \mathbb {R}^{D}\), where D≪d. Throughout this paper, an overbar designates a term in the embedded space. In order to identify the metastable states of the random process manifested by \(\{\mathbf {u}_{\ell }\}_{\ell =1}^{M}\), we quantize the embedding coordinates \(\{\bar {\mathbf {u}}_{\ell }\}_{\ell =1}^{M}\) into S ^{ a }=S ^{ b }=S metastates, denoted \(\{s^{i,a}\}_{i=1}^{S}\), using the K-means algorithm. Although one could set a different number of states for each speaker, the same value was used for both, for the sake of simplicity. S is on the order of tens of states, due to the limited number of training data points. The transition probabilities \(p_{ij}^{a}\) and the log-spectral mean vectors \(\{\boldsymbol {\mu }_{i} ^{a}\}_{i=1}^{S}\) are estimated following Section 3.4, and a similar procedure is applied to v[n], the temporal samples of S _{ b }.
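For concreteness, the log-spectral analysis can be sketched as follows, with parameter values taken from Section 5.1 (the function name and the numerical floor inside the logarithm are ours):

```python
import numpy as np
from scipy.signal import stft

def log_spectra(u, K=256):
    """Log-spectral frames of a time-domain signal u: K-sample
    Hann-windowed frames with 50% overlap, keeping the d = K/2 + 1
    non-negative frequency bins."""
    _, _, U = stft(u, window='hann', nperseg=K, noverlap=K // 2)
    return np.log(np.abs(U) + 1e-12).T    # M x d, one row per frame
```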
4.1.2 Test phase (latent state estimation)
The decoding phase of the proposed HFHMM scheme is identical to that of [36]. Let z[n]=a[n]+b[n] be a test mixture signal, where a[n] and b[n] are the utterances of S _{ a } and S _{ b }, respectively, and \(\{\mathbf {z}_{\ell }\}_{\ell =1}^{N}\in \mathbb {R}^{d}\) its corresponding log-spectral vectors. The observation p.d.f. is modeled in the log-spectral domain utilizing the log-max approximation, similarly to (4). The underlying states of the speakers are estimated using the factorial Viterbi algorithm (see Algorithm 1), given the models inferred in the training phase. The separation is implemented by adaptively masking the speakers (see Section 4.3).
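Algorithm 1 itself is not reproduced in this excerpt; the sketch below shows a straightforward, unpruned factorial Viterbi decoder over the S² compound states under the log-max emission model of Section 2 (all names are illustrative):

```python
import numpy as np

def factorial_viterbi(Z, pi_a, pi_b, A_a, A_b, mu_a, mu_b, Q_inv):
    """MAP decoding of the two hidden state sequences from the mixture
    log-spectra Z (N x d). Transitions factorize over the two chains;
    the emission mean of compound state (i, j) is max(mu_i^a, mu_j^b)."""
    N, S = len(Z), len(pi_a)
    mu = np.maximum(mu_a[:, None, :], mu_b[None, :, :])          # S x S x d
    logA = np.log(A_a)[:, None, :, None] + np.log(A_b)[None, :, None, :]

    def emit(z):                                                  # log N(.) up to a constant
        diff = z - mu
        return -0.5 * np.einsum('ijd,de,ije->ij', diff, Q_inv, diff)

    V = np.log(pi_a)[:, None] + np.log(pi_b)[None, :] + emit(Z[0])
    back = np.zeros((N, S, S, 2), dtype=int)
    for n in range(1, N):
        scores = V[:, :, None, None] + logA                       # indexed [r, q, i, j]
        flat = scores.reshape(S * S, S, S)
        best = flat.argmax(axis=0)                                # best predecessor per (i, j)
        back[n, ..., 0], back[n, ..., 1] = np.unravel_index(best, (S, S))
        V = flat.max(axis=0) + emit(Z[n])
    i, j = np.unravel_index(V.argmax(), (S, S))
    path = [(i, j)]
    for n in range(N - 1, 0, -1):                                 # backtrack
        i, j = back[n, i, j]
        path.append((i, j))
    return path[::-1]                                             # [(s^a, s^b)] per frame
```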
4.2 Dual FHMM
In the second proposed approach, we derive a novel formulation for estimating the emission (observation) p.d.f.s of the speech mixture directly in the diffusion domain, as opposed to the HFHMM scheme. The metastates of the underlying Markov processes (modeling the speakers) are estimated by K-means-based training in the diffusion domain, as in the previous section.
We propose to synthesize an artificial mixture signal consisting of randomly combined training segments of both speakers. Two FHMMs are trained in two different diffusion domains, one FHMM per speaker. Since each diffusion domain is based on the segments of one particular speaker, it is best adapted to that speaker. Denote these models FHMM _{ a } and FHMM _{ b }, respectively. In the test phase, we process the input data with both models, where at each time segment, FHMM _{ a } is used to infer the latent state of S _{ a }, and FHMM _{ b } infers the state of S _{ b }.
4.2.1 Training phase
We detail the computation of FHMM _{ a }, the FHMM defined in the diffusion embedding domain of S _{ a }; the procedure is applied to FHMM _{ b } mutatis mutandis. In general, in order to derive an FHMM, two quantities need to be estimated: first, the states and the corresponding transition probabilities for each speaker (Markov process), and second, the observation p.d.f. associating an input measurement with the underlying Markov states. The metastates of S _{ a }, i.e., \(\left \{ s^{i,a}\right \}_{i=1}^{S}\), the metastates of S _{ b }, \(\left \{ s^{i,b}\right \}_{i=1}^{S}\), and the corresponding S×S transition matrices are computed similarly to Section 4.1.1.
In order to find the observation p.d.f. of a mixture signal, we first embed \(\{\mathbf {u}_{\ell }\}_{\ell =1}^{M}\), the training set of S _{ a }, to yield \(\{\bar {\mathbf {u}}_{\ell }\}_{\ell =1}^{M}\) and the corresponding eigenvalues \(\{{\lambda _{i}^{u}}\}_{i=1}^{D}\). Similarly, \(\{\bar {\mathbf {v} }_{\ell }\}_{\ell =1}^{M}\) is the embedding of the training sequences of S _{ b } into its own diffusion domain.
In contrast to the HFHMM, where all states share a single covariance matrix (in the high-dimensional domain), in the DFHMM we chose to define a distinct covariance matrix (in the low-dimensional space) for each state, so that the algorithm is as general as possible.
4.2.2 Latent state estimation
In the test phase, a mixed utterance z[n]=a[n]+b[n] is measured, where a[n] and b[n] are the (unknown) separate speech signals. The latent states corresponding to z[n] are estimated by embedding \(\mathbf {z}_{\ell }\) into the two diffusion spaces, yielding \(\{\bar {\mathbf {z}}_{\ell }^{a}\}_{\ell =1}^{N}\) and \(\{\bar {\mathbf {z}}_{\ell }^{b}\}_{\ell =1}^{N}\), and applying the embedded-domain FHMMs, namely FHMM _{ a } and FHMM _{ b }, to \(\{\bar {\mathbf {z}}_{\ell }^{a}\}_{\ell =1}^{N}\) and \(\{\bar {\mathbf {z}}_{\ell }^{b}\}_{\ell =1}^{N}\), respectively. Each FHMM is used to infer the latent state of the speaker used for its own embedding. Hence, we use FHMM _{ a } only to recover the state sequence of S _{ a }, while discarding the state sequence obtained for S _{ b }. The state sequences are recovered using the factorial Viterbi algorithm with the parameters of FHMM _{ a }; it is identical to Algorithm 1, with \(\bar {\mathbf {z}}_{\ell }^{a}\) substituting for \(\mathbf {z}_{\ell }\). Similarly, FHMM _{ b } is used to estimate the latent states of S _{ b }, i.e., using Algorithm 1 with \(\bar {\mathbf {z}}_{\ell }^{b}\) instead of \(\mathbf {z}_{\ell }\).
The gist of this approach, as can be deduced from Section 3.3, is that each embedding space (that of FHMM _{ a } or FHMM _{ b }) encodes the speech attributes of the respective speaker, and hence best estimates the latent states of the corresponding speaker.
4.3 Masking
In this estimator, the log-spectral content of the weaker source is not attenuated as in (19), but synthesized according to the estimated HMM. This masking was shown by Radfar and Dansereau [35] to correspond to the MMSE estimator given a zero model error. Recovering the log-spectrum of S _{ b } is carried out mutatis mutandis.
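Since (19) and (20) are not reproduced in this excerpt, the following rough sketch only mirrors the verbal description: the hard mask floors the bins assigned to the interfering speaker to m₀, while the soft variant synthesizes them from the decoded state mean (the function names and the dominance test on the state means are our assumptions):

```python
import numpy as np

def recover_a_hard(z, mu_i_a, mu_j_b, m0=-8.0):
    """Hard-mask sketch: keep the mixture bins where speaker a's decoded
    state mean dominates, floor the rest to m0 (cf. Section 5.1)."""
    return np.where(mu_i_a >= mu_j_b, z, m0)

def recover_a_soft(z, mu_i_a, mu_j_b):
    """Soft-variant sketch: bins masked by speaker b are synthesized from
    speaker a's estimated state mean instead of being floored."""
    return np.where(mu_i_a >= mu_j_b, z, mu_i_a)
```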
5 Experimental results
The proposed HFHMM and DFHMM schemes were experimentally verified by studying common state-of-the-art speech separation tasks. The quality of the results is evaluated using both objective criteria and (informal) listening tests. The proposed schemes are compared to the separation scheme proposed by Roweis [36] (with both hard and soft masks), the iterative FHMM-based estimator by Hu and Wang [38], and the MIXMAX estimator by Radfar and Dansereau [35].
5.1 Experimental setup
A training set of 450 noiseless sentences per speaker, drawn from the speech separation challenge [58], is used. Each sentence is 1–2 s long and was downsampled from 25 kHz to 8 kHz to shorten the running time of the code. The STFT was computed using 256-sample-long frames with an overlap of 128 samples between successive frames (50% overlap). Consequently, each log-spectral STFT feature vector is 129 coefficients long, and a Hann window was used in both the analysis and synthesis stages. On average, the training set of each speaker consisted of 55,000 log-spectral vectors.
The application of the diffusion embedding to the training set is carried out in two steps. First, 30 random sentences per speaker are embedded by extracting the eigenvectors of the Markov matrix defined by the diffusion framework. In the second stage, the remaining 420 sentences are embedded by applying the Nyström extension. This embedding scheme was chosen due to complexity and memory considerations.
From the complexity aspect, the dimensionality of the Markov matrix determines the number of operations of the DFHMM in the test phase. The embedding of the mixed signal involves the Nyström extension, which is calculated via (11) and is thus affected proportionally by the dimensionality of the Markov matrix. It can also be deduced from (12) that the samples creating the Markov matrix, used for calculating the weight functions measuring the graph connectivity, must be kept in memory.
An RBF kernel is used to compute the spectral embedding, with kernel bandwidth ε. In general, a kernel bandwidth that is too large can result in HMM states that are almost identical, since all the data points are fully connected. An excessively small ε might result in a model consisting of mostly disconnected graph nodes, with an increased number of states, which might be computationally intractable. A kernel bandwidth ε∼110 was found to be a good compromise, as it retains 5% of the edges connected. This is a common approach used in previous works on diffusion-based embedding [59], where the embedding was shown to be robust to the kernel bandwidth.
Table 1 Tested speakers: the pairs of speakers used for testing each algorithm, each pair contributing 15 sentences

Male+male   Male+female   Female+female
1+32        14+25         15+20
14+30       19+20         18+29
19+28       26+34         22+33
26+27       32+23         16+31
When generating the mixed signal w[n] for the DFHMM scheme (refer to (14)), each of the signals u[n] and v[n] was created by concatenating utterances from the database in a random order. This implementation stems from the unique structure of the utterances. Each sentence is composed of six words, ordered in the following manner: command, color, preposition, letter, number, and adverb. For example, a valid sentence is “bin blue at Z three please”. Each component of the utterances has a finite set of possible values. For instance, the command word can only be one of the following: “bin”, “lay”, “place”, or “set”. If u[n] and v[n] were summed without shuffling the utterances from the database, an undesired situation could occur in which the mixture of the signals depicts only states in which the speakers utter the same word.
Several variants of the proposed schemes were implemented to assess the influence of the various components on the performance. First, an ideal DFHMM (iDFHMM) with the true factorial states, instead of their estimated counterparts, is implemented by computing the embeddings and the metastates of the separate (unmixed) speakers.
Second, the hard mask (19) and the soft mask (20) are compared, and the corresponding schemes are denoted DFHMM-H (hard mask), DFHMM-S (soft mask), iDFHMM-H (idealized DFHMM with hard mask), etc.
Third, two training schemes are compared. The first uses the entire training set to form the Markov matrix P, while in the second, the matrix is based on 30 sentences only, and the Nyström extension is used to embed the remaining sentences. These variants are denoted HFHMM-HE and HFHMM-SE for the exact embedding, and HFHMM-HN and HFHMM-SN for the procedure that utilizes the Nyström extension. Only the Nyström-extension-based training is used in the DFHMM scheme, as detailed earlier.
The proposed approaches are compared with contemporary state-of-the-art schemes: (1) the work of Roweis, using 70 HMM states per speaker, which are inferred by the EM procedure; the HMMs are used to define an FHMM, as detailed in [36]; (2) the iterative algorithm by Hu and Wang [38], with the separation part implemented by inferring the FHMM states of the mixed signal and then applying MAP estimation. As recommended in [38], the FHMM comprises 256 Gaussians per speaker, and a maximum of 4 iterations is allowed. To reduce the computational complexity, the beam search uses the 16 most likely preceding state pairs.
The MIXMAX estimator by Radfar and Dansereau [35] uses 512 mixtures per GMM, trained on the same 450 sentences as the HFHMM and DFHMM schemes. Such a high-dimensional GMM per speaker imposes a heavy computational burden. Therefore, we separated the mixed signals using only the most probable state pair [35]; as indicated by the authors, this procedure achieves scores comparable to those of the full estimation. Finally, in order to reduce some of the artifacts and distortions of the hard mask, we set m _{0}(k)=−8, ∀k, for the algorithm proposed by Roweis [36] and for the HFHMM and DFHMM schemes (when the hard mask is applied). This value was chosen to reflect the average level of low-power time-frequency bins; it achieved the best balance between speech intelligibility and separation performance.
5.2 Figures of merit
In order to quantify the performance of the proposed separation schemes, we utilized the source-to-interference ratio (SIR), source-to-distortion ratio (SDR), and source-to-artifacts ratio (SAR) criteria, proposed by Vincent et al. [60] and implemented as a Matlab toolbox [61]. The SIR measures the attenuation of the interference with respect to the desired speech signal, and the SAR evaluates the level of artifacts (e.g., musical noise) in the processed signal. The SDR is the desired signal level with respect to the total contribution of all the other distortion factors. The SDR and SAR criteria are informative when a hard mask is applied. The outcome of the algorithm was also assessed by informal listening tests.
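The same criteria are also available in the Python port mir_eval; a hypothetical evaluation call (the paper itself used the Matlab toolbox [61]) would look like:

```python
import numpy as np
import mir_eval

# Placeholder signals; in practice, a and b are the reference utterances
# and a_hat, b_hat the separated estimates (equal-length 1-D arrays).
rng = np.random.default_rng(0)
a, b = rng.standard_normal(8000), rng.standard_normal(8000)
a_hat, b_hat = a + 0.1 * b, b + 0.1 * a   # toy "separated" outputs

sdr, sir, sar, perm = mir_eval.separation.bss_eval_sources(
    np.stack([a, b]), np.stack([a_hat, b_hat]))
print(f"SDR={sdr}, SIR={sir}, SAR={sar}, perm={perm}")
```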
5.3 Results
5.3.1 HFHMM
Quantitative results
Table 2 SIR, SAR, and SDR scores (dB) per mixture type

                   Male+female (male)   Male+female (female)   Male+male          Female+female      Average
Algorithm          SIR   SAR   SDR      SIR   SAR   SDR        SIR   SAR   SDR    SIR   SAR   SDR    SIR   SAR   SDR
Hu and Wang [38]   16.2  7.3   6.5      15.3  8.0   7.0        12.6  6.0   4.6    11.6  5.5   4.0    14.0  6.7   5.5
Roweis-H           18.8  5.4   5.0      15.8  6.1   5.3        10.8  3.3   1.8    9.3   2.6   0.9    13.7  4.3   3.2
Roweis-S           13.0  7.2   5.8      12.6  7.2   5.7        7.7   5.5   2.6    6.5   4.8   1.7    9.9   6.2   3.9
HFHMM-HE           19.4  5.4   5.1      16.0  6.2   5.4        11.1  3.3   1.8    9.9   3.0   1.3    14.1  4.5   3.4
HFHMM-SE           13.2  7.2   5.9      12.5  7.2   5.7        7.7   5.5   2.6    6.9   5.2   2.1    10.1  6.3   4.1
HFHMM-HN           16.9  4.7   4.1      14.4  5.3   4.4        10.6  3.1   1.6    6.4   1.9   0.8    12.1  3.7   2.3
HFHMM-SN           12.1  6.4   5.0      11.7  6.3   4.8        7.5   5.2   2.4    4.5   4.0   0.0    8.9   5.5   3.0
DFHMM-H            13.7  5.0   4.1      15.5  4.6   4.0        9.4   2.7   0.7    7.0   4.0   0.4    11.4  3.9   2.1
DFHMM-S            10.8  6.4   4.7      12.7  5.6   4.5        6.7   4.8   1.5    5.6   4.2   0.4    8.9   5.2   2.3
MIXMAX             15.6  8.6   7.6      15.1  8.6   7.4        10.7  6.8   4.8    9.7   6.3   4.1    12.3  7.6   6.0
5.3.2 DFHMM
The results of applying a soft mask to the male-female mixture are similar to those of the HFHMM. As expected, the SAR and SDR measures indicate that the DFHMM-S yields lower distortion levels compared with the DFHMM-H scheme. However, the SIR measure deteriorates. This can be attributed to the higher level of residual interference, a consequence of the softer mask.
We also subjectively compared the DFHMM-H and the DFHMM-S. The male speaker was recovered by the DFHMM-H without noticeable artifacts. However, the separated female speech sometimes sounds disrupted when a hard mask is used, and applying a soft mask resolved this artifact. Similar trends were observed for the male-male combination.
Application of the DFHMM-H resulted in audible artifacts, which are evident in the low SAR and SDR levels. As in the male-female mixture, applying the DFHMM-S improves the results, and the ideal estimators, iDFHMM-H and iDFHMM-S, respectively, outperform their non-ideal counterparts. We attribute this to the possible overlap of the metastates of speakers of the same gender. The female-female mixtures exhibit results similar to the male-male case.
We further study the effect of the overlapping spectral components by depicting the hard mask of the female-female mixtures, obtained by the DFHMM-H scheme, in Fig. 8. The white regions of the mask correspond to the time-frequency bins where S _{ b } is estimated to have the higher spectral content. As most of the mask is white, this indicates that S _{ b } is the dominant speaker in the segment. We fail to identify S _{ a } accurately due to the overlapping spectral content of the same-gender speakers, especially in the lower frequency band.
5.3.3 Comparison with competing algorithms
The comparative study is summarized in Table 2. For the male speaker in the male-female mixture, the HFHMM-HE outperforms the other estimators with respect to the SIR measure. However, it obtains lower SAR and SDR than the MIXMAX algorithm. Hu and Wang, Roweis-S, and HFHMM-SE also obtain good SAR scores, but worse than the MIXMAX. The HFHMM-HE has the best SIR result for the female speaker as well, with the DFHMM-H and Roweis-H scoring below it. The SAR and SDR of the MIXMAX are again superior to the respective measures obtained by the competing algorithms. The algorithm of Hu and Wang also exhibits satisfactory SAR and SDR, although still inferior to the scores obtained by the MIXMAX.
For the male-male mixture, the best SIR result is obtained by the Hu and Wang algorithm. The second-best results are obtained by the HFHMM-HE, and then by Roweis-H, HFHMM-HN, and the MIXMAX, respectively, with similar performance. The MIXMAX scores the best results in the SAR and SDR measures. The algorithm of Hu and Wang demonstrates the best SIR measures for the female-female mixtures as well. Again, for these mixtures, the MIXMAX gains the highest SAR score and shares the best SDR score with the Hu and Wang algorithm. However, the HFHMM-SE and Roweis-S also obtain good SAR results.
Looking at the overall performance, described in the three right-hand-side columns of Table 2, it follows that the HFHMM-HE and the Hu and Wang iterative algorithm obtained the best SIR. The HFHMM-SE and Hu and Wang also demonstrate reasonable SAR. However, the best SAR and SDR performance was achieved by the MIXMAX algorithm. It is also indicated that using the Nyström extension leads to degraded performance, which might explain the relatively disappointing scores of the DFHMM schemes.
Informal listening tests of all estimators and scenarios demonstrate that there is considerable room for improvement. While we notice a slight advantage of the MIXMAX and Hu and Wang algorithms over the proposed algorithms, we claim that the performance differences are rather marginal. Several examples can be found on our website^{3}.
Analyzing both the objective results and the informal listening tests, we also observe that better results are obtained for the female-male mixtures, as compared with the female-female and male-male mixtures. This may be attributed to the higher spectral content overlap of the latter two.
where \(\hat {\mathbf {a}}_{\ell }^{ij}\) is the estimate of \(\mathbf {a}_{\ell }\) given \(\mathbf {z}_{\ell }\) and \(\left (s_{\ell }^{a}=s^{i,a},s_{\ell }^{b}=s^{j,b}\right) \). The iterative algorithm of Hu and Wang also utilizes a more sophisticated soft-masking procedure (compared with the procedure discussed in Section 4.3) and hence yields good SAR and SDR. The iterative adaptation of the pre-trained HMMs of the speakers might explain the good SIR performance of the Hu and Wang algorithm.
6 Computational complexity of the DFHMM
One of the main attributes of the DFHMM algorithm is its computational efficiency (with respect to [36, 38]), due to the use of the low-dimensional embedding. The HFHMM has the same complexity as [36], since the two differ only in the training stage.
The application of the DFHMM algorithm consists of the following steps: spectral analysis and logarithm calculation, the Nyström extension, the factorial Viterbi algorithm, filtering (masking), and spectral synthesis. The procedure in [36] is similar, with the Nyström extension omitted. Another difference is the dimensionality of the factorial Viterbi algorithm: in the DFHMM, it is applied in the (low-dimensional) embedded domain of the mixed signal, whereas in [36] it is applied in the (high-dimensional) log-spectrum domain.
It therefore suffices to analyze the computational requirements of the Nyström extension for the DFHMM, and of the factorial Viterbi algorithm, in order to compare the computational requirements of both techniques. The number of HMM states for each speaker is S. The analysis refers to a single log-spectral vector of the mixed signal.
6.1 Nyström extension
It follows that the number of operations depends on the number of samples in \(\Omega =\{\mathbf {x}_{i}\}_{i=1}^{L}\), namely LD additions and multiplications.
The computation of the embedding for each point in the dataset x _{ i }∈Ω involves the computation of the kernel (5), requiring d multiplications and additions. The exponent can be computed using a lookup table (LUT). Hence, (5) is implemented by Ld multiplications and additions, and L LUT indexing operations. Finally, note that in (12), the denominator is the same for every x _{ i }∈Ω. Consequently, only additional L multiplications and additions are required.
The total number of operations of the Nyström extension is therefore L(d+D+1) multiplications and additions, and L LUT indexing operations.
6.2 Factorial Viterbi algorithm
This analysis relates to each time instant ℓ. The normalization of the Gaussian can be discarded, as it does not affect the maximization. The computational complexity can be further reduced by maximizing the logarithm of the p.d.f. Hence, only \(\mathbf {h}_{\ell ij}^{T}\mathbf {Q}^{-1}\mathbf {h}_{\ell ij},\,i,j=1,2,\ldots,S\) should be calculated, requiring S ^{2}×(d ^{2}+d) multiplications and additions. In the forward stage, the computation of v _{ ℓ }(i,j) for all states requires approximately S ^{2} additions. Note that the term \(\tilde {p}_{ri}^{a}+\tilde {p}_{qj}^{b}\) can be calculated in advance and consequently does not require additional calculations. The backward phase does not involve any additional calculations.
The DFHMM also applies the factorial Viterbi algorithm to decode the states; however, it is executed in a lower-dimensional space, D≪d, and it is applied twice, once for FHMM _{ a } and once for FHMM _{ b }. By substituting D for d, the total number of operations in the preprocessing stage is therefore reduced to 2×S ^{2}×(D ^{2}+D) multiplications and additions. The forward stage is independent of the dimensionality, and hence requires 2×S ^{2} additions. The major computational saving is thus attributed to the lower dimensionality of the embedded space.
Number of operations per output frame for the DFHMM, Roweis, MIXMAX, and the iterative separation by Hu and Wang (with I iterations)
Algorithm     Additions               Multiplications          LUT indexing
DFHMM         L(d+D+1) + 2S²(D²+D)    L(d+1) + D(L+2S²+D)      L
Roweis        S²(d²+d+1)              S²(d²+d)                 −
MIXMAX        3dS²                    2dS²                     −
Hu and Wang   (6d+W+2)S²I             (3d+W+1)S²I + dSI        2dSI
To demonstrate the differences in the computational burden of the algorithms, we show the results for the nominal values used for obtaining the results in Section 5: S=70, L=3000, d=129, and D=30. The parameters for [38] also include the beam width W=16 and the number of iterations I=4.
With the above parameter settings, the number of multiplications of the DFHMM algorithm is lower by close to two orders of magnitude compared with Roweis' algorithm [36], and the number of additions is lower by an order of magnitude. The number of operations required by [38] is double the number of operations required by the DFHMM.
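Plugging the nominal values into the expressions of the table above reproduces these ratios:

```python
# Operation counts per output frame, using the expressions from the table above.
S, L, d, D, W, I = 70, 3000, 129, 30, 16, 4

dfhmm_mult  = L*(d + 1) + D*(L + 2*S**2 + D)     # ~7.7e5
roweis_mult = S**2 * (d**2 + d)                  # ~8.2e7, ~100x more
dfhmm_add   = L*(d + D + 1) + 2*S**2*(D**2 + D)  # ~9.6e6
roweis_add  = S**2 * (d**2 + d + 1)              # ~8.2e7, ~9x more
hu_wang_ops = (6*d + W + 2)*S**2*I + (3*d + W + 1)*S**2*I + d*S*I
# hu_wang_ops ~2.3e7, about twice the DFHMM total of ~1.0e7
print(dfhmm_mult, roweis_mult, dfhmm_add, roweis_add, hu_wang_ops)
```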
The MIXMAX is an exception, since it only uses GMMs instead of HMMs and therefore avoids the computationally expensive factorial search.
The computational burden of the DFHMM can be further reduced by decreasing the value of D. This, however, might result in degraded performance; there is thus a tradeoff between the performance and the computational complexity of the DFHMM.
Further computational savings might be obtained by adopting a grammar model. As shown in [39], various methods for complexity reduction can be applied; these methods can be adopted by the proposed DFHMM algorithm to reduce the computational complexity without sacrificing performance. However, we preferred to leave these extensions for a later study and to focus in this paper only on dimensionality reduction.
7 Conclusions
In this work, we presented two novel approaches for estimating temporal FHMMs on manifolds based on the diffusion framework, which are non-iterative and rigorously motivated. The core of our approach is to utilize the asymptotics of the Markov random walk, induced on the graph representation of a high-dimensional data source, to decouple the estimation of the latent state space (states and transition probabilities) from the estimation of the emission (observation) p.d.f.s. We applied the proposed schemes to the task of separating two speakers using a single microphone, which provides a viable baseline to validate the effectiveness of the proposed schemes. In particular, we derived two FHMM-based separation schemes: the first estimates the HMM of each speaker in the diffusion domain and then utilizes the log-max approximation to infer the FHMM model; the second formulates the speech separation problem entirely in the embedded domain, as the derivation of two FHMM models, each adapted to the diffusion embedding of a particular speaker. The inferred states are used to construct masking functions to unmix the speech signal. Two masking schemes are presented, utilizing either soft or hard masks. We experimentally evaluated the proposed schemes using both objective metrics and informal subjective listening tests, for male-female, male-male, and female-female mixtures. The HFHMM scheme is shown to yield comparable and even slightly better performance than [36], while the DFHMM scheme exhibits a performance degradation, probably due to a suboptimal embedding (that uses the Nyström extension). The MIXMAX [35] and the iterative algorithm by Hu and Wang [38] had the best SAR and SDR scores, although the HFHMM and Roweis methods with soft masks obtained good SAR as well. The proposed HFHMM scheme obtained the best SIR scores on average among all tested algorithms, with an insignificant advantage over the Hu and Wang method. Informal listening tests demonstrate the insufficiency of the current solutions to fully recover the two speakers. Although the separation capabilities of the MIXMAX and Hu and Wang algorithms are slightly better than those of the proposed schemes, the differences are quite marginal according to our subjective evaluation. Several sound clips are available on our website.
Finally, the proposed training-based methods, and in particular the DFHMM scheme, can be considered a computationally efficient alternative for the inference of time series modeled by a large number of states. The performance of the proposed methods is comparable to contemporary methods, and we anticipate that a careful examination of the relation between the log-spectral domain and the embedded domain might lead to further improvements.
9 Appendix
10 Example of learning HMMs with diffusion
To demonstrate the process of learning an HMM with diffusion maps, consider a simple HMM with three states. The mean vectors of the HMM are $\mu_1 = -10\cdot\mathbf{1}_7$, $\mu_2 = 2\cdot\mathbf{1}_7$, and $\mu_3 = 0\cdot\mathbf{1}_7$, where $\mathbf{1}_D$ denotes the $D\times 1$ vector all of whose elements are one. Each state has a diagonal covariance matrix, with $Q_1 = 0.5\cdot I_7$ and $Q_2 = Q_3 = 0.1\cdot I_7$, where $Q_i$ is the covariance matrix of the $i$th state and $I_D$ is the $D\times D$ identity matrix.
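The following is a minimal, self-contained sketch of this example, not the paper's exact estimation scheme; the transition matrix A, the sequence length T, the median-distance bandwidth heuristic, and the use of k-means to recover the states are illustrative assumptions:

```python
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# --- Synthetic data: sample the 3-state HMM described above ---
D = 7
means = np.stack([-10 * np.ones(D), 2 * np.ones(D), np.zeros(D)])
variances = np.array([0.5, 0.1, 0.1])      # Q_1 = 0.5*I_7, Q_2 = Q_3 = 0.1*I_7
A = np.array([[0.90, 0.05, 0.05],          # hypothetical transition matrix,
              [0.05, 0.90, 0.05],          # chosen only for this illustration
              [0.05, 0.05, 0.90]])
T = 1000
states = np.zeros(T, dtype=int)
for t in range(1, T):
    states[t] = rng.choice(3, p=A[states[t - 1]])
X = means[states] + rng.normal(size=(T, D)) * np.sqrt(variances[states])[:, None]

# --- Diffusion map: RBF affinity and row-normalized Markov matrix ---
d2 = cdist(X, X, 'sqeuclidean')
eps = np.median(d2)                        # common median-distance bandwidth heuristic
W = np.exp(-d2 / eps)
P = W / W.sum(axis=1, keepdims=True)       # random walk on the data graph
eigvals, eigvecs = np.linalg.eig(P)
order = np.argsort(-eigvals.real)
# Drop the trivial constant eigenvector; embed with the next two coordinates.
psi = eigvecs[:, order[1:3]].real * eigvals[order[1:3]].real

# --- Decoupled estimation: states by clustering, transitions by counting ---
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(psi)
A_hat = np.zeros((3, 3))
for t in range(1, T):
    A_hat[labels[t - 1], labels[t]] += 1
A_hat /= A_hat.sum(axis=1, keepdims=True)
print(np.round(A_hat, 2))                  # matches A up to a permutation of labels
```

With the well-separated means above, the three clusters are trivially recovered in the embedded domain, and the counted transition matrix matches the true one up to a relabeling of the states; the emission means and covariances can then be estimated per cluster from sample statistics.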
Notes
Competing interests
The authors declare that they have no competing interests.
References
1. M Weintraub, A theory and computational model of auditory monaural sound separation (stream, speech enhancement, selective attention, pitch perception, noise cancellation). PhD thesis (Stanford University, Palo Alto, 1985).
2. TW Parsons, Separation of speech from interfering speech by means of harmonic selection. J. Acoust. Soc. Am. 60(4), 911–918 (1976).
3. GJ Brown, M Cooke, Computational auditory scene analysis. Comput. Speech Lang. 8(4), 297–336 (1994).
4. D Wang, GJ Brown, Separation of speech from interfering sounds based on oscillatory correlation. IEEE Trans. Neural Netw. 10(3), 684–697 (1999).
5. G Hu, D Wang, A tandem algorithm for pitch estimation and voiced speech segregation. IEEE Trans. Audio Speech Lang. Process. 18(8), 2067–2079 (2010).
6. D Wang, GJ Brown, Computational Auditory Scene Analysis: Principles, Algorithms, and Applications (Wiley-IEEE Press, Hoboken, 2006).
7. E Vincent, MD Plumbley, in Proceedings of Independent Component Analysis (ICA). Single-channel mixture decomposition using Bayesian harmonic models (Springer, Charleston, 2006), pp. 722–730.
8. Y Li, D Wang, On the optimality of ideal binary time-frequency masks. Speech Commun. 51, 230–239 (2009).
9. O Yilmaz, S Rickard, Blind separation of speech mixtures via time-frequency masking. IEEE Trans. Signal Process. 52(7), 1830–1847 (2004).
10. GJ Jang, TW Lee, A maximum likelihood approach to single-channel source separation. J. Mach. Learn. Res. 4, 1365–1392 (2003).
11. GJ Jang, TW Lee, in NIPS. A probabilistic approach to single channel blind signal separation (MIT Press, Vancouver, 2002), pp. 1173–1180.
12. M Zibulevsky, BA Pearlmutter, Blind source separation by sparse decomposition in a signal dictionary. Neural Comput. 13(4), 863–882 (2001).
13. AJW van der Kouwe, D Wang, GJ Brown, A comparison of auditory and blind separation techniques for speech segregation. IEEE Trans. Speech Audio Process. 9(3), 189–195 (2001).
14. JF Cardoso, High-order contrasts for independent component analysis. Neural Comput. 11, 157–192 (1999).
15. A Belouchrani, K Abed-Meraim, JF Cardoso, E Moulines, A blind source separation technique using second order statistics. IEEE Trans. Signal Process. 45, 434–444 (1997).
16. T Virtanen, Monaural sound source separation by nonnegative matrix factorization with temporal continuity and sparseness criteria. IEEE Trans. Audio Speech Lang. Process. 15, 1066–1074 (2007).
17. P Smaragdis, Convolutive speech bases and their application to supervised speech separation. IEEE Trans. Audio Speech Lang. Process. 15(1), 1–12 (2007).
18. C Joder, F Weninger, F Eyben, D Virette, B Schuller, in Latent Variable Analysis and Signal Separation, Lecture Notes in Computer Science, vol. 7191, ed. by F Theis, A Cichocki, A Yeredor, M Zibulevsky. Real-time speech separation by semi-supervised nonnegative matrix factorization (Springer, Tel Aviv, 2012), pp. 322–329.
19. L Benaroya, L McDonagh, F Bimbot, R Gribonval, in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Non-negative sparse representation for Wiener based source separation with a single sensor (IEEE, Hong Kong, 2003), pp. 613–616.
20. R Blouet, I Cohen, in Speech Processing in Modern Communication, ed. by I Cohen, J Benesty, S Gannot. Codebook approaches for single sensor speech/music separation (Springer, Berlin, 2009), pp. 183–198.
21. P Mowlaee, MG Christensen, SH Jensen, New results on single-channel speech separation using sinusoidal modeling. IEEE Trans. Audio Speech Lang. Process. 19(5), 1265–1277 (2011).
22. P Mowlaee, R Saeidi, MG Christensen, ZH Tan, T Kinnunen, P Franti, SH Jensen, A joint approach for single-channel speaker identification and speech separation. IEEE Trans. Audio Speech Lang. Process. 20(9), 2586–2601 (2012).
23. FR Bach, M Jordan, in NIPS. Blind one-microphone speech separation: a spectral learning approach (MIT Press, Vancouver, 2005), pp. 65–72.
24. PS Huang, M Kim, M Hasegawa-Johnson, P Smaragdis, Joint optimization of masks and deep recurrent neural networks for monaural source separation. IEEE/ACM Trans. Audio Speech Lang. Process. 23(12), 2136–2147 (2015).
25. D Yu, M Kolbæk, ZH Tan, J Jensen, in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Permutation invariant training of deep models for speaker-independent multi-talker speech separation (IEEE, Shanghai, 2016).
26. JR Hershey, Z Chen, J Le Roux, S Watanabe, in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Deep clustering: discriminative embeddings for segmentation and separation (IEEE, Shanghai, 2016), pp. 31–35.
27. XL Zhang, D Wang, A deep ensemble learning method for monaural speech separation. IEEE/ACM Trans. Audio Speech Lang. Process. 24(5), 967–977 (2016).
28. B Raj, P Smaragdis, in IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA). Latent variable decomposition of spectrograms for single channel speaker separation (IEEE, New York, 2005), pp. 17–20.
29. T Kristjansson, H Attias, J Hershey, in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), vol. 2. Single microphone source separation using high resolution signal reconstruction (IEEE, Montreal, 2004), pp. 817–820.
30. L Benaroya, F Bimbot, R Gribonval, Audio source separation with a single sensor. IEEE Trans. Audio Speech Lang. Process. 14(1), 191–199 (2006).
31. A Nádas, D Nahamoo, MA Picheny, Speech recognition using noise-adaptive prototypes. IEEE Trans. Acoust. Speech Signal Process. 37(10), 1495–1503 (1989).
32. D Burshtein, S Gannot, Speech enhancement using a mixture-maximum model. IEEE Trans. Speech Audio Process. 10(6), 341–351 (2002).
33. Y Yeminy, S Gannot, Y Keller, in Proceedings of the International Workshop on Acoustic Echo and Noise Control (IWAENC), vol. 2. Speech enhancement using a multidimensional mixture-maximum model (IWAENC, Tel Aviv, 2010).
34. AM Reddy, B Raj, Soft mask methods for single-channel speaker separation. IEEE Trans. Audio Speech Lang. Process. 15(6), 1766–1776 (2007).
35. MH Radfar, RM Dansereau, Single-channel speech separation using soft mask filtering. IEEE Trans. Audio Speech Lang. Process. 15(8), 2299–2310 (2007).
36. ST Roweis, One microphone source separation. Adv. Neural Inf. Process. Syst. 13, 793–799 (2001).
37. MH Radfar, W Wong, RM Dansereau, WY Chan, in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Scaled factorial hidden Markov models: a new technique for compensating gain differences in model-based single channel speech separation (IEEE, Dallas, 2010), pp. 1918–1921.
38. K Hu, D Wang, An iterative model-based approach to cochannel speech separation. EURASIP J. Audio Speech Music Process. 2013(1), 1–11 (2013).
39. JR Hershey, SJ Rennie, PA Olsen, TT Kristjansson, Super-human multi-talker speech recognition: a graphical modeling approach. Comput. Speech Lang. 24(1), 45–66 (2010).
40. RJ Weiss, DPW Ellis, Speech separation using speaker-adapted eigenvoice speech models. Comput. Speech Lang. 24(1), 16–29 (2010).
41. J Ming, R Srinivasan, D Crookes, A Jafari, CLOSE: a data-driven approach to speech separation. IEEE Trans. Audio Speech Lang. Process. 21(7), 1355–1368 (2013).
42. GJ Mysore, P Smaragdis, B Raj, in Latent Variable Analysis and Signal Separation: 9th International Conference, LVA/ICA 2010, St. Malo, France, September 27-30, 2010, Proceedings. Non-negative hidden Markov modeling of audio with application to source separation (Springer, St. Malo, 2010), pp. 140–148.
43. A Ozerov, C Févotte, M Charbit, in IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA). Factorial scaled hidden Markov model for polyphonic audio representation and source separation (IEEE, New York, 2009), pp. 121–124.
44. S Russell, P Norvig, Artificial Intelligence: A Modern Approach, 3rd edn. (Prentice Hall Press, Upper Saddle River, 2009).
45. S Rennie, P Olsen, J Hershey, T Kristjansson, in Workshop on Statistical and Perceptual Audio Processing (SAPA). The Iroquois model: using temporal dynamics to separate speakers (ISCA, Pittsburgh, 2006).
46. SJ Rennie, JR Hershey, PA Olsen, in IEEE Workshop on Automatic Speech Recognition and Understanding. Hierarchical variational loopy belief propagation for multi-talker speech recognition (IEEE, Merano, 2009), pp. 176–181.
47. M Wohlmayr, M Stark, F Pernkopf, A probabilistic interaction model for multipitch tracking with factorial hidden Markov models. IEEE Trans. Audio Speech Lang. Process. 19, 799–810 (2011).
48. MJ Reyes-Gomez, DPW Ellis, N Jojic, in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Multiband audio modeling for single-channel acoustic source separation (IEEE, Montreal, 2004).
49. Y Michalevsky, R Talmon, I Cohen, in Proceedings of the European Signal Processing Conference (EUSIPCO). Speaker identification using diffusion maps (IEEE, Barcelona, 2011), pp. 4029–4032.
50. RR Coifman, S Lafon, Diffusion maps. Appl. Comput. Harmon. Anal. 21(1), 5–30 (2006). Special issue on diffusion maps and wavelets.
51. WH Press, BP Flannery, SA Teukolsky, WT Vetterling, Numerical Recipes in C: The Art of Scientific Computing, 2nd edn. (Cambridge University Press, Cambridge, 1992).
52. JR Hershey, SJ Rennie, J Le Roux, Techniques for Noise Robustness in Automatic Speech Recognition (Wiley, Chichester, 2012), Chap. 12.
53. Y Keller, RR Coifman, S Lafon, SW Zucker, Audio-visual group recognition using diffusion maps. IEEE Trans. Signal Process. 58(1), 403–413 (2010).
54. S Lafon, Y Keller, RR Coifman, Data fusion and multicue data matching by diffusion maps. IEEE Trans. Pattern Anal. Mach. Intell. 28, 1784–1797 (2006).
55. B Nadler, S Lafon, RR Coifman, IG Kevrekidis, Diffusion maps, spectral clustering and reaction coordinates of dynamical systems. Appl. Comput. Harmon. Anal. 21(1), 113–127 (2006).
56. B Nadler, S Lafon, RR Coifman, IG Kevrekidis, Diffusion maps, spectral clustering and eigenfunctions of Fokker-Planck operators. Adv. Neural Inf. Process. Syst. 18, 955–962 (2005).
57. S Lafon, AB Lee, Diffusion maps and coarse-graining: a unified framework for dimensionality reduction, graph partitioning and data set parameterization. IEEE Trans. Pattern Anal. Mach. Intell. 28, 1393–1403 (2006).
58. M Cooke, TW Lee, The speech separation challenge (2006). http://laslab.org/SpeechSeparationChallenge/. Accessed 2012.
59. Y Keller, Y Gur, A diffusion approach to network localization. IEEE Trans. Signal Process. 59(6), 2642–2654 (2011). doi:10.1109/TSP.2011.2122261.
60. E Vincent, R Gribonval, C Févotte, Performance measurement in blind audio source separation. IEEE Trans. Audio Speech Lang. Process. 14(4), 1462–1469 (2006).
61. C Févotte, R Gribonval, E Vincent, BSS EVAL Toolbox User Guide (Rennes, 2005). http://bassdb.gforge.inria.fr/bss_eval/. Accessed 2012.
Copyright information
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.