Acoustic feature extraction method for robust speaker identification
When the acoustic training and testing environments are mismatched, the performance of automatic speaker identification systems degrades significantly. This paper therefore proposes a robust feature extraction method for speaker recognition based on the gammatone filter. Instead of the traditional triangular filter banks, gammatone filter banks are used to simulate the auditory model of the human cochlea. Cube-root compression, equal-loudness pre-emphasis, and relative spectral (RASTA) filtering are incorporated into the robust feature extraction process. Simulation experiments are conducted with a Gaussian mixture model (GMM) recognition algorithm. The experimental results indicate that the proposed feature parameters are more robust and represent speaker characteristics better than the conventional mel-frequency cepstral coefficient (MFCC), cochlear filter cepstral coefficient (CFCC), and relative spectral perceptual linear prediction (RASTA-PLP) parameters.
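The front end described in the abstract (an ERB-spaced gammatone filter bank, cube-root compression of the band energies, and RASTA filtering of the per-channel trajectories) can be sketched as below. This is a minimal illustration, not the authors' implementation: the function names, frame sizes, filter count, and the 0.98 RASTA pole are illustrative assumptions, and the standard Glasberg–Moore ERB formula and 4th-order gammatone impulse response are used.

```python
import numpy as np

def erb(f_hz):
    """Equivalent rectangular bandwidth (Glasberg & Moore) at frequency f_hz."""
    return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

def erb_center_freqs(low_hz, high_hz, n_filters):
    """n_filters center frequencies equally spaced on the ERB-rate scale."""
    c = 9.26449 * 24.7  # EarQ * minBW constant (Slaney's convention)
    return np.exp(np.linspace(np.log(low_hz + c), np.log(high_hz + c), n_filters)) - c

def gammatone_ir(fc, fs, duration=0.025, order=4):
    """Gammatone impulse response t^(n-1) e^{-2 pi b t} cos(2 pi fc t), peak-normalized."""
    t = np.arange(int(duration * fs)) / fs
    b = 1.019 * erb(fc)  # bandwidth parameter tied to the ERB of the channel
    g = t ** (order - 1) * np.exp(-2 * np.pi * b * t) * np.cos(2 * np.pi * fc * t)
    return g / np.max(np.abs(g))

def gfcc_like_features(signal, fs, n_filters=32, frame_len=400, hop=160):
    """Cube-root-compressed gammatone filter-bank energies, one row per frame."""
    cfs = erb_center_freqs(100.0, 0.9 * fs / 2, n_filters)
    outputs = np.stack([np.convolve(signal, gammatone_ir(fc, fs), mode="same")
                        for fc in cfs])                     # (n_filters, n_samples)
    n_frames = 1 + (outputs.shape[1] - frame_len) // hop
    energies = np.empty((n_frames, n_filters))
    for i in range(n_frames):
        seg = outputs[:, i * hop: i * hop + frame_len]
        energies[i] = np.mean(seg ** 2, axis=1)             # band energy per frame
    return np.cbrt(energies)  # cube-root compression instead of the usual log

def rasta_filter(log_feat, pole=0.98):
    """RASTA band-pass filtering along time for each channel (applied in log domain)."""
    num = 0.1 * np.array([2.0, 1.0, 0.0, -1.0, -2.0])       # FIR part of H(z)
    out = np.zeros_like(log_feat)
    for ch in range(log_feat.shape[1]):
        fir = np.convolve(log_feat[:, ch], num)[: log_feat.shape[0]]
        for t in range(log_feat.shape[0]):                  # one-pole IIR part
            out[t, ch] = fir[t] + (pole * out[t - 1, ch] if t else 0.0)
    return out
```

The cube root mimics the compressive loudness response of the ear, while the RASTA filter suppresses slowly varying channel effects, which is the source of the method's robustness to training/testing mismatch.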
Keywords: Robust speaker identification · Gammatone filter banks · Feature extraction · RASTA · CMVN
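The evaluation described in the abstract scores each utterance against per-speaker GMMs and picks the maximum-likelihood speaker. A minimal numpy sketch of that decision rule, assuming diagonal-covariance GMMs with already-trained (here hand-set, hypothetical) parameters rather than EM training, might look like:

```python
import numpy as np

def gmm_loglik(X, weights, means, variances):
    """Total log-likelihood of frames X (T, D) under a diagonal-covariance GMM.

    weights: (K,), means/variances: (K, D) for K mixture components.
    """
    diff = X[:, None, :] - means[None, :, :]                        # (T, K, D)
    exponent = -0.5 * np.sum(diff ** 2 / variances, axis=2)         # (T, K)
    log_norm = -0.5 * np.sum(np.log(2 * np.pi * variances), axis=1) # (K,)
    log_comp = np.log(weights) + log_norm + exponent                # (T, K)
    m = log_comp.max(axis=1, keepdims=True)                         # log-sum-exp
    frame_ll = m.squeeze(1) + np.log(np.exp(log_comp - m).sum(axis=1))
    return frame_ll.sum()

def identify(X, speaker_models):
    """Return the speaker whose GMM maximizes the log-likelihood of X."""
    return max(speaker_models, key=lambda s: gmm_loglik(X, *speaker_models[s]))
```

For example, with two one-component models centered at different points in feature space, frames drawn near one center are assigned to that speaker; summing per-frame log-likelihoods over the utterance is the standard GMM identification rule the experiment relies on.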
The authors thank the authors of the cited references for their work, their colleagues for helpful comments, and the anonymous reviewers for suggestions that helped improve this paper.