
Improved Factorization of a Connectionist Language Model for Single-Pass Real-Time Speech Recognition

  • Łukasz Brocki
  • Danijel Koržinek
  • Krzysztof Marasek
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8502)

Abstract

Statistical Language Models are often difficult to estimate because of the so-called "curse of dimensionality". Connectionist Language Models overcome this problem by using a distributed word representation that is learned jointly with the neural network's synaptic weights. This work describes several improvements in the use of Connectionist Language Models for single-pass real-time speech recognition: word probabilities are evaluated independently of one another, and a novel mechanism for factorizing the lexical tree is introduced. Experiments comparing the improved model with a standard Connectionist Language Model on a Large-Vocabulary Continuous Speech Recognition (LVCSR) task show that the new method is roughly 33 times faster while word-level recognition accuracy degrades only marginally.
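
For readers unfamiliar with the two ingredients mentioned above, the sketch below illustrates (a) a generic feedforward connectionist language model in which the word embeddings are trained together with the other network weights, and (b) the standard background idea of distributing the model's output probabilities over a lexical (prefix) tree. All identifiers, dimensions, and the toy vocabulary are illustrative assumptions; the paper's improved factorization scheme is not reproduced here.

    # Minimal sketch of a feedforward connectionist language model with a
    # distributed word representation (hypothetical names and sizes).
    import numpy as np

    rng = np.random.default_rng(0)

    vocab = ["</s>", "the", "cat", "sat", "mat"]
    V, E, H, CONTEXT = len(vocab), 8, 16, 2      # vocab, embedding, hidden, context sizes

    # Distributed word representation: one row of C per word. In a connectionist
    # LM these embeddings are adjusted jointly with the other synaptic weights.
    C   = rng.normal(0, 0.1, (V, E))
    W_h = rng.normal(0, 0.1, (CONTEXT * E, H)); b_h = np.zeros(H)
    W_o = rng.normal(0, 0.1, (H, V));           b_o = np.zeros(V)

    def next_word_probs(context_ids):
        """P(w | history) for every word in the vocabulary."""
        x = np.concatenate([C[i] for i in context_ids])  # look up and concatenate embeddings
        h = np.tanh(x @ W_h + b_h)                       # hidden layer
        logits = h @ W_o + b_o
        e = np.exp(logits - logits.max())                # softmax normalisation
        return e / e.sum()

    # Lexical-tree factorisation (background idea only): the probability mass
    # assigned to a prefix node is the sum of the probabilities of all words
    # sharing that prefix, so the decoder can prune branches of the tree
    # before a word hypothesis is complete.
    def prefix_prob(prefix, probs):
        return sum(p for w, p in zip(vocab, probs) if w.startswith(prefix))

    probs = next_word_probs([vocab.index("the"), vocab.index("cat")])
    print(prefix_prob("s", probs))   # mass of the subtree of words starting with "s"

In an actual decoder the prefix probabilities would be cached at the tree nodes rather than recomputed per hypothesis; this sketch only shows where the language-model scores attach to the lexical tree.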

Keywords

Connectionist language model, real-time, single-pass, automatic speech recognition, lexical tree factorization



Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  • Łukasz Brocki (1)
  • Danijel Koržinek (1)
  • Krzysztof Marasek (1)

  1. Polish-Japanese Institute of Information Technology, Warsaw, Poland
