
Part of the book series: Advances in Computer Vision and Pattern Recognition ((ACVPR))


Abstract

Both HMMs and n-gram models are usually created by estimating their parameters on some sample set. The trained models are then applied to the segmentation of new data, which by definition is not part of the training samples and never can be in practical applications. The characteristic properties of this test data can therefore be predicted only to a limited extent from the training material. In general, differences between training and testing material will thus occur that the statistical models cannot capture. Ultimately, this mismatch between training and testing conditions adversely affects the quality of the results achieved.

The common goal of model adaptation is therefore to compensate for those differences between the training and testing conditions of a recognition system that concern the statistical properties of the data. In this chapter, the most important techniques proposed for the adaptation of HMMs and n-gram models are presented.


Notes

  1. This effect can be partially avoided in practice by a suitable tying of parameters (cf. Sect. 9.2, p. 169).

  2. By means of suitable extensions of the MLLR method, the parameters of covariance matrices can be adapted, too [107]. However, the rather moderate improvements observed in practice usually do not justify the considerably increased effort.

  3. If all mean vectors are adapted by a single affine transformation, a single regression class comprising all densities is used (a minimal sketch of this per-class mean update follows these notes).

  4. Here c(z | w_1, …, w_n) denotes the number of occurrences of word z in the string of words w_1, …, w_n (cf. the cache sketch following these notes).
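To make notes 2 and 3 concrete, the following is a minimal sketch of how MLLR-style mean adaptation can be organized around regression classes; the function and variable names, as well as the toy transform, are illustrative assumptions rather than the book's notation. Each regression class r carries one affine transform (A_r, b_r) estimated from the adaptation data, and every Gaussian mean assigned to that class is mapped to A_r μ + b_r; with a single regression class, one transform adapts all means.

```python
# Minimal sketch of MLLR-style mean adaptation with regression classes.
# The transforms are assumed to be given (their ML estimation from
# adaptation data is omitted here); all names are illustrative.
import numpy as np

def adapt_means(means, classes, transforms):
    """Apply the class-specific affine transform to every mean vector.

    means      : (N, d) array of Gaussian mean vectors
    classes    : length-N array of regression-class indices
    transforms : dict r -> (A, b), A of shape (d, d), b of shape (d,)
    """
    adapted = np.empty_like(means)
    for i, mu in enumerate(means):
        A, b = transforms[classes[i]]
        adapted[i] = A @ mu + b   # mu_hat = A_r mu + b_r
    return adapted

# A single regression class comprising all densities (note 3):
# one hypothetical transform adapts every mean vector.
d = 3
means = np.random.randn(10, d)
transforms = {0: (1.05 * np.eye(d), np.full(d, 0.1))}
adapted = adapt_means(means, np.zeros(10, dtype=int), transforms)
```

With more adaptation data, the densities can be split into several regression classes, each receiving its own transform, so the adaptation becomes progressively more fine-grained.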
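The count c(z | w_1, …, w_n) from note 4 is the basic quantity of cache-based language model adaptation. The sketch below shows one common way such counts are used, under the assumption of a simple linear interpolation between a static n-gram probability and the cache's relative frequency; the interpolation weight and all names are illustrative, not the book's formulation.

```python
# Minimal sketch of a cache-based language model component:
# the cache probability of word z is its relative frequency in the
# recent word history, interpolated with a static model probability.
from collections import Counter

def cache_probability(z, history):
    """Relative frequency c(z | w_1, ..., w_n) / n within the history."""
    counts = Counter(history)
    return counts[z] / len(history) if history else 0.0

def adapted_probability(z, history, p_static, lam=0.9):
    """Linear interpolation of a static probability with the cache."""
    return lam * p_static + (1.0 - lam) * cache_probability(z, history)

history = "the model adapts to the data the model sees".split()
print(adapted_probability("model", history, p_static=0.01))
```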



Copyright information

© 2014 Springer-Verlag London

About this chapter

Cite this chapter

Fink, G.A. (2014). Model Adaptation. In: Markov Models for Pattern Recognition. Advances in Computer Vision and Pattern Recognition. Springer, London. https://doi.org/10.1007/978-1-4471-6308-4_11


  • DOI: https://doi.org/10.1007/978-1-4471-6308-4_11

  • Publisher Name: Springer, London

  • Print ISBN: 978-1-4471-6307-7

  • Online ISBN: 978-1-4471-6308-4

  • eBook Packages: Computer Science (R0)
