On-line Estimation of Hidden Markov Model Parameters

  • Conference paper
  • Conference: Discovery Science (DS 2000)
  • Part of the book series: Lecture Notes in Computer Science (LNAI, volume 1967)

Abstract

In modeling signals such as speech with the Hidden Markov Model (HMM), it is often necessary to adapt not only to the inherent nonstationarity of the signal but also to changes of the sources (speakers) that produce it. The well-known Baum-Welch algorithm adjusts the HMM so as to optimize the fit between the model and the observed signal. In this paper we develop an algorithm, which we call the on-line Baum-Welch algorithm, by incorporating a learning rate into the off-line Baum-Welch algorithm. The algorithm proceeds in a series of trials. In each trial the algorithm produces an HMM M_t, then receives a symbol sequence w_t and incurs the loss -ln Pr(w_t | M_t), the negative log-likelihood of the HMM M_t evaluated at w_t. The performance of the algorithm is measured by its regret: the additional total loss it incurs over the total loss of a standard algorithm, which serves as the criterion for measuring relative loss. We take the off-line Baum-Welch algorithm as this standard, and the Gradient Descent algorithm as a competing on-line algorithm for comparison. Our experiments show that the on-line Baum-Welch algorithm performs well compared to the Gradient Descent algorithm. We carry out the experiments not only on artificial data but also on reasonably realistic data, produced by transforming acoustic waveforms into symbol sequences through vector quantization. The results show that the on-line Baum-Welch algorithm adapts to changes of speakers very well.
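
The abstract only sketches the trial protocol, so the following is a minimal illustrative sketch, not the authors' implementation: one plausible reading of "incorporating a learning rate into the off-line Baum-Welch algorithm" is to run a single Baum-Welch re-estimation on the newly received sequence w_t and blend the result into the current model with a learning rate eta. The function names, the blending rule, and the assumption that sequences have length at least 2 are all illustrative choices, not details taken from the paper.

```python
import numpy as np

def forward_backward(pi, A, B, w):
    # Scaled forward-backward pass for an HMM with initial distribution pi,
    # transition matrix A (N x N), and emission matrix B (N x K), run on a
    # symbol sequence w (list of ints, len(w) >= 2 assumed). Returns state
    # posteriors gamma, expected transition counts xi, and ln Pr(w | M).
    T, N = len(w), len(pi)
    alpha = np.zeros((T, N)); beta = np.zeros((T, N)); c = np.zeros(T)
    alpha[0] = pi * B[:, w[0]]
    c[0] = alpha[0].sum(); alpha[0] /= c[0]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, w[t]]
        c[t] = alpha[t].sum(); alpha[t] /= c[t]
    beta[T - 1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = (A @ (B[:, w[t + 1]] * beta[t + 1])) / c[t + 1]
    gamma = alpha * beta                      # posterior over states per step
    xi = np.zeros((N, N))                     # expected transition counts
    for t in range(T - 1):
        xi += np.outer(alpha[t], B[:, w[t + 1]] * beta[t + 1]) * A / c[t + 1]
    return gamma, xi, np.log(c).sum()

def online_baum_welch_trial(pi, A, B, w, eta):
    # One trial of the protocol in the abstract: incur loss -ln Pr(w | M_t),
    # then move the model toward the single-sequence Baum-Welch re-estimate.
    # The (1 - eta) / eta blend is an assumed form of the on-line update.
    gamma, xi, loglik = forward_backward(pi, A, B, w)
    pi_hat = gamma[0]
    A_hat = xi / xi.sum(axis=1, keepdims=True)
    B_hat = np.zeros_like(B)
    for t, sym in enumerate(w):
        B_hat[:, sym] += gamma[t]
    B_hat /= B_hat.sum(axis=1, keepdims=True)
    pi = (1 - eta) * pi + eta * pi_hat
    A = (1 - eta) * A + eta * A_hat
    B = (1 - eta) * B + eta * B_hat
    return pi, A, B, -loglik                  # -loglik is the per-trial loss
```

Under this reading, a larger eta lets the model track speaker changes faster at the cost of noisier estimates, while eta = 1 discards history entirely; summing the per-trial losses and subtracting the total loss of an off-line Baum-Welch run on the same data would give the regret described above.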




Copyright information

© 2000 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Mizuno, J., Watanabe, T., Ueki, K., Amano, K., Takimoto, E., Maruoka, A. (2000). On-line Estimation of Hidden Markov Model Parameters. In: Arikawa, S., Morishita, S. (eds) Discovery Science. DS 2000. Lecture Notes in Computer Science (LNAI), vol 1967. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-44418-1_13

  • DOI: https://doi.org/10.1007/3-540-44418-1_13

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-41352-3

  • Online ISBN: 978-3-540-44418-3

  • eBook Packages: Springer Book Archive
