
A Fast Globally Supervised Learning Algorithm for Gaussian Mixture Models

  • Conference paper
  • First Online:
Web-Age Information Management (WAIM 2000)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 1846)


Abstract

In this paper, a fast globally supervised learning algorithm for Gaussian Mixture Models based on maximum relative entropy (MRE) is proposed. To reduce the computational cost of evaluating the Gaussian component probability densities, the concept of a quasi-Gaussian probability density is used to compute simplified probabilities. Four learning algorithms, namely maximum mutual information (MMI), maximum likelihood estimation (MLE), generalized probabilistic descent (GPD), and maximum relative entropy (MRE), are evaluated using a random-experiment approach. The experimental results show that MRE is a better alternative than GPD, MMI, and MLE in both accuracy and training speed.
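
As background for the terms used in the abstract, the following is a minimal sketch, in LaTeX notation, of the two standard quantities involved: the Gaussian mixture density and relative entropy (Kullback-Leibler divergence). The paper's quasi-Gaussian simplification and the exact form of its MRE training criterion are not reproduced here.

    % Gaussian mixture density with K components and mixing weights \pi_k
    p(\mathbf{x}) = \sum_{k=1}^{K} \pi_k \,
        \mathcal{N}(\mathbf{x} \mid \boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k),
    \qquad \pi_k \ge 0, \quad \sum_{k=1}^{K} \pi_k = 1

    % Relative entropy (Kullback-Leibler divergence) of P with respect to Q
    D(P \,\|\, Q) = \sum_{x} P(x) \log \frac{P(x)}{Q(x)}

The MRE criterion of the paper is defined in terms of this relative-entropy measure; how it is optimized, and how the quasi-Gaussian density enters as a simplification, is detailed in the full text.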




Copyright information

© 2000 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Ma, J., Gao, W. (2000). A Fast Globally Supervised Learning Algorithm for Gaussian Mixture Models. In: Lu, H., Zhou, A. (eds) Web-Age Information Management. WAIM 2000. Lecture Notes in Computer Science, vol 1846. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-45151-X_42


  • DOI: https://doi.org/10.1007/3-540-45151-X_42

  • Published:

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-67627-0

  • Online ISBN: 978-3-540-45151-8

  • eBook Packages: Springer Book Archive
