Abstract
In this paper, a fast globally supervised learning algorithm for Gaussian mixture models (GMMs) based on maximum relative entropy (MRE) is proposed. To reduce the computational complexity of evaluating the Gaussian component probability densities, the concept of a quasi-Gaussian probability density is used to compute simplified probabilities. Four learning algorithms, namely the maximum mutual information (MMI) algorithm, maximum likelihood estimation (MLE), the generalized probabilistic descent (GPD) algorithm, and the maximum relative entropy (MRE) algorithm, are evaluated through random experiments. The experimental results show that MRE is a better alternative to GPD, MMI, and MLE in both accuracy and training speed.
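The abstract's central cost-saving idea is that evaluating the exact Gaussian component densities of a GMM is expensive, so a simplified "quasi-Gaussian" density is substituted. The paper's exact definition of the quasi-Gaussian density is not reproduced on this page, so the sketch below is only illustrative: it shows a standard one-dimensional GMM density, plus one *hypothetical* simplification (skipping the exponential for points far from the mean) to convey the flavor of the approach. All function names here are assumptions, not the authors' notation.

```python
import math

def gaussian_pdf(x, mean, var):
    """Exact Gaussian density N(x; mean, var)."""
    return math.exp(-(x - mean) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def quasi_gaussian_pdf(x, mean, var, cutoff=3.0):
    """Hypothetical simplified density: return 0 beyond `cutoff` standard
    deviations, avoiding the exponential for distant points. This is NOT
    the paper's quasi-Gaussian definition, just an illustration of the
    idea of trading exactness for cheaper evaluation."""
    z2 = (x - mean) ** 2 / var
    if z2 > cutoff ** 2:
        return 0.0
    return math.exp(-z2 / 2.0) / math.sqrt(2.0 * math.pi * var)

def gmm_pdf(x, weights, means, variances, component_pdf=gaussian_pdf):
    """Mixture density: weighted sum of component densities."""
    return sum(w * component_pdf(x, m, v)
               for w, m, v in zip(weights, means, variances))

# Two-component mixture (illustrative parameters)
weights = [0.4, 0.6]
means = [0.0, 3.0]
variances = [1.0, 2.0]

p_exact = gmm_pdf(1.0, weights, means, variances)
p_quasi = gmm_pdf(1.0, weights, means, variances, quasi_gaussian_pdf)
```

Within the cutoff, the hypothetical quasi-density here agrees exactly with the Gaussian; the saving comes only from points it discards early. The actual algorithm in the paper should be consulted for the real simplification.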
Copyright information
© 2000 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Ma, J., Gao, W. (2000). A Fast Globally Supervised Learning Algorithm for Gaussian Mixture Models. In: Lu, H., Zhou, A. (eds) Web-Age Information Management. WAIM 2000. Lecture Notes in Computer Science, vol 1846. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-45151-X_42
DOI: https://doi.org/10.1007/3-540-45151-X_42
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-67627-0
Online ISBN: 978-3-540-45151-8