Fast progressive training of mixture models for model selection
Finite mixture models (FMM) are flexible models with varied uses such as density estimation, clustering, classification, modelling heterogeneity, model averaging, and handling missing data. The expectation maximization (EM) algorithm can learn maximum likelihood estimates of the model parameters. One prerequisite for using the EM algorithm is a priori knowledge of the number of components in the mixture model. However, this number is often unknown, and determining it has been a central problem in mixture modelling. Mixture modelling is therefore often a two-stage process: determining the number of mixture components and then estimating the parameters of the mixture model. This paper proposes fast training of a series of mixture models using progressive merging of mixture components, which enables a model selection algorithm to make an appropriate choice of model from the series. The paper also proposes a fast, data-driven approximation of the Kullback–Leibler (KL) divergence as a criterion for measuring the similarity of mixture components. We apply the proposed methodology to mixture modelling of a synthetic dataset, a publicly available zoo dataset, and two chromosomal aberration datasets, showing that model selection is efficient and effective.
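As a concrete illustration of the merging step, below is a minimal sketch, assuming multivariate Bernoulli mixture components (the 0–1 data setting considered in the paper) and using the closed-form symmetrised KL divergence between components in place of the paper's data-driven approximation; all function names are hypothetical and not taken from the paper.

```python
import numpy as np

def bernoulli_kl(p, q, eps=1e-9):
    """Closed-form KL divergence KL(p || q) between two multivariate
    Bernoulli components with independent dimensions, given their
    parameter vectors p and q. Probabilities are clipped away from
    0 and 1 for numerical stability."""
    p = np.clip(p, eps, 1 - eps)
    q = np.clip(q, eps, 1 - eps)
    return np.sum(p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q)))

def merge_closest_pair(weights, thetas):
    """Merge the two most similar components (smallest symmetrised KL),
    returning a mixture with one component fewer. The merged component's
    parameters are the mixing-weight-averaged parameters of the pair."""
    K = len(weights)
    best, pair = np.inf, (0, 1)
    for i in range(K):
        for j in range(i + 1, K):
            d = bernoulli_kl(thetas[i], thetas[j]) + bernoulli_kl(thetas[j], thetas[i])
            if d < best:
                best, pair = d, (i, j)
    i, j = pair
    w = weights[i] + weights[j]
    merged = (weights[i] * thetas[i] + weights[j] * thetas[j]) / w
    keep = [k for k in range(K) if k not in pair]
    return np.append(weights[keep], w), np.vstack([thetas[keep], merged])

# Example: three 4-dimensional Bernoulli components; the first two are
# similar and are the ones merged.
weights = np.array([0.5, 0.3, 0.2])
thetas = np.array([[0.90, 0.10, 0.80, 0.20],
                   [0.85, 0.15, 0.75, 0.25],
                   [0.10, 0.90, 0.20, 0.80]])
weights, thetas = merge_closest_pair(weights, thetas)
```

In this spirit, starting from a model with many components and repeatedly merging the closest pair (optionally with a few EM updates after each merge) yields a nested series of models from which a model selection criterion can choose.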
Keywords: Model selection · Mixture models · KL divergence · Training · 0–1 data
The current research is funded by the Helsinki Doctoral Programme in Computer Science—Advanced Computing and Intelligent Systems (Hecse) and the Finnish Center of Excellence for Algorithmic Data Analysis (ALGODAN).