
Finite Inverted Beta-Liouville Mixture Models with Variational Component Splitting

  • Chapter
Mixture Models and Applications

Part of the book series: Unsupervised and Semi-Supervised Learning (UNSESUL)

Abstract

The use of mixture models to statistically approximate data has been an interesting topic of research in unsupervised learning. Mixture models based on the exponential family of distributions have gained popularity in recent years. In this chapter, we introduce a finite mixture model based on the inverted Beta-Liouville distribution, which has a higher degree of freedom and can therefore provide a better fit for the data. We estimate the parameters within a variational learning framework, which reduces the computational complexity of the model. We handle model selection with a component splitting approach, an added advantage since it is performed within the same variational framework. We evaluate our model on challenging applications such as image clustering, speech clustering, spam image detection, and software defect detection.


Notes

  1. http://www.ci.gxnu.edu.cn/cbir/Dataset.aspx.

  2. http://www.cs.princeton.edu/cass/spam/.


Author information


Corresponding author

Correspondence to Kamal Maanicshah.


Appendix: Proof of Eqs. (10.23), (10.24), (10.25) and (10.26)

From Eq. (10.16) we can write the logarithm of the joint distribution as:

$$\displaystyle \begin{aligned} \ln p\big(\mathcal{X},\mathcal{Z}\big) = & \sum_{i=1}^N\sum_{j=1}^MZ_{ij}\Bigg[\ln\frac{\Gamma\big(\sum_{l=1}^{D}\alpha_{jl}\big)}{\prod_{l=1}^{D}\Gamma\big(\alpha_{jl}\big)} + \ln\frac{\Gamma(\alpha_j+\beta_j)}{\Gamma(\alpha_j)\Gamma(\beta_j)} + \sum_{l=1}^D\big(\alpha_{jl}-1\big)\ln X_{il} \\ &+\beta_j\ln\lambda_j + \Big(\alpha_j-\sum_{l=1}^D\alpha_{jl}\Big)\ln\Big(\sum_{l=1}^DX_{il}\Big)\\ &- \big(\alpha_j +\beta_j\big)\ln\Big(\lambda_j +\sum_{l=1}^DX_{il}\Big)\Bigg]\\ &+ \sum_{i=1}^N\Bigg[\sum_{j=1}^sZ_{ij}\ln\pi_j + \sum_{j=s+1}^MZ_{ij}\ln\pi_j^*\Bigg]-\big(M-s\big)\ln\Bigg[1-\sum_{k=1}^s\pi_k\Bigg]\\ &+\ln\frac{\Gamma\big(\sum_{j=s+1}^{M}c_{j}\big)}{\prod_{j=s+1}^{M}\Gamma\big(c_{j}\big)} +\sum_{j=s+1}^M\big(c_j-1\big)\ln\bigg[\pi_j^*\Big/\Big(1-\sum_{k=1}^s\pi_k\Big)\bigg]\\ &+ \sum_{j=1}^M\sum_{l=1}^D \Big[u_{jl}\ln \nu_{jl} - \ln\Gamma\big(u_{jl}\big) + \big(u_{jl} - 1\big)\ln\alpha_{jl} - \nu_{jl}\alpha_{jl}\Big]\\ &+ \sum_{j=1}^M \Big[p_{j}\ln q_{j} - \ln\Gamma\big(p_{j}\big) + \big(p_{j} - 1\big)\ln\alpha_{j} - q_{j}\alpha_{j}\Big]\\ &+ \sum_{j=1}^M \Big[g_{j}\ln h_{j} - \ln\Gamma\big(g_{j}\big) + \big(g_{j} - 1\big)\ln\beta_{j} - h_{j}\beta_{j}\Big]\\ &+ \sum_{j=1}^M \Big[s_{j}\ln t_{j} - \ln\Gamma\big(s_{j}\big) + \big(s_{j} - 1\big)\ln\lambda_{j} - t_{j}\lambda_{j}\Big] \end{aligned} $$
(10.57)

To derive the variational solution for each parameter, we take the expectation of this logarithm with respect to all the other parameters, which are assumed to be fixed. This is detailed in the following subsections.
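For reference, each subsection below applies the standard mean-field update: for a factorized posterior, the optimal factor for any block of parameters \(\Theta_s\) (here \(\mathcal{Z}\), \(\pi^*\), \(\alpha\), \(\beta\), or \(\lambda\)) satisfies the general relation below, of which Eqs. (10.66) and (10.69) are concrete instances:

$$\displaystyle \begin{aligned} \ln Q\big(\Theta_s\big) = \big\langle \ln p\big(\mathcal{X},\Theta\big)\big\rangle_{\Theta \ne \Theta_s} + \text{const} \end{aligned} $$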

1.1 Proof of Eq. (10.23): Variational Solution of Q(Z)

The logarithm of the variational solution \(Q(Z_i)\) obtained from the joint is given by:

$$\displaystyle \begin{aligned}{{}} \ln Q\big(Z_i\big) = & \sum_{j=1}^MZ_{ij}\Bigg[R_j + S_j+ \sum_{l=1}^D\big(\alpha_{jl}-1\big)\ln X_{il} +\beta_j\ln\lambda_j \\ &+ \Big(\alpha_j-\sum_{l=1}^D\alpha_{jl}\Big)\ln\Big(\sum_{l=1}^DX_{il}\Big) - \big(\alpha_j +\beta_j\big)T_{ij}\Bigg]\\ &+\Bigg[\sum_{j=1}^sZ_{ij}\ln\pi_j + \sum_{j=s+1}^MZ_{ij}\ln\pi_j^*\Bigg]\\ =& \sum_{j=1}^sZ_{ij}\bigg[\ln\pi_j +R_j + S_j+ \sum_{l=1}^D\big(\alpha_{jl}-1\big)\ln X_{il} +\beta_j\ln\lambda_j\\ &+ \Big(\alpha_j-\sum_{l=1}^D\alpha_{jl}\Big)\ln\Big(\sum_{l=1}^DX_{il}\Big) - \big(\alpha_j +\beta_j\big)T_{ij}\bigg]\\ &+\sum_{j=s+1}^MZ_{ij}\bigg[\langle\ln\pi_j^*\rangle +R_j + S_j+ \sum_{l=1}^D\big(\alpha_{jl}-1\big)\ln X_{il} +\beta_j\ln\lambda_j \end{aligned} $$
(10.58)
$$\displaystyle \begin{aligned} &+ \Big(\alpha_j-\sum_{l=1}^D\alpha_{jl}\Big)\ln\Big(\sum_{l=1}^DX_{il}\Big) - \big(\alpha_j +\beta_j\big)T_{ij}\bigg] \end{aligned} $$
(10.59)

where

$$\displaystyle \begin{aligned} R_j = \bigg\langle\ln\frac{\Gamma\big(\sum_{l=1}^{D}\alpha_{jl}\big)}{\prod_{l=1}^{D}\Gamma\big(\alpha_{jl}\big)}\bigg\rangle,\; S_j = \bigg\langle\ln\frac{\Gamma(\alpha_j+\beta_j)}{\Gamma(\alpha_j)\Gamma(\beta_j)}\bigg\rangle,\; T_{ij} = \bigg\langle\ln\Big(\lambda_j +\sum_{l=1}^DX_{il}\Big)\bigg\rangle \end{aligned} $$
(10.60)

\(R_j\), \(S_j\), and \(T_{ij}\) are intractable in the above equations. For this reason we use a second-order Taylor series approximation for \(R_j\) and \(S_j\) and a first-order Taylor series approximation for \(T_{ij}\); the resulting expressions are given in Eqs. (10.30), (10.31), and (10.32), respectively. It is worth noting that Eq. (10.58) is of the form:

$$\displaystyle \begin{aligned}{{}} \ln Q\big(\mathcal{Z}\big) =\sum_{i=1}^N\Bigg[ \sum_{j=1}^sZ_{ij}\ln\tilde{r}_{ij} + \sum_{j=s+1}^MZ_{ij}\ln \tilde{r}_{ij}^*\Bigg] + \text{const} \end{aligned} $$
(10.61)

given

$$\displaystyle \begin{aligned} \ln\tilde{r}_{ij} = &\;\ln \pi_j + R_j+S_j+\Big(\overline{\alpha}_j- \sum_{l=1}^D\overline{\alpha}_{jl}\Big)\ln\Big(\sum_{l=1}^DX_{il}\Big) + \overline{\beta}_j\langle\ln\lambda_j\rangle\\ &+\sum_{l=1}^D\Big[\big(\overline{\alpha}_{jl}-1\big)\ln X_{il}\Big]-\big(\overline{\alpha}_j+\overline{\beta}_j\big)T_{ij} \end{aligned} $$
(10.62)
$$\displaystyle \begin{aligned} \ln\tilde{r}_{ij}^* = &\;\langle\ln \pi_j^*\rangle + R_j+S_j+\Big(\overline{\alpha}_j- \sum_{l=1}^D\overline{\alpha}_{jl}\Big)\ln\Big(\sum_{l=1}^DX_{il}\Big) + \overline{\beta}_j\langle\ln\lambda_j\rangle\\ &+\sum_{l=1}^D\Big[\big(\overline{\alpha}_{jl}-1\big)\ln X_{il}\Big]-\big(\overline{\alpha}_j+\overline{\beta}_j\big)T_{ij} \end{aligned} $$
(10.63)

By exponentiating Eq. (10.58) we can write:

$$\displaystyle \begin{aligned} Q\big(\mathcal{Z}\big) \propto \prod_{i=1}^{N}\Bigg[\prod_{j=1}^{s}\tilde{r}_{ij}^{Z_{ij}}\prod_{j=s+1}^{M}\tilde{r}_{ij}^{*Z_{ij}}\Bigg] \end{aligned} $$
(10.64)

Normalizing this equation we can write the variational solution of \(Q\big (\mathcal {Z}\big )\) as

$$\displaystyle \begin{aligned} Q\big(\mathcal{Z}\big) = \prod_{i=1}^{N}\Bigg[\prod_{j=1}^{s}r_{ij}^{Z_{ij}}\prod_{j=s+1}^{M}r_{ij}^{*Z_{ij}}\Bigg] \end{aligned} $$
(10.65)

where \(r_{ij}\) and \(r_{ij}^*\) can be obtained from Eqs. (10.28) and (10.29). It also follows that \(\langle Z_{ij}\rangle = r_{ij}\) for j = 1, …, s and \(\langle Z_{ij}\rangle = r_{ij}^*\) for j = s + 1, …, M.
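As a practical illustration of the normalization step above, the following minimal sketch (not taken from the chapter) shows how the responsibilities \(r_{ij}\) of Eq. (10.65) can be computed in a numerically stable way; the array log_r_tilde is a hypothetical stand-in for the values of Eqs. (10.62) and (10.63).

```python
# Minimal sketch: exponentiate and normalize the log-responsibilities per observation.
import numpy as np

def normalize_responsibilities(log_r_tilde):
    """Return r_ij with each row summing to one, so that <Z_ij> = r_ij."""
    shifted = log_r_tilde - log_r_tilde.max(axis=1, keepdims=True)  # log-sum-exp shift
    r = np.exp(shifted)
    return r / r.sum(axis=1, keepdims=True)

# Toy usage with random stand-in values for ln r~_ij (5 observations, 3 components):
rng = np.random.default_rng(0)
r = normalize_responsibilities(rng.normal(size=(5, 3)))
print(r.sum(axis=1))  # each row sums to 1
```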

1.2 Proof of Eq. (10.24): Variational Solution of Q(π*)

Similarly, the logarithm of the variational solution \(Q(\pi_j^*)\) is given as

$$\displaystyle \begin{aligned} \ln Q\big(\pi_j^*\big) = &\big<\ln p\big(\mathcal{X},\Theta\big)\big>_{\Theta \ne \pi_j^*}\\ =&\sum_{i=1}^N\big<Z_{ij}\big>\ln \pi_j^* + \big(c_j - 1\big) \ln \pi_j^* + \text{const}\\ =&\ln \pi_j^*\Bigg[\sum_{i=1}^N\big<Z_{ij}\big>+ c_j - 1 \Bigg] + \text{const} \end{aligned} $$
(10.66)

This equation has the same logarithmic form as Eq. (10.15), so we can write the variational solution of \(Q\big(\mathbf{\pi}^*\big)\) as

$$\displaystyle \begin{aligned} Q\big(\mathbf{\pi}^*\big) = \Bigg(1 - \sum_{k=1}^s\pi_k\Bigg)^{-M+s} \frac{\Gamma\big(\sum_{j=s+1}^Mc_j^*\big)}{\prod_{j=s+1}^M\Gamma\big(c_j^*\big)}\prod_{j=s+1}^M\Bigg(\frac{\pi_j^*}{1-\sum_{k=1}^s\pi_k}\Bigg)^{c_j^*-1} \end{aligned} $$
(10.67)

where

$$\displaystyle \begin{aligned} c_j^* = \sum_{i=1}^N \big<Z_{ij}\big>+ c_j \end{aligned} $$
(10.68)

In the above equation, \(\langle Z_{ij}\rangle = r_{ij}^*\) for j = s + 1, …, M.
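As a small sketch of how Eq. (10.68) is applied in practice (the names r_star and c are hypothetical: r_star holds the responsibilities \(r_{ij}^*\) of the components j = s + 1, …, M, and c the prior parameters \(c_j\)):

```python
# Minimal sketch of the Dirichlet-type hyper-parameter update of Eq. (10.68).
import numpy as np

def update_c_star(r_star, c):
    """c*_j = sum_i <Z_ij> + c_j."""
    return r_star.sum(axis=0) + c

# Toy usage: 10 observations, 4 newly split components, flat prior c_j = 1.
rng = np.random.default_rng(1)
r_star = rng.dirichlet(np.ones(4), size=10)
print(update_c_star(r_star, np.ones(4)))
```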

1.3 Proof of Eq. (10.25): Variational Solution of Q(α)

As in the previous two cases, the logarithm of the variational solution \(Q(\alpha_{jl})\) is given by

$$\displaystyle \begin{aligned}{{}} \ln Q\big(\alpha_{jl}\big) = &\big<\ln p\big(\mathcal{X},\Theta\big)\big>_{\Theta \ne \alpha_{jl}}\\ =&\sum_{i=1}^N\big<Z_{ij}\big>\Big[\mathcal{J}\big(\alpha_{jl}\big)+\alpha_{jl}\ln X_{il} -\alpha_{jl}\ln\Big(\sum_{l=1}^DX_{il}\Big)\Big]\\ &+\big(u_{jl}-1 \big) \ln \alpha_{jl} - \nu_{jl}\alpha_{jl} + \text{const} \end{aligned} $$
(10.69)

where

$$\displaystyle \begin{aligned} \mathcal{J}\big(\alpha_{jl}\big) = \Bigg<\ln \frac{\Gamma\big(\alpha_{jl}+\sum_{s \ne l}^{D}\alpha_{js}\big)}{\Gamma\big(\alpha_{jl}\big)\prod_{s \ne l}^{D}\Gamma\big(\alpha_{js}\big)}\Bigg>_{\Theta \ne \alpha_{jl}} \end{aligned} $$
(10.70)

Similar to the case of \(R_j\), the expression for \(\mathcal{J}\big(\alpha_{jl}\big)\) is intractable. We solve this problem by finding a lower bound through a first-order Taylor expansion with respect to \(\overline{\alpha}_{jl}\). The resulting lower bound is given by

$$\displaystyle \begin{aligned} \mathcal{J}\big(\alpha_{jl}\big) \ge\,\, & \overline{\alpha}_{jl} \ln \alpha_{jl}\Bigg[\psi\Bigg(\sum_{l=1}^{D}\overline{\alpha}_{jl}\Bigg)-\psi\big(\overline{\alpha}_{jl}\big)+ \sum_{s \ne l}^{D}\overline{\alpha}_{js}\\ &\times\psi'\Bigg(\sum_{l=1}^{D}\overline{\alpha}_{jl}\Bigg)\big(\big<\ln \alpha_{js}\big>-\ln\overline{\alpha}_{js}\big)\Bigg] + \text{const} \end{aligned} $$
(10.71)

This approximation is a strict lower bound of \(\mathcal{J}\big(\alpha_{jl}\big)\). Substituting this lower bound into Eq. (10.69), we obtain

$$\displaystyle \begin{aligned} \ln Q\big(\alpha_{jl}\big) = &\sum_{i=1}^N\langle Z_{ij}\rangle\overline{\alpha}_{jl}\ln\alpha_{jl}\Bigg[\psi\bigg(\sum_{l=1}^D\overline{\alpha}_{jl}\bigg) -\psi\big(\overline{\alpha}_{jl}\big)\\ &+ \psi'\Big(\sum_{l=1}^D\overline{\alpha}_{jl}\Big)\sum_{s \ne l}^D\overline{\alpha}_{js}\Big(\langle\ln\alpha_{js}\rangle- \ln\overline{\alpha}_{js}\Big)\Bigg]\\ &+\sum_{i=1}^N\alpha_{jl}\langle Z_{ij}\rangle\Bigg[\ln X_{il} - \ln\Big(\sum_{l=1}^DX_{il}\Big)\Bigg] + \text{const} \end{aligned} $$
(10.72)

This equation can be rewritten as

$$\displaystyle \begin{aligned}{{}} \ln Q\big(\alpha_{jl}\big) = \ln \alpha_{jl}\big(u_{jl}+\varphi_{jl} - 1\big) - \alpha_{jl}\big(\nu_{jl}-\vartheta_{jl}\big) + \text{const} \end{aligned} $$
(10.73)

where

$$\displaystyle \begin{aligned} \varphi_{jl} =&\sum_{i=1}^N\langle Z_{ij}\rangle\overline{\alpha}_{jl}\Bigg[\psi\bigg(\sum_{l=1}^D\overline{\alpha}_{jl}\bigg) -\psi\big(\overline{\alpha}_{jl}\big)\\ &+ \psi'\Big(\sum_{l=1}^D\overline{\alpha}_{jl}\Big)\sum_{s \ne l}^D\overline{\alpha}_{js}\Big(\langle\ln\alpha_{js}\rangle- \ln\overline{\alpha}_{js}\Big)\Bigg] \end{aligned} $$
(10.74)
$$\displaystyle \begin{aligned} \vartheta_{jl} =& \sum_{i=1}^N\langle Z_{ij}\rangle\Bigg[\ln X_{il} - \ln\Big(\sum_{l=1}^DX_{il}\Big)\Bigg] \end{aligned} $$
(10.75)

Eq. (10.73) has the logarithmic form of a Gamma distribution. If we exponentiate both sides, we get

$$\displaystyle \begin{aligned} Q\big(\alpha_{jl}\big) \propto \alpha_{jl}^{u_{jl}+\varphi_{jl} - 1}e^{-\big(\nu_{jl}-\vartheta_{jl}\big)\alpha_{jl}} \end{aligned} $$
(10.76)

This leaves us with the optimal solutions for the hyper-parameters \(u_{jl}\) and \(\nu_{jl}\), given by

$$\displaystyle \begin{aligned} u_{jl}^* = u_{jl} + \varphi_{jl},\,\,\,\, \nu_{jl}^* = \nu_{jl}-\vartheta_{jl} \end{aligned} $$
(10.77)

By following the same procedure, we can obtain the variational solutions for \(Q(\alpha_j)\), \(Q(\beta_j)\), and \(Q(\lambda_j)\).
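As an illustration of the update in Eq. (10.77), the following minimal sketch (not the authors' code) evaluates the reconstructed Eqs. (10.74) and (10.75) and the resulting Gamma hyper-parameter updates. All names are assumptions: X is the N x D positive data matrix, r the responsibilities \(\langle Z_{ij}\rangle\) (N x M), alpha_bar and ln_alpha the current values of \(\overline{\alpha}_{jl}\) and \(\langle\ln\alpha_{jl}\rangle\) (both M x D), and u, nu the current Gamma hyper-parameters (M x D).

```python
# Minimal sketch of the Gamma hyper-parameter update of Eq. (10.77).
import numpy as np
from scipy.special import psi, polygamma  # digamma and trigamma

def update_alpha_hyperparams(X, r, alpha_bar, ln_alpha, u, nu):
    Nj = r.sum(axis=0)                                # sum_i <Z_ij>, one value per component
    alpha_sum = alpha_bar.sum(axis=1, keepdims=True)  # sum_l alpha-bar_jl

    # Bracketed term of Eq. (10.74); the sum over s != l is the full sum minus the l-th term.
    dev = alpha_bar * (ln_alpha - np.log(alpha_bar))
    cross = dev.sum(axis=1, keepdims=True) - dev
    bracket = psi(alpha_sum) - psi(alpha_bar) + polygamma(1, alpha_sum) * cross

    phi = Nj[:, None] * alpha_bar * bracket                           # Eq. (10.74)
    theta = r.T @ (np.log(X) - np.log(X.sum(axis=1, keepdims=True)))  # Eq. (10.75)
    return u + phi, nu - theta                                        # Eq. (10.77)

# Toy usage with random positive data (20 observations, D = 3, M = 2 components):
rng = np.random.default_rng(2)
X = rng.gamma(2.0, size=(20, 3))
r = rng.dirichlet(np.ones(2), size=20)
a = rng.gamma(2.0, size=(2, 3))
u_new, nu_new = update_alpha_hyperparams(X, r, a, np.log(a), np.ones((2, 3)), np.ones((2, 3)))
print(u_new.shape, nu_new.shape)
```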


Copyright information

© 2020 Springer Nature Switzerland AG

About this chapter


Cite this chapter

Maanicshah, K., Azam, M., Nguyen, H., Bouguila, N., Fan, W. (2020). Finite Inverted Beta-Liouville Mixture Models with Variational Component Splitting. In: Bouguila, N., Fan, W. (eds) Mixture Models and Applications. Unsupervised and Semi-Supervised Learning. Springer, Cham. https://doi.org/10.1007/978-3-030-23876-6_10


  • DOI: https://doi.org/10.1007/978-3-030-23876-6_10


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-23875-9

  • Online ISBN: 978-3-030-23876-6

  • eBook Packages: Engineering, Engineering (R0)
