Abstract
The contribution deals with sequential distributed estimation of the global parameters of normal mixture models, namely the mixing probabilities and the component means and covariances. The network of cooperating agents is represented by a directed or undirected graph whose vertices take observations, incorporate them into their own statistical knowledge about the inferred parameters, and share the observations and the posterior knowledge with other vertices. The aim of proposing a computationally cheap online estimation algorithm naturally disqualifies the popular (sequential) Monte Carlo methods, owing to their high computational burden, as well as the expectation-maximization (EM) algorithms, whose online variants require data batching or stochastic approximations. Instead, we proceed with the quasi-Bayesian approach, which allows sequential analytical incorporation of the (shared) observations into normal inverse-Wishart conjugate priors. The posterior distributions are subsequently merged using a Kullback–Leibler optimal procedure.
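To fix ideas, the sketch below illustrates one quasi-Bayesian step [11] for a univariate normal mixture with normal inverse-gamma component priors (the setting of the appendix) and Dirichlet-weighted mixing probabilities: each component absorbs a new observation in proportion to its predictive responsibility. All names, the Student-t predictive parameterization, and the initialization are our illustrative assumptions, not the authors' implementation; the multivariate normal inverse-Wishart case is analogous.

```python
import numpy as np
from scipy import stats

def quasi_bayes_step(y, m, s, a, b, kappa):
    """One quasi-Bayesian update of a K-component univariate normal mixture.

    Illustrative sketch: m, s, a, b are arrays of normal inverse-gamma
    hyperparameters (one entry per component), kappa the Dirichlet counts
    of the mixing probabilities.
    """
    # Posterior predictive of each component: Student-t with 2a degrees of
    # freedom, location m and scale sqrt(b * (s + 1) / (a * s)).
    pred = stats.t.pdf(y, df=2.0 * a, loc=m,
                       scale=np.sqrt(b * (s + 1.0) / (a * s)))
    w = kappa / kappa.sum() * pred
    w /= w.sum()                       # component responsibilities

    # Fractional conjugate updates: each component absorbs the observation
    # with weight w_k instead of weight 1.
    s_new = s + w
    m_new = (s * m + w * y) / s_new
    a_new = a + w / 2.0
    b_new = b + w * s * (y - m) ** 2 / (2.0 * s_new)
    return m_new, s_new, a_new, b_new, kappa + w

# Toy usage: two components, three sequentially arriving observations.
m, s, a, b = np.array([-1.0, 1.0]), np.ones(2), 2.0 * np.ones(2), np.ones(2)
kappa = np.ones(2)
for y in (0.9, -1.2, 1.1):
    m, s, a, b, kappa = quasi_bayes_step(y, m, s, a, b, kappa)
```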
Notes
1. The terms “adaptation” and “combination” were introduced by [10]. We adopt them for our Bayesian counterparts.
References
Dedecius, K., Sečkárová, V.: Dynamic diffusion estimation in exponential family models. IEEE Signal Process. Lett. 20(11), 1114–1117 (2013)
Dedecius, K., Reichl, J., Djurić, P.M.: Sequential estimation of mixtures in diffusion networks. IEEE Signal Process. Lett. 22(2), 197–201 (2015)
Gu, D.: Distributed EM algorithm for Gaussian mixtures in sensor networks. IEEE Trans. Neural Netw. 19(7), 1154–1166 (2008)
Frühwirth-Schnatter, S.: Finite Mixture and Markov Switching Models. Springer, London (2006)
Hlinka, O., Hlawatsch, F., Djurić, P.M.: Distributed particle filtering in agent networks: a survey, classification, and comparison. IEEE Signal Process. Mag. 30(1), 61–81 (2013)
Kárný, M., Böhm, J., Guy, T.V., Jirsa, L., Nagy, I., Nedoma, P., Tesař, L.: Optimized Bayesian Dynamic Advising: Theory and Algorithms. Springer, London (2006)
Kullback, S., Leibler, R.A.: On information and sufficiency. Ann. Math. Stat. 22(1), 79–86 (1951)
Pereira, S.S., Lopez-Valcarce, R., Pages-Zamora, A.: A diffusion-based EM algorithm for distributed estimation in unreliable sensor networks. IEEE Signal Process. Lett. 20(6), 595–598 (2013)
Raiffa, H., Schlaifer, R.: Applied Statistical Decision Theory (Harvard Business School Publications). Harvard University Press, Cambridge (1961)
Sayed, A.H.: Adaptive networks. Proc. IEEE 102(4), 460–497 (2014)
Smith, A.F.M., Makov, U.E.: A Quasi-Bayes sequential procedure for mixtures. J. R. Stat. Soc. Ser. B (Methodol.) 40(1), 106–112 (1978)
Titterington, D.M., Smith, A.F.M., Makov, U.E.: Statistical Analysis of Finite Mixture Distributions. Wiley, New York (1985)
Weng, Y., Xiao, W., Xie, L.: Diffusion-based EM algorithm for distributed estimation of Gaussian mixtures in wireless sensor networks. Sensors 11(6), 6297–6316 (2011)
Acknowledgements
This work was supported by the Czech Science Foundation, postdoctoral grant no. 14-06678P. The authors thank the referees for their valuable comments.
Appendix
Below we give several useful definitions and lemmas regarding the Bayesian estimation of exponential family distributions with conjugate priors [9]. The proofs are trivial. Their application to the normal model and normal inverse-gamma prior used in Sect. 3.4 follows.
Definition 1 (Exponential family distributions and conjugate priors).
Any distribution of a random variable y parameterized by θ with the probability density function of the form
$$p(y \mid \theta) = f(y)\, g(\theta) \exp\left\{ \eta(\theta)^{\intercal} T(y) \right\},$$
where f, g, η, and T are known functions, is called an exponential family distribution. η ≡ η(θ) is its natural parameter and T(y) is the (dimension-preserving) sufficient statistic. The form is not unique.
Any prior distribution for θ is said to be conjugate to p(y | θ) if it can be written in the form
$$\pi(\theta \mid \xi, \nu) = q(\xi, \nu)\, g(\theta)^{\nu} \exp\left\{ \eta(\theta)^{\intercal} \xi \right\},$$
where q is a known function and the hyperparameters are ν ∈ ℝ₊ and ξ of the same shape as T(y).
Lemma 1 (Bayesian update with conjugate priors).
Bayes’ theorem
$$\pi(\theta \mid \xi_t, \nu_t) \propto p(y_t \mid \theta)\, \pi(\theta \mid \xi_{t-1}, \nu_{t-1})$$
yields the posterior hyperparameters as follows:
$$\xi_t = \xi_{t-1} + T(y_t), \qquad \nu_t = \nu_{t-1} + 1.$$
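In code, Lemma 1 is a mere accumulation of sufficient statistics. A minimal sketch (function and variable names are ours):

```python
import numpy as np

def conjugate_update(xi, nu, y, T):
    """Lemma 1: absorb one observation y into the hyperparameters.

    xi -- accumulated statistic, same shape as T(y)
    nu -- scalar counting hyperparameter
    T  -- function mapping an observation to its sufficient statistic
    """
    return xi + T(y), nu + 1.0

# Example with the normal model of Lemma 2, where T(y) = [y, y^2]:
xi, nu = np.zeros(2), 0.0
for y in (1.2, -0.3, 0.7):
    xi, nu = conjugate_update(xi, nu, y, lambda y: np.array([y, y * y]))
```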
Lemma 2.
The normal model
$$p(y \mid \mu, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left\{ -\frac{(y - \mu)^2}{2\sigma^2} \right\},$$
where μ, σ² are unknown, can be written in the exponential family form with
$$f(y) = 1, \qquad g(\theta) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left\{ -\frac{\mu^2}{2\sigma^2} \right\}, \qquad \eta(\theta) = \left[ \frac{\mu}{\sigma^2},\, -\frac{1}{2\sigma^2} \right]^{\intercal}, \qquad T(y) = \left[ y,\, y^2 \right]^{\intercal}.$$
Lemma 3.
The normal inverse-gamma prior distribution for μ, σ² with the (non-natural) scalar hyperparameters m ∈ ℝ and positive s, a, b, having the density
$$\pi(\mu, \sigma^2) = \frac{\sqrt{s}\, b^a}{\sqrt{2\pi}\, \Gamma(a)} (\sigma^2)^{-a - \frac{3}{2}} \exp\left\{ -\frac{1}{\sigma^2} \left( b + \frac{s(\mu - m)^2}{2} \right) \right\},$$
can be written in the prior-conjugate form with
$$\nu = s = 2a + 3, \qquad \xi = \left[ sm,\; 2b + sm^2 \right]^{\intercal}.$$
Lemma 4.
The Bayesian update of the normal inverse-gamma prior following the previous lemma coincides with the ‘ordinary’ well-known update of the original hyperparameters,
$$m_t = \frac{s_{t-1} m_{t-1} + y_t}{s_{t-1} + 1}, \qquad s_t = s_{t-1} + 1, \qquad a_t = a_{t-1} + \frac{1}{2}, \qquad b_t = b_{t-1} + \frac{s_{t-1} (y_t - m_{t-1})^2}{2 (s_{t-1} + 1)}.$$
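As a quick numerical sanity check (a sketch with illustrative names, not the paper's code), updating the natural hyperparameters via Lemmas 1 and 3 and converting back reproduces the closed-form recursions of Lemma 4:

```python
import numpy as np

def nig_update(m, s, a, b, y):
    """Lemma 4: absorb one observation y into the hyperparameters m, s, a, b."""
    s_new = s + 1.0
    m_new = (s * m + y) / s_new
    a_new = a + 0.5
    b_new = b + s * (y - m) ** 2 / (2.0 * s_new)
    return m_new, s_new, a_new, b_new

# Cross-check against the natural-parameter route of Lemmas 1 and 3:
m, s, a, b, y = 0.0, 1.0, 2.0, 1.0, 0.8
nu, xi = s, np.array([s * m, 2 * b + s * m ** 2])   # Lemma 3 mapping
nu, xi = nu + 1.0, xi + np.array([y, y ** 2])       # Lemma 1 update
m2, s2, a2, b2 = nig_update(m, s, a, b, y)
assert np.isclose(s2, nu) and np.isclose(xi[0], s2 * m2)
assert np.isclose(xi[1], 2 * b2 + s2 * m2 ** 2)
```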
Definition 2 (Kullback–Leibler divergence).
Let f(x), g(x) be two probability density functions of a random variable x, with f absolutely continuous with respect to g. The Kullback–Leibler divergence is the nonnegative functional
$$\mathrm{D}(f \,\|\, g) = \int f(x) \log \frac{f(x)}{g(x)} \, \mathrm{d}x,$$
where the integration domain is the support of f. The Kullback–Leibler divergence is a premetric: it is zero if and only if f = g almost everywhere, and it satisfies neither the triangle inequality nor symmetry.
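For two univariate normal densities the divergence has a closed form, which is handy in the merging step. The snippet below (our illustration, not the paper's code) verifies the closed form against direct numerical integration of Definition 2:

```python
import numpy as np
from scipy import integrate, stats

def kl_normal(m1, v1, m2, v2):
    """Closed-form D(N(m1, v1) || N(m2, v2)) for means m and variances v."""
    return 0.5 * (v1 / v2 + (m2 - m1) ** 2 / v2 - 1.0 + np.log(v2 / v1))

# Numerical check: integrate f * log(f / g) over (effectively) the support of f.
f = stats.norm(0.0, 1.0)   # scale argument is the standard deviation
g = stats.norm(1.0, 2.0)
num, _ = integrate.quad(lambda x: f.pdf(x) * np.log(f.pdf(x) / g.pdf(x)), -20, 20)
assert np.isclose(num, kl_normal(0.0, 1.0, 1.0, 4.0), atol=1e-6)
```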