Abstract
This paper presents a learning algorithm for nonlinear time series data using a Mixture of Experts (MoE) model, which combines several AutoRegressive with eXogenous inputs (ARX) models to reconstruct the nonlinear time series. The MoE model is trained within a Variational Bayesian (VB) framework based on a factorised approximation of the posterior distribution. This framework provides a natural way of selecting the number of experts required by the model, and also provides model structure determination. The usefulness of this method is demonstrated on a noisy discrete-time nonlinear Duffing oscillator system, both when the model structure is known and when it is not.
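To illustrate the core idea, the following is a minimal sketch of an MoE-of-ARX one-step predictor: each expert is a linear ARX model over lagged outputs and inputs, and a softmax gate weights the expert predictions. All coefficients and the regressor values here are hypothetical placeholders, not parameters from the paper, and the sketch omits the VB training procedure entirely.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax: subtract the max before exponentiating.
    z = z - np.max(z)
    e = np.exp(z)
    return e / e.sum()

def moe_predict(expert_coeffs, gate_weights, y_lags, u_lags):
    """One-step MoE-of-ARX prediction.

    Each expert computes a linear ARX prediction c @ phi from the regressor
    phi = [lagged outputs, lagged inputs]; the softmax gate turns the gating
    scores gate_weights @ phi into mixing probabilities over experts.
    """
    phi = np.concatenate([y_lags, u_lags])          # ARX regressor vector
    gates = softmax(gate_weights @ phi)             # per-expert probabilities
    preds = np.array([c @ phi for c in expert_coeffs])
    return gates @ preds, gates                     # mixture prediction, gates

# Toy setup: two ARX(2,1) experts with hypothetical coefficients.
experts = [np.array([0.5, -0.2, 1.0]),
           np.array([-0.3, 0.1, 0.5])]
gate_W = np.array([[1.0, 0.0, 0.0],
                   [-1.0, 0.0, 0.0]])               # gate splits on y[t-1]
y_hat, g = moe_predict(experts, gate_W,
                       y_lags=np.array([0.8, 0.4]), u_lags=np.array([0.1]))
```

In the paper's setting the expert and gate parameters would be learned jointly under the VB framework, which can also prune unneeded experts; here they are fixed purely for illustration.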
Copyright information
© 2014 The Society for Experimental Mechanics, Inc.
Cite this paper
Baldacchino, T., Rowson, J., Worden, K. (2014). Nonlinear Time Series Analysis Using Bayesian Mixture of Experts. In: Kerschen, G. (eds) Nonlinear Dynamics, Volume 2. Conference Proceedings of the Society for Experimental Mechanics Series. Springer, Cham. https://doi.org/10.1007/978-3-319-04522-1_10
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-04521-4
Online ISBN: 978-3-319-04522-1
eBook Packages: Engineering, Engineering (R0)