Nonlinear Time Series Analysis Using Bayesian Mixture of Experts

  • Conference paper
  • In: Nonlinear Dynamics, Volume 2

Abstract

This paper presents a learning algorithm for nonlinear time series data using a Mixture of Experts (MoE) model, which combines several AutoRegressive with eXogenous inputs (ARX) models to reconstruct the nonlinear time series. The MoE model is trained in a Variational Bayesian (VB) framework based on a factorised approximation of the posterior distribution. This framework provides a natural way of selecting the number of experts required by the model, and also provides model structure determination. The usefulness of the method is demonstrated on a noisy discrete-time nonlinear Duffing oscillator system, both when the model structure is known and when it is not.



Author information

Corresponding author

Correspondence to Tara Baldacchino.



Copyright information

© 2014 The Society for Experimental Mechanics, Inc.

About this paper

Cite this paper

Baldacchino, T., Rowson, J., Worden, K. (2014). Nonlinear Time Series Analysis Using Bayesian Mixture of Experts. In: Kerschen, G. (eds) Nonlinear Dynamics, Volume 2. Conference Proceedings of the Society for Experimental Mechanics Series. Springer, Cham. https://doi.org/10.1007/978-3-319-04522-1_10

  • DOI: https://doi.org/10.1007/978-3-319-04522-1_10

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-04521-4

  • Online ISBN: 978-3-319-04522-1

  • eBook Packages: Engineering, Engineering (R0)
