Localised Mixtures of Experts for Mixture of Regressions

  • Conference paper
Between Data Science and Applied Data Analysis

Abstract

In this paper, an alternative to the mixture of experts (ME), called the localised mixture of experts, is studied. It is an ME whose experts are linear regressions and whose gating network is a Gaussian classifier. The distribution of the regressors can be taken to be Gaussian, so that the joint distribution of inputs and outputs is a Gaussian mixture; this yields a substantial speed-up of the EM algorithm for the localised ME. Conversely, when studying Gaussian mixtures under specific constraints, the standard EM algorithm for mixtures of experts can be used to carry out maximum-likelihood estimation. Several useful constrained models are described, together with the corresponding modifications of the EM algorithm.
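The equivalence the abstract relies on is the standard one between a Gaussian mixture on the joint space and a mixture of linear-regression experts with Gaussian gates. As a sketch (our notation, not the paper's), write the joint density of the regressor x and response y as a K-component Gaussian mixture with block-partitioned means and covariances:

```latex
p(x, y) = \sum_{k=1}^{K} \pi_k\,
  \mathcal{N}\!\left(\begin{pmatrix} x \\ y \end{pmatrix};
  \begin{pmatrix} \mu_k^{x} \\ \mu_k^{y} \end{pmatrix},
  \begin{pmatrix} \Sigma_k^{xx} & \Sigma_k^{xy} \\
                  \Sigma_k^{yx} & \Sigma_k^{yy} \end{pmatrix}\right).
```

Conditioning each component on x recovers the localised ME form:

```latex
p(y \mid x) = \sum_{k=1}^{K} g_k(x)\,
  \mathcal{N}\!\left(y;\,
  \mu_k^{y} + \Sigma_k^{yx}(\Sigma_k^{xx})^{-1}(x - \mu_k^{x}),\,
  \Sigma_k^{yy} - \Sigma_k^{yx}(\Sigma_k^{xx})^{-1}\Sigma_k^{xy}\right),
\quad
g_k(x) = \frac{\pi_k\,\mathcal{N}(x;\,\mu_k^{x},\,\Sigma_k^{xx})}
              {\sum_{j=1}^{K}\pi_j\,\mathcal{N}(x;\,\mu_j^{x},\,\Sigma_j^{xx})} .
```

Each component thus contributes a linear-regression expert and a Gaussian-classifier gate, and the speed-up comes from running EM on the joint mixture, whose M-step is in closed form, rather than on the conditional ME likelihood, which generally needs an iterative inner loop. The following Python sketch illustrates this route under the assumptions above; it uses scikit-learn's GaussianMixture for the EM fit and is an illustration of the construction, not code from the paper:

```python
# Sketch: localised mixture of experts fitted via EM on the joint
# Gaussian mixture of z = (x, y).  Names and data are illustrative.
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_localised_me(X, y, n_experts=2, seed=0):
    """EM fit of a full-covariance Gaussian mixture on the joint (X, y)."""
    Z = np.column_stack([X, y])          # joint samples z_i = (x_i, y_i)
    return GaussianMixture(n_components=n_experts, covariance_type="full",
                           random_state=seed).fit(Z)

def predict(gmm, X):
    """E[y | x]: gate-weighted average of the per-component regressions."""
    n, d = X.shape
    K = gmm.n_components
    means = np.zeros((n, K))             # expert predictions
    log_gates = np.zeros((n, K))         # unnormalised log gate weights
    for k in range(K):
        mu, S = gmm.means_[k], gmm.covariances_[k]
        mx, my = mu[:d], mu[d:]
        Sxx, Sxy = S[:d, :d], S[:d, d:]
        # Expert k: conditional mean of y given x (a linear regression).
        beta = np.linalg.solve(Sxx, Sxy)                 # regression coefficients
        means[:, k] = (my + (X - mx) @ beta).ravel()
        # Gate k: Gaussian classifier built from the marginal of x.
        diff = X - mx
        _, logdet = np.linalg.slogdet(Sxx)
        maha = np.einsum("ij,ij->i", diff @ np.linalg.inv(Sxx), diff)
        log_gates[:, k] = (np.log(gmm.weights_[k])
                           - 0.5 * (d * np.log(2 * np.pi) + logdet + maha))
    gates = np.exp(log_gates - log_gates.max(axis=1, keepdims=True))
    gates /= gates.sum(axis=1, keepdims=True)
    return (gates * means).sum(axis=1)

# Two noisy linear regimes: y = 2x + 1 for x < 0, y = -x + 2 otherwise.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(400, 1))
y = (np.where(X[:, 0] < 0, 2 * X[:, 0] + 1, -X[:, 0] + 2)
     + 0.1 * rng.standard_normal(400))
gmm = fit_localised_me(X, y, n_experts=2)
print(predict(gmm, np.array([[-2.0], [2.0]])))   # roughly [-3., 0.]
```

On the two-regime example at the bottom, the printed predictions should come out close to -3 and 0, the values of the two local regression lines at x = -2 and x = 2, provided EM separates the regimes.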




Copyright information

© 2003 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Bouchard, G. (2003). Localised Mixtures of Experts for Mixture of Regressions. In: Schader, M., Gaul, W., Vichi, M. (eds) Between Data Science and Applied Data Analysis. Studies in Classification, Data Analysis, and Knowledge Organization. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-18991-3_18

  • DOI: https://doi.org/10.1007/978-3-642-18991-3_18

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-40354-8

  • Online ISBN: 978-3-642-18991-3
