A Sparse Regression Mixture Model for Clustering Time-Series

  • Conference paper
Artificial Intelligence: Theories, Models and Applications (SETN 2008)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 5138)

Abstract

In this study we present a new sparse polynomial regression mixture model for fitting time series. The contribution of this work is the introduction of a smoothing prior over the component regression coefficients through a Bayesian framework, implemented with an appropriate Student-t distribution. The advantages of this sparsity-favouring prior are that it makes the model more robust, less dependent on the order p of the polynomials, and improves the clustering procedure. The whole framework is cast as a maximum a posteriori (MAP) problem, to which the well-known EM algorithm can be applied, yielding closed-form update equations for the model parameters. The efficiency of the proposed sparse mixture model is demonstrated experimentally by applying it to various real benchmarks and by comparing it with the typical regression mixture and the K-means algorithm. The results are very promising.
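Since this page carries only the abstract, a brief illustration may help make the model concrete. The sketch below is a minimal EM fit of a polynomial regression mixture for time series sharing a common sampling grid; the function name fit_sparse_regression_mixture is invented for illustration, and the sparsity prior is approximated by per-coefficient precision re-estimation (RVM-style) rather than the paper's exact Student-t MAP updates.

```python
import numpy as np

def fit_sparse_regression_mixture(Y, t, K=3, p=5, n_iter=50, seed=0):
    """EM for a sparse polynomial regression mixture (illustrative sketch).

    Y : (N, T) array of N time series observed at common time points t (T,).
    The paper's Student-t smoothing prior is approximated here by
    per-coefficient precisions alpha re-estimated at each M-step.
    """
    rng = np.random.default_rng(seed)
    N, T = Y.shape
    Phi = np.vander(t, p + 1, increasing=True)   # (T, p+1) polynomial design matrix
    pi = np.full(K, 1.0 / K)                     # mixing weights
    W = rng.normal(scale=0.1, size=(K, p + 1))   # component regression coefficients
    sigma2 = np.full(K, Y.var())                 # per-component noise variances
    alpha = np.ones((K, p + 1))                  # coefficient precisions (sparsity)

    for _ in range(n_iter):
        # E-step: posterior responsibility of component k for each series
        log_r = np.empty((N, K))
        for k in range(K):
            resid = Y - Phi @ W[k]
            log_r[:, k] = (np.log(pi[k])
                           - 0.5 * T * np.log(2.0 * np.pi * sigma2[k])
                           - 0.5 * (resid ** 2).sum(axis=1) / sigma2[k])
        log_r -= log_r.max(axis=1, keepdims=True)    # stabilize before exp
        R = np.exp(log_r)
        R /= R.sum(axis=1, keepdims=True)

        # M-step: closed-form updates in the spirit of the paper's MAP/EM scheme
        Nk = R.sum(axis=0)
        pi = Nk / N
        for k in range(K):
            # penalized weighted least squares; diag(alpha) plays the prior's role
            G = (Nk[k] / sigma2[k]) * (Phi.T @ Phi) + np.diag(alpha[k])
            b = (Phi.T @ (R[:, k] @ Y)) / sigma2[k]
            W[k] = np.linalg.solve(G, b)
            resid = Y - Phi @ W[k]
            sigma2[k] = (R[:, k] @ (resid ** 2).sum(axis=1)) / (Nk[k] * T)
            # re-estimate precisions: tiny coefficients get huge alpha and are
            # effectively pruned (simplified stand-in for the Student-t updates)
            alpha[k] = 1.0 / (W[k] ** 2 + 1e-6)

    return R.argmax(axis=1), W, pi, sigma2       # cluster labels and parameters
```

Because the design matrix Phi is shared by all series, the weighted normal equations collapse to Nk * Phi'Phi and each M-step is a single (p+1)-by-(p+1) solve per component. Coefficients whose precision alpha grows large are effectively pruned, which is what makes this family of models insensitive to an overly generous choice of the polynomial order p.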

Editor information

John Darzentas, George A. Vouros, Spyros Vosinakis, Argyris Arnellos

Copyright information

© 2008 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Blekas, K., Galatsanos, N., Likas, A. (2008). A Sparse Regression Mixture Model for Clustering Time-Series. In: Darzentas, J., Vouros, G.A., Vosinakis, S., Arnellos, A. (eds) Artificial Intelligence: Theories, Models and Applications. SETN 2008. Lecture Notes in Computer Science (LNAI), vol 5138. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-87881-0_7

  • DOI: https://doi.org/10.1007/978-3-540-87881-0_7

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-87880-3

  • Online ISBN: 978-3-540-87881-0

  • eBook Packages: Computer Science, Computer Science (R0)
