
Compressible Reparametrization of Time-Variant Linear Dynamical Systems

Chapter in: Solving Large Scale Learning Tasks. Challenges and Algorithms

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 9580)


Abstract

Linear dynamical systems (LDS) are used to model data from various domains, including physics, smart cities, medicine, biology, chemistry and the social sciences, as a stochastic dynamic process. Whenever the model dynamics are allowed to change over time, the number of parameters can easily exceed millions. An estimate of such time-variant dynamics from a training sample that is small relative to the number of variables therefore typically results in a dense, overfitted model. Existing regularization techniques are unable to exploit the temporal structure in the model parameters. We investigate a combined reparametrization and regularization approach designed to detect redundancies in the dynamics and thereby leverage a new level of sparsity. Starting from ordinary linear dynamical systems, the new model, called ST-LDS, is derived and a proximal parameter optimization procedure is presented. Differences to \(l_1\)-regularization-based approaches are discussed, and an evaluation on synthetic data is conducted. The results show that the larger the considered system, the more sparsity can be achieved compared to plain \(l_1\)-regularization.
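The chapter body is not part of this preview, so the following sketch only illustrates the kind of proximal optimization the abstract alludes to; it is not the authors' ST-LDS. It assumes a least-squares fit of per-step transitions \(\varvec{x}_{t+1} \approx \varvec{A}_t \varvec{x}_t\), reparametrizes the transition matrices as cumulative sums of delta matrices (so time steps with unchanged dynamics become exact zeros), and applies ISTA-style proximal gradient steps with elementwise soft-thresholding. The function and parameter names (`fit_time_variant_lds`, `lam`, `step`) are hypothetical.

```python
import numpy as np

def soft_threshold(M, tau):
    """Proximal operator of the l1 norm: elementwise soft-thresholding."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def fit_time_variant_lds(X, lam=0.1, step=1e-3, iters=500):
    """Proximal-gradient (ISTA-style) fit of time-variant transitions.

    X : (T, n) array of observed states x_1, ..., x_T.
    Assumed reparametrization (NOT necessarily the paper's ST-LDS):
        A_t = D_1 + D_2 + ... + D_t,
    so a time step whose dynamics do not change has D_t = 0 exactly,
    and l1-regularizing the deltas sparsifies redundant dynamics.
    Returns the (T-1, n, n) array of recovered transition matrices A_t.
    """
    T, n = X.shape
    D = np.zeros((T - 1, n, n))                 # delta parameters D_1..D_{T-1}
    for _ in range(iters):
        A = np.cumsum(D, axis=0)                # A_t = sum of D_s for s <= t
        R = np.einsum('tij,tj->ti', A, X[:-1]) - X[1:]  # residuals A_t x_t - x_{t+1}
        G = np.einsum('ti,tj->tij', R, X[:-1])  # dLoss/dA_t = R_t x_t^T
        GD = np.cumsum(G[::-1], axis=0)[::-1]   # dLoss/dD_s = sum over t >= s of dLoss/dA_t
        D = soft_threshold(D - step * GD, step * lam)   # gradient step + prox
    return np.cumsum(D, axis=0)
```

On synthetic data with a single regime switch, such a fit should concentrate the nonzero deltas near the switch point, which is the sparsity-through-reparametrization effect the abstract describes; plain \(l_1\)-regularization applied to each \(\varvec{A}_t\) directly would shrink every matrix independently and could not exploit that redundancy.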


Notes

1. Notice that \(\varvec{A}\) is a short notation for all transition matrices of the system.

2. \([\varvec{x}]_i\) represents the i-th component of vector \(\varvec{x}\). Moreover, \([\varvec{M}]_{i,j}\) represents the entry in row i and column j of matrix \(\varvec{M}\).

3. The log partition function is usually denoted by \(A(\varvec{\theta})\). Since the symbol A is already reserved for transition matrices, we denote the log partition function by B instead; a rendering of the standard definition follows these notes.
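For readers without access to the chapter body, footnote 3 refers to the log partition function of an exponential family. The display below is a minimal rendering of the standard definition (as in Wainwright and Jordan), with the chapter's B substituted for the customary A; the sufficient statistic \(\phi\) and base measure \(\nu\) are the generic ingredients of that definition, not symbols taken from the chapter.

\[
p(\varvec{x}; \varvec{\theta}) = \exp\bigl(\langle \varvec{\theta}, \phi(\varvec{x}) \rangle - B(\varvec{\theta})\bigr),
\qquad
B(\varvec{\theta}) = \log \int \exp\bigl(\langle \varvec{\theta}, \phi(\varvec{x}) \rangle\bigr)\, \nu(\mathrm{d}\varvec{x}).
\]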


Acknowledgement

This work has been supported by Deutsche Forschungsgemeinschaft (DFG) within the Collaborative Research Center SFB 876 “Providing Information by Resource-Constrained Data Analysis”, project A1.

Author information

Corresponding author: Nico Piatkowski.


Copyright information

© 2016 Springer International Publishing Switzerland

About this chapter

Cite this chapter

Piatkowski, N., Schnitzler, F. (2016). Compressible Reparametrization of Time-Variant Linear Dynamical Systems. In: Michaelis, S., Piatkowski, N., Stolpe, M. (eds) Solving Large Scale Learning Tasks. Challenges and Algorithms. Lecture Notes in Computer Science (LNAI), vol 9580. Springer, Cham. https://doi.org/10.1007/978-3-319-41706-6_12


  • DOI: https://doi.org/10.1007/978-3-319-41706-6_12

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-41705-9

  • Online ISBN: 978-3-319-41706-6

  • eBook Packages: Computer Science, Computer Science (R0)
