Statistical Structure Extraction in Dynamical Systems: Parametric Formulation

  • Gustavo Deco
  • Bernd Schürmann

Abstract

The dynamics underlying a time series can be extracted parametrically by learning the statistical dependencies observed in the measured data. In the parametric formulation, the goal is to model the observed process, that is, to extract the underlying statistical structure that explains the measurements. This can be formulated in the framework of neural computing and consequently by means of information theory (Deco and Obradovic, 1996). The concepts and basic elements of information theory that are required for the mathematical formulation of our theory and for the rest of the book are therefore briefly presented at the beginning of this chapter, and the most important theorems and inequalities are summarized in Appendix A. After this brief mathematical introduction, we pose parametric modeling first as a statistical problem by means of the concept of maximum likelihood and then reinterpret it in the framework of information theory. Different kinds of parametric models are studied. We start by analyzing the simplest case of linear Gaussian models (i.e., autoregressive models) and subsequently consider the more interesting case of nonlinear models by employing different types of neural network architectures without and with feedback, in the context of supervised feedforward and recurrent learning, respectively. The parametric extraction of statistical structure in time-dependent, ordered data can be posed as a special case of factorial learning, which eliminates the redundancy present in the data. Knowing how to eliminate the redundancy means that we implicitly know how the data are structured; in other words, we possess a model of the data. We conclude this chapter by formulating the concept of unsupervised modeling for univariate and multivariate time series in the context of independent component analysis and by noting its duality with the concept of maximum-likelihood-based supervised learning.
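As a concrete illustration of the linear Gaussian case mentioned in the abstract, the following is a minimal sketch of fitting an autoregressive model by maximum likelihood. Under Gaussian noise, maximizing the likelihood of each observation given its past reduces to least-squares estimation of the AR coefficients. The model order, coefficients, and simulated data below are illustrative assumptions, not taken from the chapter.

```python
import numpy as np

# Illustrative sketch (assumed example, not the chapter's own code):
# maximum-likelihood fitting of an AR(2) model. With Gaussian noise,
# the log-likelihood is maximized by ordinary least squares.

rng = np.random.default_rng(0)

# Simulate a stationary AR(2) process: x_t = 0.6*x_{t-1} - 0.3*x_{t-2} + noise
a_true = np.array([0.6, -0.3])
n = 2000
x = np.zeros(n)
for t in range(2, n):
    x[t] = a_true @ x[t - 2:t][::-1] + 0.1 * rng.standard_normal()

# Regression design: predict x_t from the p previous values
p = 2
X = np.column_stack([x[p - k - 1 : n - k - 1] for k in range(p)])
y = x[p:]

# Least-squares solution = maximum-likelihood estimate under Gaussian noise
a_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(a_hat)  # estimates close to a_true
```

The same least-squares construction extends directly to higher orders `p`; the nonlinear models discussed in the chapter replace the linear predictor `X @ a` with a neural network.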

Keywords

Mutual Information, Independent Component Analysis, Unsupervised Learning, Empirical Likelihood
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.

Copyright information

© Springer Science+Business Media New York 2001

Authors and Affiliations

  • Gustavo Deco
    • 1
  • Bernd Schürmann
    • 1
  1. Siemens Corporate Technology, Neural Computing, Munich, Germany
