Estimating Lyapunov Exponents from Time Series

Part of the book series: Lecture Notes in Physics (LNP, volume 915)

Abstract

Lyapunov exponents are important statistics for quantifying stability and deterministic chaos in dynamical systems. In this review article, we first revisit the computation of the Lyapunov spectrum using model equations. Then, employing state space reconstruction (delay coordinates), two approaches for estimating Lyapunov exponents from time series are presented: methods based on approximations of Jacobian matrices of the reconstructed flow and so-called direct methods that evaluate the evolution of the distances of neighbouring orbits. Most direct methods estimate only the largest Lyapunov exponent, but they have the advantage of giving graphical feedback to the user to confirm exponential divergence. This feedback provides valuable information concerning the validity and accuracy of the estimation results. Therefore, we focus on this type of algorithm for estimating Lyapunov exponents from time series and illustrate its features using the (iterated) Hénon map, the hyperchaotic folded-towel map, the well-known chaotic Lorenz-63 system, and a time-continuous six-dimensional Lorenz-96 model. These examples show that the largest Lyapunov exponent of a low-dimensional chaotic system can be successfully estimated from a time series using direct methods. With increasing attractor dimension, however, much longer time series are required, and it turns out to be crucial to take into account only those neighbouring trajectory segments in delay coordinate space which are located sufficiently close together.


Notes

  1.

    We use forward delay coordinates here. Delay reconstruction backward in time provides equivalent results.

  2.

    Since \(x_{2}(n+1) = x_{1}(n)\), any \(x_{2}\) time series will give the same results.

  3.

    This is just a rough estimate, because the choice of the sampling time \(\Delta t\) and the resulting distribution of reconstructed states on the attractor also have to be taken into account when estimating the required length of the time series.

  4.

    The matrix C and its column vectors \(\mathbf{c}^{(j)}\) depend on the time step k. To avoid clumsy notation this dependence is not explicitly indicated.

References

  1. Abarbanel, H.D.I.: Analysis of Observed Chaotic Data. Springer, New York (1996)
  2. Abarbanel, H.D.I., Brown, R., Kennel, M.B.: Lyapunov exponents in chaotic systems: their importance and their evaluation using observed data. Int. J. Mod. Phys. B 5, 1347–1375 (1991)
  3. Benettin, G., Galgani, L., Giorgilli, A., Strelcyn, J.M.: Lyapunov characteristic exponents for smooth dynamical systems and for Hamiltonian systems; a method for computing all of them. Part II: Numerical application. Meccanica 15, 21–30 (1980)
  4. Briggs, K.: An improved method for estimating Liapunov exponents of chaotic time series. Phys. Lett. A 151, 27–32 (1990)
  5. Brown, R.: Calculating Lyapunov exponents for short and/or noisy data sets. Phys. Rev. E 47(6), 3962–3969 (1993)
  6. Brown, R., Bryant, P., Abarbanel, H.D.I.: Computing the Lyapunov spectrum of a dynamical system from an observed time series. Phys. Rev. A 43, 2787–2806 (1991)
  7. Bryant, P., Brown, R., Abarbanel, H.D.I.: Lyapunov exponents from observed time series. Phys. Rev. Lett. 65, 1523–1526 (1990)
  8. Čenys, A.: Lyapunov spectrum of the maps generating identical attractors. Europhys. Lett. 21(4), 407–411 (1993)
  9. Dämmig, M., Mitschke, F.: Estimation of Lyapunov exponents from time series: the stochastic case. Phys. Lett. A 178, 385–394 (1993)
  10. Darbyshire, A.G., Broomhead, D.S.: Robust estimation of tangent maps and Liapunov spectra. Physica D 89(3–4), 287–305 (1996)
  11. Dechert, W.D., Gençay, R.: The topological invariance of Lyapunov exponents in embedded dynamics. Physica D 90, 40–55 (1996)
  12. Eckmann, J.-P., Ruelle, D.: Ergodic theory of chaos and strange attractors. Rev. Mod. Phys. 57, 617–656 (1985)
  13. Eckmann, J.-P., Ruelle, D.: Fundamental limitations for estimating dimensions and Lyapunov exponents in dynamical systems. Physica D 56, 185–187 (1992)
  14. Eckmann, J.-P., Kamphorst, S.O., Ruelle, D., Ciliberto, S.: Lyapunov exponents from time series. Phys. Rev. A 34, 4971–4979 (1986)
  15. Ellner, S., Gallant, A.R., McCaffrey, D., Nychka, D.: Convergence rates and data requirements for Jacobian-based estimates of Lyapunov exponents from data. Phys. Lett. A 153, 357–363 (1991)
  16. Fell, J., Beckmann, P.: Resonance-like phenomena in Lyapunov calculations from data reconstructed by the time-delay method. Phys. Lett. A 190, 172–176 (1994)
  17. Fell, J., Röschke, J., Beckmann, P.: Deterministic chaos and the first positive Lyapunov exponent: a nonlinear analysis of the human electroencephalogram during sleep. Biol. Cybern. 69, 139–146 (1993)
  18. Gao, J., Zheng, Z.: Local exponential divergence plot and optimal embedding of a chaotic time series. Phys. Lett. A 181, 153–158 (1993)
  19. Geist, K., Parlitz, U., Lauterborn, W.: Comparison of different methods for computing Lyapunov exponents. Prog. Theor. Phys. 83, 875–893 (1990)
  20. Gencay, R., Dechert, W.D.: An algorithm for the n Lyapunov exponents of an n-dimensional unknown dynamical system. Physica D 59, 142–157 (1992)
  21. Hénon, M.: A two-dimensional mapping with a strange attractor. Commun. Math. Phys. 50(1), 69–77 (1976)
  22. Holzfuss, J., Lauterborn, W.: Liapunov exponents from a time series of acoustic chaos. Phys. Rev. A 39, 2146–2152 (1989)
  23. Holzfuss, J., Parlitz, U.: Lyapunov exponents from time series. In: Arnold, L., Crauel, H., Eckmann, J.-P. (eds.) Proceedings of the Conference Lyapunov Exponents, Oberwolfach 1990. Lecture Notes in Mathematics, vol. 1486, pp. 263–270. Springer, Berlin (1991)
  24. Kadtke, J.B., Brush, J., Holzfuss, J.: Global dynamical equations and Lyapunov exponents from noisy chaotic time series. Int. J. Bifurcat. Chaos 3, 607–616 (1993)
  25. Kantz, H.: A robust method to estimate the maximal Lyapunov exponent of a time series. Phys. Lett. A 185, 77–87 (1994)
  26. Kantz, H., Schreiber, T.: Nonlinear Time Series Analysis. Cambridge University Press, Cambridge (2004)
  27. Kantz, H., Radons, G., Yang, H.: The problem of spurious Lyapunov exponents in time series analysis and its solution by covariant Lyapunov vectors. J. Phys. A: Math. Theor. 46, 254009 (2013)
  28. Kostelich, E.: Bootstrap estimates of chaotic dynamics. Phys. Rev. E 64, 016213 (2001)
  29. Kruel, Th.M., Eiswirth, M., Schneider, F.W.: Computation of Lyapunov spectra: effect of interactive noise and application to a chemical oscillator. Physica D 63, 117–137 (1993)
  30. Kurths, J., Herzel, H.: An attractor in solar time series. Physica D 25, 165–172 (1987)
  31. Lorenz, E.N.: Deterministic nonperiodic flow. J. Atmos. Sci. 20(2), 130–141 (1963)
  32. Lorenz, E.N.: Predictability: a problem partly solved. In: Proceedings of the Seminar on Predictability, vol. 1, pp. 1–18. ECMWF, Reading (1996)
  33. Oseledec, V.I.: A multiplicative ergodic theorem. Lyapunov characteristic numbers for dynamical systems. Trans. Moscow Math. Soc. 19, 197–231 (1968)
  34. Ott, E.: Chaos in Dynamical Systems. Cambridge University Press, Cambridge (1993)
  35. Parlitz, U.: Identification of true and spurious Lyapunov exponents from time series. Int. J. Bifurcat. Chaos 2, 155–165 (1992)
  36. Parlitz, U.: Lyapunov exponents from Chua’s circuit. J. Circuits Syst. Comput. 3, 507–523 (1993)
  37. Press, W.H., Teukolsky, S.A., Vetterling, W.T., Flannery, B.P.: Numerical Recipes: The Art of Scientific Computing, 3rd edn. Cambridge University Press, Cambridge (2007)
  38. Rosenstein, M.T., Collins, J.J., de Luca, C.J.: A practical method for calculating largest Lyapunov exponents from small data sets. Physica D 65, 117–134 (1993)
  39. Rössler, O.E.: An equation for hyperchaos. Phys. Lett. A 71, 155–157 (1979)
  40. Sano, M., Sawada, Y.: Measurement of the Lyapunov spectrum from a chaotic time series. Phys. Rev. Lett. 55, 1082–1085 (1985)
  41. Sato, S., Sano, M., Sawada, Y.: Practical methods of measuring the generalized dimension and largest Lyapunov exponent in high dimensional chaotic systems. Prog. Theor. Phys. 77, 1–5 (1987)
  42. Sauer, T., Yorke, J.A.: How many delay coordinates do you need? Int. J. Bifurcat. Chaos 3, 737–744 (1993)
  43. Sauer, T., Yorke, J., Casdagli, M.: Embedology. J. Stat. Phys. 65, 579–616 (1991)
  44. Sauer, T.D., Tempkin, J.A., Yorke, J.A.: Spurious Lyapunov exponents in attractor reconstruction. Phys. Rev. Lett. 81, 4341–4344 (1998)
  45. Shimada, I., Nagashima, T.: A numerical approach to ergodic problems of dissipative dynamical systems. Prog. Theor. Phys. 61, 1605–1616 (1979)
  46. Skokos, Ch.: The Lyapunov characteristic exponents and their computation. In: Lecture Notes in Physics, vol. 790, pp. 63–135. Springer, Berlin (2010)
  47. Stoop, R., Meier, P.F.: Evaluation of Lyapunov exponents and scaling functions from time series. J. Opt. Soc. Am. B 5, 1037–1045 (1988)
  48. Stoop, R., Parisi, J.: Calculation of Lyapunov exponents avoiding spurious elements. Physica D 50, 89–94 (1991)
  49. Takens, F.: Detecting strange attractors in turbulence. In: Lecture Notes in Mathematics, vol. 898, pp. 366–381. Springer, Berlin (1981)
  50. Theiler, J.: Estimating fractal dimension. J. Opt. Soc. Am. A 7, 1055–1073 (1990)
  51. Wolf, A., Swift, J.B., Swinney, H.L., Vastano, J.A.: Determining Lyapunov exponents from a time series. Physica D 16, 285–317 (1985)
  52. Yang, H.-L., Radons, G., Kantz, H.: Covariant Lyapunov vectors from reconstructed dynamics: the geometry behind true and spurious Lyapunov exponents. Phys. Rev. Lett. 109(24), 244101 (2012)
  53. Zeng, X., Eykholt, R., Pielke, R.A.: Estimating the Lyapunov-exponent spectrum from short time series of low precision. Phys. Rev. Lett. 66, 3229–3232 (1991)
  54. Zeng, X., Pielke, R.A., Eykholt, R.: Extracting Lyapunov exponents from short time series of low precision. Mod. Phys. Lett. B 6, 55–75 (1992)


Acknowledgements

Inspiring scientific discussions with S. Luther and all members of the Biomedical Physics Research Group and financial support from the German Federal Ministry of Education and Research (BMBF) (project FKZ 031A147, GO-Bio) and the German Research Foundation (DFG) (Collaborative Research Centre SFB 937 Project A18) are gratefully acknowledged.

Author information

Correspondence to Ulrich Parlitz.


Appendix

Let \(\varphi^{k}: \mathbb{R}^{D} \rightarrow \mathbb{R}^{D}\) be the induced flow in reconstruction space mapping reconstructed states \(\mathbf{x}_{n} = (s_{n},s_{n+L},\ldots,s_{n+(D-1)L})\) to their future values \(\varphi^{k}(\mathbf{x}_{n}) = \mathbf{x}_{n+k}\). To estimate the D × D Jacobian matrices \(D\varphi^{k}(\mathbf{x})\) from the temporal evolution of the reconstructed states \(\{\mathbf{x}_{n}\}_{n=1}^{N}\), the flow \(\varphi^{k}\) has to be approximated by a general ansatz (black-box model) like a neural network [20] or a superposition of I basis functions \(b_{i}: \mathbb{R}^{D} \rightarrow \mathbb{R}\), providing an approximating function

$$\displaystyle{ \psi ^{k}(\mathbf{x}) = \left (\psi _{ 1}^{k}(\mathbf{x}),\ldots,\psi _{ D}^{k}(\mathbf{x})\right ) = \left (\sum _{ i=1}^{I}c_{ i1}b_{i}(\mathbf{x}),\ldots,\sum _{i=1}^{I}c_{ iD}b_{i}(\mathbf{x})\right ) =\mathbf{ b}(\mathbf{x}) \cdot C }$$
(1.50)

where \(C = (c_{ij})\) denotes an I × D matrix of coefficients with columns \(\mathbf{c}^{(j)}\) (\(j = 1,\ldots,D\)) that have to be estimated and \(\mathbf{b}(\mathbf{x}) = (b_{1}(\mathbf{x}),\ldots,b_{I}(\mathbf{x}))\) is a row vector consisting of the values of all basis functions evaluated at the state \(\mathbf{x}\) (see Footnote 4).
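
As an illustration, the following sketch (Python/NumPy) builds delay vectors from a scalar time series and evaluates a model map of the form (1.50). All concrete choices are assumptions made for this example only: a synthetic test signal, Gaussian radial basis functions as the \(b_{i}\), and arbitrary values for D, L, I and the kernel width.

    import numpy as np

    def delay_vectors(s, D, L):
        """Delay reconstruction x_n = (s_n, s_{n+L}, ..., s_{n+(D-1)L})."""
        N = len(s) - (D - 1) * L
        return np.array([s[n:n + (D - 1) * L + 1:L] for n in range(N)])

    def basis_row(x, centers, w):
        """Row vector b(x) of I Gaussian radial basis functions (one possible choice)."""
        return np.exp(-np.sum((centers - x) ** 2, axis=1) / (2.0 * w ** 2))

    # synthetic test signal; a real application would use a measured (chaotic) time series
    s = np.sin(0.1 * np.arange(1000)) + 0.01 * np.random.default_rng(1).normal(size=1000)
    D, L, I = 3, 5, 20                               # reconstruction and model parameters (assumed)
    X = delay_vectors(s, D, L)                       # reconstructed states x_n
    centers = X[np.linspace(0, len(X) - 1, I, dtype=int)]   # centers of the I basis functions
    w = 0.5                                          # kernel width (assumed)
    C = np.zeros((I, D))                             # coefficient matrix; estimated from data below
    psi_k = basis_row(X[0], centers, w) @ C          # model output psi^k(x_0) = b(x_0) . C, Eq. (1.50)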

For the special choice k = L (the evolution time step equals the lag of the delay coordinates) the first D − 1 components of the map \(\psi^{k}(\mathbf{x}_{n})\) are known (due to the delay reconstruction) and an approximation is required only for the last component:

$$\displaystyle\begin{array}{rcl} \psi ^{L}(\mathbf{x}_{n})& =& \left (s_{n+L},s_{n+2L},\ldots,s_{n+(D-1)L},\sum _{i=1}^{I}c_{i}b_{i}(\mathbf{x}_{n})\right ){}\end{array}$$
(1.51)
$$\displaystyle\begin{array}{rcl} & =& (x_{n2},\ldots,x_{nD},\mathbf{b}(\mathbf{x}_{n}) \cdot \mathbf{c}).{}\end{array}$$
(1.52)

With this notation the approximation \(D\psi ^{k}(\mathbf{x})\) of the desired Jacobian matrix \(D\varphi ^{k}(\mathbf{x})\) of the (induced) flow \(\varphi (\mathbf{x})\) in embedding space can be written as

$$\displaystyle{ D\psi ^{k}(\mathbf{x}) = \left (\begin{array}{ccc} \frac{\partial b_{1}} {\partial x_{1}} & \ldots & \frac{\partial b_{I}} {\partial x_{1}}\\ \vdots & \ddots & \vdots \\ \frac{\partial b_{1}} {\partial x_{D}} & \ldots & \frac{\partial b_{I}} {\partial x_{D}} \end{array} \right )\cdot C = G\cdot C }$$
(1.53)

where G will be called the derivative matrix in the following.

For k = L the Jacobian matrix of the approximating function \(\psi^{L}\) is given as

$$\displaystyle{ D\psi ^{L}(\mathbf{x}) = \left (\begin{array}{ccccc} 0 &1&0&\ldots & 0\\ 0 &0 &1 &\ldots & 0\\ \vdots & \vdots & \vdots &\ddots & \vdots \\ 0 &0&0&\ldots & 1 \\ \sum _{i=1}^{I}c_{i} \frac{\partial b_{i}} {\partial x_{1}} & \ldots & \ldots & \ldots & \sum _{i=1}^{I}c_{i} \frac{\partial b_{i}} {\partial x_{D}} \end{array} \right ) }$$
(1.54)

Linear basis functions \(b_{i}(\mathbf{x})\) can be used to model the (linearized) flow (very) close to the reference points \(\mathbf{x}_{n}\) along the orbits. To approximate the flow in a larger neighbourhood of \(\mathbf{x}_{n}\) or even globally, nonlinear basis functions are required, such as multidimensional polynomials [2, 47] or radial basis functions [23, 24, 35].
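
Continuing the hypothetical sketch above (the names basis_row, X, centers, w and C are from that example, not from this chapter), the derivative matrix G of Eq. (1.53) is available analytically for Gaussian radial basis functions:

    def basis_gradients(x, centers, w):
        """D x I matrix G with entries db_i/dx_j for Gaussian radial basis functions
        b_i(x) = exp(-|x - c_i|^2 / (2 w^2)), cf. Eq. (1.53)."""
        b = basis_row(x, centers, w)                     # values b_i(x), shape (I,)
        return (-(x - centers) / w ** 2 * b[:, None]).T  # db_i/dx_j = -(x_j - c_ij)/w^2 * b_i(x)

    G = basis_gradients(X[0], centers, w)                # derivative matrix at an example point
    J = G @ C                                            # Jacobian estimate D psi^k(x), Eq. (1.53);
                                                         # C is fitted from data in the next sketch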

To estimate the coefficient matrix C in Eq. (1.50) or the coefficient vector \(\mathbf{c}\) in Eq. (1.51) we select a set of representative states \(\{\mathbf{z}^{j}\}\) whose temporal evolution \(\varphi ^{k}(\mathbf{z}^{j})\) is known. For local modeling this set of states consists of nearest neighbours \(\{\mathbf{x}_{m(n)}: m(n) \in \mathcal{U}_{n}\}\) of the reference point \(\mathbf{x}_{n}\) where \(\mathcal{U}_{n}\) defines the chosen neighbourhood that can be of fixed mass (a fixed number K of nearest neighbours of \(\mathbf{x}_{n}\)) or of fixed size (all points with distance smaller than a given bound ε). For global modeling of the flow the set \(\{\mathbf{z}^{j}\}\) is usually a (randomly sampled) subset of all reconstructed states. Let

$$\displaystyle{ Y = \left (\begin{array}{cccc} \varphi _{1}^{k}({\boldsymbol z^{1}}) &\ldots & \varphi _{D}^{k}({\boldsymbol z^{1}})\\ \vdots &\vdots & \vdots \\ \varphi _{1}^{k}({\boldsymbol z^{J}})&\ldots &\varphi _{D}^{k}({\boldsymbol z^{J}})\\ \end{array} \right ) }$$
(1.55)

be a J × D matrix whose rows are the components of the (known) future values \(\varphi ^{k}(\mathbf{z}^{j})\) of the J states \(\{\mathbf{z}^{j}\}\), and let

$$\displaystyle{ B = \left (\begin{array}{cccc} b_{1}({\boldsymbol z^{1}}) &\ldots & b_{I}({\boldsymbol z^{1}})\\ \vdots &\vdots & \vdots \\ b_{1}({\boldsymbol z^{J}})&\ldots &b_{I}({\boldsymbol z^{J}})\\ \end{array} \right ) }$$
(1.56)

be the J × I (design) matrix [37] whose rows are the basis functions \(b_{i}(\cdot)\) evaluated at the selected states \(\{\mathbf{z}^{j}\}\). Using this notation, the approximation task can be stated as a minimization problem with a cost function

$$\displaystyle{ g(\mathbf{c}^{(j)}) =\Vert B \cdot \mathbf{ c}^{(j)} -\mathbf{ y}^{(j)}\Vert ^{2} }$$
(1.57)

where \(\mathbf{y}^{(j)}\) denotes the j-th column of the matrix Y (given in Eq. (1.55)), or

$$\displaystyle{ g(C) =\Vert B \cdot C - Y \Vert _{F}^{2} }$$
(1.58)

where \(\Vert \cdot \Vert _{F}\) denotes the Frobenius matrix norm (also called Schur norm).
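
For the hypothetical example above, the matrices Y (Eq. (1.55)) and B (Eq. (1.56)) can be assembled from a fixed-mass neighbourhood, and the unregularized problem (1.58) can then be solved with an ordinary least-squares routine. The neighbourhood size K, the reference index and the omission of a Theiler window are simplifying assumptions of this sketch.

    k = L                                    # evolution time step, here k = L
    n_ref = 100                              # index of the reference point (arbitrary choice)
    K = 30                                   # fixed-mass neighbourhood: K nearest neighbours

    dists = np.linalg.norm(X[:-k] - X[n_ref], axis=1)     # only states with a known future value
    idx = np.argsort(dists)[1:K + 1]                      # neighbours z^j, skipping x_{n_ref} itself
    Z = X[idx]                                            # J x D matrix of selected states
    Y = X[idx + k]                                        # their known futures phi^k(z^j), Eq. (1.55)
    B = np.array([basis_row(z, centers, w) for z in Z])   # J x I design matrix, Eq. (1.56)

    C, *_ = np.linalg.lstsq(B, Y, rcond=None)             # minimizes ||B C - Y||_F^2, Eq. (1.58)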

The solution of this optimization problem may suffer from the fact that typically the states \(\{\mathbf{z}^{j}\}\) cover only some subspace of the reconstructed state space. Therefore, in particular for local modeling, ill-posed optimization problems may occur with many almost equivalent solutions. For estimating Lyapunov exponents we prefer to select solutions for the coefficient matrix C that provide partial derivatives (elements of the Jacobian matrix) with small magnitudes, because in this way spurious Lyapunov exponents are shifted towards \(-\infty\). This goal can be achieved by Tikhonov–Phillips regularization, where the cost function of the optimization problem (1.58) is extended by a penalty term \(\rho ^{2}\Vert A \cdot C\Vert ^{2}\), resulting in

$$\displaystyle{ g(\mathbf{c}^{(j)}) =\Vert B \cdot \mathbf{ c}^{(j)} -\mathbf{ y}^{(j)}\Vert ^{2} +\rho ^{2}\Vert A \cdot \mathbf{ c}^{(j)}\Vert ^{2} }$$
(1.59)

where A denotes a so-called stabilizer matrix and \(\rho \in \mathbb{R}\) is the regularization parameter that is used to control the impact of the regularization term on the solution of the minimization problem. If the identity matrix is used as stabilizer, A = I, then \(\Vert \mathbf{c}^{(j)}\Vert\) is minimized and the solution with the smallest coefficients is selected (also called Tikhonov stabilization). Another possible choice is the derivative matrix (1.53), A = G. In this case we minimize the sum of all squared singular values \(\sigma _{i}\) of \(D\psi ^{k}(\mathbf{x}) = U \cdot S \cdot V ^{tr}\), because

$$\displaystyle\begin{array}{rcl} \Vert G \cdot C\Vert _{F}^{2}& =& \Vert D\psi ^{k}(\mathbf{x})\Vert _{ F}^{2} = \mbox{ trace}\left ([D\psi ^{k}(\mathbf{x})]^{tr} \cdot D\psi ^{k}(\mathbf{x})\right ){}\end{array}$$
(1.60)
$$\displaystyle\begin{array}{rcl} & =& \mbox{ trace}\left (V \cdot S^{2} \cdot V ^{tr}\right ) = \mbox{ trace}(S^{2}) =\sum _{ i=1}^{D}\sigma _{ i}^{2}{}\end{array}$$
(1.61)

and so minimizing this term pushes the Lyapunov exponents of the estimated dynamics downwards, i.e., it maximizes contraction rates.
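
The identity (1.60)-(1.61) is easy to verify numerically for the fitted example above (again using the hypothetical names introduced in the previous sketches):

    G_ref = basis_gradients(X[n_ref], centers, w)
    J_ref = G_ref @ C                                 # estimated Jacobian D psi^k at the reference point
    sigma = np.linalg.svd(J_ref, compute_uv=False)    # its singular values
    assert np.isclose(np.linalg.norm(J_ref, 'fro') ** 2, np.sum(sigma ** 2))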

To solve the optimization problem (1.59) we rewrite it as an augmented least squares problem with a cost function

$$\displaystyle{ g(\mathbf{c}^{(j)}) =\Vert \left (\begin{array}{c} B \\ \rho A \end{array} \right )\cdot \mathbf{c}^{(j)}-\left (\begin{array}{c} \mathbf{y}^{(j)} \\ 0\end{array} \right )\Vert ^{2} =\Vert \hat{B} \cdot \mathbf{c}^{(j)}-\hat{\mathbf{y}} ^{(j)}\Vert ^{2} }$$
(1.62)

that can be minimized by solving the corresponding normal equations

$$\displaystyle{ \left (B^{tr} \cdot B +\rho ^{2}A^{tr} \cdot A\right ) \cdot \mathbf{ c}^{(j)} = B^{tr} \cdot \mathbf{ y}^{(j)} }$$
(1.63)

using a sequence of Householder transformations [35] or by employing the singular value decomposition of the matrix \(\hat{B} = U_{\hat{B} } \cdot S_{\hat{B} } \cdot V _{\hat{B} }^{tr}\) providing the minimal solution [37]

$$\displaystyle{ \mathbf{c}^{(j)} = V _{\hat{ B} } \cdot S_{\hat{B} }^{-1} \cdot U_{\hat{ B} }^{tr} \cdot \hat{\mathbf{y}} ^{(j)}. }$$
(1.64)

for each column \(\mathbf{c}^{(j)}\), or

$$\displaystyle{ C = V _{\hat{B} } \cdot S_{\hat{B} }^{-1} \cdot U_{\hat{ B} }^{tr} \cdot \hat{Y } }$$
(1.65)

for the full coefficient matrix C where \(\hat{Y } = \left (\begin{array}{c} Y\\ 0 \end{array} \right )\).

For the stabilizer A = I the regularized solution can equivalently be expressed through the singular value decomposition \(B = U_{B} \cdot S_{B} \cdot V _{B}^{tr}\) of the (unaugmented) design matrix as \(C = V _{B} \cdot F \cdot U_{B}^{tr} \cdot Y\), where F is a diagonal matrix with elements

$$\displaystyle{ \frac{\sigma _{i}} {\sigma _{i}^{2} +\rho ^{2}} }$$
(1.66)

and \(\sigma _{i}\) are the diagonal elements of \(S_{B}\) (i.e., the singular values of B).
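
A compact sketch of the regularized estimate along the lines of Eqs. (1.62)-(1.65), again with the hypothetical B, Y and G_ref from above and an arbitrarily chosen regularization parameter; truncating tiny singular values is an additional safeguard of this sketch, not something prescribed in the text:

    def regularized_coefficients(B, Y, A, rho):
        """Minimize ||B C - Y||_F^2 + rho^2 ||A C||_F^2 via the augmented
        least-squares problem (1.62) and its SVD solution (1.65)."""
        B_hat = np.vstack([B, rho * A])                        # augmented design matrix
        Y_hat = np.vstack([Y, np.zeros((A.shape[0], Y.shape[1]))])
        U, S, Vt = np.linalg.svd(B_hat, full_matrices=False)
        S_inv = np.where(S > 1e-12 * S.max(), 1.0 / S, 0.0)    # guard against tiny singular values
        return Vt.T @ (S_inv[:, None] * (U.T @ Y_hat))         # Eq. (1.65)

    rho = 0.1                                                  # regularization parameter (assumed)
    C_tik = regularized_coefficients(B, Y, np.eye(B.shape[1]), rho)   # stabilizer A = I
    C_der = regularized_coefficients(B, Y, G_ref, rho)                # stabilizer A = G

    # consistency check with the normal equations (1.63) for A = I
    A = np.eye(B.shape[1])
    assert np.allclose((B.T @ B + rho ** 2 * A.T @ A) @ C_tik, B.T @ Y)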


Copyright information

© 2016 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Parlitz, U. (2016). Estimating Lyapunov Exponents from Time Series. In: Skokos, C., Gottwald, G., Laskar, J. (eds) Chaos Detection and Predictability. Lecture Notes in Physics, vol 915. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-662-48410-4_1

