
Dimension Reduction for Systems with Slow Relaxation

In Memory of Leo P. Kadanoff


Abstract

We develop reduced, stochastic models for high-dimensional, dissipative dynamical systems that relax very slowly to equilibrium and can encode long-term memory. We present a variety of empirical and first-principles approaches for model reduction, and build a mathematical framework for analyzing the reduced models. We introduce the notions of universal and asymptotic filters to characterize ‘optimal’ model reductions for sloppy linear models. We illustrate our methods by applying them to the practically important problem of modeling evaporation in oil spills.



References

  1. Amir, A., Oreg, Y., Imry, Y.: On relaxations and aging of various glasses. Proc. Natl Acad. Sci. U.S.A. 109, 1850–1855 (2012)

  2. Arnold, H.M., Moroz, I.M., Palmer, T.N.: Stochastic parametrizations and model uncertainty in the Lorenz ’96 system. Philos. Trans. R. Soc. Lond. A 371, 20120510 (2013)

  3. Baladi, V.: Positive Transfer Operators and Decay of Correlations, vol. 16. Advanced Series in Nonlinear Dynamics. World Scientific, Singapore (2000)

  4. Berkenbusch, M.K., Claus, I., Dunn, C., Kadanoff, L.P., Nicewicz, M., Venkataramani, S.C.: Discrete charges on a two dimensional conductor. J. Stat. Phys. 116, 1301–1358 (2004)

  5. Berry, T., Harlim, J.: Forecasting turbulent modes with nonparametric diffusion models: learning from noisy data. Physica D 320, 57–76 (2016)

  6. Bouchaud, J.-P., Cugliandolo, L.F., Kurchan, J., Mezard, M.: Out of equilibrium dynamics in spin-glasses and other glassy systems. In: Spin Glasses and Random Fields, pp. 161–223. World Scientific, Singapore (1998)

  7. Bouchaud, J.-P.: Aging in glassy systems: new experiments, simple models, and open questions. In: Cates, M.E., Evans, M. (eds.) Soft and Fragile Matter: Nonequilibrium Dynamics, Metastability and Flow, pp. 285–304. Institute of Physics, Bristol (2000)

  8. Box, G.E.P., Jenkins, G.M., Reinsel, G.C., Ljung, G.M.: Time Series Analysis: Forecasting and Control. Wiley Series in Probability and Statistics. Wiley, Hoboken (2015)

  9. Brown, K.S., Sethna, J.P.: Statistical mechanical approaches to models with many poorly known parameters. Phys. Rev. E 68, 021904 (2003)

  10. Budisic, M., Mohr, R., Mezic, I.: Applied Koopmanism. Chaos 22(4), 047510 (2012)

  11. Chekroun, M.D., Kondrashov, D., Ghil, M.: Predicting stochastic systems by noise sampling, and application to the El Niño-southern oscillation. Proc. Natl Acad. Sci. U.S.A. 108, 11766–11771 (2011)

  12. Chorin, A.J., Hald, O.H.: Stochastic Tools in Mathematics and Science, vol. 58. Texts in Applied Mathematics. Springer, New York (2014)

  13. Chorin, A.J., Hald, O.H., Kupferman, R.: Optimal prediction and the Mori–Zwanzig representation of irreversible processes. Proc. Natl Acad. Sci. U.S.A. 97, 2968–2973 (2000)

  14. Chorin, A., Hald, O., Kupferman, R.: Optimal prediction with memory. Physica D 166, 239–257 (2002)

  15. Chorin, A.J., Lu, F.: Discrete approach to stochastic parametrization and dimension reduction in nonlinear dynamics. Proc. Natl Acad. Sci. U.S.A. 112, 9804–9809 (2015)

  16. Chorin, A., Stinis, P.: Problem reduction, renormalization, and memory. Commun. Appl. Math. Comput. Sci. 1, 1–27 (2007)

  17. Coifman, R.R., Lafon, S.: Diffusion maps. Appl. Comput. Harmonic Anal. 21, 5–30 (2006)

  18. Comeau, D., Zhao, Z., Giannakis, D., Majda, A.J.: Data-driven prediction strategies for low-frequency patterns of North Pacific climate variability. Clim. Dyn. 48(5), 1855–1872 (2015)

  19. Crisanti, A., Ritort, F.: Violation of the fluctuation–dissipation theorem in glassy systems: basic notions and the numerical evidence. J. Phys. A 36, R181 (2003)

  20. Darve, E., Solomon, J., Kia, A.: Computing generalized Langevin equations and generalized Fokker–Planck equations. Proc. Natl Acad. Sci. U.S.A. 106, 10884–10889 (2009)

  21. Dixon, P.K., Wu, L., Nagel, S.R., Williams, B.D., Carini, J.P.: Scaling in the relaxation of supercooled liquids. Phys. Rev. Lett. 65, 1108–1111 (1990)

  22. Fingas, M.F.: A literature review of the physics and predictive modelling of oil spill evaporation. J. Hazard. Mater. 42, 157–175 (1995)

  23. Fingas, M.: Modeling evaporation using models that are not boundary-layer regulated. J. Hazard Mater. 107, 27–36 (2004)

  24. Fingas, M.: Modeling oil and petroleum evaporation. J. Pet. Sci. Res. 2(3), 104–115 (2013)

  25. Flajolet, P., Odlyzko, A.: Singularity analysis of generating functions. SIAM J. Discret. Math. 3, 216–240 (1990)

  26. Giannakis, D., Majda, A.J.: Nonlinear Laplacian spectral analysis for time series with intermittency and low-frequency variability. Proc. Natl Acad. Sci. U.S.A. 109, 2222–2227 (2012)

  27. Givon, D., Kupferman, R., Stuart, A.: Extracting macroscopic dynamics: model problems and algorithms. Nonlinearity 17, R55 (2004)

  28. Givon, D., Kupferman, R., Hald, O.H.: Existence proof for orthogonal dynamics and the Mori–Zwanzig formalism. Isr. J. Math. 145, 221–241 (2005)

  29. Harlim, J., Kang, E.L., Majda, A.J.: Regression models with memory for the linear response of turbulent dynamical systems. Commun. Math. Sci. 11(2), 481–498 (2013)

  30. Hoult, D.P. (ed.): Oil on the Sea: Proceedings of a Symposium on the Scientific and Engineering Aspects of Oil Pollution of the Sea. Springer, New York (1969)

  31. Jazwinski, A.H.: Stochastic Processes and Filtering Theory. Academic Press, New York (1970)

  32. Kampen, N.V.: Stochastic Processes in Physics and Chemistry, 3rd edn. North-Holland Personal Library. North Holland, Amsterdam (2007)

  33. Kawasaki, K.: Simple derivations of generalized linear and nonlinear Langevin equations. J. Phys. A 6, 1289 (1973)

  34. Kawasaki, K.: Theoretical methods dealing with slow dynamics. J. Phys.: Condens. Matter 12, 6343 (2000)

  35. Kondrashov, D., Chekroun, M., Ghil, M.: Data-driven non-Markovian closure models. Physica D 297, 33–55 (2015)

  36. Kubo, R.: The fluctuation–dissipation theorem. Rep. Prog. Phys. 29, 255 (1966)

  37. Kutner, M., Nachtsheim, C., Neter, J., Li, W.: Applied Linear Statistical Models. McGraw-Hill/Irwin, Chicago (2004)

  38. Lin, K., Lu, F.: Stochastic parametrization, filtering, and the Mori–Zwanzig formalism. Preprint (2017)

  39. Lu, F., Lin, K.K., Chorin, A.J.: Data-based stochastic model reduction for the Kuramoto–Sivashinsky equation. Physica D 340, 46–57 (2017)

  40. Mackay, D., Matsugu, R.S.: Evaporation rates of liquid hydrocarbon spills on land and water. Can. J. Chem. Eng. 51, 434–439 (1973)

  41. Majda, A.J., Harlim, J.: Physics constrained nonlinear regression models for time series. Nonlinearity 26, 201 (2013)

  42. Matan, K., Williams, R.B., Witten, T.A., Nagel, S.R.: Crumpling a thin sheet. Phys. Rev. Lett. 88, 076101 (2002)

  43. Moghimi, S., Ramírez, J.M., Restrepo, J.M., Venkataramani, S.C.: Mass exchange dynamics of surface and subsurface oil in shallow-water transport. Preprint (2017)

  44. Mori, H.: Transport, collective motion, and Brownian motion. Prog. Theor. Phys. 33, 423–455 (1965)

  45. Ben-David, O., Rubinstein, S.M., Fineberg, J.: Slip-stick and the evolution of frictional strength. Nature 463, 76–79 (2010)

  46. Oppenheim, A.V., Schafer, R.W., Buck, J.R.: Discrete-Time Signal Processing, 2nd edn. Prentice-Hall Signal Processing Series. Prentice Hall, Englewood Cliffs (1999)

  47. Ott, E.: Chaos in Dynamical Systems. Cambridge University Press, New York (1993)

  48. Polya, G., Szegö, G.: Problems and Theorems in Analysis II: Theory of Functions, Zeros, Polynomials, Determinants, Number Theory, Geometry, Classics in Mathematics. Springer, New York (1998)

  49. Restrepo, J.M., Venkataramani, S.C., Dawson, C.: Nearshore sticky waters. Ocean Model. 80, 49–58 (2014)

  50. Restrepo, J.M., Ramírez, J.M., Venkataramani, S.C.: An oil fate model for shallow waters. J. Marine Sci. Eng. 3, 1504–1543 (2015)

  51. Spaulding, M.L.: A state-of-the-art review of oil spill trajectory and fate modeling. Oil Chem. Pollut. 4, 39–55 (1988)

  52. Stinis, P.: Renormalized Mori–Zwanzig-reduced models for systems without scale separation. Proc. R. Soc. Lond. Ser. A 471, 20140446 (2015)

  53. Stiver, W., Mackay, D.: Evaporation rate of spills of hydrocarbons and petroleum mixtures. Environ. Sci. Technol. 18(11), 834–840 (1984)

  54. Sutton, O.G.: Wind structure and evaporation in a turbulent atmosphere. Proc. R. Soc. Lond. Ser. A 146, 701–722 (1934)

  55. Takens, F.: Detecting strange attractors in turbulence. In: Rand, D., Young, L.-S. (eds.) Dynamical Systems and Turbulence, pp. 366–381. Springer, Berlin (1981)

  56. Transtrum, M.K., Machta, B.B., Sethna, J.P.: Geometry of nonlinear least squares with applications to sloppy models and optimization. Phys. Rev. E 83, 036701 (2011)

  57. Vautard, R., Yiou, P., Ghil, M.: Singular-spectrum analysis: a toolkit for short, noisy chaotic signals. Physica D 58, 95–126 (1992)

  58. Vautard, R., Ghil, M.: Singular spectrum analysis in nonlinear dynamics, with applications to paleoclimatic time series. Physica D 35, 395–424 (1989)

  59. Venturi, D., Cho, H., Karniadakis, G.: Mori–Zwanzig approach to uncertainty quantification. In: Ghanem, R., Higdon, D., Owhadi, H. (eds.) Handbook of Uncertainty Quantification. Springer, Heidelberg (2016)

  60. Venturi, D., Karniadakis, G.: Convolutionless Nakajima–Zwanzig equations for stochastic analysis in nonlinear dynamical systems. Proc. R. Soc. Math. Phys. Eng. Sci. 470, 20130754 (2014)

  61. Walker, G.: On periodicity in series of related terms. Proc. R. Soc. Lond. Ser. A 131, 518–532 (1931)

  62. Waterfall, J.J., Casey, F.P., Gutenkunst, R.N., Brown, K.S., Myers, C.R., Brouwer, P.W., Elser, V., Sethna, J.P.: Sloppy-model universality class and the Vandermonde matrix. Phys. Rev. Lett. 97, 150601 (2006)

  63. Yule, G.U.: On a method of investigating periodicities in disturbed series, with special reference to Wolfer’s sunspot numbers. Philos. Trans. R. Soc. Lond. Ser. A 226, 267–298 (1927)

  64. Zwanzig, R.: Problems in nonlinear transport theory. In: Garrido, L. (ed.) Systems Far from Equilibrium, vol. 132. Lecture Notes in Physics. Springer, Berlin (1980)

  65. Zwanzig, R.: Nonlinear generalized Langevin equations. J. Stat. Phys. 9, 215–220 (1973)

  66. Zwanzig, R.: Nonequilibrium Statistical Mechanics. Oxford University Press, New York (2001)

Acknowledgements

S.V. would like to acknowledge the many very illuminating discussions with Kevin Lin, who was very generous with his time and his ideas. We are grateful to an anonymous referee for pointing out the potential connections between our work and the sloppy models universality class. This viewpoint turns out to be particularly fruitful. This work was funded in part by a grant from GoMRI. We also received support from NSF-DMS-1109856 and NSF-OCE-1434198.

Author information

Correspondence to Shankar C. Venkataramani.

Additional information

It is with immense gratitude that we dedicate this article to Leo Kadanoff. Two of the authors (SV and JR) first met Leo as postdocs. Our lives would have been very different if not for the outsize role that Leo played in our professional development and also in our personal growth. His door, and his mind, were always open. He reminded us to ask questions, and showed us how humility gave us the courage to know what we knew and what we did not. He taught us to be fearless about pursuing a wide range of interests. The fun he had with science was infectious, and the skill he had to ask the right questions is impossible to match. Our experiences were by no means unique. There are hundreds of people whose lives Leo touched in the same way. So many of his informal seminar or lunch questions turned into full research enterprises. It is no exaggeration to say that at some point people needed only to know that Leo had been the one to ask the question in order to assure themselves that their scientific investigations were worthwhile. The last time that one of us saw Leo was in May of 2015. Coincidentally, it was at a talk on the subject of oil spill modeling, and it included some of the rudimentary ideas that grew into this paper. Leo came down to the (new) James Franck Institute for the talk. He was just as sharp as ever, and he made sure that the graduate students in the audience got all the physical intuition that the speaker elided, by interjecting appropriately. It was classic Leo. How we miss him!

Appendices

Appendix 1: The Memory Kernel for Multiple Observables

One other comment is that we can indeed compute the memory kernel explicitly for the evaporation process (4), not just for the case of one observable, the mass \(M_n\), but also more generally if we have a vector-valued linear observable \(\varPhi \), i.e., l scalar-valued observables \(\varPhi = \{\phi ^1,\phi ^2,\ldots ,\phi ^l\}^T\). Each scalar linear observable \(\phi ^i\) is given by an element of \(\mathcal {H}^*\), and we will denote the corresponding bra-vector by \(\langle {\phi ^i}|\). Using the Gram–Schmidt procedure if necessary, we can assume that the vectors \(\langle {\phi ^i}|\) give an orthonormal basis for their span, an l-dimensional subspace of \(\mathcal {H}^*\). The orthogonal projection \(P^* : \mathcal {H}^* \rightarrow \mathcal {H}^*\) onto this subspace is given by

$$\begin{aligned} P^* = \big \vert {\phi ^1}\big \rangle \langle {\phi ^1}| + \big \vert {\phi ^2}\big \rangle \langle {\phi ^2}| + \cdots + \big \vert {\phi ^l}\big \rangle \langle {\phi ^l}|. \end{aligned}$$

It follows that \(\langle {\phi ^i}| P^* = \langle {\phi ^i}| P = \langle {\phi ^i}|\) and (11) gives

$$\begin{aligned} \langle {\phi ^i}|\big \vert {\xi _{n+1}}\big \rangle = \sum _{k=0}^n \sum _{j=1}^l \langle {\phi ^i}| \varLambda (Q \varLambda )^k\big \vert {\phi ^j}\big \rangle \langle {\phi ^j}|{\xi _n}\rangle + \langle {\phi ^i}| (\varLambda Q)^{n+1} |{\rho _0}\rangle . \end{aligned}$$

The quantities \( \langle {\phi ^i}|{\xi _{n+1}}\rangle \) are the entries of the “vector” observable \(\varPhi _{n+1}\). Defining the matrices \(H_k\) by \((H_k)_{ij} = \langle {\phi ^i}| \varLambda (Q \varLambda )^k\big \vert {\phi ^j}\big \rangle \) for \(k= 0,1,2,\ldots \) and the (column) vectors \(\beta _n\) by the entries \(\beta ^i_n = \langle {\phi ^i}| (\varLambda Q)^{n+1} \big \vert {\rho _0}\big \rangle \), we have the Mori–Zwanzig decomposition

$$\begin{aligned} \varPhi _{n+1} = \sum _{k=0}^n H_k \varPhi _{n-k} + \beta _n. \end{aligned}$$
(41)

If \(\big \vert {\rho _0}\big \rangle \) is in the span of the \(\big \vert {\phi ^i}\big \rangle \), then \(Q \big \vert {\rho _0}\big \rangle = 0\) so that the noise \(\beta _n\) is identically zero. Taking \(\big \vert {\rho _0}\big \rangle = \big \vert {\phi ^1}\big \rangle ,\big \vert {\phi ^2}\big \rangle ,\ldots ,\big \vert {\phi ^l}\big \rangle \) in turn, and collecting the corresponding column vectors \(\varPhi _n\) into an \(l \times l\) matrix \(\varXi _n\), we find that

$$\begin{aligned} (\varXi _n)_{ij} = \langle {\phi ^i}| \varLambda ^n \big \vert {\phi ^j}\big \rangle = \int _0^1 \phi ^i(w) e^{-n w \tau } \phi ^j(w) dw \end{aligned}$$

is a symmetric matrix for each n, and

$$\begin{aligned} \varXi _{n+1} = \sum _{k=0}^n H_k \varXi _{n-k}, \quad \text{ for }\; n = 0,1,2,\ldots \end{aligned}$$

As before, we can determine the memory kernel \(H_k\) using the \(\mathcal {Z}\)-transform. Defining the matrices

$$\begin{aligned} \hat{\varXi }(z) = \sum _{n=0}^\infty z^{-n} \varXi _n, \quad \hat{H}(z) = \sum _{n=0}^\infty z^{-n} H_n \end{aligned}$$

we get

$$\begin{aligned} \hat{H}(z) = z(I - \hat{\varXi }(z)^{-1}). \end{aligned}$$

The matrix \(\varXi _n\) is symmetric for all n, so that \(H_n\) is also symmetric for all n. We expect that the norm \(\Vert \varXi _n\Vert \) typically decays no faster than 1/n. This is true for instance if the constant functions are in the range of P, or more generally if there are continuous functions \(\psi \) with \(\psi (0) > 0\) in the range of P. In this case, we expect that the norm of \(H_n\) decays no faster than \(1/(n \log ^2(n))\), indicating again that, generically, one expects fat tails in the memory kernel for the system (4) if we use the Mori–Zwanzig decomposition based on any finite set of linear observables.
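
The kernels \(H_k\) can also be recovered without forming the \(\mathcal {Z}\)-transform explicitly: for orthonormal observables \(\varXi _0 = I\), so the recursion \(\varXi _{n+1} = \sum _{k=0}^n H_k \varXi _{n-k}\) can be deconvolved directly in the time domain once the \(\varXi _n\) are computed by quadrature. The following Python sketch illustrates this for an assumed value of \(\tau \) and an assumed pair of orthonormal observables \(\phi ^1(w) = 1\), \(\phi ^2(w) = \sqrt{3}(2w-1)\); these choices are purely illustrative and are not taken from the paper.

```python
import numpy as np

# Illustrative parameters (not from the paper): decay rate tau, number of
# memory matrices to recover, and a Gauss-Legendre quadrature rule on [0,1].
tau, N = 0.1, 20
nodes, weights = np.polynomial.legendre.leggauss(200)
w = 0.5 * (nodes + 1.0)          # map nodes from [-1,1] to [0,1]
dw = 0.5 * weights

# Two orthonormal linear observables on [0,1] (an assumed example basis):
# phi^1(w) = 1 and phi^2(w) = sqrt(3)(2w - 1).
phi = np.vstack([np.ones_like(w), np.sqrt(3.0) * (2.0 * w - 1.0)])

# Xi_n[i,j] = int_0^1 phi^i(w) e^{-n w tau} phi^j(w) dw, by quadrature.
Xi = np.array([(phi * np.exp(-n * w * tau) * dw) @ phi.T for n in range(N + 2)])

# Recover H_0, ..., H_N from Xi_{n+1} = sum_{k=0}^{n} H_k Xi_{n-k}.
# At each n the unknown H_n multiplies Xi_0 on the right:
#   H_n Xi_0 = Xi_{n+1} - sum_{k<n} H_k Xi_{n-k}.
H = []
for n in range(N + 1):
    R = Xi[n + 1] - sum(H[k] @ Xi[n - k] for k in range(n))
    H.append(np.linalg.solve(Xi[0].T, R.T).T)   # solves H_n Xi_0 = R

print(np.max(np.abs(H[5] - H[5].T)))            # symmetry check, ~ round-off
print([np.linalg.norm(Hk) for Hk in H[:6]])     # slow decay of ||H_k||
```

The recovered matrices come out symmetric to round-off and their norms decay sub-geometrically, consistent with the discussion above.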

Appendix 2: Orthogonal Dynamics

We will now compute the statistics of the noise process \(\beta _n\) in the Mori–Zwanzig decomposition (12) using the usual approach, through the study of the projection equation (9) and the orthogonal dynamics (10). Since the orthogonal dynamics are linear, it suffices to solve the system

$$\begin{aligned} \langle {F_0}| = \langle {\delta (w-x)}| Q, \quad \langle {F_{n+1}}| = \langle {F_n}| \varLambda Q,\ \ n = 0,1,2,\ldots \end{aligned}$$

where \(x \in [0,1]\) is fixed. A calculation reveals that, for any continuous function \(\phi \),

$$\begin{aligned} \langle {F_0}|{\phi }\rangle = \langle {\delta (w-x)}|{\phi }\rangle - \langle {\delta (w-x)}| P \big \vert {\phi }\big \rangle = \phi (x) - \int _0^1 \phi (w) dw. \end{aligned}$$

We will thus associate \(\langle {F_0}|\) with the “function” \(F_0(w) = \delta (w-x) - 1\). We can follow this computation to solve the orthogonal dynamics equations recursively. For example,

$$\begin{aligned} \langle {F_1}|{\phi }\rangle&= \langle {\delta (w-x)-1}| \varLambda \big \vert {\phi }\big \rangle - \langle {\delta (w-x)-1}| \varLambda P \big \vert {\phi }\big \rangle \\&= \int _0^1 (\delta (w-x)-1) e^{-w \tau } \phi (w) dw - \int _0^1 (\delta (w-x)-1) e^{-w \tau } dw \int _0^1 \phi (w) dw \\&= \int _0^1 \left[ e^{-x\tau } \delta (w-x)- e^{-w \tau } - e^{-x \tau } + \frac{1-e^{-\tau }}{\tau }\right] \phi (w) dw, \end{aligned}$$

so that \(\langle {F_1}|\) corresponds to the function \(F_1(w) = e^{-x\tau } \delta (w-x)- e^{-w \tau } - e^{-x \tau } + \frac{1-e^{-\tau }}{\tau }\). Using the fact that Q and \(\varLambda \) are self-adjoint operators on \(\mathcal {H}\), and further \(\langle {\psi }| \varLambda \big \vert {\phi }\big \rangle = \int \psi (w) e^{-w \tau } \phi (w) dw\) so that \(\varLambda \) is diagonal on the “basis” \(\{\delta (w-x)\}_{\{x \in [0,1]\}}\), an inductive argument shows that \(F_n(w) = e^{-nx\tau } \delta (w-x) + \varPsi _n(w;x)\) where \(\varPsi _n\) is a smooth, symmetric function \(\varPsi _n(w;x) = \varPsi _n(x;w)\). We will use these conclusions to verify the full solution for \(\langle {F_n}|\) that we obtain below by independent means.
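
The first step of this recursion is easy to confirm numerically by tracking the singular and regular parts separately: writing \(F_n(w) = A_n \delta (w-x) + \varPsi _n(w)\), the update \(\langle {F_{n+1}}| = \langle {F_n}| \varLambda Q\) becomes \(A_{n+1} = A_n e^{-x\tau }\) and \(\varPsi _{n+1}(w) = e^{-w\tau }\varPsi _n(w) - \big (A_n e^{-x\tau } + \int _0^1 \varPsi _n(w) e^{-w\tau } dw\big )\). A minimal sketch, with arbitrary illustrative values of \(\tau \) and x (not taken from the paper), checking one step against the formula for \(F_1\) above:

```python
import numpy as np

# Illustrative parameters (our choice, not from the paper).
tau, x = 0.7, 0.3
w = np.linspace(0.0, 1.0, 200001)
dw = w[1] - w[0]

def integrate(f):
    # composite trapezoid rule on [0, 1]
    return dw * (f.sum() - 0.5 * (f[0] + f[-1]))

def lambda_q_step(A, Psi):
    """One step <F_{n+1}| = <F_n| Lambda Q for F_n(w) = A delta(w-x) + Psi(w)."""
    c = A * np.exp(-x * tau) + integrate(Psi * np.exp(-w * tau))
    return A * np.exp(-x * tau), Psi * np.exp(-w * tau) - c

# F_0 = delta(w - x) - 1, i.e. A_0 = 1 and Psi_0(w) = -1.
A1, Psi1 = lambda_q_step(1.0, -np.ones_like(w))

# Closed form from the text: F_1(w) = e^{-x tau} delta(w-x) - e^{-w tau}
#                                     - e^{-x tau} + (1 - e^{-tau})/tau.
Psi1_exact = -np.exp(-w * tau) - np.exp(-x * tau) + (1.0 - np.exp(-tau)) / tau
print(abs(A1 - np.exp(-x * tau)))          # singular part matches exactly
print(np.max(np.abs(Psi1 - Psi1_exact)))   # regular part matches to quadrature error
```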

Consider the \(\mathcal {Z}\)-transform \(\hat{\langle {F}|} = \sum z^{-n} \langle {F_n}|\). The orthogonal dynamics imply

$$\begin{aligned} \hat{\langle {F}|} (1 - z^{-1} \varLambda Q) = \hat{\langle {F}|} - z^{-1} \hat{\langle {F}|} \varLambda + z^{-1} \hat{\langle {F}|} \varLambda \big \vert {1}\big \rangle \langle {1}| = \langle {F_0}|. \end{aligned}$$

Using the ansatz \(\hat{F}(z,x,w) = \hat{A}(z,x) \delta (w-x) + \hat{\varPsi }(z,x,w)\) corresponding to a decomposition of \(F_n\) into its singular and regular parts, we get the pair of equations

$$\begin{aligned} (1- z^{-1} e^{-x \tau }) \hat{A}&= 1, \\ \hat{\varPsi } - z^{-1} e^{-x \tau } \hat{\varPsi } + z^{-1} e^{-w \tau } \hat{A} + z^{-1} \int e^{-x \tau } \hat{\varPsi } dx&= -1, \end{aligned}$$

where we have suppressed the arguments (z, x, w) for \(\hat{A}\) and \(\hat{\varPsi }\) for clarity. We can solve the first equation to obtain

$$\begin{aligned} \hat{A} = \frac{1}{1-z^{-1} e^{-x \tau }}. \end{aligned}$$

Using this in the second equation, we obtain

$$\begin{aligned} \hat{\varPsi } = -\frac{1}{(1-z^{-1} e^{-x \tau }) (1-z^{-1} e^{-w \tau })} - \frac{z^{-1} C(z,w)}{1-z^{-1} e^{-x \tau }}, \end{aligned}$$

where \(C(z,w) = \int e^{-x \tau } \hat{\varPsi } dx\) is determined in terms of the required solution \(\hat{\varPsi }\) self-consistently. Multiplying by \(e^{-x \tau }\), integrating in x, and solving the resulting equation for C(z, w), we obtain

$$\begin{aligned} C(z,w) = -\frac{z\left( \tau -\log \left( 1-e^{\tau } z\right) +\log (1-z)\right) }{\left( 1-z^{-1} e^{-w \tau }\right) \left( \log (1-z)-\log \left( 1-e^{\tau } z\right) \right) }. \end{aligned}$$

Using this result in the computation for \(\hat{\varPsi }\) gives

$$\begin{aligned} \hat{\varPsi }(z,x,w) = \frac{\tau }{\left( 1-z^{-1}e^{-w\tau }\right) \left( 1-z^{-1}e^{-x \tau }\right) \left( \log (1-z)-\log \left( 1-e^{\tau } z\right) \right) }. \end{aligned}$$

This gives the complete solution of the orthogonal dynamics equation:

$$\begin{aligned} \hat{F}(z) = \frac{\delta (w-x)}{1-z^{-1} e^{-x \tau }} + \frac{\tau }{\left( 1-z^{-1}e^{-w\tau }\right) \left( 1-z^{-1}e^{-x \tau }\right) \left( \log (1-z)-\log \left( 1-e^{\tau } z\right) \right) }. \end{aligned}$$

The singular part of \(F_n\) is therefore \(e^{-n \tau x} \delta (w-x)\) as we noted above. Further, the regular part \(\hat{\varPsi }\) is symmetric in w and x, implying this property for each of the functions \(\varPsi _n\). Finally, for an observable given by a continuous function g, the solution to the orthogonal dynamics is given by

$$\begin{aligned} \sum _{n=0}^\infty z^{-n} \langle {g}|{Q (\varLambda Q)^n}{|\phi } \rangle = \int _0^1 \int _0^1 g(x) \hat{F}(z,w,x) \phi (w) dw dx. \end{aligned}$$

For the observable \(M_{n}\), the prediction for the total mass at the next time step, we have \(\langle {g}| = \langle {1}| \varLambda \). The \(\mathcal {Z}\)-transform of the memory kernel is given by

$$\begin{aligned} H(z) = \sum _{k \ge 1} z^{-k} h_k&= z^{-2} \sum _{k \ge 1} z^{-k+2} \langle {1}|{(\varLambda Q)^{k-1} \varLambda }|{1}\rangle \\&= z^{-1} \langle {1}|{\varLambda }|{1}\rangle +z^{-2}\sum _{n \ge 0} z^{-n} \langle {1}|{\varLambda Q (\varLambda Q)^n \varLambda }|{1}\rangle \\&= z^{-1} \frac{1-e^{-\tau }}{\tau } + z^{-2} \int _0^1 \int _0^1 e^{-x \tau } \hat{F}(z,w,x) e^{-w \tau } dw dx \\&= \left[ 1 - \frac{\tau }{\log (e^\tau z -1) - \log (z-1)}\right] . \end{aligned}$$
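
This closed form can be checked numerically: the coefficients \(h_k = \langle {1}|{(\varLambda Q)^{k-1} \varLambda }|{1}\rangle \) are straightforward to evaluate by quadrature, and their truncated \(\mathcal {Z}\)-transform should reproduce the logarithmic expression above. A minimal sketch, with \(\tau \), the evaluation point z, and the truncation length chosen arbitrarily for illustration:

```python
import numpy as np

# Illustrative parameters (our choice, not from the paper).
tau, z, K = 0.5, 2.0, 80
w = np.linspace(0.0, 1.0, 200001)
dw = w[1] - w[0]

def integrate(f):
    # composite trapezoid rule on [0, 1]
    return dw * (f.sum() - 0.5 * (f[0] + f[-1]))

# h_k = <1| (Lambda Q)^{k-1} Lambda |1>, built by repeated application of
# Lambda Q, where Lambda multiplies by e^{-w tau} and Q f = f - int_0^1 f dw.
g = np.exp(-w * tau)                          # g = Lambda |1>
h = []
for k in range(K):
    h.append(integrate(g))                    # h_{k+1} = <1|g>
    g = np.exp(-w * tau) * (g - integrate(g))  # g <- Lambda Q g

H_truncated = sum(hk / z**k for k, hk in enumerate(h, start=1))
H_closed = 1.0 - tau / (np.log(np.exp(tau) * z - 1.0) - np.log(z - 1.0))
print(H_truncated, H_closed)   # agree up to quadrature and truncation error
```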

The \(\mathcal {Z}\)-transform of the expected values of the noise sequence \(\beta _n\) is given by

$$\begin{aligned} \sum z^{-n} \mathbb {E}[\beta _n] = \sum z^{-n} \mathbb {E}[\langle {F_n}|{\rho _0}\rangle ] = \int _0^1 \int _0^1 e^{-x \tau } \hat{F}(z,w,x) dw dx = 0 \end{aligned}$$

and the correlations between the noise \(\beta _n\) and the mass \(M_j\) are given by \(\mathbb {E}[\beta _n M_j] = \bar{\sigma }^2 \langle {1}|{(\varLambda Q)^n \varLambda ^j}|{1}\rangle \) (see Sect. 5). Taking the (two index) \(\mathcal {Z}\)-transform, noting that \(\beta _0 = 0\), we have

$$\begin{aligned} \sum _{n \ge 1} \sum _{j \ge 0} z^{-n} \zeta ^{-j} \mathbb {E}[\beta _n M_j]&= \bar{\sigma }^2\sum _{n \ge 1} \sum _{j \ge 0} z^{-n} \zeta ^{-j} \langle {1}|{(\varLambda Q)^{n} \varLambda ^j}|{1}\rangle \\&= \bar{\sigma }^2z^{-1}\sum _{n \ge 0} \sum _{j \ge 0} z^{-n} \zeta ^{-j} \langle {1}|{\varLambda Q (\varLambda Q)^n \varLambda ^j}|{1}\rangle \\&= \bar{\sigma }^2z^{-1} \int _0^1 \int _0^1 \frac{e^{-x \tau } \hat{F}(z,w,x)}{1-\zeta ^{-1} e^{-w \tau }} dw dx \\&= \frac{\bar{\sigma }^2}{\tau } z^{-1} \Bigg [\frac{1-z^{-1}e^{ -\tau }}{(1-z^{-1})(1-z^{-1}e^{-\tau })} \\&\quad + \frac{\log \left( \frac{1-z^{-1}}{1-z^{-1}e^{-\tau }}\right) \left( \log \left( z \log \left( \frac{\zeta -1}{\zeta e^{\tau }-1}\right) - \zeta \log \left( \frac{z -1}{z e^{\tau }-1}\right) \right) \right) }{(z-\zeta ) \log \left( \frac{z-1}{e^{\tau } z-1} \right) }\Bigg ]. \end{aligned}$$

It is not true that \(\mathbb {E}[\beta _n M_j] = 0\) if \(n > j\), as one would expect in the Mori–Zwanzig decomposition for a system with an invariant measure. In particular,

$$\begin{aligned} \mathbb {E}[\beta _2 M_1] = \frac{\bar{\sigma }^2 (1-e^{-\tau })((\tau -2) +(\tau +2)e^{-\tau })}{2\tau ^2} \ne 0 \end{aligned}$$

Appendix 3: Sampling Initial Conditions

For any prescribed value \(0< \bar{\sigma }^2 < \infty \), we can indeed find a family of I-dependent distributions such that

$$\begin{aligned} \mu _\gamma \rightarrow 1, \frac{\sigma _\gamma ^2}{I} \rightarrow \bar{\sigma }^2\;\text{ as }\; I \rightarrow \infty \end{aligned}$$

by appropriately truncating and rescaling a distribution that has finite mean but infinite variance. For example, the function

$$\begin{aligned} f(x) = {\left\{ \begin{array}{ll} \frac{9}{10} &{}\quad 0 \le x \le \frac{2}{3}, \\ \frac{9}{10}\left( \frac{3x}{2}\right) ^{-5/2} &{}\quad x > \frac{2}{3} \end{array}\right. } \end{aligned}$$

satisfies \(f \ge 0\) on \((0,\infty )\) and \(\int _0^\infty f(x) dx = 1\), so f is indeed a normalized density on \((0,\infty )\). Further, \(\int _0^\infty x f(x) dx = 1\) and \(\int _0^L x^2 f(x) dx \sim \sqrt{\frac{32}{75} L}\) for \(L \gg 1\). We can therefore define a sequence of I-dependent distributions by truncating the support of f and renormalizing to have unit mass, i.e.

$$\begin{aligned} f_I(x) = {\left\{ \begin{array}{ll} c_I f(x) &{} \quad 0 \le x \le L_I \\ 0 &{}\quad x > L_I, \end{array}\right. } \end{aligned}$$

where \(L_I\) is any sequence satisfying \(L_I \ge 2/3\) for all I, \(L_I \nearrow \infty \) and \({\sqrt{\frac{32}{75 I^2} L_I} \rightarrow \bar{\sigma }^2}\) as \(I \rightarrow \infty \). Given such a sequence \(L_I\), the normalization \(c_I\) is determined by \(\int _0^{L_I} f_I(x) dx = 1\), so that \(c_I \rightarrow 1\).
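
A quick numerical check of this construction, using a few arbitrary truncation points (our choice, not a sequence prescribed in the paper): the truncated normalization \(c_I\) should approach 1, and the truncated second moment should grow like \(\sqrt{32 L / 75}\).

```python
from math import sqrt
from scipy import integrate

# Density from the text: f(x) = 9/10 on [0, 2/3] and (9/10)(3x/2)^(-5/2) for x > 2/3.
def f(x):
    return 0.9 if x <= 2.0 / 3.0 else 0.9 * (1.5 * x) ** (-2.5)

for L in (10.0, 1.0e3, 1.0e5):           # illustrative truncation points L_I
    mass, _ = integrate.quad(f, 0.0, L, points=[2.0 / 3.0], limit=200)
    second, _ = integrate.quad(lambda t: t * t * f(t), 0.0, L,
                               points=[2.0 / 3.0], limit=200)
    # columns: L_I, c_I (-> 1), truncated second moment, sqrt(32 L / 75)
    print(L, 1.0 / mass, second, sqrt(32.0 * L / 75.0))
```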

Appendix 4: Asymptotic Solutions of the Yule–Walker Equations

We seek a solution to (30) as an asymptotic series in 1/n, i.e., solutions of the form

$$\begin{aligned} h^{(n)}_j = a_0^j + \frac{1}{n} a_1^j + \frac{1}{n^2} a_2^j + \cdots . \end{aligned}$$
(42)

The difficulty in solving this system is evident if we expand the coefficient matrix A as a power series in 1/n:

$$\begin{aligned} A = \frac{1}{2n}\begin{pmatrix} 1 &{}\quad 1 &{}\quad \cdots &{}\quad 1 \\ 1 &{} \quad 1 &{} \quad \cdots &{}\quad 1 \\ \vdots &{}\quad \vdots &{}\quad \ddots &{} \quad \vdots \\ 1 &{}\quad 1 &{}\quad \cdots &{} \quad 1 \end{pmatrix} + \frac{1}{4n^2}\begin{pmatrix} 2 &{}\quad 3 &{} \quad \cdots &{}\quad L+1\\ 3 &{} \quad 4 &{} \quad \cdots &{} \quad L +2\\ \vdots &{}\quad \vdots &{} \quad \ddots &{}\quad \vdots \\ L +1&{} \quad L+2 &{} \quad \cdots &{}\quad 2L\end{pmatrix} + \cdots . \end{aligned}$$

Assuming \(L \ge 3\), the two matrices displayed in the expansion of A are singular. The first matrix has rank 1, the second has rank 2. Indeed the first \(L-1\) matrices in the expansion of A are all singular and their (row) nullspaces are nested

$$\begin{aligned} v^T \begin{pmatrix} 2 &{} \quad 3 &{}\quad \cdots &{}\quad L+1 \\ 3 &{}\quad 4 &{}\quad \cdots &{}\quad L+2 \\ \vdots &{}\quad \vdots &{}\quad \ddots &{}\quad \vdots \\ L +1 &{}\quad L+2 &{} \quad \cdots &{}\quad 2L \end{pmatrix} = 0 \implies v^T \begin{pmatrix} 1 &{} \quad 1 &{}\quad \cdots &{} \quad 1 \\ 1 &{} \quad 1 &{} \quad \cdots &{} \quad 1 \\ \vdots &{}\quad \vdots &{} \quad \ddots &{}\quad \vdots \\ 1 &{} \quad 1 &{}\quad \cdots &{} \quad 1 \end{pmatrix} =0, \end{aligned}$$

and so on. The determinant of A is thus very close to zero (\(\det A \sim O(n^{-L^2})\), as we see below), so it is not clear that we have solutions for \(h^{(n)}\) whose leading order behavior stays O(1) instead of diverging with n. Proving the boundedness of \(h^{(n)}\) and determining the O(1) solution thus requires consideration of L solvability conditions given by the vectors that span the common (row) nullspaces of the initial j terms in the expansion of A for \(j=1,2,\ldots ,L-1\). Higher order terms will require even longer expansions of the matrices and more solvability conditions.

In the general case of a process with slowly decaying correlations, it is still true that the matrix of coefficients in the Yule–Walker equation is nearly singular, and one does have to go through the process described above to find optimal, reduced-dimensional models for such systems. For the evaporation process (4), however, the coefficient matrix has a special structure that we exploit to find the solutions for the optimal filter \(h^{(n)}\). The matrix A is a Cauchy matrix [48], i.e., its entries are of the form \(A_{ij} = 1/(x_i-y_j)\). In particular, we can choose \(x_i = 2n-i\) and \(y_j = j\). The determinant of a Cauchy matrix \(A_{ij} = 1/(x_i-y_j)\) is given by [48]

$$\begin{aligned} \det A = \frac{\prod _{i > j} (x_i-x_j)(y_j-y_i)}{\prod _i \prod _j (x_i-y_j)}. \end{aligned}$$

For the particular matrix A from above, the terms in the numerator are all bounded by L and the terms in the denominator are all \(\approx 2n\) if \(n \gg L\). Consequently, \(\det {A} \sim O(n^{-L^2})\). The matrix \(\hat{A}_m\) obtained by replacing the mth column of A by the vector \(v_i = \frac{1}{2n -i}\) is also a Cauchy matrix \(\hat{A}_{ij} = 1/(x_i - \hat{y}_j)\), with the same choice \(x_i = 2n-i\) and

$$\begin{aligned} \hat{y}_j = {\left\{ \begin{array}{ll} y_j &{} \quad j \ne m, \\ 0 &{} \quad j = m. \end{array}\right. } \end{aligned}$$

Cramer’s rule now yields,

$$\begin{aligned} h^{(n)}_m = \frac{\det \hat{A}_m}{\det A} = \prod _{i \ne m} \frac{i}{i-m} \prod _i \frac{2n - i -m}{2n - i}. \end{aligned}$$
(43)
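
Formula (43) is easy to sanity-check numerically: build the Cauchy matrix \(A_{ij} = 1/(2n-i-j)\) and the right-hand-side vector \(v_i = 1/(2n-i)\) appearing in the Cramer's rule computation, solve the linear system directly, and compare with the product formula. The values of n and L below are arbitrary illustrative choices.

```python
import numpy as np
from math import prod

# Illustrative sizes (our choice): lag parameter n and filter length L.
n, L = 20, 4
idx = np.arange(1, L + 1)

# Yule-Walker system from the text: A_{ij} = 1/(2n - i - j), v_i = 1/(2n - i).
A = 1.0 / (2 * n - idx[:, None] - idx[None, :])
v = 1.0 / (2 * n - idx)
h_numeric = np.linalg.solve(A, v)

# Closed form (43): h^{(n)}_m = prod_{i != m} i/(i-m) * prod_i (2n-i-m)/(2n-i).
def h_closed(m):
    p1 = prod(i / (i - m) for i in range(1, L + 1) if i != m)
    p2 = prod((2 * n - i - m) / (2 * n - i) for i in range(1, L + 1))
    return p1 * p2

h_formula = np.array([h_closed(m) for m in range(1, L + 1)])
print(np.max(np.abs(h_numeric - h_formula)))   # small; limited by the conditioning of A
```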

Cite this article

Venkataramani, S.C., Venkataramani, R.C. & Restrepo, J.M. Dimension Reduction for Systems with Slow Relaxation. J Stat Phys 167, 892–933 (2017). https://doi.org/10.1007/s10955-017-1761-7
