Compressibility, Laws of Nature, Initial Conditions and Complexity

Foundations of Physics

Abstract

We critically analyse the view that laws of nature are merely a means to compress data. Discussing some basic notions of dynamical systems and information theory, we show that the idea that analysing large amounts of data by means of a compression algorithm is equivalent to the knowledge one can gain from scientific laws is rather naive. In particular, we discuss the subtle conceptual issue of the initial conditions of phenomena, which are generally incompressible. Starting from this point, we argue that laws of nature represent more than a pure compression of data, and that the availability of large amounts of data is, in general, not particularly useful for understanding the behaviour of complex phenomena.


Notes

  1. Cited by Dyson [14].

  2. As a paradigmatic example, let us consider the Langevin equation

    $$\begin{aligned} {d^2 x \over dt^2}+ \gamma {dx \over dt}=-\omega ^2 x +c \eta \end{aligned}$$

    where \(\eta \) is white noise, i.e. a Gaussian stochastic process with \(\langle \eta \rangle =0\) and \(\langle \eta (t) \eta (t') \rangle = \delta (t-t')\), and \(\gamma >0\). It is worth emphasising that the vector \(\mathbf{y}=(x, dx/dt)\) is a Markov process, i.e. its stochastic evolution at \(t>0\) is determined only by \(\mathbf{y}(0)\); on the contrary, the scalar variable x is not a Markov process, and its dynamics thus depends on its past history.
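    A minimal numerical sketch of this equation, using a simple Euler–Maruyama scheme (the parameter values and step size are illustrative assumptions, not taken from the text):

    ```python
    import numpy as np

    def langevin(gamma=1.0, omega=1.0, c=0.5, dt=1e-3, n_steps=50_000, seed=0):
        """Euler-Maruyama integration of d2x/dt2 + gamma dx/dt = -omega^2 x + c eta."""
        rng = np.random.default_rng(seed)
        x = np.empty(n_steps + 1)
        x[0], v = 1.0, 0.0
        sqrt_dt = np.sqrt(dt)
        for i in range(n_steps):
            # white noise enters the velocity equation with amplitude c*sqrt(dt)
            v += (-gamma * v - omega**2 * x[i]) * dt + c * sqrt_dt * rng.standard_normal()
            x[i + 1] = x[i] + v * dt
        return x

    x = langevin()
    ```

    Note that the update of the pair (x, v) uses only the current (x, v), reflecting the Markov property of \(\mathbf{y}\); the scalar sequence x alone cannot be advanced without the hidden velocity, i.e. without its past history.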

  3. The Reynolds number

    $$\begin{aligned} R_e={U L \over \nu }~, \end{aligned}$$

    where U and L are the typical velocity and length of the flow, respectively, indicates the relevance of the nonlinear terms. At small \(R_e\) we have a laminar flow, while the regime \(R_e \gg 1\) is called fully developed turbulence.
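    As a worked instance of the formula (the fluid and flow values below are illustrative assumptions):

    ```python
    def reynolds(U, L, nu):
        """Reynolds number Re = U*L/nu for typical velocity U, length L, viscosity nu."""
        return U * L / nu

    # Water (nu ~ 1e-6 m^2/s) flowing at 1 m/s past a 0.1 m obstacle:
    Re = reynolds(1.0, 0.1, 1.0e-6)   # Re >> 1: fully developed turbulence
    ```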

  4. This is the essence of Kac’s lemma, a well-known result of ergodic theory [11].
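    Kac’s lemma states that, for an ergodic measure-preserving system, the mean return time to a set A is \(1/\mu(A)\). A minimal numerical check, using an i.i.d. uniform sequence (a realisation of a Bernoulli shift) as an assumed toy system:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    u = rng.random(1_000_000)       # i.i.d. uniform samples on [0, 1)
    p = 0.1
    visits = np.flatnonzero(u < p)  # times at which the orbit visits A = [0, p)
    return_times = np.diff(visits)
    mean_rt = return_times.mean()   # Kac's lemma predicts 1/p = 10
    ```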

  5. In conservative cases, e.g. Hamiltonian systems, D is the number of variables involved in the dynamics; if the system is dissipative, D can be a fractional number and is smaller than the dimension of the phase-space.

  6. A rigorous result states \(m \ge 2 [D] +1\); from heuristic arguments one can expect that \(m= [D]+1\) is enough [11].
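    The embedding dimension m refers to delay-coordinate (Takens) reconstruction; a minimal sketch, with m and the delay chosen as illustrative assumptions:

    ```python
    import numpy as np

    def delay_embed(x, m, tau):
        """Build delay vectors (x_t, x_{t+tau}, ..., x_{t+(m-1)tau}) from a scalar series."""
        n = len(x) - (m - 1) * tau
        return np.column_stack([x[i * tau : i * tau + n] for i in range(m)])

    # Scalar observations of a periodic signal (D = 1): the rigorous bound gives
    # m >= 2[D]+1 = 3, while heuristically m = [D]+1 = 2 is often enough.
    t = np.linspace(0, 20 * np.pi, 5000)
    X = delay_embed(np.sin(t), m=3, tau=25)
    ```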

References

  1. Sève, L.: Penser avec Marx aujourd’hui: philosophie? La Dispute (2014)
  2. Bailly, F., Longo, G.: Mathématiques et sciences de la nature. Hermann, Paris (2006)
  3. Mach, E.: On the Economical Nature of Physical Inquiry. Cambridge University Press, Cambridge (2014)
  4. Mach, E.: The Science of Mechanics: A Critical and Historical Account of Its Development. Open Court Publishing Company, Chicago (1907)
  5. Li, M., Vitányi, P.: An Introduction to Kolmogorov Complexity and Its Applications. Springer, Berlin (2009)
  6. Solomonoff, R.J.: A formal theory of inductive inference. Part I–II. Inf. Control 7(1), 1 (1964)
  7. Davies, P.C.W.: Why is the physical world so comprehensible. In: Zurek, W.H. (ed.) Complexity, Entropy and the Physics of Information, pp. 61–70. Addison-Wesley, Boston (1990)
  8. Barrow, J.D.: New Theories of Everything. Oxford University Press, Oxford (2007)
  9. Born, M.: Natural Philosophy of Cause and Chance. Read Books, Vancouver (1948)
  10. Sokal, A.D., Bricmont, J.: Postmodern Intellectuals. Picador, New York (1997)
  11. Cencini, M., Cecconi, F., Vulpiani, A.: Chaos. World Scientific, Singapore (2010)
  12. Coveney, P.V., Dougherty, E.R., Highfield, R.R.: Big data need big theory too. Phil. Trans. R. Soc. A 374(2080), 20160153 (2016)
  13. Crutchfield, J.P.: The dreams of theory. Wiley Interdiscip. Rev. 6(2), 75–79 (2014)
  14. Dyson, F.: Birds and frogs. Not. AMS 56(2), 212–223 (2009)
  15. Chibbaro, S., Rondoni, L., Vulpiani, A.: Reductionism, Emergence and Levels of Reality. Springer, Berlin (2014)
  16. Gershenfeld, N.A., Weigend, A.S. (eds.): Time Series Prediction: Forecasting the Future and Understanding the Past. Addison-Wesley, Reading (1994)
  17. Onsager, L., Machlup, S.: Fluctuations and irreversible processes. Phys. Rev. 91(6), 1505 (1953)
  18. Ma, S.K.: Statistical Mechanics. World Scientific, Singapore (1985)
  19. Takens, F.: Detecting strange attractors in turbulence. In: Rand, D.A., Young, L.S. (eds.) Dynamical Systems and Turbulence. Springer, Berlin (1981)
  20. Eckmann, J.-P., Ruelle, D.: Fundamental limitations for estimating dimensions and Lyapunov exponents in dynamical systems. Physica D 56(2), 185–187 (1992)
  21. Newton, I.: Sir Isaac Newton’s Mathematical Principles of Natural Philosophy and His System of the World. University of California Press, Berkeley (1934)
  22. Wigner, E.P.: Events, laws of nature and invariant principles. Science 145, 995–999 (1964)
  23. Li, M., Vitányi, P.M.B.: Inductive reasoning and Kolmogorov complexity. J. Comput. Syst. Sci. 44(2), 343–384 (1992)
  24. Li, M., Vitányi, P.M.B.: Kolmogorov complexity arguments in combinatorics. J. Comb. Theory Ser. A 66(2), 226–236 (1994)
  25. Li, M., Vitányi, P.M.B.: Kolmogorov complexity and its applications. Algorithm. Complex. 1, 187 (2014)
  26. Martin-Löf, P.: The definition of random sequences. Inf. Control 9(6), 602–619 (1966)
  27. Calude, C., Longo, G.: Classical, quantum and biological randomness as relative unpredictability. Nat. Comput. 15, 263 (2016)
  28. Dirac, P.A.M.: Quantum mechanics of many-electron systems. Proc. R. Soc. Lond. Ser. A 123(792), 714–733 (1929)
  29. Primas, H.: Chemistry, Quantum Mechanics and Reductionism: Perspectives in Theoretical Chemistry. Springer, Berlin (1981)
  30. Scerri, E.R.: Collected Papers on Philosophy of Chemistry. World Scientific, Singapore (2008)
  31. Jona-Lasinio, G.: Spontaneous symmetry breaking: variations on a theme. Prog. Theor. Phys. 124(5), 731 (2010)
  32. Frisch, U.: Turbulence: The Legacy of A.N. Kolmogorov. Cambridge University Press, Cambridge (1995)
  33. Bohr, T., Jensen, M.H., Paladin, G., Vulpiani, A.: Dynamical Systems Approach to Turbulence. Cambridge University Press, Cambridge (2005)
  34. Kantz, H., Schreiber, T.: Nonlinear Time Series Analysis. Cambridge University Press, Cambridge (1997)
  35. McAllister, J.W.: Algorithmic randomness in empirical data. Stud. Hist. Philos. Sci. Part A 34(3), 633–646 (2003)
  36. Ruelle, D.: Deterministic chaos: the science and the fiction (The Claude Bernard Lecture, 1989). Proc. R. Soc. Lond. A 427(1873), 241–248 (1990)
  37. Vulpiani, A., Cecconi, F., Cencini, M., Puglisi, A., Vergni, D. (eds.): Large Deviations in Physics: The Legacy of the Law of Large Numbers, vol. 885. Springer, Berlin (2014)
  38. Vulpiani, A.: Lewis Fry Richardson: scientist, visionary and pacifist. Lett. Mat. 2(3), 121–128 (2014)
  39. Chaitin, G.J.: Information, Randomness & Incompleteness: Papers on Algorithmic Information Theory. World Scientific, Singapore (1990)

Acknowledgements

We thank M. Falcioni for his remarks and suggestions, and we are especially grateful to A. Decoene for her careful reading of the manuscript.

Author information

Correspondence to Sergio Chibbaro.

Appendix: The Algorithmic Complexity in a Nutshell

The rationalisation of the idea of "randomness" requires a precise mathematical formalisation of the complexity of a sequence.

This was proposed independently in 1965 by Kolmogorov, Chaitin and Solomonoff, and later refined by Martin-Löf [5, 26].

Given the sequence \(a_1, a_2, \ldots , a_N\), among all possible programs which generate this sequence one considers the one with the smallest number of instructions. Denoting by K(N) the number of these instructions, the algorithmic complexity of the sequence is defined by

$$\begin{aligned} K=\lim _{N \rightarrow \infty } {K(N) \over N} \,. \end{aligned}$$

Therefore, if there is a simple rule that can be expressed by a few instructions, the complexity vanishes. If there is no explicit rule other than the complete list of the 0s and 1s, the complexity is maximal, that is, 1. Intermediate values of K between 0 and 1 correspond to situations with no obvious rules, but such that part of the information necessary for a given step is contained in the previous steps.

To give an intuitive idea of the concept of complexity, let us consider a situation related to the transmission of messages [39]: a friend on Mars needs the tables of logarithms. It is easy to send him the tables in binary language; this method is safe but would naturally be very expensive. It is cheaper to send the instructions necessary to implement the algorithm which computes logarithms: it is enough to specify a few simple properties, e.g.

$$\begin{aligned} \ln (a \, b)=\ln (a) + \ln (b) , \,\, \ln (a^{\alpha } b^{\beta }) = \alpha \ln (a) + \beta \ln (b), \end{aligned}$$

and, in addition, for \(|x|<1\) the following Taylor expansion:

$$\begin{aligned} \ln (1+x)=\sum _{n=1}^{\infty }(-1)^{n+1} {x^n \over n} \,\,. \end{aligned}$$

However, if the friend is not interested in mathematics, but rather in football or the lottery, and wants to be informed of the results of football matches or lottery draws, there is no way to compress the information into an algorithm whose repeated use produces the relevant information for the different events; the only option is the transmission of the entire information. To sum up: the cost of transmitting the information contained in the algorithm for logarithms is independent of the number of logarithms one wishes to compute. On the contrary, the cost of transmitting football or lottery results increases linearly with the number of events. One might think that the difference is that there are precise mathematical rules for logarithms, but not for football matches and lottery drawings, which are therefore classified as random events.
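Algorithmic complexity itself is uncomputable, but a general-purpose compressor gives an upper bound on it, which makes the contrast above easy to see numerically. A minimal sketch, using Python's zlib as an assumed stand-in for the shortest program:

```python
import random
import zlib

random.seed(0)
n = 100_000
periodic = b"01" * (n // 2)                                    # simple rule: K ~ 0
lottery_like = bytes(random.getrandbits(8) for _ in range(n))  # no rule: K ~ 1

c_per = len(zlib.compress(periodic, 9))
c_rnd = len(zlib.compress(lottery_like, 9))
ratio_per = c_per / n   # near 0: transmitting the "rule" costs almost nothing
ratio_rnd = c_rnd / n   # near 1: one must transmit essentially everything
```

The periodic sequence plays the role of the logarithm tables (a short program regenerates it), while the pseudo-random bytes play the role of the lottery results, whose transmission cost grows linearly with the number of events.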

About this article

Cite this article

Chibbaro, S., Vulpiani, A. Compressibility, Laws of Nature, Initial Conditions and Complexity. Found Phys 47, 1368–1386 (2017). https://doi.org/10.1007/s10701-017-0113-4
