Abstract
We critically analyse the view that laws of nature are merely a means of compressing data. Discussing some basic notions of dynamical systems and information theory, we show that the idea that analysing large amounts of data with a compression algorithm is equivalent to the knowledge provided by scientific laws is rather naive. In particular, we discuss the subtle conceptual issue of the initial conditions of phenomena, which are generally incompressible. Starting from this point, we argue that laws of nature represent more than a mere compression of data, and that the availability of large amounts of data is, in general, of little help in understanding the behaviour of complex phenomena.
Notes
Cited by Dyson [14].
As a paradigmatic example let us consider the Langevin equation
$$\begin{aligned} {d^2 x \over dt^2}+ \gamma {dx \over dt}=-\omega ^2 x +c \eta \end{aligned}$$where \(\eta \) is a white noise, i.e. a Gaussian stochastic process with \(\langle \eta \rangle =0\) and \(\langle \eta (t) \eta (t') \rangle = \delta (t-t')\), and \(\gamma >0\). It is worth emphasising that the vector \(\mathbf{y}=(x, dx/dt)\) is a Markov process, i.e. its stochastic evolution at \(t>0\) is determined only by \(\mathbf{y}(0)\); by contrast, the scalar variable x alone is not Markovian, and its dynamics depends on its past history.
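This can be made concrete with a minimal numerical sketch, integrating the Langevin equation above with the Euler–Maruyama scheme; the parameter values and the function name `langevin` are illustrative assumptions, not part of the original text.

```python
import math
import random

def langevin(gamma=0.5, omega=1.0, c=0.3, dt=1e-3, steps=50_000, seed=0):
    """Euler-Maruyama integration of d^2x/dt^2 + gamma dx/dt = -omega^2 x + c eta.

    The pair y = (x, v) is Markovian: each update below uses only the
    current state (x, v), never the earlier history.
    """
    rng = random.Random(seed)
    x, v = 1.0, 0.0
    traj = [x]
    for _ in range(steps):
        noise = rng.gauss(0.0, 1.0) * math.sqrt(dt)  # discretised white noise
        x, v = x + v * dt, v + (-gamma * v - omega**2 * x) * dt + c * noise
        traj.append(x)
    return traj

traj = langevin()
```

If one records only the sequence of x values from `traj`, discarding v, the resulting scalar series is no longer Markovian: predicting the next x requires an estimate of the velocity, i.e. of the past history.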
The Reynolds number
$$\begin{aligned} R_e={U L \over \nu }~, \end{aligned}$$where U and L are the typical velocity and length of the flow, respectively, indicates the relevance of the non-linear terms. At small \(R_e\) the flow is laminar, while the regime \(R_e \gg 1\) is called fully developed turbulence.
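A minimal sketch of this estimate, with illustrative values (water, kinematic viscosity of order \(10^{-6}\,\mathrm{m^2/s}\), a 5 cm pipe at 1 m/s) that are assumptions for the example, not taken from the text:

```python
def reynolds(U, L, nu):
    """Reynolds number Re = U * L / nu (U: velocity, L: length, nu: kinematic viscosity)."""
    return U * L / nu

# illustrative values: water (nu ~ 1e-6 m^2/s) in a 5 cm pipe at 1 m/s
Re = reynolds(1.0, 0.05, 1e-6)   # Re ~ 5e4, well into the turbulent regime
```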
This is the essence of Kac’s lemma, a well-known result of ergodic theory [11].
In conservative cases, e.g. Hamiltonian systems, D is the number of variables involved in the dynamics; if the system is dissipative, D can be a fractional number and is smaller than the dimension of the phase-space.
A rigorous result states: \(m \ge 2 [D] +1\); from heuristic arguments one can expect that \(m= [D]+1\) is enough [11].
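The delay-embedding construction behind this bound can be sketched in a few lines; the function name `delay_embed` and the toy series are illustrative assumptions:

```python
def delay_embed(series, m, tau=1):
    """Build the time-delay vectors (x_t, x_{t+tau}, ..., x_{t+(m-1)tau})
    used to reconstruct an m-dimensional phase space from a scalar series,
    in the spirit of Takens' theorem."""
    n = len(series) - (m - 1) * tau
    return [tuple(series[t + k * tau] for k in range(m)) for t in range(n)]

# toy scalar time series; each returned tuple is a point in the
# m-dimensional reconstructed phase space
vectors = delay_embed([0.1, 0.4, 0.9, 0.2, 0.7, 0.5], m=3, tau=1)
```

In practice one chooses m at least as large as the bound above, which is why estimating a high-dimensional attractor from data quickly becomes infeasible.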
References
Sève, L.: Penser avec Marx aujourd’hui: philosophie? La Dispute (2014)
Bailly, F., Longo, G.: Mathématiques et sciences de la nature. Hermann, Paris (2006)
Mach, E.: On the Economical Nature of Physical Inquiry. Cambridge University Press, Cambridge (2014)
Mach, E.: The Science of Mechanics: A Critical and Historical Account of Its Development. Open Court Publishing Company, Chicago (1907)
Li, M., Vitányi, P.: An Introduction to Kolmogorov Complexity and Its Applications. Springer, Berlin (2009)
Solomonoff, R.J.: A formal theory of inductive inference. Part I-II. Inf. Control 7(1), 1 (1964)
Davies, P.C.W.: Why is the physical world so comprehensible. In: Zurek, W.H. (ed.) Complexity, Entropy and the Physics of Information, pp. 61–70. Addison-Wesley, Boston (1990)
Barrow, J.D.: New Theories of Everything. Oxford University Press, Oxford (2007)
Born, M.: Natural Philosophy of Cause and Chance. Read Books, Vancouver (1948)
Sokal, A.D., Bricmont, J.: Postmodern Intellectuals. Picador, New York (1997)
Cencini, M., Cecconi, F., Vulpiani, A.: Chaos. World Scientific, Singapore (2010)
Coveney, P.V., Dougherty, E.R., Highfield, R.R.: Big data need big theory too. Phil. Trans. R. Soc. A 374(2080), 20160153 (2016)
Crutchfield, J.P.: The dreams of theory. Wiley Interdiscip. Rev. 6(2), 75–79 (2014)
Dyson, F.: Birds and frogs. Not. AMS 56(2), 212–223 (2009)
Chibbaro, S., Rondoni, L., Vulpiani, A.: Reductionism, Emergence and Levels of Reality. Springer, Berlin (2014)
Gershenfeld, N.A., Weigend, A.S. (eds.): Time Series Prediction: Forecasting the Future and Understanding the Past. Addison-Wesley, Reading (1994)
Onsager, L., Machlup, S.: Fluctuations and irreversible processes. Phys. Rev. 91(6), 1505 (1953)
Ma, S.K.: Statistical Mechanics. World Scientific, Singapore (1985)
Takens, F.: Detecting strange attractors in turbulence. In: Rand, D.A., Young, L.S. (eds.) Dynamical Systems and Turbulence. Springer, Berlin (1981)
Eckmann, J.-P., Ruelle, D.: Fundamental limitations for estimating dimensions and Lyapunov exponents in dynamical systems. Physica D 56(2), 185–187 (1992)
Newton, I.: Sir Isaac Newton’s Mathematical Principles of Natural Philosophy and His System of the World. University of California Press, Berkeley (1934)
Wigner, E.P.: Events, laws of nature and invariant principles. Science 145, 995–999 (1964)
Li, M., Vitányi, P.M.B.: Inductive reasoning and Kolmogorov complexity. J. Comput. Syst. Sci. 44(2), 343–384 (1992)
Li, M., Vitányi, P.M.B.: Kolmogorov complexity arguments in combinatorics. J. Comb. Theory Ser. A 66(2), 226–236 (1994)
Li, M., Vitányi, P.M.B.: Kolmogorov complexity and its applications. Algorithm. Complex. 1, 187 (2014)
Martin-Löf, P.: The definition of random sequences. Inf. Control 9(6), 602–619 (1966)
Calude, C., Longo, G.: Classical, quantum and biological randomness as relative unpredictability. Nat. Comput. 15, 263 (2016)
Dirac, P.A.M.: Quantum mechanics of many-electron systems. Proc. R. Soc. Lond. Ser. A 123(792), 714–733 (1929)
Primas, H.: Chemistry, Quantum Mechanics and Reductionism. Perspectives in Theoretical Chemistry. Springer, Berlin (1981)
Scerri, E.R.: Collected Papers on Philosophy of Chemistry. World Scientific, Singapore (2008)
Jona-Lasinio, G.: Spontaneous symmetry breaking-variations on a theme. Prog. Theor. Phys. 124(5), 731 (2010)
Frisch, U.: Turbulence: The Legacy of A.N. Kolmogorov. Cambridge University Press, Cambridge (1995)
Bohr, T., Jensen, M.H., Paladin, G., Vulpiani, A.: Dynamical Systems Approach to Turbulence. Cambridge University Press, Cambridge (2005)
Kantz, H., Schreiber, T.: Nonlinear Time Series Analysis. Cambridge University Press, Cambridge (1997)
McAllister, J.W.: Algorithmic randomness in empirical data. Stud. Hist. Philos. Sci. Part A 34(3), 633–646 (2003)
Ruelle, D.: The Claude Bernard lecture, 1989. Deterministic chaos: the science and the fiction. Proc. R. Soc. Lond. A 427(1873), 241–248 (1990)
Vulpiani, A., Cecconi, F., Cencini, M., Puglisi, A., Vergni, D. (eds.): Large Deviations in Physics: The Legacy of the Law of Large Numbers, vol. 885. Springer, Berlin (2014)
Vulpiani, A.: Lewis Fry Richardson: scientist, visionary and pacifist. Lett. Mat. 2(3), 121–128 (2014)
Chaitin, G.J.: Information, Randomness & Incompleteness: Papers on Algorithmic Information Theory. World Scientific, Singapore (1990)
Acknowledgements
We thank M. Falcioni for his remarks and suggestions, and we especially thank A. Decoene for her careful reading of the manuscript.
Appendix: The Algorithmic Complexity in a Nutshell
Making the idea of “randomness” rigorous requires a precise mathematical formalisation of the complexity of a sequence.
This has been proposed independently in 1965 by Kolmogorov, Chaitin and Solomonoff, and refined by Martin-Löf [5, 26].
Given the sequence \(a_1, a_2, \ldots , a_N\), among all possible programs which generate this sequence one considers the one with the smallest number of instructions. Denoting by K(N) the number of these instructions, the algorithmic complexity of the sequence is defined by
$$\begin{aligned} K=\lim _{N \rightarrow \infty } {K(N) \over N}~. \end{aligned}$$
Therefore, if there is a simple rule that can be expressed by a few instructions, the complexity vanishes. If there is no rule other than the complete list of the 0s and 1s themselves, the complexity is maximal, that is 1. Intermediate values of K between 0 and 1 correspond to situations with no obvious rule, but such that part of the information necessary for a given step is contained in the previous steps.
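The algorithmic complexity itself is not computable, but the dichotomy above can be illustrated with an ordinary compressor as a crude, hedged proxy: the ratio of compressed to raw size. The use of `zlib` and the particular sequences are assumptions of this sketch, not part of the original text.

```python
import random
import zlib

random.seed(0)

# a sequence generated by a trivial rule ("print '01' fifty thousand times"):
# its shortest description is tiny, so K ~ 0
periodic = b"01" * 50_000

# a sequence with no rule at all: the shortest description is essentially
# the sequence itself, so K ~ 1
random_bytes = bytes(random.getrandbits(8) for _ in range(100_000))

ratio_periodic = len(zlib.compress(periodic)) / len(periodic)
ratio_random = len(zlib.compress(random_bytes)) / len(random_bytes)
```

The periodic sequence compresses to a negligible fraction of its length, while the random one does not compress at all; a real-world compressor gives, of course, only an upper bound on the true algorithmic complexity.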
To give an intuitive idea of the concept of complexity, let us consider a situation related to the transmission of messages [39]: A friend on Mars needs the tables of logarithms. It is easy to send him the tables in binary language; this method is safe but would naturally be very expensive. It is cheaper to send the instructions necessary to implement the algorithm which computes logarithms: it is enough to specify a few simple properties, e.g.
$$\begin{aligned} \ln (x_1 x_2)= \ln x_1 + \ln x_2~, \end{aligned}$$and, in addition, for \(|x|<1\) the following Taylor expansion:
$$\begin{aligned} \ln (1+x)= x- {x^2 \over 2} + {x^3 \over 3} - {x^4 \over 4} + \cdots \end{aligned}$$
However, if the friend is not interested in mathematics, but rather in football or the lottery, and wants to be informed of the results of football matches or lottery draws, there is no way of compressing the information into an algorithm whose repeated use produces the relevant information for the different events; the only option is the transmission of the entire information. To sum up: the cost of transmitting the algorithm for logarithms is independent of the number of logarithms one wishes to compute, whereas the cost of transmitting football or lottery results increases linearly with the number of events. One might think that the difference is that there are precise mathematical rules for logarithms, but not for football matches and lottery draws, which are then classified as random events.
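The two transmission costs can be contrasted in a minimal sketch; the `program` string, the table format, and the function name `log_table` are illustrative assumptions:

```python
import math

# the 'cheap' message: a short program that regenerates the table on demand;
# its length is fixed, whatever n the friend on Mars later chooses
program = (
    "import math\n"
    "def log_table(n):\n"
    "    return '\\n'.join(f'{k}\\t{math.log(k):.10f}' for k in range(1, n+1))\n"
)

def log_table(n):
    """The 'expensive' message: an explicit table of the first n logarithms."""
    return "\n".join(f"{k}\t{math.log(k):.10f}" for k in range(1, n + 1))

program_cost = len(program)                                  # fixed cost
table_costs = [len(log_table(n)) for n in (1_000, 10_000)]   # grows with n
```

The explicit table grows roughly linearly with n, while the program's length does not depend on n at all: exactly the distinction drawn in the text between compressible and incompressible messages.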
Chibbaro, S., Vulpiani, A. Compressibility, Laws of Nature, Initial Conditions and Complexity. Found Phys 47, 1368–1386 (2017). https://doi.org/10.1007/s10701-017-0113-4