Classical physics, in particular classical Newtonian mechanics, can be perceived as being modelled by systems of simultaneous second-order differential equations for which the initial values of the variables and their derivatives are known. It slowly dawned on mathematical physicists that the solutions, even if they satisfied a Lipschitz condition and thus were unique, could exhibit a huge variety of behaviours, with huge structural differences. Some of these solutions turned out to be unstable [256]: not always does “a small error in the data only introduce a small error in the result” [359, p. 442] (see also [162]).

1 Sensitivity to Changes of Initial Value

What is presently known as deterministic chaos [387, 458] – a term which is a contradictio in adjecto, an oxymoron of sorts – has a long and intriguing history, not without twists, ruptures and surprises [170, 171]. As has been mentioned earlier (see Sect. 17.4 on p. 137), already Maxwell hinted at physical situations in which very tiny variations or disturbances of the state could be amplified tremendously, resulting in huge variations in the evolution of the system. In an epistemic sense, this might make prediction and forecasting an extremely difficult, if not impossible, task.

The idea is rather simple: the term “deterministic” refers to the state evolution – often a first-order, nonlinear difference equation [360] – which is “deterministic” in the sense that the past state determines the future state uniquely. This state evolution is capable of “unfolding” the information contained in the initial state.
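
Such a deterministic-yet-unstable state evolution can be sketched with the logistic map, a standard example of a first-order, nonlinear difference equation; the code below is an illustration whose function names are ours, not the text’s:

```python
# Illustrative sketch: a deterministic, first-order, nonlinear difference
# equation in which the past state uniquely determines the future state,
# yet tiny differences in the initial state get amplified.

def logistic(x: float, r: float = 4.0) -> float:
    """One deterministic update step: x_{n+1} = r * x_n * (1 - x_n)."""
    return r * x * (1.0 - x)

def evolve(x0: float, steps: int, r: float = 4.0) -> list[float]:
    """'Unfold' the information contained in the initial state x0."""
    orbit = [x0]
    for _ in range(steps):
        orbit.append(logistic(orbit[-1], r))
    return orbit

# Two initial states differing by 1e-10 typically separate to order one
# within a few dozen iterations (the map doubles small errors per step).
a = evolve(0.4, 50)
b = evolve(0.4 + 1e-10, 50)
print(abs(a[-1] - b[-1]))
```

The evolution law is perfectly computable; only the practical predictability is lost, which is the epistemic point made above.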

The second term, “chaos” or “chaotic,” refers to a situation in which the algorithmic information of the initial value is “revealed” throughout the evolution. Thereby, “true,” irreducible chaos rests on the assumption of the continuum, and on the possibility of “grabbing” or taking one element from the continuum – an element which is random with probability 1 (cf. Sect. A.2 on p. 171) – and recovering the (in the limit algorithmically incompressible) “information” contained therein. Conversely, if the initial value is computable – and hence neither incomputable nor random – then the evolution is not chaotic but merely sensitive to the computable initial value.

The question of whether physical initial values are computable or incomputable or even random (in the formal sense discussed in Appendix A) cannot be settled operationally and is thus metaphysical. Very pointedly stated, with regard to the ontology and the type of randomness involved, deterministic chaos amounts to a sort of “garbage-in, garbage-out” process.

In what may be considered the beginning of deterministic chaos theory, Poincaré was forced to accept a gradual, that is, epistemic (albeit not ontological in principle), departure from the deterministic position: sometimes small variations in the initial state of the bodies could lead to huge variations in their evolution at later times. In Poincaré’s own words [413, Chap. 4, Sect. II, pp. 56–57], “If we would know the laws of Nature and the state of the Universe precisely for a certain time, we would be able to predict with certainty the state of the Universe for any later time. But [\(\ldots \)] it can be the case that small differences in the initial values produce great differences in the later phenomena; a small error in the former may result in a large error in the latter. The prediction becomes impossible and we have a ‘random phenomenon.’ ” See also Maxwell’s observation of a metastable state at singular points, discussed in Sect. 17.4 earlier.

2 Symbolic Dynamics of the Logistic Shift Map

Symbolic dynamics [27, 310, 339] and ergodic theory [153, 192, 393] have identified the Poincaré map near a homoclinic orbit, the horseshoe map [470], and the shift map as equivalent origins of classical deterministic chaotic motion, which is characterized by a computable evolution law and by sensitivity and instability with respect to variations of the initial value [15, 338, 465].

This scenario can be demonstrated by considering the shift map \(\sigma \) as it pushes up “dormant” information residing in the successive bits of the initial state represented by the sequence \(s=0.\text {(bit 1)}\text {(bit 2)}\text {(bit 3)}\cdots \), thereby truncating the bits before the comma:

$$\begin{aligned} \begin{aligned} \sigma (s)= 0.\text {(bit 2)}\text {(bit 3)}\text {(bit 4)}\cdots ,\\ \sigma (\sigma (s))= 0.\text {(bit 3)}\text {(bit 4)}\text {(bit 5)}\cdots , \\ \sigma (\sigma (\sigma (s)))= 0.\text {(bit 4)}\text {(bit 5)}\text {(bit 6)}\cdots , \\ \vdots \end{aligned} \end{aligned}$$
(18.1)

Suppose a measurement device operates with a precision of, say, two bits after the comma, indicated by a two-bit window of measurability; thus initially all information beyond the second bit after the comma is hidden from the experimenter. Consider two initial states \(s=[0.\text {(bit 1)}\text {(bit 2)}]\text {(bit 3)}\cdots \) and \(s'=[0.\text {(bit 1)}\text {(bit 2)}]\text {(bit 3)}'\cdots \), where the square brackets indicate the boundaries of the window of measurability (two bits in this case). Initially, as the representations of both states start with the same two bits after the comma \([0.\text {(bit 1)}\text {(bit 2)}]\), these states appear operationally identical and cannot be discriminated experimentally. Suppose further that, beyond the second bit, the bits \(\text {(bit }i\text {)}\) and \(\text {(bit }i\text {)}'\) at identical positions \(i=3,4,\ldots \) in the two state representations are totally independent and uncorrelated. After just two iterations of the shift map \(\sigma \), s and \(s'\) may result in totally different, diverging observables \(\sigma (\sigma (s))= [0.\text {(bit 3)}\text {(bit 4)}]\text {(bit 5)}\cdots \) and \(\sigma (\sigma (s'))= [0.\text {(bit 3)}'\text {(bit 4)}']\text {(bit 5)}'\cdots \).
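
This scenario can be made concrete in a minimal sketch (the bit strings and function names below are illustrative assumptions, not taken from the text): the shift map drops the leading bit of the expansion, pushing formerly “dormant” bits into the window of measurability.

```python
# Sketch of the shift map sigma acting on binary expansions (the part after
# "0."), together with a two-bit "window of measurability".

def shift(bits: str) -> str:
    """sigma: 0.(b1)(b2)(b3)... -> 0.(b2)(b3)(b4)..., i.e. drop the first bit."""
    return bits[1:]

def window(bits: str, width: int = 2) -> str:
    """What a device with `width` bits of precision can resolve."""
    return bits[:width]

# Two hypothetical initial states agreeing on the first two bits,
# with independent, uncorrelated bits from position 3 onwards:
s  = "0110100110010110"
s2 = "0101101001101001"

assert window(s) == window(s2)   # operationally indistinguishable at the start
t, t2 = shift(shift(s)), shift(shift(s2))
print(window(t), window(t2))     # after two shifts the windows already differ
```

After two applications of the shift, the measurable windows read the formerly hidden third and fourth bits, so the two operationally identical states have diverged.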

Suppose, as has been mentioned earlier, that the initial values are hypothesized as being chosen uniformly from the elements of a continuum. Then almost all of them (that is, a set of measure one) are not representable by any algorithmically compressible number; in short, they are random (Sect. A.2 on p. 171).

Thus, in this scenario of classical deterministic chaos, the randomness resides in the assumption of the continuum; an assumption which might be considered a convenience (for instance, for the sake of applying the infinitesimal calculus). Yet no convincing, physically operational evidence supporting the necessity of the full structure of the continuum can be given. If the continuum assumption is dropped, then what remains is Maxwell’s and Poincaré’s observation of the unpredictability of the behaviour of a deterministic system, due to instabilities and diverging evolutions from almost identical initial states [349].
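
The contrast with a computable initial value can be illustrated by a small sketch (an assumption-laden example of ours, not the text’s): for a rational – and hence computable – initial value, the shift map \(\sigma (x) = 2x \bmod 1\) produces an eventually periodic orbit, so only finitely much “information” is ever revealed.

```python
# Illustration: under the shift map sigma(x) = 2x mod 1, a rational
# (computable) initial value yields an eventually periodic orbit -- the
# evolution is sensitive to the initial value, but not irreducibly chaotic.
from fractions import Fraction

def sigma(x: Fraction) -> Fraction:
    """Double the value and drop the integer part (the shift map)."""
    return (2 * x) % 1

x = Fraction(3, 7)   # binary expansion 0.(011)(011)... -- periodic
seen: dict[Fraction, int] = {}
orbit: list[Fraction] = []
while x not in seen:
    seen[x] = len(orbit)
    orbit.append(x)
    x = sigma(x)
print(orbit)   # the orbit revisits a state after finitely many steps
```

Exact rational arithmetic is used here deliberately: with floating-point numbers the finite machine precision, rather than the initial value, would dominate the long-run behaviour.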

3 Algorithmic Incomputability of Series Solutions of the n-Body Problem

There exist series solutions of the n-body problem [496, 560, 561]. From deterministic chaos theory – that is, from the great sensitivity to changes in the initial values – it should be quite clear that the convergence of these series solutions can be extremely slow [170, 171].

However, one could go one step further and argue that, at least for systems capable of universal computation, there need not, in general, exist any computable criterion of convergence for these series [477]. This can be demonstrated by embedding a model of (ballistic) universal computation into an n-body system [510].