The age-old dream that mathematics represents the blueprint for reality has begun to be fulfilled: the Book of Nature is intelligible to the human mind and deep truths about the workings of the world have been decoded. In other words, the human mind has begun to venture into realms of the abstract world which interrelate with the workings of the physical world—from the quantum foam comprising reality to the awe-inspiring vastness of the cosmic fabric. This main theme is encapsulated in Fig. 2.1, which is reproduced below as Fig. 5.1.

However, this translation of aspects of reality into abstract representations has been very specific. For instance, the reality domains considered so far interestingly omit the very cornerstone of the whole enterprise of knowledge seeking: the human brain. And with it, a whole branch of reality is ignored, relating to self-organization, structure formation, and emergent complexity in general. Curiously, the Book of Nature does not speak much about the everyday structures and systems surrounding us humans. The complexity of life is mostly excluded. Furthermore, the focus of the abstract representation has been on a scheme of mathematization first introduced by Isaac Newton and Gottfried Wilhelm Leibniz (see also Sect. 2.1.1). In a nutshell, this approach can be labeled as equation-driven.

These observations allow the Book of Nature to be classified as follows. Its reality domain, while excluding complex phenomena like life and consciousness, focuses on fundamental aspects of the physical world: for instance, describing how subatomic particles interact via a unification of three of the four fundamental forces (Sect. 4.4) and how the force of gravity, replaced by the dynamics of space-time geometry (Sects. 4.1 and 10.1.2), sculpts the cosmos. The formal representation is equation-based, in other words, it is analytical. This fundamental-analytical dichotomy is the paradigm of the Book of Nature.

Fig. 5.1

A copy of Fig. 2.1 on Page 46, illustrating the human mind’s journey into the abstract world, retrieving knowledge about the physical world

Fig. 5.2

The dichotomies of reality and understanding. (Left) partitioning the world into the two domains labeled fundamental and complex. (Right) the two main modes of formal representation of reality relating to analytical and algorithmic descriptions

Only recently, with the advent of information processing,Footnote 1 could Fig. 5.1 be applied in a whole new context. By extending the validity domain of the formal representation to encompass computational aspects, a novel reality domain becomes intelligible that is much closer to human experience than, for instance, the elusive entities comprising matter and transmitting forces. Now, the focus shifts away from an equation-driven effort and embraces computational and simulational tools. This formal approach can essentially be denoted as algorithmic. Slowly, the everyday complexity surrounding us can be tackled. This reality domain, in contrast to the fundamental one, will be called complex in the following. Miraculously, the human mind has suddenly stumbled upon an extension of the Book of Nature. A new dichotomy emerges, relating to the complex-algorithmic classification, uncovering the next volume of the Book of Nature. In Fig. 5.2 a conceptual demarcation of these concepts is shown.

The Pythagoreans’ dream of the mathematization of nature (Chap. 2) turns out to be only the beginning of a profound knowledge generation process. Building on the tools enabled by the fundamental-analytical dichotomy, new abstract worlds become accessible by the aid of computation and the complex-algorithmic paradigm is uncovered. In summary, the Book of Nature has been greatly expanded and is now comprised of:

 

VOLUME I:

The fundamental reality domain made accessible to the mind via analytical formal representations.

VOLUME II:

Real-world complexity encoded via algorithmic formalizations.

 

In the following, essential features of Volumes I and II of the Book of Nature will be independently summarized and analyzed (Sects. 5.1 and 5.2), before a unifying theme is unveiled (Sect. 5.3). Finally, the entire landscape spanned by the fundamental-complex and analytical-algorithmic classifications is examined (Sect. 5.4). Elements are taken or adapted from Glattfelder et al. (2010) and Appendix A in Glattfelder (2013). Note that the contents of Volume II, relating to complex systems, are presented in detail in Chaps. 6 and 7.

1 Volume I: Analytical Tools and Physical Science

The tremendous success of the first volume of the Book of Nature is summarized in the next section, and some cornerstones of its analytical powers are highlighted. Then its limitations are exposed.

1.1 The Success

Staying faithful to the credo “Shut up and calculate!” (Sect. 2.2.1) has allowed a lot of ground to be covered. By not being consumed by philosophical questions relating to the nature of the abstract world, the human mind’s capacity to host or access it, and the correspondence between the physical and the abstract (the topics addressed in Fig. 2.2), progress can be made. Although, as mentioned, the reality domain is restricted to exclude complex systems, it still covers most of physical science. In effect, laws of nature can be understood as regularities and structures in a highly complicated universe. They critically depend on only a small set of conditions and are independent of many other conditions which could also possibly have an effect. Science can be understood as the quest to capture fundamental processes of nature within formal mathematical representations, i.e., in an analytical framework. To understand more about the nature of the physical system under investigation, experiments are performed, yielding new insights. Historically, Robert Boyle was instrumental in establishing experiments as the cornerstone of physical sciences around 1660 (insights later published in Boyle 1682). Some decades earlier, the philosopher Francis Bacon had introduced modifications to Aristotle’s nearly two-thousand-year-old ideas, establishing what came to be known as the scientific method, in which inductive reasoning plays an important role (Bacon 2000). This paved the way for a modern understanding of scientific inquiry. From this initial thrust our modern knowledge of the world emerged, laying the fertile groundwork on which technology would flourish. All our current technological advances, and the increasing speed at which progress is made, trace back to this initial spark.

A powerful example within the fundamental-analytical dichotomy, highlighting the success of the interplay between the abstract and physical worlds, is the notion of symmetry. This simple idea found its formal expression in the concept of invariance (Chap. 3). This is a prime example illustrating the translation process described in Fig. 5.1: a tangible idea is encoded as a mathematical abstraction. Digging deeper in the abstract world further unearthed group theory and its ties to geometry (Sects. 3.1.1 and 4.1). Mathematical invariance was then seen to flow into various themes: for instance, universal conservation laws (Sect. 3.1), the causal relation of space and time (Sect. 3.2.1), elementary particles being categorized by the groups describing space-time symmetries (Sect. 3.2.2), and the unification of the non-gravitational forces (Chap. 4). Weaving a tapestry out of these threads made from symmetry necessarily integrates a wide array of topics seen in physics. From

  • classical mechanics (Sects. 2.1.1 and 3.1.1) to quantum mechanics (Sects. 4.3.4 and 10.3.2);

  • special relativity (Sect. 3.2.1) to general relativity (Sects. 4.1 and 10.1.2);

  • quantum field theory (Sects. 3.1.4, 3.2.2.1, 4.2, and 10.1.1) to the standard model of particle physics (Sects. 4.2, 4.3, and 4.4);

  • unified field theories (Sect. 4.3.3) to higher dimensional unification schemes (Sect. 4.3.1).

And, last but not least, electromagnetism (Sect. 2.1.2 and Eq. (4.18)).

1.2 The Paradigms of Fundamental Processes

From the fundamental and universal importance of symmetry, three paradigms applicable to physics can be derived:

Mathematical models of the physical world are either:

\({\mathbf {\mathsf{{P}}}}_1^f\): independent of the choice of representation in a coordinate system;

\({\mathbf {\mathsf{{P}}}}_2^f\): unchanged by symmetry transformations;

\({\mathbf {\mathsf{{P}}}}_3^f\): constrained to transform according to a symmetry group.

To illustrate \(\textsf {P}_1^f\), imagine an arrow located in space. It has a length and an orientation. In the mathematical world, this can be represented by a vector, labeled a. By choosing a coordinate system, the abstract entity a can be given a concrete numerical form \(a=(a_1, a_2, a_3)\). For each axis direction \(x_1, x_2, x_3\), the component \(a_i\) describes the number of increments the vector covers when projected onto that axis. For instance \(a=(3,5,1)\). The problem is, however, that depending on the choice of the coordinate system, which is arbitrary, the same vector is described very differently: \(a=(3,5,1)=(0,23.34,-17)\). The paradigm above states that the physical content of the mathematical model should be independent of how one chooses to represent it.
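A minimal numerical sketch of \(\textsf {P}_1^f\) may help here; the rotation angle, the use of the numpy library, and the printed quantities are illustrative choices, not taken from the text. The same arrow is expressed in two coordinate frames related by a rotation: the components change, but the representation-independent length does not.

```python
import numpy as np

# The same physical arrow, expressed in two coordinate systems related by a
# rotation: different components, identical (representation-independent) length.

a = np.array([3.0, 5.0, 1.0])  # components in the first (arbitrary) frame

theta = np.radians(40.0)       # rotate the frame by 40 degrees about the z-axis
R = np.array([
    [np.cos(theta), -np.sin(theta), 0.0],
    [np.sin(theta),  np.cos(theta), 0.0],
    [0.0,            0.0,           1.0],
])

a_rotated = R @ a              # the same arrow, described in the rotated frame

print("components, frame 1:", a)
print("components, frame 2:", a_rotated)
print("length, frame 1:", np.linalg.norm(a))
print("length, frame 2:", np.linalg.norm(a_rotated))  # identical to frame 1
```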

The first two requirements \(\textsf {P}_1^f\) and \(\textsf {P}_2^f\), seemingly innocuous, straightforward, and commonsensical, are conceptualized as the powerful ideas of general covariance and invariance. The notion that vectors and tensors should be independent of the choice of coordinates used to express and compute these quantities leads to one of the two main ingredients in the theory of general relativity, describing gravity (Sects. 4.1 and 10.1.2). Moreover, expecting the outcome of an experiment to be independent of the exact time and location at which the experiment was conducted results, via Noether’s theorem, in the conservation of energy and momentum in the universe (Sect. 3.1.4). Alternatively, requiring a theory to be invariant under gauge transformations (Sect. 4.2) yields a unifying theme on the basis of which the standard model of particle physics is constructed (Sect. 4.4). In Fig. 5.3 a schematic overview is given of how \(\textsf {P}_1^f\) leads to the theory of general relativity and \(\textsf {P}_2^f\) to the standard model. While the former utilizes the external symmetry of space-time, the latter relies on internal gauge symmetry. It is indeed amazing how the adoption of such simple paradigms leads to such effective and complete physical theories. \(\textsf {P}_3^f\) is more subtle, as it describes a link between the quantum world and the structure of the symmetry groups of space-time: the mathematical representations of the groups encode the transformation properties of quantum fields and particle states (Sects. 3.2.2.1 and 3.2.2.2). This gives rise to a mathematical lever with which the unseen quantum entities can be manipulated.

Fig. 5.3

Adapted from Glattfelder (2013)

Conceptual overview of the structure of the two main physical theories describing all known forces in the universe: the standard model and general relativity. Focusing on specific reality domains (the quantum world or the arena of space-time) and guided by symmetry principles, it is possible to translate the physical essence into an abstract structure. Once encoded, this information is subjected to the dictum of mathematical theories, yielding the physical theories. Finally, decoding these formal representations allows the effects of the fundamental physical forces to be calculated.

1.3 The Limitations

In the last chapters, it was unveiled how mathematics underlies physics: from classical mechanics, electromagnetism, and the non-gravitational forces unified in the standard model of particle physics to the gravitational force. In spite of this tremendous success there is still one omission, relating to the many-body problem. This is a large category of physical problems pertaining to the properties of microscopic systems that are comprised of a large number of interacting entities.

Condensed matter physics attempts to explain the macroscopic behavior of matter based on microscopic properties and quantum effects (Ashcroft and Mermin 1976). It is one of physics’ first ventures into many-body problems in quantum theory. Although the employed notions of symmetry do not act at such a fundamental level as in the above-mentioned theories, they are a cornerstone of the theory. Namely, the complexity of the problems can be reduced using symmetry in order for analytical solutions to be found. Technically, the symmetry groups enter as boundary conditions of the Schrödinger equation. This leads to the theoretical framework describing, for example, semiconductors. In the superconducting phase (Schilling et al. 1993), the wave function becomes symmetric.

Another macroscopic characteristic of matter based on microscopic properties is exemplified by quasicrystals (Mackay 1982). A quasicrystalline pattern can fill an entire space, but lacks translational symmetry. In short, quasicrystals are structures which are ordered but not periodic. They have fractal properties (Sect. 5.2.1). Until Dan Shechtman received the 2011 Nobel Prize for the discovery of quasicrystals, the topic was controversial. The eminent chemist and two-time Nobel laureateFootnote 2 Linus Pauling exclaimed that “there are no quasi-crystals, just quasi-scientists” (quoted in The Guardian 2013). Shechtman faced disdain from his peers and his research was rejected as erroneous. Lesley Yellowlees, president of the Royal Society of Chemistry, summarized the ordeal (quoted in The Guardian 2013):

Dan Shechtman’s Nobel prize celebrated not only a fascinating and beautiful discovery, but also dogged determination against the closed-minded ridicule of his peers, including leading scientists of the day. His prize didn’t just reward a difficult but worthy career in science; it put the huge importance and value of funding basic scientific research in the spotlight.

Overall, many-body problems in physics represent a vast category of challenges which are notoriously hard to tackle. Determining the precise physical behavior of systems composed of many entities is, in general, hard, as the number of possible combinations of states increases exponentially with the number of entities to be considered. This intricacy drains the analytical formal representation’s power, as calculations become intractable. Instead, the understanding of many-body problems often relies on approximations specific to the problem being analyzed, resulting in computationally intensive calculations. The algorithmic approach to decoding such complexity, defining a new dichotomy, emerges.

As an example, in classical mechanics the n-body problem describes the challenge of predicting the motions of n celestial bodies interacting with each other via Newton’s law of universal gravitation. Already the three-body problem—for instance, describing a Sun-Earth-Moon system given the bodies’ initial positions, masses, and velocities—yields equations with no closed-form solutions. As a result, numerical methods or computer simulations need to be invoked in order to solve such seemingly simple problems (Valtonen and Karttunen 2006).
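The following sketch illustrates this numerical route; the masses, initial conditions, time step, and the simple leapfrog scheme are illustrative assumptions, not a reconstruction of any particular system.

```python
import numpy as np

# A toy three-body integration: since no closed-form solution exists, the
# equations of motion are stepped forward in time (units are arbitrary).

G = 1.0                                   # gravitational constant (toy units)
mass = np.array([1.0, 0.001, 0.0001])     # e.g. a star, a planet, a moon
pos = np.array([[0.0, 0.0],               # initial positions (2D for simplicity)
                [1.0, 0.0],
                [1.05, 0.0]])
vel = np.array([[0.0, 0.0],               # initial velocities
                [0.0, 1.0],
                [0.0, 1.15]])

def accelerations(pos):
    """Newtonian gravitational acceleration on each body from all the others."""
    acc = np.zeros_like(pos)
    for i in range(len(mass)):
        for j in range(len(mass)):
            if i != j:
                r = pos[j] - pos[i]
                acc[i] += G * mass[j] * r / np.linalg.norm(r) ** 3
    return acc

dt = 0.001
for _ in range(10_000):                   # leapfrog (kick-drift-kick) steps
    vel += 0.5 * dt * accelerations(pos)
    pos += dt * vel
    vel += 0.5 * dt * accelerations(pos)

print(pos)                                # approximate positions after the run
```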

A further challenge related to the understanding of systems of many interacting agents, rendering equations moot but emphasizing the power of algorithmic tools, is the discovery of chaos theory (Mandelbrot 1982; Gleick 1987). For instance, the behavior of water molecules in a dripping faucet becomes unpredictable when the system enters a chaotic state (Shaw 1984). One critical aspect of chaotic systems in nature is their dependence on initial conditions. The Butterfly Effect describes this sensitivity metaphorically: the flapping of the wings of a butterfly creates tiny perturbations in the atmosphere which set the stage for the occurrence of a tornado weeks later. More precisely, the exact values of the initial conditions determine how the system evolves in time. However, as these initial conditions can never be set with infinite accuracy in the real world, the system’s evolution shows a path-dependence. In other words, two dynamical systems with nearly identical initial conditions can end up in two vastly different end states. More on chaos theory is presented in Sect. 5.2.1.

The term Butterfly Effect was coined by Edward Lorenz, a mathematician, meteorologist, and a pioneer of chaos theory. Meteorology is a prime example of how inquiries into the workings of a complex system are stifled by chaotic behavior. In theory, if there existed an infinitely fine grid of atmospheric measurement stations scattered all over the world, weather predictions would be accurate. Facing such an impossibility, scientists have devised simulational methods able to tackle the uncertainty. As an example, the Monte Carlo method utilizes computational simulations which are repeated many times over with random sampling to obtain numerical results. The key insight is to use the statistical properties of seeming randomness to solve problems that might be deterministic in principle. The algorithmic Monte Carlo methods are mostly employed, and often useful, when it is difficult or even impossible to use other approaches, like analytical tools.
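As a hedged, minimal illustration of the Monte Carlo idea (the target quantity and the sample size are arbitrary choices), the following sketch estimates the deterministic number \(\pi \) from random sampling alone.

```python
import random

# Estimate pi by sampling random points in the unit square and counting the
# fraction that falls inside the quarter circle of radius one.

def estimate_pi(samples: int = 1_000_000) -> float:
    inside = 0
    for _ in range(samples):
        x, y = random.random(), random.random()   # a random point in the unit square
        if x * x + y * y <= 1.0:                  # inside the quarter circle?
            inside += 1
    return 4.0 * inside / samples                 # ratio of areas gives pi / 4

print(estimate_pi())  # approaches 3.1415... as the number of samples grows
```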

Another key limiting factor for the equation-based understanding of the workings of the world comes in the guise of non-linearity, a cornerstone of chaos theory. For linear systems the change of the output is proportional to the change of the input. Expressed mathematically

$$\begin{aligned} f(x) \sim x. \end{aligned}$$
(5.1)

Already the square of a variable is non-linear, i.e., \(f(x) = x^2\). Here we see an emerging conflict between the fundamental-analytical and the complex-algorithmic dichotomies. Linear algebra is the branch of mathematics describing vector spaces and, crucially, linear mappings between such spaces. The linear mappings are expressed as matrices. This mathematical language, relying on linear systems, has been extremely fruitful in describing quantum mechanics. However, most physical systems in nature are inherently non-linear (Mandelbrot 1982; Strogatz 1994). Moreover, this non-linear (and chaotic) behavior is, again, analytically hard to tackle. To conclude, a final limitation in physics comes from dissipative effects, like friction or turbulence, where the system loses energy (or matter) over time and exhibits non-linear dynamics. Hence calculations in physics often rely on idealizations. For instance, Newton’s classical mechanics can easily describe a game of pool, i.e., collisions between billiard balls, if friction is ignored, the balls are assumed to be perfectly spherical, and the collisions are taken to be elastic (i.e., the kinetic energy is conserved).

In essence, while physics has had amazing success in describing most of the observable universe in the last 300 years, it appears as though its powerful mathematical formalism is ill-suited to address the real-world complexity surrounding and including us. Namely, situations where many agents are interacting with each other, ranging from particles, chemical compounds, cells, and biological organisms to celestial bodies, and systems thereof. In order to approach real-life complex phenomena, one needs to adopt a more systems-oriented focus.

2 Volume II: Algorithmic Tools and Complex Systems

For centuries, the fundamental-analytical dichotomy of understanding the universe has prevailed. A vast array of knowledge has been accumulated. However, only recently has our focus shifted to the intricate realities of systems of interacting agents surrounding us, contained within us, and comprising us. A new dichotomy relating to the complex-algorithmic classification emerged. Equipped with new computational and simulational tools, we started to probe a new reality domain encompassing complex systems. A true paradigm shift occurred in our understanding, away from the reductionist philosophy prevailing in science towards a holistic, networked, and systems-based outlook.

Complex systems theory is the topic of Chap. 6. Here, in a nutshell, we introduce complex systems and networks, describe the paradigms of the complex-algorithmic dichotomy, and outline the success of this endeavor.

2.1 The Paradigms of Complex Systems

A complex system is usually understood as being comprised of many interacting or interconnected parts (or agents). A characteristic feature of such systems is that the whole often exhibits properties not obvious from the properties of the individual parts. This is called emergence. In other words, a key issue is how the macro behavior emerges from the interactions of the system’s elements at the micro level. Moreover, complex systems also exhibit a high level of adaptability and self-organization. The domains complex systems originate from are mostly socio-economic, biological, or physico-chemical (Chaps. 6 and 7).

The study of complex systems appears complicated, as it implies an approach very different from the reductionistic thinking of established science. Now, breaking down, identifying, and analyzing the behavior of a single constituent of a system does not reveal anything about the dynamics of the system as a whole. A quote from Anderson (1972), an influential article succinctly titled “More is Different”, illustrates this fact:

At each stage [of complexity] entirely new laws, concepts, and generalizations are necessary [. . . ]. Psychology is not applied biology, nor is biology applied chemistry.

In the same vein, it is far from clear how to get from a description of quarks and leptons, via DNA, to an understanding of the human brain and consciousness. It appears as though these hierarchical levels of order defeat any reductionistic attempts of understanding by their very design.

As discussed, complex systems are usually very reluctant to be cast into closed-form analytical expressions. This means that it is generally hard to derive mathematical quantities describing the properties and dynamics of the system under study. If the paradigms of fundamental processes described on Page 143 fail, what is needed to replace them? Indeed, can we even hope to find such succinct guiding principles a second time? Remarkably and, again, unexpectedly, the answer is yes. The paradigms of complex systems are, once again, very concise:

\({\mathbf {\mathsf{{P}}}}_1^c\): Every complex system is reduced to a set of objects and a set of functions between the objects.

\({\mathbf {\mathsf{{P}}}}_2^c\): Macroscopic complexity is the result of simple rules of interaction at the micro level.

\(\textsf {P}_1^c\) is reminiscent of the natural problem-solving philosophy of object-oriented programming, where the objects are implementations of classes (code templates) interacting via functions (public methods). A programming problem is analyzed in terms of objects and the nature of communication between them. When a program is executed, objects interact with each other by sending messages. The whole system obeys specific rules (encapsulation, inheritance, polymorphism, etc.). See, for instance, Gamma et al. (1995).
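A minimal Python sketch of this object-oriented reading of \(\textsf {P}_1^c\) follows; the class and method names are hypothetical and serve only to show objects interacting exclusively through messages (public method calls).

```python
# Two objects, a shared interface, and interaction via messages only.

class Agent:
    def __init__(self, name):
        self.name = name
        self.inbox = []                    # messages received from other objects

    def send(self, other, message):
        """Objects interact solely by sending messages (calling public methods)."""
        other.receive(f"{self.name}: {message}")

    def receive(self, message):
        self.inbox.append(message)


alice, bob = Agent("alice"), Agent("bob")
alice.send(bob, "hello")                   # one object messaging another
print(bob.inbox)                           # ['alice: hello']
```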

Similarly, in the mathematical field of category theory a category is defined as the most basic structure: a set of objects and a set of morphisms (maps between the objects) (Hillman 2001). Special types of mappings, called functors, map categories into each other. Category theory was understood as the “unification of mathematics” in the 1940s. A natural incarnation of a category is given by a graph or a network, where the nodes represent the objects and the links describe their relationship or interactions. Now the structure of the network (i.e., the topology) determines the function of the network. This new science of networks, emerging from the study of complex systems and building on the formal representation of \(\textsf {P}_1^c\) as a graph, is presented in Sect. 5.2.3.

Paradigm \(\textsf {P}_2^c\), the topic of the following section, describes how order emerges out of chaos, driven by a set of simple rules describing the interaction of the parts making up a complex system. Together, these two paradigms represent a shift away from mathematical models of reality towards algorithmic models, computing and simulating reality. In other words, a change in modus operandi from the fundamental-analytical to the complex-algorithmic dichotomy has occurred. Now, the analytical description of complex systems can be abandoned in favor of algorithms describing the interaction of the objects, i.e., agents, in a system, according to specified rules of local interaction. This is the fundamental distinguishing characteristic outlined on the right-hand side of Fig. 5.2. Instead of encoding certain aspects of reality into mathematical equations, now computers are programmed with step-by-step recipes which are conjured up to tackle problems. Only by letting the algorithm run is new knowledge generated, and the design of algorithms and the existence of algorithmic solutions become relevant.

This prominent approach is called agent-based modeling. One key realization is that the structure and complexity of each agent can be ignored when one focuses on their interactional structure. Hence the neurons in a brain, the chemicals interacting in metabolic systems, the ants foraging, the animals in swarms, the humans in a market, etc., can all be understood as being comprised of featureless interacting agents and modeled within this paradigm. By encapsulating the algorithms into a system of agents, complex behavior can be simulated. Some successful agent-based models are Axelrod (1997), Lux and Marchesi (2000), Schweitzer (2003), Andersen and Sornette (2005), Miller et al. (2008), Šalamon (2011), Helbing (2012).
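To make the agent-based idea concrete, here is a deliberately minimal sketch, not one of the cited models: featureless agents on a ring hold a binary opinion and repeatedly adopt the local majority, and ordered domains of consensus emerge at the macro level. All parameters are illustrative choices.

```python
import random

# Featureless agents on a ring, updated one at a time with a local majority rule.

N, STEPS = 100, 2000
opinions = [random.choice([0, 1]) for _ in range(N)]

for _ in range(STEPS):
    i = random.randrange(N)                               # pick a random agent
    neighborhood = [opinions[(i - 1) % N], opinions[i], opinions[(i + 1) % N]]
    opinions[i] = 1 if sum(neighborhood) >= 2 else 0      # adopt the local majority

print("".join(map(str, opinions)))  # contiguous blocks of 0s and 1s have emerged
```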

2.2 The Science of Simple Rules

Paradigm \(\textsf {P}_2^c\) of complex systems, stating that complexity emerges from simplicity, is unexpected and very surprising. It is perhaps as puzzling as Eugene Wigner’s comments on the “unreasonable effectiveness of mathematics in the natural sciences” (Sect. 9.2.1). Prompted by the tremendous success of Volume I of the Book of Nature in decoding the workings of the universe by utilizing equations, scientists expressed their bafflement. Albert Einstein, for instance (quoted in Isaacson 2007, p. 462):

The eternal mystery of the world is its comprehensibility. The fact that it is comprehensible is a miracle.

Now compounding the enigma is the discovery of Volume II. What appeared as intractable complexity from afar is uncovered to be the result of simple rules of interaction closeup. First, the universe speaks a mathematical language the human mind can discover or create. Then, what appeared as hopeless complicatedness is in fact derived from pure simplicity.

A New Kind of Science

Although the simplicity of complexity (Chap. 6) has attracted less philosophical interest than the “unreasonable effectiveness of mathematics”, some scientists have expressed their total bewilderment at the realization. For instance, Stephen Wolfram, a physicist, computer scientist, and entrepreneur. Wolfram started his academic career as a child prodigy, publishing his first peer-reviewed and single-author paper in particle physics at the age of sixteen (Wolfram 1975). Three years later, a publication appeared which is still relevant and referenced today, forty years later (Fox and Wolfram 1978). In 1981, he was awarded a fellowship of the MacArthur Fellows Program,Footnote 3 colloquially known as the “Genius Grant”, a prize awarded annually to researchers who have shown “extraordinary originality and dedication in their creative pursuits and a marked capacity for self-direction”. In parallel, Wolfram led the development of the computer algebra system called SMP (Symbolic Manipulation Program) in the Caltech physics department during 1979–1981. A dispute with the administration over the intellectual property rights regarding SMP eventually caused him to hand in his resignation. Continuing on this computational journey, Wolfram began the development of Mathematica in 1986. This was a mathematical symbolic computation program and would become an invaluable tool used in many scientific, engineering, mathematical, and computing fields. In 1987, the private company Wolfram Research Inc. was founded, releasing Mathematica Version 1.0 in 1988. By 1990, Wolfram Research reached $10 million in annual revenue.Footnote 4 Today, Mathematica (Version 11.2.0) remains highly influential and most of its code is written in the Wolfram Language. This is a general multi-paradigm programming language developed by Wolfram Research.

However, Wolfram’s biggest fascination lies with complexity. It started with his work on cellular automata in 1981. These are discrete models studied in computability theory, mathematics, physics, complexity science, theoretical biology, and microstructure modeling. A cellular automaton consists of a regular grid of cells, each in one of a finite number of states. A famous cellular automaton was devised by the mathematician John Conway in 1970, called the Game of Life (Gardner 1970). It is an infinite two-dimensional orthogonal grid of square cells, each of which can be in one of two states (dead or alive). The game evolves according to four simple rules and the whole dynamics are solely determined by the choice of the initial state. The Game of Life attracted a lot of attention due to the complex patterns that could emerge from the interaction of the game’s simple rules. In essence, it was an early computational implementation demonstrating emergence and self-organization. In 1987, Wolfram founded the journal Complex Systems,Footnote 5 “devoted to the science, mathematics and engineering of systems with simple components but complex overall behavior”. This fascination with complexity had life-changing consequences for him.
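To make the simple rules of the Game of Life concrete, here is a minimal sketch; the grid size, initial density, and text output are illustrative choices, while the update rules are the standard ones (survival with two or three live neighbors, birth with exactly three).

```python
import random

# Conway's Game of Life on a small periodic grid.

SIZE, STEPS = 20, 50
grid = [[random.random() < 0.3 for _ in range(SIZE)] for _ in range(SIZE)]

def live_neighbors(g, r, c):
    """Count the live cells among the eight neighbors (with wrap-around)."""
    return sum(
        g[(r + dr) % SIZE][(c + dc) % SIZE]
        for dr in (-1, 0, 1) for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )

def step(g):
    new = [[False] * SIZE for _ in range(SIZE)]
    for r in range(SIZE):
        for c in range(SIZE):
            n = live_neighbors(g, r, c)
            # Live cells survive with 2 or 3 neighbors; dead cells are born with 3.
            new[r][c] = n in (2, 3) if g[r][c] else n == 3
    return new

for _ in range(STEPS):
    grid = step(grid)

print("\n".join("".join("#" if cell else "." for cell in row) for row in grid))
```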

In 2002, Wolfram wrote (Wolfram 2002, p. ix):

Just over twenty years ago I made what at first seemed like a small discoveryFootnote 6: a computer experiment of mine showed something I did not expect. But the more I investigated, the more I realized that what I had seen was the beginning of a crack in the very foundations of existing science, and a first clue towards a whole new kind of science.

Developing this new science would become his passion. In 1991, Wolfram set out to realize this vision, resulting in the 2002 book, A New Kind of Science, a one-thousand-two-hundred-page tour de force (Wolfram 2002). During the time of writing, Wolfram became nocturnal and reclusive, totally devoted to his project. Indeed, when he realized that there was no publisher who could print the book with the quality he envisioned for the diagrams, he simply founded Wolfram Media Inc. to do the job. See Levy (2002) for more anecdotes. The book begins by setting the stage with the demarcation described in Fig. 5.2 (Wolfram 2002, p. 1):

Three centuries ago science was transformed by the dramatic new idea that rules based on mathematical equations could be used to describe the natural world. My purpose in this book is to initiate another such transformation, and to introduce a new kind of science that is based on the much more general types of rules that can be embodied in simple computer programs.

In other words, Wolfram describes the two opposing formal representations we humans can access: analytical vs. algorithmic. In essence “the big idea is that the algorithm is mightier than the equation” (Levy 2002). Wolfram claims to have re-expressed all of science utilizing the formal language of cellular automata, in essence, simple programs. Indeed, looking at the table of contents reveals the great scope in the topics that are covered:

1. The Foundations for a New Kind of Science
2. The Crucial Experiment
3. The World of Simple Programs
4. Systems Based on Numbers
5. Two Dimensions and Beyond
6. Starting from Randomness
7. Mechanisms in Programs and Nature
8. Implications for Everyday Systems
9. Fundamental Physics
10. Processes of Perception and Analysis
11. The Notion of Computation
12. The Principle of Computational Equivalence

From mathematics and its foundation, complex systems found in nature, physics and its foundation, to the nature of computation, a vast array of subject matter is covered diligently in great detail. Wolfram acknowledges the tremendous success of the mathematical approach to science, but stresses that many central issues remain unresolved, where the simple-programs paradigm could possibly shed new light on the challenges (Wolfram 2002, p. 21):

The typical issue was that there was some core problem that traditional methods or intuition had never successfully been able to address—and which the field had somehow grown to avoid. Yet over and over again, I was excited to find that with my new kind of science I could suddenly begin to make great progress—even on problems that in some cases had remained unanswered for centuries.

A New Kind of Science was received with skepticism and ignited controversy. However, regardless of how one views Wolfram and his claims, one epiphany remains. Namely, the counterintuitive realization that simplicity unlocks complexity (Wolfram 2002, p. 2):

Indeed, even some of the very simplest programs that I looked at had behavior that was as complex as anything I had ever seen.

It took me more than a decade to come to terms with this result, and to realize just how fundamental and far-reaching its consequences are.

Furthermore (Wolfram 2002, p. 19):

And I realized that I had seen a sign of a quite remarkable and unexpected phenomenon: that even from very simple programs behavior of great complexity could emerge.

Until this phenomenon was reliably demonstrated and studied by Wolfram, people expected simple rules of interaction to lead to mostly simple outcomes. Discovering simplicity to be the spawning seed of complex behavior was truly unexpected. But perhaps the boldest claim in the book relates to the computational nature of the universe. Wolfram invokes a radical new level of reality, where beneath the laws of physics there lies a computational core. This theme will reappear in Chap. 13.

Quadratic and Logistic Maps

Another archetypal theme describing how simplicity encodes complexity comes from chaos theory. This time the notion is nested deep within mathematics itself and comes in the guise of fractal sets. Fractals are very particular abstract mathematical objects. The term was coined by the mathematician Benoît Mandelbrot (Mandelbrot 1975). Fractals came to prominence in the 1980s with the advent of chaos theory, as the graphs of most chaotic processes display fractal properties (Mandelbrot 1982)—that is, foremost, self-similarity. This is the property of an object whose parts are, exactly or approximately, similar to the whole. For instance, a coastline is self-similar: parts of it show the same statistical properties at many scales (Mandelbrot 1967). Such a characteristic is also called scale invariance, a topic discussed in Sect. 6.4 in the context of scaling laws. Indeed, many naturally occurring objects display fractal properties. So much so, that Mandelbrot chose the title of his seminal and hugely influential work on fractals and chaos theory to read: The Fractal Geometry of Nature (Mandelbrot 1982).

The most prototypical fractal, also entering pop culture, is the Mandelbrot set (Douady et al. 1984). Due to the rise of computational power, graphical images started to become more detailed around the 1980s, slowly unveiling the set’s aesthetic appeal. But most stunning was the self-similar property of the Mandelbrot set, where the original iconic shape would reemerge over and over again, at all resolutions accessible within the current computational limits. See Fig. 5.4 for an illustration. The Mandelbrot set is defined as the set of values c for which the iterations of the quadratic map

$$\begin{aligned} z_{n+1} = z_n^2 + c, \end{aligned}$$
(5.2)

remain bounded, where \(z_{0} = 0\). In other words, a chosen c belongs to the set if the series \(z_{1} = c\), \(z_{2} = z_1^2 + c=c^2+c\), ... does not go to infinity for \(n \rightarrow \infty \). As c is a complex number, i.e., \(c \in \mathbb {C}\), it can be represented as

$$\begin{aligned} c = a + i\cdot b, \end{aligned}$$
(5.3)

with \(a, b \in \mathbb {R}\) and \(i := \sqrt{-1}\). Hence one can display c graphically as a point in the (complex) plane with the coordinates \(c=(a,b)\), explaining the two-dimensional nature of these fractal images. Variants of the Mandelbrot set are easily conceived of by altering the nature of the map. For instance

$$\begin{aligned} \hat{z}_{n+1} = \hat{z}_n^4 + c, \end{aligned}$$
(5.4)

yields the fractal set seen in the middle and right-hand panels of Fig. 5.4. Generically

$$\begin{aligned} \tilde{z}_{n+1} = f(\tilde{z}_n) + g(c), \end{aligned}$$
(5.5)

with two defining functions f and g. These iterative equations are also known as difference equations, a hallmark of discrete mathematics, discussed in Sect. 5.3.
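The following sketch shows how images like those in Fig. 5.4 are computed from Eq. (5.2); the plotting region, resolution, and iteration cap are illustrative choices, and the crude text output stands in for the color rendering of the figure.

```python
# Iterate z -> z^2 + c for a grid of values c and record how quickly the orbit
# escapes; points that never escape (within the cap) belong to the Mandelbrot set.

WIDTH, HEIGHT, MAX_ITER = 60, 30, 60

def escape_time(c: complex) -> int:
    """Iterations until |z| exceeds 2 (a proven escape bound), capped at MAX_ITER."""
    z = 0.0 + 0.0j
    for n in range(MAX_ITER):
        z = z * z + c
        if abs(z) > 2.0:
            return n
    return MAX_ITER                                # treated as belonging to the set

for row in range(HEIGHT):
    b = 1.2 - 2.4 * row / (HEIGHT - 1)             # imaginary part of c
    line = ""
    for col in range(WIDTH):
        a = -2.1 + 2.7 * col / (WIDTH - 1)         # real part of c
        line += "#" if escape_time(complex(a, b)) == MAX_ITER else "."
    print(line)
```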

Fig. 5.4

The evolution of fractals. (Left) the first glimpse of the Mandelbrot set defined in Eq. (5.2), reproduced from Gleick (1987, p. 225). (Middle) a fractal variant defined by Eq. (5.4). (Right) zooming into the middle fractal, revealing its self-similar nature. The colors indicate how quickly c diverges (the lighter the color, the slower the divergence), while black shows the converging points defining the set. Note that these are original images produced by myself in the mid-1990s, explaining the pixelation, which somewhat skews the self-similar patterns

Another simple equation describing a chaotic system is known as the logistic map

$$\begin{aligned} x_{n+1} = r x_n (1 - x_n), \end{aligned}$$
(5.6)

where the (n+1)th term is determined by the nth term and a constant r, with the whole trajectory fixed by the initial value \(x_0\). It has the same structure as Eq. (5.2) defining the Mandelbrot set. The logistic map was introduced in a seminal paper by the biologist Robert May (May 1976). It is another archetypal example of how complex, chaotic behavior can arise from very simple non-linear dynamical equations. The equation describes the evolution of populations due to reproduction and starvation and is famous for its bifurcation diagram (Feigenbaum 1978), showing how the system descends into chaos.
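A minimal sketch of the route to chaos encoded in Eq. (5.6) follows; the chosen values of r, the transient length, and the rounding are illustrative.

```python
import random

# For several values of r, iterate the logistic map past a transient and record
# where the orbit settles: a fixed point, a periodic cycle, or a chaotic band.

for r in [2.8, 3.2, 3.5, 3.9]:
    x = random.random()
    for _ in range(1000):              # discard the transient
        x = r * x * (1 - x)
    attractor = set()
    for _ in range(200):               # sample the long-run behavior
        x = r * x * (1 - x)
        attractor.add(round(x, 4))
    label = f"cycle of {len(attractor)} value(s)" if len(attractor) <= 8 else "chaotic band"
    print(f"r = {r}: {label}")
# Typical output: one value at r = 2.8, two at 3.2, four at 3.5, and a chaotic
# band of many values at 3.9, mirroring the bifurcation diagram.
```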

Before Mandelbrot and othersFootnote 7 first saw the intricate shape of the fractal set named after him in the late-1970s, no one could have imagined that such a simple equation, \(z_{n+1} = z_n^2 + c\), had the power to encode such a wealth of structure. In essence, the simple rule of the iterative map contains an infinitude of complexity. Anywhere on the boundary of the Mandelbrot set, one can zoom in, theoretically indefinitely, and keep on rediscovering new delicate structures and patterns of stunning complexity. This is another prime example of \(\textsf {P}_2^c\): A seductively simple procedure results in one of the most complex objects in mathematics.

2.3 The New Science of Networks

While the second paradigm of complex systems uncovers that simple rules drive complex behavior, Paradigm \(\textsf {P}_1^c\) states that complex systems should be broken down into individual agents and their interactions. As a result, networks are an ideal abstraction for these systems. The agents are represented by featureless nodes and the interactions are given by the links connecting the nodes. This thinking gave rise to a new interaction-based worldview and the crucial realization that networks are able to mirror the organizational properties of real-world complex systems. A new science of networks was ignited (Dorogovtsev and Mendes 2003, p. 1):

In the late 1990s the study of the evolution and structure of networks became a new field in physics.

The formal mathematical structures describing networks are graphs. The nearly three-hundred-year history of graph theory is briefly discussed in Sect. 5.3.2, where the notion of a random graph takes center stage around 1960. This fruitful marriage of probability theory and graph theory resulted in much successful scholarly work. So what is there to add in terms of a new science of networks? Indeed (quoted in Newman et al. 2006, p. 4):

If graph theory is such a powerful and general language and if so much beautiful and elegant work has already been done, what room is there for a new science of networks?

The authors then offer the following answers (quoted in Newman et al. 2006, p. 4):

We argue that the science of networks that has been taking shape over the last few years is distinguished from preceding work on networks in three important ways: (1) by focusing on the properties of real-world networks, it is concerned with empirical as well as theoretical questions; (2) it frequently takes the view that networks are not static, but evolve in time according to various dynamical rules; and (3) it aims, ultimately at least, to understand networks not just as topological objects, but also as the framework upon which distributed dynamical systems are built.

The first glimpse of this new science of networks came from sociology in the late 1960s. A milestone was the work of Mark Granovetter on the spread of information in social networks (Granovetter 1973). He realized that more novel information flows to individuals through weak rather than strong social ties, coining the term “the strength of weak ties.” Since our close friends move in similar circles to us, the information they have access to overlaps significantly with what we already know. Acquaintances, in contrast, know people we do not know and hence have access to novel information sources. Another topic of interest was the interconnectivity of individuals in social networks. Stated simply, how many other people does each individual in a network know? Stanley Milgram devised an ingenious, albeit simple, experiment in the late 1960s. The unexpected results propelled a novel concept into the public consciousness: the notion of the small world phenomenon, colloquialized as “six degrees of separation” (Milgram 1967; Travers and Milgram 1969). In a nutshell (Newman et al. 2006, p. 16):

Milgram’s experiments started by selecting a target individual and a group of starting individuals. A package was mailed to each of the starters containing a small booklet or “passport” in which participants were asked to record some information about themselves. Then the participants were to try and get their passport to the specified target person by passing it on to someone they knew on a first-name basis who they believed either would know the target, or might know somebody who did. These acquaintances were then asked to do the same, repeating the process until, with luck, the passport reached the designated target. At each step participants were also asked to send a postcard to Travers and Milgram, allowing the researchers to reconstruct the path taken by the passport, should it get lost before it reached the target.

The researchers recruited 296 starting individuals from Omaha, Nebraska and Boston, and targeted a stockbroker living in a small town outside Boston. 64 out of the 296 chains reached the target, with the median number of acquaintances from source to target being 5.2. In other words, a median of six steps along the chain were required. This is a surprisingly short distance and an unexpected result, considering the potential size of the analyzed network. As a modern example, researchers set up an experiment where over 60,000 e-mail users tried to reach one out of 18 target persons in 13 countries by forwarding messages to acquaintances. They also found that the average chain length was roughly six (Dodds et al. 2003). In another experiment, the microblogging service Twitter was analyzed in 2009. At the time it was comprised of 41.7 million user profiles and 1.47 billion social relations, and its average path length was found to be 4.12 (Kwak et al. 2010).

In 1998, Duncan Watts and Steven Strogatz introduced the small-world network model to replicate this small-world property found in more and more real-world networks (Watts and Strogatz 1998). They identified two independent structural features according to which graphs can be classified. The clustering coefficient is a measure of the degree to which nodes in a graph tend to cluster together, derived from the number of triangles present in the network. The second classification measure is the average shortest path length, the key parameter of small-world networks. Applying these quantities to random graphs, constructed according to the prototypical Erdős-Rényi model,Footnote 8 reveals a small average path length (usually varying as the logarithm of the number of nodes) along with a small clustering coefficient. In contrast, small-world networks are characterized by a high clustering coefficient and a small average path length. The algorithm introduced in the Watts-Strogatz model starts from a regular ring lattice, a graph with n nodes each connected to k neighbors, and imposes a probability for the rewiring of links (excluding self-loops). These models also turned out to be receptive to a variety of techniques from statistical physics, attracting “a good deal of attention in the physics community and elsewhere” (Newman et al. 2006, p. 286).
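As a hedged illustration (using the networkx library; the sizes, degrees, and rewiring probability are arbitrary choices), the following sketch contrasts the two measures on an Erdős-Rényi random graph and a Watts-Strogatz small-world graph.

```python
import networkx as nx

# Compare clustering and average path length for a random and a small-world
# graph of comparable size and average degree.

n, k, p_rewire = 1000, 10, 0.1

ws = nx.connected_watts_strogatz_graph(n, k, p_rewire)       # rewired ring lattice
er = nx.erdos_renyi_graph(n, k / (n - 1))                     # same average degree
er = er.subgraph(max(nx.connected_components(er), key=len))   # largest component

for name, g in [("random", er), ("small-world", ws)]:
    print(
        name,
        "clustering:", round(nx.average_clustering(g), 3),
        "average path length:", round(nx.average_shortest_path_length(g), 2),
    )
# Expected pattern: both graphs have short average paths, but only the
# small-world graph retains a high clustering coefficient.
```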

Finally, after the random graph and small-world network models had been introduced, an additional type of real-world network was discovered by Albert-László Barabási and Réka Albert. This seminal finding ultimately ushered in the new field of complex networks, indeed igniting “a revolution in network science” (Dorogovtsev and Mendes 2003, p. 1). In summary, the hallmark of this new network class is that its degree distribution follows a power law.Footnote 9 As power-law distributions are discussed in detail in Sect. 6.4, it suffices to mention here that such distributions are characterized as follows: while a few nodes, called hubs, have very high connectivity, most nodes have medium to low degree. In Barabási and Albert (1999) the authors proposed that the power-law degree distribution they observed in the WWW is a generic property of many real-world networks. In addition, they offered a specific model of a growing network that generates power-law degree distributions similar to those seen in the WWW and other networks. This growth mechanism is known as preferential attachment: with a certain probability new nodes are added to the network and these preferentially form links with existing nodes of high degree. The influence of Barabási and Albert on this new budding network science is reflected in the number of citations of their publications. Barabási and Albert (1999) and Albert and Barabási (2002) alone have jointly garnered over 18,000 citations.Footnote 10
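A minimal sketch of preferential attachment, using the Barabási-Albert generator of the networkx library; the network size and attachment parameter are illustrative, and the printed counts only hint at the power-law tail discussed in Sect. 6.4.

```python
import collections
import networkx as nx

# Grow a network by preferential attachment and inspect its degree distribution:
# a few highly connected hubs, many low-degree nodes.

g = nx.barabasi_albert_graph(n=10_000, m=2)    # each new node brings two links

degrees = [d for _, d in g.degree()]
counts = collections.Counter(degrees)

print("largest hub degree:", max(degrees))
for d in (2, 4, 8, 16, 32, 64):
    print(f"nodes with degree {d}:", counts.get(d, 0))
# The counts fall off roughly as a power law: each doubling of the degree
# reduces the number of such nodes by a large, roughly constant factor.
```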

Fig. 5.5

Reproduced with kind permission from Geipel (2010)

Examples of common network topologies. (Left) a regular two-dimensional lattice. (Middle) a random network with an average degree of one. (Right) a scale-free network with an average degree of one showing two hubs.

Note that although scale-free networks are also small-world networks, the opposite is not always true. However, many real-world complex networks show both scale-free and small-world characteristics. In Fig. 5.5 examples of networks with various levels of structure are shown. The capacity of complex networks in general to capture and encode the organizational architecture of complex systems is what ushered in the new science of complexity, explained in Chap. 6.

2.4 The Success

It is remarkable that a multitude of simple interactions can result in overall complex behavior that exhibits properties like emergence, adaptivity, resilience, and sustainability. Moreover, the fact that order and structure can arise from local interactions between parts of an initially disordered system is astonishing. Indeed, the universe has always been governed by this structure formation mechanism, self-organizing itself into ever more complex manifestations. From an initial singularity with no structure, the universe appears, at least in our vicinity, to be spontaneously evolving towards ever more order, albeit with no external agency and despite the second law of thermodynamics forcing the entropy—the level of disorder—of the universe to increase over time.Footnote 11 Mysterious as these processes may appear, the study of complex systems gives us insights into the mechanisms governing complexity. Moreover, should there exist an unseen fundamental force in the universe, driving it to ever more complexity, then the emergence of first life and later consciousness is perhaps less wondrous.

In essence, complexity does not stem from the number of participating agents in the system but from the number of interactions among them. For instance, there are about 20,000–25,000 genes in a human (International Human Genome Sequencing Consortium 2004). In contrast, bread wheat has nearly 100,000 genes (Brenchley et al. 2012). Thus the complexity of humans is evidently not a result of the size of our genome. It is crucial how the genes express themselves, meaning how the information encoded in a gene is used in the synthesis of functional gene products, such as proteins. The gene regulatory network is a collection of molecular regulators that interact with each other to govern the gene expression levels (Brazhnik et al. 2002).

This novel interaction-based outlook also highlights the departure from a top-down to a bottom-up approach to understanding complexity. A top-down philosophy is associated with clear centralized control or organization. In contrast, bottom-up approaches are akin to decentralized decision-making. The control or organization is spread out over a network. For instance, it once was thought that the brain would, like a computer, have a CPU—a central processing unit responsible for top-down decision-making (Whitworth 2008). Today, we know that the information processing in our brains is massively parallel (Alexander and Crutcher 1990), decentralized into a neural network, giving rise to highly complex, modular, and overlapping neural activity (Berman et al. 2006).

Philosophically, the step towards bottom-up approaches can be understood as a departure from reductionist problem-solving methods and an embracing of a systems-based and holistic outlook. It marks the acceptance of the fact that we should stop looking for a mastermind behind the scenes, an elusive puppet-master orchestrating the occurrence of events, following devilishly cunning plans.Footnote 12 Mapping interactions onto networks or simulating them in agent-based models allows the complex system they describe to be formally analyzed. In Fig. 5.6 an illustrated overview of an agent-based simulation is given: in a computer program, agents interact according to simple local rules and give rise to global patterns and behaviors seen in real-world complex systems.

Fig. 5.6

Reproduced from Glattfelder et al. (2010)

The properties of complex systems and the paradigms leading to an agent-based simulation describing them.

By adopting a bottom-up philosophy, novel problems become tractable which before resisted a top-down attack. For instance, modeling the flocking behavior of birds. This swarming behavior has all the hallmarks of complexity (Bonabeau et al. 1999). It is an adaptive and self-organizing phenomenon. So how is it possible to program a simulation of such intricate behavior? Again, adhering to the paradigm of simple rules, a bottom-up approach turns out to offer an easy solution. In 1986 an artificial life program called Boids was developed,Footnote 13 reproducing the emergent swarming properties. The following three simple rules tell each agent how to interact locally in the simulation:

  1. Separation: steer to avoid a crowding of agents.

  2. Alignment: steer towards the average heading of local agents.

  3. Cohesion: steer to move toward the average position of local agents.
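A minimal sketch of these three rules for point-like agents in the plane follows; the weights, interaction radius, and speed cap are illustrative choices rather than the original Boids parameters.

```python
import numpy as np

# Point-like agents steered by separation, alignment, and cohesion only.

rng = np.random.default_rng(0)
N, STEPS, RADIUS = 50, 200, 1.0
pos = rng.uniform(0, 10, size=(N, 2))
vel = rng.uniform(-0.1, 0.1, size=(N, 2))

for _ in range(STEPS):
    for i in range(N):
        offsets = pos - pos[i]
        dist = np.linalg.norm(offsets, axis=1)
        local = (dist < RADIUS) & (dist > 0)            # neighbors within the radius
        if not local.any():
            continue
        separation = -offsets[local].mean(axis=0)        # 1. steer away from crowding
        alignment = vel[local].mean(axis=0) - vel[i]     # 2. match neighbors' heading
        cohesion = offsets[local].mean(axis=0)           # 3. move toward neighbors' center
        vel[i] += 0.05 * separation + 0.05 * alignment + 0.01 * cohesion
    speed = np.linalg.norm(vel, axis=1, keepdims=True)
    vel = np.where(speed > 0.2, vel * 0.2 / speed, vel)  # cap the speed
    pos += vel

print("flock spread:", np.round(pos.std(axis=0), 2))     # agents end up clustered
```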

Many hitherto hard (or impossible) to tackle problems suddenly become accessible and tractable with the application of the paradigms of complex systems. In detail, the organizing principles and the evolution of dissipative, real-world complex systems, which are inherently unpredictable, stochastic in nature, and plagued by non-linear dynamics, can now be understood, either by analyzing the architecture of the underlying network topology or by computer simulations. Hence more patterns and regularities in the natural world are uncovered. For instance, earthquake correlations (Sornette and Sornette 1989), crowd dynamics (Helbing et al. 2000), traffic dynamics (Treiber et al. 2000), pedestrian dynamics (Moussaïd et al. 2010), population dynamics (Turchin 2003), urban dynamics (Bettencourt et al. 2008), social cooperation (Helbing and Yu 2009), and market dynamics (see Sect. 7.3). Recall the mentioned selection of effective agent-based models (Axelrod 1997; Lux and Marchesi 2000; Schweitzer 2003; Andersen and Sornette 2005; Miller et al. 2008; Šalamon 2011; Helbing 2012). Chapter 6 is exclusively devoted to the successful treatment of complex systems and Chap. 7 discusses finance and economics.

3 The Profound Unifying Powers of Mathematics

The two volumes of the Book of Nature appear to speak two different formal dialects. While Volume I is written in an equation-based mathematical language, Volume II utilizes an algorithmic formal representation, intelligible to computers. In this section it will be uncovered how a mathematical idiom also underpins the algorithmic abstraction. In essence, the entirety of mathematics incorporates both formal strands and hence unifies all human knowledge generation in one consolidated formal representation. The journey leading to this realization begins in pre-Socratic Greece and touches on the Protestant Reformation, the Jesuits, Newton, Galileo Galilei, the bridges of Königsberg, and digital information (bits). Before embarking on this voyage, the edifice of mathematics requires a closer inspection.

There is one general demarcation line one can find in mathematics, splitting the subject matter into continuous and discrete renderings. Most non-mathematicians only come into contact with the continuous implementation of mathematics,Footnote 14 for instance, by being exposed to calculus, geometry, algebra, or topology. While the branch of discrete mathematics deals with objects that can assume only distinct, separated values, continuous mathematics considers objects that can vary smoothly.Footnote 15

Philosophically, the schism between continuity and discreteness originated in ancient Greece with Parmenides, who asserted that the ever-changing nature of reality is an illusion obscuring its true essence: an immutable and eternal continuum. Still in modern times this intellectual battle between viewing the nature of reality as fundamentally continuous or discrete is being fought. Charles Peirce proposed the term synechism to describe the continuous nature of space, time, and law (Peirce 1892). A related mystery is the question of whether reality is infinite or not. Immanuel Kant, for instance, came to the startling conclusion that the world is “neither finite nor infinite” (Bell 2014). In contrast, the triumph of “atomism,” i.e., the atomic theory developed in physics and chemistry, only applies to matter and forces, conjuring up the following image: the discrete entities making up the contents of the universe act in the arena of continuous space-time. This view goes to the heart of Leibniz’s philosophical system, called monadism, in which space and time are continua, but real objects are discrete, comprised of simple units he called monads (Furth 1967).

There are, however, modern efforts to discretize space and time as well, in effect bringing the quantum revolution to an even deeper level. This proposition goes to the very heart of one of theoretical physics’ most pressing problems: the incompatibility of quantum field theory (Sects. 3.2.2.1 and 3.1.4), describing all particles and their (non-gravitational) interactions, and general relativity, decoding gravity (Sects. 4.1 and 10.1.2). Quantum theory, by its very name, deals with discrete entities while general relativity describes a continuous phenomenon. For decades, string/M-theory was hailed as the savior, however, to no avail (Sect. 4.3.2). These issues are discussed in Sect. 10.2.

Despite the clear top-level separation of mathematics into these two proposed themes, there also exist overarching concepts linking the continuous and the discrete. Indeed, many ideas in mathematics can be expressed in either language and often there are discrete companions to continuous notions to be foundFootnote 16 and vice versa. Specifically, the discrete counterpart of a differential equationFootnote 17 is called a recurrence relation, or difference equation. Examples of such equations were given in Sect. 5.2.1, discussing chaos theory.Footnote 18 Then, what is known as time-scale calculus is a unification of the theory of difference equations with that of differential equations. In detail, dynamic equations on time scales are a way of unifying and extending continuous and discrete analysis (Bohner and Peterson 2003). One powerful mathematical theory, spanning both worlds, is group theory. It was encountered in its continuous expression in Chap. 3, specifically the continuous symmetries described by Lie groups (Sect. 3.1.2), arguably the most fruitful concept in theoretical physics (Chaps. 3 and 4). In its discrete version, group theory underlies modern-day cryptography, utilizing discrete logarithms, giving rise to the modern decentralized economy fueled by blockchain technology (Sect. 7.4.3). But perhaps the most interesting mathematical chimera is the fractal. It is defined by the discrete difference equation (5.2) but its intricate border (seen in Fig. 5.4) is continuous and hence infinite in detail, allowing one to indefinitely zoom into it and witness its mesmerizing self-similar nature.
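To make this correspondence concrete, consider, as a standard textbook illustration not taken from the text, the continuous logistic differential equation alongside the difference equation obtained from it by an Euler step of size \(\Delta t\):

$$\begin{aligned} \frac{dx}{dt} = r\,x\,(1-x) \qquad \longleftrightarrow \qquad x_{n+1} = x_n + r\,x_n\,(1-x_n)\,\Delta t. \end{aligned}$$

The recurrence on the right is a difference equation in the sense just described; the logistic map of Eq. (5.6) is a closely related discrete relative, with its own, far richer, chaotic dynamics.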

3.1 The Continuous—A History

The process of finding the derivative, i.e., the mechanism of differentiation, not only lies at the heart of contemporary mathematics but also marks the birth of modern physics. It builds on a hallmark abstract notion that first appeared in pre-Socratic Greece and can be seen in the calculations performed by Democritus (Boyer 1968), the proponent of physical atomism (see Sect. 3.1), in the 5th Century B.C.E. Since then, this novel idea entered and left the collective human consciousness at various times in history. The concept in question is the abstract idea of infinitesimals. As an example, a continuous line is thought to be composed of infinitely many distinct but infinitely small parts. In general, the concept of infinitesimals is closely related to the notion of the continuum, a unified entity with no discernible parts which is infinitely divisible. In this sense, a global perspective yields the continuum, while an idealized local point of view uncovers its ethereal constituents, the infinitesimals (Bell 2014). The idea of infinitesimals is a deceptively benign proposition, but nonetheless problematic and even dangerous.

Ancient Greece

One account has it that the Pythagoreans expelled one of their own philosophers, Hippasus, from their order and possibly even killed him, as he had discovered “incommensurable magnitudes” (Boyer 1968). Hippasus understood that it was impossible to compare, for instance, the diagonal of a square with its side, no matter how small a unit of measure is chosen. In essence, this is a consequence of the existence of irrational numbers. These are real numbers that cannot be expressed as a ratio of integers. In other words, irrational numbers cannot be represented with terminating or repeating decimals. Looking at a square of unit length, its diagonal is given, ironically, by the Pythagorean theorem \(a^2+b^2=c^2\) which yields \(c=\sqrt{2} = 1.4142\dots \) This is a number with infinitely many digits. Other famous irrational numbers, magically appearing everywhere in mathematics and physics, are \(\pi =3.1415\dots \) and \(\exp (1)=2.7182\dots \) Currently, the record computation of \(\pi \) has revealed \(1.21 \times 10^{13}\) digits (Yee and Kondo 2013). Irrational numbers posed a great threat to the fundamental tenet of Pythagoreanism, which asserted that the essence of all things is related to whole numbers, igniting the conflict with Hippasus.

This early budding of the notion of the infinitesimal would soon be stifled by associated paradoxes uncovered by the philosopher Zeno. The notorious Zeno’s paradoxes show how infinitesimals lead to logical contradictions. One conundrum argues that before a moving object can travel a certain distance, it must first travel half this distance. But before it can even cover this, the object must travel the first quarter of the distance, and so on. This results in an infinite number of subdivisions and the beginning of the motion becomes impossible, because there is no finite instance at which it can start. “The arguments of Zeno seem to have had a profound influence on the development of Greek mathematics [...]” (Boyer 1968, p. 76). “Thereafter infinitesimals are shunned by ancient mathematicians” (Alexander 2014, p. 303), with the exception of Archimedes. Still today there are discussions on whether Zeno’s paradoxes have been resolved—touching issues regarding the nature of change and infinity (Salmon 2001). It would take another two thousand years before the dormant idea of infinitesimals would reemerge, only to be faced with more antagonism. This time, the threat emanated from the Catholic Church, which saw its hegemony in Western Europe threatened by the power-struggles initiated by the Reformation. In the wake of these events, Galileo would be sentenced to house arrest in 1633 by the Inquisition for the last nine years of his life.

Middle Ages: The Protestant Reformation

In 1517, the Catholic priest Martin Luther launched the Reformation by nailing a treatise comprised of 95 theses to a church door, instigating a fundamental conflict between Catholics and Protestants. As a reformation movement, Protestantism under Luther sought “to purify Christianity and return it to its pristine biblical foundation” (Tarnas 1991, p. 234). The Catholic Church was perceived to have experienced irreparable theological decline: “the long-developing political secularism of the Church hierarchy undermining its spiritual integrity while embroiling it in diplomatic and military struggles; the prevalence of both deep piety and poverty among the Church faithful, in contrast to an often irreligious but socially and economically privileged clergy” (Tarnas 1991, p. 234). Moreover, Pope Leo X’s authorization of financing the Church by selling spiritual indulgences—the practice of paying money to have one’s sins forgiven—was seen as a perversion of the Christian essence. Luther’s revolution aimed at bringing back the Christian faith to its roots, where only Christ and the Bible are relevant. In this sense, Protestantism was not only a rebellion against the existing power-structure of the Catholic Church, it was also a conservative fundamentalist movement. The effect of this combination led to a paradoxical outcome: while the Reformation’s “essential character was so intensely and unambiguously religious, its ultimate effects on Western culture were profoundly secularizing” (Tarnas 1991, p. 240). Indeed, the Protestant work ethic can be seen as laying the foundations for modern capitalism (Weber 1920 and Sect. 7.4.2). Whereas traditionally the pursuit of material prosperity was perceived as a threat to religious life, now the two were seen as mutually beneficial.

Against the backdrop of the increasing popularity and spread of Protestantism, a counter-reformation in the Catholic Church was launched. It was spearheaded by the Jesuits, a Roman Catholic order established in 1540, dedicated to restoring Church authority. Their emphasis lay on education and they soon became “the most celebrated teachers on the Continent” (Tarnas 1991, p. 246). In this environment the Jesuits would confront Galileo and also the idea of infinitesimals would reemerge.

With respect to Galileo, it is quite conceivable that the Church could have reacted in a very different manner. “As Galileo himself pointed out, the Church had long been accustomed to sanctioning allegorical interpretations of the Bible whenever the latter appeared to conflict with the scientific evidence” (Tarnas 1991, p. 259). Indeed, even some Jesuit astronomers in the Vatican recognized Galileo’s genius and he himself was a personal friend of the pope. However, the Protestant threat compounded the perceived risks emanating from any novel and potentially heretical worldview. And so the heliocentric model of the solar system—the Copernican revolutionFootnote 19 ignited by the Renaissance mathematician, astronomer, and Catholic cleric Nicolaus Copernicus, fostered by Tycho Brahe and Kepler, ultimately finding its full potential expressed through Galileo—was banned by Church officials. In this conflict of religion versus science, Galileo was forced to recant in 1633 before being put under house arrest. Not so lucky was the mystical Neoplatonist philosopher and astronomer Giordano Bruno. He espoused the idea that the universe is infinite and that the stars are like our own sun, with orbiting planets, in effect extending the Copernican model to the whole universe (Singer 1950). This idea suggested a radical new cosmology. Bruno was burned at the stake in 1600. However, the reason for his execution was not his support of the Copernican worldview, but the fact that he was a heretic, holding beliefs which diverged heavily from the established dogma. Next to his liberal view “that all religions and philosophies should coexist in tolerance and mutual understanding” (Tarnas 1991, p. 253), he was a member of the movement known as Hermetism, a cult following scriptures thought to have originated in Egypt at the time of Moses. These heretical beliefs of Bruno on vital theological matters sealed his fate and resulted in a torturous death (Gribbin 2003).

With the Catholic Church’s efficient, dedicated, and callous modus operandi, why did Luther not get banished as a heretic? First, Pope Leo X long delayed any response to what he perceived as “merely another monk’s quarrel” (Tarnas 1991, p. 235). When Luther finally was branded a heretic, the political climate in Europe had shifted in a way that allowed this theological insurgence to split the cultural union maintained by the Catholic Church. A second factor was the “printing revolution” initiated by Johannes Gutenberg’s invention of the printing press after 1450. Perhaps marking one of the first viral phenomena, this new technology allowed for the unprecedented dissemination of information. The rise in literacy and the facilitated access to knowledge allowed a new mass of people to participate in discussions which would have been beyond their means not too long ago. Utilizing this new technology, Luther translated the two Biblical Testaments from Hebrew and ancient Greek into German in 1522 and 1534. This work proved to be highly influential and would help pave the way to the emergence of other new religious denominations, next to Protestantism, as now many people could offer their personal interpretation, further fracturing the unity of Catholicism.

Middle Ages: The Re-emergence of Infinitesimals

Approximately 1,800 years after Archimedes’ work on the areas and volumes enclosed by geometrical figures using infinitesimals, there was finally a revival of interest in this idea among European mathematicians in the late 16th Century. A Latin translation of the works of Archimedes in 1544 made his techniques widely available to scholars for the first time. Then, in 1616, the Jesuits first clashed with Galileo for his use of infinitesimals. Indeed, even a Jesuit mathematician was prohibited by his superiors from publishing work deemed too close to this dangerous idea. In the eyes of the Jesuits, if the notion of a continuum made up of infinitely many infinitesimally small units were to prevail “the eternal and unchallengeable edifice of Euclidean geometry would be replaced by a veritable tower of Babel, a place of strife and discord built on teetering foundations, likely to topple at any moment” (Alexander 2014, p. 120). Between the years 1625 and 1658, a cat-and-mouse game would follow, where the Jesuits would condemn the growing interest in infinitesimals, only to be faced with notable publications by mathematicians on the subject. Consult Alexander (2014) for the details.

Finally, in 1665, the tides turned, as a young Newton experimented with infinitesimals and developed techniques that would become known as calculus. Ten years later, Leibniz independently developed his own version of calculus and published the first scholarly paper on the subject in 1684. When Newton published his revolutionary Philosophiæ Naturalis Principia Mathematica in 1687 (Newton 1687), a political controversy ensued over which mathematician, and therefore which country, deserved credit. For Newton and Leibniz the idea of infinitesimals was more than just a mathematical curiosity. Crucially, it was related to the reality of physical processes. In Newton’s worldview, the continuum was conceived as being generated by motion, and Leibniz famously exclaimed, natura non facit saltus—“nature makes no jumps” (Bell 2014). Although infinitesimals proved themselves to be spectacularly useful tools, their logical status remained doubtful under mathematical scrutiny. Notable scholars, such as George Berkeley, Georg Cantor, and Bertrand Russell, viewed them as unnecessary and erroneous (see, for instance, Bell 2014). In the latter half of the 19th Century the debatable concept of the infinitesimal was replaced by the well-defined notion of the limit

$$\begin{aligned} \lim _{x \rightarrow a} f(x) = L. \end{aligned}$$
(5.7)

The Modern Age

The introduction of the mathematically sound definition of a limit now allowed calculus to be rigorously reformulated in clear mathematical terms, still used today. It is an interesting observation that the idea of infinitesimals has experienced a renaissance in the last decades, establishing the concept on a logically solid basis. One attempt fuses infinitesimal and infinite numbers, creating what is called nonstandard analysis. A second endeavor employs category theory to construct what is known as smooth infinitesimal analysis. These novel developments shed new light on the nature of the continuum. More details on the history of infinitesimals and the related mathematics are found in Bell (2014) and Alexander (2014).

In the following, some technical aspects of differentiation are briefly introduced.

For a smooth function \(f:\mathbb {R} \rightarrow \mathbb {R}\) the derivative of f at the point \(t_0\) is defined as

$$\begin{aligned} \dot{f}(t_0) := \frac{\text {d} }{\text {d} t} f(t_0) = \lim _{t\rightarrow 0}\frac{f(t_0+t)-f(t_0)}{t}. \end{aligned}$$
(5.8)

In other words, t is taken to infinitesimally approach zero. Because zero is never actually reached, the fraction remains well-defined. For vector-valued functions, e.g., vector fields \(\mathbf {F}:\mathbb {R}^n \rightarrow \mathbb {R}^m\), partial derivatives exist for all components

$$\begin{aligned} \partial _j F_i (x_1, \dots , x_n) := \frac{\partial }{\partial x_j} F_i (x_1, \dots , x_n); \quad i=1,\dots ,m; \ \ j=1,\dots ,n. \end{aligned}$$
(5.9)

These expressions can be assembled in a matrix yielding the general notion of the derivative, called the Jacobian matrix

$$\begin{aligned} \mathcal {J}_F:=\begin{bmatrix} {\partial _1 F_1}&\cdots&{\partial _n F_1} \\ \vdots&\ddots&\vdots \\ {\partial _1 F_m}&\cdots&{\partial _n F_m} \end{bmatrix}. \end{aligned}$$
(5.10)
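In computational practice, the limit in (5.8) is typically replaced by a small but finite step, turning the derivative and the Jacobian (5.10) into quantities that can be approximated numerically. The following Python sketch illustrates this idea; the example map F and the step size are arbitrary choices made here for demonstration, not taken from the text.

```python
# A minimal sketch (illustrative only) of how the limit definition (5.8) and
# the Jacobian (5.10) translate into numerical practice: the infinitesimal is
# replaced by a small but finite step eps. The example map F: R^2 -> R^2 is an
# arbitrary choice for demonstration.

import numpy as np

def F(x):
    """Example vector field F(x1, x2) = (x1**2 * x2, 5*x1 + sin(x2))."""
    return np.array([x[0] ** 2 * x[1], 5.0 * x[0] + np.sin(x[1])])

def jacobian(f, x, eps=1e-6):
    """Approximate the Jacobian J_ij = dF_i/dx_j by central differences."""
    x = np.asarray(x, dtype=float)
    m, n = f(x).size, x.size
    J = np.empty((m, n))
    for j in range(n):
        step = np.zeros(n)
        step[j] = eps
        J[:, j] = (f(x + step) - f(x - step)) / (2.0 * eps)
    return J

x0 = np.array([1.0, 2.0])
print(jacobian(F, x0))
# The exact Jacobian at (1, 2) is [[2*x1*x2, x1**2], [5, cos(x2)]],
# i.e., [[4, 1], [5, cos(2)]], which the numerical result approximates.
```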

In the end, infinitesimals paved the way to the introduction of the derivative, an essential tool in the first volume of the Book of Nature. Next to the specific expressions for the derivatives of functions (e.g., \(\dot{f}\), \(\partial _i F_j\), and \(\mathcal {J}_F\)) the main mathematical actors appearing in physical theories are related to partial derivatives. For instance, the partial derivatives can be combined to form a vector, the nabla operator \(\nabla \), defined in (2.2). Or the d’Alembertian operator \(\Box \) introduced in (4.17). Table 5.1 shows a summary of the various theories in which the notion of the derivative is vital, as it enters the mathematical equations which describe the workings of multiple fundamental processes in the universe. It is truly amazing how one specific abstract idea can be singled out and seen to play such an enormously successful role in unlocking the secrets of the universe and furnishing a unifying theme for Volume I of the Book of Nature.

In a nutshell:

The derivative, a cornerstone of continuous mathematics, lies at the heart of the analytical machinery that is employed to represent fundamental aspects of the physical world, as described in the formal encoding scheme outlined in Fig. 5.1 and detailed in Table 5.1.

Table 5.1 Various themes of the notion of the derivative seen to permeate many physical theories as a common thread. It can be understood as a unified mathematical underpinning, a simple but powerful abstract framework encoding the physical world. The acronyms GR and GT refer to general relativity and gauge theory, respectively. \(G_{\text {SM}}\) is the standard model symmetry group, seen in (4.72)

3.2 Discrete Mathematics: From Algorithms to Graphs and Complexity

There exists one abstract concept, found in discrete mathematics, which is bestowed with great explanatory power. It is a formal representation that can capture a whole new domain of reality in that it underpins the algorithmic understanding of complex systems. Metaphorically, the discrete cousin of the continuous derivative is a graph. As a result, the tapestry of mathematics, woven out of the continuous and discrete strands, has the capacity to unify the two disjoint volumes of the Book of Nature. In other words, human knowledge generation is truly and profoundly driven by mathematics.

Discrete mathematics is as old as humankind. The idea behind counting is to establish a one-to-one correspondence (called a bijection) between a set of discrete objects and natural numbers. Arithmetic, the basic mathematics taught to children, is categorized under the umbrella of discrete mathematics. Indeed, the foundations of mathematics rest on notions springing from discrete mathematics: logic and set theory. Higher discrete mathematical concepts include combinatorics, probability theory, and graph theory. More information on discrete mathematics and its applications can, for instance, be found in Biggs (2003), Rosen (2011), Joshi (1989).

Although continuous mathematics generally enjoys more popularity, discrete mathematics has witnessed a renaissance driven by computer science. Digital information is expressed as strings of binary digits—called bits—which exist in one of two states, represented by 0 and 1; this duality lies at the heart of discreteness. In this sense, the development of computers, and information processing in general, builds on insights uncovered in the arena of discrete mathematics. A landmark development in the field of logic was the introduction of Boolean algebra in 1854, in which the variables can only take on two values: true and false (Boole 1854). Then, in 1937, Claude Shannon showed in his master’s thesis how this binary system can be used to design digital circuits (Shannon 1940). In effect, Shannon implemented Boolean algebra for the first time using electronic components. Later, he famously laid the theoretical foundations regarding the quantification, storing, and communication of data, in effect inventing the field of information theory (Shannon 1948). The concepts Shannon developed are at the heart of today’s digital information theory. Shannon and the notion of information are discussed further in Sect. 13.1.2. In summary, the hallmark of modern computers is their digital nature: they operate on information which adopts discrete values. This property is mirrored by the discrete character of the formal representations used to describe these entities, see, for instance, Biggs (2003), Steger (2001a, b). Indeed, the merger of discrete mathematics with computer science has given rise to the new field of theoretical computer science (Hromkovič 2010). In contrast to the technical and applied areas of computer science, theoretical computer science focuses on computability and algorithms. Examples are the methodology concerned with the design of algorithms or the theory regarding the existence of algorithmic solutions.
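To illustrate Shannon’s insight in spirit, the following is a small Python sketch (standard digital-logic material, not taken from the text) showing how Boolean operations on the two values 0 and 1 suffice to build a binary adder circuit.

```python
# An illustrative sketch (standard digital-logic material, not from the text):
# Boolean algebra, operating on the two values 0 and 1, suffices to build
# arithmetic circuits, which is the spirit of Shannon's thesis.

def half_adder(a, b):
    """Add two bits; returns (sum_bit, carry_bit) using only XOR and AND."""
    return a ^ b, a & b

def full_adder(a, b, carry_in):
    """Add two bits plus an incoming carry bit."""
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry_in)
    return s2, c1 | c2

def add_bits(x_bits, y_bits):
    """Ripple-carry addition of two little-endian bit lists of equal length."""
    result, carry = [], 0
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        result.append(s)
    return result + [carry]

# 5 + 3 = 8, encoded as little-endian bit lists: 5 = [1,0,1], 3 = [1,1,0].
print(add_bits([1, 0, 1], [1, 1, 0]))  # -> [0, 0, 0, 1], i.e., binary 1000 = 8
```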

Paradigm \(\textsf {P}_1^c\) (Sect. 5.2.1) emerges as the crucial guiding principle for the formal representation of complexity. A complex system can formally either be mapped directly onto a complex network or be described as an evolving network of interacting agents following algorithmic instructions. Both incarnations find their abstraction in the notion of a graph.

Graph Theory

The discrete counterpart to the derivative, a versatile and universal tool in continuous mathematics, is the notion of a graph. In 1735 Leonhard Euler was working on a paper on the seven bridges of Königsberg. The publication of this work (Euler 1941) in effect established the field of graph theory (Biggs et al. 1986; Bollobás 1998). The problem Euler was trying to tackle was to find a walk through the city that would cross each of the seven bridges exactly once. Although he could prove that the problem had no solution, the formal tool Euler employed was revolutionary. Today, graphs play an essential role in mathematics and computer science.

In modern terms, the defining features of a graph \(G=G(V,E)\) are the set of vertices V, or nodes, which are connected by edges, or links, in a set E, where the edge \(e_{ij} \in E\) connects the nodes \(v_i, v_j \in V\). The adjacency matrix of a graph \(A=A(G)\) maps the graph’s topology onto the matrix \(A_{ij}\), allowing further mathematical operations to be performed on G, as now the powerful tools of linear algebra can be utilized. Finally, the number \(k_i\) of edges per vertex i is known as the degree. The degree distribution \(\mathcal {P} (k)\) succinctly captures the network architecture.

This simple formal structure was utilized by Euler as a representation of the problem at hand: he ingeniously encoded the Königsberg bridges as the links and the connected landmasses as the nodes in a small network. Indeed, Euler anticipated the idea of topology: the actual layout of this network, when it is illustrated, is irrelevant and the essence of the relationships is encoded in the specifics of the abstract idea of the graph itself.
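The following Python sketch spells out these notions for Euler’s bridge network. Since several bridges connect the same pair of landmasses, the adjacency matrix here stores edge multiplicities, a slight generalization of the simple-graph notation above; the landmass labels are, of course, only illustrative.

```python
# A minimal sketch of the notions just introduced, using Euler's Königsberg
# bridges as the example network: the four landmasses are the nodes and the
# seven bridges are the links. A_ij stores the number of bridges between
# landmass i and landmass j (a multigraph generalization of a simple graph).

import numpy as np
from collections import Counter

nodes = ["island", "north bank", "south bank", "east bank"]

A = np.array([
    [0, 2, 2, 1],
    [2, 0, 0, 1],
    [2, 0, 0, 1],
    [1, 1, 1, 0],
])

degrees = A.sum(axis=1)                    # k_i: bridges touching each landmass
P = Counter(degrees)                       # empirical degree distribution P(k)

for name, k in zip(nodes, degrees):
    print(f"{name}: degree {k}")
print("degree distribution:", dict(P))
print("number of bridges:", A.sum() // 2)  # each bridge is counted twice
```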

Euler’s contribution to graph theory represents only a minuscule fraction of his mathematical productivity and “his output far surpassed in both quantity and quality that of scores of mathematicians working many lifetimes. It is estimated that he published an average of 800 pages of new mathematics per year over a career that spanned six decades” (Dunham 1994, p. 51). Indeed, even his deteriorating eyesight, leading to blindness, “was in no way a barrier to his productivity, and to this day his triumph in the face of adversity remains an enduring legacy” (Dunham 1994, p. 55).

At the end of the 1950s graph theory was extended by the introduction of probabilistic methods. This new branch, called random graph theory, was a fruitful source of many graph-theoretic results and was pioneered by Paul ErdősFootnote 20 and his collaborator, Alfréd Rényi (Erdős and Rényi 1959, 1960). A hallmark of these graphs is that their degree distribution \(\mathcal {P} (k)\) has the form of a Poisson probability distribution. In other words, the number of nodes with high connectivity decreases rapidly.

A random graph comprised of n nodes and l links follows a binomial degree distribution

$$\begin{aligned} \mathcal {P} (k_i = k) = {\left( {\begin{array}{c}n\\ k\end{array}}\right) } p^k (1-p)^{n-k}, \end{aligned}$$
(5.11)

where \(k_i\) is the degree of node i and the link probability is given by p (Erdős and Rényi 1960). The first term counts the number of ways in which the k links can be chosen; the remaining terms give the probability that exactly these k links are present and all others are absent. The average degree \(\langle k \rangle \) is now defined as

$$\begin{aligned} z := \langle k \rangle = \frac{2 l}{n} = p (n-1). \end{aligned}$$
(5.12)

The average degree and the degree distribution can be approximated by

$$\begin{aligned} \mathcal {P} (k)\approx & {} \frac{z^k e^{-z}}{k!}, \end{aligned}$$
(5.13)
$$\begin{aligned} z\approx & {} p n. \end{aligned}$$
(5.14)

Note that (5.13) describes a Poisson distribution. In the limit of large n the approximations become exact. This can be seen by noting that

$$\begin{aligned} e^{-z}= & {} \lim _{n \rightarrow \infty } \left( 1 + \frac{-z}{n} \right) ^n, \end{aligned}$$
(5.15)
$$\begin{aligned} 1= & {} \lim _{n \rightarrow \infty } \left( \frac{n!}{n^k (n-k)!} \right) . \end{aligned}$$
(5.16)
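These approximations can also be checked numerically. The following Python sketch, with arbitrary illustrative parameters, samples an Erdős-Rényi random graph and compares its empirical degree distribution with the Poisson form (5.13).

```python
# An illustrative sketch (parameters arbitrary): sample an Erdős-Rényi random
# graph G(n, p) and compare its empirical degree distribution with the Poisson
# approximation (5.13).

import numpy as np
from math import exp, factorial

rng = np.random.default_rng(42)
n, p = 2000, 0.005
z = p * n                                    # average degree, cf. (5.14)

# Sample the upper triangle of the adjacency matrix with link probability p.
upper = rng.random((n, n)) < p
A = np.triu(upper, k=1)
A = A | A.T                                  # symmetrize: undirected graph
degrees = A.sum(axis=1)

for k in range(4, 17, 4):
    empirical = np.mean(degrees == k)
    poisson = z ** k * exp(-z) / factorial(k)
    print(f"k={k:2d}  empirical P(k)={empirical:.4f}  Poisson={poisson:.4f}")
```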

The scale-free networks, introduced in Sect. 5.2.3 and establishing the new science of networks, are defined by their degree distribution following a scaling law (see Sect. 6.4.3.3). This can simply be expressed mathematically as

$$\begin{aligned} \mathcal {P} (k) \sim k^{- \alpha }, \end{aligned}$$
(5.17)

where the exponent \(\alpha \) lies typically between two and three. In detail

$$\begin{aligned} \mathcal P (k) = \frac{k^{-\alpha } e^{-k/\kappa }}{ \text {Li}_\alpha ( e^{-1/\kappa })}. \end{aligned}$$
(5.18)

The exponential term in the numerator, governed by the parameter \(\kappa \), results in an exponential cutoff, the term in the denominator ensures the proper normalization, and \(\text {Li}_n (x)\) is the nth polylogarithm of x (Newman et al. 2001; Albert and Barabási 2002). Note that for the limit \(\kappa \rightarrow \infty \)

$$\begin{aligned} \mathcal P (k) = \frac{k^{-\alpha }}{\zeta (\alpha )}, \end{aligned}$$
(5.19)

where the Riemann \(\zeta \)-function now acts as the normalization constant.
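Numerically, the polylogarithm in the denominator of (5.18) is simply the sum of \(k^{-\alpha } e^{-k/\kappa }\) over all \(k \ge 1\), so the distribution can be normalized by direct summation. The following Python sketch, with illustrative parameter values, verifies the normalization and indicates how (5.19) is recovered in the limit \(\kappa \rightarrow \infty \).

```python
# A small numerical check (illustrative parameter choices) of the normalization
# in (5.18): the polylogarithm Li_alpha(e^(-1/kappa)) in the denominator equals
# the sum over k >= 1 of k^(-alpha) * e^(-k/kappa), so the distribution can be
# normalized by direct summation.

import numpy as np

alpha, kappa = 2.5, 100.0
k = np.arange(1, 100_000)                 # truncation; later terms are negligible

weights = k ** (-alpha) * np.exp(-k / kappa)
Li = weights.sum()                        # numerical stand-in for Li_alpha(e^(-1/kappa))
P = weights / Li                          # the distribution (5.18)

print("normalization check:", P.sum())    # ~ 1.0
print("P(1), P(10), P(100):", P[0], P[9], P[99])
# For kappa -> infinity the weights tend to k**(-alpha) and Li tends to the
# Riemann zeta function zeta(alpha), recovering (5.19).
```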

Whereas the (continuous) analytical machinery used for over three centuries has the power to unlock the secrets of fundamental systems, (discrete) graphs directly tackle complexity. In the pictorial language of Fig. 5.1, complex systems are located on the left side. Graph theory represents their abstract counterpart. In other words, graphs are elevated to the exalted ranks of formal representations able to capture and encode a vast plethora of aspects of the physical world, similar to the abundant usefulness of the derivative.

In closing:

Complex systems are represented by networks which are formalized as graphs, a notion from discrete mathematics that lies at the heart of the algorithmic approach which is employed to represent complex aspects of the physical world, as described in the formal encoding scheme seen in Fig. 5.1.

3.3 Unity

To summarize, both mathematical variants—the continuous and the discrete—have one particular property which gives them a special status in their volume of the Book of Nature. In other words, each branch has one feature that makes it a powerful tool in the abstract world of formal representations (i.e., the right-hand side of Fig. 5.1). One is the (continuous) operation of differentiation and the other is the (discrete) notion of a graph. While the former unlocks knowledge about the fundamental workings of nature, the latter gives insights into the organizational principles of complex systems.

By introducing the continuous-discrete dichotomy it is possible to give an underpinning to the formal representations seen on the right-hand side in Fig. 5.2. The analytical formal representation is inexorably tied to the continuous mathematical structure while the algorithmic formal representation is intimately related to the discrete mathematical structure. This is illustrated in Fig. 5.7. In this sense, the abstract human thought system called mathematics is not only a very powerful probe into reality, it also unifies the two separate formal representations describing the two different reality domains.

In closing, Fig. 5.8 depicts a grand overview of all the discussed concepts. It contrasts the fundamental-complex, analytical-algorithmic, and continuous-discrete dichotomies encountered in the two volumes of the Book of Nature.

Fig. 5.7
figure 7

The mathematical structures underlying the two modes of formal representation, unifying the two separate knowledge generation systems within a single formal thought system. As a result, Fig. 5.2 is given more detail

To summarize:

The cognitive act of translating specific fundamental and complex aspects of the observable universe into formal representations—utilizing analytical (equation-based) and algorithmic (interaction-based) tools—is the basis for generating vast knowledge about the workings of reality. Specifically, the fundamental and complex reality domains of the physical world are encoded into analytical and algorithmic formal representations, respectively. Underpinning these are the continuous and discrete structures of mathematics.

Digging deeper, continuous mathematics, associated with the analytical formal theme, provides the machinery of differentiation, which plays a fundamental role in the physical sciences. In a similar vein, discrete mathematics, the basis of the algorithmic formal theme, offers graphs as a universal abstract tool able to capture complexity. In this sense, mathematics, understood as the totality of its continuous and discrete branches, is the unifying abstract framework on which the process of translation builds. This overarching formal framework is hosted in the human mind and mirrors the structure and functioning of the physical world, transforming translation into knowledge generation.

This process of human knowledge generation finds its metaphor in the discovery of the two volumes of the Book of Nature, written in the language of mathematics. A graphical overview is presented in Fig. 5.8. The tremendous success of this endeavor can be seen in the dramatic acceleration of technological advancements in recent times, bearing witness to the increasing ability of the human mind to manipulate the physical reality it is embedded in.  

4 The Book of Nature Reopened

For over 300 years the Book of Nature has revealed insights into the workings of the world. Chapter by chapter, novel understanding was disclosed, from quantum theory to cosmology. The human mind was capable of translating a multitude of quantifiable aspects of reality into formal, abstract representations. Then, by entering this abstract realm, the mind was able to derive new insights, which could be decoded back into the physical world (see Fig. 5.1). This is a truly remarkable feat and the foundation from which the technological advancements of the human species spring.

But this should only be the beginning. It is truly remarkable that what was considered to be the Book of Nature—the analytical understanding of fundamental processes—turns out to be only the first volume in a greater series. In the last decades, humans have witnessed yet another unearthing of an additional volume of the Book of Nature. And just like Volume I, this newly found addition to the Book of Nature Series offers new and deep insights into a domain of reality previously clouded by ignorance: the organization and evolution of complex systems. In other words, the properties of real-world complexity surrounding us become intelligible.

Fig. 5.8
figure 8

A comprehensive map of human knowledge generation. The observable universe is explained in the Book of Nature, specifically its two volumes. The physical world, comprised of reality domains, finds its formal representation in the abstract world, hosted in the human mind, and unified by the two mathematical structures. See the boxed text for details

Figure 5.8 shows a conceptualized illustration of this truly remarkable achievement. The knowledge generated in this way is the engine driving humanity’s astonishing technological advancements (see also the first section of Chap. 8). In essence, this knowledge generation boils down to acts of translation. As illustrated in Fig. 5.1, a reality domain of the physical world is encoded as a formal representation inhabiting the abstract world. Constrained and guided by the rules pertaining to the rich structure of the abstract world, new information can be harnessed, which can then be decoded back into the physical world, yielding novel insights.

The template for this act of translation is given by \({\mathcal {T}}_{\textsc {fr,rd}}\), where the label fr denotes the formal representation and rd the reality domain, respectively. Throughout this book it has been argued that both the physical and abstract world should each be split into two categories. The physical is categorized by the fundamental-complex dichotomy and the abstract by the analytical-algorithmic dichotomy. The two volumes of the Book of Nature can now be understood as follows:

  • Volume I corresponds to the analytical encoding of fundamental processes, \({\mathcal {T}}_{\text {An,Fu}}\).

  • Volume II corresponds to the algorithmic encoding of complex processes, \({\mathcal {T}}_{\text {Al,Co}}\).

Now it becomes apparent that this attempt at categorizing human knowledge generation into the proposed dichotomies adds an additional mystery:

Why has the successful knowledge generation process, giving the human mind access to the intimate workings of the universe, primarily been based on the translational mechanisms \({\mathcal {T}}_{\text {An,Fu}}\) and \({\mathcal {T}}_{\text {Al,Co}}\)? What about the two other translation possibilities \({\mathcal {T}}_{\text {Al,Fu}}\) and \({\mathcal {T}}_{\text {An,Co}}\)?

Fig. 5.9
figure 9

Adapted from (Glattfelder 2013)

A schematic overview of the possible acts of translation encapsulated in the matrix \(\mathcal {T}\): each element represents the encoding of fundamental or complex aspects of reality into formal representations relating to analytical or algorithmic facets of the abstract world (compare with Figs. 5.1 and 5.2). Interestingly, in the pursuit of knowledge by the human mind, mostly only two of the four possibilities have been extensively utilized: \({\mathcal {T}}_{\text {An,Fu}}\) and \({\mathcal {T}}_{\text {Al,Co}}\) corresponding to Volume I and II in the Book of Nature Series.

In Fig. 5.9 all four possible translational mechanisms arising from the dichotomies are shown. Understood as a matrix, primarily the diagonal elements of \({\mathcal {T}}\) are responsible for lifting humanity’s veil of ignorance. What do we know about the other two translational possibilities? Do they represent failed attempts at knowledge generation? If so, what is special about the two successful acts of translation? Or will, in the end, the human mind unearth further volumes in the Book of Nature Series, guided by the two dormant translational possibilities? This will be the focus of the next section.

From a philosophical perspective, this intricate and intimate interaction of the human mind with the physical world raises inevitable and profound questions. For instance, successful knowledge generation via the described translational mechanisms assumes the existence of three entities: the physical world that accommodates the mental world of the human mind, which discovers or creates the abstract world of formal thought systems, which in turn unlocks secrets of the physical world (a conundrum encountered in Fig. 2.2 of Sect. 2.2.1). In detail:

  1. There exists an abstract realm of objects transcending physical reality (ontology).

  2. The human mind possesses a quality that allows it to access this world and acquire information (epistemology).

  3. The structures in the abstract world map the structures in the physical (structural realism, see Sects. 2.2.1, 6.2.2 and 10.4.1).

4.1 Beyond Volumes I and II

As observed, the two translational possibilities \({\mathcal {T}}_{\text {Al,Fu}}\) and \({\mathcal {T}}_{\text {An,Co}}\) have not been prominently utilized as knowledge generation mechanisms. This could mean two things. First, complex systems are indeed immune to being treated with an equation-based formalism, and, conversely, the same is true for fundamental systems being described algorithmically. Or, these alternative possibilities have only been sparsely explored to date, still leaving behind mostly uncharted terrain. In the following, some attempts at filling in the blanks are described.

The Complex-Analytical Demarcation

Pattern formation in nature is clearly the result of self-organization in space and time. Alan Turing proposed an analytical mechanism to describe biological pattern formation (Turing 1952). He utilized what is known as reaction-diffusion equations. These are partial differential equations used to describe systems consisting of many interacting components, like chemical reactions. Turing’s model successfullyFootnote 21 replicates a plethora of patterns, from sea shells to the skin patterns of fish and other vertebrates (Meinhardt 2009; Kondo and Miura 2010). In effect, he proposed an analytical approach to complexity.
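To give a flavor of such reaction-diffusion dynamics, the following is a compressed Python sketch of a pattern-forming two-component system. The Gray-Scott model is used here as a convenient stand-in (it is not the specific system Turing analyzed), and the parameter values are standard illustrative choices.

```python
# A compressed sketch of a Turing-type reaction-diffusion system. The
# Gray-Scott model is used as a convenient stand-in (not the specific system
# analyzed by Turing); the parameter values are standard pattern-forming
# choices and otherwise arbitrary.

import numpy as np

def laplacian(Z):
    """Discrete Laplacian on a periodic grid (finite differences)."""
    return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0) +
            np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4.0 * Z)

N = 128
Du, Dv, F, k = 0.16, 0.08, 0.060, 0.062

u = np.ones((N, N))
v = np.zeros((N, N))
u[N//2-5:N//2+5, N//2-5:N//2+5] = 0.50     # a small perturbation seeds the pattern
v[N//2-5:N//2+5, N//2-5:N//2+5] = 0.25

for step in range(10_000):
    uvv = u * v * v
    u += Du * laplacian(u) - uvv + F * (1.0 - u)
    v += Dv * laplacian(v) + uvv - (F + k) * v

# The array v now holds a spotted/striped concentration field; it could be
# rendered, e.g., with matplotlib's imshow.
print("v ranges from", v.min(), "to", v.max())
```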

Running agent-based models can sometimes be computationally costly. However, there exist analytical shortcuts that can be taken. Instead of simulating the complex system, it can be studied by solving a set of differential equations describing the time evolution of the individual agent’s degrees of freedom. Technically, this can be achieved by utilizing Langevin stochastic equations. Each such equation describes the time evolution of the position of a single agent (Ebeling and Schweitzer 2001). From the reaction-diffusion equation, Langevin equations can be derived. See Sect. 7.1.1.1 for the history of the Langevin equations, including Einstein’s early work and the Black-Scholes formula for option pricing. Utilizing self-similar stochastic processes for the modeling of random systems evolving in time has been relevant for their understanding (Embrechts and Maejima 2002). See again Sect. 7.1.1.1.

Langevin equations can be solved analytically or numerically. They describe the individual agent’s behavior at the micro level. Moving up to a macroscopic description of the system, what is known as the Fokker-Planck partial differential equation describes the collective evolution of the probability density function of a system of agents as a function of time. The two formalisms can be mapped into each other (Gardiner 1985). However, as an example, computing 10,000 agents constrained by Langevin equations approximates the macro dynamics of the system more efficiently than an effort directly attempting to solve the equivalent Fokker-Planck differential equation.
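The following Python sketch illustrates this micro-macro correspondence with a deliberately simple example: many agents follow a one-dimensional Langevin equation (an Ornstein-Uhlenbeck process is used as a stand-in), integrated with the Euler-Maruyama scheme, and their empirical variance is compared with the stationary value predicted by the corresponding Fokker-Planck equation. All parameter values are arbitrary.

```python
# A minimal sketch of the micro-macro correspondence described above: many
# agents follow a simple Langevin equation (an Ornstein-Uhlenbeck process is
# used as an illustrative stand-in), and their empirical distribution
# approaches the stationary solution of the corresponding Fokker-Planck
# equation, a Gaussian with variance sigma^2 / (2*gamma). Parameters arbitrary.

import numpy as np

rng = np.random.default_rng(0)
n_agents, n_steps, dt = 10_000, 5_000, 0.01
gamma, sigma = 1.0, 0.5

x = np.zeros(n_agents)                       # all agents start at the origin
for _ in range(n_steps):
    noise = rng.normal(0.0, np.sqrt(dt), n_agents)
    x += -gamma * x * dt + sigma * noise     # Euler-Maruyama step

empirical_var = x.var()
stationary_var = sigma ** 2 / (2.0 * gamma)  # from the Fokker-Planck equation
print(f"empirical variance  : {empirical_var:.4f}")
print(f"Fokker-Planck value : {stationary_var:.4f}")
```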

Some scholars have argued against the dictum that complex systems are, in general, not susceptible to mathematical analysis and should hence be investigated by means of simulation (Sornette 2008). Didier Sornette, a physicist, econophysicist, and complexity scientist, offers the insight that the formal analytical treatment of triggering processes between earthquakes can be successfully applied to various complex systems. Examples range from the dynamics of sales of book blockbusters to viewer activity on the YouTube video-sharing website to financial bubbles and crashes (Sornette 2008). Furthermore, he argues that the right level of magnification (level of granularity) in the description of a complex system can reveal order and organization. As a result, pockets of predictability at some coarse-grained level can be detected. This partial predictability approach is potentially relevant for meteorological, climate, and financial systems. However, a big challenge remains in identifying the complex systems that are susceptible to this approach and finding the right level of coarse-graining.

Another modern example of tackling complexity with analytical tools is mathematical biology (to which Turing’s pattern formation belongs). Influential work in this field grapples with the mathematization of the theory of evolution, as detailed in Martin Nowak’s book “Evolutionary Dynamics: Exploring the Equations of Life” (Nowak 2006). Nowak, a biochemist and mathematician by training, is also a Roman Catholic. He has expressed his view on the tension between theology and science, especially the conflicts between the theory of evolution and Christianity, as follows (Powell 2007):

Science and religion are two essential components in the search for truth. Denying either is a barren approach.

The Fundamental-Algorithmic Demarcation

Recall from Sect. 5.1.3 the troubles relating to solving gravitational n-body problems. In essence, here it does not suffice to know the analytical encoding of the challenge at hand. The system of differential equations describing the motion of \(n \ge 3\) gravitationally interacting bodies cannot be solved analytically. Only for a few simple, albeit important, problems can Newton’s equations be solved. Although the exact theoretical solution for the general case can be approximated (via Taylor series or numerical integration), the dynamics are generally best understood utilizing n-body simulations (Valtonen and Karttunen 2006).
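As an illustration of the simulational approach, the following is a toy Python sketch of a direct n-body integration: a handful of bodies interact via softened Newtonian gravity and are advanced with a leapfrog scheme. Units, masses, and initial conditions are arbitrary, and the code bears no resemblance to production cosmological codes.

```python
# A toy sketch of the direct n-body approach (nothing like production codes):
# bodies interact via softened Newtonian gravity and are advanced with a
# leapfrog integrator. Units, masses, and initial conditions are arbitrary
# illustrative choices (G = 1).

import numpy as np

rng = np.random.default_rng(1)
n, dt, steps, soft = 50, 1e-3, 2_000, 0.05

mass = np.full(n, 1.0 / n)
pos = rng.normal(0.0, 1.0, (n, 3))
vel = rng.normal(0.0, 0.1, (n, 3))

def accelerations(pos):
    """Pairwise softened accelerations a_i = sum_j m_j (r_j - r_i) / (|r_ij|^2 + soft^2)^(3/2)."""
    diff = pos[None, :, :] - pos[:, None, :]          # r_j - r_i for all pairs
    dist2 = (diff ** 2).sum(-1) + soft ** 2
    np.fill_diagonal(dist2, np.inf)                   # no self-interaction
    return (mass[None, :, None] * diff / dist2[..., None] ** 1.5).sum(axis=1)

p0 = (mass[:, None] * vel).sum(axis=0)                # initial total momentum
acc = accelerations(pos)
for _ in range(steps):                                # leapfrog (kick-drift-kick)
    vel += 0.5 * dt * acc
    pos += dt * vel
    acc = accelerations(pos)
    vel += 0.5 * dt * acc

p1 = (mass[:, None] * vel).sum(axis=0)
print("momentum drift (should be ~0):", np.abs(p1 - p0).max())
```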

The largest such simulation, called the Millennium Run,Footnote 22 investigated how matter evolved in the universe over time by reproducing cosmological structure formation. The simulation was comprised of ten billion particles, each representing approximately a billion solar masses of dark matter (Springel et al. 2005). In summary, the dynamics of a fundamental (cosmological) system, comprised of a multitude of gravitating bodies, are not understood analytically via differential equations. Rather, computer simulations, mimicking the forces of interaction in the system, offer theoretical predictions.

Overall, the translational mechanism \({\mathcal {T}}_{\text {Al,Fu}}\) is a niche, in the sense that it is only sparsely explored and offers speculative concepts. An example is given by the ideas espoused by Wolfram in Sect. 5.2.2. He is essentially proposing that cellular automata are the universal tool to decode and understand the universe in all its facets. In effect, “A New Kind of Science” (Wolfram 2002) would represent the knowledge generated by \({\mathcal {T}}_{\text {Al,Fu}}\) (as well as \({\mathcal {T}}_{\text {Al,Co}}\)). Although Wolfram acknowledges the tremendous success of the mathematical approach to physics, he stresses that many central issues remain unresolved in fundamental physics, where cellular automata could possibly shed new light (Wolfram 2002, Chapter 9). He epitomizes these hopes in the following quote (Wolfram 2002, p. 465):

And could it even be that underneath all the complex phenomena we see in physics there lies some simple program which, if run long enough, would reproduce our universe in every detail?

Contemporary support for this idea comes from Nobel laureate Gerard ’t Hooft, who proposes an interpretation of quantum mechanics utilizing cellular automata (’t Hooft 2016). Finally, some theoretical physicists propose to describe space-time as a network in certain fundamental theories of quantum gravity, for instance, via the spin networks of loop quantum gravity (see Sect. 10.2.3). Another idea tries to understand emergent complexity as arising from fundamental quantum field theories (Täuber 2008).
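To make the algorithmic flavor of these proposals concrete, the following Python sketch runs an elementary cellular automaton of the kind studied by Wolfram. The choice of rule 110 and the grid size are merely illustrative.

```python
# A minimal sketch of the algorithmic flavor discussed here: an elementary
# cellular automaton of the kind studied by Wolfram. Rule 110 and the grid
# size are illustrative choices.

import numpy as np

def step(cells, rule=110):
    """Apply one update of an elementary cellular automaton (periodic boundary)."""
    left, right = np.roll(cells, 1), np.roll(cells, -1)
    neighborhood = 4 * left + 2 * cells + right       # value 0..7 per cell
    rule_table = (rule >> np.arange(8)) & 1           # bit i of the rule number
    return rule_table[neighborhood]

width, steps = 101, 60
cells = np.zeros(width, dtype=int)
cells[width // 2] = 1                                  # single seed cell

history = [cells]
for _ in range(steps):
    cells = step(cells)
    history.append(cells)

# Print the space-time diagram; intricate, non-repeating structure emerges
# from this extremely simple discrete program.
for row in history:
    print("".join("#" if c else " " for c in row))
```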

Blurring the Lines

Computers have also helped blur the lines between the analytical and algorithmic formal representations. In 1977, the four-color theorem was the first major mathematical theorem to be verified using a computer program (Appel and Haken 1977). “The four-color theorem states that any map in a plane can be colored using four colors in such a way that regions sharing a common boundary (other than a single point) do not share the same color.”Footnote 23 Computer-aided proofs of a mathematical theorem are usually very large proofs-by-exhaustion, where the statement to be proved is split into many cases and each case is then checked individually. “The proof of the four colour theorem gave rise to a debate about the question to what extent computer-assisted proofs count as proofs in the true sense of the word” (Horsten 2012).
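The following toy Python sketch illustrates the spirit of a proof-by-exhaustion (it is emphatically not the Appel-Haken proof): for a small, hypothetical planar map, all possible assignments of four colors are enumerated until a proper coloring is found.

```python
# A toy illustration of the proof-by-exhaustion spirit (emphatically not the
# Appel-Haken proof): for a small hypothetical planar map, every possible
# assignment of four colors is checked until a proper coloring is found.

from itertools import product

# Adjacency of a small planar map: region -> neighboring regions.
borders = {
    "A": {"B", "C", "D"},
    "B": {"A", "C", "E"},
    "C": {"A", "B", "D", "E"},
    "D": {"A", "C", "E"},
    "E": {"B", "C", "D"},
}
regions = sorted(borders)
colors = ["red", "green", "blue", "yellow"]

def proper(assignment):
    """True if no two bordering regions share a color."""
    return all(assignment[r] != assignment[s]
               for r in regions for s in borders[r])

for combo in product(colors, repeat=len(regions)):    # 4^5 = 1024 cases
    assignment = dict(zip(regions, combo))
    if proper(assignment):
        print(assignment)
        break
```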