Information—Consciousness—Reality, pp 139–180
The Two Volumes of the Book of Nature
Abstract
The Book of Nature has been found. The mathematical description of the universe gives the human mind the power to manipulate reality and technology becomes possible. However, this miraculous knowledge generation has been very specific: fundamental aspects of reality (from the quantum realm to cosmic scales) are encoded analytically, i.e., as equations. This appears to exclude real-world complexity, for instance, the emergent property of consciousness appearing in a self-organizing biological neural network. A fluke of reality allows the human mind to also conquer this domain. What appears as complexity turns out to be the result of simple rules. Only very recently have the fruits of technology given humans a new level of abstraction: the magic of computation. Now, complex systems can be encoded algorithmically, i.e., by utilizing algorithms and simulations running in computers. As a result, complexity can be tamed and comprehended. This new knowledge generation is understood as Volume II of the Book of Nature, whereas physical science represents Volume I. Underlying the analytical and algorithmic formal representations are two fundamental structures of mathematics: the continuous and the discrete. In this sense, all human knowledge generation is unified mathematically. Level of mathematical formality: medium to low.
The age-old dream that mathematics represents the blueprint for reality has begun to be fulfilled: the Book of Nature is intelligible to the human mind and deep truths about the workings of the world have been decoded. In other words, the human mind has begun to venture into realms of the abstract world which interrelate with the workings of the physical world, from the quantum foam comprising reality to the awe-inspiring vastness of the cosmic fabric. This main theme is encapsulated in Fig. 2.1, which is reproduced below.
However, this translation of aspects of reality into abstract representations has been very specific. For instance, the considered reality domains interestingly omit the very cornerstone of the whole enterprise of knowledge seeking: the human brain. And with it, a whole branch of reality is ignored, relating to self-organization, structure formation, and emergent complexity in general. Curiously, the Book of Nature does not speak much about the everyday structures and systems surrounding us humans. The complexity of life is mostly excluded. Furthermore, the focus of the abstract representation has been on a scheme of mathematization first introduced by Isaac Newton and Gottfried Wilhelm Leibniz (see also Sect. 2.1.1). In a nutshell, this approach can be labeled as equation-driven.
Only recently, with the advent of information processing,^{1} Fig. 5.1 could be applied in a whole new context. By extending the validity domain of the formal representation to encompass computational aspects, a novel reality domain becomes intelligible that is much closer to human experience than, for instance, the elusive entities comprising matter and transmitting forces. Now, the focus shifts away from an equation-driven effort and embraces computational and simulational tools. This formal approach can essentially be denoted as algorithmic. Slowly, the everyday complexity surrounding us can be tackled. This reality domain, in contrast to the fundamental, will be called complex in the following. Miraculously, the human mind has suddenly stumbled upon an extension of the Book of Nature. A new dichotomy emerges, relating to the complex-algorithmic classification, uncovering the next volume of the Book of Nature. In Fig. 5.2 a conceptual demarcation of these concepts is shown.
The Pythagoreans’ dream of the mathematization of nature (Chap. 2) turns out to be only the beginning of a profound knowledge generation process. Building on the tools enabled by the fundamental-analytical dichotomy, new abstract worlds become accessible with the aid of computation, and the complex-algorithmic paradigm is uncovered. In summary, the Book of Nature has been greatly expanded and now comprises:
 VOLUME I
The fundamental reality domain made accessible to the mind via analytical formal representations.
 VOLUME II
Real-world complexity encoded via algorithmic formalizations.
In the following, essential features of Volumes I and II of the Book of Nature will be independently summarized and analyzed (Sects. 5.1 and 5.2), before a unifying theme is unveiled (Sect. 5.3). Finally, the entire landscape spanned by the fundamental-complex and analytical-algorithmic classifications is examined (Sect. 5.4). Elements are taken or adapted from Glattfelder et al. (2010) and Appendix A in Glattfelder (2013). Note that the contents of Volume II, relating to complex systems, are presented in detail in Chaps. 6 and 7.
5.1 Volume I: Analytical Tools and Physical Science
The tremendous success of the first volume of the Book of Nature is summarized in the next section, and some cornerstones of its analytical powers are highlighted. Then the limitations are exposed.
5.1.1 The Success
Staying faithful to the credo “Shut up and calculate!” (Sect. 2.2.1) has allowed a lot of ground to be covered. By not being consumed by philosophical questions relating to the nature of the abstract world, the human mind’s capacity to host or access it, and the correspondence between the physical and the abstract (the topics addressed in Fig. 2.2), progress can be made. Although, as mentioned, the reality domain is restricted to exclude complex systems, it still covers most of physical science. In effect, laws of nature can be understood as regularities and structures in a highly complicated universe. They critically depend on only a small set of conditions and are independent of many other conditions which could also possibly have an effect. Science can be understood as the quest to capture fundamental processes of nature within formal mathematical representations, i.e., in an analytical framework. To understand more about the nature of the physical system under investigation, experiments are performed, yielding new insights. Historically, Robert Boyle was instrumental in establishing experiments as the cornerstone of physical sciences around 1660 (insights later published as Boyle (1682)). Earlier in the century, the philosopher Francis Bacon had introduced modifications to Aristotle’s nearly two-thousand-year-old ideas, establishing what came to be known as the scientific method, in which inductive reasoning plays an important role (Bacon 2000). This paved the way for a modern understanding of scientific inquiry. From this initial thrust our modern knowledge of the world emerged, laying the fertile groundwork on which technology would flourish. All our current technological advances, and the increasing speed at which progress is made, trace back to this initial spark. The resulting edifice of physical theories reaches from:

classical mechanics (Sects. 2.1.1 and 3.1.1) to quantum mechanics (Sects. 4.3.4 and 10.3.2);
special relativity (Sect. 3.2.1) to general relativity (Sects. 4.1 and 10.1.2);
quantum field theory (Sects. 3.1.4, 3.2.2.1, 4.2, and 10.1.1) to the standard model of particle physics (Sects. 4.2, 4.3, and 4.4);
unified field theories (Sect. 4.3.3) to higher-dimensional unification schemes (Sect. 4.3.1).
And, last but not least, electromagnetism (Sect. 2.1.2 and Eq. (4.18)).
5.1.2 The Paradigms of Fundamental Processes
From the fundamental and universal importance of symmetry, three paradigms applicable to physics can be derived:
Mathematical models of the physical world are:
 \({\mathbf {\mathsf{{P}}}}_1^f\) :

independent of the choice of representation in a coordinate system;
 \({\mathbf {\mathsf{{P}}}}_2^f\) :

unchanged by symmetry transformations;
 \({\mathbf {\mathsf{{P}}}}_3^f\) :

constrained to transform according to a symmetry group.
To illustrate \(\textsf {P}_1^f\), imagine an arrow located in space. It has a length and an orientation. In the mathematical world, this can be represented by a vector, labeled a. By choosing a coordinate system, the abstract entity a can be given a concrete numerical representation, \(a=(a_1, a_2, a_3)\). For each axis direction \(x_1, x_2, x_3\), the component \(a_i\) gives the number of increments along the axis onto which the vector is projected. For instance, \(a=(3,5,1)\). The problem is, however, that depending on the choice of the coordinate system, which is arbitrary, the same vector is described very differently, say \(a=(3,5,1)=(0,23.34,17)\). The paradigm above states that the physical content of the mathematical model should be independent of the decision of how one chooses to represent it.
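A minimal numerical illustration of \(\textsf {P}_1^f\) (a hypothetical sketch, not from the text): under a rotation of the coordinate axes, the components of a vector change, while its length, the physical content, does not.

```python
import math

def rotate_z(v, theta):
    """Components of the same vector v, as seen in a frame rotated by theta about the z-axis."""
    c, s = math.cos(theta), math.sin(theta)
    x, y, z = v
    return (c * x + s * y, -s * x + c * y, z)

def norm(v):
    """Length of the vector: the coordinate-independent, physical quantity."""
    return math.sqrt(sum(x * x for x in v))

a = (3.0, 5.0, 1.0)
b = rotate_z(a, math.pi / 3)  # same arrow, different coordinate system

print(b)                  # the components look entirely different
print(norm(a), norm(b))   # but the length is identical
```

The arbitrary rotation angle and the example vector \((3,5,1)\) from the text are illustrative choices; any rotation leaves the norm invariant.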
5.1.3 The Limitations
In the last chapters, it was unveiled how mathematics underlies physics: from classical mechanics and electromagnetism, via the non-gravitational forces unified in the standard model of particle physics, to the gravitational force. In spite of this tremendous success there is still one omission, relating to the many-body problem. This is a large category of physical problems pertaining to the properties of microscopic systems that are comprised of a large number of interacting entities.
Condensed matter physics attempts to explain the macroscopic behavior of matter based on microscopic properties and quantum effects (Ashcroft and Mermin 1976). It is one of physics’ first ventures into many-body problems in quantum theory. Although the employed notions of symmetry do not act at such a fundamental level as in the above-mentioned theories, they are a cornerstone of the theory. Namely, the complexity of the problems can be reduced using symmetry in order for analytical solutions to be found. Technically, the symmetry groups are boundary conditions of the Schrödinger equation. This leads to the theoretical framework describing, for example, semiconductors. In the superconducting phase (Schilling et al. 1993), the wave function becomes symmetric.
Overall, many-body problems in physics represent a vast category of challenges which are notoriously hard to tackle. Determining the precise physical behavior of systems composed of many entities is, in general, hard, as the number of possible combinations of states increases exponentially with the number of entities to be considered. This intricacy drains the analytical formal representation’s power, as calculations become intractable. Instead, the understanding of many-body problems often relies on approximations specific to the problem being analyzed, resulting in computationally intensive calculations. The algorithmic approach to decoding such complexity, defining a new dichotomy, emerges.

Dan Shechtman’s Nobel prize celebrated not only a fascinating and beautiful discovery, but also dogged determination against the closed-minded ridicule of his peers, including leading scientists of the day. His prize didn’t just reward a difficult but worthy career in science; it put the huge importance and value of funding basic scientific research in the spotlight.
As an example, in classical mechanics the n-body problem describes the challenge of predicting the motions of n celestial bodies interacting with each other via Newton’s law of universal gravity. Already the three-body problem, for instance, describing a Sun-Earth-Moon system given the initial positions, masses, and velocities, yields equations with no closed-form solutions. As a result, numerical methods or computer simulations need to be invoked in order to solve such seemingly simple problems (Valtonen and Karttunen 2006).
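The need for numerical methods can be made concrete with a toy sketch (the masses, initial conditions, and units below are invented for illustration, with G = 1): lacking a closed-form solution, a three-body trajectory is advanced step by step, here with a velocity-Verlet integrator.

```python
import math

def accelerations(pos, masses, G=1.0):
    """Pairwise gravitational accelerations for n bodies in 2D (toy units)."""
    n = len(pos)
    acc = [[0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = pos[j][0] - pos[i][0]
            dy = pos[j][1] - pos[i][1]
            r3 = (dx * dx + dy * dy) ** 1.5
            acc[i][0] += G * masses[j] * dx / r3
            acc[i][1] += G * masses[j] * dy / r3
    return acc

def step(pos, vel, masses, dt):
    """One velocity-Verlet step: there is no formula for the orbit, so we iterate."""
    acc = accelerations(pos, masses)
    for i in range(len(pos)):
        pos[i][0] += vel[i][0] * dt + 0.5 * acc[i][0] * dt * dt
        pos[i][1] += vel[i][1] * dt + 0.5 * acc[i][1] * dt * dt
    new_acc = accelerations(pos, masses)
    for i in range(len(pos)):
        vel[i][0] += 0.5 * (acc[i][0] + new_acc[i][0]) * dt
        vel[i][1] += 0.5 * (acc[i][1] + new_acc[i][1]) * dt

# Three toy bodies, loosely a "star", a "planet", and a "moon".
masses = [1.0, 0.1, 0.01]
pos = [[0.0, 0.0], [1.0, 0.0], [1.1, 0.0]]
vel = [[0.0, 0.0], [0.0, 1.0], [0.0, 1.3]]
for _ in range(1000):
    step(pos, vel, masses, dt=0.001)
print(pos)
```

For real ephemerides one would use adaptive, higher-order integrators; the point here is only that the motion is obtained by iteration, not by solving an equation once.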
A further challenge related to the understanding of systems of many interacting agents, rendering equations moot but emphasizing the power of algorithmic tools, is the discovery of chaos theory (Mandelbrot 1982; Gleick 1987). For instance, the behavior of water molecules in a dripping faucet becomes unpredictable when the system enters the chaotic regime (Shaw 1984). One critical aspect of chaotic systems in nature is their dependence on initial conditions. The Butterfly Effect describes this sensitivity metaphorically: the flapping of the wings of a butterfly creates tiny perturbations in the atmosphere which set the stage for the occurrence of a tornado weeks later. More precisely, the exact values of the initial conditions determine how the system evolves in time. However, as these initial conditions can never be set with infinite accuracy in the real world, the system’s evolution shows a path-dependence. In other words, two dynamical systems with nearly identical initial conditions can end up in two vastly different end states. More on chaos theory is presented in Sect. 5.2.1.
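Sensitivity to initial conditions is easy to demonstrate with the logistic map \(x_{n+1} = r x_n (1 - x_n)\), a standard textbook example chosen here for brevity: two trajectories starting \(10^{-10}\) apart become macroscopically different within a few dozen iterations.

```python
def logistic(x, r=4.0):
    """One step of the logistic map; at r = 4 the dynamics are fully chaotic."""
    return r * x * (1.0 - x)

x, y = 0.2, 0.2 + 1e-10   # two almost identical initial conditions
gap = []
for _ in range(60):
    x, y = logistic(x), logistic(y)
    gap.append(abs(x - y))

print(gap[0])    # still tiny after one step
print(max(gap))  # macroscopic separation within 60 steps
```

The separation grows roughly exponentially until it saturates at the size of the attractor, which is exactly why long-term prediction fails despite the rule being deterministic.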
The term “Butterfly Effect” was coined by Edward Lorenz, a mathematician, meteorologist, and a pioneer of chaos theory. Meteorology is a prime example of how inquiries into the workings of a complex system are stifled by chaotic behavior. In theory, if there existed an arbitrarily fine grid of atmospheric measurement stations scattered all over the world, weather predictions would be accurate. Facing this impossibility, scientists have devised simulational methods able to tackle the uncertainty. As an example, the Monte Carlo method utilizes computational simulations which are repeated many times over with random sampling to obtain numerical results. The key insight is to use the statistical properties of seeming randomness to solve problems that might be deterministic in principle. The algorithmic Monte Carlo methods are most useful when it is difficult or even impossible to use other approaches, like analytical tools.
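A minimal sketch of the Monte Carlo idea (the target quantity, \(\pi\), and the sample size are arbitrary illustrative choices): random sampling turns a deterministic quantity, the area of a quarter-disc, into a statistical estimate.

```python
import random

def estimate_pi(n, seed=42):
    """Monte Carlo estimate of pi: the fraction of uniform random points in the
    unit square that fall inside the quarter-disc, times 4."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    hits = sum(1 for _ in range(n)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * hits / n

print(estimate_pi(100_000))  # close to 3.14159, with statistical error ~1/sqrt(n)
```

The error shrinks as \(1/\sqrt{n}\) regardless of the problem's dimensionality, which is what makes the method attractive where analytical tools fail.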
In essence, while physics has had an amazing success in describing most of the observable universe over the last 300 years, it appears as though its powerful mathematical formalism is ill-suited to address the real-world complexity surrounding and including us: situations where many agents are interacting with each other, ranging from particles, chemical compounds, cells, and biological organisms to celestial bodies, and systems thereof. In order to approach real-life complex phenomena, one needs to adopt a more systems-oriented focus.
5.2 Volume II: Algorithmic Tools and Complex Systems
For centuries, the fundamental-analytical dichotomy of understanding the universe has prevailed. A vast array of knowledge has been accumulated. However, only recently has our focus shifted to the intricate realities of systems of interacting agents surrounding us, contained within us, and comprising us. A new dichotomy relating to the complex-algorithmic classification emerged. Equipped with new computational and simulational tools, we started to probe a new reality domain encompassing complex systems. A true paradigm shift occurred in our understanding, away from the reductionist philosophy prevailing in science towards a holistic, networked, and systems-based outlook.
Complex systems theory is the topic of Chap. 6. Here, in a nutshell, we introduce complex systems and networks, describe the paradigms of the complex-algorithmic dichotomy, and outline the success of this endeavor.
5.2.1 The Paradigms of Complex Systems
A complex system is usually understood as being comprised of many interacting or interconnected parts (or agents). A characteristic feature of such systems is that the whole often exhibits properties not obvious from the properties of the individual parts. This is called emergence. In other words, a key issue is how the macro behavior emerges from the interactions of the system’s elements at the micro level. Moreover, complex systems also exhibit a high level of adaptability and self-organization. The domains complex systems originate from are mostly socio-economic, biological, or physico-chemical (Chaps. 6 and 7).
In the same vein, it is far from clear how to get from a description of quarks and leptons, via DNA, to an understanding of the human brain and consciousness. It appears as though these hierarchical levels of order defeat any reductionistic attempts of understanding by their very design.

At each stage [of complexity] entirely new laws, concepts, and generalizations are necessary [...]. Psychology is not applied biology, nor is biology applied chemistry.
As discussed, complex systems are usually very reluctant to be cast into closedform analytical expressions. This means that it is generally hard to derive mathematical quantities describing the properties and dynamics of the system under study. If the paradigms of fundamental processes described on Page 143 fail, what is needed to replace them? Indeed, can we even hope to find such succinct guiding principles a second time? Remarkably and, again, unexpectedly, the answer is yes. The paradigms of complex systems are, once again, very concise:
 \({\mathbf {\mathsf{{P}}}}_1^c\) :

Every complex system is reduced to a set of objects and a set of functions between the objects.
 \({\mathbf {\mathsf{{P}}}}_2^c\) :

Macroscopic complexity is the result of simple rules of interaction at the micro level.
\(\textsf {P}_1^c\) is reminiscent of the natural problem solving philosophy of objectoriented programming, where the objects are implementations of classes (code templates) interacting via functions (public methods). A programming problem is analyzed in terms of objects and the nature of communication between them. When a program is executed, objects interact with each other by sending messages. The whole system obeys specific rules (encapsulation, inheritance, polymorphism, etc.). See, for instance Gamma et al. (1995).
Similarly, in the mathematical field of category theory a category is defined as the most basic structure: a set of objects and a set of morphisms (maps between the objects) (Hillman 2001). Special types of mappings, called functors, map categories into each other. Category theory was understood as the “unification of mathematics” in the 1940s. A natural incarnation of a category is given by a graph or a network, where the nodes represent the objects and the links describe their relationships or interactions. Now the structure of the network (i.e., the topology) determines the function of the network. This new science of networks, emerging from the study of complex systems and building on the formal representation of \(\textsf {P}_1^c\) as a graph, is presented in Sect. 5.2.3.
Paradigm \(\textsf {P}_2^c\), the topic of the following section, describes how order emerges out of chaos, driven by a set of simple rules describing the interaction of the parts making up a complex system. Together, these two paradigms represent a shift away from mathematical models of reality towards algorithmic models, computing and simulating reality. In other words, a change in modus operandi from the fundamental-analytical to the complex-algorithmic dichotomy has occurred. Now, the analytical description of complex systems can be abandoned in favor of algorithms describing the interaction of the objects, i.e., agents, in a system, according to specified rules of local interaction. This is the fundamental distinguishing characteristic outlined on the right-hand side of Fig. 5.2. Instead of encoding certain aspects of reality into mathematical equations, computers are now programmed with step-by-step recipes conjured up to tackle problems. Only by letting the algorithm run is new knowledge generated, and the design of algorithms and the existence of algorithmic solutions become relevant.
This prominent approach is called agent-based modeling. One key realization is that the structure and complexity of each agent can be ignored when one focuses on their interactional structure. Hence the neurons in a brain, the chemicals interacting in metabolic systems, ants foraging, animals in swarms, humans in a market, etc., can all be understood as featureless interacting agents and modeled within this paradigm. By encapsulating the algorithms into a system of agents, complex behavior can be simulated. Some successful agent-based models are Axelrod (1997), Lux and Marchesi (2000), Schweitzer (2003), Andersen and Sornette (2005), Miller et al. (2008), Šalamon (2011), and Helbing (2012).
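A minimal agent-based sketch (a kinetic wealth-exchange toy of my own choosing, not one of the cited models): featureless agents start out perfectly equal, and random pairwise interactions alone produce a strongly unequal macroscopic wealth distribution, an emergent property no single rule mentions.

```python
import random

def exchange_model(n_agents=500, n_steps=20_000, seed=1):
    """Agents start with equal wealth; random pairwise exchanges conserve the
    total, yet a strongly unequal (roughly exponential) distribution emerges."""
    rng = random.Random(seed)
    wealth = [1.0] * n_agents
    for _ in range(n_steps):
        i, j = rng.randrange(n_agents), rng.randrange(n_agents)
        if i == j:
            continue
        pool = wealth[i] + wealth[j]      # local interaction: pool ...
        share = rng.random()              # ... and split at random
        wealth[i], wealth[j] = share * pool, (1.0 - share) * pool
    return wealth

wealth = exchange_model()
print(min(wealth), max(wealth), sum(wealth))  # total conserved, spread is large
```

Note how the macro-level inequality is not encoded anywhere in the micro-level rule, illustrating \(\textsf {P}_2^c\).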
5.2.2 The Science of Simple Rules
Now compounding the enigma is the discovery of Volume II. What appeared as intractable complexity from afar is uncovered to be the result of simple rules of interaction close-up. First, the universe speaks a mathematical language the human mind can discover or create. Then, what appeared as hopeless complicatedness is in fact derived from pure simplicity.

The eternal mystery of the world is its comprehensibility. The fact that it is comprehensible is a miracle.
A New Kind of Science
Although the simplicity of complexity (Chap. 6) has attracted less philosophical interest than the “unreasonable effectiveness of mathematics”, some scientists have expressed their total bewilderment at the realization. For instance, Stephen Wolfram, a physicist, computer scientist, and entrepreneur. Wolfram started his academic career as a child prodigy, publishing his first peer-reviewed, single-author paper in particle physics at the age of sixteen (Wolfram 1975). Three years later, a publication appeared which is still relevant and referenced today, forty years later (Fox and Wolfram 1978). In 1981, he was awarded a MacArthur Fellowship,^{3} colloquially known as the “Genius Grant”, a prize awarded annually to researchers who have shown “extraordinary originality and dedication in their creative pursuits and a marked capacity for self-direction”. In parallel, Wolfram led the development of the computer algebra system called SMP (Symbolic Manipulation Program) in the Caltech physics department during 1979–1981. A dispute with the administration over the intellectual property rights regarding SMP eventually caused him to hand in his resignation. Continuing on this computational journey, Wolfram began the development of Mathematica in 1986. This was a mathematical symbolic computation program which would become an invaluable tool used in many scientific, engineering, mathematical, and computing fields. In 1987, the private company Wolfram Research Inc. was founded, releasing Mathematica Version 1.0 in 1988. By 1990, Wolfram Research reached $10 million in annual revenue.^{4} Today, Mathematica (Version 11.2.0) remains highly influential, and most of its code is written in the Wolfram Language, a general multi-paradigm programming language developed by Wolfram Research.
However, Wolfram’s biggest fascination lies with complexity. It started with his work on cellular automata in 1981. These are discrete models studied in computability theory, mathematics, physics, complexity science, theoretical biology, and microstructure modeling. A cellular automaton consists of a regular grid of cells, each in one of a finite number of states. A famous cellular automaton was devised by the mathematician John Conway in 1970, called the Game of Life (Gardner 1970). It is an infinite two-dimensional orthogonal grid of square cells, each of which can be in one of two states (dead or alive). The game evolves according to four simple rules and the whole dynamics are solely determined by the choice of the initial state. The Game of Life attracted a lot of attention due to the complex patterns that could emerge from the interaction of the game’s simple rules. In essence, it was an early computational implementation demonstrating emergence and self-organization. In 1987, Wolfram founded the journal Complex Systems,^{5} “devoted to the science, mathematics and engineering of systems with simple components but complex overall behavior”. This fascination with complexity had life-changing consequences for him.
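The rules of the Game of Life fit in a few lines of code (the set-based implementation below is one common idiom, not Conway's original formulation): a dead cell with exactly three live neighbours is born, and a live cell with two or three live neighbours survives.

```python
from collections import Counter

def life_step(alive):
    """One generation of Conway's Game of Life; `alive` is a set of (x, y) cells."""
    # Count, for every location, how many live cells border it.
    neighbours = Counter(
        (x + dx, y + dy)
        for (x, y) in alive
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {cell for cell, n in neighbours.items()
            if n == 3 or (n == 2 and cell in alive)}

# A "blinker": three cells in a row oscillate with period two.
blinker = {(0, 1), (1, 1), (2, 1)}
print(life_step(blinker))                         # the vertical triple
print(life_step(life_step(blinker)) == blinker)   # True: back to the start
```

Feeding in larger random initial states produces gliders, oscillators, and still lifes, the emergent zoo the text describes, from exactly these rules.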
Developing this new science would become his passion. In 1991, Wolfram set out to realize this vision, resulting in the 2002 book, A New Kind of Science, a one-thousand-two-hundred-page tour de force (Wolfram 2002). During the time of writing, Wolfram became nocturnal and reclusive, totally devoted to his project. Indeed, when he realized that there was no publisher who could print the book with the quality he envisioned for the diagrams, he simply founded Wolfram Media Inc. to do the job. See Levy (2002) for more anecdotes. The book begins by setting the stage with the demarcation described in Fig. 5.2 (Wolfram 2002, p. 1):

Just over twenty years ago I made what at first seemed like a small discovery^{6}: a computer experiment of mine showed something I did not expect. But the more I investigated, the more I realized that what I had seen was the beginning of a crack in the very foundations of existing science, and a first clue towards a whole new kind of science.
In other words, Wolfram describes the two opposing formal representations we humans can access: analytical vs. algorithmic. In essence, “the big idea is that the algorithm is mightier than the equation” (Levy 2002). In Wolfram’s own words (Wolfram 2002):

Three centuries ago science was transformed by the dramatic new idea that rules based on mathematical equations could be used to describe the natural world. My purpose in this book is to initiate another such transformation, and to introduce a new kind of science that is based on the much more general types of rules that can be embodied in simple computer programs.

Wolfram claims to have re-expressed all of science utilizing the formal language of cellular automata, in essence, simple programs. Indeed, looking at the table of contents reveals the great scope of the topics that are covered:
 1 The Foundations for a New Kind of Science
 2 The Crucial Experiment
 3 The World of Simple Programs
 4 Systems Based on Numbers
 5 Two Dimensions and Beyond
 6 Starting from Randomness
 7 Mechanisms in Programs and Nature
 8 Implications for Everyday Systems
 9 Fundamental Physics
 10 Processes of Perception and Analysis
 11 The Notion of Computation
 12 The Principle of Computational Equivalence
A New Kind of Science was received with skepticism and ignited controversy. However, regardless of how one views Wolfram and his claims, one epiphany remains: the counterintuitive realization that simplicity unlocks complexity (Wolfram 2002, p. 2):

The typical issue was that there was some core problem that traditional methods or intuition had never successfully been able to address—and which the field had somehow grown to avoid. Yet over and over again, I was excited to find that with my new kind of science I could suddenly begin to make great progress—even on problems that in some cases had remained unanswered for centuries.
Furthermore (Wolfram 2002, p. 19):

Indeed, even some of the very simplest programs that I looked at had behavior that was as complex as anything I had ever seen. It took me more than a decade to come to terms with this result, and to realize just how fundamental and far-reaching its consequences are.
And I realized, that I had seen a sign of a quite remarkable and unexpected phenomenon: that even from very simple programs behavior of great complexity could emerge.

Until this phenomenon was reliably demonstrated and studied by Wolfram, people expected simple rules of interaction to lead to mostly simple outcomes. Discovering simplicity to be the spawning seed of complex behavior was truly unexpected. But perhaps the boldest claim in the book relates to the computational nature of the universe. Wolfram invokes a radical new level of reality, where beneath the laws of physics there lies a computational core. This theme will reappear in Chap. 13.
Quadratic and Logistic Maps
Another archetypal theme describing how simplicity encodes complexity comes from chaos theory. This time the notion is nested deep within mathematics itself and comes in the guise of fractal sets. Fractals are very particular abstract mathematical objects. The term was coined by the mathematician Benoît Mandelbrot (Mandelbrot 1975). Fractals came to prominence in the 1980s with the advent of chaos theory, as the graphs of most chaotic processes display fractal properties (Mandelbrot 1982)—that is, foremost, self-similarity. This is the property of an object to contain, exactly or approximately, similar parts of itself. For instance, a coastline is self-similar: parts of it show the same statistical properties at many scales (Mandelbrot 1967). Such a characteristic is also called scale invariance, a topic discussed in Sect. 6.4 in the context of scaling laws. Indeed, many naturally occurring objects display fractal properties. So much so, that Mandelbrot chose the title of his seminal and hugely influential work on fractals and chaos theory to read: The Fractal Geometry of Nature (Mandelbrot 1982).
Before Mandelbrot and others^{7} first saw the intricate shape of the fractal set named after him in the late 1970s, no one could have imagined that such a simple equation, \(z_{n+1} = z_n^2 + c\), had the power to encode such a wealth of structure. In essence, the simple rule of the iterative map contains an infinitude of complexity. Anywhere on the boundary of the Mandelbrot set, one can zoom in, theoretically indefinitely, and keep on rediscovering new delicate structures and patterns of stunning complexity. This is another prime example of \(\textsf {P}_2^c\): a seductively simple procedure results in one of the most complex objects in mathematics.
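The generating rule can be sketched directly (the escape radius 2 and the iteration cap are the standard conventions for rendering the set): a point \(c\) belongs to the Mandelbrot set if the orbit of \(z_{n+1} = z_n^2 + c\), started at \(z_0 = 0\), never escapes to infinity.

```python
def escape_time(c, max_iter=100):
    """Iterate z -> z*z + c from z = 0; return the step at which |z| exceeds 2,
    or max_iter if the orbit stays bounded (c is then taken to lie in the set)."""
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:
            return n
    return max_iter

print(escape_time(0j))        # bounded: 0 lies in the Mandelbrot set
print(escape_time(1 + 0j))    # the orbit 0, 1, 2, 5, ... escapes almost immediately
print(escape_time(-1 + 0j))   # the orbit 0, -1, 0, -1, ... cycles forever
```

Coloring each pixel of the complex plane by its escape time is all it takes to render the famous images; the entire infinitude of structure sits in those few lines.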
5.2.3 The New Science of Networks
The formal mathematical structures describing networks are graphs. The nearly three-hundred-year history of graph theory is briefly discussed in Sect. 5.3.2, where the notion of a random graph takes center stage around 1960. This fruitful marriage of probability theory and graph theory resulted in much successful scholarly work. Indeed (quoted in Newman et al. 2006, p. 4):

In the late 1990s the study of the evolution and structure of networks became a new field in physics.

So what is there to add in terms of a new science of networks (quoted in Newman et al. 2006, p. 4)?

If graph theory is such a powerful and general language and if so much beautiful and elegant work has already been done, what room is there for a new science of networks?

The authors then offer the following answers (quoted in Newman et al. 2006, p. 4):

We argue that the science of networks that has been taking shape over the last few years is distinguished from preceding work on networks in three important ways: (1) by focusing on the properties of real-world networks, it is concerned with empirical as well as theoretical questions; (2) it frequently takes the view that networks are not static, but evolve in time according to various dynamical rules; and (3) it aims, ultimately at least, to understand networks not just as topological objects, but also as the framework upon which distributed dynamical systems are built.

The first glimpse of this new science of networks came from sociology in the late 1960s. A milestone was the work of Mark Granovetter on the spread of information in social networks (Granovetter 1973). He realized that more novel information flows to individuals through weak rather than strong social ties, coining the phrase “the strength of weak ties.” Since our close friends move in similar circles to us, the information they have access to overlaps significantly with what we already know. Acquaintances, in contrast, know people we do not know and hence have access to novel information sources. Another topic of interest was the interconnectivity of individuals in social networks. Stated simply, how many other people does each individual in a network know? Stanley Milgram devised an ingenious, albeit simple, experiment in 1969. The unexpected results propelled a novel concept into the public consciousness: the notion of the small-world phenomenon, colloquialized as “six degrees of separation” (Milgram 1967; Travers and Milgram 1969). In a nutshell (Newman et al. 2006, p. 16):

Milgram’s experiments started by selecting a target individual and a group of starting individuals. A package was mailed to each of the starters containing a small booklet or “passport” in which participants were asked to record some information about themselves. Then the participants were to try and get their passport to the specified target person by passing it on to someone they knew on a first-name basis who they believed either would know the target, or might know somebody who did. These acquaintances were then asked to do the same, repeating the process until, with luck, the passport reached the designated target. At each step participants were also asked to send a postcard to Travers and Milgram, allowing the researchers to reconstruct the path taken by the passport, should it get lost before it reached the target.

The researchers recruited 296 starting individuals from Omaha, Nebraska and Boston, and targeted a stockbroker living in a small town outside Boston. 64 of the 296 chains reached the target, with the median number of acquaintances from source to target being 5.2. In other words, a median of six steps along the chain were required. This is a surprisingly short distance and an unexpected result, considering the potential size of the analyzed network. As a modern example, researchers set up an experiment in which over 60,000 email users tried to reach one of 18 target persons in 13 countries by forwarding messages to acquaintances. They also found that the average chain length was roughly six (Dodds et al. 2003). In another experiment, the microblogging service Twitter was analyzed in 2009. At the time, it comprised 41.7 million user profiles and 1.47 billion social relations, and the average path length was found to be 4.12 (Kwak et al. 2010).
In 1998, Duncan Watts and Steven Strogatz introduced the small-world network model to replicate this small-world property found in more and more real-world networks (Watts and Strogatz 1998). They identified two independent structural features according to which graphs can be classified. The clustering coefficient is a measure of the degree to which nodes in a graph tend to cluster together, derived from the number of triangles present in the network. The second classification measure is the average shortest path length, the key parameter of small-world networks. Applying these quantities to random graphs, constructed according to the prototypical Erdős–Rényi model,^{8} reveals a small average path length (usually varying as the logarithm of the number of nodes) along with a small clustering coefficient. In contrast, small-world networks are characterized by a high clustering coefficient and a small average path length. The algorithm introduced in the Watts–Strogatz model starts from a regular ring lattice, i.e., a graph with n nodes each connected to k neighbors, and imposes a probability p for the rewiring of links (excluding self-loops). These models also turned out to be receptive to a variety of techniques from statistical physics, attracting “a good deal of attention in the physics community and elsewhere” (Newman et al. 2006, p. 286).
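The rewiring procedure just described can be sketched in a few lines of pure Python (a minimal illustration, not the authors’ original code; the function names, the seed, and the assumption that n is much larger than k are ours):

```python
import random

def watts_strogatz(n, k, p, seed=0):
    """Ring lattice of n nodes, each tied to its k nearest neighbors
    (k even, n >> k); each lattice link is rewired with probability p,
    avoiding self-loops and duplicate links."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(1, k // 2 + 1):
            u, v = i, (i + j) % n
            adj[u].add(v)
            adj[v].add(u)
    for u in range(n):
        for j in range(1, k // 2 + 1):
            v = (u + j) % n
            if v in adj[u] and rng.random() < p:
                w = rng.randrange(n)
                while w == u or w in adj[u]:
                    w = rng.randrange(n)
                adj[u].discard(v)
                adj[v].discard(u)
                adj[u].add(w)
                adj[w].add(u)
    return adj

def clustering(adj, i):
    """Local clustering coefficient of node i: the fraction of pairs
    of i's neighbors that are themselves linked (triangle counting)."""
    nb = list(adj[i])
    if len(nb) < 2:
        return 0.0
    links = sum(1 for a in range(len(nb)) for b in range(a + 1, len(nb))
                if nb[b] in adj[nb[a]])
    return 2.0 * links / (len(nb) * (len(nb) - 1))
```

For p = 0 the lattice is untouched and highly clustered (for k = 4 each node’s coefficient is 1/2); as p grows, a few rewired shortcuts shrink the average path length while the clustering stays comparatively high, which is the small-world signature.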
Note that although scale-free networks are also small-world networks, the converse is not always true. However, many real-world complex networks show both scale-free and small-world characteristics. In Fig. 5.5, examples of networks with various levels of structure are shown. The capacity of complex networks in general to capture and encode the organizational architecture of complex systems is what ushered in the new science of complexity, explained in Chap. 6.
5.2.4 The Success
It is remarkable that a multitude of simple interactions can result in overall complex behavior exhibiting properties like emergence, adaptivity, resilience, and sustainability. Moreover, the fact that order and structure can arise from local interactions between parts of an initially disordered system is astonishing. Indeed, the universe has always been governed by this structure formation mechanism, self-organizing itself into ever more complex manifestations. From an initial singularity with no structure, the universe appears, at least in our vicinity, to be spontaneously evolving towards ever more order, albeit with no external agency and despite the second law of thermodynamics forcing the entropy—the level of disorder—of the universe to increase over time.^{11} Mysterious as these processes may appear, the study of complex systems gives us insights into the mechanisms governing complexity. Moreover, should there exist an unseen fundamental force in the universe, driving it to ever more complexity, then the emergence of first life and later consciousness is perhaps less wondrous.
In essence, complexity does not stem from the number of participating agents in the system but from the number of interactions among them. For instance, there are about 20,000–25,000 genes in a human (International Human Genome Sequencing Consortium 2004). In contrast, bread wheat has nearly 100,000 genes (Brenchley et al. 2012). Thus the complexity of humans is evidently not a result of the size of our genome. It is crucial how the genes express themselves, meaning how the information encoded in a gene is used in the synthesis of functional gene products, such as proteins. The gene regulatory network is a collection of molecular regulators that interact with each other to govern the gene expression levels (Brazhnik et al. 2002).
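One common idealization of such gene regulatory interactions is a Boolean network, in which each gene is either expressed or not, and its next state is a logical function of its regulators. The three genes and their rules below are invented purely for illustration; they are not taken from the cited literature:

```python
# Hedged sketch: a tiny gene regulatory network as a Boolean network.
# State = (A, B, C), each gene either expressed (True) or silent (False).
def step(state):
    """One synchronous update: each gene's next expression level is a
    Boolean function of its regulators' current levels."""
    a, b, c = state
    return (
        not c,       # gene A is repressed by C
        a,           # gene B is activated by A
        a and b,     # gene C requires both A and B
    )

def trajectory(state, n):
    """Iterate the network n times, returning the visited states."""
    states = [state]
    for _ in range(n):
        states.append(step(states[-1]))
    return states
```

Even this toy network illustrates the point made above: the behavior is determined not by the number of genes but by the wiring of their interactions.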
This novel interaction-based outlook also highlights the departure from a top-down to a bottom-up approach to understanding complexity. A top-down philosophy is associated with clear centralized control or organization. In contrast, bottom-up approaches are akin to decentralized decision-making: the control or organization is spread out over a network. For instance, it was once thought that the brain would, like a computer, have a CPU, a central processing unit responsible for top-down decision-making (Whitworth 2008). Today, we know that the information processing in our brains is massively parallel (Alexander and Crutcher 1990), decentralized into a neural network, giving rise to highly complex, modular, and overlapping neural activity (Berman et al. 2006).
A paradigmatic example of such decentralized organization is the flocking behavior of birds, which can be simulated by imposing three simple steering rules on each agent:

1. Separation: steer to avoid a crowding of agents.

2. Alignment: steer towards the average heading of local agents.

3. Cohesion: steer to move toward the average position of local agents.
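The three steering rules above can be sketched as a single velocity update per agent (a hedged, minimal 2D illustration; the function names, the weights w_sep, w_ali, w_coh, and the speed cap are our own assumptions, not a canonical implementation):

```python
import math

def limit(vx, vy, cap):
    """Clamp a velocity vector to a maximum speed."""
    m = math.hypot(vx, vy)
    return (vx, vy) if m <= cap or m == 0 else (vx / m * cap, vy / m * cap)

def steer(boid, neighbours, w_sep=1.5, w_ali=1.0, w_coh=1.0, max_speed=2.0):
    """One update step for a boid (x, y, vx, vy), combining
    separation, alignment, and cohesion over its local neighbours."""
    x, y, vx, vy = boid
    if not neighbours:
        return (x + vx, y + vy, vx, vy)
    n = len(neighbours)
    # 1. separation: steer away from the average neighbour offset
    sx = sum(x - nx for nx, _, _, _ in neighbours) / n
    sy = sum(y - ny for _, ny, _, _ in neighbours) / n
    # 2. alignment: steer towards the average neighbour heading
    ax = sum(nvx for _, _, nvx, _ in neighbours) / n - vx
    ay = sum(nvy for _, _, _, nvy in neighbours) / n - vy
    # 3. cohesion: steer towards the average neighbour position
    cx = sum(nx for nx, _, _, _ in neighbours) / n - x
    cy = sum(ny for _, ny, _, _ in neighbours) / n - y
    vx += w_sep * sx + w_ali * ax + w_coh * cx
    vy += w_sep * sy + w_ali * ay + w_coh * cy
    vx, vy = limit(vx, vy, max_speed)
    return (x + vx, y + vy, vx, vy)
```

Iterating this update over all agents, each seeing only its local neighbours, produces coherent flock-like motion with no central controller, which is precisely the bottom-up point being made here.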
Many problems that were hitherto hard, or even impossible, to tackle suddenly become accessible and tractable with the application of the paradigms of complex systems. In detail, the organizing principles and the evolution of dissipative, real-world complex systems, which are inherently unpredictable, stochastic in nature, and plagued by nonlinear dynamics, can now be understood by analyzing the architecture of the underlying network topology or by computer simulations. Hence, more patterns and regularities in the natural world are uncovered, for instance, earthquake correlations (Sornette and Sornette 1989), crowd dynamics (Helbing et al. 2000), traffic dynamics (Treiber et al. 2000), pedestrian dynamics (Moussaïd et al. 2010), population dynamics (Turchin 2003), urban dynamics (Bettencourt et al. 2008), social cooperation (Helbing and Yu 2009), and market dynamics (see Sect. 7.3). Recall the mentioned selection of effective agent-based models (Axelrod 1997; Lux and Marchesi 2000; Schweitzer 2003; Andersen and Sornette 2005; Miller et al. 2008; Šalamon 2011; Helbing 2012). Chapter 6 is exclusively devoted to the successful treatment of complex systems and Chap. 7 discusses finance and economics.
5.3 The Profound Unifying Powers of Mathematics
The two volumes of the Book of Nature appear to speak two different formal dialects. While Volume I is written in an equation-based mathematical language, Volume II utilizes an algorithmic formal representation, intelligible to computers. This section uncovers how a mathematical idiom also underpins the algorithmic abstraction. In essence, the entirety of mathematics incorporates both formal strands and hence unifies all human knowledge generation in one consolidated formal representation. The journey leading to this realization begins in pre-Socratic Greece and touches on the Protestant Reformation, the Jesuits, Newton, Galileo Galilei, the bridges of Königsberg, and digital information (bits). Before embarking on this voyage, the edifice of mathematics requires a closer inspection.
There is one general demarcation line to be found in mathematics, splitting the subject matter into continuous and discrete renderings. Most non-mathematicians only come into contact with the continuous implementation of mathematics,^{14} for instance, by being exposed to calculus, geometry, algebra, or topology. While the branch of discrete mathematics deals with objects that can assume only distinct, separated values, continuous mathematics considers objects that can vary smoothly.^{15}
Philosophically, the schism between continuity and discreteness originated in ancient Greece with Parmenides, who asserted that the ever-changing nature of reality is an illusion obscuring its true essence: an immutable and eternal continuum. Still in modern times, this intellectual battle between viewing the nature of reality as fundamentally continuous or discrete is being fought. Charles Peirce proposed the term synechism to describe the continuous nature of space, time, and law (Peirce 1892). A related mystery is the question of whether reality is infinite or not. Immanuel Kant, for instance, came to the startling conclusion that the world is “neither finite nor infinite” (Bell 2014). In contrast, the triumph of “atomism,” i.e., the atomic theory developed in physics and chemistry, only applies to matter and forces, conjuring up the following image: the discrete entities making up the contents of the universe act in the arena of continuous spacetime. This view goes to the heart of Leibniz’s philosophical system, called monadism, in which space and time are continua, but real objects are discrete, comprised of simple units he called monads (Furth 1967).
There are, however, modern efforts to discretize space and time as well, in effect bringing the quantum revolution to an even deeper level. This proposition goes to the very heart of one of theoretical physics’ most pressing problems: the incompatibility of quantum field theory (Sects. 3.2.2.1 and 3.1.4), describing all particles and their (non-gravitational) interactions, and general relativity (Sects. 4.1 and 10.1.2), decoding gravity. Quantum theory, by its very name, deals with discrete entities, while general relativity describes a continuous phenomenon. For decades, string/M-theory was hailed as the savior, but to no avail (Sect. 4.3.2). These issues are discussed in Sect. 10.2.
Despite the clear top-level separation of mathematics into these two proposed themes, there also exist overarching concepts linking the continuous and the discrete. Indeed, many ideas in mathematics can be expressed in either language, and often discrete companions to continuous notions are to be found,^{16} and vice versa. Specifically, the discrete counterpart of a differential equation^{17} is called a recurrence relation, or difference equation. Examples of such equations were given in Sect. 5.2.1, discussing chaos theory.^{18} Furthermore, what is known as time-scale calculus unifies the theory of difference equations with that of differential equations. In detail, dynamic equations on time scales are a way of unifying and extending continuous and discrete analysis (Bohner and Peterson 2003). One powerful mathematical theory, spanning both worlds, is group theory. It was encountered in its continuous expression in Chap. 3, specifically the continuous symmetries described by Lie groups (Sect. 3.1.2), arguably the most fruitful concept in theoretical physics (Chaps. 3 and 4). In its discrete version, group theory underlies modern-day cryptography, utilizing discrete logarithms, giving rise to the modern decentralized economy fueled by blockchain technology (Sect. 7.4.3). But perhaps the most interesting mathematical chimera is the fractal. It is defined by the discrete difference equation (5.2), but its intricate border (seen in Fig. 5.4) is continuous and hence infinite in detail, allowing one to indefinitely zoom into it and witness its mesmerizing self-similar nature.
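This kinship between difference and differential equations can be made concrete with the logistic system, a standard example from chaos theory (a hedged sketch; the function names are ours). The recurrence below is the famous logistic map, while the second function Euler-discretizes its continuous cousin, the differential equation dx/dt = r x (1 - x):

```python
def logistic_map(r, x0, n):
    """Iterate the recurrence x_{k+1} = r * x_k * (1 - x_k) n times."""
    x = x0
    for _ in range(n):
        x = r * x * (1 - x)
    return x

def logistic_ode_euler(r, x0, t, steps):
    """Euler-discretize dx/dt = r x (1 - x) up to time t: the
    continuous flow turned into a difference equation with step t/steps."""
    dt = t / steps
    x = x0
    for _ in range(steps):
        x = x + dt * r * x * (1 - x)
    return x
```

For small time steps the Euler iteration settles on the fixed point x = 1, whereas the bare map with r = 4 hops chaotically through the unit interval: the same algebra, read as a flow or as a recurrence, yields tame or chaotic behavior.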
5.3.1 The Continuous—A History
The process of finding the derivative, i.e., the mechanism of differentiation, not only lies at the heart of contemporary mathematics but also marks the birth of modern physics. It builds on a hallmark abstract notion that first appeared in pre-Socratic Greece and can be seen in the calculations performed by Democritus (Boyer 1968), the proponent of physical atomism (see Sect. 3.1), in the 5th century B.C.E. Since then, this novel idea entered and left the collective human consciousness at various times in history. The concept in question is the abstract idea of infinitesimals. As an example, a continuous line is thought to be composed of infinitely many distinct but infinitely small parts. In general, the concept of infinitesimals is closely related to the notion of the continuum, a unified entity with no discernible parts which is infinitely divisible. In this sense, a global perspective yields the continuum, while an idealized local point of view uncovers its ethereal constituents, the infinitesimals (Bell 2014). The idea of infinitesimals is a deceptively benign proposition, but nonetheless problematic and even dangerous.
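In modern terms, this local viewpoint survives as the derivative, defined through the limit of difference quotients. A minimal numerical sketch (the helper name and the step size h are illustrative choices):

```python
def derivative(f, x, h=1e-6):
    """Approximate f'(x) by a central difference quotient: the
    infinitesimal neighborhood of x shrunk to a small but finite h."""
    return (f(x + h) - f(x - h)) / (2 * h)
```

For example, derivative(lambda t: t**2, 3.0) is numerically close to the exact value 6, the slope of t² at t = 3.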
Ancient Greece
One account has it that the Pythagoreans expelled one of their own philosophers, Hippasus, from their order, and possibly even killed him, as he had discovered “incommensurable magnitudes” (Boyer 1968). Hippasus understood that it is impossible to compare, for instance, the diagonal of a square with its side, no matter how small a unit of measure is chosen. In essence, this is a consequence of the existence of irrational numbers. These are real numbers that cannot be expressed as a ratio of integers. In other words, irrational numbers cannot be represented with terminating or repeating decimals. Looking at a square of unit length, its diagonal is given, ironically, by the Pythagorean theorem \(a^2+b^2=c^2\), which yields \(c=\sqrt{2} = 1.4142\dots\), a number with infinitely many non-repeating digits. Other famous irrational numbers, magically appearing everywhere in mathematics and physics, are \(\pi =3.1415\dots \) and \(\exp (1)=2.7182\dots \) Currently, the record computation of \(\pi \) has revealed \(1.21 \times 10^{13}\) digits (Yee and Kondo 2013). Irrational numbers posed a great threat to the fundamental tenet of Pythagoreanism, which asserted that the essence of all things is related to whole numbers, igniting the conflict with Hippasus.
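The classic argument behind Hippasus’ discovery fits in a few lines. Suppose, for contradiction, that \(\sqrt{2}\) were a ratio of integers in lowest terms:

```latex
\sqrt{2} = \frac{p}{q}
\;\Rightarrow\; p^2 = 2q^2
\;\Rightarrow\; p \text{ is even, say } p = 2m
\;\Rightarrow\; 4m^2 = 2q^2
\;\Rightarrow\; q^2 = 2m^2
\;\Rightarrow\; q \text{ is even.}
```

Both p and q being even contradicts the assumption that the fraction was in lowest terms; hence \(\sqrt{2}\) is irrational, and the diagonal and side of a square are indeed incommensurable.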
This early budding of the notion of the infinitesimal would soon be stifled by associated paradoxes uncovered by the philosopher Zeno. The notorious Zeno’s paradoxes show how infinitesimals lead to logical contradictions. One conundrum argues that before a moving object can travel a certain distance, it must first travel half this distance. But before it can even cover this, the object must travel the first quarter of the distance, and so on. This results in an infinite number of subdivisions, and the beginning of the motion appears impossible because there is no finite instance at which it can start. “The arguments of Zeno seem to have had a profound influence on the development of Greek mathematics [...]” (Boyer 1968, p. 76). “Thereafter infinitesimals are shunned by ancient mathematicians” (Alexander 2014, p. 303), with the exception of Archimedes. Still today, there are discussions on whether Zeno’s paradoxes have been resolved, touching issues regarding the nature of change and infinity (Salmon 2001). It would take another two thousand years before the dormant idea of infinitesimals would reemerge, only to be faced with more antagonism. This time, the threat emanated from the Catholic Church, which saw its hegemony in Western Europe threatened by the power struggles initiated by the Reformation. In the wake of these events, Galileo would be sentenced by the Inquisition in 1633 to house arrest for the last nine years of his life.
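The standard modern reply to the dichotomy paradox treats the infinitely many sub-distances as a convergent geometric series whose total is finite:

```latex
\sum_{n=1}^{\infty} \frac{1}{2^n} = \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots = 1.
```

Whether this mathematical fact settles the philosophical puzzle about change and infinity is precisely the ongoing debate referenced above (Salmon 2001).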
Middle Ages: The Protestant Reformation
In 1517, the Catholic priest Martin Luther launched the Reformation by nailing a treatise comprising 95 theses to a church door, instigating a fundamental conflict between Catholics and Protestants. As a reformation movement, Protestantism under Luther sought “to purify Christianity and return it to its pristine biblical foundation” (Tarnas 1991, p. 234). The Catholic Church was perceived to have experienced irreparable theological decline: “the long-developing political secularism of the Church hierarchy undermining its spiritual integrity while embroiling it in diplomatic and military struggles; the prevalence of both deep piety and poverty among the Church faithful, in contrast to an often irreligious but socially and economically privileged clergy” (Tarnas 1991, p. 234). Moreover, Pope Leo X’s authorization of financing the Church by selling spiritual indulgences—the practice of paying money to have one’s sins forgiven—was seen as a perversion of the Christian essence. Luther’s revolution aimed at bringing the Christian faith back to its roots, where only Christ and the Bible are relevant. In this sense, Protestantism was not only a rebellion against the existing power structure of the Catholic Church, it was also a conservative fundamentalist movement. This combination led to a paradoxical outcome: while the Reformation’s “essential character was so intensely and unambiguously religious, its ultimate effects on Western culture were profoundly secularizing” (Tarnas 1991, p. 240). Indeed, the Protestant work ethic can be seen as laying the foundations for modern capitalism (Weber 1920; see also Sect. 7.4.2). Whereas traditionally the pursuit of material prosperity was perceived as a threat to religious life, now the two are seen as mutually beneficial.
Against the backdrop of the increasing popularity and spread of Protestantism, a counter-reformation in the Catholic Church was launched. It was spearheaded by the Jesuits, a Roman Catholic order established in 1540, dedicated to restoring Church authority. Their emphasis lay on education, and they soon became “the most celebrated teachers on the Continent” (Tarnas 1991, p. 246). In this environment, the Jesuits would confront Galileo, and the idea of infinitesimals would reemerge.
With respect to Galileo, it is quite conceivable that the Church could have reacted in a very different manner. “As Galileo himself pointed out, the Church had long been accustomed to sanctioning allegorical interpretations of the Bible whenever the latter appeared to conflict with the scientific evidence” (Tarnas 1991, p. 259). Indeed, some Jesuit astronomers in the Vatican recognized Galileo’s genius, and he himself was a personal friend of the pope. However, the Protestant threat compounded the perceived risks emanating from any novel and potentially heretical worldview. And so the heliocentric model of the solar system—the Copernican revolution^{19} ignited by the Renaissance mathematician, astronomer, and Catholic cleric Nicolaus Copernicus, fostered by Tycho Brahe and Kepler, and ultimately finding its full potential expressed through Galileo—was banned by Church officials. In this conflict of religion versus science, Galileo was forced to recant in 1633 before being put under house arrest. Not so lucky was the mystical Neoplatonist philosopher and astronomer Giordano Bruno. He espoused the idea that the universe is infinite and that the stars are like our own sun, with orbiting planets, in effect extending the Copernican model to the whole universe (Singer 1950). This suggested a radically new cosmology. Bruno was burned at the stake in 1600. However, the reason for his execution was not his support of the Copernican worldview, but that he was indeed a heretic, holding beliefs which diverged heavily from the established dogma. Next to his liberal view “that all religions and philosophies should coexist in tolerance and mutual understanding” (Tarnas 1991, p. 253), he was a member of the movement known as Hermetism, a cult following scriptures thought to have originated in Egypt at the time of Moses. These heretical beliefs of Bruno on vital theological matters sealed his fate and resulted in a torturous death (Gribbin 2003).
Given the Catholic Church’s efficient, dedicated, and callous modus operandi, why did Luther not get banished as a heretic? First, Pope Leo X long delayed any response to what he perceived as “merely another monk’s quarrel” (Tarnas 1991, p. 235). When Luther finally was stigmatized as a heretic, the political climate in Europe had shifted in a way that allowed this theological insurgence to split the cultural union maintained by the Catholic Church. A second factor was the “printing revolution” initiated by Johannes Gutenberg’s invention of the printing press after 1450. Perhaps marking one of the first viral phenomena, this new technology allowed for the unprecedented dissemination of information. The rise in literacy and the facilitated access to knowledge allowed a new mass of people to participate in discussions which would have been beyond their means not too long ago. Utilizing this new technology, Luther translated the two Biblical Testaments from Hebrew and ancient Greek into German in 1522 and 1534. This work proved to be highly influential and would help pave the way for the emergence of new religious denominations, next to Protestantism, as now many people could offer their personal interpretations, further fracturing the unity of Catholicism.
Middle Ages: The Reemergence of Infinitesimals
Approximately 1,800 years after Archimedes’ work on the areas and volumes enclosed by geometrical figures using infinitesimals, there was finally a revival of interest in this idea among European mathematicians in the late 16th century. A Latin translation of the works of Archimedes, published in 1544, made his techniques widely available to scholars for the first time. Then, in 1616, the Jesuits first clashed with Galileo over his use of infinitesimals. Indeed, even a Jesuit mathematician was prohibited by his superiors from publishing work deemed too close to this dangerous idea. In the eyes of the Jesuits, if the notion of a continuum made up of infinitely many infinitesimally small units were to prevail, “the eternal and unchallengeable edifice of Euclidean geometry would be replaced by a veritable tower of Babel, a place of strife and discord built on teetering foundations, likely to topple at any moment” (Alexander 2014, p. 120). Between the years 1625 and 1658, a cat-and-mouse game would follow, where the Jesuits would condemn the growing interest in infinitesimals, only to be faced with notable publications by mathematicians on the subject. Consult Alexander (2014) for the details.
The introduction of the mathematically sound definition of a limit eventually allowed calculus to be rigorously reformulated in clear mathematical terms, still used today. It is an interesting observation that the idea of infinitesimals has experienced a renaissance in recent decades, establishing the concept on a logically solid basis. One attempt fuses infinitesimal and infinite numbers, creating what is called non-standard analysis. A second endeavor employs category theory to build what is known as smooth infinitesimal analysis. These novel developments shed new light on the nature of the continuum. More details on the history of infinitesimals and the related mathematics are found in Bell (2014) and Alexander (2014).
In a nutshell:
The derivative, a cornerstone of continuous mathematics, lies at the heart of the analytical machinery that is employed to represent fundamental aspects of the physical world, as described in the formal encoding scheme outlined in Fig. 5.1 and detailed in Table 5.1.
Various incarnations of the notion of the derivative are seen to permeate many physical theories as a common thread. The derivative can thus be understood as a unified mathematical underpinning: a simple but powerful abstract framework encoding the physical world. In the table, the acronyms GR and GT refer to general relativity and gauge theory, respectively, and \(G_{\text {SM}}\) is the standard model symmetry group, seen in (4.72).
Domains | Symbols | Equations
Classical mechanics | \(\partial _t, \partial _t^2, \partial _{q^i}, \partial _{\dot{q}^i}\) |
Field theory | \(\partial _\mu , \partial _{\psi ^i}\) | (3.6)
Maxwell equations | \(\partial _t, \nabla \cdot , \nabla \times \) | (2.4)
Covariant Maxwell equations | \(\partial _\mu , \Box \) |
Quantum operators | \(i \partial _t, \nabla /i\) | (3.51)
Schrödinger equation | \(i \partial _t\) | (3.24)
Dirac equation | |
Coordinate transformation (GR) | \(\varDelta ^{\mu ^\prime }_{\ \nu } = \frac{\partial x^{\prime \mu }}{\partial x^{ \nu }}\) | (4.3)
Curvature (GR) | \([\nabla _X, \nabla _Y] - \nabla _{[X,Y]}\) | (4.47)
Covariant derivative (GR) | \(\nabla _\mu = \partial _\mu - \Gamma _{\mu \, \cdot }^{\ \cdot }\) | (4.8)
Covariant derivative (GT) | \(D_\mu = \partial _\mu - A^k_{\mu } \text {X}_k\) |
\(G_{\text {SM}}\)-invariant derivative | \(\mathcal {D}_\mu = \partial _\mu + i \hat{g} G_\mu ^\alpha \lambda _\alpha + i g W^i_\mu \tau _i + i g^\prime B_\mu Y\) | (4.73)
5.3.2 Discrete Mathematics: From Algorithms to Graphs and Complexity
There exists one abstract concept, found in discrete mathematics, which is bestowed with great explanatory power. It is a formal representation that can capture a whole new domain of reality, in that it underpins the algorithmic understanding of complex systems. Metaphorically speaking, the discrete cousin of the continuous derivative is the graph. As a result, the tapestry of mathematics, woven out of the continuous and discrete strands, has the capacity to unify the two disjoint volumes of the Book of Nature. In other words, human knowledge generation is truly and profoundly driven by mathematics.
Discrete mathematics is as old as humankind. The idea behind counting is to establish a one-to-one correspondence (called a bijection) between a set of discrete objects and the natural numbers. Arithmetic, the basic mathematics taught to children, is categorized under the umbrella of discrete mathematics. Indeed, the foundations of mathematics rest on notions springing from discrete mathematics: logic and set theory. Higher discrete mathematical concepts include combinatorics, probability theory, and graph theory. More information on discrete mathematics and its applications can, for instance, be found in Biggs (2003), Rosen (2011), Joshi (1989).
Although continuous mathematics generally enjoys more popularity, discrete mathematics has witnessed a renaissance driven by computer science. Digital information, which is expressed as strings of binary digits—called bits, each existing in one of the two states represented by 0 and 1—lies at the heart of discreteness. In this sense, the development of computers, and information processing in general, builds on insights uncovered in the arena of discrete mathematics. A landmark development in the field of logic was the introduction of Boolean algebra in 1854, in which the variables can only take on two values: true and false (Boole 1854). Then, in 1937, Claude Shannon showed in his master’s thesis how this binary system can be used to design digital circuits (Shannon 1940). In effect, Shannon implemented Boolean algebra for the first time using electronic components. Later, he famously laid the theoretical foundations regarding the quantification, storage, and communication of data, in effect inventing the field of information theory (Shannon 1948). The concepts Shannon developed are at the heart of today’s digital information theory. Shannon and the notion of information are discussed further in Sect. 13.1.2. In summary, the hallmark of modern computers is their digital nature: they operate on information which adopts discrete values. This property is mirrored by the discrete character of the formal representations used to describe these entities; see, for instance, Biggs (2003), Steger (2001a, b). Indeed, the merger of discrete mathematics with computer science has given rise to the new field of theoretical computer science (Hromkovič 2010). In contrast to the technical and applied areas of computer science, theoretical computer science focuses on computability and algorithms. Examples are the methodology concerned with the design of algorithms or the theory regarding the existence of algorithmic solutions.
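Shannon’s observation, that Boolean algebra suffices to specify digital circuits, can be sketched in a few lines (a hedged illustration; the adder decomposition shown is a textbook construction, not Shannon’s own example):

```python
def half_adder(a: bool, b: bool):
    """Add two bits: XOR yields the sum bit, AND the carry bit."""
    return (a != b, a and b)

def full_adder(a: bool, b: bool, carry_in: bool):
    """Chain two half-adders to add three bits."""
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry_in)
    return (s2, c1 or c2)
```

Chaining full adders bit by bit yields binary addition of arbitrary length: arithmetic reduced entirely to logic, which is precisely the bridge between Boole’s algebra and electronic circuitry.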
Paradigm \(\textsf {P}_1^c\) (Sect. 5.2.1) emerges as the crucial guiding principle for the formal representation of complexity. Formally, a complex system can either be mapped directly onto a complex network or described as an evolving network of interacting agents following algorithmic instructions. Both incarnations find their abstraction in the notion of a graph.
Graph Theory
The discrete counterpart to the derivative, a versatile and universal tool in continuous mathematics, is the notion of a graph. In 1735, Leonhard Euler was working on a paper on the seven bridges of Königsberg. The publication of this work (Euler 1741) in effect established the field of graph theory (Biggs et al. 1986; Bollobás 1998). The problem Euler was trying to tackle was to find a walk through the city that would cross each of the seven bridges exactly once. Although he could prove that the problem had no solution, the formal tool Euler employed was revolutionary. As this section details, graphs today play an essential role in mathematics and computer science.
In modern terms, the defining features of a graph \(G=G(V,E)\) are the set of vertices V, or nodes, which are connected by edges, or links, in a set E, where the edge \(e_{ij} \in E\) connects the nodes \(v_i, v_j \in V\). The adjacency matrix \(A=A(G)\) maps the graph’s topology onto the matrix elements \(A_{ij}\), allowing further mathematical operations to be performed on G, as now the powerful tools of linear algebra can be utilized. Finally, the number \(k_i\) of edges attached to vertex i is known as its degree. The degree distribution \(\mathcal {P} (k)\) succinctly captures the network architecture.
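These definitions translate directly into code. A minimal sketch (the function names are illustrative, and the dense matrix representation is the naive choice, suitable only for small graphs):

```python
from collections import Counter

def adjacency_matrix(n, edges):
    """Adjacency matrix A(G) of an undirected graph on nodes 0..n-1:
    A[i][j] = 1 iff the edge e_ij exists."""
    A = [[0] * n for _ in range(n)]
    for i, j in edges:
        A[i][j] = A[j][i] = 1
    return A

def degrees(A):
    """Degree k_i of node i: the row sum of the adjacency matrix."""
    return [sum(row) for row in A]

def degree_distribution(A):
    """P(k): the fraction of nodes having degree k."""
    ks = degrees(A)
    return {k: c / len(ks) for k, c in Counter(ks).items()}
```

For a triangle graph, every node has degree 2, so the degree distribution is concentrated entirely at k = 2; heterogeneous networks, in contrast, spread P(k) over many values.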
This simple formal structure was utilized by Euler as a representation of the problem at hand: he ingeniously encoded the Königsberg bridges as the links and the connected landmasses as the nodes of a small network. Indeed, Euler anticipated the idea of topology: the actual layout of this network, when it is illustrated, is irrelevant, as the essence of the relationships is encoded in the abstract graph itself.
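Euler’s impossibility proof rests on a parity observation: every mid-walk visit to a landmass consumes one incoming and one outgoing bridge, so a walk crossing every bridge exactly once can exist in a connected (multi)graph only if at most two nodes have odd degree. A hedged sketch (the node labels are arbitrary placeholders for the four landmasses):

```python
from collections import Counter

# The Königsberg multigraph: four landmasses joined by seven bridges.
BRIDGES = [("A", "C"), ("A", "C"), ("B", "C"), ("B", "C"),
           ("A", "D"), ("B", "D"), ("C", "D")]

def euler_walk_possible(edges):
    """Euler's parity criterion for a connected multigraph: a walk
    using every edge exactly once exists iff at most two nodes have
    odd degree."""
    degree = Counter()
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    odd = sum(1 for d in degree.values() if d % 2 == 1)
    return odd in (0, 2)
```

All four landmasses of Königsberg have odd degree (3, 3, 5, and 3), so the criterion reports the desired walk as impossible, exactly Euler’s conclusion.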
Euler’s contribution to graph theory represents only a minuscule fraction of his mathematical productivity and “his output far surpassed in both quantity and quality that of scores of mathematicians working many lifetimes. It is estimated that he published an average of 800 pages of new mathematics per year over a career that spanned six decades” (Dunham 1994, p. 51). Indeed, even his deteriorating eyesight, leading to blindness, “was in no way a barrier to his productivity, and to this day his triumph in the face of adversity remains an enduring legacy” (Dunham 1994, p. 55).
In closing:
Complex systems are represented by networks, which are formalized as graphs: a notion from discrete mathematics that lies at the heart of the algorithmic approach employed to represent complex aspects of the physical world, as described in the formal encoding scheme seen in Fig. 5.1.
5.3.3 Unity
To summarize, both mathematical variants—the continuous and the discrete—have one particular property which gives them a special status in their respective volume of the Book of Nature. In other words, each branch has one feature that makes it a powerful tool in the abstract world of formal representations (i.e., the right-hand side of Fig. 5.1). One is the (continuous) operation of differentiation and the other is the (discrete) notion of a graph. While the former unlocks knowledge about the fundamental workings of nature, the latter gives insights into the organizational principles of complex systems.
By introducing the continuous-discrete dichotomy, it is possible to give an underpinning to the formal representations seen on the right-hand side of Fig. 5.2. The analytical formal representation is inextricably tied to the continuous mathematical structure, while the algorithmic formal representation is intimately related to the discrete mathematical structure. This is illustrated in Fig. 5.7. In this sense, the abstract human thought system called mathematics is not only a very powerful probe into reality, it also unifies the two separate formal representations describing the two different reality domains.
To summarize:
The cognitive act of translating specific fundamental and complex aspects of the observable universe into formal representations—utilizing analytical (equation-based) and algorithmic (interaction-based) tools—is the basis for generating vast knowledge about the workings of reality. Specifically, the fundamental and complex reality domains of the physical world are encoded into analytical and algorithmic formal representations, respectively. Underpinning these are the continuous and discrete structures of mathematics.
Digging deeper, continuous mathematics, associated with the analytical formal theme, provides the machinery of differentiation, which plays a fundamental role in the physical sciences. In a similar vein, discrete mathematics, the basis of the algorithmic formal theme, offers graphs as a universal abstract tool able to capture complexity. In this sense, mathematics, understood as the totality of its continuous and discrete branches, is the unifying abstract framework on which the process of translation builds. This overarching formal framework is hosted in the human mind and mirrors the structure and functioning of the physical world, transforming translation into knowledge generation.
This process of human knowledge generation finds its metaphor in the discovery of the two volumes of the Book of Nature, written in the language of mathematics. A graphical overview is presented in Fig. 5.8. The tremendous success of this endeavor can be seen in the dramatic acceleration of technological advancements in recent times, bearing witness to the increasing ability of the human mind to manipulate the physical reality it is embedded in.
5.4 The Book of Nature Reopened
For over 300 years the Book of Nature has revealed insights into the workings of the world. Chapter by chapter, novel understanding was disclosed, from quantum theory to cosmology. The human mind was capable of translating a multitude of quantifiable aspects of reality into formal, abstract representations. Then, by entering this abstract realm, the mind was able to derive new insights, which could be decoded back into the physical world (see Fig. 5.1). This is a truly remarkable feat and the foundation from which the technological advancements of the human species spring.
Figure 5.8 shows a conceptualized illustration of this remarkable achievement. The knowledge generated in this way is the engine driving humanity's astonishing technological advancements (see also the first section of Chap. 8). In essence, this knowledge generation boils down to acts of translation. As illustrated in Fig. 5.1, a reality domain of the physical world is encoded as a formal representation inhabiting the abstract world. Constrained and guided by the rules pertaining to the rich structure of the abstract world, new information can be harnessed, which can then be decoded back into the physical world, yielding novel insights.
Volume I corresponds to the analytical encoding of fundamental processes, \({\mathcal {T}}_{\text {An,Fu}}\).
Volume II corresponds to the algorithmic encoding of complex processes, \({\mathcal {T}}_{\text {Al,Co}}\).
Why has the successful knowledge generation process, giving the human mind access to the intimate workings of the universe, primarily been based on the translational mechanisms \({\mathcal {T}}_{\text {An,Fu}}\) and \({\mathcal {T}}_{\text {Al,Co}}\)? What about the two other translation possibilities \({\mathcal {T}}_{\text {Al,Fu}}\) and \({\mathcal {T}}_{\text {An,Co}}\)?
In Fig. 5.9 all four possible translational mechanisms arising from the dichotomies are shown. Understood as a matrix, primarily the diagonal elements of \({\mathcal {T}}\) are responsible for lifting humanity's veil of ignorance. What do we know about the other two translational possibilities? Do they represent failed attempts at knowledge generation? If so, what is special about the two successful acts of translation? Or will, in the end, the human mind unearth further volumes in the Book of Nature series, guided by the two dormant translational possibilities? This will be the focus of the next section.
This picture of knowledge generation rests on three philosophical assumptions:
 1. There exists an abstract realm of objects transcending physical reality (ontology).
 2. The human mind possesses a quality that allows it to access this world and acquire information (epistemology).
 3. The structures in the abstract world map the structures in the physical world (structural realism, see Sects. 2.2.1, 6.2.2, and 10.4.1).
5.4.1 Beyond Volumes I and II
As observed, the two translational possibilities \({\mathcal {T}}_{\text {Al,Fu}}\) and \({\mathcal {T}}_{\text {An,Co}}\) have not been prominently utilized as knowledge generation mechanisms. This could mean one of two things. Either complex systems are indeed immune to being treated with an equation-based formalism and, conversely, fundamental systems resist being described algorithmically. Or these alternative possibilities have only been sparsely explored to date, still leaving behind mostly uncharted terrain. In the following, some attempts at filling in the blanks are described.
The Complex-Analytical Demarcation
Pattern formation in nature is clearly the result of self-organization in space and time. Alan Turing proposed an analytical mechanism to describe biological pattern formation (Turing 1952). He utilized what is known as reaction-diffusion equations. These are partial differential equations used to describe systems consisting of many interacting components, like chemical reactions. Turing's model successfully^{21} replicates a plethora of patterns, from sea shells to the skin of fish and other vertebrates (Meinhardt 2009; Kondo and Miura 2010). In effect, he proposed an analytical approach to complexity.
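A minimal sketch of such a mechanism may help. The following assumes a Gray-Scott-type reaction-diffusion model in one dimension with illustrative parameter values (not Turing's original system): two chemical concentrations diffuse at different rates and react, and spatial structure emerges from near-uniform initial conditions.

```python
import numpy as np

# Gray-Scott-type reaction-diffusion in 1-D (illustrative parameters).
n, steps = 200, 5000
Du, Dv, F, k = 0.16, 0.08, 0.035, 0.060  # diffusion rates, feed, kill
dt = 1.0

u = np.ones(n)   # substrate concentration
v = np.zeros(n)  # activator concentration
# Perturb the middle of the domain so a pattern can nucleate.
u[n // 2 - 5 : n // 2 + 5] = 0.5
v[n // 2 - 5 : n // 2 + 5] = 0.25

def laplacian(a):
    # Discrete Laplacian with periodic boundary conditions.
    return np.roll(a, 1) + np.roll(a, -1) - 2 * a

for _ in range(steps):
    uvv = u * v * v  # reaction term: u is consumed, v is produced
    u += dt * (Du * laplacian(u) - uvv + F * (1 - u))
    v += dt * (Dv * laplacian(v) + uvv - (F + k) * v)
```

After enough steps, the initially almost homogeneous fields develop spatial structure, illustrating how simple local rules of reaction and diffusion can generate patterns.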
Running agent-based models can sometimes be computationally costly. However, there are analytical shortcuts that can be taken. Instead of simulating the complex system, it can be studied by solving a set of differential equations describing the time evolution of the individual agents' degrees of freedom. Technically, this can be achieved by utilizing Langevin stochastic equations. Each such equation describes the time evolution of the position of a single agent (Ebeling and Schweitzer 2001). From the reaction-diffusion equation, Langevin equations can be derived. See Sect. 7.1.1.1 for the history of the Langevin equations, including Einstein's early work and the Black-Scholes formula for option pricing. Utilizing self-similar stochastic processes for the modeling of random systems evolving in time has also been relevant for their understanding (Embrechts and Maejima 2002); see again Sect. 7.1.1.1.
Langevin equations can be solved analytically or numerically. They describe the individual agents' behavior at the micro level. Moving up to a macroscopic description of the system, what is known as the Fokker-Planck partial differential equation describes the collective evolution of the probability density function of a system of agents as a function of time. The two formalisms can be mapped into each other (Gardiner 1985). As an example, however, simulating 10,000 agents constrained by Langevin equations approximates the macro dynamics of the system more efficiently than a direct attempt at solving the equivalent Fokker-Planck differential equation.
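This micro/macro correspondence can be sketched with a hypothetical Ornstein-Uhlenbeck-type example (not the book's model): an ensemble of 10,000 independent agents, each following the Langevin equation dx = -θx dt + σ dW and integrated with the Euler-Maruyama scheme, reproduces the stationary variance σ²/(2θ) predicted by the associated Fokker-Planck equation.

```python
import numpy as np

# Micro level: 10,000 agents, each obeying an Ornstein-Uhlenbeck-type
# Langevin equation, integrated with the Euler-Maruyama scheme.
rng = np.random.default_rng(42)
n_agents, n_steps, dt = 10_000, 2_000, 0.01
theta, sigma = 1.0, 0.5

x = np.zeros(n_agents)  # all agents start at the origin
for _ in range(n_steps):
    noise = rng.standard_normal(n_agents)
    x += -theta * x * dt + sigma * np.sqrt(dt) * noise

# Macro level: the stationary Fokker-Planck solution is a Gaussian
# with variance sigma**2 / (2 * theta).
empirical = x.var()
predicted = sigma**2 / (2 * theta)
```

The ensemble statistics of the agents converge to the macro-level prediction, illustrating why the two formalisms can be mapped into each other.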
Some scholars have argued against the dictum that complex systems are, in general, not susceptible to mathematical analysis and should hence be investigated by means of simulation (Sornette 2008). Didier Sornette, a physicist, econophysicist, and complexity scientist, offers the insight that the formal analytical treatment of triggering processes between earthquakes can be successfully applied to various complex systems. Examples range from the dynamics of sales of book blockbusters to viewer activity on the YouTube video-sharing website to financial bubbles and crashes (Sornette 2008). Furthermore, he argues that the right level of magnification (level of granularity) in the description of a complex system can reveal order and organization. As a result, pockets of predictability can be detected at some coarse-grained level. This partial-predictability approach is potentially relevant for meteorological, climate, and financial systems. However, a big challenge remains in identifying the complex systems that are susceptible to this approach and finding the right level of coarse-graining.
The Fundamental-Algorithmic Demarcation
Recall from Sect. 5.1.3 the troubles relating to solving gravitational n-body problems. In essence, it does not suffice here to know the analytical encoding of the challenge at hand. The system of differential equations describing the motion of \(n \ge 3\) gravitationally interacting bodies cannot be solved analytically. Only for a few simple, albeit important, problems can Newton's equations be solved. Although the exact solution for the general case can be approximated (via Taylor series or numerical integration), the dynamics are generally best understood utilizing n-body simulations (Valtonen and Karttunen 2006).
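The step-by-step nature of such simulations can be illustrated with a minimal sketch (the initial conditions and units are hypothetical, and production codes use far more sophisticated tree or mesh methods than this direct O(n²) force sum): since no closed-form solution exists for three bodies, the trajectory is obtained by repeated numerical integration.

```python
import numpy as np

G = 1.0  # gravitational constant in natural units

def accelerations(pos, mass, eps=1e-3):
    # Pairwise Newtonian gravity with a small softening length eps
    # to avoid divergences during close encounters.
    acc = np.zeros_like(pos)
    n = len(mass)
    for i in range(n):
        for j in range(n):
            if i != j:
                d = pos[j] - pos[i]
                r = np.sqrt(d @ d + eps**2)
                acc[i] += G * mass[j] * d / r**3
    return acc

# Three bodies in a plane with arbitrary (illustrative) initial conditions.
mass = np.array([1.0, 0.5, 0.5])
pos = np.array([[0.0, 0.0], [1.0, 0.0], [-1.0, 0.0]])
vel = np.array([[0.0, 0.0], [0.0, 0.8], [0.0, -0.8]])

# Leapfrog (kick-drift-kick) integration: the orbit is built step by step,
# since no analytical solution exists for n >= 3.
dt = 0.001
for _ in range(5000):
    vel += 0.5 * dt * accelerations(pos, mass)
    pos += dt * vel
    vel += 0.5 * dt * accelerations(pos, mass)
```

The leapfrog scheme is chosen here because it conserves quantities like total momentum to high accuracy, a property large-scale cosmological simulations also rely on.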
The largest such simulation, called the Millennium Run,^{22} investigated how matter evolved in the universe over time by reproducing cosmological structure formation. The simulation comprised ten billion particles, each representing approximately a billion solar masses of dark matter (Springel et al. 2005). In summary, the dynamics of a fundamental (cosmological) system, comprised of a multitude of gravitating bodies, are not understood analytically via differential equations. Rather, computer simulations, mimicking the forces of interaction in the system, offer theoretical predictions.
Contemporary support for this idea comes from Nobel laureate Gerard 't Hooft, who proposes an interpretation of quantum mechanics utilizing cellular automata ('t Hooft 2016). Finally, some theoretical physicists propose to describe space-time as a network in fundamental theories of quantum gravity, for instance, as spin networks in loop quantum gravity (see Sect. 10.2.3). Another idea tries to understand emergent complexity as arising from fundamental quantum field theories (Täuber 2008). And could it even be that underneath all the complex phenomena we see in physics there lies some simple program which, if run long enough, would reproduce our universe in every detail?
Blurring the Lines
Computers have also helped blur the lines between the analytical and algorithmic formal representations. In 1977, the four-color theorem was the first major mathematical theorem to be verified using a computer program (Appel and Haken 1977). "The four-color theorem states that any map in a plane can be colored using four colors in such a way that regions sharing a common boundary (other than a single point) do not share the same color."^{23} Computer-aided proofs of a mathematical theorem are usually very large proofs by exhaustion, where the statement to be proved is split into many cases and each case is then checked individually. "The proof of the four colour theorem gave rise to a debate about the question to what extent computer-assisted proofs count as proofs in the true sense of the word" (Horsten 2012).
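The idea of checking cases mechanically can be sketched, in a vastly simplified spirit (Appel and Haken reduced the theorem to nearly two thousand configurations; the map below is hypothetical), as a brute-force search for a proper four-coloring of a small map given by its bordering regions:

```python
from itertools import product

# A small, hypothetical planar map as an adjacency list of bordering regions.
borders = {
    "A": ["B", "C", "D"],
    "B": ["A", "C", "E"],
    "C": ["A", "B", "D", "E"],
    "D": ["A", "C", "E"],
    "E": ["B", "C", "D"],
}

def four_color(adjacency):
    """Exhaustively try every assignment of 4 colors to the regions."""
    regions = list(adjacency)
    for colors in product(range(4), repeat=len(regions)):
        assignment = dict(zip(regions, colors))
        if all(assignment[r] != assignment[n]
               for r in regions for n in adjacency[r]):
            return assignment
    return None  # by the theorem, never reached for a planar map

coloring = four_color(borders)
```

This toy search over 4^5 cases terminates instantly; the actual proof's case analysis was far too large to verify by hand, which is precisely what sparked the debate quoted above.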
Footnotes
 1.
Which, in itself, is a prime example of the enormous effectiveness of this scientific knowledgegenerating process.
 2.
He was awarded the Nobel Prize in chemistry and the Nobel Peace Prize. Marie Curie was the first person ever to be honored twice, with Nobel Prizes in physics and chemistry. To this day, the illustrious group of people to have received two Nobel Prizes comprises four people.
 3.
 4.
 5.
 6.
Wolfram is referring to a cellular automaton rule he introduced in 1983, called Rule 30, one of 256 possible elementary rules. Rule 30 produces complex, seemingly random patterns from simple, well-defined rules of interaction.
 7.
There exists a dispute about the discovery of the Mandelbrot set (Horgan 2009).
 8.
See Eq. (5.11) on Page 168.
 9.
See Eq. (5.17) on Page 168.
 10.
10,641 and 7,646 citations, respectively, retrieved in February 2015 from the Web of Knowledge, an academic citation indexing service provided by Thomson Reuters.
 11.
This is possible because the second law of thermodynamics only applies to isolated systems. Systems far from the thermodynamic equilibrium (non-equilibrium thermodynamics) are candidates for self-organizing behavior. Overall, the entropy always increases in the universe. See Nicolis and Prigogine (1977).
 12.
This philosophical realignment has potential political and societal ramifications. For instance, with respect to the surprising popularity and pervasiveness of conspiracy theories in the 21st century, see Sect. 12.2.
 13.
 14.
Next to basic arithmetic, which is, of course, part of discrete mathematics.
 15.
Technically, this means that between any two numbers there must lie an infinite set of numbers, as is the case for real numbers.
 16.
For instance, discrete versions of calculus, geometry, algebra, and topology have been defined, although they are less commonly used.
 17.
 18.
 19.
See also Sect. 9.1.3.
 20.
Recall the peculiar life he chose to live recounted on Page 57 at the end of Sect. 2.2.
 21.
See, for instance, the interactive demonstrations found at http://demonstrations.wolfram.com/TuringPatternInAReactionDiffusionSystem/.
 22.
 23.
References
 Albert, R., Barabási, A.: Statistical mechanics of complex networks. Rev. Mod. Phys. 74(1), 47–97 (2002)
 Alexander, A.: Infinitesimal: How a Dangerous Mathematical Theory Shaped the Modern World. Scientific American/FSG, New York (2014)
 Alexander, G.E., Crutcher, M.D.: Functional architecture of basal ganglia circuits: neural substrates of parallel processing. Trends Neurosci. 13(7), 266–271 (1990)
 Andersen, J.V., Sornette, D.: A mechanism for pockets of predictability in complex adaptive systems. Europhys. Lett. 70(5), 697 (2005)
 Anderson, P.W.: More is different. Science 177(4047), 393–396 (1972)
 Appel, K., Haken, W.: The solution of the four-color-map problem. Sci. Am. 237(4), 108–121 (1977)
 Ashcroft, N., Mermin, N.: Solid State Physics. Brooks Cole, Pacific Grove (1976)
 Axelrod, R.M.: The Complexity of Cooperation: Agent-Based Models of Competition and Collaboration. Princeton University Press, Princeton (1997)
 Bacon, F.: Novum organum. In: Jardine, L., Silverthorne, M. (eds.) Francis Bacon: The New Organon. Cambridge University Press, Cambridge (2000)
 Barabási, A., Albert, R.: Emergence of scaling in random networks. Science 286(5439), 509–512 (1999)
 Bell, J.L.: Continuity and infinitesimals. In: Zalta, E.N. (ed.) The Stanford Encyclopedia of Philosophy, Winter 2014 edn.
 Berman, M.G., Jonides, J., Nee, D.E.: Studying mind and brain with fMRI. Soc. Cogn. Affect. Neurosci. 1(2), 158–161 (2006)
 Bettencourt, L.M., Lobo, J., West, G.B.: Why are large cities faster? Universal scaling and self-similarity in urban organization and dynamics. Eur. Phys. J. B – Condens. Matter Complex Syst. 63(3), 285–293 (2008)
 Biggs, N., Lloyd, E.K., Wilson, R.J.: Graph Theory, 1736–1936. Clarendon Press, Oxford (1986)
 Biggs, N.L.: Discrete Mathematics, 2nd edn. Oxford University Press, Oxford (2003)
 Bohner, M., Peterson, A.C. (eds.): Advances in Dynamic Equations on Time Scales. Birkhäuser, Boston (2003)
 Bollobás, B.: Modern Graph Theory. Springer, New York (1998)
 Bonabeau, E., Dorigo, M., Theraulaz, G.: Swarm Intelligence: From Natural to Artificial Systems. Oxford University Press, Oxford (1999)
 Boole, G.: An Investigation of the Laws of Thought on which are Founded the Mathematical Theories of Logic and Probabilities. Walton and Maberly, London (1854)
 Boyer, C.B.: A History of Mathematics. Wiley, New York (1968). http://archive.org/details/AHistoryOfMathematics
 Boyle, R.: New Experiments Physico-Mechanical, Touching the Spring of the Air, and Its Effects. Miles Flesher, London (1682). http://name.umdl.umich.edu/A29007.0001.001
 Brazhnik, P., de la Fuente, A., Mendes, P.: Gene networks: how to put the function in genomics. Trends Biotechnol. 20(11), 467–472 (2002)
 Brenchley, R., Spannagl, M., Pfeifer, M., Barker, G.L., D'Amore, R., Allen, A.M., McKenzie, N., Kramer, M., Kerhornou, A., Bolser, D., Kay, S., Waite, D., Trick, M., Bancroft, I., Gu, Y., Huo, N., Luo, M.C., Sehgal, S., Gill, B., Kianian, S., Anderson, O., Kersey, P., Dvorak, J., McCombie, W.R., Hall, A., Mayer, K.F.X., Edwards, K.J., Bevan, M.W., Hall, N.: Analysis of the bread wheat genome using whole-genome shotgun sequencing. Nature 491(7426), 705–710 (2012)
 Dodds, P.S., Muhamad, R., Watts, D.J.: An experimental study of search in global social networks. Science 301(5634), 827–829 (2003)
 Dorogovtsev, S., Mendes, J.: Evolution of Networks: From Biological Nets to the Internet and WWW. Oxford University Press, Oxford (2003)
 Douady, A., Hubbard, J.H., Lavaurs, P.: Étude dynamique des polynômes complexes (1984)
 Dunham, W.: The Mathematical Universe. Wiley, New York (1994)
 Ebeling, W., Schweitzer, F.: Swarms of particle agents with harmonic interactions. Theory Biosci. 120(3–4), 207–224 (2001)
 Embrechts, P., Maejima, M.: Selfsimilar Processes. Princeton University Press, Princeton (2002)
 Erdős, P., Rényi, A.: On random graphs. Publ. Math. Debr. 6, 290–297 (1959)
 Erdős, P., Rényi, A.: On the evolution of random graphs. Publ. Math. Inst. Hung. Acad. Sci. 5, 17–61 (1960)
 Euler, L.: Solutio problematis ad geometriam situs pertinentis. Comment. Acad. Sci. Petropolitanae 8, 128–140 (1741)
 Feigenbaum, M.J.: Quantitative universality for a class of nonlinear transformations. J. Stat. Phys. 19(1), 25–52 (1978)
 Fox, G.C., Wolfram, S.: Observables for the analysis of event shapes in e+e− annihilation and other processes. Phys. Rev. Lett. 41(23), 1581 (1978)
 Furth, M.: Monadology. Philos. Rev. 76(2), 169–200 (1967)
 Gamma, E., Helm, R., Johnson, R., Vlissides, J.: Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley, New York (1995)
 Gardiner, C.: Handbook of Stochastic Methods. Springer, Berlin (1985)
 Gardner, M.: Mathematical games: the fantastic combinations of John Conway's new solitaire game "Life". Sci. Am. 223(4), 120–123 (1970)
 Geipel, M.: Dynamics of communities and code in open source software. Ph.D. thesis, Chair of Systems Design, ETH, Zurich, eCollection (2010). http://ecollection.ethbib.ethz.ch/view/eth:254?q=m m geipel
 Glattfelder, J.B.: Decoding Complexity: Uncovering Patterns in Economic Networks. Springer, Heidelberg (2013)
 Glattfelder, J.B., Bisig, T., Olsen, R.: R&D strategy document (2010). http://arxiv.org/abs/1405.6027
 Gleick, J.: Chaos: Making a New Science. Viking Penguin, New York (1987)
 Granovetter, M.S.: The strength of weak ties. Am. J. Sociol. 78(6), 1360–1380 (1973)
 Gribbin, J.: Science: A History. Penguin, Bury St Edmunds (2003)
 Helbing, D.: Social Self-Organization: Agent-Based Simulations and Experiments to Study Emergent Social Behavior. Springer, Heidelberg (2012)
 Helbing, D., Yu, W.: The outbreak of cooperation among success-driven individuals under noisy conditions. Proc. Natl. Acad. Sci. 106(10), 3680–3685 (2009)
 Helbing, D., Farkas, I., Vicsek, T.: Simulating dynamical features of escape panic. Nature 407(6803), 487–490 (2000)
 Hillman, C.: A Categorical Primer. Pennsylvania State University (2001). http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.24.3264
 ’t Hooft, G.: The Cellular Automaton Interpretation of Quantum Mechanics. Springer, Heidelberg (2016)
 Horgan, J.: Who discovered the Mandelbrot set? Sci. Am. (2009). https://www.scientificamerican.com/article/mandelbrotset1990horgan/
 Horsten, L.: Philosophy of mathematics. In: Zalta, E.N. (ed.) The Stanford Encyclopedia of Philosophy, Summer 2012 edn. (2012). http://plato.stanford.edu/archives/sum2012/entries/philosophymathematics/
 Hromkovič, J.: Theoretical Computer Science: Introduction to Automata, Computability, Complexity, Algorithmics, Randomization, Communication, and Cryptography. Springer, Heidelberg (2010)
 International Human Genome Sequencing Consortium: Finishing the euchromatic sequence of the human genome. Nature 431(7011), 931–945 (2004)
 Isaacson, W.: Albert Einstein: His Life and Universe. Simon & Schuster, London (2007)
 Joshi, K.: Foundations of Discrete Mathematics. New Age International, New Delhi (1989)
 Kondo, S., Miura, T.: Reaction-diffusion model as a framework for understanding biological pattern formation. Science 329(5999), 1616–1620 (2010)
 Kwak, H., Lee, C., Park, H., Moon, S.: What is Twitter, a social network or a news media? In: Proceedings of the 19th International Conference on World Wide Web, pp. 591–600. ACM (2010)
 Levy, S.: The man who cracked the code to everything... Wired Magazine (2002). https://www.wired.com/2002/06/wolfram/
 Lux, T., Marchesi, M.: Volatility clustering in financial markets: a microsimulation of interacting agents. Int. J. Theor. Appl. Financ. 3(04), 675–702 (2000)
 Mackay, A.L.: Crystallography and the Penrose pattern. Phys. A: Stat. Mech. Appl. 114(1), 609–613 (1982)
 Mandelbrot, B.B.: How long is the coast of Britain? Science 156(3775), 636–638 (1967)
 Mandelbrot, B.B.: Les Objets Fractals: Forme, Hasard et Dimension. Flammarion, Paris (1975)
 Mandelbrot, B.B.: The Fractal Geometry of Nature. Freeman, New York (1982)
 May, R.M.: Simple mathematical models with very complicated dynamics. Nature 261(5560), 459–467 (1976)
 Meinhardt, H.: The Algorithmic Beauty of Sea Shells. Springer, New York (2009)
 Milgram, S.: The small world problem. Psychol. Today 2(1), 60–67 (1967)
 Miller, J.H., Page, S.E., LeBaron, B.: Complex Adaptive Systems: An Introduction to Computational Models of Social Life. Princeton University Press, Princeton (2008)
 Moussaïd, M., Perozo, N., Garnier, S., Helbing, D., Theraulaz, G.: The walking behaviour of pedestrian social groups and its impact on crowd dynamics. PLoS ONE 5(4), e10047 (2010)
 Newman, M., Strogatz, S., Watts, D.: Random graphs with arbitrary degree distributions and their applications. Phys. Rev. E 64(2), 026118 (2001)
 Newman, M., Barabási, A., Watts, D.: The Structure and Dynamics of Networks. Princeton University Press, Princeton (2006)
 Newton, I.: Philosophiæ Naturalis Principia Mathematica. Jussu Societatis Regiae ac Typis Josephi Streater, Londini (1687)
 Nicolis, G., Prigogine, I.: Self-Organization in Nonequilibrium Systems: From Dissipative Structures to Order through Fluctuations. Wiley, New York (1977)
 Nowak, M.A.: Evolutionary Dynamics: Exploring the Equations of Life. Belknap Press, Cambridge (2006)
 Peirce, C.S.: The law of mind. Monist 2(4), 533–559 (1892)
 Powell, A.: Can science, religion coexist in peace? Harvard Gazette (2007). http://news.harvard.edu/gazette/story/2007/03/cansciencereligioncoexistinpeace/
 Rosen, K.H.: Discrete Mathematics and Its Applications, 7th edn. McGraw-Hill Science, New York (2011)
 Šalamon, T.: Design of Agent-Based Models. Tomáš Bruckner, Řepín (2011)
 Salmon, W.C. (ed.): Zeno's Paradoxes, Reprint edn. Hackett Publishing Company, Indianapolis (2001)
 Schilling, A., Cantoni, M., Guo, J., Ott, H.: Superconductivity above 130 K in the Hg–Ba–Ca–Cu–O system. Nature 363(6424), 56–58 (1993)
 Schweitzer, F.: Brownian Agents and Active Particles: Collective Dynamics in the Natural and Social Sciences. Springer, Heidelberg (2003)
 Shannon, C.E.: A symbolic analysis of relay and switching circuits. Master's thesis, MIT (1940)
 Shannon, C.E.: A mathematical theory of communication. Bell Syst. Tech. J. 27, 379–423 (1948)
 Shaw, R.: The Dripping Faucet as a Model Chaotic System. Aerial Press, Santa Cruz (1984)
 Singer, D.W.: Giordano Bruno: His Life and Thought. Henry Schuman, New York (1950)
 Sornette, A., Sornette, D.: Self-organized criticality and earthquakes. Europhys. Lett. 9(3), 197 (1989)
 Sornette, D.: Interdisciplinarity in socio-economics, mathematical analysis and predictability of complex systems. Socio-Econ. Rev. 6, 27–38 (2008). http://arXiv.org/abs/0807.3814
 Springel, V., White, S.D., Jenkins, A., Frenk, C.S., Yoshida, N., Gao, L., Navarro, J., Thacker, R., Croton, D., Helly, J., Peacock, J.A., Cole, S., Thomas, P., Couchman, H., Evrard, A., Colberg, J., Pearce, F.: Simulations of the formation, evolution and clustering of galaxies and quasars. Nature 435(7042), 629–636 (2005)
 Steger, A.: Diskrete Strukturen, Band 1, 2nd edn. Springer, Heidelberg (2001a)
 Steger, A.: Diskrete Strukturen, Band 2, 2nd edn. Springer, Heidelberg (2001b)
 Strogatz, S.: Nonlinear Dynamics and Chaos. Addison-Wesley, Reading (1994)
 Tarnas, R.: The Passion of the Western Mind: Understanding the Ideas that Have Shaped Our World View. Ballantine Books, New York (1991)
 Täuber, U.: Field-theoretic methods (2008). arXiv:0707.0794v1, http://arxiv.org/abs/0707.0794
 The Guardian: Dan Shechtman: "Linus Pauling said I was talking nonsense" (2013). http://www.theguardian.com/science/2013/jan/06/danshechtmannobelprizechemistryinterview
 Travers, J., Milgram, S.: An experimental study of the small world problem. Sociometry 32(4), 425–443 (1969)
 Treiber, M., Hennecke, A., Helbing, D.: Congested traffic states in empirical observations and microscopic simulations. Phys. Rev. E 62(2), 1805 (2000)
 Turchin, P.: Complex Population Dynamics: A Theoretical/Empirical Synthesis, vol. 35. Princeton University Press, Princeton (2003)
 Turing, A.M.: The chemical basis of morphogenesis. Philos. Trans. R. Soc. Lond. B: Biol. Sci. 237(641), 37–72 (1952)
 Valtonen, M., Karttunen, H.: The Three-Body Problem. Cambridge University Press, Cambridge (2006)
 Watts, D., Strogatz, S.: Collective dynamics of ‘small-world’ networks. Nature 393(6684), 440–442 (1998)
 Weber, M.: Die protestantische Ethik und der Geist des Kapitalismus. Gesammelte Aufsätze zur Religionssoziologie 1 (1920)
 Whitworth, B.: Some implications of comparing brain and computer processing. In: Proceedings of the 41st Annual Hawaii International Conference on System Sciences, pp. 38–38. IEEE (2008)
 Wolfram, S.: Hadronic electrons? Aust. J. Phys. 28(5), 479–488 (1975)
 Wolfram, S.: A New Kind of Science. Wolfram Media Inc, Champaign (2002)
 Yee, A.J., Kondo, S.: Round 2... 10 trillion digits of pi (2013). http://www.numberworld.org/misc_runs/pi10t/details.html. Retrieved May 2014
Copyright information
Open Access This chapter is licensed under the terms of the Creative Commons Attribution-NonCommercial 2.5 International License (http://creativecommons.org/licenses/bync/2.5/), which permits any noncommercial use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made. The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.