Arguably, financial markets and economic interactions represent ideal manifestations of complex systems. The understanding of the structure and functioning of markets and economies is perhaps the single most important goal to ensure equitable future prosperity—economic and ecological. Complexity theory could be the key to overcoming the ideological and dogmatic trenches plaguing these sociopolitical endeavors, which shape every aspect of human existence. While the application of the tools and methods derived from studying complex systems has, in recent decades, brought about a wealth of novel understanding in computer science, biology, physics, and sociology, finance and economics have been wary and slow to adopt the insights.

The architectures of institutionalized power play an important role in shaping the commercial reality we humans are exposed to, with profound implications for all of life on Earth. In this chapter, substantial historical context is presented to explain the current paradigm of finance and economics. This allows the systemic defects and structural shortcomings to be better understood, thus possibly setting a new course for the future. At the heart of all economic and financial interactions lies the individual human being, driven by personal, long-term self-actualization and tempted by the short-term fruits of greed and fraud. The key question is thus: How can we foster collectively intelligent behavior from individual preferences? Indeed, an age-old challenge faced by evolution itself.

1 Terra Cognita

Before sailing off into terra incognita, some notable developments in the history of finance and economics require a discussion. From the emergence of randomness in science, the appearance of a new caste of mathematical wizards, the widespread adoption of a certain brand of economic thinking, the failure of the economic operating system, to the gridlock created by conflicting ideologies, the last 100 years have seen some very critical events unfold.

1.1 Some Historical Background

In 1776, Adam Smith presented An Inquiry into the Nature and Causes of the Wealth of Nations (Smith 1776). This treatise is considered to be the first modern work of economics. It was inspired by Isaac Newton’s revolutionary and foundational work establishing modern physics: Philosophiæ Naturalis Principia Mathematica (Newton 1687). Today, finance and economics are fundamentally equation-driven. Mathematics is the bedrock upon which the theories are built, reaching ever new heights of esoteric abstraction. At the heart of this evolution stands what is known as a stochastic process: the mathematization of a series of random events unfolding in time. Scholars grappled with this concept for nearly a century until a new profession emerged: the quantitative analyst.

1.1.1 Stochastic Processes

The historical rise of financial mathematics is intertwined with major developments in physics. The year 1900 marked a radical turning point in physics. Max Planck was grappling with the problem of black-body radiation, which defied any theoretical explanation. This issue, however, did not appear very fundamental, and the general feeling at the time was that physics had nearly conquered all there was to know about the physical world. In a creative act, Planck introduced an ideaFootnote 1 that would lead to the uncovering of the quantum realm of reality (see Sect. 4.3.4). This discovery ushered in an era of conceptual challenges about the nature of reality from which physics (or more precisely, physicists) has to this day not recovered (see Sect. 10.3.2.2). One fundamental tenet of quantum theory is its probabilistic character. The old notion of a deterministic clock-work universe is lost forever. Now, nature is hiding eternally behind a veil of probability.

Also in the year 1900, the mathematician Louis Bachelier presented his Ph.D. thesis (Bachelier 1900). He introduced a formalization of randomness called a stochastic process and applied it to the valuation of stock options. Unfortunately, his pioneering work on randomness and mathematical finance was essentially forgotten until the late 1950s. Bachelier based his work on Brownian motion. This is the name given to the random motion of particles suspended in a fluid, first described by the botanist Robert Brown. He had observed, through a microscope, particles in water trapped in cavities inside pollen grains (Brown 1828). The mathematical formalization of Brownian motion is a stochastic process. Today, this continuous-time stochastic process is called a Wiener process, named in honor of the mathematician and philosopher Norbert Wiener.

The year 1905 was Albert Einstein’s annus mirabilis. He was working at the Patent Office in Bern and submitted his Ph.D. thesis to the University of Zurich later that year. Einstein also published four papers in 1905, which would rock the foundations of physics. His work on the photoelectric effect (Einstein 1905c) not only gave Planck’s theoretical notion of energy quanta a physical reality in terms of photons, further establishing the relevance of quantum theory (albeit to Einstein’s dismay, see Sects. 4.3.4 and 10.3.2), it also led to him being awarded the Nobel prize sixteen years later. In 1905, Einstein also presented the theory of special relativity (Einstein 1905d) and the infamous mass-energy equivalence \(E = m c^2\) (Einstein 1905a). His fourth paper of 1905 is less known. Brownian motion had always lacked a satisfactory explanation. Einstein’s paper gave a statistical description of the phenomenon and provided empirical evidence for the reality of the atom, the existence of which was still being debated at the time (Einstein 1905b). Specifically, he utilized the mathematics of stochastic processes, and today the Wiener process is also known as the Einstein-Wiener process.

Einstein helped introduce a new paradigm with his work on Brownian motion: the stochastic modeling of natural phenomena, where statistics play an intrinsic role and the time evolution of a system is essentially random. Technically, the equation for the Brownian particle is similar to a differential equation describing diffusion. In 1908, the physicist Paul Langevin presented a new derivation of Einstein’s results (Langevin 1908). Langevin introduced the first stochastic differential equation, i.e., a differential equation of a “rapidly and irregularly fluctuating random force.” Today this “force” is called a random variable, encapsulating the randomness. What was, however, still missing was a rigorous mathematical theory.
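
To make the idea concrete, the following is a minimal numerical sketch, discretizing a Langevin-type equation with the Euler-Maruyama scheme; the function name, parameter values, and the specific linear-friction form are illustrative assumptions, not taken from the sources cited above.

```python
import numpy as np

def langevin_trajectory(n_steps=10_000, dt=1e-3, gamma=1.0, sigma=1.0, seed=42):
    """Euler-Maruyama discretization of a Langevin-type equation:
    dv = -gamma * v * dt + sigma * dW, with dW a Gaussian Wiener increment.
    The term sigma * dW plays the role of Langevin's 'rapidly and irregularly
    fluctuating random force,' today encapsulated by a random variable."""
    rng = np.random.default_rng(seed)
    v = np.zeros(n_steps)
    for i in range(1, n_steps):
        dW = rng.normal(0.0, np.sqrt(dt))  # random increment over one time step
        v[i] = v[i - 1] - gamma * v[i - 1] * dt + sigma * dW
    return v
```

Each run produces a different jagged path; averaging over many such paths recovers the smooth statistical behavior captured by the macro view discussed next.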

Langevin’s description represents the micro view on randomness. The Langevin equation describes the evolution of the position of a single “stochastic particle.” In 1914, the physicist Adriaan Fokker derived an equation describing Brownian motion (Fokker 1914), which Planck later derived in a more general form. However, the Fokker-Planck equation was applied to quantum mechanics by Planck to no avail. It was later realized that the equation actually describes the behavior of a large population of “stochastic particles.” It thus represents the macro view on randomness, complementing the micro view of the Langevin equation. Formally, the Fokker-Planck equation describes the time evolution of the probability density function of the system. Results can be derived more directly using the Fokker-Planck equation than using the corresponding Langevin stochastic differential equations. See also Sect. 5.4.1.

Back in the early 1900s, the mathematician Andrey Markov introduced a prototypical stochastic process, satisfying certain properties (Markov 1906). In essence, a Markov process is memory-less: only the present state of the system influences its future evolution. An example is the Einstein-Wiener process related to Brownian motion, introduced by Bachelier. It is a continuous process (in time and the sample path). A discrete example of a Markov process is a random walk, where the path increments are independent and drawn from a Gaussian normal distribution. In the limit of the step size going to zero, the Einstein-Wiener process is recovered. In general, a Markov process is characterized by jumps (in the sample path), drift (of the probability density function), and diffusion (widening of the probability density function). It is also a solution to a stochastic differential equation. In 1931, the mathematician Andrey Kolmogorov presented two fundamental equations on Markov processes (Kolmogoroff 1931). It was later realized that one of them was actually equivalent to the Fokker-Planck equation.
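
The connection between the discrete random walk and the continuous Einstein-Wiener process can be sketched in a few lines of code; scaling the increments by the square root of the step size is the standard construction, while the function name and parameters below are illustrative.

```python
import numpy as np

def gaussian_random_walk(n_paths=5_000, n_steps=1_000, T=1.0, seed=0):
    """Memory-less (Markovian) random walk with independent Gaussian increments.
    Scaling each increment by sqrt(dt) yields, as dt -> 0, an approximation
    of the Einstein-Wiener process on the interval [0, T]."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    steps = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
    return np.cumsum(steps, axis=1)

paths = gaussian_random_walk()
# Hallmark of diffusion: the variance across paths grows linearly in time,
# so at t = T = 1 it should be close to 1.
print(paths[:, -1].var())
```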

Then, in 1942, the mathematician Kiyoshi Itô developed stochastic calculus, finally laying the formal mathematical foundation for the treatment of randomness. This would pave the way for the stellar rise of financial mathematics. In the mid-1950s, the economist Paul Samuelson, the first American to win the Nobel Memorial Prize in Economic Sciences (often referred to as the Nobel prize for economics), embraced Brownian motion as a model for the stock market. Bachelier’s original work reemerged and the randomness of price movements appeared tamed (Samuelson 1965).

Up to this point, the understanding of financial markets was strongly influenced by the mathematics of physics. This decoding of reality via formal analytical representations is the paradigm of Volume I of the Book of Nature (see Chap. 2 and Sect. 5.1). It represents the “unreasonable effectiveness of mathematics in the natural sciences” (Wigner 1960). In contrast, Volume II of the Book of Nature contains the algorithmic decoding of complex phenomena (see Sect. 5.2). In this paradigm, algorithms and simulations running in computers replace the mathematical tools for uncovering knowledge, based on the astonishing observation that simplicity is the fuel for complexity (see Sect. 5.2.2). Historically, the study of financial markets gave one of the first glimpses of Volume II, an extension of the Book of Nature that was unknown to exist at the time. The mathematician Benoît Mandelbrot was observing the properties of cotton prices when he uncovered an unintuitive feature. In effect, he discovered the self-similar nature of financial time series (Mandelbrot 1963b). Regardless of the chosen time horizon—days or months—the cotton price charts showed the same characteristics. Today, the modeling of random systems evolving in time utilizing self-similar stochastic processes is fundamental for their understanding (Embrechts and Maejima 2002). Mandelbrot’s discovery in 1963 would mark the starting point for his research leading to the seminal discovery of fractal geometry (Mandelbrot 1982) and the emergence of chaos theory, embracing the non-linear behavior of complex phenomena.

The year 1973 marked the coming-of-age of financial mathematics. The economists Fischer Black and Myron Scholes, both associated with the University of Chicago, presented a paper that would revolutionize finance. With the help of the economist Robert Merton, who expanded the mathematical understanding, their work would be recognized with the 1997 Nobel Memorial Prize in economic sciences.Footnote 2 Building on Itô’s insights, Black and Scholes derived a model of a financial market containing derivative investment instruments. The crucial mathematical insight was a partial differential equation, called the Black-Scholes equation

$$\begin{aligned} \frac{\partial V}{\partial t} + \frac{1}{2}\sigma ^2 S^2 \frac{\partial ^2 V}{\partial S^2} + rS\frac{\partial V}{\partial S} - rV = 0. \end{aligned}$$
(7.1)

Technically, the equation governs the price evolution of a European call or put option under the Black-Scholes model. In other words, it yields a theoretical estimate of the price of the derivative. The model can only be applied under certain assumptions and limitations. To this day, the publication has been cited 8,467 times.Footnote 3
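
Equation (7.1) is the pricing partial differential equation; under the model’s standard assumptions it admits a closed-form solution for European options. The following sketch evaluates that textbook formula for a call option; it is an illustration only, and the function name and example parameters are not taken from the original publication.

```python
from math import exp, log, sqrt
from statistics import NormalDist

def black_scholes_call(S, K, T, r, sigma):
    """Closed-form Black-Scholes price of a European call option.
    S: spot price, K: strike, T: time to maturity in years,
    r: risk-free rate, sigma: volatility of the underlying asset."""
    N = NormalDist().cdf
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * N(d1) - K * exp(-r * T) * N(d2)

# Example: at-the-money call, one year to maturity, 1% rate, 20% volatility.
print(round(black_scholes_call(S=100.0, K=100.0, T=1.0, r=0.01, sigma=0.2), 2))
```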

The academic success of the Black-Scholes model and its real-world ramifications reveal a pattern also visible in the global financial crisis. In 1993, a hedge fund management firm, called Long-Term Capital Management, was founded. The key people involved were Scholes and Merton. By 1998, the fund had approximately USD 5 billion in assets, controlled over USD 100 billion, and had positions whose total worth was over USD 1 trillion.Footnote 4 Then disaster struck. In a nutshellFootnote 5:

Scholes was also a co-founder of the hedge fund Long-Term Capital Management, which was initially extremely successful but later failed spectacularly, which led to a group of large banks bailing them out to prevent an averse reaction in the financial markets.

Today, financial mathematics is an established and active field of economic research (Voit 2005; Hull 2014). The insights from the study of stochastic processes also find their applications in physics, chemistry, and the natural sciences (Gardiner 1985).

1.1.2 The Rise of the Quant

On the 19th of October 1993, the US House of Representatives voted 264 to 159 to reject further financing for a particle accelerator in Texas. The Superconducting Super Collider (SSC), for which two billion dollars had already been spent, was a mammoth project. Its planned ring circumference was 87.1 km and the planned collision energy 40 TeV. These specifications dwarf the Large Hadron Collider (LHC) near Geneva in Switzerland, with a ring circumference of 27 km and a collision energy of 13 TeV. Nonetheless, the LHC is currently the most complex experimental facility ever built, involving scientists from over 100 countries, and the largest single machine in the world.Footnote 6 As a technology spin-off, the Worldwide LHC Computing Grid emerged as the largest ever distributed computing grid.

Back in 1993, the SSC’s estimated total costs threatened to spiral out of control. Pressured by budget savings, US President Bill Clinton signed a bill officially terminating the project. Like the flap of the mythical butterfly’s wing, this event would change the course of finance—and the world—forever. In the words of Stephen Blyth, a professor at Harvard University, reminiscing about the past (Blyth 2014):

This [the termination of the SSC] was not good news for two of my Harvard roommates, PhD students in theoretical physics. Seeing the academic job market for physicists collapsing around them, they both found employment at a large investment bank in New York in the nascent field of quantitative finance. It was their assertion that derivative markets, whatever in fact they were, seemed mathematically challenging that catalyzed my own move to Wall Street from an academic career.

The “quant” was born (Derman 2004): the quantitative analyst with a scientific background, very well-versed in the cabala of mathematics. As a result, the level of mathematical complexity exploded and financial tools became more abstract and removed from reality. Like a matryoshka doll, the layers of complexity are nested within each other—derivatives of derivatives of derivatives. A dangerous mixture was concocted between the mathematical wizards and the applied practitioners (Salmon 2009):

Their [the quants] managers, who made the actual calls [big asset-allocation decisions], lacked the math skills to understand what the models were doing or how they worked.

During the 2008 financial crisis, and the ensuing sovereign debt crisis, everything unraveled. It became painfully clear that ruling financial and economic elites had opened Pandora’s box. In a general critique, the physicist and complexity researcher Dirk Helbing laments (quoted in Cho 2009):

We spend billions of dollars trying to understand the origins of the universe, while we still don’t understand the conditions for a stable society, a functioning economy, or peace.

Or more specifically, my observation that (quoted in Smith 2014):

[...] basically we don’t know how these [financial and economic] systems work. We created them, but really they have a life of their own.

The burning question remains: Why did no one predict this global cataclysmic event? It is truly remarkable that the sum of all this intellectual prowess culminated in such a far-reaching disaster. In hindsight, there were many red flags that were ignored.

1.2 The Global Financial Crisis

Perhaps the biggest threat to the emergence of adaptive, resilient, and sustainable financial systems is ideology. In the words of Alan Greenspan, who served as Chairman of the US Federal Reserve from 1987 to 2006, during a hearingFootnote 7 before the Congressional Committee for Oversight and Government Reform about the financial crisis and the role of federal regulators on October 23, 2008:

Well, remember, though, [...] ideology [...] is a conceptual framework with the way people deal with reality. Everyone has one. You have to. To exist, you need an ideology. The question is, whether it [...] is accurate or not.

Greenspan, who championed the deregulation of the US banking system, concluded:

What I am saying to you is, yes, I found a flaw. I don’t know how significant or permanent it is, but I have been very distressed by that fact. [...] I found a flaw in the model that I perceived is the critical functioning structure that defines how the world works, so to speak. [...] I was shocked, because I had been going for 40 years or more with very considerable evidence that it was working exceptionally well.

An ideological commitment can tempt one to ignore red flags. In a summary by the economist and Nobel laureate Paul Krugman, taken from a New York Times article titled “How Did Economists Get It So Wrong?” (Krugman 2009):

They [economists] turned a blind eye to the limitations of human rationality that often lead to bubbles and busts; to the problems of institutions that run amok; to the imperfections of markets—especially financial markets—that can cause the economy’s operating system to undergo sudden, unpredictable crashes; and to the dangers created when regulators don’t believe in regulation.

Or, more scathingly, Krugman as quoted in The Economist (2009):

The past 30 years of macroeconomics was spectacularly useless at best, and positively harmful at worst.

1.2.1 The Chicago School

Technically, what is commonly known as the Nobel prize for economics is in fact the Nobel Memorial Prize in economic sciences, established in 1968 by the Swedish National Bank in memory of Alfred Nobel. The creation of this award has received criticism. One notable objection was made by the economist Friedrich August von Hayek, during his speechFootnote 8 at the Nobel Banquet, December 10, 1974, after receiving the prize:

Now that the Nobel Memorial Prize for economic science has been created, one can only be profoundly grateful for having been selected as one of its joint recipients, and the economists certainly have every reason for being grateful to the Swedish Riksbank for regarding their subject as worthy of this high honour.

Yet I must confess that if I had been consulted whether to establish a Nobel Prize in economics, I should have decidedly advised against it.

[...] I do not yet feel equally reassured concerning my [...] cause of apprehension. It is that the Nobel Prize confers on an individual an authority which in economics no man ought to possess. This does not matter in the natural sciences. Here the influence exercised by an individual is chiefly an influence on his fellow experts; and they will soon cut him down to size if he exceeds his competence. But the influence of the economist that mainly matters is an influence over laymen: politicians, journalists, civil servants and the public generally. There is no reason why a man who has made a distinctive contribution to economic science should be omnicompetent on all problems of society—as the press tends to treat him till in the end he may himself be persuaded to believe.

These cautionary words of von Hayek would turn out to be prophetic. The University of Chicago has established itself as the leading authority of economic thought. The institution is associated with 29 economic Nobel laureates,Footnote 9 far more than any other institution. The “Chicago School” of economics is best known for advocating a particular brand of economic thought, namely the promotion of economic liberalism, intellectually backed by what is known as neoclassical economic theory. In general, government intervention is rejected in favor of allowing the free market and rational individuals to best allocate resources. In detail, neoclassical economics is an economic paradigm that relates supply and demand to an individual’s rationality and her ability to maximize utility or profit, in an equilibrium state, where the market is guided by an “invisible hand” (Smith 1776). Neoclassical economics also utilizes heavy mathematical machinery to study various aspects of the economy. The theory is associated with concepts like the efficient-market hypothesis from financial economics and the trickle-down effect from political economics.

The failure of most economists to foresee the financial crisis is not only a challenge to the economics profession as a whole, but especially to the Chicago School. Indeed, the crisis helped deepen the intellectual trenches. In the words of Krugman (2009):

And in the wake of the crisis, the fault lines in the economics profession have yawned wider than ever. Lucas [Robert Lucas from the University of Chicago] says the Obama administration’s stimulus plans are “schlock economics,” and his Chicago colleague John Cochrane says they’re based on discredited “fairy tales.” In response, Brad DeLong of the University of California, Berkeley, writes of the “intellectual collapse” of the Chicago School, and I myself have written that comments from Chicago economists are the product of a Dark Age of macroeconomics in which hard-won knowledge has been forgotten.

Prompted by the crisis, some combatants have switched trenches. For instance, Richard Posner, a judge and scholar of law and economics at the University of Chicago. In recent decades, the Chicago School economists were successful in displacing another school of economic thought called “Keynesianism.” John Maynard Keynes was an influential economist who died in 1946. While the Chicago School of thought is characterized by laissez-faire economics, in sharp contrast, Keynesian economics advocates that the best way to rescue an economy from a recession is for the government to borrow money and increase demand by infusing the economy with capital to spend. In this context, “Richard A. Posner has shocked the Chicago School by joining the Keynesian revival” (Cassidy 2010a). Indeed (Cassidy 2010a):

Earlier this year, Posner published “A Failure of Capitalism” in which he argues that lax monetary policy and deregulation helped bring on the current slump. “We are learning from it that we need a more active and intelligent government to keep our model of a capitalist economy from running off the rails,” Posner writes. “The movement to deregulate the financial industry went too far by exaggerating the resilience—the self-healing powers—of laissez-faire capitalism.”

Another critique of the Nobel Memorial Prize in economic sciences is motivated by the fact that it was awarded to the economist Milton Friedman in 1976, in part for his work on monetarism. Friedman was accused of supporting the military dictatorship in Chile. Specifically, a group of Chilean economists, trained at the University of Chicago, most of them under Friedman, found in Augusto Pinochet’s regime an ideal breeding ground for the first radical implementation of a free-market strategy in a developing country. The “Chicago Boys” enforced economic liberalization, including currency stabilization, removed tariff protections for local industry, banned trade unions, and privatized social security and hundreds of state-owned enterprises. The results of the implementation of these economic policies have been mixed and analyzed from opposing points of view. Friedman himself called it “the Miracle of Chile.” Others have been less enthusiastic, for instance (Petras and Vieux 1990). Specifically (Steger and Roy 2010, p. 100f.):

During his [Pinochet] authoritarian rule, Chile’s economy stabilized in terms of inflation and GDP growth rate, but the middle and lower class lost ground as economic inequality increased dramatically. The country’s richest 10% benefited the most from the neoliberal reforms as their incomes almost doubled during the Pinochet years. To this day, Chile has remained one of the world’s 15 most unequal nations. The mixed economic results of the “neoliberal revolution” that swept the country from the 1970s to the 1990s continue to generate heated discussions among proponents and detractors of the Chicago School over the virtues of externally imposing free-market reforms.

1.2.2 The Crisis of Mathematics

The rise of the quant also played a crucial role in setting the stage for the disaster of the global financial crisis. Again (Krugman 2009):

As I see it, the economics profession went astray because economists, as a group, mistook beauty, clad in impressive-looking mathematics, for truth.

For instance, the neoclassical economic model builds on equilibrium theory. In particular, it relies on the mathematical framework of dynamic stochastic general equilibrium (DSGE) models, which heavily influence macroeconomics and are actively used by central banks. These models assume, by construction, a world which is always in equilibrium, governed by the aggregated behavior of a representative agent maximizing its utility. This paradigm has recently also been criticized by the Nobel laureate Joseph Stiglitz, in the context of the financial crisis (Stiglitz 2018):

This paper provides a critique of the DSGE models that have come to dominate macroeconomics during the past quarter-century. It argues that at the heart of the failure were the wrong microfoundations, which failed to incorporate key aspects of economic behaviour, e.g. incorporating insights from information economics and behavioural economics. Inadequate modelling of the financial sector meant they were ill-suited for predicting or responding to a financial crisis; and a reliance on representative agent models meant they were ill-suited for analysing either the role of distribution in fluctuations and crises or the consequences of fluctuations on inequality.

Another prominent example of mathematics run amok is an equation that, through its inappropriate application, helped create a market that turned out to be a castle in the air. In 2000, the mathematician David X. Li presented an equation to measure the correlation between unrelated events (Li 2000). The Gaussian copula function was the heart of the theory

$$\begin{aligned} \text {Pr} \left[ T_A< 1, T_B < 1 \right] = \varPhi _2 \left( \varPhi ^{-1} (F_A(1)), \varPhi ^{-1}(F_B(1)),\gamma \right) , \end{aligned}$$
(7.2)

where \(F_A\) and \(F_B\) are the distribution functions for the survival timesFootnote 10 \(T_A\) and \(T_B\), \(\varPhi _2\) is the bivariate cumulative normal distribution function, \(\varPhi ^{-1}\) is the inverse of the univariate normal distribution function, and \(\gamma \) is the all-powerful correlation parameter, which reduces the otherwise intractable correlation structure to a single constant.
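
To illustrate how a single correlation parameter couples two default events, the following sketch evaluates the bivariate Gaussian copula of Eq. (7.2) numerically; the function name, the SciPy-based implementation, and the example probabilities are illustrative assumptions, not part of Li’s original formulation.

```python
from scipy.stats import multivariate_normal, norm

def joint_default_probability(p_A, p_B, gamma):
    """Gaussian copula estimate of Pr[T_A < 1, T_B < 1]: the probability that
    both A and B default within the horizon, given the marginal probabilities
    p_A = F_A(1), p_B = F_B(1) and the correlation parameter gamma."""
    z = [norm.ppf(p_A), norm.ppf(p_B)]   # map marginals to Gaussian quantiles
    cov = [[1.0, gamma], [gamma, 1.0]]   # one constant couples the two names
    return multivariate_normal(mean=[0.0, 0.0], cov=cov).cdf(z)

# Example: 5% marginal default probabilities; joint risk rises sharply with gamma.
for gamma in (0.0, 0.3, 0.9):
    print(gamma, round(joint_default_probability(0.05, 0.05, gamma), 4))
```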

The elegant mathematical formula was used to model complex correlated risks. It became ubiquitous in finance and allowed for the supposedly accurate pricing of a wide range of investments that were previously too complex. Armed with Li’s formula, Wall Street’s quants saw great new possibilities (Salmon 2009):

With his brilliant spark of mathematical legerdemain, Li made it possible for traders to sell vast quantities of new securities, expanding financial markets to unimaginable levels.

At the heart of this new investment universe lurked the collateralized debt obligation (CDO), a type of structured asset-backed security. With Li’s magic wand, the CDOs, originally developed for the dull corporate debt markets, could be greatly enhanced. Now, CDOs built on previously unimaginable underlying investments, such as mortgages and mortgage-backed securities, could supposedly be priced correctly. The market soared (Salmon 2009):

The CDS [credit default swap, effectively an insurance against non-payment] and CDO markets grew together, feeding on each other. At the end of 2001, there was $920 billion in credit default swaps outstanding. By the end of 2007, that number had skyrocketed to more than $62 trillion. The CDO market, which stood at $275 billion in 2000, grew to $4.7 trillion by 2006.

Other culprits, enabling the mushrooming chimera, were the ratings agencies. Motivated by the apparent certainty which Li’s risk correlation number infused, new structured products were being assembled. One successful idea was to tranche CDO pools, now backed by subprime mortgages, to create triple-A rated securities. The ratings agencies were happy to sign off on this, even if none of the components were themselves anything close to triple-A. For the agencies, the conflict of interest was obvious: Don’t bite the hand that feeds you. Indeed, the relationship is symbiotic (Taibbi 2013):

[...] banks needed them [ratings agencies] to sign off on the bogus math of the subprime era—the math that allowed banks to turn pools of home loans belonging to people so broke they couldn’t even afford down payments into securities with higher credit ratings than corporations with billions of dollars in assets.

The rest is history. Once again, the flapping of a butterfly’s wing set off a chain reaction of path-dependent chaos. Subprime lending fueled the housing bubble, the collapse of which led to the global financial crisis and the ensuing sovereign debt crisis.

Looking back, one might ask oneself how no one predicted at least some negative fallout from those practices of Wall Street. Herd mentality can be a strong cognitive bias (Salmon 2009):

And it [Gaussian copula] became so deeply entrenched—and was making people so much money—that warnings about its limitations were largely ignored.

Essentially, Li’s equation assumed that correlation was a constant rather than something dynamic, an assumption that is actually correct most of the time. Only during rare extreme events, like financial crises, do markets align and correlations soar (Salmon 2009):

In hindsight, ignoring those warnings [that correlation is not a constant] looks foolhardy. But at the time, it was easy. Banks dismissed them, partly because the managers empowered to apply the brakes didn’t understand the arguments between various arms of the quant universe. Besides, they were making too much money to stop.

Indeed, an unfortunate chain of command had been established, where mathematical prowess and financial acumen became segregated (Salmon 2009):

Another [reason] was that the quants, who should have been more aware of the copula’s weaknesses, weren’t the ones making the big asset-allocation decisions. Their managers, who made the actual calls, lacked the math skills to understand what the models were doing or how they worked.

And the ratings agencies? (Taibbi 2013):

Why didn’t rating agencies build in some cushion for this sensitivity to a house-price-depreciation scenario? Because if they had, they would have never rated a single mortgage-backed CDO.

Only years later, the following became publicly known (Taibbi 2013):

Thanks to a mountain of evidence gathered for a pair of major lawsuits by the San Diego-based law firm Robbins Geller Rudman and Dowd, documents that for the most part have never been seen by the general public, we now know that the nation’s two top ratings companies, Moody’s and S&P, have for many years been shameless tools for the banks, willing to give just about anything a high rating in exchange for cash.

In closing (Salmon 2009):

David X. Li, it’s safe to say, won’t be getting that Nobel anytime soon. One result of the collapse has been the end of financial economics as something to be celebrated rather than feared. And Li’s Gaussian copula formula will go down in history as instrumental in causing the unfathomable losses that brought the world financial system to its knees.

It is fair to say that the mathematical complexity in finance and economics has reached a level where its relevance and meaning are hard to detect. This is similar to the Bogdanov affair in physics, an academic dispute about the relevance of publications in theoretical physics which appeared in reputable, peer-reviewed scientific journals. Some prominent physicists alleged that the content was in fact just a meaningless combination of buzzwords and fancy-looking equations. See Sect. 9.1.4 for details. Have finance, economics, and modern theoretical physics transformed into postmodern narratives which defy meaning and understanding, where mathematical incantation has become the sole justification? To illustrate the level of abstraction, the following two equations are extracted from two very different sources. One is a mathematical model describing the evolution of the volatility of an underlying asset, namely the price of a European option on a risky asset with stochastic volatility. The other is from string/M-theory, where “baryon number violation is discussed in gauge unified orbifold models of type II string theory with intersecting Dirichlet branes.” But which is which? Is the following equation from theoretical physics?

$$\begin{aligned} \begin{aligned} a_H^{\phi ,\psi }(u,v)&= \frac{1}{2}\int _\varOmega y\frac{\partial u}{\partial x}\frac{\partial \overline{v}}{\partial x}\phi ^2\psi ^2 +\int _\varOmega y\frac{\partial u}{\partial x}\overline{v}\left( \frac{\phi '}{\phi }\right) \phi ^2\psi ^2+\frac{\sigma ^2}{2}\int _\varOmega y\frac{\partial u}{\partial y}\frac{\partial \overline{v}}{\partial y}\phi ^2\psi ^2\\&\quad +\frac{\sigma ^2}{2}\int _\varOmega \frac{\partial u}{\partial y}\overline{v}\phi ^2\psi ^2+\mu \sigma ^2\int _\varOmega y^2\frac{\partial u}{\partial y}\overline{v}\phi ^2 \psi ^2+2\rho \sigma \int _\varOmega y\frac{\partial u}{\partial y}\overline{v}\left( \frac{\phi '}{\phi }\right) \phi ^2\psi ^2\\&\quad +\rho \sigma \int _\varOmega y\frac{\partial u}{\partial y}\frac{\partial \overline{v}}{\partial x}\phi ^2\psi ^2-\int _\varOmega (\omega \rho \sigma y^2-\frac{1}{2}y+r)\frac{\partial u}{\partial x}\overline{v}\phi ^2\psi ^2\\&\quad -\int _\varOmega [\omega \sigma ^2 y^2+\kappa (m-y)]\frac{\partial u}{\partial y}\overline{v}\phi ^2\psi ^2\\&\quad -\int _\varOmega \left[ \frac{1}{2}\omega \sigma ^2 y(\omega y^2+1)+\omega y\kappa (m-y)-r\right] u\overline{v}\phi ^2\psi ^2.\\ \end{aligned} \end{aligned}$$
(7.3)

Or the next one?

$$\begin{aligned} \begin{aligned}&S_{cl} = \frac{1}{2 \pi \alpha ' } \int _C d^2 z (\partial X \bar{\partial }\bar{X} +\bar{\partial }X \partial \bar{X} ) \\&\equiv \frac{1}{2 \pi \alpha ' } \int _C d^2 z (\vert \partial X \vert ^2 + \vert \bar{\partial }X \vert ^2) = V_{11} \vert v_A \vert ^2 +V _{22} \vert v_B \vert ^2 + 2 \mathfrak {R}(V_{12} v_A v_B ^\star ) , \\&\bigg [ 4 \pi V_{ii} = \vert b_a \vert ^2 \int _C d ^2 z \vert \omega _{\theta , \theta '} (z) \vert ^2 + \vert c_a \vert ^2 \int _C d ^2 z \vert \omega _{ 1-\theta ,1- \theta '} (z) \vert ^2 \ [i =1,2], \\&4 \pi V_{12} = b_1 \bar{b}_2 \int _C d ^2 z \vert \omega _{\theta , \theta '} (z) \vert ^2 + c_1 \bar{c}_2 \int _C d ^2 z \vert \omega _{1-\theta , 1- \theta '} (z) \vert ^2 \bigg ]. \end{aligned} \end{aligned}$$
(7.4)

To the uninitiated, both equations appear to stem from the same esoteric source of hidden symbolism.Footnote 11

Indeed, the financial behemoths themselves have also reached a level of inherent complexity that obfuscates transparency and accountability. How else should the following headline be interpreted: “Bank of America Finds a Mistake: $4 Billion Less Capital.” In detail (Eavis and Corkery 2014):

Bank of America disclosed on Monday that it had made a significant error in the way it calculates a crucial measure of its financial health, suffering another blow to its effort to shake its troubled history.

The mistake, which had gone undetected for several years, led the bank to report recently that it had $4 billion more capital than it actually had.

1.2.3 Living in Denial

Returning to the failure of economics in 2008, Lord Adair Turner, then head of the U.K. Financial Services Authority, observed (quoted in Farmer et al. 2012):

But there is also a strong belief, which I share, that bad or rather over-simplistic and overconfident economics helped create the crisis. There was a dominant conventional wisdom that markets were always rational and self-equilibrating, that market completion by itself could ensure economic efficiency and stability, and that financial innovation and increased trading activity were therefore axiomatically beneficial.

In the same vein, a quote by Jean-Claude Trichet, then President of the European Central Bank, from November 2010 (quoted in Farmer et al. 2012):

When the crisis came, the serious limitations of existing economic and financial models immediately became apparent. Macro models failed to predict the crisis and seemed incapable of explaining what was happening to the economy in a convincing manner. As a policy-maker during the crisis, I found the available models of limited help. In fact, I would go further: in the face of the crisis, we felt abandoned by conventional tools.

These last two quotes can be read as a critique of neoclassical economics. Furthermore, in the words of Krugman (2009):

In short, the belief in efficient financial markets blinded many if not most economists to the emergence of the biggest financial bubble in history. And efficient-market theory also played a significant role in inflating that bubble in the first place.

Eugene Fama, an economist at the University of Chicago, Nobel laureate, and the father of the efficient-market hypothesis (Fama 1970), asserted in the aftermath of the financial crisis (quoted in Cassidy 2010b):

I don’t know what a credit bubble means. I don’t even know what a bubble means. These words have become popular. I don’t think they have any meaning.

Indeed, when ideology turns to dogma, all discourse threatens to plummet to a level of pure subjectivity. When quizzed about Krugman’s critique and Posner’s intellectual betrayal, Fama commented (quoted in Cassidy 2010a):

If you are getting attacked by Krugman, you must be doing something right.

He’s [Posner] not an economist. He’s an expert on law and economics.

The apparent denial and obliviousness of proponents of the Chicago School to the reality of the crisis have been observed by other commentators, such as the physicist and author Mark Buchanan. He reports (Buchanan 2009):

In an essay in The Economist, Robert Lucas, one of the key figures behind the present neo-classical theory of macroeconomic systems, even argued that the tumultuous events of the recent crisis can be taken as further evidence supporting the efficient-markets hypothesis of neo-classical theory, despite the fact that it disputes the possible existence of financial bubbles.

When Fama was asked how his theory of efficient markets had fared in the crisis, he replied (quoted in Cassidy 2010a):

I think it did quite well in this episode.

Indeed, for him, the true culprit is easily found (Cassidy 2010a):

In addition to accusing the government of causing the subprime problem, Fama argues that it botched its handling of last fall’s financial crisis. Rather than bailing out A.I.G., Citigroup, and other firms, Fama says, the Treasury Department and the Federal Reserve should have allowed them to go bankrupt. “Let them all fail,” he said, with another laugh.

The Chicago School economist John Cochrane, attacked by Krugman (2009), defended his position in detail in an article called “How Did Paul Krugman Get It So Wrong?” (Cochrane 2011). To him, the whole fuss is explained by the following assertion (Cochrane 2011):

If a scientist, he [Krugman] might be an AIDS-HIV disbeliever, a creationist or a stalwart that maybe continents do not move after all. [...] The only explanation that makes sense to me is that Krugman isn’t trying to be an economist: he is trying to be a partisan, political opinion writer.

Bringing the discussion back to a more objective level, the authoritative magazine The Economist, known for its economic liberalism, observed the following. Setting the stage (The Economist 2009):

Of all the economic bubbles that have been pricked, few have burst more spectacularly than the reputation of economics itself. [...] In the public mind an arrogant profession has been humbled.

However (The Economist 2009):

In its crudest form—the idea that economics as a whole is discredited—the current backlash has gone far too far. [...] two central parts of the discipline—macroeconomics and financial economics—are now, rightly, being severely re-examined. There are three main critiques: that macro and financial economists helped cause the crisis, that they failed to spot it, and that they have no idea how to fix it.

In detail (The Economist 2009):

The first charge is half right. Macroeconomists, especially within central banks, were too fixated on taming inflation and too cavalier about asset bubbles. Financial economists, meanwhile, formalised theories of the efficiency of markets, fuelling the notion that markets would regulate themselves and financial innovation was always beneficial. Wall Street’s most esoteric instruments were built on these ideas.

But economists were hardly naive believers in market efficiency. Financial academics have spent much of the past 30 years poking holes in the “efficient market hypothesis”. A recent ranking of academic economists was topped by Joseph Stiglitz and Andrei Shleifer, two prominent hole-pokers.

[...]

The charge that most economists failed to see the crisis coming also has merit. To be sure, some warned of trouble. The likes of Robert Shiller of Yale, Nouriel Roubini of New York University and the team at the Bank for International Settlements are now famous for their prescience. But most were blindsided.

[...]

What about trying to fix it? Here the financial crisis has blown apart the fragile consensus between [neoclassical] purists and Keynesians that monetary policy was the best way to smooth the business cycle.

Keynesians, such as Mr Krugman, have become uncritical supporters of fiscal stimulus. Purists are vocal opponents. To outsiders, the cacophony underlines the profession’s uselessness.

The article concludes (The Economist 2009):

Add these criticisms together and there is a clear case for reinvention, especially in macroeconomics. [...]

But a broader change in mindset is still needed. Economists need to reach out from their specialised silos: macroeconomists must understand finance, and finance professors need to think harder about the context within which markets work. And everybody needs to work harder on understanding asset bubbles and what happens when they burst. For in the end economists are social scientists, trying to understand the real world. And the financial crisis has changed that world.

Entrenched ideologies do not only plague sociopolitical intellectual thought systems. Even the hard sciences suffer from dogmatic world views. Planck gloomily observed in his autobiography (Planck 1950, pp. 33f.):

A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.

See also Sect. 9.1.3 on Kuhnian paradigm changes.

But perhaps things are not as bleak as one might think. For instance, forgotten knowledge is reemerging. The crash of 2008 has also been dubbed the “Minsky Moment,” because Hyman Minsky’s financial instability hypothesis is widely regarded as having predicted the crisis (Minsky 1992). See also Minsky (2016). In addition, more heterodox ideas are receiving attention from some notable economists (Reinhart and Rogoff 2009; Akerlof and Shiller 2010).

2 A Call to Arms

Neoclassical economics has been attacked from many angles in the past decades. Philosophers criticized the whole enterprise as being pseudo-scientific (Latsis 1972), based on the demarcation offered by Imre Lakatos’s philosophy of science (Musgrave and Pigden 2016). However, of all the potential design flaws of neoclassical economics—its concepts which are only relevant when all or many strict assumptions hold, the focus on isolated agents, the omission of evolutionary or adaptive dynamics, the exclusion of feedback loops, etc.—one practice is particularly troubling. It is the disregard of empirical data. In other words, the ideas of neoclassical economics transcend any empirical data. This is in stark contrast to a foundational guiding principle in science: “Let the data speak!” The physicist and pioneer of econophysics,Footnote 12 Jean-Philippe Bouchaud, has offered continued empirical evidence challenging the neoclassical paradigm and narrative. For instance, the insight that real-world markets are affected by feedback loops and can exhibit endogenous shocks, i.e., destabilize themselves without any external trigger (Bouchaud 2011). Bouchaud summarizes (Bouchaud 2008):

To me, the crucial difference between physical sciences and economics or financial mathematics is rather the relative role of concepts, equations and empirical data. Classical economics is built on very strong assumptions that quickly become axioms: the rationality of economic agents, the invisible hand and market efficiency, etc. An economist once told me, to my bewilderment: These concepts are so strong that they supersede any empirical observation. As Robert Nelson argued in his book, Economics as Religion, the marketplace has been deified.

Mathematics without any empirical anchoring will always remain in the Platonic realm of abstractions, lacking any real-world application. It does not suffice to mathematize a theory to automatically render it a faithful description of reality. Just by proclaiming the following, no insights are guaranteed (a quote from the Chicago School’s Robert Lucas, taken from Labini 2016, p. 63):

Economic theory is mathematical analysis. Everything else is just talk and pictures.

It is astounding how far mathematical economics has ventured from any empirical rooting (Labini 2016, p. 64):

The most famous work of Samuelson [Nobel laureate Paul Samuelson] is one of the classics of mathematical economics, “Foundations of Economic Analysis”. [...] Samuelson, in his book over 400 pages full of mathematical formulas, does not derive a single result that can be compared with observed data. There is even no mention of any empirical data in the book of Samuelson!

So then (Labini 2016, p. 64):

In conclusion, neoclassical economics, unlike physics, has not achieved either precise explanations or successful predictions through the use of mathematics. Thus, this is the main difference between neoclassical economics and physics.

As a result (Bouchaud 2008):

Compared to physics, it seems fair to say that the quantitative success of the economic sciences is disappointing.

2.1 Embracing Complexity

One key proposition of Part I of this book has been the categorization of human knowledge generation into the dichotomy of the fundamental-analytical and the complex-algorithmic paradigms (see Chap. 5, especially Sect. 5.4). In this context, even if one does tie the mathematical machinery of finance and economics to empirical data, the results are expected to be modest. In essence, we should not search for the understanding of finance and economics in Volume I of the Book of Nature (i.e., the fundamental-analytical paradigm). Indeed, people have started to flip through Volume II for answers (i.e., the complex-algorithmic paradigm). In a nutshell, we do not need the physics of economics but rather complexity economics.

Some notable complexity researchers published an article, titled “Economic Networks: The New Challenges,” in the prestigious journal Science. They observed (Schweitzer et al. 2009):

The current economic crisis illustrates a critical need for new and fundamental understanding of the structure and dynamics of economic networks. Economic systems are increasingly built on interdependencies, implemented through trans-national credit and investment networks, trade relations, or supply chains that have proven difficult to predict and control. We need, therefore, an approach that stresses the systemic complexity of economic networks and that can be used to revise and extend established paradigms in economic theory. This will facilitate the design of policies that reduce conflicts between individual interests and global efficiency, as well as reduce the risk of global failure by making economic networks more robust.

The authors conclude (Schweitzer et al. 2009):

In summary, we anticipate a challenging research agenda in economic networks, built upon a methodology that strives to capture the rich process resulting from the interplay between agents’ behavior and the dynamic interactions among them.

Indeed (Catanzaro and Buchanan 2013):

Our developing scientific understanding of complex networks is being usefully applied in a wide set of financial systems. What we’ve learned from the 2008 crisis could be the basis of better management of the economy—and a means to avert future disaster.

The invitation to embrace complexity thinking has also been extended to financial economics in a joint publication by complexity researchers, financial practitioners, and economists, including Stiglitz (Battiston et al. 2013):

The intrinsic complexity of the financial derivatives market has emerged as both an incentive to engage in it, and a key source of its inherent instability. Regulators now faced with the challenge of taming this beast may find inspiration in the budding science of complex systems.

This call to arms has also been made by financial supervisors, regulators, and policymakers. The International Monetary Fund (IMF), in collaboration with the Institute for New Economic Thinking (INET), and the Deutsche Bundesbank hosted a two-day conference on financial networks.Footnote 13 In summary, “financial networks [are] key to understanding systemic risk” and in detail (Minoiu and Sharma 2014):

With financial markets around the world so interconnected, the analysis of “networks” in the financial system would help deepen understanding of systemic risk and is key to preventing future financial crises, say leading researchers and policymakers at a conference on Interconnectedness: Building Bridges between Research and Policy.

Furthermore, European Central Bank President Trichet mused (quoted in Farmer et al. 2012):

[...] we need to develop complementary tools to improve the robustness of our overall framework. In this context, I would very much welcome inspiration from other disciplines: physics, engineering, psychology, biology. Bringing experts from these fields together with economists and central bankers is potentially very creative and valuable. Scientists have developed sophisticated tools for analysing complex dynamic systems in a rigorous way. These models have proved helpful in understanding many important but complex phenomena: epidemics, weather patterns, crowd psychology, magnetic fields.

Finally, Andy Haldane, executive director of financial stability at the Bank of England, also weighed in. In an article, aptly titled “To Navigate Economic Storms We Need Better Forecasting,” he pointed out (Haldane 2011):

Finance is a classic complex, adaptive system, similar to an ecosystem. The growth in its scale, complexity and adaptation in the past generation alone would rival that of most other complex systems in the past century. [...]

Yet this dense cat’s cradle of finance has been woven largely out of sight. At best we are able to snatch passing glimpses of it, for data are incomplete, local and lagging. Making sense of the financial system is more an act of archaeology than futurology.

How so? Because, at least historically, finance has not thought of itself as a system. Instead, financial theory, regulation and data-collection has tended to focus on individual firms. Joining the dots was in no one’s job description.

Theorists were partly to blame. Economics has always been desperate to burnish its scientific credentials and this meant grounding it in the decisions of individual people. By itself, that was not the mistake. The mistake came in thinking the behaviour of the system was just an aggregated version of the behaviour of the individual. Almost by definition, complex systems do not behave like this.

Interactions between agents are what matters. And the key to that is to explore the underlying architecture of the network, not the behaviour of any one node. To make an analogy, you cannot understand the brain by focusing on a neuron - and then simply multiplying by 100 billion.

On a personal note, the article the last quote was taken from was prompted by a study I co-authored. The publication was an empirical analysis of the global corporate ownership network (Vitali et al. 2011 and Sect. 7.3.2.1). The research was reported on by the science magazine New Scientist (Coghlan and MacKenzie 2011). The article quickly went viral.Footnote 14 Two reasons can perhaps explain the wide dissemination and interest. First, the study can be understood as an early comprehensive application of the paradigms of complex systems to economics. Based on data and algorithms, it appeared in a vacuum, standing in stark contrast to the paradigms of neoclassical economics. Second, the article appeared in the very week that the Occupy Wall Street protests became an international phenomenon. New Scientist placed the following headline on the front cover of that issue (Coghlan and MacKenzie 2011):

Exclusive: The network that runs the world

Mathematics reveals the reality behind the anti-capitalist protests

The article itself was titled “Revealed—the capitalist network that runs the world.” Negative ramifications soon followed. Some pundits did not believe in the value of applying insights from complex systems in an economic context.Footnote 15 Then, people who are mesmerized by conspiracy theories understood our study as the unmasking of a global elite—the elusive and all-powerful puppet-masters controlling our very lives.Footnote 16 Finally, many people believed that our study was politically motivated, following some agenda, and not a data-driven scientific analysis of a real-world complex economic network. Therefore, our study was placed into the realm of belief systems and exposed to ugly political fervor. To give an unsavory example of the depth of the ideological trenches involved, the following e-mail was sent to New Scientist in response to their coverage of our research:

From: FUCKYOU@COMMIESCUM.COM

Sent: 24 October 2011 17:58

To: New Scientist Letters (RBI-UK)

Subject: Contact Us

Name: YOU FUCKING COMMIE SCUM

Country: USA

Message: I HOPE YOU FUCKERS GET TORTURED TO DEATH AFTER YOUR CHILDREN ARE KILLED IN FRONT OF YOU.

On a brighter note, things appear to be slowly changing. More economists are beginning to adopt ideas from complexity economics. Specifically, the relevance of focusing on the analysis of empirical data from a network perspective is recognized, requiring researchers from different fields to join forces. Such interdisciplinary research usually does not find its way into economics journals. However, a recent publication—replicating our study (Vitali et al. 2011) with a different data set and confirming the level of concentration of control in 2007—appeared in the economics journal Structural Change and Economic Dynamics (Brancaccio et al. 2018). The authors also analyze the centralization of control between 2001 and 2016, observing an increase, especially after 2007. Such research also helps uncover blind spots in orthodox economic thinking, where certain topics are deemed less relevant. For instance, the very notion of power has, interestingly, not received much attention from economists (Glattfelder and Battiston 2019). Moreover, the whole idea of capital centralization (in the sense of ownership and control concentration) has also “never been a very popular subject in the academic literature” (Brancaccio et al. 2018). Indeed (Brancaccio et al. 2018):

[...] the existence or not of a global tendency of capital to centralize in a few hands, and the related complex structural economic dynamics that may imply, remain an unresolved mystery.

Until now. By inspecting the temporal dynamics of the global ownership network, an increase in global centralization of capital is observed. In summary (Brancaccio et al. 2018):

In the early years of the 21st century, especially since the 2007 crisis, Marx’s thesis of a global tendency towards the centralization of capital seems to find an empirical confirmation.

This is the risk one takes if one “lets the data speak.” Unfashionable ideas, as seen through a particular predominant ideological lens, can suddenly reemerge with empirical backing: in the case at hand, those of Marx (1867, 1894) and Leontief (1938).

2.2 Reforming Finance and Economics

Analyzing finance and economics with the tools from complexity theory is only a first step. An ideological reform is called for. Specifically, I have proclaimed that (Glattfelder 2011):

Ideas relating to finance, economics, politics, society, are very often tainted by people’s personal ideologies. I really hope that this complexity perspective allows for some common ground to be found. It would be really great if it has the power to help end the gridlock created by conflicting ideas, which appears to be paralyzing our globalized world. Reality is so complex, we need to move away from dogma.

There have been many propositions put forward, aimed at fixing the state of economics. One has been to admit that there is actually a problem and to call for pluralism in economic teaching and thinking. In other words, allowing for heterodox economic theories. For instance, over 65 associations of economics students from over 30 different countries have written an open letterFootnote 17 laying out this vision.

Another psychological step away from ideological entrenchment is admitting that there is no “silver bullet.” An open-minded and pragmatic approach is called for. In the words of the economist Tim Harford (Harford 2011):

I’m not trying to say we can’t solve complicated problems in a complicated world. But the way we solve them is with humility and to actually use a problem-solving technique that works. Now you show me a successful complex system, and I will show you a system that has evolved through trial and error.

Intellectual myopia is perhaps the biggest challenge in our post-truth era, where everyone is certain their own ideas are correct and all challenging ones are wrong. In this context, open-mindedness and a self-critical analysis of one’s own beliefs could be the magic cure.

Another major challenge for the still-prevailing paradigm in economics lies in its blind spot: externalities. “An externality is a consequence of an economic activity experienced by unrelated third parties.Footnote 18” The epitome of a negative externality is the failure to factor ecological constraints into economic thinking. In other words, as long as clean air and water, glaciers and polar caps, forests, biodiversity, etc. do not have a price tag, economic activity will not value them. By not pricing ecological externalities, no incentives preventing irrational behavior are given and no innovation is fostered. It becomes economically viable to extract resources at one end of the world, which, after being shipped to other parts of the world for consumption, are disposed of at yet another location on the globe. For a planet with finite resources, such a linear system spells doom sooner or later, as extraction becomes exploitation and disposal results in pollution (discussed in the Epilogue). Factoring in externalities explicitly means incorporating a complex systems point of view, as this translates into holistic systems thinking explicitly identifying feedback loops.

However, the best way to herald the start of complexity economics would be to present a successful application. Perhaps an obvious implementation is the forecasting of economic turbulences. This vision is outlined in Haldane’s article mentioned above, titled “To Navigate Economic Storms We Need Better Forecasting” (Haldane 2011):

It [a real-time map of financial flows] would allow regulators to issue the equivalent of weather-warnings—storms brewing over Lehman Brothers, credit default swaps and Greece. It would enable advice to be issued—keep a safe distance from Bear Stearns, sub-prime mortgages and Icelandic banks. And it would enable “what-if” simulations to be run—if UK bank Northern Rock is the first domino, what will be the next?

This, however, can only be achieved with data-driven interdisciplinary research, embracing complex networks. Such an approach to finance and economics has the power to comprehensively assess the true state of the system. Complexity thinking can uncover hidden features and patterns of organization which would otherwise go undetected. Especially the mitigation of systemic risk is, by definition, in the domain of complex networks. Its application is an invitation to move away from too-big-to-fail thinking of agents in isolation to a network of interdependence (see Sect. ). Such a shift can offer crucial information to policy makers (Glattfelder 2016).

3 Complexity Finance and Economics

The shift away from analyzing the structure of “things” towards analyzing their patterns of interaction represents a true paradigm shift, and one that has impacted computer science, biology, physics, and sociology. The need to bring about such a shift in the realm of finance and economics was highlighted in the last section. Unfortunately, there exists another major challenge next to the prevailing prohibitive mindset: a lack of data in these fields. Whereas the study of complex systems in other domains is affected by a data deluge, there is an ironic scarcity of data coming from our countless financial and economic interactions. A lot of the much-wanted data is either proprietary, and hence not available from commercial institutions, or too sensitive to be disclosed by regulatory bodies. Nonetheless, complexity science has slowly been successfully applied to financial and economic systems. In essence, the paradigm shift is characterized by the following elements:

  • empirical focus;

  • data science;

  • algorithm-driven methodology;

  • computer simulations;

  • decentralized architectures;

  • interdisciplinary research;

  • a plurality of ideas.

Moreover, at the heart of this approach lie agent-based models and complex network analysis.

3.1 Multi-Agent Systems

Agent-based models are prototypical in the complex systems paradigm of decoding nature and human collective behavior (Axelrod 1997; Weiss 1999; Bonabeau 2002; Šalamon 2011; Helbing 2012). They represent a bottom-up approach incorporating the main insight from complexity thinking: the richness of structures comes from the simple rules of interaction of agents (see Sects. 5.2.1 and 5.2.2). Elementary and early examples are cellular automata (see Sect. 5.2.2). In a general context, agent-based models are composed of

  • agents (specified at a level of granularity);

  • decision-making heuristics and learning rules;

  • non-linear interaction topology;

  • external conditions;

and are typically implemented as computer simulations. They describe the micro-level interactions leading to the emergence of structure and organization at the macro level. From a conceptual point of view, structural information (represented by the data) is transformed by functional information (representing the algorithm) into pragmatic information, telling the agents how to operate in a specific context (Ebeling et al. 1998).
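To make these ingredients concrete, the following minimal sketch (all rules and parameters are invented for illustration) places agents on a ring where each imitates its local majority under noise; the four elements listed above map onto the agents, their imitation heuristic, the nearest-neighbor topology, and the noise level:

```python
import random

# Minimal agent-based sketch (illustrative only; all rules and parameters
# are assumptions). Agents sit on a ring, hold a state of +1 ("buy") or
# -1 ("sell"), and imitate the local majority of their two neighbors,
# subject to some noise.
N = 100        # number of agents (level of granularity)
NOISE = 0.1    # external condition: probability of an idiosyncratic choice
STEPS = 5000

state = [random.choice([-1, 1]) for _ in range(N)]

for t in range(STEPS):
    i = random.randrange(N)                          # asynchronous update
    local = state[(i - 1) % N] + state[(i + 1) % N]  # interaction topology
    if random.random() < NOISE or local == 0:
        state[i] = random.choice([-1, 1])            # noise / tie-breaking
    else:
        state[i] = 1 if local > 0 else -1            # heuristic: imitate majority
    if t % 1000 == 0:
        print(t, sum(state) / N)   # macro-level observable emerging from micro rules
```

Even this stripped-down toy exhibits the key feature: a macro-level quantity (the average state) emerges from purely local decision rules.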

The most basic setting, where featureless agents interact with each other, can be enhanced. For one, the agents can be allowed to have internal states, where they can store energy and information (Schweitzer 2003). Furthermore, some agent-based models do not require that all the agents interact directly and have complete information about the system. In such settings, an efficient simulation setup only has to process the information needed to simulate the agents’ local and short-term behavior. One technical solution is the blackboard architecture, a fundamental programming paradigm of early artificial intelligence research (Engelmore and Morgan 1988). In essence, the system evolves via the agents writing, reading, and processing information on a centralized “blackboard.” A modern variation of this mechanism is the idea of an adaptive landscape (Schweitzer 2003). Every action of each agent changes the state of the adaptive landscape (locally or globally). In turn, the changes in the landscape affect the actions of other agents. A non-linear feedback loop, between the individual and collective behavior, emerges.
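A toy version of such a landscape-mediated feedback loop might look as follows (an invented example, not the model of Schweitzer 2003): agents never talk to each other directly; they only deposit markers on a shared, slowly evaporating landscape and drift towards marker-rich cells, so each individual action reshapes the landscape that biases everyone else’s next action:

```python
import random

# Sketch of an adaptive landscape (assumed toy dynamics): agents deposit
# markers on a shared one-dimensional grid and move towards marker-rich
# cells; the landscape slowly evaporates. No agent communicates directly
# with another -- the landscape mediates all interaction.
CELLS, AGENTS, STEPS, EVAPORATION = 20, 50, 500, 0.01

landscape = [0.0] * CELLS
position = [random.randrange(CELLS) for _ in range(AGENTS)]

for _ in range(STEPS):
    for a in range(AGENTS):
        left, right = (position[a] - 1) % CELLS, (position[a] + 1) % CELLS
        if landscape[left] == landscape[right]:
            position[a] = random.choice([left, right])      # tie-breaking
        else:
            position[a] = left if landscape[left] > landscape[right] else right
        landscape[position[a]] += 1.0       # the agent alters the landscape
    landscape = [x * (1 - EVAPORATION) for x in landscape]  # slow decay

print("markers per cell:", [round(x, 1) for x in landscape])
```

The positive feedback between deposits and movement typically lets a few cells accumulate most of the markers, a collective pattern no single agent planned.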

An agent-based approach to finance and economics can be understood as the first attempt to apply complexity theory in those fields (Sornette 2014). In essence, these domains are modeled as complex adaptive systems, comprised of interacting autonomous agents (Tesfatsion 2003; LeBaron 2006; Miller et al. 2008). Now, the world has come alive as a dynamic system of interacting agents. This is the polar opposite of the DSGE models of neoclassical economics, deploying an aggregation of a single representative agent optimizing its utility. Agent-based computational economics replaces the theoretical assumption of mathematical optimization by agents in equilibrium with the less restrictive postulate of agents with bounded rationality (Simon 1982) adapting to market forces. In essence, bounded rationality implies that individuals make decisions with limited rationality, due to cognitive limitations, the restricted time to make the decision, and the tractability of the decision problem. In summary, in the words of Trichet (quoted in Sornette 2014):

First, we have to think about how to characterize the homo economicus at the heart of any model. The atomistic, optimizing agents underlying existing models do not capture behavior during a crisis period. We need to deal better with heterogeneity across agents and the interaction among those heterogeneous agents. We need to entertain alternative motivations for economic choices. Behavioral economics draws on psychology to explain decisions made in crisis circumstances. Agent-based modeling dispenses with the optimization assumption and allows for more complex interactions between agents.

3.1.1 ...Of Financial Markets

Financial markets are particularly well suited for agent-based explorations, as they represent a coherent framework for understanding complexity. Moreover, financial data are generally plentiful, accurate, and readily available. In a nutshell (LeBaron 2006):

Financial markets are an important challenge for agent-based computational modelers. Financial markets may be one of the important early areas where agent-based methods show their worth, for two basic reasons. First, the area has many open questions that more standard modeling approaches have not been able to resolve. Second there is a large amount of financial data available for testing.

Many agent-based models of financial markets have been proposed (LeBaron 2006; Samanidou et al. 2007). One of the earliest ones analyzed market instability in the wake of the 1987 financial crisis (Kim and Markowitz 1989). Another agent-based model was inspired by a challenge in game theory, called the El Farol Bar problem. It “was inspired by the bar El Farol in Santa Fe which offers Irish music on Thursday nights” (Arthur 1994). Consider 100 people deciding independently each week whether to go to a bar on a certain night. The bar is quite small, and it is no fun to go there if it’s too crowded. Every person decides to visit the bar if he or she expects fewer than 60 people to show up. Otherwise the person stays at home. Everyone has to decide at the same time whether they will go to the bar or not. This game-theoretic problem has a famous representation as an agent-based model, called the Minority Game (Challet and Zhang 1997). The model has been applied to financial markets and reflects the competition among a finite number of agents over a resource. In particular, the inductive reasoning implied by the Minority Game leads to a system of many interacting and heterogeneous degrees of freedom, resulting in complex dynamics.
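For concreteness, here is a bare-bones Minority Game in Python (a sketch; the parameter values and scoring details are assumptions for illustration, not the exact setup of Challet and Zhang 1997). Each agent holds a few fixed lookup-table strategies over the last M outcomes, plays its currently best-scoring one, and the minority side wins each round:

```python
import random

# Bare-bones Minority Game sketch (parameters are illustrative assumptions).
N, M, S, ROUNDS = 101, 3, 2, 500   # agents (odd), memory, strategies per agent, rounds
HIST = 2 ** M                      # number of possible histories

# each strategy maps every possible history (0..HIST-1) to an action in {0, 1}
strategies = [[[random.randint(0, 1) for _ in range(HIST)] for _ in range(S)]
              for _ in range(N)]
scores = [[0] * S for _ in range(N)]   # virtual scores of each strategy
history = random.randrange(HIST)

for t in range(ROUNDS):
    actions = []
    for i in range(N):
        best = max(range(S), key=lambda s: scores[i][s])   # play best strategy
        actions.append(strategies[i][best][history])
    attendance = sum(actions)
    minority = 0 if attendance > N / 2 else 1   # the less-chosen side wins
    for i in range(N):
        for s in range(S):                      # update all strategies virtually
            if strategies[i][s][history] == minority:
                scores[i][s] += 1
    history = ((history << 1) | minority) % HIST   # slide the M-bit history window
    if t % 100 == 0:
        print(t, attendance)
```

The inductive reasoning lies in the virtual scoring: agents continually re-evaluate which of their fixed strategies would have won, and the resulting coupled adaptation of many heterogeneous agents produces the complex dynamics mentioned above.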

In the context of the scaling properties observed in financial markets (see Sect. 6.4.1), a multi-agent model was proposed which could replicate those universal features. The model incorporates the idea that scaling arises from the mutual interplay of market participants (Lux and Marchesi 1999, 2000). In detail, the scaling properties are generated by the interaction of economic agents with heterogeneous beliefs and strategies in the simulated market. The Lux-Marchesi model was inspired by the study of herd behavior in ant colonies and applications of statistical mechanics to sociology and political sciences (Samanidou et al. 2007).

Another agent-based model, proposed to decode the dynamics of financial markets, is the Cont-Bouchaud model (Cont and Bouchaud 2000). It is a simple model, having only a few free parameters, contrasting the “terribly complicated” Lux-Marchesi model. The traders are quite unsophisticated and exhibit herding behavior, where buyers and sellers group into larger dependent sets, which then move together. The model can reproduce many stylized facts of financial markets: volatility clustering, positive correlations between trading volume and price fluctuations, as well as a power-law distribution of stock price variations. For details, see also Stauffer (2001).
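To illustrate the herding mechanism, the following sketch (the parameter values and the use of an Erdős–Rényi graph to form the clusters are assumptions made here for illustration, not necessarily the original specification) groups traders into random clusters that buy or sell as single blocks; the aggregate net demand per step serves as the return:

```python
import numpy as np
import networkx as nx

# Cont-Bouchaud-style herding sketch (parameters and graph model assumed).
N, c, a, T = 10_000, 1.0, 0.05, 2_000   # traders, mean degree, activity, steps
rng = np.random.default_rng(1)

G = nx.erdos_renyi_graph(N, c / N, seed=1)               # random herding links
sizes = np.array([len(g) for g in nx.connected_components(G)])  # cluster sizes

returns = []
for _ in range(T):
    u = rng.random(len(sizes))
    decision = np.where(u < a / 2, 1, np.where(u < a, -1, 0))  # buy / sell / idle
    returns.append(decision @ sizes / N)                 # aggregate net demand

r = np.array(returns)
print("excess kurtosis:", np.mean((r - r.mean()) ** 4) / r.var() ** 2 - 3)
```

Near the percolation threshold (mean degree around one), the cluster-size distribution is heavy-tailed, which is what generates the fat-tailed return distribution in this toy setup.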

In Andersen and Sornette (2005) another agent-based model was introduced, based on the Minority Game applied to time series of financial returns. It was one of the first models to establish the existence of “pockets of predictability” in stock markets. In other words, return predictability is a localized phenomenon, in which short periods with significant predictability (the “pockets”) appear in otherwise long periods with little or no evidence of return predictability. In contrast, an efficient market is one in which price changes are completely random and unpredictable at all times. In the model, the collective organization of the agents and of their strategies is the driver of predictability in the market. Namely, transient herding phases in the population of agents lead to the emergence of the pockets of predictability.

3.2 Network Thinking

The historical and empirical roots of complex network theory can be found in sociology, as described in Sect. 5.2.3. So, too, the first people to apply network thinking to economics were sociologists. One central theme is that of social capital (Coleman 1990; Burt 1992; Putnam 1993). It is a form of economic capital in which social networks are central and where transactions are motivated by reciprocity, trust, and cooperation. Specifically (Burt 2001):

Social capital is the contextual complement to human capital. The social capital metaphor is that the people who do better are somehow better connected. Certain people or certain groups are connected to certain others, trusting certain others, obligated to support certain others, dependent on exchange with certain others. Holding a certain position in the structure of these exchanges can be an asset in its own right.

Implicit in this reasoning is the idea of network centrality, where the specific location in the network can bestow a node with added relevance.

In this networked context, the notion of “structural holes” emerges (Burt 1992). A structural hole is understood as a conceptual gap between nodes in the network which have complementary access to information. The notion is similar to the “strength of weak ties” (recall Sect. 5.2.3). “The structural hole argument is that social capital is created by a network in which people can broker connections between otherwise disconnected segments” (Burt 2001), leading to the emergence of the gatekeeper nodes, relaying valuable information between groups. From such networks of interaction a model of the economy can be derived (White 2002). These network-based approaches are empirically motivated and stand in stark contrast to the paradigms of neoclassical economics.

Since the turn of the millennium, the study of complex economic networks has started to gain popularity. This is, for instance, witnessed by the research on

  • diffusion of innovation (Schilling and Phelps 2007; König et al. 2009);

  • trade relations (Serrano and Boguñá 2003; Garlaschelli and Loffredo 2004a, b; Reichardt and White 2007; Fagiolo et al. 2008, 2009);

  • shared board directors (Strogatz 2001; Battiston and Catanzaro 2004);

  • similarity of products (Hidalgo et al. 2007);

  • credit relations (Boss et al. 2004; Iori et al. 2008);

  • price correlation (Bonanno et al. 2003; Onnela et al. 2003);

  • corporate ownership structures (Glattfelder and Battiston 2009, 2019; Vitali et al. 2011; Glattfelder 2013, 2016; Garcia-Bernardo et al. 2017; Fichtner et al. 2017).

In the following, the study of corporate ownership networks will be introduced—a field of complexity science which saw its coming of age in the wake of the financial crisis (Glattfelder and Battiston 2009, 2019; Vitali et al. 2011; Coghlan and MacKenzie 2011; Haldane 2011; Battiston et al. 2012).

3.2.1 Ownership Networks

A corporation is a legal entity having its own privileges, similar to a natural person. Corporations have limited liability, can sue, borrow or lend money, buy and sell shares, and take over and merge with other corporations. From its business activities the corporation strives to generate profits. The money required for new investments can come from two external sources. First, debt is sold in the form of bonds to investors or financial institutions. Second, shares of stock can be issued. The stock represents the original capital invested into the business by its founders and serves as a security. It is also referred to as equity securities, or equity for short. The entities owning shares in the stock of a company are called stockholders, or, synonymously, shareholders. An ownership relation represents the percentage of ownership a shareholder has in the firm’s equity capital. Each shareholder potentially has the right to a fraction of the firm’s revenue (in the form of dividends) and to a voice in the decision-making process (meaning voting rights at the shareholder meetings). Hence, a percentage of ownership in the equity capital can yield cash-flow rights and voting rights. All shareholders collectively owning a company have complete control over its strategic business decisions and financial strategies. Next to voting, this control can also be exerted by appointing the board of directors, which in turn elects the senior management.

In a network context, ownership relations are encoded in the adjacency matrix W describing the network topology (recall Sect. 6.3.2). Thus, for the case where \(W_{ij}>0\), shareholder i owns \(W_{ij}\) percent of company j, establishing an ownership link, as seen in Fig. 7.1. An ownership network is comprised of various economic actors, including natural persons, families, foundations, research institutes, public authorities, states, and government agencies. However, as shares can only be issued by companies, all other economic actors only have outgoing links (i.e., they cannot be owned). As a result, the connectivity of the network is given by companies holding shares in each other, also called cross-shareholdings. See Fig. 7.2 for an example.

Fig. 7.1
figure 1

A simple chain of ownership relations. Shareholder i owns \(W_{ij}\) percent of company j, which in turn holds \(W_{jk}\) percent of company k

Fig. 7.2
figure 2

Ownership network layout. The example shows the backbone (Glattfelder and Battiston 2009) of the Japanese stock-market

Ownership networks have been studied in different contexts. Pioneering work analyzed the impact of globalization forces (Kogut and Walker 2001) and corporate governance reforms (Corrado and Zollo 2006) on the network topology of countries over time and found an unexpectedly resilient structure of powerful agents which was unaffected by these external forces. Other work, utilizing a Level 3 type analysis—incorporating weighted and directed links and the value of corporations (proxied by operating revenue), see Sect. 6.3.2—focused on a cross-country analysis. Contrary to textbook belief, it was revealed that in markets with many widely held corporations (mostly in Anglo-Saxon countries), this local dispersion of ownership actually goes hand in hand with a global concentration of ownership (and control) lying in the hands of few powerful shareholders, only visible from the bird’s-eye view given by the network perspective (Glattfelder and Battiston 2009). Finally, the Level 3 analysis of the global ownership network revealed the following (Vitali et al. 2011; Glattfelder and Battiston 2019). The network has a hierarchical structure with a tiny, highly interconnected core comprised of the most powerful corporations. In essence, the global ownership network displays fractal properties and by zooming into its fabric one finds a hierarchy of nested substructures and an ultimate distillate of shareholder power: a tiny cabal of mostly financial institutions and asset managers, seen in Fig. 7.3. Furthermore, the overall distribution of (direct and indirect) economic power (in the sense of control or influence) is highly skewed. Effectively, this power is much more unequally distributed than income or wealth.

Fig. 7.3
figure 3

The “super-entity” nested within the global ownership network (Glattfelder and Battiston 2019) of 2012. It is a subnetwork comprised of 128 highly influential nodes—from the financial, energy, and automobile sectors—potentially able to influence 16% of the value within the network, namely about USD 20 trillion. They represent less than 0.0004% of the 35,839,090 actors present in the entire 2012 global ownership network. There are 2,782 ownership links present in the structure. The nodes are scaled by Influence Index. Different shadings reflect a country partitioning (US, GB, remaining EU plus CH, JP, tax havens)

Such findings have raised eyebrows in the economics community, especially as the notion of the shareholder power of financial institutions and asset managers was contested at the time. Such corporations were seen as passive owners with no incentives to confront the companies they owned with any form of shareholder activism. Today, the tides are starting to turn (Hill and Thomas 2015):

Entering the twenty-first century we also enter ongoing debates [about shareholder power], with new developments [...] whose full implications are still working themselves out. [T]his latest iteration of shareholder activism appeared to have genuinely changed the dynamics of shareholder power. [P. 27]

[P]urely passive funds are on a path to owning a majority of US public equity. One result of this trend is likely to be increasing pressure on index funds to find ways to engage in governance activities. [P. 93]

Hedge fund activism is a recent, but now prominent, topic in academic research. Since 2006, scholarship on hedge fund activism has grown from virtually non-existent to mainstream. [P. 93]

Other scholars, also analyzing the global ownership network, have also recently concluded (Fichtner et al. 2017):

The analysis of the voting behavior underscores that the Big Three [BlackRock, Vanguard, and State Street] may be passive investors, but they are certainly not passive owners. They evidently have developed the ability to pursue a centralized voting strategy—a fundamental prerequisite to using their shareholder power effectively. In addition to this direct exercise of shareholder power, the extent of the concentration of ownership in the hands of the Big Three may also lead to a position of structural power.

The concentration of ownership and power has economy-wide implications. In particular, the challenges related to anti-competitiveness (Azar et al. 2015, 2016), tax avoidance (The Economist 2016a), offshore financial centers (Garcia-Bernardo et al. 2017), and systemic risk (Battiston et al. 2012, 2016) appear in a new, more ominous light. However, by uncovering the otherwise hidden patterns of organization in the data, these network studies are potentially valuable for policy makers (Glattfelder 2016). As an example, the restrictive and myopic too-big-to-fail thinking can be greatly enhanced. By incorporating the network of interaction, one can move beyond the isolation of nodes and embrace the network with a “too-connected-to-fail” perspective. Ultimately, “too-central-to-fail” identifies potential trouble spots in the system, which only become visible by computing the centrality scores of all the nodes in the network.

3.2.2 ...And Network Centrality

The notion of centrality refers to a structural attribute of the nodes in a network which depends on their position in the network (Katz 1953; Hubbell 1965; Bonacich 1972). In general, centrality refers to the extent to which a network is organized around a single node. A popular family of centrality measures, called eigenvector centrality, quantifies the relevance of a node in a network. These are feedback-type centrality measures, where a node is more central the more central its neighbors are themselves. Google’s PageRank is an example of such a centrality measure (Brin and Page 1998), see Sect. 6.4.3.3. One variant of such a centrality measure is given by the vector \(\chi \), containing the centrality scores of all the nodes in the network. In matrix notation it is defined as

$$\begin{aligned} \chi = W \chi + W v. \end{aligned}$$
(7.5)

See Glattfelder and Battiston (2009), Vitali et al. (2011), Glattfelder (2013, 2019). In plain words: The centrality score \(\chi _i\) of node i depends on the centrality scores \(\chi _j\) of all its neighbors j and their intrinsic node value \(v_j\). In mathematical terms

$$\begin{aligned} \chi _i = \sum _j \left( W_{ij} \chi _j + W_{ij} v_j \right) . \end{aligned}$$
(7.6)

The solution to (7.5) is given by

$$\begin{aligned} \chi = (\mathbb {1} - W )^{-1} W v, \end{aligned}$$
(7.7)

with the identity matrix \(\mathbb {1}\). In essence, the centrality is computed solely from the adjacency matrix W utilizing the (computationally intense) mathematical operation of matrix inversion.
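As a toy illustration (the ownership percentages and node values below are invented), Eq. (7.7) can be evaluated directly for the simple chain of Fig. 7.1:

```python
import numpy as np

# Toy evaluation of Eq. (7.7) for the chain of Fig. 7.1 (made-up numbers):
# shareholder 0 owns 80% of firm 1, which in turn owns 50% of firm 2.
W = np.array([[0.0, 0.8, 0.0],
              [0.0, 0.0, 0.5],
              [0.0, 0.0, 0.0]])
v = np.array([0.0, 10.0, 20.0])     # intrinsic node values (e.g., in USD)

chi = np.linalg.solve(np.eye(3) - W, W @ v)   # chi = (1 - W)^{-1} W v
print(chi)   # node 0: 0.8*10 + 0.8*0.5*20 = 16.0, i.e., direct plus indirect value
```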

Economic power in ownership networks is equivalent to this network centrality. Consider the following expression

$$\begin{aligned} p_i = \sum _{j \in \varGamma (i)} W_{ij} v_j, \end{aligned}$$
(7.8)

where \(\varGamma (i)\) is the set of indices of the neighbors of i. In other words, \(\varGamma (i)\) denotes all the companies in the portfolio of shareholder i and \(p_i\) is a simple proxy for the direct portfolio value of i. In the presence of a network the notion of the indirect portfolio emerges naturally. Now, all downstream nodes reachable (with at least two “hops”) from i are included in the calculation, yielding all the following paths

$$\begin{aligned} \begin{aligned} \hat{p}_i =&\sum _{j \in \varGamma (i)} \sum _{k \in \varGamma (j)} W_{ij} W_{jk} v_k + \cdots +\\&\sum _{j_1 \in \varGamma (i)}\sum _{j_2 \in \varGamma (j_1)} \cdots \sum _{j_{m} \in \varGamma (j_{m-1})} W_{i j_1} W_{j_1 j_2} \cdots W_{j_{m-1} j_m} v_{j_m} + \cdots . \end{aligned} \end{aligned}$$
(7.9)

As a result, one can assign the sum of the direct and indirect portfolio value in USD to each shareholder i, retrieving the total portfolio value

$$\begin{aligned} \xi _i = p_i + \hat{p}_i. \end{aligned}$$
(7.10)

In matrix notation, this can be re-expressed as

$$\begin{aligned} \xi = \sum _{l=1}^{\infty } W^l v, \end{aligned}$$
(7.11)

where \(W^l\), by design, encodes all paths of length l in the network. Thus (7.11) considers all paths of all lengths in the network. The vector \(\xi \) is the resulting total portfolio value.

By utilizing the series expansion

$$\begin{aligned} (\mathbb {1}- W)^{-1} = \mathbb {1} + W + W^2 + W^3 +\cdots \end{aligned}$$
(7.12)

one finds that

$$\begin{aligned} (\mathbb {1}- W)^{-1} W = W (\mathbb {1}- W)^{-1} = \sum _{n=1}^\infty W^n. \end{aligned}$$
(7.13)

As a consequence

$$\begin{aligned} \chi = (\mathbb {1} - W )^{-1} W v = \sum _{l=1}^{\infty } W^l v = \xi . \end{aligned}$$
(7.14)

Thus, the total portfolio value \(\xi _i\), measured in USD, is nothing other than the network centrality measure \(\chi _i\), encoding the relevance of nodes in a directed and weighted network where a value \(v_i\) is attached to the nodes. This is an elegant example of a Level 3 type network analysis (Sect. 6.3.2), where the context of the real-world network is reinterpreted using pure network measures.
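The equivalence (7.14) is easy to verify numerically on a small random network with cross-shareholdings (a purely illustrative sketch; the weights are kept small so that the series converges):

```python
import numpy as np

# Numerical check that the total portfolio value of Eq. (7.11) coincides
# with the centrality of Eq. (7.7), as derived in Eq. (7.14). The random
# network below is a made-up example containing cross-shareholding cycles.
rng = np.random.default_rng(0)
n = 5
W = rng.random((n, n)) * 0.15     # small weights: spectral radius below one
np.fill_diagonal(W, 0.0)          # no self-ownership
v = rng.random(n) * 100.0         # intrinsic node values

chi = np.linalg.solve(np.eye(n) - W, W @ v)   # closed form, Eq. (7.7)

xi = np.zeros(n)                  # truncated series, Eq. (7.11)
Wl = np.eye(n)
for _ in range(200):
    Wl = Wl @ W
    xi += Wl @ v

print(np.allclose(chi, xi))       # True: the two notions coincide
```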

4 The Past, Present, and Future of Economic Interactions

The creation of money is perhaps one of the greatest collective cognitive revolutions of mankind. It marked the beginning of “a new inter-subjective reality that exists only in people’s shared imagination” (Harari 2015, p. 197). It has been argued that the history of money, beginning about 5,000 years ago, is in fact the history of debt and trust (Graeber 2011). In effect, debt is seen as the oldest means of trade, where cash and barter are limited to situations of low trust involving strangers. Money is the universal tool fostering countless human collaborations, from empires to science. Indeed (Harari 2015, p. 294):

Science, industry and military technology intertwined only with the advent of the capitalist system and the Industrial Revolution. Once this relationship was established, it quickly transformed the world.

From this seed, Europe’s global dominance would emerge. “In 1775 Asia accounted for 80 per cent of the world economy” (Harari 2015, p. 312), dwarfing Europe. “In 1950 western Europe and the United States together accounted for more than half of global production” (Harari 2015, p. 312).

The history of money is also the history of human psychology and ethics, where self-interest is pitted against cooperation. Greed and fraud promise short-term enrichment but threaten the long-term formation of an equitable and sustainable society living in ecological balance with the biosphere supporting life on Earth. The shrewd, cunning business acumen of individuals is contrasted by the potential for human collective intelligence, manifested in adaptive, robust, and resilient financial and economic systems. This means that the prevailing architectures of power play a key role in taming or exacerbating complexity.

4.1 The Imperial Power of Profit

How did Europe, and the US by extension, achieve world dominance in such a short time-span? The answer lies in the marriage of modern science and capitalism. Specifically, the “military-industrial-scientific complex,” which first emerged as a feedback loop between science, empire, and capital 500 years ago, has been the main driving force of supremacy ever since.Footnote 19 Scientific knowledge (and inquisitiveness) paired with capital unleashes technological wizardry, which lends wings to imperialism.Footnote 20

And so then, inexorably, the baton was passed on. In a world dominated by Chinese, Muslim, and Indian powers, the Italian maritime explorer and navigator Christopher Columbus, supported by the Spanish Crown, set out to chart the world in 1492. His discoveries allowed the Spanish to conquer America and claim vast and untapped resources. Boldly charting unknown geographical terrain was only the beginning. By constructing novel capitalist systems, more untapped power could be mobilized. Yet again the baton was passed on.

The Dutch ensured their success through credit. Namely, by founding the world’s first stock exchange in Amsterdam, shares in limited liability joint-stock companies could be traded. For instance, the selling of shares of the Dutch East India Company (known as the VOC) helped finance the rise of the Dutch empire and the colonization of Indonesia. Within a century, the Dutch had replaced the global supremacy of the Spanish and Portuguese.

The penultimate baton pass in mankind’s race for global hegemony happened when the British empire came to glory—the largest the world has ever seen. Again, a limited liability joint-stock company was at the center of the action. By 1857, the East India Company commanded a private army of over 271,000 troops (Schmidt 1995). Finally, on the 4th of July 1776 the United States Declaration of Independence was signed. With the rise of an all-powerful military-industrial-scientific complex in the US in the 1960s, the fate was sealed and history took its course—a course that may appear inevitable in hindsight. Capitalism and science reign supreme and, with spectacular success, conquer and dominate nearly all domains of human affairs.

4.2 The Dark Side: The Economics of Greed and Fraud

At its very core, capitalism is based on the trust in the future, allowing for progress. In a nutshell, growth is the main driver of capitalism. It is a feedback loop, where the trust in the future translates into investments and credits which, in turn, result in economic growth, justifying the trust. However, this cycle comes with a human and ecological cost. In the following, the toll on the human psyche is discussed.Footnote 21

4.2.1 Self-Interest, Cooperation, Suffering, Meaning, and Happiness

Modern capitalism is truly a novel paradigm in the history of human thought. With Smith’s economic manifesto, An Inquiry into the Nature and Causes of the Wealth of Nations (Smith 1776), a new universal order emerged, akin to the immutable laws of nature. The all-powerful and all-knowing “invisible hand,” guiding markets based on self-interested profits, became canon. In essence (Harari 2015, p. 348):

Yet Smith’s claim that the selfish human urge to increase private profits is the basis for collective wealth is one of the most revolutionary ideas in human history—revolutionary not just from an economic perspective, but even more so from a moral and political perspective. What Smith says is, in fact, that greed is good, and that by becoming richer I benefit everybody, not just myself. Egoism is altruism.

Self-Interest

A modern-day proponent of the virtues of selfishness, and the powers of laissez-faire capitalism, was the Russian-American novelist-philosopher Ayn Rand (Rand 1964). She understood overt self-interest as the prime moral virtue and redefined altruism as evil.

Her philosophy of “objectivism” (Peikoff 1993), next to the ethical claims, declares real knowledge to be metaphysically objective and thus skepticism pointless. In other words, reality exists independently of consciousness, the human mind is in direct contact with reality through sense perception, and one can attain objective knowledge from perception through the process of concept formation and inductive logic. While there are arguably some points of contact with logical empiricism (Sect. 9.1.1) and critical rationalism (Sect. 9.1.2), Rand’s philosophy represents the polar opposite of postmodernism (Sect. 9.1.4), constructivism (Sect. 9.1.5), relativism (Sect. 9.1.6), and poststructuralism (Sect. 6.2.2)—indeed, it opposes the entire information-theoretic and participatory paradigm discussed in Chaps. 13 and 14.

Her relationship with academic philosophers has been ambiguous. This is seen, for instance, in the following (Badhwar and Long 2017):

For all her popularity, however, only a few professional philosophers have taken her work seriously. As a result, most of the serious philosophical work on Rand has appeared in non-academic, non-peer-reviewed journals, or in books, and the bibliography reflects this fact. We discuss the main reasons for her rejection by most professional philosophers in the first section. Our discussion of Rand’s philosophical views, especially her moral-political views, draws from both her non-fiction and her fiction, since her views cannot be accurately interpreted or evaluated without doing so. [...]

Her philosophical essays lack the self-critical, detailed style of analytic philosophy, or any serious attempt to consider possible objections to her views. Her polemical style, often contemptuous tone, and the dogmatism and cult-like behavior of many of her fans also suggest that her work is not worth taking seriously. Further, understanding her views requires reading her fiction, but her fiction is not to everyone’s taste. It does not help that she often dismisses other philosophers’ views on the basis of cursory readings and conversations with a few philosophers and with her young philosophy student acolytes. Some contemporary philosophers return the compliment by dismissing her work contemptuously on the basis of hearsay. Some who do read her work point out that her arguments too often do not support her conclusions. This estimate is shared even by many who find her conclusions and her criticisms of contemporary culture, morality, and politics original and insightful. It is not surprising, then, that she is either mentioned in passing, or not mentioned at all, in the entries that discuss current philosophical thought about virtue ethics, egoism, rights, libertarianism, or markets.

Cooperation

Returning to science and complexity, the case has been made that in fact altruism and cooperation are the successful driving forces behind evolution (Trivers 1971; Axelrod and Hamilton 1981; Axelrod 1997; Nowak and Highfield 2011; Tomasello 2014; Damasio 2018). Indeed (Nowak and Highfield 2011, back cover):

Some people argue that issues such as charity, fairness, forgiveness and cooperation are evolutionary loose ends, side issues that are of little consequence. But as Harvard’s celebrated evolutionary biologist Martin Nowak explains in his ground-breaking and controversial book, cooperation is central to the four-billion-year-old puzzle of life. Indeed, it is cooperation, not competition, that is the defining human trait.

Next to the scientific understanding of the evolutionary benefits of altruism, most religious traditions contain elements sanctifying the virtues of modesty, frugality, humbleness, gratitude, benevolence, and generosity. The motivation being the focus on the afterlife, where this-worldliness holds limited appeal. Indeed, every monk—from Buddhism, Jainism, Hinduism, Christianity, Judaism, and Sufism—epitomizes this ancient ideal: a lifestyle withdrawn from mainstream society, characterized by abstinence from any sensual pleasures and distractions, purely focused on contemplation and meditation. In other words, a chosen life of asceticism. Within this context, the Protestant Reformation brought about a reorientation and a break with the past (Tarnas 1991, p. 246):

Whereas traditionally the pursuit of commercial success was perceived as directly threatening to the religious life, now the two were recognized as mutually beneficial. [...] Within a few generations, the Protestant work ethic, along with the continued emergence of an assertive and mobile individualism, had played a major role in encouraging the growth of an economically flourishing middle class tied to the rise of capitalism.

This profoundly secularizing effect on Western culture is rather paradoxical, as the Reformation’s “essential character was so intensely and unambiguously religious” (Tarnas 1991, p. 240 and Sect. 5.3.1). Nonetheless, the road towards a capitalist-consumerist paradigm was paved (Harari 2015, p. 348):

The new ethic promises paradise on condition that the rich remain greedy and spend their time making more money, and that the masses give free rein to their cravings and passions—and buy more and more.

Suffering

One spiritual tradition singles out greed as a particularly devious and malignant root of suffering. According to the Buddhist tradition, the first teachings that the Buddha gave in Sarnath around 600 B.C.E., after attaining enlightenment (and liberation from saṃsāra, the endless cycle of rebirth), are known as the Dhammacakkappavattana Sutta.Footnote 22 Within the text, the Four Noble Truths are given, outlining the reasons for suffering. The Noble Truth of suffering (dukkha) summarizes:

  • Birth, aging, sickness, and death are painful.

  • Sorrow, lamentation, physical pain, grief, and despair are painful.

  • Union with what is disliked and separation from what is liked are painful.

  • Not to get what one wants is painful.

  • Clinging to the five aggregates that form a person (known as Skandha and relating to matter, sensation, perception, mental formations, and consciousness) is painful.

The Noble Truth of the origin of suffering is unquenchable thirst bound up with passionate greed. Namely, the thirst for sense-pleasures, existence and becoming, and self-annihilation. In essence, suffering arises due to the never-ending and pointless pursuit of ephemeral feelings—the craving for and clinging to impermanent states and things which are, by their own nature, incapable of creating lasting satisfaction. The Buddhist Wheel of Life (bhavacakra) is a symbolic representation of saṃsāra, the endless cycle of existence. It details the six realms of rebirth. One notable realm is that of the hungry ghosts (Goodman 2017):

Hungry ghosts are depicted with large bellies and tiny mouths; driven by greed, they seek endlessly for something to eat or drink, but even when they find a morsel they can swallow, it turns into filth or fire in their mouths.

This appears as a very apt metaphor for unchecked consumerism, lacking any meaning, value, and appreciation. In any case, the Noble Eightfold Path is the Buddha’s proposed remedy for suffering. It describes self-experiential practices based on meditation, leading to liberation.

Perhaps when confronted with one’s own mortality, one can see more clearly and deeply. Looking back at our lives, we can thus identify meaningful or futile endeavors. Bronnie Ware worked as a nurse in a palliative care unit. Her experiences with the dying were captured in her book Top Five Regrets of the Dying (Ware 2011). There we can read:

  1. I wish I’d had the courage to live a life true to myself, not the life others expected of me.

  2. I wish I’d had the courage to express my feelings.

  3. I wish I had stayed in touch with my friends.

  4. I wish that I had let myself be happier.

However, one regret is central to the context of this section: I wish I hadn’t worked so hard. In Ware’s wordsFootnote 23: “All of the men I nursed deeply regretted spending so much of their lives on the treadmill of a work existence.”

Meaning

From such a perspective, the accumulation of material wealth appears fruitless and hollow. Faced with the absolute certainty of one’s own physical death, the life one chooses to live may feel like an insignificant blink in the grand scheme of existence. Trying to fill it with meaning is then possibly more important and rewarding than filling it with an ephemeral accumulation of wealth. This holds especially as many people experience their working lives as dull, devoid of meaning, and ultimately futile.

The anthropologist David Graeber, who became known to a wider audience through his book Debt: The First 5000 Years (Graeber 2011), wrote a provocative and controversial article called On the Phenomenon of Bullshit Jobs (Graeber 2013). The piece struck a chord and was widely shared on the Internet.Footnote 24 In a nutshell:

Huge swathes of people, in Europe and North America in particular, spend their entire working lives performing tasks they secretly believe do not really need to be performed. The moral and spiritual damage that comes from this situation is profound. It is a scar across our collective soul. Yet virtually no one talks about it.

Others have also commented on the gloomy and cynical caricature life can become if it is defined solely by the shallow pursuit of the capitalist-consumerist mirage seemingly dictated by society. The economist Tim Jackson succinctly observed (Jackson 2010):

This is a strange, rather perverse story. Just to put it in very simple terms: it’s a story about us, people, being persuaded to spend money we don’t have, on things we don’t need, to create impressions that won’t last, on people we don’t care about.

Or, in the slightly different words of the author Nigel Marsh (Marsh 2010):

And the reality of the society that we’re in is there are thousands and thousands of people out there leading lives of quiet, screaming desperation, where they work long, hard hours at jobs they hate to enable them to buy things they don’t need to impress people they don’t like.

Happiness

Maybe it is consoling to note that some economists have placed the pursuit of happiness in the center of their research, giving rise to happiness economics. Indeed (Frey and Stutzer 2002, back cover):

Curiously, economists, whose discipline has much to do with human well-being, have shied away from factoring happiness into their work.

On a personal note, having worked in the finance industry for over a decade,Footnote 25 I feel that only very few people are psychologically equipped with the capabilities to derive sustainable happiness from a large paycheck while remaining independent and free in their life planning.Footnote 26 This sentiment was also expressed by a risk and compliance consultant at a major bank (Luyendijk 2011):

My sense is that a lot of people in finance hate what they do. There’s no passion. But they are trapped by the money.

Overall (Haybron 2013, p. 56):

Affluence is a double-edged sword: it can buffer us from many ills and sate some of our wants. Yet it also tends to increase those wants and creates new vulnerabilities.

Empirical evidence suggests the mechanisms which are at work (Frey and Stutzer 2002, p. 78f.):

Wants are insatiable. The more one gets, the more one wants. As long as one has a yearly income of $50,000, an income of $100,000 seems a lot. But as soon as one has achieved it, one craves $200,000. The expected marginal utility of income does not seem to decrease much if at all.

Moreover, there appears to be a threshold of income, above which happiness starts to lose its traction (Frey and Stutzer 2002, p. 83):

At low levels of income, a rise in income strongly raises well-being. But once an annual income of about U.S. $15,000 has been reached, a rise in income level has a smaller effect on happiness. Higher income is still experienced as raising well-being, but at a lower rate. For Switzerland, in contrast, the highest income recipients even report somewhat lower well-being than does the income group immediately below.

Strikingly (Frey and Stutzer 2002, p. 91f.):

Individuals do not value absolute income, but compare it to the income of relevant others. This opens up the issue of what persons or groups one compares oneself with.

The concept of the “hedonic treadmill” asserts that people adapt to improving economic conditions in such a way that no improvement of happiness is attained (Brickman and Campbell 1971). The Economist also chimed in (The Economist 2012b), summarizing:

So levels of income are, if anything, inversely related to felicity. Perceived happiness depends on a lot more than material welfare.

A particularly astounding study compared major lottery winners with paralyzed accident victims (Brickman et al. 1978). It was concluded that happiness is indeed relative and that habituation erodes the impact of ill or good fortune—even for life-changing events. The paraplegics exhibited a strong nostalgia effect, making them rate their past as much happier. And surprisingly (Brickman et al. 1978):

It should be noted, however, that the paraplegic rating of present happiness is still above the midpoint of the scale and that the accident victims did not appear nearly as unhappy as might have been expected.

In contrast, the “lottery winners were not happier than controls and took significantly less pleasure from a series of mundane events” (Brickman et al. 1978).

If such insights, based on individual psychological behaviors, are extended to larger groups, some well-established indices, measuring the development of countries, are challenged. For instance, the gross domestic product (GDP) or the Human Development Index (HDI) are found to be missing nuance. In 2011, the UN General Assembly resolution 65/309 Happiness: Towards a Holistic Definition of DevelopmentFootnote 27 invited the member countries to assess the happiness of their people in an effort to establish a data-driven approach to public policy. The World Happiness Report is now an annual publication of the United Nations,Footnote 28 ranking national happiness.Footnote 29 Another initiative is the Happy Planet Index,Footnote 30 an index of human well-being and environmental impact. In other words, a nation’s success depends on its ability to create happy and healthy lives for its citizens within environmental limits. According to this metric, the US is ranked 108th out of 140 indexed countries, with Costa Rica leading the list.

What has Buddhism to say to all of this, as it is inherently a practice to attain (individual and collective) happiness? Basically three things. Happiness is an inner state of being which can be cultivated independently of external factors. Then, meditation, the practice of transforming the mind by observing the mind, is central to that aim. Finally, the feeling of unconditional compassion towards all sentient beingsFootnote 31 is a direct shortcut to happiness—the antidote to suffering. In essence, altruism is the core concept of Buddhism (Ricard 2013). Buddhism has also attracted the attention of scientists. For one, ancient concepts discovered by Buddhist meditators while observing their own stream of consciousness reveal a reality which is endowed with many of the paradoxes modern physics is grappling with, especially quantum mechanics (Ricard and Trinh 2001).Footnote 32 Furthermore, the cultivation of mindfulness during meditation has a measurable impact on the brain. This example of brain plasticity manifested by meditators (Lutz et al. 2004) has caught the interest of neuroscientists (Ricard and Singer 2017). An embodiment of the cross-fertilization of science and Buddhism is Matthieu Ricard. He received a Ph.D. in molecular genetics from the Pasteur Institute in 1972. After graduation, Ricard decided to spend the rest of his life practicing Tibetan Buddhism in the Himalayas. In his words (Ricard 2004):

[Meditation] means familiarization with a new way of being, a new way of perceiving things, which is more in adequation with reality, with interdependence, with the stream and continuous transformation, which our being and our consciousness is.

Ricard has been dubbed by the media as “‘the world’s happiest man’ after a study found remarkably ‘happy’ patterns of brain activity” (Haybron 2013, p. 60). Indeed (Haybron 2013, p. 60):

In a number of studies, Ricard has displayed exceptional powers of self-awareness and control. [...] In one study researchers subjected him to a loud noise like a gunshot while meditating. He showed little of the normal startle response. This is not normally thought to be the sort of thing a person can control.

In the words of Ricard (Haybron 2013, p. 29):

By “happiness” I mean here a deep sense of flourishing that arises from an exceptionally healthy mind. This is not a mere pleasurable feeling, a fleeting emotion, or a mood, but an optimal state of being

Lutz et al. (2004) analyzed the brain activity of seasoned meditators (10,000–50,000 hours of practice) inducing a state of “unconditional loving-kindness and compassion.” The study found that “their gamma levels leapt 600–800%. [...] these jumps in high-amplitude gamma activity are the highest ever reported in the scientific literature apart from pathological conditions like seizures” (Brockman 2009, p. 274).

4.2.2 The Fruits of Fraud

Economic success can be interpreted in network terms as centrality. The more successful/central a player is, the more ties are shared with other successful/central players—in formal and informal networks. This can result in a position of privileged information with a tempting potential for leverage. The notion of moral hazard describes a situation where two parties engage in an interaction with incomplete information about each other. Specifically, one party engages in risky behavior knowing that it is protected against the risk and that the other party will incur the cost. Tempted by greed, moral hazards can quickly turn into immoral behavior and fraud.

The financial industry has a troubled history of misconduct. A recent and illuminating example is that of the London Inter-bank Offered Rate, known by its acronym LIBOR. In essence, it represents the average interest rate at which a selection of major banks may obtain short-term loans in the London interbank market. The LIBOR is the world’s most widely-used benchmark for short-term interest rates and it affects a lot of value in the economy. For instance, “at least an estimated $350 trillion in derivatives and other financial products are tied to it” (The New York Times 2012). In order to calculate the LIBOR, every weekday, the 11 to 18 contributor banks are asked to estimate the rate at which they could borrow funds from other banks. From this list of numbers, some of the lowest and highest values are discarded and the average of the remaining ones is taken as the rate for that day. Effectively, the sophistication of a spreadsheet is all that is required to set a number to which vast amounts of money are tied. And, of course, honesty.
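The fixing procedure can be sketched in a few lines (the quotes below are invented and the exact trimming rule varied with the number of contributor banks):

```python
# Sketch of a LIBOR-style trimmed mean (illustrative only: the submitted
# quotes are hypothetical and the trimming rule is an assumption).

def libor_fixing(submissions, trim=4):
    """Drop the `trim` lowest and `trim` highest quotes, average the rest."""
    ranked = sorted(submissions)
    kept = ranked[trim:-trim]
    return sum(kept) / len(kept)

quotes = [0.52, 0.53, 0.55, 0.55, 0.56, 0.57, 0.57, 0.58,
          0.58, 0.59, 0.60, 0.61, 0.62, 0.63, 0.65, 0.70]  # hypothetical quotes (%)
print(round(libor_fixing(quotes), 4))   # 0.5825
```

Since the trimming only removes the most extreme quotes, a handful of banks coordinating their submissions can still nudge the resulting average.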

In 2012, it was uncovered that major global banks had been manipulating this simplistic calculation for perhaps decades, culminating in 2008, when the tweaking of the LIBOR made the financial situation of the banks appear healthier than it actually was during the turmoil of the financial crisis. As a result of the fraudulent collusion, the involved banks were fined about 9 billion USDFootnote 33 and some traders were sentenced to prison.

Fig. 7.4
figure 4

The network of collusion. The 12 major global financial institutions are shown which were caught up in one, or more, of the five scandals of systematic market manipulation. The banks are associated via dark undirected links to the scandals. The financial institutions are linked by ownership relations among themselves and the labels are scaled by degree. Details are given in the text and Table 7.1

The LIBOR scandal, unfortunately, was not an isolated incident. Various other systematic market manipulations were discovered. For instanceFootnote 34:

  • Banks colluded from December 2007 until January 2013 to manipulate the USD 5.3 trillion-a-day foreign exchange market for their own financial gain. The banks agreed to pay USD 4.3 billion to resolve the claims.

  • Banks were accused of conspiring to control the USD 16 trillion credit default swap market in violation of US antitrust laws. The banks agreed to pay USD 1.9 billion to resolve the claims.

In 2014, a two-year investigation conducted by the Senate Permanent Subcommittee on Investigations accused banks of manipulating commodity prices. In 2015, a bank probe into precious metal collusion opened. It is somewhat surprising that the same culprits always appear in all the scandals. See Fig. 7.4 for the network of involved banks. Table 7.1 shows the numbers involved. In effect, 9 financial institutions, with a combined market capitalization of USD 694,643,932,000, commanding USD 13,113,044,863,000 in assets, paid a total of USD 71,255,650,000 in fines, over a seven-year period, to settle charges. While a yearly fine of about USD 10 billion may appear large, it actually only accounts for 2.6% of the combined operating revenue of the institutions in 2014. Considering the amount of money that was potentially gained by the fraud, and the damage that was incurred by society, the fines lose their deterrent effect. Moreover, one wonders if these revelations only represent the tip of the iceberg, hinting at something more sinister that has been forming for a long time. For instance, Goldman Sachs has been compared to a “great vampire squid wrapped around the face of humanity, relentlessly jamming its blood funnel into anything that smells like money” and accused of engineering “every major market manipulation since the Great Depression” (Taibbi 2010).

Table 7.1 A sample of major global banks involved in systematic market manipulations. All numbers are in 1,000 USD. The financial information is taken from the Orbis database (https://orbis.bvdinfo.com/) for 2014. The fines paid are from a seven-year time period (2007–2014). See https://blogs.ft.com/ftdata/2015/07/22/bank-fines-data/

The banks caught up in fraudulent behavior have always stated that the misconduct was perpetrated by isolated individuals—rogue traders. Any culture of systematic fraud and greed was categorically denied by the senior management. In contrast, convicted bankers often claimed that they operated in an environment they believed fostered and rewarded such behavior, where management was complicit in such patterns of misconduct. The Ghanaian UBS trader Kweku Adoboli was convicted of fraud in 2012, and incarcerated, for engaging in unauthorized trading that cost the bank over USD 2 billion. He unsuccessfully appealed against the conviction, relativizing his role by claiming that his senior managers were aware of his actions and encouraged him to take risks (BBC News 2012):

Adoboli said he had “lost control in the maelstrom of the financial crisis”, and was doing well until he changed from a conservative “bearish” position to an aggressive “bullish” stance under pressure from senior managers.

Adoboli’s defence lawyer told a jury to blame the institution, not the individual (The Economist 2012a):

[The lawyer] showcased Mr Adoboli’s trial as an indictment of UBS’s and other banks’ poor reputations. He cited other reported misdemeanors as evidence of a widespread cultural failure: UBS’s $50 billion losses from subprime mortgages, its $780 m fine for helping wealthy American clients shirk tax, accusations of Libor rate-fixing and PPI [payment protection insurance] misselling.

4.2.3 The Neoliberal World Order

Classical liberalism was intertwined with the rise of the Enlightenment movement beginning in the late 17th Century. Thinkers like Thomas Hobbes and John Locke argued that all men are free and equal. It followed that (Steger and Roy 2010, p. 5):

Naturally endowed with the right to life, liberty, and property, humans could legitimately establish only limited governments, whose chief task consisted of securing and protecting these individual rights, especially private property.

In essence, classical liberalism celebrated free markets and individual self-expression. The Great Depression, striking in the first half of the 20th Century, led economic thinkers to reevaluate the role of the government. Especially Keynes, whose influence grew at the time, advocated government spending during economic crises.Footnote 35 The ideas of classical liberalism lost their appeal and justification, and only in the middle of the 20th Century did they reemerge within the new doctrine of neoliberalism. Yet again, the flapping of a butterfly’s wing set off a chain reaction that would change the world forever.

The founding fathers of neoliberalism probably would never have dared to dream of such a world-spanning success of their ideology when they first met in 1947 in the Swiss resort of Mont Pèlerin. A society was then founded by von Hayek and it soon attracted the interest of like-minded intellectuals, notably Nobel-prize winner Friedman of the Chicago School. A global network of organizations emerged, supporting the ideology, which would establish neoliberalism as the world’s dominant economic paradigm by the 1990s.

At its core, neoliberalism is a set of beliefs advocating deregulation of the economy, liberalization of trade and industry, and privatization of state-owned enterprises. Inspired by the ideology, policy agendas emerged, aimed at restricting governments’ powers through minimal taxation and deficit reduction combined with spending restraints. Specifically, the high taxation of wealthy individuals and social welfare programs for all were perceived as fundamentally misguided. Laissez-faire economics and individual self-interest ruled. Neoliberalism has been associated with Ronald Reagan, Margaret Thatcher, Bill Clinton, Tony Blair, Boris Yeltsin, and George W. Bush. The ideology first became visible in the 1980s, with Pinochet’s regime, greatly inspired by the Chicago School and Friedman. After that, the ideas spread around the globe. See Steger and Roy (2010) for details.

The Secret of Success

To this day, the prevailing economic orthodoxy is the neoliberal strain of capitalism. It has been argued to be an ideology for the ultra-rich. Indeed, neoliberalism has resulted in spectacular spoils for some. To picture the nearly unfathomable success of the top earners, imagine people’s height being proportional to their income. Now imagine the entire adult population of the US walking past you, in ascending order of income, in a single hour. The following would unfold (The Economist 2011):

The first passers-by, the owners of loss-making businesses, are invisible: their heads are below ground. Then come the jobless and the working poor, who are midgets. After half an hour the strollers are still only waist-high, since America’s median income is only half the mean. It takes nearly 45 minutes before normal-sized people appear. But then, in the final minutes, giants thunder by. With six minutes to go they are 12 feet tall. When the 400 highest earners walk by, right at the end, each is more than two miles tall.

Also see Sect. 6.4.2 for the Pareto principle and Sect. 6.4.3.2 for scaling-law distributions. A case in point is hedge fund manager David Tepper. In 2009, he successfully bet that the US government would not let the big banks fail, personally earning him an estimated USD 4 billion that year (Schwartz and Story 2010).

One might point out that such financial spoils are justified as they are based on the willingness to take on risk and the luck of winning. However, the practices of financial institutions appear to paint a different picture (Story 2008):

For [the investment manager] Dow Kim, 2006 was a very good year. While his salary at Merrill Lynch was $350,000, his total compensation was 100 times that—$35 million. The difference between the two amounts was his bonus, a rich reward for the robust earnings made by the traders he oversaw in Merrill’s mortgage business. [...]

But Merrill’s record earnings in 2006—$7.5 billion—turned out to be a mirage. The company has since lost three times that amount, largely because the mortgage investments that supposedly had powered some of those profits plunged in value.

Unlike the earnings, however, the bonuses have not been reversed.

The causal relationship whereby corporate performance is reflected in the compensation of managers appears to be a thing of the past (Khurana and Zelleke 2009):

Take the now-infamous example of the recently ousted Merrill Lynch chief John Thain, who not only splurged on his office decor [spending over $1 million] but also had the audacity to propose a $10 million bonus for himself. In recognition of what? A year’s work in which the company continued to make bad business decisions, lost about 80% of its value, sold itself to Bank of America to stave off possible collapse and appears to have seriously damaged its buyer’s franchise?

Once again, a systemic culture appears to permeate an entire industry—a culture flirting with greed and fraud. Michael Lewis worked for Salomon Brothers at the end of the 1980s. He then resigned to write the book Liar’s Poker (Lewis 1990) based on his experiences at the company, before becoming a financial journalist. Looking back, he recalls (Lewis 2011, p. xiii):

The willingness of a Wall Street investment bank to pay me hundreds of thousands of dollars to dispense investment advice to grown-ups remains a mystery to me to this day. I was twenty-four years old, with no experience of, or particular interest in, guessing which stocks and bonds would rise and which would fall. [...] Believe me when I tell you that I hadn’t the first clue. I’d never taken an accounting course, never run a business, never even had savings of my own to manage. I stumbled into a job at Salomon Brothers in 1985, and stumbled out, richer, in 1988, and even though I wrote a book about the experience, the whole thing still strikes me as preposterous—which is one of the reasons the money was so easy to walk away from. I figured the situation was unsustainable. Sooner rather than later, someone was going to identify me, along with a lot of people more or less like me, as a fraud.

Digging deeper, Geraint Anderson, a former investment banker, also wrote a book about his personal experiences, called City Boy: Beer and Loathing in the Square Mile (Anderson 2010). Some of the reviews readFootnote 36:

  • London’s pernicious financial world reveals itself in all its ugliness. [Daily Mail]

  • An effective indictment of the narcissism and decadence of City life. [The Times]

  • As a primer to back-stabbing, bullying, drug-taking, gambling, boozing, lap-dancing, this takes some beating. [Evening Standard]

Indeed, one wonders how much of the aggressive and reckless risk taking in finance is fueled by amphetamine-type stimulants and testosterone (Coates 2012). Yet again, everything appears to boil down to human psychology. Dacher Keltner, a professor of psychology, argues that true power requires modesty and empathy, not force and coercion (Keltner 2017). However, he observes (Keltner 2007):

[...] studies also show that once people assume positions of power, they’re likely to act more selfishly, impulsively, and aggressively, and they have a harder time seeing the world from other people’s points of view. This presents us with the paradox of power: The skills most important to obtaining power and leading effectively are the very skills that deteriorate once we have power.

Perhaps the greatest impact on the human psyche comes from greed. After a former Goldman Sachs executive director resigned, he wrote an opinion piece in the New York Times, in which he heavily criticized the firm’s ethical culture and moral conduct (Smith 2012):

The firm changed the way it thought about leadership. Leadership used to be about ideas, setting an example and doing the right thing. Today, if you make enough money for the firm (and are not currently an ax murderer) you will be promoted into a position of influence.

Furthermore, the entire system appears to be a very uneven playing field—the polar opposite of a meritocracy. Again, an insider reports. Andrew Lahde founded a small hedge fund which came into the spotlight after it returned 866% in one year, betting against the subprime collapse. In 2008, he closed the fund and wrote a “goodbye letter” to his investors. There, one can read (Lahde 2008):

These people who were (often) truly not worthy of the education they received (or supposedly received) rose to the top of companies such as AIG, Bear Stearns and Lehman Brothers and all levels of our government. All of this behavior supporting the Aristocracy, only ended up making it easier for me to find people stupid enough to take the other side of my trades. [...]

I now have time to repair my health, which was destroyed by the stress I layered onto myself over the past two years, as well as my entire life —where I had to compete for spaces in universities and graduate schools, jobs and assets under management—with those who had all the advantages (rich parents) that I did not. May meritocracy be part of a new form of government, which needs to be established.

On the issue of the U.S. Government, I would like to make a modest proposal. First, I point out the obvious flaws, whereby legislation was repeatedly brought forth to Congress over the past eight years, which would have reigned in the predatory lending practices of now mostly defunct institutions. These institutions regularly filled the coffers of both parties in return for voting down all of this legislation designed to protect the common citizen. This is an outrage, yet no one seems to know or care about it.

Indeed, the billionaire Warren Buffett wrote an op-ed article in the New York Times where he laconically commented:

These and other blessings [extraordinary tax breaks] are showered upon us [the ultra-rich] by legislators in Washington who feel compelled to protect us, much as if we were spotted owls or some other endangered species. It’s nice to have friends in high places.

The claim that unique intelligence, creativity, drive, and hard work result in success appears to no longer hold true. By simply being part of an elite in-group, i.e., possessing a high network centrality, one is potentially rewarded with great wealth. We seem to live in a global plutocratic oligarchy. Indeed, “if wealth was the inevitable result of hard work and enterprise, every woman in Africa would be a millionaire” (Monbiot 2011a). Moreover, the University of St. Gallen in Switzerland, considered to be one of the leading business schools in Europe, offered a lecture on the emergence of new markets, in which the increasing importance of innovation was discussed. One of the topics of the course was leadership, power, and conflict (taken from the 2007 syllabus,Footnote 37 translation mine):

Innovation is not generated in the power center (management) of a corporation, but instead, exactly by such employees, who diverge from the prevailing mindset of the company.

It appears to be a fair assessment that the innovative employee will not reap the rewards of his creativity—in contrast to the senior management. An article in the Harvard Business Review asks why so many incompetent men become leaders. The answer (Chamorro-Premuzic 2013):

In my view, the main reason for the uneven management sex ratio is our inability to discern between confidence and competence.

This is consistent with the finding that leaderless groups have a natural tendency to elect self-centered, overconfident and narcissistic individuals as leaders, and that these personality characteristics are not equally common in men and women.

In other words, what it takes to get the job is not just different from, but also the reverse of, what it takes to do the job well.

But in the end, should we really be concerned about the incomes of the ultra-rich and their lifestyles? They are the praised job creators, trickling down their wealth. Indeed, neoliberalism promises to make us all richer. Or not?

The Rise of Inequality

On the 13th of April 2010, the Dalai Lama tweetedFootnote 38:

Economic inequality, especially that between developed and developing nations, remains the greatest source of suffering on this planet.

Indeed, today we witness a spectacularly unequal distribution of wealth worldwide. Perhaps this title of an article on a British business news site epitomizes the astounding degree of disparity best (Jacobs 2018):

Just 9 of the world’s richest men have more combined wealth than the poorest 4 billion people.

Inequality is scale-invariant (see Sect. 6.4.1). In other words, it affects all levels of income. As a result, as one can read in an article in the Wall Street Journal, titled The Real Wealth Gap: Between the Rich and Super-Rich (Frank 2012):

Forget the 99 versus the one percent. Consider the economic battle raging between the one percent and the 0.0001%.

On the other end of the spectrum, in 2011, 71% of people globally lived on USD 10 or less per day (Kochhar 2015). Moreover, the number of people worldwide living on less than USD 1.90 per day was 766 million in 2013.Footnote 39 The majority of the poor live in rural areas, are poorly educated, and over half are under 18 years of age (World Bank 2017). The poorest 2 billion people spend about 50–70% of their income on food (Brown 2011).

From a complexity perspective, it is not surprising to find that income and wealth are distributed in such an uneven manner. It is a universal feature of complex systems—from non-living and living domains—to display properties which are distributed according to a scaling law (see Sects. 6.4.1 and 6.4.3.2). In a nutshell, such a distribution means that nearly all the entities comprising a complex system are only marginally relevant, while a select few are of paramount importance. Although this may sound unfair and undemocratic from a human perspective, it reflects a universal organizing principle, unconcerned with human affairs. The challenge then is to either force these distributions to morph into a different form, or, perhaps easier, to tweak the scaling-law exponent in such a way that the resulting inequality is less pronounced.
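To make the role of the exponent concrete, here is a minimal numerical sketch in Python; the parameter values are illustrative assumptions, not estimates from any data set cited here. It samples synthetic incomes from a pure Pareto scaling-law distribution and reports how the share held by the top 1% changes as the tail exponent is varied.

```python
import numpy as np

def top_share(alpha, x_min=1.0, n=1_000_000, top=0.01, seed=42):
    """Sample n synthetic incomes from a Pareto distribution with tail
    exponent `alpha` and return the share of total income held by the
    richest `top` fraction of the population."""
    rng = np.random.default_rng(seed)
    incomes = x_min * (1.0 + rng.pareto(alpha, n))  # Pareto (Type I) samples
    incomes.sort()
    cutoff = int(n * (1.0 - top))
    return incomes[cutoff:].sum() / incomes.sum()

for alpha in (1.2, 1.6, 2.5):
    print(f"tail exponent {alpha}: top 1% hold about {top_share(alpha):.0%}")
```

A heavier tail (smaller exponent) concentrates a much larger share of the total in the hands of the top 1%, which is why even a modest change in the exponent translates into markedly more, or less, pronounced inequality.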

Inequality has haunted humanity since the dawn of time. Although it may have been rising for several thousand years (Kohler et al. 2017), there was a brief period when things looked bright. As the world emerged from World War II, the political applications of Keynesian ideas brought about the “golden age of controlled capitalism” from about 1945 until 1975 (Steger and Roy 2010). The American “New Deal” and British “welfarism” resulted in an expanding middle class. High taxation of wealthy individuals and profitable corporations was offset by rising middle-class wages and increased social services. As explained earlier, the 1980s marked the start of the budding success of neoliberalism. Then things started to change. Empirical research on capitalized income tax data in the US reveals the following. Tracking the evolution of the share of wealth owned by the richest 10, 1, and 0.1%, respectively, a distinct pattern emerges. Looking at the period from 1930 until 2013, the wealth share resembles a U-shaped function. In other words, starting in 1930 with high inequality, the wealth of the top owners declined, reached a minimum of inequality between 1978 and 1986, only to steadily increase again afterwards. As an example, the share of total household wealth held by the 0.1% richest families was about 23% in 1930, dropped to approximately 7% in 1978, and rose back up to roughly 22% in 2013. For details, see Saez and Zucman (2016). As a result, we started to become accustomed to statistics like (Davies et al. 2008):

The wealth share estimates reveal that the richest 2 per cent of adult individuals own more than half of all global wealth, with the richest 1 per cent alone accounting for 40% of global assets. The corresponding figures for the top 5% and the top 10% are 71 and 85%, respectively. In contrast, the bottom half of wealth holders together hold barely 1 per cent of global wealth. Members of the top decile are almost 400 times richer, on average, than the bottom 50%, and members of the top percentile are almost 2,000 times richer.

This appears almost quaint when compared to the situation ten years later in 2018, described by the headline mentioned above comparing 9 people to 4 billion people.

Some economists are more concerned about inequality than others (Stiglitz 2012; Piketty 2014; Maxton and Randers 2016). Perhaps Joseph Stiglitz, as an eminent and influential economist, finds the clearest words to explain the mechanisms of the problem, while also offering common-sense recommendations. Summarizing the status quo (Stiglitz 2011):

While the top 1% have seen their incomes rise 18% over the past decade, those in the middle have actually seen their incomes fall. For men with only high-school degrees, the decline has been precipitous—12% in the last quarter-century alone. All the growth in recent decades—and more—has gone to those at the top.

[...]

Those who have contributed great positive innovations to our society, from the pioneers of genetic understanding to the pioneers of the Information Age, have received a pittance compared with those responsible for the financial innovations that brought our global economy to the brink of ruin. First, growing inequality is the flip side of something else: shrinking opportunity.

[...]

But one big part of the reason we have so much inequality is that the top 1% want it that way. The most obvious example involves tax policy. Lowering tax rates on capital gains, which is how the rich receive a large portion of their income, has given the wealthiest Americans close to a free ride.

Moreover (Stiglitz 2016):

[R]ent-seeking means getting an income not as a reward for creating wealth but by grabbing a larger share of the wealth that would have been produced anyway. Indeed, rent-seekers typically destroy wealth, as a by-product of their taking away from others.

Growth in top incomes in the past three decades has been driven mainly in two occupational categories: those in the financial sector (both executives and professionals) and non-financial executives. Evidence suggests that rents have contributed on a large scale to the strong increase in the incomes of both.

[...]

A second argument centres on the popular misconception that those at the top are the job creators, and giving more money to them will thus create more jobs. Industrialised countries are full of creative entrepreneurial people throughout the income distribution. What creates jobs is demand: when there is demand, firms will create the jobs to satisfy that demand (especially if we can get the financial system to work in the way it should, providing credit to small and medium-sized enterprises).

Unsurprisingly, economic apologists have tried to find justifications (Stiglitz 2011):

Economists long ago tried to justify the vast inequalities that seemed so troubling in the mid-19th century—inequalities that are but a pale shadow of what we are seeing in America today. The justification they came up with was called “marginal-productivity theory.” In a nutshell, this theory associated higher incomes with higher productivity and a greater contribution to society. It is a theory that has always been cherished by the rich. Evidence for its validity, however, remains thin.

Specifically (Stiglitz 2016):

In the middle of the twentieth century, it came to be believed that “a rising tide lifts all boats”: economic growth would bring increasing wealth and higher living standards to all sections of society.

Resources given to the rich would inevitably “trickle down” to the rest. It is important to clarify that this version of old-fashioned “trickle-down economics” did not follow from the postwar evidence. The “rising-tide hypothesis” was equally consistent with a “trickle-up” theory—give more money to those at the bottom and everyone will benefit; or with a “build-out from the middle” theory—help those at the centre, and both those above and below will benefit.

Inequality poses many challenges to society (Stiglitz 2011):

The more divided a society becomes in terms of wealth, the more reluctant the wealthy become to spend money on common needs. The rich don’t need to rely on government for parks or education or medical care or personal security—they can buy all these things for themselves.

In the same vein (Stiglitz 2016):

[S]ocieties with greater inequality are less likely to make public investments which enhance productivity, such as in public transportation, infrastructure, technology and education. If the rich believe that they don’t need these public facilities, and worry that a strong government which could increase the efficiency of the economy might at the same time use its powers to redistribute income and wealth, it is not surprising that public investment is lower in countries with higher inequality.

[...]

In fact, as empirical research by the IMF has shown, inequality is associated with economic instability. In particular, IMF researchers have shown that growth spells tend to be shorter when income inequality is high. This result holds also when other determinants of growth duration (like external shocks, property rights and macroeconomic conditions) are taken into account: on average, a 10-percentile decrease in inequality increases the expected length of a growth spell by one half. The picture does not change if one focuses on medium-term average growth rates instead of growth duration. Recent empirical research released by the OECD shows that income inequality has a negative and statistically significant effect on medium-term growth. It estimates that in countries like the US, the UK and Italy, overall economic growth would have been six to nine percentage points higher in the past two decades had income inequality not risen.

Finally (Stiglitz 2011):

Of all the costs imposed on our society by the top 1%, perhaps the greatest is this: the erosion of our sense of identity, in which fair play, equality of opportunity, and a sense of community are so important.

Stiglitz offers the following recommendations (Stiglitz 2016):

Reforms include more support for education, including pre-school; increasing the minimum wage; strengthening earned-income tax credits; strengthening the voice of workers in the workplace, including through unions; and more effective enforcement of anti-discrimination laws. But there are four areas in particular that could make inroads in the high level of inequality which now exists.

First, executive compensation (especially in the US) has become excessive, and it is hard to justify the design of executive compensation schemes based on stock options. Executives should not be rewarded for improvements in a firm’s stock market performance in which they play no part. [...]

Second, macroeconomic policies are needed that maintain economic stability and full employment. [...]

Third, public investment in education is fundamental to address inequality. A key determinant of workers’ income is the level and quality of education. If governments ensure equal access to education, then the distribution of wages will reflect the distribution of abilities (including the ability to benefit from education) and the extent to which the education system attempts to compensate for differences in abilities and backgrounds. If, as in the United States, those with rich parents usually have access to better education, then one generation’s inequality will be passed on to the next, and in each generation, wage inequality will reflect the income and related inequalities of the last.

Fourth, these much-needed public investments could be financed through fair and full taxation of capital income. [...]

Finally, Stiglitz makes a gloomy prophecy (Stiglitz 2011):

Governments have been toppled in Egypt and Tunisia. Protests have erupted in Libya, Yemen, and Bahrain. The ruling families elsewhere in the region look on nervously from their air-conditioned penthouse—will they be next? [...] As we gaze out at the popular fervor in the streets, one question to ask ourselves is this: When will it come to America? In important ways, our own country has become like one of these distant, troubled places.

This sentiment is shared by the billionaire Nick Hanauer. His assessment is quite alarming (Hanauer 2014):

You probably don’t know me, but like you [the article is written as a memo addressed to his fellow billionaires] I am one of those .01%ers, a proud and unapologetic capitalist. I have founded, co-founded and funded more than 30 companies across a range of industries—from itsy-bitsy ones like the night club I started in my 20s to giant ones like Amazon.com, for which I was the first nonfamily investor.

At the same time that people like you and me are thriving beyond the dreams of any plutocrats in history, the rest of the country—the 99.99%—is lagging far behind. The divide between the haves and have-nots is getting worse really, really fast. In 1980, the top 1% controlled about 8% of U.S. national income. The bottom 50% shared about 18%. Today the top 1% share about 20%; the bottom 50%, just 12%. And so I have a message for my fellow filthy rich, for all of us who live in our gated bubble worlds: Wake up, people. It won’t last.

If we don’t do something to fix the glaring inequities in this economy, the pitchforks are going to come for us. No society can sustain this kind of rising inequality. In fact, there is no example in human history where wealth accumulated like this and the pitchforks didn’t eventually come out.

On a side note, Hanauer also prefers complexity economics over laissez-faire economics (Liu and Hanauer 2016).

It could well be that the unprecedented success of the neoliberal doctrine will lead to its own undoing. A small group of corporations have enjoyed spectacular profits in the last decade. This could turn out to be a fatal bug in the economy’s operating system. For one (Davidson 2016):

Collectively, American businesses currently have $1.9 trillion in cash, just sitting around. Not only is this state of affairs unparalleled in economic history, but we don’t even have much data to compare it with, because corporations have traditionally been borrowers, not savers.

Such inefficiencies pose a threat to capitalism, as profits are an essential part of the system and should be invested. Indeed (The Economist 2016c):

But high profits across a whole economy can be a sign of sickness. They can signal the existence of firms more adept at siphoning wealth off than creating it afresh [...]

The problem is that excessively high profits signal a lack of competition (The Economist 2016a). Greatly exacerbating the conundrum are the large asset managers. The emergence of all-powerful passive investment funds (recall Sect. 7.3.2.1) threatens the very fabric of capitalism. Some pundits have been candid about their opinion (The Economist 2016b):

In August [2016] analysts at Sanford C. Bernstein, a research firm, thundered: “A supposedly capitalist economy where the only investment is passive is worse than either a centrally planned economy or an economy with active market-led capital management.”

In its extreme conclusion, a financial system comprised mostly of passive funds could indeed spell the end of capitalism (Zweig 2016):

Even John C. Bogle, the founder of Vanguard Group who launched the first index mutual fund 40 years ago this month, agrees that passive investing can get too big for anybody’s good. “What happens when everybody indexes?” he asks. “Chaos, chaos without limit. You can’t buy or sell, there is no liquidity, there is no market.”

There has been a lot said and written about the perceived evils of neoliberalism. For instance (Monbiot 2007, 2011b, 2016, 2017; Verhaeghe 2014; Ostry et al. 2016; Dillow 2017; Rodrik 2017; Metcalf 2017; Deneen 2018).

4.3 The Blockchain: A Decentralized Architecture for the Economy

Adam Smith’s intuition about the all-powerful and all-knowing “invisible hand” guiding markets was perhaps not too far off the mark (Smith 1776). He simply misunderstood the mechanisms that would lead to self-correcting and resilient behavior. Indeed, it is surely not unrestrained self-interest which magically allows human systems to display signs of collective intelligence. However, his belief in emergent properties can be interpreted as an early vision of complexity science. Today, we do in fact know what can foster adaptive and robust behavior in complex systems. The blueprint is characterized by decentralization. In other words, the science of simple rules of interaction—encoding complexity—is invoked (see Sect. 5.2.2). The network reigns supreme.

It is an interesting observation that the design of most human systems is governed by a very specific architecture: the pyramid of power. From emperors and empresses, kings and queens, popes, chief rabbis, caliphs, czars, heads of state, generals of armies, senior academic administrators, presidents of the board of directors, and CEOs, centralized power emanates downwards through a pyramidal organizational structure of subordinates. While this design choice has obvious and historical reasons, a crucial question is how well it can cope with increasing complexity. Indeed, it appears as though in our world today—characterized by accelerating sophistication and interconnectivity—pyramids of control are not sufficient for tackling current and future global challenges. Perhaps our tribal human design patterns have reached their expiry date. Moreover, existing power is often very preoccupied with the retention of power. This can lead to temptations resulting in greed and fraud. In this context, it is consoling to know that nature has always seemed to favor bottom-up approaches to engineering and containing complexity (see Sect. 5.2.4)—biologically-driven decentralization.

Today, we are seeing the glimpse of an emerging decentralized architecture for finance and economics. Although still in its infancy, the technology represents a truly novel paradigm with great disruptive potential: The decentralized architecture enforces transparency, security, and auditability by design, in a network where the nodes do not need to trust each other. Since inception, the innovation driving this organizational change has seen many phases in its evolution: from a specific digital currency to a distributed ledger, hosting and executing code (called smart contracts). Indeed, distributed ledger technology (DLT) represents the third step in the evolution of the Internet:

  1. 1985–2000: The Internet of information.

  2. 2000–2015: The Internet of services.

  3. Since 2015: The Internet of value.

The next step will most likely be the Internet of things,Footnote 40 where billions of devices are equipped with an IP address and are assimilated into the network. This vision has also been called Industry 4.0 (after mechanization, mass production, and automation).

This current wave of decentralization, building upon cryptography, first emerged in 2009 and was initiated by Bitcoin. Originally, most people thought of the crypto-currency as only being useful for criminal activity in the darknet. Moreover, Bitcoin exchanges were plagued by scandals. Only slowly was it realized that the actual revolution was the underlying database, called the blockchain. In a nutshell, a blockchain is a decentralized, fail-proof, and tamper-proof public ledger. It utilizes cryptography to solve the Byzantine Generals’ Problem (Lamport et al. 1982), a problem of consensus-making in a system where communication channels cannot be trusted. In other words, despite the lack of central governance and trust, a blockchain allows a self-governing network (with no middleman) to be operated in a trustworthy manner. As soon as this innovation was recognized by a wider public, the tides turned. Especially in the finance community:

Banks put aside suspicion and explore shared database that drives Bitcoin. [(Shubber 2015) in The Financial Times]

Distributed ledgers, or blockchains, have the potential to dramatically reshape the capital markets industry, with significant impact on business models, reductions in risk and savings of cost and capital. (McKinsey & Company Report 2015)

The technology behind bitcoin could transform how the economy works. (The Economist 2015b)

To this day, no one knows who is responsible for the invention. There is only the pseudonym Satoshi Nakamoto associated with the designer (Nakamoto 2009). However, he (or she) did leave a message for the world in the first block of the Bitcoin blockchain—the genesis block. It contains a reference to a news article, “The Times 03/Jan/2009 Chancellor on brink of second bailout for banks,” thought to be a commentary on the global financial crisis and the instabilities of fractional-reserve banking.Footnote 41 Given Nakamoto’s skeptical stance towards banking and finance, he was probably in despair when greed hijacked Bitcoin. In January 2017, a Bitcoin (BTC) was roughly worth 900 USD. By December, the rate had climbed to nearly BTC/USD 20,000. Crypto-millionaires—and billionaires—emerged overnight and the fear of missing out (FOMO) fueled the, predictably unsustainable, rally.

In an interesting twist of events, Switzerland, a historic epicenter of old-school banking,Footnote 42 emerged as one of the leading jurisdictions supporting DLT. This quickly led to a flourishing initial coin offering (ICO) market, where some of the biggest offerings happened in the “crypto valley,” describing a geographical area located around Zug and Zurich. An ICO is the crypto-equivalent of an initial public offering (IPO), where the shares of a company are sold on an exchange for the first time. In an ICO, a private company can sell tokens—representing some kind of utility (e.g., access, usage, or voting rights) or promising some reward or benefit—to non-professional investors in a currently still unregulated market.Footnote 43 An ICO is based on a promise of future success and often companies only have a vision or a technical white-paper to offer as justification. Nonetheless, many start-up companies have been spectacularly successful in raising funds via ICOs. Indeed, the amount of money invested by traditional venture capitalists and angel investors in 2017 has been dwarfed by the ICO money (Rowley 2018), a concept that was still mostly unheard of in 2016. A blockchain platform called EOS, promising to revolutionize everything we know about blockchains, has currently raised nearly USD 1 billion.Footnote 44 Their ICO is planned to last for a whole year—an unprecedented move. It is remarkable that a start-up company offering only a vague ideaFootnote 45 has raised the most capital in the shortest time. Even more astounding: “EOS raises $700M despite token affording no ‘rights, uses, purpose, or features’” (Haig 2017). Not surprisingly, this extreme hype invites many scammers, and regulators worldwide are nervously observing these developments.

The original goal of cryptography was to establish a secure communication channel in the presence of malicious parties. Indeed, the encryption of messages has a long history. Today, cryptography is based on assumptions about the computational hardness of problems. In a nutshell, cryptography provides a one-way street: it is very easy to digitally encrypt a message, but practically impossible to extract the original message from the ciphertext. Next to encryption and secure communication, modern cryptography also addresses the topics of authentication and authorization. In the context of the blockchain, public-key cryptography is essential. See the Diffie-Hellman key exchange (Diffie and Hellman 1976) and the RSA cryptosystem (Rivest et al. 1978). Another core concept is the hash function, a one-way function that maps an arbitrary block of data to a fixed-size bit string (popular algorithms are MD5, SHA-1, and SHA-2). A Merkle tree is a data structure assembled from hashes. It is a tree in which every leaf node (i.e., a node with no descendants) holds some data and every non-leaf node (i.e., a node with descendants) holds a cryptographic hash of the data of its child nodes. The underlying data structure that powers blockchains is a Merkle tree. Every block contains the current history of valid transactions, which are secured using cryptography (by miners) and linked to the existing blockchain, in effect broadcasting the information to all the nodes in the network.
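As a toy illustration of these ingredients, the following Python sketch hashes a handful of made-up transactions into a Merkle root and links a block to its predecessor via the previous block's hash. It is a deliberately simplified model, not Bitcoin's actual block format.

```python
import hashlib

def sha256(data: bytes) -> bytes:
    """One-way hash: easy to compute, practically impossible to invert."""
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list) -> bytes:
    """Fold a list of data blocks into a single root hash (a toy Merkle tree)."""
    level = [sha256(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate the last node on odd levels
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# Toy "block": it commits to its transactions (via the Merkle root)
# and to the previous block's hash, forming a tamper-evident chain.
transactions = [b"alice->bob:5", b"bob->carol:2", b"carol->dave:1"]
prev_hash = b"\x00" * 32                   # placeholder for the genesis block
block_header = prev_hash + merkle_root(transactions)
print(sha256(block_header).hex())
```

Altering a single transaction changes the Merkle root and hence the block hash, which in turn invalidates every subsequent block; this is what makes the ledger tamper-evident.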

Group theory is also relevant to cryptography. In this book, groups were encountered in the context of symmetry in Chap. 3 (specifically Sects. 3.1.2, 3.1.3, and 3.1.4) and unification in Sect. 5.3.2. Groups can have a continuous (Sect. 5.3.1) or discrete (Sect. 5.3.2) character. For cryptography, discrete groups are relevant. In a nutshell, they are structures that consist of a set of symbols and an operation which combines any two of its elements to form a third one. Modular arithmetic is a system of arithmetic for integers, where for three positive integers a, b, and n

$$\begin{aligned} a \equiv b \mod n, \end{aligned}$$
(7.15)

is shorthand for \(a - b\) being a multiple of n. As an example, \(38 \equiv 14 \mod 12\). Integers modulo a prime number p, with multiplication as the (group) operation, form a group (if all multiples of p are excluded). In such groups, it is hard to compute the discrete logarithm (to some base B). Symbolically, \(\log _B (y)\) denotes a number x such that \(B^x = y\). For the discrete group, the equation

$$\begin{aligned} y \equiv B^x \mod p, \end{aligned}$$
(7.16)

is, in effect, a one-way street. It is very easy to verify the value of y given x, but solving for x in (7.16) is computationally hard. For more on cryptography, see Ferguson et al. (2010), Katz and Lindell (2014), Schneier (2015).
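This asymmetry can be made tangible with a small numerical example, using a deliberately tiny prime (and an arbitrarily chosen base and exponent) so that the "hard" direction remains feasible by brute force; real systems use primes with hundreds of digits.

```python
# Modular exponentiation (the "easy" direction) versus a brute-force
# discrete logarithm (the "hard" direction), with illustrative toy values.
p = 104729          # a small prime, chosen only for illustration
B = 5               # the base
x = 4711            # the secret exponent
y = pow(B, x, p)    # easy: compute y = B^x mod p essentially instantly

# Hard: recover an exponent e with B^e = y (mod p) by exhaustive search.
x_recovered = next(e for e in range(1, p) if pow(B, e, p) == y)
print(f"y = {y}, recovered exponent = {x_recovered}")
```

The forward computation via modular exponentiation is essentially instantaneous, whereas the exhaustive search grows with the size of the group and becomes hopeless at cryptographic scales.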

The blockchain, with its open protocols and open-source code, has been hailed as the remedy to many problems. For instance, today about two billion people in the world still do not have a bank account. DLT has the potential to help the unbanked (and underbanked) by allowing them to create their own financial alternatives in an efficient, transparent, and scalable manner (Thellmann 2018). Moreover, Bitcoin promised the possibility of transferring micro-payments around the world. Then, the inherently transparent and unmodifiable audit trail offered by DLT can help combat corruption. As an example, in many countries land registries are badly kept or mismanaged. DLT offers a way for people who do not know or trust each other to create a record of who owns what (The Economist 2015a). Indeed, the very nature of exchanges could be affected, as atomic swaps—cryptographically powered smart contracts that enable two parties to exchange different crypto-currencies or tokens without counterparty risk—allow instant settlement without the need for clearing. In effect, this represents an innovation over traditional over-the-counter (OTC) trading markets, such as the colossal foreign exchange market, with an average daily turnover of approximately USD five trillion (Bank of International Settlement 2016). From a legal perspective, the attributes of tokens (uniqueness, immutability, transferability, enforceability, and controllable access) are similar to those of civil property. In essence, digital or crypto property can be created utilizing blockchains. This concept is now a challenge for legal writers. The list of potential applications and disruptions, initiated by this move from pyramids of hierarchical organization to a distributed and decentralized network of agents, appears endless. However, it is impossible to gauge the future impact of DLT. It is similar to the challenge of trying to assess the potential of the nascent Internet in the early 1990s. No one had the audacity to predict what today has emerged from that initial network, then comprised of a few million computers, now affecting every aspect of modern human life. Experts agree that, independent of the development of global regulatory responses, DLT is here to stay. The genie is out of the bottle and we are on a road to decentralized finance.

This is not to say that there aren’t many problems plaguing the current blockchain prototypes. The sudden and unexpected success of Bitcoin quickly revealed its flaws. Scalability issues became predominant, leading to latency in the networkFootnote 46 and sky-rocketing transaction fees.Footnote 47 Most worryingly, Bitcoin turned out to be an ecological disaster due to its proof-of-work consensus mechanism, leading miners to set up vast data centers that require huge amounts of energy to solve the cryptographic challenge associated with every block (Atkin 2017). In comparison with the Internet, which had to overcome many technological hurdles in its evolution, DLT is also expected to solve many of the current challenges. For instance, there now exists an alternative consensus mechanism, called proof-of-stake, which significantly cuts back on energy use. For addressing scalability constraints, new paradigms are being explored. As an example, researchers have started to look into specific network topologies, essentially transforming the chains of the blockchains into a directed acyclic graph (DAG) of transactions (Lee 2018).
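The energy appetite of proof-of-work follows directly from its design: on average, roughly 2^d hash evaluations are needed to find a valid block at a difficulty of d bits. The following toy Python sketch (not Bitcoin's actual header layout or difficulty encoding) captures the idea.

```python
import hashlib

def mine(block_header: bytes, difficulty_bits: int = 20):
    """Toy proof-of-work: find a nonce such that SHA-256(header + nonce),
    read as a 256-bit integer, falls below the target, i.e., the hash
    starts with `difficulty_bits` zero bits."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(block_header + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce, digest.hex()
        nonce += 1

nonce, digest = mine(b"toy block header", difficulty_bits=20)
print(nonce, digest)
```

Each additional difficulty bit doubles the expected number of hash evaluations, which, multiplied across a global network of competing miners, explains the enormous aggregate electricity consumption.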

Perhaps the most ambitious vision is to turn blockchains into a global decentralized computing system. If Bitcoin is Blockchain 1.0, then EthereumFootnote 48 is Blockchain 2.0. Essentially, Ethereum is a Turing-complete blockchain, executing code (smart contracts) that can solve any reasonable computational problem. On the horizon, one blockchain project is designing a “decentralized public compute utility” utilizing formal methods from theoretical computer science (such as the rho-calculus computational formalism). The founders acknowledge (Eykholt et al. 2017):

Together with the blockchain industry, we are still at the dawn of this decentralized movement. Now is the time to lay down a solid architectural foundation. The journey ahead for those who share this ambitious vision is as challenging as it is worthwhile [...]

But until the new reality emerges, I guess, the following holds true (Glattfelder 2014):

I personally believe that there is no fundamental reason, why the systems we humans engineer cannot also exhibit collective intelligence. You know, show behavior that is sustainable, adaptive, and resilient. But for things to change I believe we all have to ask ourselves what is our relationship to money. And if it is conducive to happiness—on a personal and collective level.