16.1 Introduction

“How did economists get it so wrong?” Facing the financial crisis, this question was brilliantly articulated by the Nobel prize winner of 2008, Paul Krugman, in the New York Times [2]. A number of prominent economists even see a failure of academic economics [3]. Remarkably, the following declaration has been signed by more than 2000 scientists [4]: “Few economists saw our current crisis coming, but this predictive failure was the least of the field’s problems. More important was the profession’s blindness to the very possibility of catastrophic failures in a market economy … the economics profession went astray because economists, as a group, mistook beauty, clad in impressive-looking mathematics, for truth … economists fell back in love with the old, idealized vision of an economy in which rational individuals interact in perfect markets, this time gussied up with fancy equations … Unfortunately, this romanticized and sanitized vision of the economy led most economists to ignore all the things that can go wrong. They turned a blind eye to the limitations of human rationality that often lead to bubbles and busts; to the problems of institutions that run amok; to the imperfections of markets—especially financial markets—that can cause the economy’s operating system to undergo sudden, unpredictable crashes; and to the dangers created when regulators don’t believe in regulation. … When it comes to the all-too-human problem of recessions and depressions, economists need to abandon the neat but wrong solution of assuming that everyone is rational and markets work perfectly.”

Apparently, it has not always been like this. DeLisle Worrell writes: “Back in the sixties … we were all too aware of the limitations of the discipline: it was static where the world was dynamic, it assumed competitive markets where few existed, it assumed rationality when we knew full well that economic agents were not rational … economics had no way of dealing with changing tastes and technology … Econometrics was equally plagued with intractable problems: economic observations are never randomly drawn and seldom independent, the number of excluded variables is always unmanageably large, the degrees of freedom unacceptably small, the stability of significance tests seldom unequivocably established, the errors in measurement too large to yield meaningful results …” [5].

In the following, we will try to identify the scientific challenges that must be addressed to come up with better theories in the near future. This comprises practical challenges, i.e. the real-life problems that must be faced (see Sect. 16.2), and fundamental challenges, i.e. the methodological advances that are required to solve these problems (see Sect. 16.3). After this, we will discuss which contributions can be made by related scientific disciplines such as econophysics and the social sciences.

The intention of this contribution is constructive. It tries to stimulate a fruitful scientific exchange, in order to find the best way out of the crisis. According to our perception, the economic challenges we are currently facing can only be mastered by large-scale, multi-disciplinary efforts and by innovative approaches [6]. We fully recognize the large variety of non-mainstream approaches that have been developed by “heterodox economists”. However, the research traditions in economics seem to be so powerful that these approaches have received little attention. Besides, there is no agreement on which of the alternative modeling approaches would be the most promising ones, i.e. the heterogeneity of alternatives is itself one of the problems slowing down their success. This situation clearly implies institutional challenges as well, but these go beyond the scope of this contribution and will therefore be addressed in the future.

16.2 Real-World Challenges

For decades, if not centuries, the world has been facing a number of recurrent socio-economic problems, which are obviously hard to solve. Before addressing related fundamental scientific challenges in economics, we will therefore point out practical challenges one needs to pay attention to. This basically requires classifying the multitude of problems into packages of interrelated problems. Such classification attempts are probably subjective to a certain extent. At least, the list presented below differs from the one elaborated by Lomborg et al. [7], who identified the following top ten problems: air pollution, security/conflict, disease control, education, climate change, hunger/malnutrition, water sanitation, barriers to migration and trade, transnational terrorism and, finally, women and development. The following (non-ranked) list, in contrast, is more focused on socio-economic factors rather than resource and engineering issues, and it is oriented more toward the roots of problems rather than their symptoms:

  1. Demographic change of the population structure (change of birth rate, migration, integration…)

  2. Financial and economic (in)stability (government debts, taxation, and inflation/deflation; sustainability of social benefit systems; consumption and investment behavior…)

  3. Social, economic and political participation and inclusion (of people of different gender, age, health, education, income, religion, culture, language, preferences; reduction of unemployment…)

  4. Balance of power in a multi-polar world (between different countries and economic centers; also between individual and collective rights, political and company power; avoidance of monopolies; formation of coalitions; protection of pluralism, individual freedoms, minorities…)

  5. Collective social behavior and opinion dynamics (abrupt changes in consumer behavior; social contagion, extremism, hooliganism, changing values; breakdown of cooperation, trust, compliance, solidarity…)

  6. Security and peace (organized crime, terrorism, social unrest, independence movements, conflict, war…)

  7. Institutional design (intellectual property rights; over-regulation; corruption; balance between global and local, central and decentral control…)

  8. Sustainable use of resources and environment (consumption habits, travel behavior, sustainable and efficient use of energy and other resources, participation in recycling efforts, environmental protection…)

  9. Information management (cyber risks, misuse of sensitive data, espionage, violation of privacy; data deluge, spam; education and inheritance of culture…)

  10. Public health (food safety; spreading of epidemics [flu, SARS, H1N1, HIV], obesity, smoking, or unhealthy diets…)

Some of these challenges are interdependent.

16.3 Fundamental Challenges

In the following, we will try to identify the fundamental theoretical challenges that need to be addressed in order to understand the above practical problems and to draw conclusions regarding possible solutions.

The most difficult part of scientific research is often not finding the right answer, but asking the right questions. In this context, it can be a problem that people are trained to think in certain ways. It is not easy to leave these ways and see the problem from a new angle, thereby revealing a previously unnoticed solution. Three factors contribute to this:

  1. We may overlook the relevant facts because we have not learned to see them, i.e. we do not pay attention to them. The issue is known from internalized norms, which prevent people from considering possible alternatives.

  2. We know the stylized facts, but may not have the right tools at hand to interpret them. It is often difficult to make sense of patterns detected in data. Turning data into knowledge is quite challenging.

  3. We know the stylized facts and can interpret them, but may not take them seriously enough, as we underestimate their implications. This may result from misjudgements or from herding effects, i.e. from a tendency to follow traditions and majority opinions. In fact, most of the issues discussed below have been pointed out before, but it seems that this has had little effect on mainstream economics so far, or on what decision-makers know about economics. This is probably because mainstream theory has become a norm [8], and alternative approaches are sanctioned as norm-deviant behavior [9, 10].

As we will try to explain, the following fundamental issues are not just a matter of approximations (which often lead to the right understanding, but wrong numbers). Rather, they concern fundamental errors in the sense that certain conclusions following from them are seriously misleading. As the recent financial crisis has demonstrated, such errors can be very costly. However, it is not trivial to see what dramatic consequences factors such as dynamics, spatial interactions, randomness, non-linearity, network effects, differentiation and heterogeneity, irreversibility, or irrationality can have.

16.3.1 Homo Economicus

Despite criticism by several Nobel prize winners such as Reinhard Selten (1994), Joseph Stiglitz and George Akerlof (2001), or Daniel Kahneman (2002), the paradigm of homo economicus, i.e. of the “perfect egoist”, is still the dominant approach in economics. It assumes that people have quasi-infinite memory and processing capacities, that they determine the best among all possible alternative behaviors by strategic thinking (systematic utility optimization), and that they implement it in practice without mistakes. The Nobel prize winner of 1976, Milton Friedman, supported the hypothesis of homo economicus by the following argument: “irrational agents will lose money and will be driven out of the market by rational agents” [11]. More recently, Robert E. Lucas Jr., the Nobel prize winner of 1995, used the rationality hypothesis to narrow down the class of empirically relevant equilibria [12].

The rational agent hypothesis is very charming, as its implications are clear and it is possible to derive beautiful and powerful economic theorems and theories from it. The best way to illustrate homo economicus is maybe a company run with optimization methods from operations research on supercomputers. Another example is professional chess players trying to anticipate the possible future moves of their opponents. Obviously, in both examples, the future course of action cannot be fully predicted, even if there are no random effects and mistakes.

It is, therefore, no wonder that people have repeatedly expressed doubts regarding the realism of the rational agent approach [13, 14]. Bertrand Russell, for example, claimed: “Most people would rather die than think”. While this seems to be a rather extreme opinion, the following scientific arguments must be taken seriously:

  1. Human cognitive capacities are bounded [16, 17]. Even phone calls or conversations can substantially reduce people’s attention to events in the environment. The abilities to memorize facts and to perform complicated logical analyses are clearly limited as well.

  2. In the case of NP-hard optimization problems, even supercomputers face limits, i.e. optimization jobs can no longer be performed in real time. Therefore, approximations or simplifications such as the application of heuristics may be necessary. In fact, psychologists have identified a number of heuristics, which people use when making decisions [18].

  3. People perform strategic thinking mainly in important new situations. In normal, everyday situations, however, they seem to pursue a satisficing rather than optimizing strategy [17]. Meeting a certain aspiration level rather than finding the optimal strategy can save time and energy spent on problem solving. In many situations, people even seem to make routine choices [14], for example, when evading other pedestrians in counterflows.

  4. There is a long list of cognitive biases which question rational behavior [19]. For example, individuals are favorable toward taking small risks (which are perceived as “chances”, as the participation in lotteries shows), but they avoid large risks [20]. Furthermore, non-exponential temporal discounting may lead to paradoxical behaviors [21] and requires one to rethink how future expectations should be modeled (see the sketch after this list).

  5. Most individuals have a tendency towards other-regarding behavior and fairness [22, 23]. For example, the dictator game [24] and other experiments [25] show that people tend to share, even if there is no reason for this. Leaving a tip for the waiter in a restaurant that people visit only once is a typical example (particularly in countries where tipping is not common) [26]. Such behavior has often been interpreted as a sign of social norms. While social norms can certainly change the payoff structure, it has been found that the overall payoffs resulting from them do not need to create a user or system optimum [27–29]. This suggests that behavioral choices may be irrational in the sense of non-optimal. A typical example is the existence of unfavorable norms, which are supported by people although nobody likes them [30].

  6. Certain optimization problems can have an infinite number of local optima or Nash equilibria, which makes it impossible to decide what the best strategy is [31].

  7. Convergence towards the optimal solution may require such a huge amount of time that the folk theorem becomes useless. This can make it practically impossible to play the best response strategy [32].

  8. The optimal strategy may be deterministically chaotic, i.e. sensitive to arbitrarily small details of the initial condition, which makes the dynamic solution unpredictable in the long run (“butterfly effect”) [33, 34]. This fundamental limit of predictability also implies a limit of control—two circumstances that are even more true for non-deterministic systems with a certain degree of randomness.
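
To illustrate point 4, consider the following minimal sketch (the reward amounts and the discount parameters k and r are hypothetical choices for illustration). It shows how hyperbolic discounting produces a preference reversal that exponential discounting cannot:

    # Choice between $100 at time t=10 and $110 at time t=11, evaluated
    # at two decision times: far in advance (t=0) and at the last minute (t=10).

    def exponential(amount, delay, r=0.05):
        # time-consistent discounting: value ratios depend only on delay differences
        return amount * (1 - r) ** delay

    def hyperbolic(amount, delay, k=1.0):
        # empirically motivated discounting: steep for short delays, flat for long ones
        return amount / (1 + k * delay)

    for now in (0, 10):
        d1, d2 = 10 - now, 11 - now
        h = "late" if hyperbolic(110, d2) > hyperbolic(100, d1) else "soon"
        e = "late" if exponential(110, d2) > exponential(100, d1) else "soon"
        print(f"deciding at t={now:2d}: hyperbolic prefers the {h} reward, exponential the {e} one")

    # The hyperbolic decision-maker prefers the larger-later reward from afar,
    # but switches to the smaller-sooner one once t=10 arrives; the exponential
    # decision-maker never reverses.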

In conclusion, although the rational agent paradigm (the paradigm of homo economicus) is theoretically powerful and appealing, there are a number of empirical and theoretical facts suggesting deficiencies. In fact, most methods used in financial trading (such as technical analysis) are not well compatible with the rational agent approach. Even if an optimal solution exists, it may be undecidable for practical or theoretical reasons [35, 36]. This is also relevant for the following challenges, as boundedly rational agents may react inefficiently and with delays, which questions the efficient market hypothesis, the equilibrium paradigm, and other fundamental concepts, calling for the consideration of spatial, network, and time-dependencies, heterogeneity, correlations, etc. It will be shown that these points can have dramatic implications regarding the predictions of economic models.

16.3.2 The Efficient Market Hypothesis

The efficient market hypothesis (EMH) was first developed by Eugene Fama [37] in his Ph.D. thesis and rapidly spread among leading economists, who used it as an argument to promote laissez-faire policies. The EMH states that current prices reflect all publicly available information and (in its stronger formulation) that prices instantly change to reflect new public information.

The idea of self-regulating markets goes back to Adam Smith [38], who believed that “the free market, while appearing chaotic and unrestrained, is actually guided to produce the right amount and variety of goods by a so-called ‘invisible hand’” [38]. Furthermore, “by pursuing his own interest, [the individual] frequently promotes that of the society more effectually than when he intends to promote it” [39]. For this reason, Adam Smith is often considered the father of free market economics. Curiously enough, however, he also wrote a book on “The Theory of Moral Sentiments” [40]. “His goal in writing the work was to explain the source of mankind’s ability to form moral judgements, in spite of man’s natural inclinations towards self-interest. Smith proposes a theory of sympathy, in which the act of observing others makes people aware of themselves and the morality of their own behavior … [and] seek the approval of the ‘impartial spectator’ as a result of a natural desire to have outside observers sympathize with them” [38]. Such a reputation-based concept would today be considered indirect reciprocity [41].

Of course, there are criticisms of the efficient market hypothesis [42], and the Nobel prize winner of 2001, Joseph Stiglitz, even believes that “there is no invisible hand” [43]. The following list gives a number of empirical and theoretical arguments questioning the efficient market hypothesis:

  1. Examples of market failures are well known and can result, for example, from monopolies or oligopolies, from insufficient liquidity, or from information asymmetries.

  2. While the concept of the “invisible hand” assumes something like an optimal self-organization [44], it is well known that this requires certain conditions, such as symmetrical interactions. In general, however, self-organization does not necessarily imply system-optimal solutions. Stop-and-go traffic [45] and crowd disasters [46] are two obvious examples of systems in which individuals competitively try to reach individually optimal outcomes, but where the optimal solution is dynamically unstable.

  3. The limited processing capacity of boundedly rational individuals implies potential delays in their responses to sensorial inputs, which can cause such instabilities [47]. For example, a delayed adaptation in production systems may contribute to the occurrence of business cycles [48]. The same applies to the labor market of specially skilled people, which cannot adjust on short time scales. Even without delayed reactions, however, the competitive optimization of individuals can lead to suboptimal individual results, as the “tragedy of the commons” in public goods dilemmas demonstrates [49, 50].

  4. Bubbles and crashes, or more generally, extreme events in financial markets should not occur if the efficient market hypothesis were correct (see the next subsection).

  5. Collective social behavior such as “herding effects”, as well as deviations of human behavior from what is expected of rational agents, can lead to such bubbles and crashes [51], or can further increase their size through feedback effects [52]. Cyclical feedbacks leading to oscillations are also known from the beer game [53] and from business cycles [48].

16.3.3 Equilibrium Paradigm

The efficient market paradigm implies the equilibrium paradigm. This becomes clear if we split it up into its underlying hypotheses:

  1. The market can be in equilibrium, i.e. there exists an equilibrium.

  2. There is one and only one equilibrium.

  3. The equilibrium is stable, i.e. any deviations from the equilibrium due to “fluctuations” or “perturbations” tend to disappear eventually.

  4. The relaxation to the equilibrium occurs at an infinite rate.

Note that, in order to act like an “invisible hand”, the stable equilibrium (Nash equilibrium) furthermore needs to be a system optimum, i.e. to maximize the average utility. This is true for coordination games, when interactions are well-mixed and exploration behavior as well as transaction costs can be neglected [54]. However, it is not fulfilled by so-called social dilemmas [49].

Let us discuss the evidence for the validity of the above hypotheses one by one:

  1. A market is a system of extremely many dynamically coupled variables. Theoretically, it is not obvious that such a system would have a stationary solution. For example, the system could behave periodically, quasi-periodically, chaotically, or turbulently [81–83, 85–87, 94]. In all these cases, there would be no convergence to a stationary solution (see the sketch after this list).

  2. If a stationary solution exists, it is not clear that there are no further stationary solutions. If many variables are non-linearly coupled, the phenomenon of multi-stability can easily occur [55]. That is, the solution to which the system converges may depend not only on the model parameters, but also on the initial condition, history, or perturbation size. Such facts are known as path-dependencies or hysteresis effects and are usually visualized by so-called phase diagrams [56].

  3. In systems of non-linearly interacting variables, the existence of a stationary solution does not necessarily imply that it is stable, i.e. that the system will converge to this solution. For example, the stationary solution could be a focal point with orbiting solutions (as for the classical Lotka-Volterra equations [57]), or it could be unstable and give rise to a limit cycle [58] or a chaotic solution [33], for example (see also item 1). In fact, experimental results suggest that volatility clusters in financial markets may be a result of over-reactions to deviations from the fundamental value [59].

  4. An infinite relaxation rate is rather unusual, as most decisions and related implementations take time [15, 60].
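
As a minimal illustration of points 1 and 3, consider the logistic map (a textbook example rather than a market model): a fully deterministic recursion that never settles to a stationary solution and amplifies arbitrarily small differences in the initial condition:

    # Logistic map x_{n+1} = r * x_n * (1 - x_n) in the chaotic regime r = 4.
    r = 4.0
    x, y = 0.3, 0.3 + 1e-10   # two trajectories, initially 1e-10 apart
    for n in range(1, 51):
        x, y = r * x * (1 - x), r * y * (1 - y)
        if n % 10 == 0:
            print(f"n={n:2d}  x={x:.6f}  y={y:.6f}  |x-y|={abs(x-y):.2e}")
    # The gap roughly doubles per iteration (Lyapunov exponent ln 2), so after
    # about 35 steps the two trajectories are completely decorrelated: neither
    # convergence to equilibrium nor long-run prediction is possible.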

The points listed at the beginning of this subsection are also questioned by empirical evidence. In this connection, one may mention the existence of business cycles [48] or the unstable orders and deliveries observed in the experimental beer game [53]. Moreover, bubbles and crashes have been found in financial market games [61]. Today, there seems to be more evidence against than for the equilibrium paradigm.

In the past, however, most economists assumed that bubbles and crashes do not exist (and many of them still do). The following quotes are quite typical of this kind of thinking (from [62]): In 2004, the Federal Reserve chairman of the U.S., Alan Greenspan, stated that the rise in house values was “not enough in our judgment to raise major concerns”. In July 2005, when asked about the possibility of a housing bubble and the potential for this to lead to a recession in the future, the present U.S. Federal Reserve chairman Ben Bernanke (then Chairman of the Council of Economic Advisors) said: “It’s a pretty unlikely possibility. We’ve never had a decline in housing prices on a nationwide basis. So, what I think is more likely is that house prices will slow, maybe stabilize, might slow consumption spending a bit. I don’t think it’s going to drive the economy too far from its full path though.” As late as May 2007, Bernanke stated that the Federal Reserve “do not expect significant spillovers from the subprime market to the rest of the economy”.

According to the classical interpretation, sudden changes in stock prices result from new information, e.g. from innovations (“technological shocks”). The dynamics in such systems has, for example, been described by the method of comparative statics (i.e. a series of snapshots). Here, the system is assumed to be in equilibrium in each moment, but the equilibrium changes adiabatically (i.e. without delay), as the system parameters change (e.g. through new facts). Such a treatment of system dynamics, however, has certain deficiencies:

  1. The approach cannot explain changes in or of the system, such as phase transitions (“systemic shifts”), when the system is at a critical point (“tipping point”).

  2. It does not allow one to understand innovations and other changes as results of an endogenous system dynamics.

  3. It cannot describe effects of delays or instabilities, such as overshooting, self-organization, emergence, systemic breakdowns or extreme events (see Sect. 16.3.4).

  4. It does not allow one to study effects of different time scales. For example, when there are fast autocatalytic (self-reinforcing) effects and slow inhibitory effects, this may lead to pattern formation phenomena in space and time [63, 64]. The formation of settlements, where people agglomerate in space, may serve as an example [65, 66].

  5. It ignores long-term correlations such as memory effects.

  6. It neglects frictional effects, which are often proportional to the rate of change (“speed”) and occur in most complex systems. Without friction, however, it is difficult to understand entropy and other path-dependent effects, in particular irreversibility (i.e. the fact that the system may not be able to get back to a previous state) [67]. For example, the unemployment rate in most countries does not return to its previous level after a business cycle [68].

16.3.4 Prevalence of Linear Models

Comparative statics is, of course, not the only method used in economics to describe the dynamics of the system under consideration. As in physics and other fields, one may use a linear approximation around a stationary solution to study the response of the system to fluctuations or perturbations [69]. Such a linear stability analysis allows one to study whether the system will return to the stationary solution (which is the case for a stable [Nash] equilibrium) or not (which implies that the system will eventually be driven into a new state or regime).
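
The following sketch illustrates the procedure on a hypothetical two-variable toy system (the dynamics f is an invented example, not a model from the literature): one linearizes numerically around the stationary solution and inspects the eigenvalues of the Jacobian:

    import numpy as np

    # Hypothetical toy dynamics dv/dt = f(v) with stationary solution v* = (0, 0)
    def f(v, a=1.0, b=0.5):
        p, q = v
        return np.array([-a * p + q - p**3,
                         -b * q - p])

    v_star = np.zeros(2)

    # Numerical Jacobian of f at the fixed point (central differences)
    eps = 1e-6
    J = np.column_stack([(f(v_star + eps * e) - f(v_star - eps * e)) / (2 * eps)
                         for e in np.eye(2)])

    eig = np.linalg.eigvals(J)
    print("eigenvalues:", eig)
    print("stable fixed point" if np.all(eig.real < 0) else "unstable or neutral")
    # All eigenvalues with negative real part: small perturbations decay, and the
    # linear analysis is conclusive; a positive real part would signal that the
    # system leaves the equilibrium and the non-linear terms take over.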

In fact, the great majority of statistical analyses use linear models to fit empirical data (also when they do not involve time-dependencies). It is known, however, that linear models have special features, which are not representative of the rich variety of possible functional dependencies, dynamics, and outcomes. Therefore, neglecting non-linearity has serious consequences:

  1. As mentioned before, phenomena like multiple equilibria, chaos or turbulence cannot be understood by linear models. The same is true for self-organization phenomena or emergence. Additionally, in non-linearly coupled systems, usually “more is different”, i.e. the system may change its behavior fundamentally as it grows beyond a certain size. Furthermore, the system is often hard to predict and difficult to control (see Sect. 16.3.8).

  2. Linear modeling tends to overlook that a strong coupling of variables, which would show a normally distributed behavior in separation, often leads to fat-tail distributions (such as “power laws”) [70, 71]. This implies that extreme events are much more frequent than expected according to a Gaussian distribution. For example, when additive noise is replaced by multiplicative noise, a number of surprising phenomena may result, including noise-induced transitions [72] or directed random walks (“ratchet effects”) [73] (see the sketch after this list).

  3. Phenomena such as catastrophes [74] or phase transitions (“system shifts”) [75] cannot be well understood within a linear modeling framework. The same applies to the phenomenon of “self-organized criticality” [79] (where the system drives itself to a critical state, typically with power-law characteristics) or to cascading effects, which can result from network interactions (overcritically challenged network nodes or links) [77, 78]. It should be added that the relevance of network effects resulting from the on-going globalization is often underestimated. For example, “the stock market crash of 1987, began with a small drop in prices which triggered an avalanche of sell orders in computerized trading programs, causing a further price decline that triggered more automatic sales.” [80]
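
A minimal simulation illustrates point 2 (the noise amplitude and time horizon are arbitrary choices): summing independent Gaussian shocks yields a Gaussian, while compounding them multiplicatively generates pronounced fat tails:

    import numpy as np

    rng = np.random.default_rng(1)
    T, N = 500, 5000                       # time steps, independent samples
    g = rng.standard_normal((N, T))        # Gaussian shocks

    additive = g.sum(axis=1)                        # additive noise: stays Gaussian
    multiplicative = np.prod(1 + 0.05 * g, axis=1)  # compounded shocks

    def excess_kurtosis(x):
        x = x - x.mean()
        return (x**4).mean() / (x**2).mean() ** 2 - 3.0

    print("additive      :", round(excess_kurtosis(additive), 2))        # close to 0
    print("multiplicative:", round(excess_kurtosis(multiplicative), 2))  # far above 0
    # The multiplicative process is approximately log-normal with a large log-
    # variance; extreme outcomes are orders of magnitude more likely than a
    # Gaussian fitted to the same variance would suggest.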

Therefore, while linear models have the advantage of being analytically solvable, they are often unrealistic. Studying non-linear behavior, in contrast, often requires numerical computational approaches. It is likely that most of today’s unsolved economic puzzles cannot be well understood through linear models, no matter how complicated they may be (in terms of the number of variables and parameters) [81–94]. The following list mentions some areas where the importance of non-linear interdependencies is most likely underestimated:

  • Collective opinions, such as trends, fashions, or herding effects.

  • The success of new (and old) technologies, products, etc.

  • Cultural or opinion shifts, e.g. regarding nuclear power, genetically manipulated food, etc.

  • The “fitness” or competitiveness of a product, value, quality perceptions, etc.

  • The respect for copyrights.

  • Social capital (trust, cooperation, compliance, solidarity, …).

  • Booms and recessions, bubbles and crashes.

  • Bank panics.

  • Community, cluster, or group formation.

  • Relationships between different countries, including war (or trade war) and peace.

16.3.5 Representative Agent Approach

Another common simplification in economic modeling is the representative agent approach, which is known in physics as the mean-field approximation. Within this framework, time-dependencies and non-linear dependencies are often considered, but it is assumed that the interaction with other agents (e.g. of one company with all the other companies) can be treated as if this agent interacted with an average agent, the “representative agent”.

Let us illustrate this with the example of the public goods dilemma. Here, everyone can decide whether or not to make an individual contribution to the public good. The sum of all contributions is multiplied by a synergy factor, reflecting the benefit of cooperation, and the resulting value is shared equally among all people. The prediction of the representative agent approach is that, due to the selfishness of agents, a “tragedy of the commons” results [49]. According to this, everybody should free-ride, i.e. nobody should make a contribution to the public good, and nobody would gain anything. However, if everybody contributed, everybody could multiply his or her contribution by the synergy factor. This example is particularly relevant, as society faces many public goods problems and would not work without cooperation. Everything from the creation of public infrastructures (streets, theaters, universities, libraries, schools, the World Wide Web, Wikipedia etc.) through the use of environmental resources (water, forests, air, etc.) or of social benefit systems (such as a public health insurance), maybe even the creation and maintenance of a commonly shared language and culture, is a public goods problem (although the last examples are often viewed as coordination problems). Even the process of creating public goods is a public good [95].
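
The dilemma can be made explicit with a few lines of code (the group size N and the synergy factor r are example values):

    # One-shot public goods game: contributions are summed, multiplied by the
    # synergy factor r, and shared equally among all N players.
    def payoff(contributions, i, r):
        return r * sum(contributions) / len(contributions) - contributions[i]

    N, r = 10, 3.0                       # 1 < r < N: the social-dilemma regime
    all_cooperate = [1] * N
    all_defect = [0] * N
    lone_defector = [0] + [1] * (N - 1)

    print(payoff(all_cooperate, 0, r))   # 2.0: everyone gains if all contribute
    print(payoff(all_defect, 0, r))      # 0.0: nobody gains if nobody does
    print(payoff(lone_defector, 0, r))   # 2.7: unilateral defection pays even more
    # Since r/N < 1, each unit contributed returns only r/N to its owner, so a
    # selfish optimizer withholds it: the "tragedy of the commons" prediction.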

While it is a well-known problem that people tend to make unfair contributions to public goods or try to get a bigger share of them, individuals cooperate much more than one would expect according to the representative agent approach. If they did not, society could simply not exist. In economics, one tries to solve the problem by introducing taxes (i.e. another incentive structure) or a “shadow of the future” (i.e. a strategic optimization over infinite time horizons in accordance with the rational agent approach) [96, 97]. Both come down to changing the payoff structure in a way that transforms the public goods problem into another one that does not constitute a social dilemma [98]. However, there are other solutions to the problem. When the realm of the mean-field approximation underlying the representative agent approach is left, and spatial or network interactions or the heterogeneity among agents are considered, a miracle occurs: cooperation can survive or even thrive through correlations and co-evolutionary effects [99–101].
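
A minimal agent-based sketch of this effect (using the closely related spatial prisoner’s dilemma of Nowak and May rather than the public goods game itself; the lattice size, temptation payoff b, and initial cooperator fraction are illustrative choices) shows cooperators surviving in clusters, although the mean-field prediction is universal defection:

    import numpy as np

    rng = np.random.default_rng(0)
    L, b, steps = 50, 1.6, 60           # lattice size, temptation payoff, rounds
    C = rng.random((L, L)) < 0.6        # True = cooperator, False = defector

    def shifted(A, i, j):
        # lattice shifted by (i, j) with periodic boundaries
        return np.roll(np.roll(A, i, axis=0), j, axis=1)

    offsets = [(i, j) for i in (-1, 0, 1) for j in (-1, 0, 1) if (i, j) != (0, 0)]

    for _ in range(steps):
        nC = sum(shifted(C, i, j) for i, j in offsets).astype(float)
        payoff = np.where(C, nC, b * nC)   # C earns 1, D earns b per cooperating neighbor
        # each site imitates the best-scoring site in its 3x3 neighborhood (incl. itself)
        best, bestC = payoff.copy(), C.copy()
        for i, j in offsets:
            p, s = shifted(payoff, i, j), shifted(C, i, j)
            better = p > best
            best, bestC = np.where(better, p, best), np.where(better, s, bestC)
        C = bestC.astype(bool)

    print("surviving cooperator fraction:", C.mean())   # stays well above zero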

A similar result is found for the public goods game with costly punishment. Here, the representative agent model predicts that individuals avoid investing in punishment, so that punishment efforts eventually disappear (and, as a consequence, cooperation as well). However, this “second-order free-rider problem” is naturally resolved and cooperation can spread, if one discards the mean-field approximation and considers the fact that interactions take place in space or social networks [56]. Societies can overcome the tragedy of the commons even without transforming the incentive structure through taxes. For example, social norms as well as group-dynamical and reputation effects can do so [102]. The representative agent approach implies just the opposite conclusion and cannot well explain the mechanisms on which society is built.

It is worth pointing out that the relevance of public goods dilemmas is probably underestimated in economics. Partially related to Adam Smith’s belief in an “invisible hand”, one often assumes underlying coordination games, which would automatically create harmony between the individually and system-optimal states in the course of time [54]. However, running a stable financial system and economy is most likely a public goods problem. Considering unemployment, recessions always go along with a breakdown of solidarity and cooperation. Efficient production clearly requires mutual cooperation (as the counter-example of countries with many strikes illustrates). The failure of the interbank market, when banks stop lending to each other, is a good example of the breakdown of both trust and cooperation. We must be aware that there are many other systems that would stop working if people lost their trust: electronic banking, e-mail and internet use, Facebook, eBusiness and eGovernance, for example. Money itself would not work without trust, as bank panics and hyperinflation scenarios show. Similarly, cheating customers by selling low-quality products or selling products at overrated prices, or manipulating their choices by advertisements rather than informing them objectively and when they want it, may create profits in the short run, but it affects the trust of customers (and their willingness to invest). The failure of the immunization campaign during the swine flu pandemic may serve as an example. Furthermore, people would probably spend more money if the products of competing companies were better compatible with each other. Therefore, in the long run, more cooperation among companies and with the customers would pay off and create additional value.

Besides providing a misleading picture of how cooperation comes about, the representative agent approach has a number of other deficiencies, which are listed below:

  1. Correlations between variables are neglected, which is acceptable only for “well-mixing” systems. According to what is known from critical phenomena in physics, this approximation is valid only when the interactions take place in high-dimensional spaces or if the system elements are well connected. (However, as the example of the public goods dilemma showed, this case does not necessarily have beneficial consequences. Well-mixed interactions could rather cause a breakdown of social or economic institutions, and it is conceivable that this played a role in the recent financial crisis.)

  2. Percolation phenomena, describing how far an idea, innovation, technology, or (computer) virus spreads through a social or business network, are not well reproduced, as they depend on details of the network structure, not just on the average node degree [103].

  3. The heterogeneity of agents is ignored. For this reason, factors underlying economic exchange, perturbations, or systemic robustness [104] cannot be well described. Moreover, as socio-economic differentiation and specialization imply heterogeneity, they cannot be understood as emergent phenomena within a representative agent approach. Finally, it is not possible to grasp innovation without the consideration of variability. In fact, according to evolutionary theory, the innovation rate would be zero if the variability was zero [105]. Furthermore, in order to explain innovation in modern societies, Schumpeter introduced the concept of the “political entrepreneur” [106], an extraordinarily gifted person capable of creating disruptive change and innovation. Such an extraordinary individual cannot, by definition, be modeled by a “representative agent”.

One of the most important drawbacks of the representative agent approach is that it cannot explain the fundamental fact of economic exchange, since it requires one to assume a heterogeneity in resources or production costs, or to consider a variation in the value of goods among individuals. Ken Arrow, Nobel prize winner in 1972, formulated this point as follows [107]: “One of the things that microeconomics teaches you is that individuals are not alike. There is heterogeneity, and probably the most important heterogeneity here is heterogeneity of expectations. If we didn’t have heterogeneity, there would be no trade.”

We close this section by mentioning that economic approaches, which go beyond the representative agent approach, can be found in Refs. [108, 109].

16.3.6 Lack of Micro-Macro Link and Ecological Systems Thinking

Another deficiency of economic theory that needs to be mentioned is the lack of a link between micro- and macroeconomics. Neoclassical economics implicitly assumes that individuals make their decisions in isolation, using only the information received from static market signals. Within this oversimplified framework, macro-aggregates are just projections of some representative agent behavior, instead of the outcome of complex interactions with asymmetric information among a myriad of heterogeneous agents.

In principle, it should be understandable how the macroeconomic dynamics results from the microscopic decisions and interactions on the level of producers and consumers [81, 110] (as it was possible in the past to derive micro-macro links for other systems with a complex dynamical behavior, such as interactive vehicle traffic [111]). It should also be comprehensible how the macroscopic level (the aggregate economic situation) feeds back on the microscopic level (the behavior of consumers and producers), and how to understand the economy as a complex, adaptive, self-organizing system [112, 113]. Concepts from evolutionary theory [114] and ecology [115] appear to be particularly promising [116]. This, however, requires a recognition of the importance of heterogeneity for the system (see the previous subsection).

The lack of ecological thinking implies not only that the sensitive network interdependencies between the various agents in an economic system (as well as minority solutions) are not properly valued. It also causes deficiencies in the development and implementation of a sustainable economic approach based on recycling and renewable resources. Today, forestry science is probably the best-developed scientific discipline concerning sustainability concepts [117]. The assumption that economic growth is needed to maintain social welfare is a serious misconception. From other scientific disciplines, it is well known that stable pattern formation is also possible for a constant (and potentially sustainable) inflow of energy [69, 118].

16.3.7 Optimization of System Performance

One of the great achievements of economics is that it has developed a multitude of methods to use scarce resources efficiently. A conventional approach to this is optimization. In principle, there is nothing wrong about this approach. Nevertheless, there are a number of problems with the way it is usually applied:

  1. One can only optimize for one goal at a time, while one usually needs to meet several objectives. This is mostly addressed by weighting the different goals (objectives), by executing a hierarchy of optimization steps (through ranking and prioritization), or by applying a satisficing strategy (requiring a minimum performance for each goal) [119, 120]. However, when different optimization goals are in conflict with each other (such as maximizing the throughput and minimizing the queue length in a production system), a sophisticated time-dependent strategy may be needed [121].

  2. There is no unique rule for what optimization goal should be chosen. Low costs? High profit? Best customer satisfaction? Large throughput? Competitive advantage? Resilience? [122] In fact, the choice of the optimization function is arbitrary to a certain extent and, therefore, the result of optimization may vary widely. Goal selection requires strategic decisions, which may involve normative or moral factors (as in politics). In fact, one can often observe that, in the course of time, different goal functions are chosen. Moreover, note that the maximization of certain objectives such as resilience or “fitness” depends not only on factors that are under the control of a company. Resilience and “fitness” are functions of the whole system; in particular, they also depend on the competitors and the strategies chosen by them.

  3. The best solution may be the combination of two bad solutions and may, therefore, be overlooked. In other words, there are “evolutionary dead ends”, so that gradual optimization may not work. (This problem can be partially overcome by the application of evolutionary mechanisms [120].)

  4. In certain systems (such as many transport, logistic, or production systems), optimization tends to drive the system towards instability, since the point of maximum efficiency often lies in the neighborhood of, or even coincides with, the point of breakdown of performance. Such breakdowns in capacity or performance can result from inefficiencies due to dynamic interaction effects. For example, when traffic flow reaches its maximum capacity, sooner or later it breaks down. As a consequence, the road capacity tends to drop during the time period when it is most urgently needed, namely during the rush hour [45, 123].

  5. Optimization often eliminates redundancies in the system and, thereby, increases the vulnerability to perturbations, i.e. it decreases robustness and resilience.

  6. Optimization tends to eliminate heterogeneity in the system [80], while heterogeneity frequently supports adaptability and resilience.

  7. Optimization is often performed with centralized concepts (e.g. by using supercomputers that process information collected all over the system). Such centralized systems are vulnerable to disturbances or failures of the central control unit. They are also sensitive to information overload, wrong selection of control parameters, and delays in adaptive feedback control. In contrast, decentralized control (with a certain degree of autonomy of local control units) may perform better when the system is complex and composed of many heterogeneous elements, when the optimization problem is NP-hard, the degree of fluctuations is large, and predictability is restricted to short time periods [77, 124]. Under such conditions, decentralized control strategies can perform well by adapting to the actual local conditions, while being robust to perturbations. Urban traffic light control is a good example of this [121, 125].

  8. Furthermore, today’s concept of quality control appears awkward. It leads to a never-ending contest, requiring people and organizations to fulfil permanently increasing standards. This results in over-emphasizing measured performance criteria, while non-measured success factors are neglected. Engagement in non-rewarded activities is discouraged, and innovation may be suppressed (e.g. when evaluating scientists by means of their h-index, which requires them to focus on a big research field that generates many citations in a short time).

     While so-called “beauty contests” are considered to produce the best results, they eventually absorb more and more resources for the contest itself, while less and less time remains for the work that is actually to be performed when the contest is won. Besides, a large number of competitors have to waste considerable resources for these contests, which, of course, have to be paid by someone. In this way, private and public sectors (from physicians and hospitals to administrations, schools and universities) are suffering under the evaluation-related administrative load, while little time remains to perform the work that the corresponding experts have been trained for. It seems naïve to believe that this does not waste resources. Rather than making use of individual strengths, which are highly heterogeneous, today’s way of evaluating performance enforces a large degree of conformity.

There are also some problems with parameter fitting, a method based on optimization as well. In this case, the goal function is typically an error function or a likelihood function. Calibration methods are often “blindly” applied in practice (by people who are not experts in statistics), which can lead to overfitting (the fitting of meaningless “noise”), to the neglect of collinearities (implying largely variable parameter values), or to inaccurate and problematic parameter determinations (when the data set is insufficient in size, for example, when large portfolios are to be optimized [126]). As estimates for past data are not necessarily indicative of the future, making predictions with interpolation approaches can be quite problematic (see also Sect. 16.3.3 for the challenge of time dependence). Moreover, classical calibration methods do not reveal inappropriate model specifications (e.g. linear ones, when non-linear models would be needed, or unsuitable choices of model variables). Finally, they do not identify unknown unknowns (i.e. relevant explanatory variables, which have been overlooked in the modeling process).
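
A standard toy example (with synthetic data and polynomial degrees chosen for illustration) shows how a calibration that minimizes the in-sample error can fail badly on new data:

    import numpy as np

    rng = np.random.default_rng(2)
    x = np.linspace(0, 1, 12)
    y = 2 * x + rng.normal(0, 0.2, x.size)   # "truth": a linear trend plus noise

    x_new = np.linspace(0, 1.2, 50)          # new data, including mild extrapolation
    truth = 2 * x_new

    for degree in (1, 9):
        coeffs = np.polyfit(x, y, degree)    # least-squares calibration
        err = np.abs(np.polyval(coeffs, x_new) - truth).max()
        print(f"degree {degree}: max error on new data = {err:.2f}")
    # The degree-9 model fits the sample almost perfectly but has learned the
    # noise: outside the sample it is far off, while the simple model generalizes.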

16.3.8 Control Approach

Managing economic systems is a particular challenge, not only for the reasons discussed in the previous section. As large economic systems belong to the class of complex systems, they are hard or even impossible to manage with classical control approaches [76, 77].

Complex systems are characterized by a large number of system elements (e.g. individuals, companies, countries, …), which have non-linear or network interactions causing mutual dependencies and responses. Such systems tend to be dynamic rather than static and probabilistic rather than deterministic. They usually show a rich, hardly predictable, and sometimes paradoxical system behavior. Therefore, they challenge our way of thinking [127], and their controllability is often overestimated (which is sometimes paraphrased as the “illusion of control”) [80, 128, 129]. In particular, causes and effects are typically not proportional to each other, which makes it difficult to predict the impact of a control attempt.

A complex system may be unresponsive to a control attempt, or the latter may lead to unexpected, large changes in the system behavior (so-called “phase transitions”, “regime shifts”, or “catastrophes”) [75]. The unresponsiveness is known as the principle of Le Chatelier or Goodhart’s law [130], according to which a complex system tends to counteract external control attempts. However, regime shifts can occur when the system gets close to so-called “critical points” (also known as “tipping points”). Examples are sudden changes in public opinion (e.g. from a pro- to an anti-war mood, from a smoking tolerance to a public smoking ban, or from buying energy-hungry sport utility vehicles (SUVs) to buying environmentally-friendly cars).

Particularly in the case of network interactions, big changes may have small, no, or unexpected effects. Feedback loops, unwanted side effects, and circuli vitiosi are quite typical. Delays may cause unstable system behavior (such as bull-whip effects) [53], and over-critical perturbations can create cascading failures [78]. Systemic breakdowns (such as large-scale blackouts, bankruptcy cascades, etc.) are often a result of such domino or avalanche effects [77], and their probability of occurrence as well as the resulting damage are usually underestimated. Further examples are epidemic spreading phenomena or disasters with an impact on the socio-economic system. A more detailed discussion is given in Refs. [76, 77].

Other factors contributing to the difficulty of managing economic systems are the large heterogeneity of system elements and the considerable level of randomness, as well as the possibility of a chaotic or turbulent dynamics (see Sect. 16.3.4). Furthermore, the agents in economic systems are responsive to information, which can create self-fulfilling or self-destroying prophecy effects. Inflation may be viewed as an example of such an effect. Interestingly, in some cases one does not even know in advance which of these effects will occur.

It is also not obvious that the control mechanisms are well designed from a cybernetic perspective, i.e. that we have sufficient information about the system and suitable control variables to make control feasible. For example, central banks do not have terribly many options to influence the economic system. Among them are performing open-market operations (to control money supply), adjustments in fractional-reserve banking (keeping only a limited deposit, while lending a large part of the assets to others), or adaptations in the discount rate (the interest rate charged to banks for borrowing short-term funds directly from a central bank). Nevertheless, the central banks are asked to meet multiple goals such as:

  • To guarantee well-functioning and robust financial markets.

  • To support economic growth.

  • To balance between inflation and unemployment.

  • To keep exchange rates within reasonable limits.

Furthermore, the one-dimensional variable of “money” is also used to influence individual behavior via taxes (by changing behavioral incentives). It is questionable whether money can optimally meet all these goals at the same time (see Sect. 16.3.7). We believe that a computer, good food, friendship, social status, love, fairness, and knowledge can only to a certain extent be replaced by and traded against each other. Probably for this reason, social exchange comprises more than just material exchange [131–133].

It is conceivable that financial markets, as well, are trying to meet too many goals at the same time. These include:

  • To match supply and demand.

  • To discover a fair price.

  • To raise the foreign direct investment (FDI).

  • To couple local economies with the international system.

  • To facilitate large-scale investments.

  • To boost development.

  • To share risk.

  • To support a robust economy, and

  • To create opportunities (to gamble, to become rich, etc.).

Therefore, it would be worth studying the system from a cybernetic control perspective. Maybe it would work better to separate some of these functions from each other rather than mixing them.

16.3.9 Human Factors

Another aspect that tends to be overlooked in mainstream economics is the relevance of psychological and social factors such as emotions, creativity, social norms, herding effects, etc. It would probably be wrong to interpret these effects just as a result of perception biases (see Sect. 16.3.1). Most likely, these human factors serve certain functions such as supporting the creation of public goods [102] or collective intelligence [134, 135].

As Bruno Frey has pointed out, economics should be seen from a social science perspective [136]. In particular, research on happiness has revealed that there are more incentives than just financial ones that motivate people to work hard [133]. Interestingly, there are quite a number of factors which promote volunteering [132].

It would also be misleading to judge emotions from the perspective of irrational behavior. They are a quite universal and relatively energy-consuming way of signalling. Therefore, they are probably more reliable than non-emotional signals. Moreover, they create empathy and, consequently, stimulate mutual support and a readiness for compromises. It is quite likely that this creates a higher degree of cooperativeness in social dilemma situations and, thereby, a higher payoff on average compared to emotionless decisions, which often have drawbacks later on.

16.3.10 Information

Finally, there is no good theory that would allow one to assess the relevance of information in economic systems. Most economic models do not consider information as an explanatory variable, although information is actually a stronger driving force of urban growth and social dynamics than energy [137]. While we have an information theory to determine the number of bits required to encode a message, we lack a theory that would allow us to assess what kind of information is relevant or important, or what kind of information will change the social or economic world, or history. This may actually depend largely on the perception of pieces of information, and on normative or moral issues filtering or weighting information. Moreover, we lack theories describing what will happen in cases of coincidence or contradiction of several pieces of information. When pieces of information interact, this can change their interpretation and, thereby, the decisions and behaviors resulting from them. That is one of the reasons why socio-economic systems are so hard to predict: “unknown unknowns”, structural instabilities, and innovations cause emergent results and create a dynamics of surprise [138].

16.4 Role of Other Scientific Fields

16.4.1 Econophysics, Ecology, Computer Science

The problems discussed in the previous two sections pose interesting practical and fundamental challenges for economists, but also for other disciplines interested in understanding economic systems. Econophysics, for example, pursues a physical approach to economic systems, applying methods from statistical physics [81], network theory [139, 140], and the theory of complex systems [85, 87]. A contribution of physics appears quite natural, in fact, not only because of its tradition in detecting and modeling regularities in large data sets [141]. Physics also has a lot of experience in how to deal theoretically with problems such as time-dependence, fluctuations, friction, entropy, non-linearity, strong interactions, correlations, heterogeneity, and many-particle simulations (which can be easily extended towards multi-agent simulations). In fact, physics has influenced economic modeling in the past already. Macroeconomic models, for example, were inspired by thermodynamics. More recent examples of relevant contributions by physicists concern models of self-organizing conventions [54], of geographic agglomeration [65], of innovation spreading [142], or of financial markets [143], to mention just a few.

One can probably say that physicists have been among the pioneers calling for new approaches in economics [81, 87, 143–147]. A particularly visionary book, besides Wolfgang Weidlich’s work, was the “Introduction to Quantitative Aspects of Social Phenomena” by Elliott W. Montroll and Wade W. Badger, which, already in 1974, addressed by means of mathematical and empirical analysis subjects as diverse as population dynamics, the arms race, speculation patterns in stock markets, congestion in vehicular traffic, as well as the problems of atmospheric pollution, city growth and developing countries [148].

Unfortunately, it is impossible in our paper to reflect the numerous contributions of the field of econophysics in any adequate way. The richness of scientific contributions is probably reflected best by the Econophysics Forum run by Yi-Cheng Zhang [149]. Many econophysics solutions are interesting, but so far they are not broad and mighty enough to replace the rational agent paradigm with its large body of implications and applications. Nevertheless, considering the relatively small number of econophysicists, there have been many promising results. Probably the largest fraction of publications in econophysics in recent years has taken a data-driven or computer modeling approach to financial markets [143]. But econophysics has more to offer than the analysis of financial data (such as fluctuations in stock and foreign currency exchange markets), the creation of interaction models for stock markets, or the development of risk management strategies. Other scientists have focused on statistical laws underlying income and wealth distributions, non-linear market dynamics, macroeconomic production functions and conditions for economic growth or agglomeration, sustainable economic systems, business cycles, microeconomic interaction models, network models, the growth of companies, supply and production systems, logistic and transport networks, or innovation dynamics and diffusion. An overview of subjects is given, for example, by Ref. [152] and the contributions to the annual spring workshop of the Physics of Socio-Economic Systems Division of the DPG [153].

To the dissatisfaction of many econophysicists, the transfer of knowledge often did not work very well or, if so, has not been well recognized [150]. Besides scepticism on the side of many economists with regard to novel approaches introduced by “outsiders”, the limited resonance and level of interdisciplinary exchange in the past was also caused in part by econophysicists. In many cases, questions have been answered that no economist asked, rather than addressing puzzles economists are interested in. Apart from this, the econophysics work was not always presented in a way that linked it to the traditions of economics, pointed out deficiencies of existing models, and highlighted the relevance of the new approach well. Typical responses are: Why has this model been proposed and not another one? Why has this simplification been used (e.g. an Ising model of interacting spins rather than a rational agent model)? Why are existing models not good enough to describe the same facts? What is the relevance of the work compared to previous publications? What practical implications does the finding have? What kind of paradigm shift does the approach imply? Can existing models be modified or extended in a way that solves the problem without requiring a paradigm shift? Correspondingly, there have been criticisms not only by mainstream economists, but also by colleagues who are open to new approaches [151].

Therefore, we would like to suggest studying the various economic subjects from the perspective of the above-mentioned fundamental challenges, and contrasting econophysics models with traditional economic models to show that the latter leave out important features. It is important to demonstrate which properties of economic systems cannot be understood, for fundamental reasons, within the mainstream framework (i.e. cannot be dealt with by additional terms within the modeling class that is conventionally used). In other words, one needs to show why a paradigm shift is unavoidable, and this requires careful argumentation. We are not claiming that this has not been done in the past, but it certainly takes an additional effort to explain the essence of the econophysics approach in the language of economics, particularly as mainstream economics may not always provide suitable terms and frameworks to do so. This is particularly important, as the number of econophysicists is small compared to the number of economists, i.e. a minority wants to convince an established majority. To be taken seriously, one must also demonstrate solid knowledge of related previous work by economists, to prevent the stereotypical reaction that the subject of the paper was already studied long ago (tacitly implying that it does not require another paper or model to address it).

A reasonable and promising strategy to address the above fundamental and practical challenges is to set up multi-disciplinary collaborations in order to combine the best of all relevant scientific methods and knowledge. It seems plausible that this will generate better models and higher impact than working separately, and that it will stimulate scientific innovation. Physicists can contribute with their experience in handling large data sets, creating and simulating mathematical models, developing useful approximations, and setting up laboratory experiments and measurement concepts. Current research activities in economics do not seem to put enough focus on:

  • Modeling approaches for complex systems [154].

  • Computational modeling of what is not analytically tractable anymore, e.g. by agent-based models [155–157].

  • Testable predictions and their empirical or experimental validation [164] (see the sketch after this list).

  • Managing complexity and systems engineering approaches to identify alternative ways of organizing financial markets and economic systems [91, 93, 165], and

  • Advance testing of the effectiveness, efficiency, safety, and systemic impact (side effects) of innovations before they are implemented in economic systems. This is in sharp contrast to mechanical, electrical, nuclear, and chemical engineering, or the testing of medical drugs, for example.
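
To make the validation point of the third bullet more tangible, the following minimal sketch confronts a model-generated sample with observations via a two-sample Kolmogorov–Smirnov test. Both samples are synthetic here, so the example only illustrates the workflow of statistical model validation, not a real test of any economic theory.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
model_output = rng.standard_t(df=3, size=5_000)  # sample drawn from the model
observations = rng.standard_t(df=3, size=5_000)  # stand-in for real data

res = ks_2samp(model_output, observations)
print(f"KS statistic = {res.statistic:.3f}, p-value = {res.pvalue:.3f}")
# A small p-value would speak against the predicted distribution; here
# both samples come from the same law, so the test should not reject.
```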

Expanding the scope of economic thinking and paying more attention to these natural, computer, and engineering science aspects will certainly help to address the theoretical and practical challenges posed by economic systems. Besides physics, we anticipate that evolutionary biology, ecology, psychology, neuroscience, and artificial intelligence will also be able to make significant contributions to understanding the roots of economic problems and how to solve them. In conclusion, there are interesting scientific times ahead.

16.4.2 Social Sciences

It is a good question whether answering the above list of fundamental challenges will sooner or later solve the practical problems as well. We think this is a precondition, but it takes more, namely the consideration of social factors. In particular, the following questions need to be answered:

  1. How to understand human decision-making? How to explain deviations from rational choice theory and the decision-theoretical paradoxes? Why are people risk averse?

  2. How do consciousness and self-consciousness come about?

  3. How to understand creativity and innovation?

  4. How to explain homophily, i.e. the fact that individuals tend to agglomerate, interact with and imitate similar others?

  5. How to explain social influence, collective decision making, opinion dynamics and voting behavior?

  6. Why do individuals often cooperate in social dilemma situations?

  7. How do indirect reciprocity, trust and reputation evolve?

  8. How do costly punishment, antisocial punishment, and discrimination come about?

  9. How can the formation of social norms and conventions, social roles and socialization, conformity and integration be understood?

  10. How do language and culture evolve?

  11. How to comprehend the formation of group identity and group dynamics? What are the laws of coalition formation, crowd behavior, and social movements?

  12. How to understand social networks, social structure, stratification, organizations and institutions?

  13. How do social differentiation, specialization, inequality and segregation come about?

  14. How to model deviance and crime, conflicts, violence, and wars?

  15. How to understand social exchange, trading, and market dynamics?

We think that, despite the large amount of research performed on these subjects, they are still not fully understood. The ultimate goal would be to formulate mathematical models that allow one to understand these issues as emergent phenomena based on first principles, e.g. as a result of (co-)evolutionary processes. Such first principles would be the basic facts of human capabilities and the kinds of interactions resulting from them, namely:

  1. Birth, death, and reproduction.

  2. The need for and competition over resources (such as food and water).

  3. The ability to observe their environment (with different senses).

  4. The capability to memorize, learn, and imitate.

  5. Empathy and emotions.

  6. Signaling and communication abilities.

  7. Constructive (e.g. tool-making) and destructive (e.g. fighting) abilities.

  8. Mobility and (limited) carrying capacity.

  9. The possibility of social and economic exchange.

Such features can, in principle, be implemented in agent-based models [158–163]. Computer simulations of many interacting agents would allow one to study the phenomena emerging in the resulting artificial (model) societies, and to compare them with stylized facts [163, 168, 169]. The main challenge, however, is not to program a seemingly realistic computer game. We are looking for scientific models, i.e. the underlying assumptions need to be validated, and this requires linking computer simulations with empirical and experimental research [170], and with massive (but privacy-respecting) mining of social interaction data [141]. In the ideal case, an analytical understanding would also be gained in the end, as has recently been achieved for interactive driver behavior [111].
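
To illustrate what implementing such features in an agent-based model can mean in its simplest form, the following sketch realizes only item 9 of the list above, economic exchange, as a kinetic wealth-exchange model; the number of agents, the number of steps, and the saving propensity are arbitrary illustrative choices. For zero saving propensity, the emergent stationary wealth distribution of such models is known to be approximately exponential, a simple example of an emergent property that is also understood analytically.

```python
import numpy as np

rng = np.random.default_rng(1)
n_agents, n_steps, saving = 1_000, 200_000, 0.0  # illustrative parameters
wealth = np.ones(n_agents)  # everyone starts with one unit of wealth

for _ in range(n_steps):
    # Pick two distinct agents at random and let them trade.
    i, j = rng.choice(n_agents, size=2, replace=False)
    pot = (1 - saving) * (wealth[i] + wealth[j])  # wealth put at stake
    share = rng.random()                          # random split of the pot
    wealth[i] = saving * wealth[i] + share * pot
    wealth[j] = saving * wealth[j] + (1 - share) * pot

# For saving = 0, the stationary wealth distribution of this model is
# known to be approximately exponential (Boltzmann-Gibbs), so the
# standard deviation should come out close to the mean.
print(f"mean = {wealth.mean():.2f}, std = {wealth.std():.2f}")
```

Adding further items from the list, such as learning and imitation, signaling, or reproduction, turns this skeleton into a richer model society whose emergent properties can then be compared with stylized facts and validated against empirical data, as argued above.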