1 Introduction

One of the advantages of being retired is that you have the time and opportunity to look back without anger and with a historical perspective; to recollect not only one's personal life, but the professional one as well. In my case, the academic life. This gives a wide scope of freedom, and of context, to answer the question: What seem to me the most relevant landmarks in the evolution of the Economy over the last 60 years? In this chapter, I give a personal and probably controversial answer.

The chapter begins with the great failure of the Economy: the gap in the distribution of wealth, even in developed countries. It continues by asking the questions: What have we learned from the crisis of the seventies? What are we learning from the current crisis? What was wrong with Economics as a social science? Can Experimental Economics (EE) allow us to understand and accommodate the social complexity of the Economy? What is the scope of Artificial Economics (AE)?

Finally, since AE provides solutions to complex problems, can we export socially inspired methods to other areas of Management Engineering? I conclude that there are tools to improve Economics and to help us design proper institutional frameworks. However, overcoming the current economic challenges will require changes in methods and institutions far beyond Economic Policy. The changes must be institutional and cannot be delayed: not so much improvements in Economic Policy as changes in Political Economy.

2 Growth and Inequality

2.1 Growth and Inequality: The Evidence

Economic Theory is concerned with two main issues: how to generate wealth and how to distribute it. If we look through the lens of Economic History, we can see that the Economy is doing well on wealth generation but badly on its distribution. In the last century, poor countries multiplied their per capita income by 3, starting from poverty; the rich ones multiplied theirs by 6, starting from richness. The income distribution gap has increased.

In the rich countries, since the early eighties the per capita income gap has been widening to the point of becoming a major threat to democracy. As Sachs (2010) warned about the U.S.A.: “Amazingly, the richest 1% of American households now has a higher net worth than the bottom 90%. The annual income of the richest 12,000 households is greater than that of the poorest 24 million households… The level of political corruption in America is staggering. Everything now is about money to run electoral campaigns, which have become incredibly expensive. The mid-term elections cost an estimated $4.5 billion, with most of the contributions coming from big corporations and rich contributors. These powerful forces, many of which operate anonymously under US law, are working relentlessly to defend those at the top of the income distribution…. If this continues, a third party will emerge, committed to cleaning up American politics and restoring a measure of decency and fairness. This, too, will take time. The political system is deeply skewed against challenges to the two incumbent parties. Yet the time for change will come”.

Thomas Piketty’s best-seller Capital in the Twenty-First Century and the views of Nobel Prize economists such as Paul Krugman and Joseph Stiglitz, published in major newspapers and magazines, are increasingly influential, boosting inequality to the top of the political agenda, mainly in developed countries. Even the IMF is very concerned with inequality, because it hinders growth. In one of the IMF’s flagship publications last June, three of its top economists raised the question: Is neoliberalism oversold?

The data in Figs. 1 and 2 are at odds with free markets and marginalism as the pillars of a fair distribution of wealth. It seems that growth comes at the cost of soaring inequality. What is wrong?

Fig. 1 Differentials in the world’s income growth in the last century. Source: IMF (World Economic Outlook, May 2000: Asset Prices and the Business Cycle)

Fig. 2 Change in share of aggregate U.S. income since 1979. Source: U.S. Census Bureau’s Current Population Survey and MN2020

2.2 Trade as the Engine of Growth

Humans are the only living beings that can transform goods in an intentional way and engage in trading. By transforming, specializing and trading, they generate wealth, but they do not necessarily achieve its fair distribution. Figure 3 shows that merchandise exports in 2015 were 100 times greater than in 1950, an average annual growth rate of 6%. Yes, exchange correlates with growth.

Fig. 3 World merchandise trade volume by product group. Source: WTO

Some of my students in engineering were quite surprised by the following example, which shows that exchange can generate wealth even without physical inputs to the system: the miracle of exchange.

Imagine that there are two power plants that together emit 8 Tm of SO2 per day. The regulator wants to reduce total emissions to 4 Tm of SO2 per day. There are two options: (a) each plant has to abate 2 Tm; (b) together they have to abate 4 Tm, but the regulator leaves it to them to arrange how they share the abatement. Table 1 shows the options and the marginal cost of SO2 abatement.

Table 1 Options and marginal cost of SO2 abatement

In the first option, the abatement cost is 300 = 100 + 200 for plant A and 900 = 300 + 600 for plant B. Total cost of option 1: 300 + 900 = 1200€. In the second option, plant A sells one permit to plant B for a price not lower than 300€ and abates 3 Tm; plant B buys one permit from plant A for a price not greater than 600€ and abates 1 Tm. Total abatement cost of option 2: 100 + 200 + 300 + 300 = 900€ with trade. Net value added by trading: 1200 − 900 = 300€.
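The arithmetic is easy to check. A minimal sketch in Python (the marginal cost figures are those implied by the text and Table 1):

# Marginal abatement costs per Tm of SO2, as implied by Table 1.
cost_A = [100, 200, 300]  # plant A: 1st, 2nd, 3rd Tm abated
cost_B = [300, 600]       # plant B: 1st, 2nd Tm abated

# Option 1: each plant abates 2 Tm.
option_1 = sum(cost_A[:2]) + sum(cost_B[:2])    # 300 + 900 = 1200

# Option 2: plant A abates 3 Tm, plant B abates 1 Tm and buys one permit.
option_2 = sum(cost_A[:3]) + sum(cost_B[:1])    # 600 + 300 = 900

print(option_1, option_2, option_1 - option_2)  # 1200 900 300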

Any trading price between 300€ and 600€ will lead to an increase in social wealth of 300€. The market does not provide the fair transfer price; it is “mute” with respect to the actual permit price in this private exchange. Emission permit markets were created to allow for a collective strategy like the one described. They failed, mainly because governments granted free permits to the participating firms, and/or the regulators intentionally designed the market to induce clean production, as in the EPA Greenhouse Gas (GHG) emission permits market (Posada et al. 2007). The lesson to be learned from this simple example is that trading generates wealth, but to achieve a fair distribution of this wealth one needs a properly designed institution and a “visible hand”.

A. Smith was well aware of this fact, and he advanced a proper answer: “A regulation which enables those of the same trade to tax themselves, in order to provide for their poor, their sick, their widows and orphans, by giving them a common interest to manage, renders such assemblies necessary. An incorporation not only renders them necessary, but makes the act of the majority binding upon the whole. In a free trade, an effectual combination cannot be established but by the unanimous consent of every single trader, and it cannot last longer than every single trader continues of the same mind” (Smith 1776, The Wealth of Nations, Book I, Chapter X).

2.3 Solow’s Residual: Celebrating the Instability and Change of the Economy

In a seminal paper, Solow (1957) took a long time series, 1909–1949, of the USA economy to study the relationship of capital and labor with GDP. He found an important unexplained residual. This empirical evidence came as a surprise: the rate of growth of income was not the sum of the growth of capital and labor weighted by their relative shares in production. Some 40% of the growth rate was left unexplained; that is, a residual surplus remained after remunerating the factors according to their marginal contributions. Given the initial failure to identify the causes of this residual, it was attributed to technological change in a broad sense (techno capital, human capital, innovation, etc.). In an example of academic integrity, Abramovitz (1956) renamed it “a measure of our ignorance”.

Denoting output by Y, labor by L and capital by K, the aggregate production function is

$$Y = F(L, K) \times \mathrm{TFP}$$

where TFP is the total factor productivity leverage produced by externalities, technological advances, learning, better knowledge and management improvements. Note the relevance of intangible factors such as I (institutions), M (management skills) and E (entrepreneurship). To express the residual in rates of change g, let W be the wage and P the price of the output. Then \(g_Y = (WL/PY)\, g_L\), where \(WL/PY\) is the share (cost) of labor in output, denoted by a. Then we can write, in rates of change,

$$g_Y = a\, g_L \quad \text{and, in a similar way,} \quad g_Y = b\, g_K$$

For a general homogeneous production function, \(Y = A L^a K^b\), the “residual” in rates will be

$$\text{residual} = g_Y - \left( a\, g_L + b\, g_K \right).$$

Once a and b have been estimated, one can calculate the effect of the rates of increase of L and/or K on Y, and hence the residual.
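As an illustration, a toy growth-accounting computation in Python (the growth rates and factor shares below are made up for the example, not Solow’s 1909–1949 estimates):

# Growth accounting: residual = g_Y - (a*g_L + b*g_K).
g_Y, g_L, g_K = 0.030, 0.010, 0.020  # illustrative growth rates of Y, L, K
a, b = 0.70, 0.30                    # illustrative factor shares

explained = a * g_L + b * g_K        # 0.013
residual = g_Y - explained           # 0.017

print(f"explained: {explained:.3f}  residual: {residual:.3f}  "
      f"unexplained share: {residual / g_Y:.0%}")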

Instead of using verbal accounting to explain the residual, an explosion of work on the role of various factors in determining growth was published in major economic journals, trying to fit, not to explain, the residual. It was an outstanding example of bad scientific practice. If you wanted to progress in Mathematical Economics research at that time, it seems that following the flock of the academic establishment was the easy way to publish in top-rank economic journals. Of course, there were other economists undertaking research of true relevance, focused on explaining endogenous growth, such as Romer (1990) and Lucas (1993). It is worth mentioning two core ideas that are derived from simple mathematics and creativity.

First idea. Flexibility in capital as a source of growth. Caballero and Lyons (1990) studied how increasing the number of intermediate factors, for a given amount of capital, will increase production.

Let us consider a homogeneous production function with constant scale economies with respect to labor L and capital K. What happens if K is divided into M intermediate capital factors \(x_i\), \(i = 1 \ldots M\), with constant scale economies (CSE) as well?

\(Q_0 = L^{1-a} K^a\) will be the initial production.

With M intermediate factors, each one with the same amount of capital and with CSE, we will have

$$Q_M = L^{1-a} \sum x_i^a = L^{1-a}\, M \left( K/M \right)^a = M^{1-a} \left( L^{1-a} K^a \right) = M^{1-a}\, Q_0$$

An increase in M, the variety of capital factors, holding the amounts of L and K constant, will increase production.
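A quick numerical check of this result (arbitrary values for L, K and a):

# Splitting K into M equal intermediate factors scales output by M**(1 - a).
L, K, a = 100.0, 50.0, 0.3
Q0 = L ** (1 - a) * K ** a

for M in (1, 2, 4, 8):
    QM = L ** (1 - a) * M * (K / M) ** a  # M identical factors x_i = K / M
    assert abs(QM - M ** (1 - a) * Q0) < 1e-9
    print(M, round(QM / Q0, 3))           # the ratio equals M**(1 - a)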

Second idea. An externality that is endogenous to the firm: growth caused by knowledge (R&D) as a non-rival factor. Consider an example due to Romer (1990); I keep his numbers, although learning in this industry has been exponential. A factory of hard disk drives (hdd) for computers uses 10,000 h of engineering to design a hdd of 20 MB. Currently there are 100 employees and a factory investment of $10 million to produce 100,000 hdd/year = 2 trillion MB of storage per year.

Alternative one. The rival factors of the company are duplicated (factory and labor). Under the assumption of a constant-returns-to-scale production function, output will double, to 4 trillion MB of storage per year.

Alternative two. Suppose that the firm had invested 20,000 h of engineering time (double the engineering hours) in the design work instead of 10,000 h and, by doing so, had obtained a new 30-MB hdd that could be manufactured with the same factory and workers. When the firm doubles all its inputs, it uses a 20,000-h design, 2 factories and 200 workers, and produces 6 trillion megabytes of storage per year, three times the original output.
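The arithmetic of the two alternatives in a short sketch (the point being that the design, a non-rival input, need not be duplicated):

# Romer-style example: the design is a non-rival input.
drives_per_year = 100_000  # per factory, with 100 workers
base = drives_per_year * 20            # output with the 20-MB design

# Alternative one: duplicate only the rival inputs (factory, workers).
alt_1 = 2 * drives_per_year * 20       # twice the original output

# Alternative two: a better (30-MB) design, then duplicate rival inputs.
alt_2 = 2 * drives_per_year * 30       # three times the original output

print(alt_1 / base, alt_2 / base)      # 2.0 3.0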

Once the effect of structural factors and intangibles on growth was accepted, this “fitting” festival ended up in a narrative approach that establishes competitiveness in terms of a mix of criteria. Many of them are intangible factors, such as the quality of institutions, the health care system, the legal system, etc. One of the most accepted narrative measures of growth capacity is the World Economic Forum’s competitiveness index. It is not by chance that the ranking in competitiveness is correlated with the index of transparency.

In view of these findings, what will the shape of the firm’s productivity curve be? Since it constantly shifts up, the real productivity curve will never reach the decreasing zone. What does this instability imply? It implies that we are always in the decreasing zone of the mean cost function. Of course, this holds only in the ideal model, because the life cycle of a particular product will be too short. However, one can always think of a new product as a continuation of the old, replaced product. Even for manufactured goods, this means that we approach a “Zero Marginal Cost Economy”. Let us celebrate the instability and continuous change of the real Economy, which does not seem to obey the marginal rules! (Fig. 4).

Fig. 4 Solow’s residual, instability and change in Economics

The Earth is an “open system” receiving free, “exogenous” solar energy. The Economy is a “social system” where collective intelligence and technological advances play the role of the sun. However, this source of growth is endogenous, sustaining endogenous development. Thermodynamic ideas are not appropriate for social systems. Solow’s analysis implies a welcome instability of the Economy in the medium and long run.

Solow’s residual leverages growth; but does it bring equality? The answer is no.

Figure 5 shows that corporate profits started to take off, relative to GDP growth, around 1985, and they soared before exploding in the last decade. It also shows the shortfall during the first two years of the financial crisis. What are the implications of this fact for growth itself and for inequality? We will answer this question later in the chapter. For now, we will concentrate on what happens with non-financial activity.

Fig. 5 The distribution of growth. Modified from Thompson (2013)

2.4 Preventing Inequality Requires Reforming the Social Contract (Rule of Law)

Following the considerations above, there are three messages that economists and politicians ought to listen to and that citizens should be aware of.

First message: The simplistic market-fundamentalist theories that have shaped Economic Policy during the last four decades are badly misleading, because GDP growth comes at the price of increasing inequality. Some emblematic rich citizens, such as W. Buffett and G. Soros, are already well aware of this message.

Second message: We need to rewrite the rules of the Economy and the social contract (“rule of law”) to prevent the soaring rise of inequality and to ensure that citizens benefit. It sounds like an indictment, but failing to take up this challenge will be the politicians’ responsibility and will entail growth stagnation, a divided society and an undermined democracy.

Third message: Economists should never forget that progress with equality is not a question of Economic Policy measures but of Political Economy, such as the following.

(a) A reform of the social contract. Employees have to participate in corporate profits and, of course, they have to share business risks as well. Wages based on productivity are not fair in the medium-long run, as Solow’s residual shows. There is nothing new here for the theories of the firm’s organization: cooperative societies, sharecropping and similar contract arrangements have existed in agriculture and in the fishing sector for ages. Designing a new framework to accommodate particular types of business legal institutions is not a major problem. Institutional Economics is a well-established field in Management Sciences, where the firm is seen as a network of contracts. It is, however, a Political Economy question, beyond Economic Policy.

Unemployed people do not have wages, nor can they participate in corporate profits, so we have to redesign the labor market to achieve full employment, with stable and, when necessary, temporary jobs. The abuse of temporary, unstable jobs is unacceptable. According to Eurostat, 63% of part-time workers in Spain are people who cannot find a full-time job. The Administration, certainly in Spain, is the main user of these bad practices. For example, highly qualified physicians, who take more than 11 years to graduate and complete their internship training, can only get one-day contracts as duty doctors. The Court of Justice of the European Union has recently ruled that these practices are at odds with the standing Directives, but so far nothing has been done to comply with the ruling.

Ways to achieve full employment and decrease duality in the labor market are collaborative working, job sharing and flexible work. Data on the level of job sharing are scarce, but it is an increasing practice in the administration in the UK, Switzerland and other advanced European countries. Another measure could be progressive relief contracts according to age.

(b) Education. If a job is going to be shared, the employees have to be properly qualified. Lack of education is the main barrier to finding a job. That means that education has to be recognized in the social contract (rule of law) as a main right of any citizen. It is an investment that at the same time favors equality. It has to be taken as a question of national interest to avoid the vicious circle: poor people cannot afford education, therefore they cannot access proper jobs, so they will be permanently poor (Fig. 6).

Fig. 6 Education and employment share in the US. Source: Albensi et al. (2013)

(c) Reforming the tax system. Education at all levels, including beyond school and university, needs financial support, or otherwise it will remain wishful thinking. This demands a serious reform of the tax system to increase taxable income, now undermined by legal “tax avoision”. The Economic Policy Institute (EPI) website has published facts explaining how U.S. corporations have rigged the rules to dodge the taxes they owe over the last 20 years (Clemente et al. 2016). I quote three of the main facts of the report. The numbers may differ for other developed countries, but the need to increase taxable income and to curb legal “tax avoision” practices is similar.

(i) Corporate profits are way up, and corporate taxes are way down. In 1952, corporate profits were 5.5% of the Economy, and corporate taxes were 5.9%. Today, corporate profits are 8.5% of the Economy, and corporate taxes are just 1.9% of GDP. (ii) Corporations used to contribute $1 out of every $3 in federal revenue. Today, despite very high corporate profitability, it is $1 out of every $9. (iii) Many corporations pay an effective tax rate that is one-half (or less) of the official 35% tax rate.

The financial crisis brought attention not only to the problem of inequality, but also had a direct effect on it, as will be discussed later in the chapter.

3 What Have We Learned from the Seventies Crisis?

3.1 The “Augmented” Phillips Curve

As a student at the L.S.E. with a previous degree in electrical engineering, Phillips built an analogue machine that used hydraulics to model the working of the UK economy, the MONIAC (Monetary National Income Analogue Computer). In the same vein, he thought that the excess demand and excess supply that restore equilibrium in a commodity market could be applied to the wage rate and unemployment equilibrium. In Phillips (1958), he analysed the relationship between the rate of inflation and the rate of unemployment. With data for the U.K. economy for each year from 1861 to 1957, he found clear evidence of a negative relation between the rate of inflation and unemployment. “Because of the strong curvature of the fitted relation in the region of low percentage unemployment, there will be a lower average rate of increase of wage rates if unemployment is held constant at a given level, than there will be if unemployment is allowed to fluctuate about that level. These conclusions are of course tentative. There is need for much more detailed research into the relations between unemployment, wage rates, prices and productivity” (Fig. 7).

Fig. 7 Inflation and unemployment in Australia during the 1970s crisis. Source: Keen (2009)

The detailed research came soon. Samuelson and Solow replicated Phillips’s paper for the United States, using data from 1900 to 1960, and confirmed his conclusions. They coined the label “Phillips curve”, and it soon became the tool for Economic Policy decisions. It was simple, and it was as robust and useful as a commodity market model. By implementing the right demand policies, governments hoped to achieve a permanent balance between employment and inflation, assuring long-term GDP growth.

The 60s were a period of great moderation, but with discrepancies between Keynesians and Monetarists over the Phillips curve and its consequences in terms of monetary and fiscal policies. Friedman (1968) and Phelps (1967) strongly disagreed. They argued that the apparent trade-off would disappear if policy makers actually tried to exploit it; that is, if they tried to achieve low unemployment by accepting higher inflation. Agents cannot be fooled by the government’s unanticipated inflation eroding real wages (Fig. 8).

Fig. 8 The effect of oil prices and the “expectations augmented” Phillips curve

“Phillips wrote his article for a world in which everyone anticipated that nominal prices would be stable and in which that anticipation remained unshaken and immutable whatever happened to actual prices and wages….To state this conclusion differently, there is always a temporary trade-off between inflation and unemployment; there is no permanent trade-off. The temporary trade-off comes not from inflation per se, but from unanticipated inflation”, Friedman (1968).

Fig. 9 The Phillips curve was so fashionable that even analogue machines were built

The Oil crisis changed the stable context: monetary expansion generated more inflation together with unemployment. Friedman made his contribution in 1968, before the Oil crisis, and it is a good example of the Chicago school’s wisdom born of discontent. However, he was not recognized by the academic establishment until the new context of the Oil crisis “experiment”. Before the crisis, there were scenarios different from the UK and USA data, but it was better for Keynesians and policy makers to keep the new toy and to ignore these alternative contexts (Fig. 9).

Nevertheless, by the middle of the seventies, the expectations-augmented Phillips curve was accepted, and Monetarist wisdom as well. Rules versus discretion in monetary policy was a consequence. The rate of monetary growth m should be determined from the target rate of inflation \(\pi\) and the expected rate of growth y: \(m = \pi + \lambda y\).
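A minimal simulation of the Friedman–Phelps argument, with made-up parameters: if the government holds unemployment below its natural rate, adaptive expectations catch up and inflation drifts upward without limit.

# Expectations-augmented Phillips curve with adaptive expectations:
#   pi_t = pi_e_t - beta * (u_t - u_star)
#   pi_e_{t+1} = pi_e_t + alpha * (pi_t - pi_e_t)
beta, alpha, u_star = 0.5, 0.8, 0.06
pi_e, u_held = 0.02, 0.04   # unemployment held below the natural rate

for year in range(8):
    pi = pi_e - beta * (u_held - u_star)  # actual inflation this year
    print(year, f"{pi:.1%}")              # rises every year: no permanent trade-off
    pi_e += alpha * (pi - pi_e)           # expectations adjust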

3.2 Rational Expectations

The main contribution of Friedman and Phelps is clearly emphasized in Friedman’s quotation above. The temporary trade-off comes not from inflation per se, but from “unanticipated inflation”, opening a new debate that still goes on. That the structural economic equations should include expectations was pointed out by Keynes, and expectations were already incorporated in the consumption and investment functions, with narrative explanations: the life cycle by Modigliani, permanent income by Friedman and extrapolative (memory) expectations by Klein. In this crisis, however, expectations affected aggregate supply (AS).

How to model these expectations is an open question, and it is one reason why the macroeconomic predictions of available models differ substantially in the medium and long run (Blanchard and Johnson 2013).

The recent crisis has demonstrated the inadequacy of models based on the assumption of rational expectations (RE), even with the latest twist on what we mean by RE. The theoretical guardians of RE and the thousands of users of econometric models remain silent in their trenches. The issues raised 40 years ago are still relevant today. Let us recall some definitions of RE proposed by distinguished protagonists of the debate.

Muth (1961) introduced the notion of RE: “In order to explain these phenomena, I should like to suggest that expectations, since they are informed predictions of future events, are essentially the same as the predictions of the relevant economic theory”.

Sargent (1993) gives us an interpretation of this definition: “The idea of rational expectations is … said to embody the idea that economists and the agents they are modeling should be placed on an equal footing: The agents in the model should be able to forecast and profit-maximize and utility-maximize as well as the economist (or should we say econometrician) who constructed the model”.

He goes even further, claiming: “Rational expectations impose two requirements on economic models: Individual rationality, and mutual consistency of perceptions. When implemented numerically or econometrically, rational expectations models impute much more knowledge to the agents within the model (who use the equilibrium probability distributions in evaluating their Euler equations) than is possessed by an econometrician, who faces estimation and inference problems that the agents in the model have somehow solved. I interpret a proposal to build models with bounded ‘rational agents’ as a call to retreat from the second piece of rational expectations (mutual consistence of expectations) by expelling rational agents from our model environments and replacing them with ‘artificially intelligent’ agents who behave like econometricians. These ‘econometricians’ theorize, estimate and adapt in attempting to learn about probability distributions which, under rational expectations, they already know”.

It is difficult to grasp the message of this rhetorical masterpiece. It seems, following his numerous works, that after using VAR time series models as a proxy for an econometrician’s rationality (that is, models without theory), he was interested in how the econometrician agent learned with artificially intelligent agents; in particular, whether using recursive least squares (well known in system identification in Control Theory) would converge to an RE equilibrium (Marcet and Sargent 1989).

The reader will probably share the following indictment: “I am not well qualified to criticize the theory of rational expectations and the efficient market hypothesis because as a market participant I considered them so unrealistic that I never bothered to study them. That is an indictment in itself but I shall leave a detailed critique of these theories to others” (Soros 2012). One of the earliest critiques was my paper, Hernández-Iglesias and Hernández-Iglesias (1981). In what follows, we present the fundamental arguments, referring the interested reader to the article for the analytical details.

3.3 Rational and Adaptive Expectations

Three questions need a proper answer. How do agents form expectations? Could adaptive expectations (AEx) be equivalent to fully rational expectations? Can fast and frugal individual predictions lead to aggregate social expectations that also end up being rational?

I am glad to find that some work I did early in my academic career may be useful 40 years later to answer these questions. In the early seventies, I was a student in the Ph.D. programme of the Control and Computing Department at Imperial College (IC), London. The IC was involved in a then quite new project to apply Control Theory to Econometrics (PREM), with Queen Mary College as a partner at the time, and the LBS later on, in what ended up as the Forecasting Unit. At the same time, my brother Feliciano had just returned to Spain with a Ph.D. in Economics from the U. of Chicago. He was an adviser at the Spanish Ministry of Industry, which was responsible for the Spanish Business Intentions Survey (SBIS). The National Statistics Institute, mainly in the hands of economists, wanted to take over the SBIS. I was in charge of a one-year project to assess the predictive capacity and the internal and external consistency of the SBIS with the time series methods of Box-Jenkins. The approach at the I.C. was quite advanced and differed from the B-J methods in several relevant respects. I set up a team with two other Ph.D. students and started developing our own suite of programs in Systems Identification (SYSID), with the available knowledge in prediction error methods and adaptive control that later on led to expectations, optimization methods (Ghahramani 1998) and prediction error methods (Ljung 1999). A short description of the results was published in Hernández et al. (1979).

Looking at the conundrum of rational expectations, and at the shocking results in causality tests with economic time series, from this alternative engineering approach seemed a good, novel idea. After debating with my brother for more than a year, I decided to stop deliberating and to send the paper for review to the editor of The Journal of Econometrics, Zellner. We had a very encouraging and friendly response, with several deep remarks and some references to read that were of great help, and the revised paper was published (Hernández-Iglesias and Hernández-Iglesias 1981). The paper was compulsory reading in the postgraduate courses delivered by Zellner at the U. of Chicago for some years. It received attention in Europe but, given its disruptive nature and its disagreement with some of the visible leaders of the RE “revolution” in the US, we received no response to our indictments from them, except from Zellner and Cagan.

3.4 Expectations and Efficiency

At that time, expectations models in practice were linear distributed-lag models, as in Cagan (1956), studying the monetary dynamics of hyperinflation: \(\hat{y}_t(k) = \sum_{i=0}^{n} l_i(k)\, y_{t-i}\), where \(\hat{y}_t(k)\) is the value anticipated at t for k periods ahead. Because multicollinearity is present, it was not possible to estimate the \(l_i\) weights, and the extra condition of parsimony was imposed by assuming a geometric decay in the weights. As a “behavioral” alternative, the expectations could be approximated by an error-learning process, \(\hat{y}_t - \hat{y}_{t-1} = \alpha \left( y_t - \hat{y}_{t-1} \right) = \alpha\, \hat{\varepsilon}_t\), driven by the prediction error \(\hat{\varepsilon}_t = y_t - \hat{y}_{t-1}\). This approximation is efficient only if the univariate time series follows an ARMA(1, 1): \(y_t - y_{t-1} = \varepsilon_t - (1 - \alpha)\, \varepsilon_{t-1}\) (Muth 1961).
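Muth’s equivalence is easy to verify numerically. A simulation sketch under stated assumptions (Gaussian innovations, \(\alpha = 0.3\)): applied to an ARMA(1, 1) of this form, the error-learning predictor attains a one-step prediction-error variance equal to the innovation variance, i.e. it is efficient.

import numpy as np

rng = np.random.default_rng(0)
alpha, T = 0.3, 20_000
e = rng.normal(size=T)

# Simulate y_t - y_{t-1} = e_t - (1 - alpha) * e_{t-1}.
y = np.zeros(T)
for t in range(1, T):
    y[t] = y[t - 1] + e[t] - (1 - alpha) * e[t - 1]

# Adaptive (error-learning) one-step predictor.
y_hat, errors = 0.0, []
for t in range(T):
    errors.append(y[t] - y_hat)
    y_hat += alpha * (y[t] - y_hat)

print(np.var(errors), np.var(e))  # both close to 1: the predictor is efficient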

Can we propose a model for expectations that extends the equivalence between distributed-lag expectations models and adaptive models beyond the ARMA(1, 1) case? We showed that this equivalence can be achieved by proceeding in two steps:

  (a) Estimate the Box-Jenkins model of the univariate series, \(\mathrm{ARMA}(m, n)\), say \(p(B)\, y_t = q(B)\, \varepsilon_t\), where p(B) and q(B) are polynomials in the lag operator B of orders m and n.

  (b) The most efficient one-step-ahead causal predictor for time t, which satisfies \(\hat{y}_t(1) = \arg\min E\left( y_{t+1} - \hat{y}_t(1) \right)^2 = E\left( y_{t+1} \mid y_t, y_{t-1}, \ldots, y_1 \right)\), with E the expected-value operator, can be derived from the estimated Box-Jenkins model as

$$\hat{y}_t(1) = -\hat{p}_1 y_t - \cdots - \hat{p}_m y_{t-m+1} + \hat{q}_1 \hat{\varepsilon}_t + \cdots + \hat{q}_n \hat{\varepsilon}_{t-n+1}$$

This is equivalent to the self-tuning predictor (Wittenmark 1974) used in the identification of linear models in Control Theory (Hernández et al. 1979), and it provides a general way to model adaptive expectations.
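In modern terms, the two steps take only a few lines. A sketch using the statsmodels library, assuming a series y is already at hand (here simulated for the purpose):

import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Some ARMA-type data to stand in for an observed series.
rng = np.random.default_rng(1)
e, y = rng.normal(size=500), np.zeros(500)
for t in range(1, 500):
    y[t] = 0.6 * y[t - 1] + e[t] + 0.3 * e[t - 1]

# Step (a): estimate the Box-Jenkins model p(B) y_t = q(B) eps_t.
fit = ARIMA(y, order=(1, 0, 1)).fit()

# Step (b): the fitted model delivers the efficient one-step-ahead predictor.
print(fit.forecast(steps=1))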

3.5 Granger’s Causality, RE and AEx Equivalence

Two surprising results were observed when using time series analysis (prediction without theory). One was the good forecasting performance of univariate models compared with structural econometric models (Nelson 1972). A reasonable explanation of this fact was that structural econometric models contained specification errors and that one should use small models with rational expectations. We preferred the alternative explanation: that fast and frugal expectations, such as adaptive expectations (AEx), could approximate theoretical rational expectations (RE) well. Aggregated individual expectations could manifest collective intelligence after all.

The second was the absence of Granger’s causality (GC) among variables, such as money and prices, that are related according to theory. This result reinforced our alternative explanation. We interpreted these results as unexpected, welcome proof that under stable regimes, adaptive expectations (prediction without theory) and rational expectations (prediction according to theory) may be equivalent. Rational agents may draw on an information set larger than just the history of the variable being forecast, including the structure of the relevant system describing the Economy. But if there is no Granger’s causality, this larger information set does not improve prediction, and AEx are equal to RE.

Under what conditions should this be true?

According to Granger (1969), a variable y is not caused by the variable x if \(\hat{y}_t(k) = E\left( y_{t+k} \mid A_t \right) = \arg\min E\left( \varepsilon_t(k) \mid A_t \right)^2 = E\left( y_{t+k} \mid A_t - x_t \right) = \arg\min E\left( \varepsilon_t(k) \mid A_t - x_t \right)^2\), where \(A_t\) represents all the information available at time t and \(A_t - x_t\) excludes the information about x up to the same instant t. Therefore, if y is “caused” by x in the sense of Granger, a pure extrapolative model of expectations about y is irrational in the sense of Muth, and y is endogenous in any model which includes x. This is the relationship between Granger’s causality, rationality and endogeneity. If, on the contrary, there is no GC, extrapolative and the corresponding adaptive predictions are equivalent to rational expectations.

Of course, for a given sample a variable x may not “cause”, in the sense of Granger, another variable y, even though according to theory y is influenced by x. As we reiterate throughout this chapter, Economic Theory is contextually dependent. For example, in the monthly money and prices time series of the Spanish economy in the period 1965–1976, one of great stability, causality was not detected. However, for the German hyperinflation period 1920–1923, GC was detected. For the complete analytical details about when this may occur, see Hernández-Iglesias and Hernández-Iglesias (1981).
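Such checks are routine today. A sketch of a Granger-causality test with the statsmodels library, on synthetic data (with real series, the two columns would be, e.g., prices and money):

import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(2)
x, y = rng.normal(size=300), np.zeros(300)
for t in range(1, 300):
    y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + rng.normal()  # x "causes" y here

data = pd.DataFrame({"y": y, "x": x})
# Tests whether the second column helps predict the first, up to 4 lags.
grangercausalitytests(data[["y", "x"]], maxlag=4)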

For our purpose here, the important message is that the non-GC results indicate that equivalence between extrapolative and rational expectations occurs frequently for relevant economic variables in stable periods.

If this is so, there are strong implications for macroeconomic modelling: “The observed coincidence of efficient extrapolative expectations and rational expectations is more the rule than an exception for most available historical samples” (Hernández-Iglesias and Hernández-Iglesias 1981). Ecological knowledge (simple rules) will be equivalent to constructive knowledge (fully rational rules). In periods of stability, RE and AEx are equivalent, and citizens do not have to behave as econometricians to form efficient expectations.

4 What Can We Learn from the 2008 Crisis?

Many developed countries are still suffering from the crisis that started in 2008. Others have partially recovered, at the cost of low rates of growth, a great increase in public debt and more inequality. Large debt, quantitative easing and very low interest rates led to a liquidity trap, and few options were left for policy measures to increase investment and demand in order to recover growth. The crisis has shown that orthodox economists pay attention to the beauty of their models but forget that economic theories are contextual and that Economics, in the first place, has to deal with institutional design. Real estate capital, together with finance capital and insurance, is at the heart of current misunderstandings of the economic crisis and recession. Finance, insurance and real estate credit (FIRE) are not in the econometric models. An urgent task is to include models of the FIRE sector and the corresponding prudential rules. It is a task of Political Economy and Institutional Economics, not of Economic Policy.

The crisis overlaps with ongoing changes on the supply side: radical changes in production coming from the second generation of Information and Communication Technologies. Throughout the second half of the last century, the effect of technological progress was non-disruptive and smooth. The technological changes nowadays are affecting production, management practices and the labor market in a disruptive way. This means that we need proper answers to the questions raised by the current crisis and by the ongoing changes on the supply side of the Economy.

4.1 Few Economists Saw the Crisis Coming

Everything was going well. As in the years before the seventies crisis, it was a time of great moderation. Macroeconomists were proud of their models and unaware of a major pitfall: they had forgotten to include a FIRE module and its relationship with both the monetary and the supply modules. In 2008 the perfect storm was building up, the Minsky moment arrived, and the self-assured hubris among economists was shaken.

Great economists and policy makers did not see the crisis coming. I choose a sample of quotations. Ben Bernanke, at the meeting of the Eastern Economic Association in 2004: “One of the most striking features of the economic landscape over the past twenty years or so has been a substantial decline in macroeconomic volatility… Several writers on the topic have dubbed this remarkable decline in the variability of both output and inflation “the Great Moderation.” Similar declines in the volatility of output and inflation occurred at about the same time in other major industrial countries, with the recent exception of Japan, a country that has faced a distinctive set of economic problems in the past decade”. Japan was a good warning that something could be wrong, but it was better not to spoil the happy days.

In an October 12, 2005 speech to the National Association for Business Economics, the then Federal Reserve Chairman Alan Greenspan spoke about the “development of financial products, such as asset-backed securities, collateral loan obligations, and credit default swaps, that facilitate the dispersion of risk… These increasingly complex financial instruments have contributed to the development of a far more flexible, efficient, and hence resilient financial system than the one that existed just a quarter-century ago.” In February 2005, Greenspan had asserted to the US House Financial Services Committee that “I don’t expect that we will run into anything resembling a collapsing [housing] bubble, though it is conceivable that we will get some reduction in overall prices as we’ve had in the past, but that is not a particular problem.”

In August 2008, Blanchard claimed: “For a long while after the explosion of macroeconomics in the 1970s, the field looked like a battlefield. Over time however, largely because facts do not go away, a largely shared vision both of fluctuations and of methodology has emerged. Not everything is fine. Like all revolutions, this one has come with the destruction of some knowledge, and suffers from extremism and herding. None of this is deadly however. The state of macro is good”.

Nevertheless, some economists, like Shiller, saw the crisis coming; but in general, those who saw it were unconventional economists such as Keen, Minsky and Hudson. The crisis was there, but econometricians did not care.

4.2 The Crisis as a Classical Financial Panic in a New Financial System

One of the ideas I insist on in this chapter is that Economics must refer to a specific context, which is only certain in the present and in the past. The first thing a reasonable economist can do to understand the crisis is to look for similar historical cases. Bernanke (2013) makes use of the 1907 financial panic, an episode caused by failures in the financial system of the time, to analyse the failures in the current financial system and the agenda to repair them. I prefer to make use of the last Japanese crisis because, having started 20 years earlier, it can give us information not only about the crisis itself but also about the problems faced in recovering growth once confidence in the financial system has been restored.

Since 1961, the Bank of Japan (BOJ) attempted to control directly the volume of commercial bank credit by providing lending targets for selected banks. This policy was applied to a subset of lending institutions and led to low-interest mortgage loans with higher risk. Low interest rates and loose monetary policy fuelled strong growth and raised stock and housing prices. Following the Plaza Agreement in 1985, the yen appreciated from around 240 yen to the USD to about 120 yen in less than a year. In response, the Bank of Japan lowered interest rates from 5.5% down to 2.5% in 1987. This dramatic easing of monetary policy at a time of economic strength provoked an explosion of real-estate transactions and high stock prices.

In 1988, Prime Minister Nakasone reduced corporate tax rates from 42 to 30% and top marginal income tax rates from 70 to 40%. The combination of easy monetary policy and expansive fiscal policy led inevitably to the Japanese 1990 stock market crash and a deep fall in housing prices. Equity and asset prices fell, leaving overly leveraged Japanese banks and insurance companies with books full of bad debt. The financial institutions were bailed out through capital injections from the government, loans and cheap credit from the BOJ, and the ability to postpone the recognition of losses, ultimately turning them into “zombie banks” that in turn kept financing “zombie firms” for political interests. Housing prices (index 100 in 1975) fell from 215 (1991) to 130 (1994) and then to 100 (2003): a loss of more than half of housing wealth over the crisis period.

A strong increase in government spending and stagnant revenues raised the government debt to more than twice GDP, while GDP growth fell from 7.1% in 1988 to −5.5% in 2009 and stood at 1.6% in 2013. The lesson is clear. Intervention by the BOJ trying to correct the yen appreciation, stimulating private investment through monetary expansion, cutting interest rates and expansionary fiscal policies to increase consumption, led to the bubble. Monetary policy after the crash was limited because of a liquidity trap, and fiscal expansion was not possible due to the huge public and private debt from zombie banks and their linked corporations. As a last resort, the BOJ accepted the bad practice of buying debt in private hands. The recovery is being slow and poor. The cause of the crisis was not a failure of the financial system but a misuse of Economic Policy, forcing the BOJ to accept too-low capital ratios in the banks.

Blanchard’s Macroeconomics textbook (2013, 6th edition, Chap. 9) provides a detailed account of the causes of, and policy responses to, the 2008 crisis. I summarize the main facts with personal comments. There are some differences from the Japanese crisis. The trigger in Japan was a fall in stock prices, whereas in the US crisis it was the fall in housing prices from 2007 on. The difference is not relevant, since both end up correlating after a short time. Japan was more exposed in relative terms, because of the big pension funds and foreign speculative investment.

The other difference is more important. In the 2008 crisis, a complex “engineering” (an “Alphabet Soup” is Blanchard’s label) of products and financial agents (SIVs), not limited to banks and insurance companies, was created to avoid the banks’ capital-ratio limitation on lending. This is a “shadow financial system” where risk was difficult to assess: opaque, fuzzy and without traceability (Fig. 10). Investors accepting mortgage-backed securities (MBS) and other forms of securitization were acting, without being aware of it, as “bankers” in the dark. This led to at least the following problems (Blanchard 2009):

Fig. 10 The shadow financial system. Source: National Commission on the Causes of the Financial and Economic Crisis in the United States (2011)

  • Assets created, bought and sold were riskier than they appeared. With expectations of rising housing prices, subprime mortgage risk seemed acceptable. However, if housing prices fell, at some point many mortgages would exceed the value of the house, leading to defaults and foreclosures.

  • Securitization made it difficult to value assets on the balance sheets of financial institutions. The product had been in the market since the nineties, but by 2008 more than 60% of US mortgages were securitized, and the income streams from these securities were tranched to offer investors flows with different risk profiles.

  • Securitization and globalization led to increasing interconnection of the financial institutions. However, this interconnection destroyed the theoretical advantage of avoiding risk by pooling mortgages. The variance of the average of m independent random variables with the same distribution is m times smaller than the variance of a single one, but the tranches of the pooled mortgages are not independent, as the sketch after this list illustrates. The financial system became anything but resilient, as Greenspan had thought it was, and it favoured free riding and moral-hazard behaviors.
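A minimal simulation of this pooling point (illustrative numbers; a single common factor stands in for the housing market):

import numpy as np

rng = np.random.default_rng(3)
m, n_sims, rho = 100, 20_000, 0.4   # pool size, simulations, correlation

common = rng.normal(size=(n_sims, 1))  # common housing-market factor
idio = rng.normal(size=(n_sims, m))    # idiosyncratic risks

independent = idio
correlated = np.sqrt(rho) * common + np.sqrt(1 - rho) * idio

print(np.var(independent.mean(axis=1)))  # about 1/m = 0.01: pooling works
print(np.var(correlated.mean(axis=1)))   # about rho = 0.4: pooling barely helps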

The first aid to cope with the panic consisted of restoring confidence: allowing banks and other financial institutions to borrow from the central banks, government bailouts of banks, insurance companies and corporations, cleaning up banks by separating out the toxic products (the Troubled Asset Relief Program, TARP), etc. When the financial storm ended, its destructive effects were evident. For example, in the period 2007–2015, Spanish families lost 30% of their wealth. However, this reduction comes from their housing wealth; their financial wealth has increased. That means that saving during the storm was the families’ response. This, in turn, depressed consumption and demand. To put the Economy back on track, a further fiscal push was needed, since monetary policy, as in Japan, was blocked by a liquidity trap. The result in Spain is an increase in public debt of 60% and a rise in unemployment to 22%. What is much worse, the inequality gap has widened.

The cause of the 2008 crisis, unlike the Japanese one, was changes in the financial system. The great moderation had fooled not only macroeconomists: financial institutions and regulators also underestimated the financial risks. The result was a financial structure exposed to free riding and moral hazard. Any reform of the sick financial system has to minimize these two lurking dangers.

4.3 A New Financial System: Creating a Socially Useful Financial System

We agree with many economists worldwide that the post-crisis task needs more structural reform of the financial system and less Economic Policy firefighting, although both should go in parallel to recover growth. “But it’s not going to come easily from a political point of view. We need to make courageous decisions, which we’ve been talking about for a long time” (Lagarde 2016). To minimize the two lurking dangers, free riding and moral hazard, we need structural reforms and prudential rules.

Looking at the Japanese post-crisis period, one cannot avoid the general feeling that something more fundamental is missing, if not wrong, in Economics. This feeling has led to the creation of new institutes and scientific societies that have joined those already rethinking Economics. One of these is the Institute for New Economic Thinking (INET), founded and financed by Soros. When I was preparing my Keynote session for the Artificial Economics Conference in Barcelona (2014), I found a talk by Bezemer at the 2012 INET meeting in Berlin titled “Creating a socially useful financial system”.

In the talk, he separates the credit that goes to the “productive” system from the credit that goes to the finance, insurance and real estate (FIRE) sector. In today’s financial world, most credit is not spent on creating added productive capacity but on buying assets already issued. According to Bezemer, about 80% of bank loans in the English-speaking countries are real estate mortgages, and much of the balance is lent against bonds and stocks already issued. To accomplish structural reforms, we really need economic and political institutional arrangements that develop coherent alternatives to mainstream analysis. Why? Because mainstream Economics, and consequently the corresponding econometric models, do not include the credit-debit balance of the FIRE sector. Incorporating the FIRE sector in financial models is going to be a difficult battle, because it implies unveiling the effects of the “rentier sectors” on the Economy and changing macroeconomic financial accounting. Finance is not in the mainstream version of Economics.

The accounting approach to finance and credit has its roots in a monetary view of the circular flow of the Economy, whose representatives are Marx, Schumpeter, Kalecki, Minsky, Godley, Baker, Keen and Hudson, and at times Tobin. It is a heterodox position with respect to classical and neoclassical economics, which maintained that in the long run money is neutral, ignoring that money and credit have not been the same thing since “financial engineering” and shadow banking arose. This crisis is an empirical confirmation that money and credit matter.

What happened around 1985, for the first time in the history of the Economy? Figure 11, from Bezemer’s talk, shows credit to the dual sectors, FIRE and non-financial, as a percentage of GDP. The ratio of loans to nominal GDP for the FIRE sector increased from 50% in 1985 to 250% at the beginning of the crisis. The effect is clear: a tremendous inefficiency in the financial system that is supposed to sustain economic activity. The storm began to build around that year.

Fig. 11 Lending to the real and FIRE sectors. Source: Bezemer (2012b)

I included, of course, this graphic in my Keynote as one of the challenges for Economics: how to model a socially valuable financial system that includes money and credit. I considered, at the time, his social accounting approach a seminal contribution to understanding the crisis and the necessary reforms, and much more. I was right. I list some of Bezemer’s messages since then:

  (i) The credit nature of money has macroeconomic significance (Bezemer 2012a).

  (ii) The FIRE sector pumped wealth, in the form of revenues, from the real sector (Bezemer 2012b, 2016).

  (iii) Banks’ lending does the necessary job, creating enough money (through the money multiplier) to meet the demand of the productive sector for goods and services, and a little more, to avoid occasional illiquidity and to stimulate aggregate consumer demand. Schumpeter was right. However, credit to the FIRE sector hinders growth and distorts wealth distribution. A functional classification of credit is due (Bezemer 2014).

  (iv) Macroeconomic models have to introduce a money and credit sub-model, making the FIRE sector explicit in the overall model (Hudson and Bezemer 2012).

  (v) Finance is not the Economy. Rent is not income. FIRE credit is unproductive credit and imposes overhead costs on the production sector (Bezemer and Hudson 2016).

  (vi) Credit flows to non-financial business have a positive effect on growth, but credit flows for mortgages and other assets have no effect or even a negative one (Bezemer et al. 2016).

  (vii) Bank credit to real estate and financial asset markets increases income inequality. Credit to non-financial business and for household consumption supports broader income formation, decreasing income inequality. Since the nineties there has been a progressive shift in credit towards the FIRE sector, consequently widening the inequality gap (Bezemer and Samarina 2016).

  (viii) Can ABM models help in developing socially useful macroeconomic modelling? (Bezemer 2012b; Schasfoort et al. 2016; Bezemer et al. 2016).

We have learned that the failure to consider credit and money in their duality (real productive credit and FIRE credit) is the core cause of the crisis. This time, the cure and the recovery from the crisis will require not only an explanation but also a new design of the financial system to avoid a new crisis, which would deepen inequality and undermine democracy. Yes, Mrs. Lagarde is right. We need courage, because it is a question of institutional design beyond prudential rules. It is a question of politics and economics: a return to the institutional reform task of Political Economy.

5 Economics as a Social Science

Economics deals with social behavior and therefore inherits its complexity. Trying to be a science, it is based on theories in accordance with empirical evidence. However, since society and economic institutions change, the Economy evolves, and standing theories in Economics have to be either completed or replaced by new ones to accommodate new contexts. In this chapter, we have referred to three challenging contexts within a period of 60 years. The interval of time between two different contexts that force economic relations to be changed or completed is very long. During these periods of “great moderation”, economists can enjoy their wisdom and politicians are happy applying the accepted rules. One could expect that economists, being aware of the contextual validity of their theories, would tolerate methodological diversity. It is not so.

5.1 What Is Wrong with Economics?

The unrealism of its assumptions. Economists build models assuming things that do not occur in the real world, as long as their models follow from mathematical reasoning and have predictive capacity.

Equilibrium thinking. Equilibrium conditions are the closure of their models, perhaps because when non-narrative Economics started, the world was fascinated with physical laws. Economists aimed to find universal laws equally compelling in the Economy. I remember using the model of a magnetic field to explain market equilibrium, the flow of oil through a pipeline in terms of hydraulic horsepower and internal pipe diameter to justify a Cobb-Douglas production function, or showing students that the funny parameter mathematicians call the Lagrange multiplier was just a shadow price. I may be excused because I was teaching Economics to engineering students. I agree that Physics has a lot to contribute to Economics today, but Economics is not like Physics. In Economics, there are no general laws; laws are contextually valid.

Methodological individualism. Individuals, their choices and decisions are the sole units of analysis, without much reference to Psychology or Sociology. Tags and kinship do not count. The agents are fully rational and equally motivated, something that Keynes criticized with his metaphor of the beauty contest. Of course, game theory has somewhat relaxed this assumption, but this autism with respect to sister disciplines has had very bad effects in financial markets.

Formalism. All the accepted economic laws ought to be formalized in a mathematically coherent form. The problem with this formalism is that “the great virtue of maths is that it does make precise things which ought to be precise. Its great defect is that it imparts a false precision to things which cannot and ought not to be made precise” (Skidelsky 2015). Many of the results in EE and Artificial Economics (AE) can be described in a narrative or graphical way and, of course, when it is possible, in a mathematical formalism. You can prove by calculus where the maximum of a function is, or use your computer to graph the whole function and select all the local or global maxima. You can try to find the probability distribution of, say, sin x, where x is a random variable that follows a Poisson distribution, or just simulate and graph the results. In both cases, my choice would be the second alternative.
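A sketch of the “simulate and graph” alternative for both tasks (the function and the Poisson mean are arbitrary choices for illustration):

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(4)

# Distribution of sin(x) with x ~ Poisson(3): just sample it.
samples = np.sin(rng.poisson(lam=3.0, size=100_000))
plt.hist(samples, bins=60, density=True)
plt.title("Simulated distribution of sin(x), x ~ Poisson(3)")
plt.show()

# Maxima of a function: graph it on a grid and pick them out.
xs = np.linspace(0.0, 10.0, 10_000)
f = np.sin(xs) * np.exp(-0.1 * xs)  # any function of interest
print("global maximum near x =", xs[np.argmax(f)])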

The filter of the ideological context. The very name of Political Economy emphasizes the dependence of the Economy on the political and ideological context. Many good contributions in Economics are left aside because they are not in agreement with the prevailing ideology.

Academic dictatorship and monoculture. The INET-CORE project is a response to growing student protests against the way Economics is taught nowadays. Here is Tirole, the 2014 Nobel Prize winner, writing a letter to the French Secretary of State for Higher Education and Research, and the AFEP response.

“…May I inform you of my concern about a continuing rumor about the creation of a new section of the National Council of Universities named «Institutions, Economy, Territory and Society». Should this rumor be confirmed, it would cause a disaster for the visibility and the future of research in economics in our country. It is especially important for the community of academic teachers-researchers to be endowed with a single scientific assessment standard, based on the ranking of the journals of the discipline and on an external assessment by internationally prominent peers. It seems inconceivable to me that France would recognise two communities within the same discipline…. Self-proclaimed ‘heterodox’ economists have to comply with the fundamental principles of science.”

Response of the heterodox Association Française d’Economie Politique (AFEP)

  1. “The claim that «heterodox» economists want to escape the assessment of their research is preposterous. Our proposition of an authentic peer evaluation of research, based on a variety of publications, is different from the current ranking of journals and the perverse quantitative bibliometric norms of assessment that it implies.”

  2. “The proposal to create a commission of Nobel Prize and Clark Medal winners is similar to taking a representative sample of a Papal conclave to decide on the legitimacy of a demand by a minority of Protestants.”

The position here is that Economics is a science (we agree) but that there is only one way to do science (we disagree). Economics is a social science, with complexity far beyond that of a “pure” science. Tolerance and academic diversity are necessary. The 50 years it took Experimental Economics to gain acceptance in mainstream economics give us a measure of the orthodox economists’ intolerance.

5.2 Mathematics and Game Theory Cannot Deal with Social Complexity

Complexity is a term used in many ways by different schools in Economics. For our purpose, we mention three types. A dynamical system is complex if it does not tend endogenously and asymptotically to a fixed point, a limit cycle, or an “explosion”. Alternatively, a situation exhibits complexity when there is extreme difficulty in calculating solutions to optimisation problems. Another source of complexity appears when we must deal with agents with bounded rationality yet strategic behavior, heterogeneous agents who should learn from the decisions of others. This is where game theory has attempted to extend market design from a constructivist mathematical approach, with limited success.

Let me illustrate the last source of complexity with my preferred persuasive argument against forcing Economics into the Procrustean bed, and for the need of new methods in Economics: the generative method of Experimental Economics (EE) and of Agent Based Modelling (ABM) applied to Economics, that is, Artificial Economics (AE). The example is taken, and extended, from the first edition of Pindyck and Rubinfeld’s (1995) Microeconomics textbook.

Three contestants, A, B and C, have a balloon and a pistol each. From fixed positions, they fire at each other’s balloons. When a shot hits a balloon and breaks it, its owner is out of the game. When only one balloon remains, its owner is the winner and receives a $1000 prize. At the outset, the players decide by lot the order in which they will fire, and each player can choose any remaining balloon as his target. Everyone knows that A is the best shooter and always hits the target, that B hits the target with probability 0.9, and C with probability 0.8. Which contestant has the highest probability of winning the $1000? When I asked my students to advance an answer within 5 min, some would come up with a reasonable and correct one: contestant C.

What can we learn from this example? An intelligent and knowledgeable student would use probability calculus to work out the right solution, should he have enough time: procedural rationality, in terms of Simon (1982). This was not possible in the time available to answer. How did they arrive at the correct answer? Because “as in life, success is for the mediocre”, or because “whenever A’s balloon is not broken, B and C will shoot at A”. Both answers come from “social knowledge”, not from “constructive knowledge”.
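The students’ answer can also be checked generatively. Below is a minimal Monte Carlo sketch; the targeting rule, namely that every shooter aims at the strongest remaining rival, is my own encoding of that “social knowledge” and is an assumption, not part of the original statement:

```python
import random

HIT = {"A": 1.0, "B": 0.9, "C": 0.8}        # hit probabilities from the example

def play_once():
    alive = ["A", "B", "C"]
    order = random.sample(alive, 3)         # firing order decided by lot
    while len(alive) > 1:
        for shooter in order:
            if shooter not in alive or len(alive) == 1:
                continue
            # "social knowledge" rule: aim at the strongest remaining rival
            target = max((p for p in alive if p != shooter), key=HIT.get)
            if random.random() < HIT[shooter]:
                alive.remove(target)
    return alive[0]

wins = {p: 0 for p in HIT}
for _ in range(100_000):
    wins[play_once()] += 1
print(wins)                 # C, the worst shooter, survives most often
```

Under this rule nobody targets C while A and B are both alive, which is precisely why the weakest shooter is the most likely winner.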

When dealing with markets, should one restrict oneself to fully rational agents or to agents that use fast and frugal decision rules? As shown in Posada and López-Paredes (2008), an individual constructivist methodology does not lead to better models of learning and decision than ecological and social knowledge.

Let us extend the example. Once we know that the most likely winner is C, let us repeat the game in such a way that we can keep enjoying the betting game and the shooters earn their salary. Who will be the winner now? Mathematics will not provide an answer and game theory will be useless for such a simple problem. However, the answer is there: experiment! See if there is a similar situation in Economics (a natural experiment). Yes, there is: the dominant firm model, or “live and let others live” model of oligopoly, can provide an answer. There is no need to go beyond the constructivist mathematical approach of Industrial Organization textbooks.

However, is the dominant firm model, as explained within a constructivist approach, correct? You may try to justify that it is a correct model with empirical econometric tests, but this is not a sufficient consistency test. What do I mean? Am I saying that constructivist models and the corresponding econometric tests are not enough to certify the scientific validity of economic models? Yes, that is what I mean. One needs to complete the model with a prior check of internal consistency: identifiability, in terms of the Cowles Commission, or in today’s terms, explaining the dynamics towards the equilibrium model that you are rescuing from statistical data, as Smith wisely warns us:

“As I see it, there is no rationally constructed science of scientific method. The attempt to do it has led to important insights and understanding, and has been a valuable exercise. But all construction must ultimately pass ecological or ‘fitness’ tests based on the totality of our experience” (Smith 2008).

5.3 Experimental Economics

EE is an approach to Economics that uses human agents, in the classroom, in computing laboratories or in the field, as an engine for generating substantive propositions, learning about behavioral assumptions, or testing and calibrating alternative economic models.

It has developed from economists’ disagreements with some of the strong assumptions of orthodox Economics, replicating observed individual and social behavior. It is the result of the integration of Economics with Psychology and with aggregate social behavior. Today the integration of Economics with Psychology has been achieved through EE; the integration with social behavior is pending. We regret the change in title of the Journal of Socio-Economics, where Smith published a revised version of his Nobel Prize speech, to The Journal of Behavioral and Experimental Economics, forsaking the term social. A single-document survey of the current state of EE is unusual, given its wide range of applications; nevertheless, Chakravarty et al. (2011) is an appropriate starting point for those unfamiliar with the field.

EE allows introducing into economic models agents with bounded rationality, heterogeneity and social behavior. We agree with Davis (2016): “The basic rationale for the experimental method has remained pretty much the same over the years: Using the lab can overcome the difficulties of finding data in many naturally occurring settings; the lab has a high degree of “internal validity” because it allows the researcher to change one variable at a time. All of these advantages have contributed to its acceptance as a standard tool in the economist’s toolkit”. This “internal validity” is the fitness test quoted above from Smith.

Following the Nobel Prize award in Economics to Smith and Kahneman in 2002, the use of EE increased. The Nobel Prizes to Ostrom in 2009 and to Roth in 2012 certified that EE is one of the pillars of economic methodology. Progress has been very important in the three main streams: market design, games, and individual or social choice experiments. In what follows, we will deal only with impersonal exchange experiments.

Nowadays there are two windows through which to look at the Economy.

The value window. A top-down constructivist approach, with two generations. 1st generation: market equilibrium with homogeneous and fully rational agents. 2nd generation: game theory and personal exchange, with heterogeneous agents and endogenous growth (Fig. 12).

Fig. 12 The “exchange windows” for managerial and economics institutional design

The exchange window. Ecological learning and social intelligence; socially inspired methods in Economics and Management. 3rd generation: distributed intelligence with heterogeneous bounded-rational agents and bottom-up modelling. EE extended to soft agents: Artificial Economics (AE).

In one of the earliest market experiments, Chamberlin (1948) conducted a classroom trading exercise designed as a test of the competitive model in conventional market theory. One group of students were buyers, each receiving a card with a reserve value written on it; the other group were sellers, each receiving a card with a reserve marginal cost written on it. Students then walked about the room, and buyers and sellers could negotiate over the terms of trade. When a deal was agreed, the price was written on the blackboard. This test produced trade volumes in excess of the competitive equilibrium quantity, and trade prices that were quite variable.

Smith (1962) redesigned the experiment with bidding rules similar to those used in equity markets. All bids and offers were centrally and publicly recorded, instead of allowing the students to mix in the room and bargain over prices as in Chamberlin’s experiment. This modification of the trading rules, known as the Continuous Double Auction (CDA) since both buyers and sellers are active, leads quickly and accurately to the predictions of the competitive market model.

Let us modify the bidding rules: buyers listen to sellers’ offers but do not make their bids public. What is the expected equilibrium price? Microeconomics has no answer, except at the expense of forcing an equivalence between information and randomness: asymmetric information is measured by its effects on transaction costs, but cannot be included in the dynamics of the model. Of course, EE can answer, Smith (1976). Since information is costly, those who hide information should have an advantage, and the equilibrium price is expected to be below the price of the symmetric auction, as indeed happened.

There are deep lessons from these seminal experiments.

  (i) Collective intelligence. Experiments using the CDA institution converge reliably to the competitive price even with few participants, and neither buyers nor sellers need information about the values or costs of the others in the market. They are guided towards the emerging equilibrium by shared price information. They are ordinary, yet purposeful citizens; nevertheless, they exhibit collective intelligence. Markets as economizers of information: examination of the “Hayek Hypothesis”, Smith (1982). Hayek (1945) argued that the price mechanism serves to share and synchronize local and personal knowledge, allowing society’s members to achieve diverse complicated ends through a principle of spontaneous self-organization: that which results from human action but not from human design. Spontaneous order.

  (ii) For sociologists, social (ecological) learning is the process by which agents’ acquisition of new information is caused or favoured by their being exposed to one another in a common environment, Conte (2002). The collective wisdom of Hume.

  (iii) EE allows detailing the dynamical process, and it serves as the final consistency test to accept or reject a model, beyond the econometric tests.

  (iv) The CDA allows us to experiment with the dynamics of other well-known market models such as duopoly, monopoly and asset markets.

  (v) The auction itself is a powerful “mathematical solver”, even for zero-intelligence agents that are absolutely alien to the perfect microeconomic market model used to find the equilibrium price/output pair. This is of great importance to Management Engineering. Auction-inspired (socially inspired) methods can be applied to many management activities, such as forecasting, project and idea selection, collective intelligence, marketing research, yield management and dynamic pricing, under the name of “prediction markets” (“prediction auctions” would be a more accurate name) and crowdsourcing, Arrow et al. (2008). Many innovative firms, as well as consulting firms, are already using auction-inspired methods.

  (vi) We can now interpret the conventional supply and demand model as a gadget, although a useful one, of what the market does. If we are interested in forecasting market responses to variations in demand or supply, the constructivist market model is still a useful tool.

Consequences of points (i) and (ii): closing the rational expectations debate. We concluded in the paper referred to above (Hernández-Iglesias and Hernández-Iglesias 1981) that “…they (the analytical results) indicate that equivalence between extrapolative and rational expectations occurs frequently for relevant economic variables in stable period”. We now have EE support for this conclusion. The bizarre requirement that the expectations formed by citizens should be as rational as those of an econometrician is compatible with adaptive expectations, because simple individual behavior may reveal ecological intelligence and match the expectations of a good and lucky econometrician.

Consequences of point (iii): testing economic models far beyond statistical tests, the ecological “fitness” test. Going back to our balloon contest: the dominant firm model is a standard in Industrial Organization and it is applied today in antitrust cases. Does it pass the ecological fitness test? No, it does not. “The predicted market price and dominant firm output depend on arbitrary simplifying assumptions about behavior; in particular, that only the large firm is aware or perceives that it faces a downward-sloping demand. All other firms are presumed to behave as price-takers, reminiscent of the theoretical myths that a CE is driven by “price-taking” behavior. One implication of this assumption is that if the fringe supply costs are rotated using the equilibrium as a pivot so as to leave the dominant firm’s equilibrium price and output unchanged, this will not affect anyone’s behavior” (Smith 2007).

Rassenti and Wilson (2004) studied behavior in a dominant firm environment using two separate institutional market rules applied to the dominant and the follower (fringe) firms. One was a posted-offer auction, common in retail markets. The other was a sealed-offer auction: all the firms submit their sealed offers, and the uniform market clearing price is determined by aggregating the offers against actual demand. In the first case, the dominant firm quite often produced more than the dominant firm model predicts, and at higher prices. In the second case, with a low elasticity of the aggregated followers’ supply, the dominant firm produced more than the dominant firm prediction over a wide spread of prices around the predicted price. We can use the dominant firm model of price leadership only as a benchmark of the true underlying competition. An example that Industrial Organization needs Experimental Economics.

5.4 Artificial Economics (AE)

In the late nineties, a young assistant lecturer, Adolfo López-Paredes, was developing an EE laboratory (LABEXNET) as part of his Ph.D. in my Department. When helping him “shopping in the field” of EE, I came across a short version of a weird draft that talked about intelligent, cognitive artificial agents and reported results of simulating a duopoly. I could see there a rich set of results, including the dynamics towards equilibrium. I was fascinated by what I was reading, signed by Professor Scott Moss of the Centre for Policy Modelling (CPM), Manchester. I wrote to S. Moss asking for more details and he sent me a report of the research undertaken at the CPM with a note: “good luck”. We arranged a visit of Adolfo to S. Moss and he was accepted to work at the CPM. With a degree in Industrial Engineering, not in Computing, he was able to get his hands on SDML, a very advanced platform for social simulation. In 1999 he obtained his Ph.D. (López-Paredes 1999). Some of the findings were published in López-Paredes and Del Olmo (1998), López-Paredes (2000), Hernández and López-Paredes (1999, 2000), and the narrative content in López-Paredes et al. (2002). He used the EE results from LABEXNET to initiate the AE simulations. EE and AE are complementary.

In the early 2000s I created a group of young colleagues and Ph.D. students interested in the Socio-Economic Applications of Agent Based Modelling (InSiSoc) at the Universities of Burgos and Valladolid. I participated as a founder of ESSA and a member of its Management Committee, and later of the Artificial Economics Conference (ACF). We hosted the ESSA conference in 2004 and the ACF in 2009.

Under the label of SSC there are joint conferences of ESSA (European Social Simulation Association), representing Europe; PAAA (Pacific-Asian Association for Agent-based Approach in Social Systems Sciences), representing Asia and Oceania; and CSSSA (Computational Social Science Society of the Americas), representing the Americas. Each year there is a Summer School in Social Simulation for young researchers. This short history shows that AE is growing rapidly, with young researchers in every field of Economics and Management.

AE is what it does. In the JASSS journal there are papers covering AE and social simulation. In the proceedings of the 12 annual AE conferences (http://www.artificial-economics.org/) there are more than 200 papers, covering a wide range of research in Economics, Finance and Management and showing how AE has evolved in the last decade.

Defining AE is controversial because it is at the intersection of Artificial Intelligence and EE. Since the product is always a computer programme, Agent Based Computational Economics (ACE) is being used as an alternative name. The inheritance of ACE from Dynamics and Control leads to a bias towards the computational properties of mathematical top-down models, the value window, against the generative bottom-up models of EE and Agent Based Modelling (ABM), the exchange window (market oriented). For more on this issue see Shu (2012) and the keynote session at the AE conference 2015 (Izquierdo and Izquierdo 2015). They recently updated a brief but informative view of AE in https://en.wikipedia.org/wiki/Artificial_economics.

AE is EE with soft agents. In AE, agents are not objects: objects do it for free, agents in AE do it for “money”. They are autonomous and purposeful. They can be heterogeneous, and exposed to other agents in a given environment. The agents can be endowed with differences in information, learning capacity and decision choices. The modelling process is bottom-up, generative. AE inherits the entire legacy of EE. In particular, as in EE, economic models exhibit the dynamic process towards equilibrium and how the models adapt to external or internal changes. Being a generative process, it does not impose equilibrium conditions.

“…it will be clear now that the main rationale to do AE is that it expands the set of assumptions that we can explore. The reason is that the set of assumptions that we can investigate using computer simulation is not limited by the strong restrictions that mathematical tractability imposes, so a whole new universe of possibilities opens up. This point is particularly important in the study of socioeconomic processes, which–due to its complex nature–are oftentimes difficult or impossible to address adequately using a purely deductive approach only. The theoretical analysis often requires so many simplifications to ensure tractability that the correspondence between the real world and the model assumptions ends up being disappointingly weak. Thus, using the AE approach we have the potential to understand socioeconomic processes better, and also to assess the impact and significance of the simplifications made by the theoretical approach” (Izquierdo and Izquierdo 2015).

The short definition of AE encapsulates its limits and strength. A large number of agent decision-making models can be found in the literature, each inspired by different aims and research questions (Balke and Gilbert 2014). When we construct AE models, the agents must inherit bounded rationality, as in EE. Dealing with aggregated impersonal models, as in Macroeconomics, the agents’ decisions should be fast and frugal and learning should be ecological. Therefore, it is necessary to avoid the excesses of endowing the agents with constructivist capacity. Of course, one can use constructivist soft agents to check their compatibility, under appropriate institutions, with fast and frugal rules, testing again the Hume-Hayek hypothesis. Patching econometric models, such as the DSGE, with artificial agents whose decision capacity is far from common sense reasoning is a bad practice. However, it seems unavoidable. Perhaps the on-going project Agent-Based Macroeconomics will contribute to clarify this issue, Dilaver and Gilbert (2016).

5.5 Building AE: The Exchange Window

Following the legacy of EE (Smith 1989), to generate an AE model we consider three dimensions that are essential in the design of any market experiment: the Institution (I), which comprises the exchange rules, the way contracts are closed, and the information network; the Environment (E), the agents’ endowments and values, resources and knowledge; and the Agents’ behavior (A). They are represented in Fig. 13.

Fig. 13 Dimensions in the design of any market experiment

By mapping different arrangements of the elements of this triplet (I × E × A) into observed and forecasted outcomes (O), a host of experimental results can be obtained. This is quite important. You can try to specify the triplet for conventional microeconomic models and you will see that it is not a trivial task.
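To make the point concrete, here is a schematic rendering, in my own notation rather than any standard library’s, of an experiment as a mapping from the triplet (I, E, A) into outcomes (O):

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Institution:                    # I: exchange rules and information network
    name: str                         # e.g. "CDA with bid-ask spread reduction"
    public_order_book: bool

@dataclass
class Environment:                    # E: endowments, valuations, resources
    marginal_costs: List[List[float]]     # one MaC vector per seller
    reserve_prices: List[List[float]]     # one RP vector per buyer

@dataclass
class Agents:                         # A: behavioral and learning endowments
    strategy: str                     # e.g. "zero-intelligence", "ZIP", "human"

def experiment(i: Institution, e: Environment, a: Agents) -> Dict[str, float]:
    """Map one point of I x E x A into observed outcomes O."""
    # A real implementation runs the market; here we only fix the interface.
    return {"mean_price": float("nan"), "efficiency": float("nan")}
```

Filling in `experiment` for even the simplest textbook market already forces the I, E and A choices to be explicit, which is the non-trivial task referred to above.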

We must go beyond conventional EE if we want to control the agents’ behavior (A) dimension of our experiments. We have to move from human to artificial agents, as argued in Hernández and López-Paredes (1999, 2000), López-Paredes et al. (2002) and Posada et al. (2006a). Taking this step, a rich research program comes up: widening the many relevant findings of EE with human agents and checking their robustness against alternative, controllable agents’ behaviors.

The first experiment with programmed agents (Gode and Sunder 1993) was a big surprise. It confirmed that institutions matter, to the extreme that in a CDA price convergence and allocative efficiency were achieved even with zero-intelligence (poorly instructed but perceptive) agents. Spontaneous order arises in the CDA, thus confirming the conjectures of F. Hayek and A. Smith.

The Institution (I) may be, for example, the CDA of Smith’s experiment commented on above. The CDA is the dominant institution for the real-world trading of equities, energy, derivatives, emission permits, etc. It imposes no restrictions on the sequencing of messages: any trader can send a message at any time during the trading period. We consider a CDA with bid-ask spread reduction; the only restriction, which accelerates convergence, is that a new bid/ask has to provide better terms than the outstanding bids/asks.

The Environment (E). Each trader is either a seller or a buyer. Each agent is endowed with a finite number of units. Seller i has ni units to trade and a vector of marginal costs (MaCi1, MaCi2, …, MaCini) for the corresponding units: MaCi1 is the marginal cost to seller i of the first unit, MaCi2 the cost of the second unit, and so on. Buyer j has mj units to trade and a vector of reserve prices (RPj1, RPj2, …, RPjmj) for the corresponding units: RPj1 is the reserve price to buyer j of the first unit, RPj2 the reserve price of the second unit, and so on.
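As a small worked illustration of such an environment, the following sketch (my own code, with arbitrary numbers) crosses the supply and demand steps induced by the MaC and RP vectors to obtain the competitive equilibrium benchmark:

```python
import numpy as np

def competitive_equilibrium(marginal_costs, reserve_prices):
    """Cross the induced supply and demand steps to find the CE quantity and price.

    Assumes at least one profitable trade exists in the environment.
    """
    supply = np.sort(np.concatenate(marginal_costs))       # MaC units, ascending
    demand = -np.sort(-np.concatenate(reserve_prices))     # RP units, descending
    q = 0
    while q < min(len(supply), len(demand)) and demand[q] >= supply[q]:
        q += 1                                             # unit q is traded
    p_low, p_high = supply[q - 1], demand[q - 1]           # equilibrium price tunnel
    return q, (p_low + p_high) / 2

# Two sellers and two buyers with multi-unit endowments (illustrative values).
MaC = [[60, 80, 100], [70, 90, 110]]    # seller i's vector (MaCi1, MaCi2, ...)
RP = [[140, 120, 100], [130, 110, 90]]  # buyer j's vector (RPj1, RPj2, ...)
print(competitive_equilibrium(MaC, RP))  # -> (5, 100.0)
```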

The model may be restricted to homogeneous populations or to symmetric environments, or it may have no environmental restrictions at all in terms of the number of traders, their units and each trader’s valuations. We can alternatively define market environments with symmetric or asymmetric supply and demand curves. We consider an environment symmetric if the supply and demand curves have slopes of opposite sign but equal magnitude; otherwise, we consider the environment asymmetric. The extreme case is when the supply curve and/or the demand curve are perfectly elastic.
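Putting the institution and the environment together, here is a minimal sketch in the spirit of Gode and Sunder’s zero-intelligence-constrained traders in a CDA with spread reduction; the single-unit traders, the price cap and the step budget are simplifying assumptions of mine:

```python
import random

def zi_cda(seller_costs, buyer_values, price_cap=200.0, steps=20_000):
    """Zero-intelligence-constrained traders in a CDA with spread reduction.

    Each trader holds one unit. Quotes are random but budget-constrained
    (no bid above value, no ask below cost) and must improve on the
    outstanding bid/ask (the spread-reduction rule).
    """
    sellers = [{"limit": c, "done": False} for c in seller_costs]
    buyers = [{"limit": v, "done": False} for v in buyer_values]
    best_bid = best_ask = None                  # outstanding (price, trader)
    prices = []
    for _ in range(steps):
        side, pool = random.choice([("buy", buyers), ("sell", sellers)])
        active = [t for t in pool if not t["done"]]
        if not active:
            continue
        t = random.choice(active)
        if side == "buy":
            quote = random.uniform(0.0, t["limit"])
            if best_bid and quote <= best_bid[0]:
                continue                        # must improve the best bid
            if best_ask and quote >= best_ask[0]:
                prices.append(best_ask[0])      # crossing: trade at the ask
                t["done"] = best_ask[1]["done"] = True
                best_bid = best_ask = None
            else:
                best_bid = (quote, t)
        else:
            quote = random.uniform(t["limit"], price_cap)
            if best_ask and quote >= best_ask[0]:
                continue                        # must improve the best ask
            if best_bid and quote <= best_bid[0]:
                prices.append(best_bid[0])      # crossing: trade at the bid
                t["done"] = best_bid[1]["done"] = True
                best_bid = best_ask = None
            else:
                best_ask = (quote, t)
    return prices

random.seed(1)
# Symmetric environment with a mid-tunnel competitive price of 100.
trades = zi_cda(seller_costs=[60, 70, 80, 90, 110, 120],
                buyer_values=[140, 130, 120, 110, 90, 80])
print(len(trades), [round(p) for p in trades])
```

Gode and Sunder’s surprising finding was that even such random, budget-constrained quotes deliver high allocative efficiency: the intelligence is in the institution, not in the traders.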

Of course, should it be relevant, we may consider an external physical environment; the bargaining among the stakeholders, or the auction, will then take place at a different level, and the agents can share information from the physical landscape and incorporate it into the agents’ environment. For the water management planning of a metropolitan area, Fig. 14 shows the model developed in López-Paredes et al. (2005) and Galán et al. (2009).

Fig. 14 SIGAME project architecture by layers

The agents (stakeholders) negotiate to achieve a solution, bargaining over the global proposals or competing through an auction to select the winner. Of course, once there is a winning proposal there can be fine-tuning “over the counter” agreements.

Agents’ behavior (A). In CDA markets, traders face three non-trivial decisions, Chen (2000): How much should they bid or ask for their own tokens? When should they place a bid or an ask? When should they accept an outstanding order of some other trader? We may define agents with different learning skills. The agents can choose their strategies in an evolutionary way, according to individual and social learning, in each trading period.

See the scheme in Fig. 15 for an ABM with different agents’ learning endowments.

Fig. 15 Auction dimensions. Case of a CDA with 3 types of learning agents: SD, K, and ZIP

Following this design procedure, research on impersonal exchange at InSiSoc has tested the robustness of the CDA with ABM simulation under very different scenarios, finding quite relevant results in terms of the auction’s performance:

  • Transaction costs in the CDA can be estimated from the experiment (Posada and Hernández 2010)

  • The dynamics of Marshallian and Walrasian instability in a CDA (Posada et al. 2008)

  • Fast and frugal rules for agents’ learning maintain convergence, performance and efficiency (Posada and López-Paredes 2008)

  • The institutional design is very important to achieving the objective of the markets: the failure of the EPA auction is an outstanding example of improper design (Posada et al. 2007)

  • Agents’ intelligence and strategy matter. If we allow for agents with different learning capacities and skills, efficiency and performance can be achieved, but the surplus of the different groups of agents may differ. If the proportion of one kind of agent goes over a certain threshold, then efficiency decreases (Posada et al. 2006a)

  • Even for non-intelligent, yet perceptive and motivated agents, convergence to an equilibrium price and efficiency is achieved (Posada et al. 2006b).

5.6 Socially Inspired Methods to Solve Complex Problems

We have seen that auctions (market exchange) are solvers for complex scarcity-and-choice problems, either because the problem is NP-hard or because an agreement has to be achieved by introducing competition. For NP-hard problems we use heuristics inspired by biological analogies, such as genetic algorithms and swarm computation. Why not use socially inspired methods in complex engineering and management problems? AE is Agent Based Modelling in Economics or Management. When we use AE exchange methods to model physical systems, we keep the original label, Multi-Agent Systems (MAS), since it refers to a wider field of applications. In MAS, we build an artificial social simulation model to allocate scarce resources with alternative uses, endowing the model agents with the behavior and motivations used in auction models, given the self-regulating capacity of markets. In the MAS approach, we use a market metaphor to solve problems of engineering and management.

This approach has been used to solve many relevant Management problems: landing and take-off schedules at airports, Rassenti et al. (1982) and SESAR (2014); optimization of freight transport (Bertsekas 1990); management of sections of the rail network (Parkes and Ungar 2001); flow shop production programs, Wellman et al. (2001); resource management in a portfolio of projects, etc.

To illustrate the idea, we briefly describe the MAS application to the problem of allocating resources to a portfolio of projects, an NP-complete problem (Garey et al. 1976). See Fig. 16. Let us assume an organization that wants to carry out different projects, each with its own objectives, expected profitability, priorities, needs, dates, etc. At the same time, the organization has limited resources, both personal and material, to undertake part of the projects available in the portfolio. The available resources are also individual, each with its skills and efficiency in each of the projects.

Fig. 16 Management of a portfolio of projects as a combinatorial auction

The market metaphor in this case is to assume that each project is represented by an artificial computer agent, provided with limited funds, which it uses to maximize its utility (profit): to complete the project in time and at the least possible cost.

Each project demands the necessary resources during certain time slots. The resources, in turn, are also represented by artificial agents that try to maximize their profit, that is, to extract the maximum amount of wealth from the projects; to do so they are willing to sell their time slots (ask) to the project willing to pay most (bid). We can formulate the coordination system as an auction. The resources auctioned are not homogeneous and they frequently are complementary: the value to a buyer of a resource at a given time also depends on having another related resource. The CDA will not be valid for this problem. The usual approach is to allow agents to bid simultaneously for various assets, which leads to a Combinatorial Auction (CA). Among the different possible auction formats, we use an iterative auction to fix prices.

The process is as follows (for a detailed explanation, see Arauzo et al. 2009, 2010). Each activity is associated with a type of skill, and each resource has a set of skills and an efficiency in each of them. The greater a resource’s efficiency, the shorter the duration required to complete each task. Projects have end-to-end precedence relationships, such that a task cannot be started until its precedents have been completed. Resources have their own cost rates. There is one project-agent for each project in the portfolio. Each project-agent requests the set of resource slots that allows it to achieve its goals at minimum cost. The total cost of the project is the sum of the prices of the resource slots, plus an additional penalty if the project is delivered with an allowable delay. To make their bids, the project-agents use a dynamic programming algorithm that evaluates the possible combinations of slots that allow the completion of the project (Wang et al. 1997).

Since the proposals of activities by the project-agents are decentralized and each one seeks its own objectives, the set of all proposals frequently results in incompatible programs that request some resources at the same time and are globally non-optimal. The auction rules that reduce these inconsistencies start with a minimum price for each time slot. When a resource-agent receives more than one offer for one of its slots, it raises the slot’s price, while slots without demand lower their price, until a stable price is reached. Once the prices are adjusted by the resource-agents, the project-agents renew their local programs according to the new price information, to maximize their individual profits again. The auction iterates in this way. This procedure corresponds to the sub-gradient optimization algorithm (Zhao et al. 1999).
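A schematic sketch of this price-adjustment loop follows; the bidding functions are hypothetical stand-ins for the dynamic-programming project-agents of Arauzo et al., and the fixed unit price step replaces the sub-gradient step size:

```python
def iterative_auction(bidders, prices, rounds=200, step=1.0):
    """Schematic price-adjustment loop of the combinatorial slot auction.

    Each bidder maps current slot prices to the bundle of slots it requests
    (in the real system this bundle comes from the project-agent's dynamic
    programming scheduler). Contested slots raise their price, idle slots
    lower it, until the requests no longer collide.
    """
    for _ in range(rounds):
        demand = {s: 0 for s in prices}
        for bid in bidders:
            for slot in bid(prices):
                demand[slot] += 1
        for slot, d in demand.items():
            if d > 1:                                # each slot has capacity one
                prices[slot] += step * (d - 1)       # contested: raise the price
            elif d == 0:
                prices[slot] = max(0.0, prices[slot] - step)   # idle: decay
    return prices

# Hypothetical bidders: P1 needs slot t1 alone and pays up to 20; P2 needs
# the complementary bundle {t1, t2} and pays up to 25 for the pair.
p1 = lambda pr: ["t1"] if pr["t1"] <= 20 else []
p2 = lambda pr: ["t1", "t2"] if pr["t1"] + pr["t2"] <= 25 else []

final = iterative_auction([p1, p2], {"t1": 10.0, "t2": 10.0})
print(final)   # t1 ends above P1's limit: the complementary bundle goes to P2
```

In the real mechanism the loop stops when prices stabilize and the step is proportional to the excess demand (the sub-gradient); the fixed step and round count above are simplifications.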

This non-hierarchical MAS approach has clear advantages in solving allocation problems. It is very flexible and robust to changes in the number of agents of both types, the communication between agents is minimal, and it generally finds very satisfactory solutions from the iterative solution of local problems, which is very interesting in many applied settings.

6 Conclusions

A historical perspective of the Economy in the last 60 years allows us to select some pending challenges and advances in Economics. When we consider Economics as a science that tries to understand how growth is generated and how it is distributed, we face the first challenge: there has been steady growth, but increasing inequality as well. One of the causes of inequality was already pointed out in Solow’s (1957) seminal paper. There is a welcome residual due to the value added produced by learning, externalities and technological, managerial and institutional improvements. However, since the nineties, corporate profits have been growing over labor income, increasing inequality. The inequality gap has become even greater due to the 2008 crisis. The finance, insurance and real estate (FIRE) sector pumps wealth from the productive sector. To correct inequality, which hinders the rate of growth and undermines democracy, we need measures far beyond Economic Policy. We need deep reforms in Political Economy that let workers share fairly in the residual, in firm governance and in risk. This means deep reforms in the labor market and the FIRE sector.

From the seventies crisis we learned some fundamental lessons. (i) Economic Theory is contextual, and economists can learn from different contexts. (ii) They do not have to wait for a crisis to change their models: Friedman anticipated the explanation of the crisis. (iii) Economic models should specify citizens’ expectations. (iv) Adaptive, “fast and frugal” expectations can do as well as the so-called rational expectations, as anticipated almost 35 years ago by Hernández-Iglesias and Hernández-Iglesias (1981). (v) Higher oil prices triggered the crisis, but the persistent causes were wrong monetary expansion policies. Just as happened in the Japanese crisis of 1990, wrong monetary and fiscal policies reinforced the crisis.

The lessons of the current crisis. In 1985, something new happened in the history of the Economy: the time series of credit going to investments in the FIRE sector crossed the line of credit going to investments in the productive sector. The ratio of FIRE credit to productive credit was then 1; in 2015 the ratio was 5. (i) This means a tremendous credit inefficiency. (ii) The bulk of FIRE sector credit is created by shadow banking. When citizens accept securitized credit, they become “bankers” without being aware of it. (iii) The FIRE sector is pumping wealth, mainly through mortgages, from the productive sector and from families without rents. (iv) This huge transfer of wealth to capital rents has widened the inequality gap, thinning the middle class and increasing employment duality. (v) Econometric models, including the DSGE models, did not consider the FIRE sector. (vi) The crisis has favored new independent institutions, such as INET, to promote rethinking Economics. Thanks to INET, accounting models of the FIRE sector, buried by the orthodox economists, can reveal the cause and persistence of the crisis, and Agent Based (Artificial Economics) Macroeconomic models will soon be available. (vii) The crisis makes inescapable a deep reform of the FIRE sector that goes far beyond prudential rules and monetary and fiscal policies.

In the last part of the chapter, we discussed what is wrong with Economics as a social science and the promise of the new avenues of EE and AE. Here are some of the conclusions. (i) EE, which has been developing since the sixties, is nowadays accepted as a toolkit to improve Economics in all its subfields. (ii) It made it possible to give up some of the unrealistic assumptions of Mathematical Economics and to enrich Economics with psychological and social thinking. (iii) EE brought up the need to consider both the “value window” (constructivist rationality) and the “exchange window” (ecological rationality) in Economics, confirming by experiments the Hume-Hayek hypothesis of markets as an engine of knowledge and collective intelligence. (iv) Collective intelligence explains that fast and frugal rationality can be as good as full rationality, i.e., that adaptive expectations can match rational expectations. (v) EE provides an internal consistency test for standing economic relationships or models, which may have passed the external econometric testing.

AE extends all the findings of EE: by defining soft agents it is possible to simulate the finest details of the model dynamics, in terms of the agents’ endowments, the institution and the environment. AE can be a useful tool to generate models in many fields of Management Engineering and in any physical landscape populated by social agents. Socially inspired methods, such as auction models, can provide solutions to mathematically complex (NP-hard) problems, such as scheduling and slot allocation for airlines, or to complex problems in Organization, such as supply chain management or the efficient management of a portfolio of projects.