1 Introduction

Economics occupies a commanding position in the policy-making process. Central banks and finance ministries have their teams of economists, as do a wide range of government departments and international bodies. Following the pioneering work of Nobel Laureate Gary Becker in the 1970s [2], the reach of economics has extended to areas such as crime, family organisation, racial discrimination and drug and alcohol addiction.

A great deal of policy is now filtered through the lens of economics. Obviously, an appreciation of the potential effects of policy on the economy has always been important within government. But the massive growth of the influence of the formal discipline of economics is illustrated by the example of the United Kingdom. Fifty years ago, the UK government machine employed no more than a dozen or so economists. The newly elected centre-Left Labour government added to their numbers, perhaps doubling them. Now, the Government Economic Service in Britain employs well over 1000 economists, not counting the teams employed in the central bank and in the various regulatory bodies.Footnote 1

We might reasonably ask whether this expansion is justified by the scientific achievements of the discipline. Certainly, this does not seem to have been the view of Jean-Claude Trichet, President of the European Central Bank during the economic crisis. In November 2010, he gave his opinion that “When the crisis came, the serious limitations of existing economic and financial models immediately became apparent. Macro models failed to predict the crisis and seemed incapable of explaining what was happening to the economy in a convincing manner. As a policy-maker during the crisis, I found the available models of limited help. In fact, I would go further: in the face of the crisis, we felt abandoned by conventional tools” [23].

Trichet was of course referring to the macro aspects of the discipline. The American humorist P. J. O’Rourke rather neatly captured what has traditionally been the distinction between micro and macro economics in his book Eat the Rich, a very thoughtful and entertaining reflection on why some countries are rich and others poor: ‘One thing that economists do know is that the study of economics is divided into two fields, ‘microeconomics’ and ‘macroeconomics’. Micro is the study of individual behaviour, and macro is the study of how economies behave as a whole. That is, microeconomics concerns things that economists are specifically wrong about, while macroeconomics concerns things economists are wrong about generally.’ [17]

Within economics itself, however, over the past three or four decades, the primacy of microeconomics has been firmly asserted. Modern economic theory is essentially a theory of how decision makersFootnote 2 choose between the alternatives which are available in any given context. Macroeconomics—the study of, for example, movements over the business cycle in the total output of an economy (GDP)—still exists as an area of study, but it is deemed essential for macro models to have micro foundations.

The models criticised by Trichet developed from work on so-called real business cycle models by Kydland and Prescott in the 1980s, for which the scholars received the Nobel Prize in 2004 [18]. The more modern versions of these models have the exotic description of Dynamic Stochastic General Equilibrium (DSGE). The formidable mathematical apparatus which envelops these models is essentially based upon a description of how individuals choose to allocate their time between work and leisure. A good discussion of the synthesis between macro and micro in mainstream economics is available in Woodford’s 2009 paper [24]. Despite their conspicuous failures during the financial crisis, DSGE models remain ubiquitous in central banks, international bodies and finance ministries.

At the micro level, however, economics does give powerful insights into how the world operates. Indeed, it provides us with what is probably the only general law in the whole of the social sciences: individuals react to changes in incentives. In other words, if the set of incentives which an individual faces alters, the agent (a generic term for the individual in economics) may alter his or her behaviour. An everyday example of incentives is the behaviour of drivers when approaching a speed camera. Even the fiercest critic of economic theory is likely to slow down. The driver may have been exceeding the speed limit, but the probability of detection by the police, certainly on high speed roads, is low. The probability rises sharply when a speed camera is present. The incentives faced by the driver have changed, and his or her behaviour changes as a result.

2 The Core Model of Economic Theory: ‘Rational’ Choice

2.1 The Background

Within economics agents are postulated to react to incentives in a specific way, that of the so-called rational agent. It was a great presentational coup by economists to describe their theory of individual behaviour as ‘rational’. By implication, any other mode of behaviour is irrational. But it is very important to realise that this usage conflates the normal meanings of the words ‘rational’ and ‘irrational’ in English with the particular scientific definition of the word within economics. Within the discipline, ‘rational’ has a very specific, purely scientific meaning, and refers to a set of hypotheses about how individuals take decisions. As we shall see below, in many contexts in the social and economic world of the twenty-first century, these hypotheses may not be valid. It may very well be irrational, using ‘irrational’ in the usual sense of the word, to behave in a rational way, using ‘rational’ in the specialised way in which it is used within the science of economics! It is to these hypotheses that we now turn.

Economic theory was formalised in the late nineteenth and early twentieth centuries. The main aim was to set out the conditions under which a set of prices could be found which would guarantee that supply and demand would be in balance in all markets. Such a world would be efficient, since all resources would be fully utilised. If individuals were observed who did not have work, this would be because of their preference for leisure over work at the prevailing wage rate (the wage being the price of labour to the employer). They would be choosing not to work. As we saw above briefly in the discussion of modern macroeconomic theory, this concept has exercised a very powerful influence on economics ever since.

During the twentieth century, a major task of economic theory was to refine and make more precise the conditions under which prices which cleared all markets could be found. The task was not to solve this problem for any actually existing economy, but to establish theoretically whether a solution could be found. This concept, known as general equilibrium, was of fundamental importance. Seven of the first eleven Nobel prizes were awarded for work in this area.

In many ways, it was a very strange problem for economics to focus on. The theory refers to the most efficient allocation of resources in a purely static world. Given a fixed amount of resources of all kinds, including labour, could a set of prices be found which would clear all markets? It was strange because by the late nineteenth century, the Industrial Revolution was a century old. For the first time in human history, a social and economic system had emerged in which the total amount of resources available was being continually expanded. There were short-term fluctuations in total output, but the underlying trend was unequivocally upwards. The Industrial Revolution is called a ‘revolution’ because it was precisely that, of the most dramatic kind. No other system created by humanity in its entire existence had succeeded in generating additional resources on anything remotely approaching this scale. The problem of why economies do or do not grow is a very difficult one, which even now lacks a satisfactory solution, although some progress has been made. We return to this issue below.

2.2 The Key Assumptions

The building block of this formalisation of economics was a set of postulates about how individual agents make choices. Like any scientific theory, assumptions have to be made in order to simplify the problem and to make it potentially tractable. The question really is whether the assumptions are reasonable approximations to reality.

Two assumptions are of critical importance to rational choice theory. Without them, the task of obtaining analytical solutions to the models being developed would have been impossible, given the tools available in the late nineteenth century. So from a purely utilitarian perspective, there were good reasons for making these assumptions.

The first assumption is that

each agent has a set of preferences about the alternative choices which are available, which is fixed over time.

The actual decisions which an agent makes will depend upon the relative prices of the alternatives, and upon constraints placed upon the agent such as his or her level of income. But the preferences are assumed to be stable over time. If today I prefer Coca-Cola to Pepsi at a given relative price of the two products, I will always prefer Coke at the same relative price level.

This assumption is still embedded deeply in the psyche of most economists. There is a long standing dispute, for example, about the value of survey data on how people might react in various hypothetical situations. Such surveys have been used extensively in the marketing world, for example, but economists remain deeply suspicious of them. The leading Journal of Economic Perspectives, for example, recently carried an issue with several papers related to this theme. Hausman, an econometrician who has done very distinguished work in his field, put the issue very clearly: “I believe that respondents to…surveys are often not responding out of stable or well-defined preferences, but are essentially inventing their answers on the fly, in a way which makes the resulting data useless for serious analysis (Hausman, 2012)” [8]. In other words, in mainstream economic theory, the assumption is made that agents have fixed tastes and preferences. These preferences are revealed not through answers to hypothetical questions, but through how they actually respond to changes in the set of incentives which they face.

The second assumption is related to the one of fixed preferences:

agents make choices independently, and their preferences are not altered directly by the decisions of others.

The choices which other people make may influence an agent indirectly, via their impact on the price level. If large numbers of people, for example, buy bananas, the price may rise because of the level of demand. This in turn may affect whether or not the agent buys bananas. But the fact that many others like bananas does not lead the agent to become more enthusiastic about bananas (at a given price).

These two assumptions effectively remain at the heart of modern mainstream economics, despite the advances made in the late twentieth century, which will shortly be discussed. A third assumption was required to make the theory complete. This was that

the agent both gathers and is able to process complete information about the alternatives when making a choice in any given situation.

Over 100 years ago, when the theory was first being developed, the idea that there might be limits on the ability of agents to process information did not really have traction, and the assumption was simply that complete information was available. The unbundling of this into its information-gathering and information-processing components is a useful distinction to make.

So, an agent exercising independent choice, with fixed preferences, gathers and processes complete information about the alternatives on offer. With these assumptions, the agent is able to make the best possible choice—the ‘optimal’ one, as economists prefer to say. The choice will be the one which most closely matches the preference of the agent, given the relative prices of the alternatives and the income of the agent. It is an obvious thing to say, but none the less important to note, that the choice of the agent is based upon the attributes of the alternatives.

All scientific theories are approximations to reality. They may be extremely good approximations, as is the case with, say, quantum physics. But they are not reality itself. There will be some divergence between the theory and reality. So how reasonable were the assumptions made about agent behaviour when the fundamentals of economic theory were being developed over 100 years ago? An argument can be made that even then they were at best poor assumptions and at worst positively misleading. Fashion, for example, seems to have existed almost as long as humanity itself. In the Hittite empire of some 3500 years ago, for instance, certain kinds of pottery appear to have been more fashionable than others [21]. And in fashion markets, people select to a considerable degree simply on the basis of what is already popular, what is fashionable, rather than on the attributes of the alternatives themselves. As I write these words, the Christmas season is upon us, and every year one toy becomes especially desirable, simply because many other children have demanded it for their present.

However, in the late nineteenth century, even in the most advanced capitalist economies, fashion was not something which concerned the vast majority of the population, even though, for the first time, many ordinary people had money to spend on products other than those required for the bare necessities of life. True, it was in this period that branded goods first began to proliferate, and the modern advertising industry was created. But the products were simple and easy to understand. The content of advertising focused on the attributes of the products—“Pears soap washes you clean”—rather than on the elusive promises of personal fulfilment portrayed in much modern advertising. So perhaps it was a reasonable approximation to reality to assume that people could obtain and process complete information about alternatives, and that they exercised choice independently with a fixed set of preferences. During the first half of the twentieth century, the basic postulates of the theory of rational choice remained unchanged. Large steps were taken in making the theory more formal, but the core of the theory was unaltered.

2.3 A Challenge

During the late 1940s and 1950s, a major challenge arose to this theory of rational decision making. A multi-disciplinary debate took place involving many leading scholars, financed to a substantial extent by the US military at think tanks such as RAND. Philip Mirowski’s book Machine Dreams chronicles this in impressive detail [11]. A key point at issue was whether the economic agent possessed the necessary computational capacity to make optimal decisions. Even if the other assumptions of rational choice theory, such as fixed and independent preferences, were reasonable approximations to reality, and even if large amounts of information were available, it might not be possible for agents to behave in the way prescribed by rational choice theory because of their inability to process the information.

The major figure to emerge within the social sciences from this debate was Herbert Simon. He was not himself an economist, holding a chair in industrial management at Carnegie Mellon University. Simon was awarded the most prestigious prizes in several disciplines. He received the Turing Award, named after the great Alan Turing, the father of modern computers, in 1975 for his contributions to artificial intelligence and the psychology of human cognition, and in 1978 he won the Nobel Prize in economics ‘for his pioneering research into the decision-making process within economic organisations’. In 1993 the American Psychological Association conferred on him their Award for Outstanding Lifetime Contributions to Psychology.

Perhaps his most significant intellectual contribution was in creating virtually single-handedly the whole field of what is today known as behavioural economics, although as we shall see, his main message is still very far from being accepted by the mainstream economics profession. His seminal article was published in 1955 in the Quarterly Journal of Economics, entitled ‘A Behavioral Model of Rational Choice’ [22]. The article itself is theoretical, but throughout the paper Simon makes explicit the fact that his choices of assumptions are based upon what he considers to be empirical evidence which is both sound and extensive. This truly brilliant article is the basis for the whole field of behavioural economics, and is worth quoting at some length.

Simon begins the paper with what by now will be familiar material: “Traditional economic theory postulates an ‘economic man’ who, in the course of being ‘economic’, is also ‘rational’. This man is assumed to have knowledge of the relevant aspects of his environment which, if not absolutely complete, is impressively clear and voluminous. He is assumed also to have a well-organised and stable system of preferences and a skill in computation that enables him to calculate, for the alternative courses of action available to him, which of these will permit him to reach the highest attainable point on his preference scale”.

So far, all very relaxing and soothing to economists. But then came his bombshell: ‘Recent developments in economics, and in particular in the theory of the business firm, have raised great doubts as to whether this schematised model of economic man provides a suitable foundation on which to erect a theory—whether it be a theory of how firms do behave, or how they ‘should’ rationally behave …the task is to replace the global rationality of economic man with a kind of rational behavior which is compatible with the access to information and computational capacities that are actually possessed by organisms, including man, in the kinds of environments in which such organisms exist’.

2.4 Developments

This quotation essentially defines the research programmes carried out from the late 1960s and 1970s onwards by future Nobel Laureates such as George Akerlof, Joseph Stiglitz, Daniel Kahneman and Vernon Smith.

There are three distinct strands, two of which have been extensively studied within economics. The first concerns the consequences for models based on the standard rational agent when agents have incomplete information, or when different agents or groups of agents have access to different amounts of information. With their predilection for grand phrases, economists refer to this latter as being a situation of asymmetric information. The second strand blossomed into experiments with economic agents, drawing heavily on the methodology of psychological experiments. The approach here was to examine how agents really did behave, rather than making a priori assumptions about how a rational agent ought to behave. Daniel Kahneman is the most famous scholar in this area, and his work relates to Simon’s injunction to base theoretical models on agents which have ‘computational capacities that are actually possessed by organisms’. In other words, to place realistic bounds on the ability of people to process the information which they have, regardless of whether it is complete. The jargon used by economists dresses this up as bounded rationality.

The first major advance came about through the work of George Akerlof and Joe Stiglitz during the late 1960s and 1970s, along with that of Michael Spence. The Americans were jointly awarded the Nobel Prize in 2001 for their work. They relaxed the assumption that agents have complete information about all the alternatives when contemplating a decision. Not only might agents have imperfect information, but different ones might very well have different amounts. This latter idea, the so-called question of asymmetric information, has been very influential not just within academic economics but in practical policy making. For example, as the press release for the award of their Prizes stated “Borrowers know more than lenders about their repayment prospects. Managers and boards know more than shareholders about the firm’s profitability, and prospective clients know more than insurance companies about their accident risk”.

These points may seem obvious enough when written down. The achievement was to incorporate the assumption of imperfect information into the core model of economic theory. In this world, agents still had fixed preferences and chose independently, and they still made the optimal decision given the amount of information which they had. But instead of having to assume that decision makers have complete information about the alternative choices, the much more general and realistic assumption could now be made that they often do not.

A very important concept in policy making emerged from this work, namely the idea of market failure. Other aspects of developments in theory have contributed to this, but imperfect information has been the key one. Economists have a description of the world which, if it obtained in reality, would possess a variety of apparently desirable characteristics. They slip rather easily into saying that this is how the world ought to work. And if in practice that is not what is observed, there must be some ‘market failure’. The world must be changed to make it conform to theory. So they devise schemes of various degrees of cleverness to eliminate such failures and enable the market to operate as it ‘should’, to allow rational agents to decide in a fully rational manner. And a crucial aspect of this is to improve the amount of information which is available to agents.

The concept of market failure has come to pervade policy making in the West over the past few decades, over a very wide range of policy questions. The role of the policy maker, in this vision of the world, is to ensure that conditions prevail which allow markets to work properly, and for equilibrium to be reached. Ironically, mainstream economics, with its idealisation of markets, now provides the intellectual underpinnings for government intervention in both social and economic issues on a vast scale.

The second major development arising from Simon’s work has impressive empirical achievements, but has made relatively little theoretical impact on economics.Footnote 3

This, as noted above, is essentially based upon experiments grounded in the methodology of psychology. Agents are studied in an experimental context to see how they actually take decisions in particular circumstances. There are many examples in this literature of observed deviations from rational behaviour. However, it cannot be stressed too strongly that this work takes as its reference point the rational choice model of economics. This is presumed to be how agents ought to behave, and the experiments measure the extent to which agents deviate from the rational norm.

The concept of framing is an important example. This means that the choice a person makes can be heavily influenced by how it is presented. Volunteers in an experiment might be confronted with the following hypothetical situation and asked to choose between two alternatives. A disaster is unfolding, perhaps a stand is about to collapse in a soccer stadium and you have to decide how to handle it. Your experts tell you that 3000 lives are at risk. If you take one course of action, you can save 1000 people for certain, but the rest will definitely die. If you take the other, there is a chance that everyone will be saved. But it is risky, and your advisers tell you that it only has a one in three chance of working. If it doesn’t, everyone will die. Simple arithmetic tells us that the expected loss of life in both choices is 2000, for on the second option there is a two out of three chance that all 3000 will be killed. When confronted with this, most people choose the first course of action.

The problem is then put in a different way: it is ‘framed’ differently. This time, you are told that the same first choice open to you will lead to 2000 people being killed. The second will cause the deaths of 3000 people with a chance of two out of three that this will happen, and one out of three that no one will die. The outcomes are identical to those set out above. Yet in this context, most people choose the second option.
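The simple arithmetic referred to above can be written out explicitly. In the first frame, with 3000 lives at risk:

```latex
E[\text{deaths} \mid \text{option A}] = 3000 - 1000 = 2000, \qquad
E[\text{deaths} \mid \text{option B}] = \tfrac{1}{3}\times 0 + \tfrac{2}{3}\times 3000 = 2000 .
```

The second frame simply restates the same outcomes in terms of deaths rather than lives saved, so the expected values are identical; only the presentation has changed.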

Just as with the work on asymmetrical information, these experimental results have had a substantial influence on the conduct of policy. They extend the potential power of incentives as a policy tool to alter behaviour. There are many examples, and here are just two. Yale academics Dean Karlan and Ian Ayres set up the website stickk.com, designed to help people reach their goals. A person who wants to lose weight, say, makes a public commitment on the site to lose at least a specific amount by a specific date. He or she agrees a test to be carried out, such as being weighed in the presence of named witnesses, to verify whether or not the goal has been reached. But the person also puts up a sum of money, which is returned if the goal is met. If not, the money goes to charity. The second example is the ‘dollar a day’ plan in Greensboro, North Carolina, aimed at reducing further pregnancies in teenage girls under sixteen who have already had a baby. In addition to counselling and support, the girls in the pilot scheme were paid a dollar for each day in which they did not become pregnant again. Of the sixty-five girls in the scheme, only ten became pregnant again over the next 5 years. Of course, there are many criticisms of these and other such ‘nudge’ concepts. A persistent and strong one is that the people who really do want to lose weight are the ones who make the commitment; the girls who really do not want to get pregnant again are the ones who join the scheme. In other words, those who sign up to ‘nudge’ schemes are those who were likely to adopt this behaviour regardless. Nevertheless, ‘nudge’ remains an influential concept.

Behavioural economics advances our knowledge of how agents behave in practice. The theory of choice under imperfect information extends the realism of the model of rational choice. But both of these sidestep the most fundamental challenge which Simon posed to rational economic behaviour. He argued that in many circumstances, we simply cannot compute the ‘optimal’ choice, or decide what constitutes the ‘best’ strategy. This is the case even if we have access to complete information. In many situations it is not just that the search for the optimal decision might be time consuming and expensive, it is that the optimal decision cannot be known, at least in the current state of knowledge and technology, because we lack the capacity to process the available information.

This is an absolutely fundamental challenge to the economic concept of rationality. In such situations, which Simon believed to be pervasive, he argued that agents follow not optimising but satisficing behaviour. By this he meant that agents discover a rule of thumb in any given context to guide their behaviour which gives ‘satisfactory’ results. They continue to use the rule until, for whatever reason, it stops giving reasonable outcomes.
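The distinction can be sketched in a few lines of code. The choice set, payoffs and aspiration level below are entirely hypothetical; the point is only to contrast exhaustive optimisation with a ‘good enough’ stopping rule of the kind Simon had in mind.

```python
import random

def optimise(options, payoff):
    """The economist's ideal: inspect every option and pick the best."""
    return max(options, key=payoff)

def satisfice(options, payoff, aspiration):
    """A Simon-style rule of thumb: accept the first option that is 'good enough'.

    The aspiration level stands for the agent's idea of a satisfactory outcome;
    it is revised (here, lowered) only if the rule stops giving acceptable results.
    """
    for option in options:
        if payoff(option) >= aspiration:
            return option
    return satisfice(options, payoff, aspiration * 0.9)  # nothing was good enough: relax the rule

if __name__ == "__main__":
    random.seed(1)
    options = [random.random() for _ in range(10_000)]   # hypothetical alternatives
    value = lambda x: x                                  # hypothetical payoff function
    print("Optimiser (inspects everything):", round(optimise(options, value), 3))
    print("Satisficer (stops early):       ", round(satisfice(options, value, 0.95), 3))
```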

Many economists nowadays choose to interpret Simon’s work in ways which are compatible with their theories. They argue that there are often limits to the amount of information which people gather, the amount of time and effort they take in making a decision. But this is because agents judge that the additional benefits which might be gained by being able to make an even better choice by gathering more information, spending more time contemplating, are offset by the costs of such activities. Agents may well use a restricted information set, and make an optimal decision on this basis. An even better choice might be made if more information were to be gathered, but not one sufficiently better to justify the additional time and effort required. This is how modern economics uses the term ‘satisficing’.

But this whole argument completely misses Simon’s point. He believed that, in many real-life situations, the optimal choice can never be known, even ex post. It is not just a question of being willing to spend more time and effort to acquire the information so that the truly optimal choice can be made. We simply cannot discover it.

Ironically, it is game theory, rather than Simon’s views on computational limits, which has become very influential in economic theory. The foundations of modern game theory were essentially laid down in the American research programme of the immediate post-war years. Seminal games such as the Prisoners’ Dilemma were invented. And the fundamental concept of the Nash equilibrium in game theory was discovered. This equilibrium, which pervades many theoretical discussions in economics, arises when no agent has an incentive to change his or her strategy, given the strategies of other agents [12].
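A minimal sketch of the Nash equilibrium idea, using the conventional textbook payoffs for the Prisoners’ Dilemma (the specific numbers are an assumption for illustration, not taken from the text), simply checks that no player can gain by changing strategy unilaterally:

```python
from itertools import product

# Payoff matrix for the Prisoners' Dilemma: (row player, column player).
# Strategies: 'C' = cooperate (stay silent), 'D' = defect (confess).
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def is_nash(row, col):
    """True if neither player can gain by changing strategy unilaterally."""
    r_pay, c_pay = PAYOFFS[(row, col)]
    best_row = all(PAYOFFS[(alt, col)][0] <= r_pay for alt in "CD")
    best_col = all(PAYOFFS[(row, alt)][1] <= c_pay for alt in "CD")
    return best_row and best_col

for row, col in product("CD", repeat=2):
    print(row, col, "Nash equilibrium" if is_nash(row, col) else "not an equilibrium")
# Only (D, D) survives: each player defects given the other's strategy,
# even though (C, C) would leave both better off.
```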

In a sense, game theory can be thought of as a reaction by economists to the assumption that agents make decisions independently of the decisions of others. There are many situations in which agents are closely involved with each other, and not simply interacting indirectly through changes in prices in an impersonal market. Companies, for example, will often find themselves competing closely with perhaps just one main rival, and the strategy the firm adopts will be influenced by the actions taken by its rival. The major developments in game theory were in fact motivated by the Cold War, and the direct conflict between the United States and the Soviet Union. So, in a way, game theory recognises that agents may interact rather more directly than through a market in which there are many participants.

The problem is that the informational demands placed upon agents by game theory are enormous. In a pure Nash equilibrium, for example, agents are required to have complete knowledge of the strategies of other agents. The concept has been generalised to allow for incomplete information, but the difficulties of using it in any practical context are still very considerable indeed. As the International Encyclopaedia of the Social Sciences remarks “in applying the concept of Nash equilibrium to practical situations, it is important to pay close attention to the information that individuals have about the preferences, beliefs, and rationality of those with whom they are strategically interacting” [12].

2.5 The Current Situation

Modern economics is far from being an empty box. As noted above, the idea that agents react to changes in incentives is a powerful one, which appears to be of very general applicability. Developments during the past 50 years concerning how agents behave have expanded the usefulness of this key insight, and play an important part in both social and economic policy.

In other respects, economics has not stood still, and has tried to widen the relevance of its theories. For example, the theory of why international trade takes place goes back to the early nineteenth century and the great English economist David Ricardo. Ricardo demonstrated a result which is far from obvious, that even if one country can produce every single product more efficiently than another, trade will still take place between them and both countries can benefit as a result [20]. This principle has been a key motivator of the drive to open up international trade throughout the whole of the post-Second World War period. One problem for the theory, however, is that it found it difficult to account for the facts that most trade is between the developed countries, which are at similar levels of efficiency, and that much of this trade is in intermediate goods. That is, goods which are not purchased by consumers, but are simply used as inputs into the production of other goods. But trade theory has now been developed to take these key empirical issues into account.

In fact, in the past 10 or 20 years, purely empirical work has become much more important within economics. Such work is usually not empirical in the sense of trying to test the validity of a particular aspect of economic theory. Rather, the theory is taken as given, and the research involves very detailed statistical investigation of data, often of one of the very large data bases which are increasingly becoming available.

Social and economic data bases often raise very challenging issues in statistical analysis. Just to take one example, there is the question of self-selection bias which we encountered above in the study of teenage pregnancy. In essence, were the women who participated in the programme truly representative of the relevant population as a whole, or did they self-select in ways which lead this particular sample to be a biased representation? This is a key problem in the evaluation of many social programmes. The Chicago econometrician James Heckman was awarded the Nobel Prize for his work in developing statistical techniques which enable questions such as this to be tackled.
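The nature of the difficulty can be illustrated with a toy simulation; the numbers are entirely hypothetical and are not intended as a model of the Greensboro scheme or of Heckman’s estimator. If the people who join a programme already differ in some unobserved way from those who do not, a naive comparison of participants with non-participants overstates the programme’s effect:

```python
import random

random.seed(42)
N = 100_000
TRUE_EFFECT = 5.0            # the programme genuinely improves outcomes by 5 points

population = []
for _ in range(N):
    motivation = random.gauss(0, 1)          # unobserved trait
    joins = motivation > 0.5                  # the more motivated self-select into the scheme
    baseline = 50 + 10 * motivation + random.gauss(0, 5)
    outcome = baseline + (TRUE_EFFECT if joins else 0)
    population.append((joins, outcome))

treated = [y for joined, y in population if joined]
control = [y for joined, y in population if not joined]
naive_estimate = sum(treated) / len(treated) - sum(control) / len(control)

print(f"True programme effect:    {TRUE_EFFECT:.1f}")
print(f"Naive treated-vs-control: {naive_estimate:.1f}")   # far larger, because of self-selection
```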

There is no doubt that in terms of the techniques required for statistical analysis of what are termed cross-sectional and panel data bases, economics has made very substantial progress in recent decades. Cross-sectional data refers to the characteristics of a given sample of individual agents at a point in time. Panel data follows the same individuals and the evolution of their characteristics over time.

The same cannot be said for the statistical analysis of time-series data in economics. Typical examples of time-series data are national output, or GDP, unemployment and inflation, and this type of data has a macro-economic focus within economics. It is not that techniques have stood still. Far from it: a particularly important development has been the concept of co-integration, for which Clive Granger and Robert Engle were awarded the Nobel Prize. This is rather technical and details can be found readily in, for example, the Wikipedia entries on co-integration (https://en.wikipedia.org/wiki/Cointegration) and the related concept of error correction model (https://en.wikipedia.org/wiki/Error_correction_model). In essence, it goes a long way towards avoiding purely spurious correlations between time series variables. Many of these, such as the level of GDP, have underlying trends, and the potential dangers of looking for correlations with the data in this form have been known for some 40 years (see for example Granger and Newbold [7]).
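The danger identified by Granger and Newbold can be reproduced in a few lines. The sketch below illustrates the general phenomenon rather than any particular co-integration test: two completely independent random walks will typically appear correlated in levels, while their first differences are not.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 500

# Two independent random walks: by construction there is no relationship between them.
x = np.cumsum(rng.normal(size=T))
y = np.cumsum(rng.normal(size=T))

corr_levels = np.corrcoef(x, y)[0, 1]
corr_diffs = np.corrcoef(np.diff(x), np.diff(y))[0, 1]

print(f"Correlation of the levels:            {corr_levels:+.2f}  (often spuriously large)")
print(f"Correlation of the first differences: {corr_diffs:+.2f}  (close to zero, as it should be)")
```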

However, time-series analysis has made very little progress in resolving disputes within economics. For example, a fundamental concept, and one which is of great policy relevance at the current time, is that of the fiscal multiplier. The idea was developed by Keynes in the 1930s, during the Great Depression, when GDP in America and Germany fell by nearly 30%. Suppose a government decides to increase spending permanently by, say, £ 1 billion a year. What will be the eventual increase in overall GDP as a result of this? As people are brought back into work as a result of the additional spending, they themselves have more to spend, and the effect spreads—multiplies—through the economy. Of course, there will be so-called ‘leakages’ from this process. Some people will save rather than spend part of any increase in income. Some of the additional spending power will go on imports, which do not increase the overall size of the domestic economy.
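In the simplest textbook treatment (a deliberately stripped-down illustration, not a model endorsed here), the leakages just described pin down the size of the multiplier. With a marginal propensity to consume c, a tax rate t and a marginal propensity to import m, a permanent rise in government spending raises output by

```latex
\Delta Y = \frac{1}{1 - c(1-t) + m}\,\Delta G .
```

With, say, c = 0.8, t = 0.25 and m = 0.3 (illustrative values only), the multiplier is 1/0.7, or roughly 1.4, so the extra £ 1 billion a year would raise GDP by roughly £ 1.4 billion.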

This seems straightforward. In the current policy context, there are fierce debates about austerity, of restricting rather than expanding government spending during the current economic recession in many EU countries. If the above were the whole story, it would be obvious to abandon austerity and increase government spending. But the question is much more complicated. The government will need to borrow more to finance the increase in expenditure. This may, for example, have consequences for interest rates. If they rise sharply, negative effects will be created on confidence, personal and corporate borrowing will become more expensive, and the initial positive impact of the extra government spending may be eroded, in part or even completely.

The point is not to arbitrate in this debate. Rather, it is that economists still disagree strongly about the size of a fundamental concept in macroeconomic theory, the fiscal multiplier. In principle, constructing models of the economy by the statistical analysis of time-series data could answer the question. Indeed, many such models exist which provide an answer to the question. But the answers themselves differ.

In essence, although economics has made progress, its foundations are still those of the theory of how rational agents behave, in which agents have stable and independent preferences, and are able to gather and process large amounts of information. There are, of course, papers which relax the assumptions of stable and independent tastes in order to address issues such as fashion. But these are seen very much as exceptions to the norm. The world in general, to orthodox economists, remains explicable with the rational agent model.

This is all the more so, given the relaxation of the basic assumptions which enables agents to hold asymmetric and imperfect information. The experiments of behavioural economics provide many examples of agents not behaving in the way prescribed by theory, but such behaviour is seen as arising through imperfections in the system. It is the task of policy to remove such impediments to agents behaving in the rational manner as defined by economics.

The challenge posed by Simon in the 1950s, that agents in general lack the computational capacity to operate as rational agents, has been safely neutered. Simon’s concept of satisficing, of agents using behavioural rules of thumb which give satisfactory results, has been absorbed into the discipline, but given an entirely different meaning which is completely consistent with rational behaviour as defined by economics.

Macroeconomics is recognised, certainly by the more thoughtful conventional economists, as leaving much to be desired. Neither models constructed using statistical analysis of time series data, nor DSGE models, with their foundations based on rational choice theory, are regarded as being entirely satisfactory. But this serves to reinforce the primacy of microeconomics within the discipline. Economics is fundamentally a theory of how agents behave when they make choices under various constraints.

3 Moving Forward

3.1 The Core Model of Behaviour

The key issue is the empirical relevance in the cyber society of the twenty-first century of the main assumptions which underlie rational choice theory:

  • Agents have preferences which are stable over time

  • Agents have preferences which are independent of those of other agents

  • Agents are able to gather and process substantial amounts of information

For the purpose of clarity, it is also worth repeating that the economics literature does contain examples of models in which these assumptions are not made. However, the corpus of economic theory is very substantial indeed, and such models only make up a tiny proportion of the whole.

Both twentieth-century technology and now the internet have completely transformed our ability to discover the choices of others. We are faced with a vast explosion of such information compared to the world of a century ago. We also have stupendously more products available to us from which to choose. Eric Beinhocker, in his book The Origin of Wealth, considers the number of choices available to someone in New York alone: ‘The number of economic choices the average New Yorker has is staggering. The Wal-Mart near JFK Airport has over 100,000 different items in stock, there are over 200 television channels offered on cable TV, Barnes & Noble lists over 8 million titles, the local supermarket has 275 varieties of breakfast cereal, the typical department store offers 150 types of lipstick, and there are over 50,000 restaurants in New York City alone.’ [3]

He goes on to discuss stock-keeping units (SKUs) which are the level of brands, pack sizes and so on which retail firms themselves use in re-ordering and stocking their stores. So a particular brand of beer, say, might be available in a single tin, a single bottle, both in various sizes, or it might be offered in a pack of six or twelve. Each of these offers is an SKU. Beinhocker states, ‘The number of SKUs in the New Yorker’s economy is not precisely known, but using a variety of data sources, I very roughly estimate that it is on the order of tens of billions.’ Tens of billions!

So, compared to the world of 1900, the early twenty-first century has seen quantum leaps in both the accessibility of the behaviour, actions and opinions of others, and in the number of choices available. Either of these developments would be sufficient on its own to invalidate the economist’s concept of ‘rational’ behaviour. The assumptions of the theory bear little resemblance to the world they purport to describe.

But the discrepancy between theory and reality goes even further. Many of the products available in the twenty-first century are highly sophisticated, and are hard to evaluate even when information on their qualities is provided. Mobile (or cell) phones have rapidly become an established and very widely used technology (despite the inability of different branches of the English language to agree on what they should be called). In December 2014 Google searches on ‘cell phone choices’ and ‘mobile phone choices’ revealed, respectively, 57,500,000 and 114,000,000 sites from which to make your choice. Three years earlier, in December 2011, I had carried out the identical searches and the numbers of sites were then, respectively, 34,300,000 and 27,200,000.

So even over the course of a very short period of time, there has been a massive expansion of available choices. And how many people can honestly say they have any more than a rough idea of the maze of alternative tariffs which are available on these phones?

So here we have a dramatic contrast between the consumer worlds of the late nineteenth and early twenty-first centuries. Branded products and mass markets exist in both, but in one the range of choice is rather limited. In the other, a stupendous cornucopia is presented, far beyond the wildest dreams of even the most utopian social reformers of a century ago. An enormous gulf separates the complicated nature of many modern offers from the more straightforward consumer products of a mere 100 years ago. And, of course, we are now far more aware of the opinions and choices of others.

In many situations in the world of the twenty-first century, the key postulates of rational choice theory in economics seem wholly implausible. Preferences are altered directly by the observed behaviour, actions and opinions of others, and evolve over time. Agents are quite incapable of processing more than a small fraction of the amount of information which is available to them.

There are still areas of economic activity, however, where rational choice theory may very well still be valid. In mature consumer markets, for example, people have had a long time to become familiar with the various alternatives on offer, and understand the differences in their attributes. ‘Mature’ is used here not in the sense of the age of the consumers in a particular market, but of how long the market itself has existed. Washing machines, for example, have been around now for decades. Improvements are still made, but the basic technology remains unaltered. It seems reasonable to think that most purchasers in this market have reasonably stable preferences, and are able to distinguish between the attributes of the different brands on offer.

One research task is therefore to develop an effective system of classifying different areas of the economy in terms of whether or not the rational choice model may be appropriate. Again as noted above, all scientific theories are approximations to reality. This is especially the case in the social rather than the natural sciences. The question is not so much whether a particular theory seems very close to reality, but whether it seems closer than alternative theories.

With the anthropologists Alex Bentley and Mike O’Brien, I published an initial attempt to make such a classification [4]. The underlying arguments can be summarised diagrammatically in the form of a ‘4-box’, beloved of management consultants the world over. The horizontal axis represents the extent to which the choices of an agent are influenced directly by those of others. At one extreme, the agent makes a purely independent choice, as assumed by the rational choice theory of economics. At the other, the choice of an agent is formed completely by the choices of others. This latter situation may seem initially implausible, but a moment’s reflection is sufficient to realise that it is relevant to many popular culture markets on the internet. The popularity of photographs posted on Flickr, for example, seems to bear little connection to the objective attributes of the photograph.

On the vertical axis, we have the ability of the agent to distinguish between the attributes of the available alternatives. These may be difficult to distinguish for several reasons. For example, there may be a relatively small number of choices, but the products are complex and hard to understand. The products may be essentially simple, but the companies which provide them create layers of complexity around them. Mobile phone and utility tariffs are obvious examples here. Or there may, quite simply, be a very large number of alternatives which an agent cannot hope to be able to evaluate.

The rational agent model is located in the top left hand quadrant of the chart. At the extreme left hand edge, each agent decides whether or not to choose a product based strictly on a personal assessment of the attributes of the available alternatives. An implication of the above is that we need different models of ‘rational’ behaviour in different circumstances. The economic concept of rationality is by no means a general model. The cyber society of the twenty-first century is the embodiment of Simon’s view that agents lack the computational capacity to optimise in the way prescribed by the economic model of rational choice (Fig. 1).

Fig. 1 The rational agent model assumes perfect information and no copying

The most important challenge in economics is to construct ‘null models’ of agent behaviour which are applicable to the increasing number of circumstances in which the conventional model of rational choice is no longer valid. By ‘null’, we mean the basic model, which can be adapted as required to particular situations. Simon believed that agents use rules of thumb, heuristics, as decision making rules. But it is not very helpful to try and construct a plethora of different rules, each unique to its context. We need some basic principles, which can then be ‘tweaked’ as required.

Guidelines as to the nature of the null models are already available. A remarkable paper was published by the UCLA economist Armen Alchian as long ago as 1950 [1]. Entitled ‘Uncertainty, Evolution and Economic Theory’, his paper anticipates by decades advances in the mathematical theory of evolution in the 1990s. At the time, however, the tools required to formalise Alchian’s insights were not available, and the paper failed to get traction within economics.

The purpose of Alchian’s paper was to modify economic analysis in order to incorporate incomplete information and uncertain foresight as axioms. He argues, in a way which is now familiar, that ‘uncertainty arises from at least two sources: imperfect foresight and human inability to solve complex problems containing a host of variables even when an optimum is definable’.

Alchian recognises that humans can imagine the future, act with purpose and intent and consciously adapt their behaviour [16]. He postulates that, even in the face of uncertainty, at least a local optimum might be found if agents follow what we would now term a Bayesian learning process. However, for convergence to an equilibrium, he argues that two conditions need to be satisfied. A particular trial strategy must be capable of being deemed a success or failure ex post, and the position achieved must be comparable with results of other potential actions.

Alchian argues that it is unlikely that such conditions will hold in practice, for the simple reason that the external environment of a firm is not static but changing. Comparability of resulting situations is destroyed by the changing environment. An important Science paper by Rendell et al. in 2010 confirms this intuition [19]. Economic theory certainly contains models in which imitation is the main driver of behaviour in, for example, herding models. But, as noted already, these are seen as a special case compared to the more generally applicable model in which agents have fairly stable preferences and select on the basis of the attributes of the alternatives which are available. Alchian argued, all those years ago, that under changing external environments—under uncertainty—the model in which agents imitate the behaviour of others is the general principle of behaviour, and not just the special case.

3.2 Networks

The heuristic of imitation, or copying, raises immediately the concept of networks. If this is the driving force of behaviour in any given situation, which other agents is the agent copying? Economists are at last starting to appreciate the potential importance of networks. For example, central banks are showing great interest in the networks which connect banks through the pattern of assets and liabilities, and the possibility that a cascade of failure might percolate across the network. An issue in 2014 of the leading American Economic Association journal, the Journal of Economic Perspectives, carried a symposium of papers on the topic of networks.

Networks in fact were a central part of the thinking of two great economists of the mid-twentieth century, Keynes and Hayek. The differences between them on policy issues tend to command attention, obscuring deep similarities in their thinking about the economy and society. Of course, the mathematical theory of networks (or graph theory as it is also known) scarcely existed at the time, and neither Keynes nor Hayek formalised their models in this way.

For Keynes, the long-run expectations of firms were the most important determinant of the business cycle through their impact on investment.Footnote 4 The long-run expectation of a firm at any point in time is not the result of a rational calculation of the amount of profit which an investment is expected to yield. Rather it is a sentiment, the degree of optimism or pessimism which the agent holds about the future.

There appear to be two components in Keynes’ implicit model of how such expectations are generated. Most importantly, sentiment is altered across the network of firms as a whole by ‘waves of irrational psychology’. Keynes also writes of changes in sentiment being generated as the ‘outcome of mass psychology of a large number of ignorant individuals’. This is the key feature of long run expectations. In addition, an agent seems to have the ability to change its optimism/pessimism spontaneously without regard to external factors, including the sentiments of other agents. Keynes writes of ‘spontaneous optimism’ and a ‘spontaneous urge to action rather than inaction’. This is the context in which his famous phrase ‘animal spirits’ appears [10].

In modern terminology, we have agents on a network which at any point in time are in one of k states of the world, each state representing a degree of optimism or pessimism. There is some kind of threshold rule by which individual agents alter their state of the world according to the state of the world of their neighbours.
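A minimal sketch of such a process, with binary optimist/pessimist states on a random network and a simple majority threshold, is given below; the network, the threshold and the initial seeding are illustrative assumptions rather than anything specified by Keynes.

```python
import random
import networkx as nx

random.seed(3)
G = nx.erdos_renyi_graph(n=200, p=0.05, seed=3)

# State 1 = optimistic, 0 = pessimistic; start with a small pessimistic minority.
state = {node: 0 if random.random() < 0.1 else 1 for node in G}
THRESHOLD = 0.5   # switch to pessimism if at least half your neighbours are pessimistic

for step in range(20):
    new_state = {}
    for node in G:
        neighbours = list(G.neighbors(node))
        if not neighbours:
            new_state[node] = state[node]
            continue
        pessimistic_share = sum(1 - state[n] for n in neighbours) / len(neighbours)
        new_state[node] = 0 if pessimistic_share >= THRESHOLD else 1
    state = new_state
    print(f"step {step:2d}: share pessimistic = {1 - sum(state.values()) / len(state):.2f}")

# Whether the wave of pessimism spreads or dies out depends on the threshold,
# the initial seeding and the structure of the network.
```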

Hayek’s thinking is illustrated by a somewhat different example. In his 1949 essay ‘The Intellectuals and Socialism’ [9], he considered the question of how ideas of planning and socialism had come to a dominant position in the market-oriented economies of the West. He attributed this to the role of intellectuals. By the word ‘intellectual’ he did not mean an original thinker. Rather, for Hayek, intellectuals were ‘professional second-hand dealers in ideas’, such as journalists, commentators, teachers, lecturers, artists or cartoonists.

Hayek was very clear on how such a small minority can set so decisively the terms of debate on social and cultural matters. He writes: ‘[Socialists] have always directed their main effort towards gaining the support of this ‘elite’ [of intellectuals], while the more conservative groups have acted, regularly but unsuccessfully, on a more naive view of mass democracy and have usually vainly tried to reach and persuade the individual voter.’ He goes on later in the essay to state, ‘It is not an exaggeration to say that, once the more active part of the intellectuals has been converted to a set of beliefs, the process by which these become generally accepted is almost automatic and irreversible.’ Again, in modern terminology, he is describing what is known as a scale-free network. In other words, a type of network in which a few agents have large numbers of connections, and most agents have only a small number. Simply because an agent has a large number of connections, he or she has the capacity to exercise a disproportionate influence on outcomes. But Hayek is implicitly going rather further, and positing a weighted scale-free network, in which the influence of the highly connected individuals is weighted, in part or in whole, by the number of their connections. So their influence is doubly powerful, by virtue of being connected to many agents, and because their potential influence on these agents carries a relatively high weight.
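The contrast Hayek is implicitly drawing can be made concrete by comparing the degree distribution of a scale-free network with that of a random network of similar size; the parameters below are arbitrary choices for illustration.

```python
import networkx as nx

# A scale-free network (preferential attachment) versus a random network
# with the same number of nodes and edges.
scale_free = nx.barabasi_albert_graph(n=1000, m=3, seed=1)
random_net = nx.gnm_random_graph(n=1000, m=scale_free.number_of_edges(), seed=1)

for name, g in [("scale-free", scale_free), ("random", random_net)]:
    degrees = sorted((d for _, d in g.degree()), reverse=True)
    top_share = sum(degrees[:10]) / sum(degrees)
    print(f"{name:10s}: max degree {degrees[0]:3d}, "
          f"top 10 nodes hold {top_share:.0%} of all connections")

# In the scale-free case a handful of highly connected 'intellectuals' emerge;
# in the random network no comparable hubs exist.
```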

Modern network theory has made massive developments in the past two decades in terms of understanding how a different behaviour or opinion might or might not spread across a network of any given type. Sophisticated strategies have been worked out as to how to either encourage or prevent such percolation. Another key task for economics is to absorb this work into the discipline (see, for example, [15]).

But there is a wider research agenda with networks. Economics is too important to be left simply to the economists. Most of the advances in network theory have been obtained with the assumptions that both the behaviour of the nodes and the topology of the network are fixed. The nodes are the points in the network, which in a social science context are the agents, and the topology describes the pattern of connections in the network: which agents any given agent can in principle influence directly.

But a great deal needs to be done to extend our understanding of how the percolation properties of networks are affected when these assumptions are relaxed. In real life, the pattern of connections between agents itself evolves, and agents may choose to adopt different behavioural rules over time. This is a multi-disciplinary challenge, to which economics can in fact bring its own contribution. Namely, to consider the role of incentives in the evolution of networks over time.

3.3 Growth and the Business Cycle

Networks have an important role to play in advancing scientific knowledge of the behaviour of the economy at the macro level. It is here that conventional economics is at its weakest. The modern market-oriented economies ushered in by the Industrial Revolution of the late eighteenth century have a distinguishing characteristic which is not shared by any other form of economic organisation over the entire span of human existence. There is a steady expansion of the total amount of resources which are available. The underlying growth of output over time is the key feature. For example, per capita income in real terms, even in the most advanced societies, was not very much different in, say, 1700, from what it was in the Roman Empire at its peak in the first and second centuries AD. But since then there has been a massive and dramatic expansion of living standards.

Around this growth trend, there are persistent but somewhat irregular short-term fluctuations in output, a phenomenon known as the business cycle. It is this which the DSGE models, for example, discussed in Sect. 1 above, purport to be able to explain.

But economics lacks a satisfactory scientific explanation of these two distinguishing characteristics of capitalism at the macro level, growth and the business cycle. In terms of growth, theories developed in the 1950s, even in their more modern versions, remain the basis of attempts to understand the phenomenon. These theories essentially link the growth of output to changes in the inputs into the process of production, particularly capital (machines, buildings and so on) and labour. Progress of a sort has been made: the extensive empirical investigations carried out using mainstream growth models show that most economic growth in the advanced economies is in fact left unexplained in this way. Economists attribute this residual to what they describe as ‘technical progress’. In more everyday English, this refers not just to inventions but, more importantly, to innovations: the practical diffusion of such scientific inventions.
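
To make the accounting behind this statement concrete, the sketch below works through a conventional growth-accounting exercise under a Cobb-Douglas assumption. The growth rates and the capital share used are hypothetical, chosen purely for illustration; the point is simply how the ‘technical progress’ residual is backed out as whatever output growth the measured inputs cannot explain.

```python
# Minimal growth-accounting sketch under a Cobb-Douglas assumption
# Y = A * K**alpha * L**(1 - alpha). The growth rates below are hypothetical,
# purely to illustrate how the 'unexplained' residual is backed out.
alpha = 0.3          # capital share of income (a conventional assumption)
g_output = 0.025     # observed growth of output, 2.5% per year
g_capital = 0.030    # growth of the capital stock
g_labour = 0.010     # growth of labour input

# Residual: the part of output growth not accounted for by input growth.
g_residual = g_output - alpha * g_capital - (1 - alpha) * g_labour
print(f"growth attributed to 'technical progress': {g_residual:.3%}")
# -> 0.9 of the 2.5 percentage points, i.e. over a third, is unexplained
```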

The time series data on the business cycle, the movements of total output (GDP) from year to year, when plotted bear little resemblance to a ‘cycle’ as the word is understood in the natural sciences. The fluctuations are clearly more irregular than regular. A key reason why economists nevertheless speak of the ‘business cycle’ is that the different sectors of the economy tend to move together during booms and recessions. The correlation is not perfect, but in a period of sustained expansion, for example, most industries will be growing, and in a recession most will be in decline. The implication is that there are general factors which drive the movements of the various sectors of the economy. This does not mean that a particular sector might not experience a sharp rise or fall in output for reasons specific to that sector. But the co-movement of output suggests general factors operating across the economy as a whole.

However, existing models have great difficulty in explaining many of the key empirical features of the business cycle. For example, the correlations between the growth in output in any given year and growth in previous years are close to zero. But there is some structure: specifically, there tends to be a weak but positive correlation between growth in this year’s output and growth in the immediately preceding year. The first wave of real business cycle and DSGE models was completely unable to account for this, because they postulated that the cycle was caused by random shocks which were exogenous to the economy as a whole. In terms of the frequency of the fluctuations, there is evidence of a weak cycle, in the scientific sense, with a period of roughly between 5 and 12 years.Footnote 5 By incorporating imperfections which prevent markets from operating properly, the latest versions are sometimes able to replicate this feature, but it is a struggle.
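
The stylised fact about correlations can be illustrated with a short calculation. The sketch below generates a synthetic growth series with a deliberately weak positive dependence on the previous year (an assumption made for illustration; real GDP growth data would be substituted in practice) and computes the sample autocorrelations at the first few lags.

```python
# Sketch: checking the autocorrelation structure of annual output growth.
# The series is synthetic (an AR(1) with a weak positive coefficient),
# chosen only to mimic the stylised fact described in the text.
import numpy as np

rng = np.random.default_rng(0)
n = 150
growth = np.empty(n)
growth[0] = 0.02
for t in range(1, n):
    growth[t] = 0.02 + 0.25 * (growth[t - 1] - 0.02) + rng.normal(0, 0.02)

def autocorr(x, lag):
    """Sample autocorrelation of a series at a given lag."""
    x = x - x.mean()
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

for lag in (1, 2, 3):
    print(f"lag {lag}: {autocorr(growth, lag):+.2f}")
# Expect a weakly positive value at lag 1 and values near zero thereafter.
```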

But there are other features which existing models find very hard, if not impossible, to replicate. Looking at the Western economies as a whole and synthesising the evidence on their individual experiences, certain key variables have highly non-Gaussian distributions: for example, both the size and duration of recessions, and the duration between recessions. Most recessions are short, with quite small falls in output. But occasionally there are recessions which last for a number of years and which exhibit deep drops in output. The latter is the case at the moment in several of the Mediterranean countries.
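
A toy simulation makes the non-Gaussian point concrete. The lognormal distribution used below is purely an assumption for illustration, not a claim about the actual distribution of recessions; it simply shows how a skewed, heavy-tailed variable behaves very differently from a Gaussian with the same mean and standard deviation.

```python
# Toy illustration (simulated, not actual data): a heavily skewed distribution
# of episode durations - many short episodes, a few very long ones - versus
# a Gaussian with the same mean and standard deviation.
import numpy as np

rng = np.random.default_rng(1)
skewed = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)    # heavy upper tail
gaussian = rng.normal(loc=skewed.mean(), scale=skewed.std(), size=10_000)

for name, x in [("skewed", skewed), ("gaussian", gaussian)]:
    print(f"{name:>8}: median={np.median(x):.2f}  mean={x.mean():.2f}  "
          f"99th pct={np.percentile(x, 99):.2f}")
# In the skewed case the 99th percentile sits far above the median:
# the typical episode is short, but the rare ones are very long.
```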

3.4 Sentiment, Narratives and Uncertainty

An important reason why macroeconomics has been unable to make progress in resolving key issues is that sentiment and narratives affect the impact of a given change in any particular policy instrument. Consider, for example, the current question of austerity in the UK. Suppose the British government were to abandon its policy of fiscal restraint and expand public spending. What would the impact be? We could simulate the policy on a range of macro models and, as noted above, each would give us a different answer. But, in general in economics, there is a one-to-one correspondence between the inputs into a model and the output: the same inputs under the same set of initial conditions will always give the same answer. In the real world, this may not be true at all.

At the moment, the UK government has a reputation for fiscal prudence. The facts themselves tell a more ambiguous story. On coming to power in 2010, the UK government projected that borrowing in the financial year 2014/15 would be just under £ 40 billion. It was around £ 100 billion. What is more, over the first seven months of the 2014–15 financial year, borrowing was even higher than it was in 2013–14. The increase was not great, £ 3.7 billion according to the Office for National Statistics, but it is still up, not down. In 2015 the stock of outstanding government debt was around 80 % of GDP, a figure similar to that of Spain. Yet the markets believe the government is prudent, and at the time of writing the yield on 10 year UK government bonds is under 2 %. In Spain it is closer to 6 %.

Now, if there were a fiscal expansion in the UK and the bond yield remained low, the fiscal multiplier would be likely to be substantial, and GDP would receive a distinct boost. But if the change in policy were received differently, and bond yields rose to the 6 or 7 per cent levels which countries such as Spain and Italy have experienced, the outcome would be completely different. The crucial issue would be the dominant narrative which emerged in the financial markets. If we could re-run history many times from the same initial conditions, we would observe a wide range of outcomes, dependent upon the sentiment and narrative which predominated.
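
The point can be put in deliberately crude arithmetic. All of the numbers in the sketch below, including the size of the stimulus, the multipliers and the yields, are hypothetical; the purpose is only to show that an identical policy input can produce radically different outcomes depending on which narrative takes hold in the bond market.

```python
# Purely illustrative arithmetic (all numbers hypothetical): the same GBP 50bn
# fiscal expansion under two market narratives. In the 'benign' narrative the
# 10-year yield stays near 2% and the multiplier is assumed to be 1.2; in the
# 'loss of confidence' narrative yields jump towards 6% and the multiplier is
# assumed to collapse to 0.3 as private spending is crowded out.
stimulus = 50.0  # GBP billion

scenarios = {
    "benign narrative":             {"yield": 0.02, "multiplier": 1.2},
    "loss-of-confidence narrative": {"yield": 0.06, "multiplier": 0.3},
}

for name, s in scenarios.items():
    boost_to_gdp = s["multiplier"] * stimulus
    annual_interest = s["yield"] * stimulus
    print(f"{name}: GDP boost ~GBP {boost_to_gdp:.0f}bn, "
          f"extra annual debt interest ~GBP {annual_interest:.1f}bn")
# The policy input is identical; the outcome hinges on which narrative prevails.
```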

Economics essentially deals with the questions raised by sentiment and narrative through the theoretical construct of rational expectations. Agents form expectations about the future using what is assumed to be the correct model of the economy, and all agents share the same knowledge of this model. Over time, these expectations will on average prove to be correct. They may not necessarily be correct in every single period; indeed, it is possible that they will be wrong in every period. The key point is that the agent is presumed to be using the correct model of the system with which to form expectations. Any errors that might emerge are therefore purely random, and will cancel each other out over time. On average, over an unspecified but long period of time, the expectations will prove correct.

So in considering the impact of a change in fiscal policy, all agents will anticipate the potential outcome and adjust their behaviour appropriately. In one version of this type of model, expansionary fiscal policy cannot have any impact on total output. An increase in public spending could be financed by an equal rise in taxation. But this would serve to more or less cancel out the impact of the spending increase, because personal incomes would be reduced. Alternatively, the increase could be financed by an issue of government bonds. But this implies higher taxation in the future, to meet the stream of interest payments on the bonds and, eventually, the repayment of the principal. An agent using rational expectations will therefore increase his or her savings in order to provide for the higher taxes in future. The effect will be just the same as an increase in taxes now.
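
The present-value logic behind this bond-financing argument (Ricardian equivalence) can be set out in a few lines. The interest rate, horizon and size of the bond issue below are hypothetical; the calculation simply confirms that the discounted value of the future interest payments plus the repayment of the principal equals the value of the bonds issued today, which is why the rational-expectations agent saves the full amount now.

```python
# Sketch of the present-value logic behind the bond-financing argument.
# Numbers are hypothetical: a bond issue of 100 today, paying interest at
# rate r for T years and then repaying the principal, implies future taxes
# whose present value (discounted at the same rate r) equals the issue itself.
r, T, principal = 0.04, 10, 100.0

pv_interest = sum(r * principal / (1 + r) ** t for t in range(1, T + 1))
pv_principal = principal / (1 + r) ** T
print(f"present value of future taxes: {pv_interest + pv_principal:.2f}")
# -> 100.00, matching the value of the bonds issued today
```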

We might reasonably ask how agents come to be in possession of the true model of the economy. There is a large and highly technical literature on this. But, essentially, it is held that agents will eventually learn the correct model, because of the costs of making non-optimal decisions through using an incorrect model. There are many problems with this approach. But a very practical one is the simple observation that economists themselves are far from being in agreement about what constitutes the correct macro model of the economy. For example, in the United States at the height of the financial crisis, one eminent group of economists argued that the banks should be bailed out, and another believed the exact opposite, that they should be allowed to fail. Both groups included several Nobel Prize winners.

A fundamental point is that in many situations, especially in macroeconomics, there is unresolved uncertainty about the model itself. More generally, within the statistics literature, there is a widespread understanding that model uncertainty is often an inherent feature of reality. It may simply not be possible to decide on the ‘true’ model. Chatfield is widely cited on this topic [5]. In an economic context, for example, a 2003 survey of sources of uncertainty carried out for the European Central Bank by Onatski and Williams concluded that ‘The most damaging source of uncertainty for a policy maker is found to be the pure model uncertainty, that is the uncertainty associated with the specification of the reference model’ [14]. Gilboa et al. [6] note that ‘the standard expected utility model, along with Bayesian extensions of that model, restricts attention to beliefs modeled by a single probability measure, even in cases where no rational way exists to derive such well-defined beliefs’. The formal argument that, in situations in which the external environment is changing rapidly, it is often not possible for agents to learn the ‘true’ model even by a Bayesian process goes back at least as far as Alchian in 1950 [1].
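
The point about learning in a changing environment can be illustrated with a small simulation. Everything in the sketch below, the two candidate models, the regime length and the noise, is an assumption made for illustration: when the data-generating process itself keeps switching, the posterior odds between the candidate models wander indefinitely rather than settling on a ‘true’ model.

```python
# Illustrative sketch (all settings assumed): Bayesian model comparison when
# the environment keeps changing. Two candidate models of a data series are
# entertained, M1 (mean +1) and M2 (mean -1), but the true data-generating
# process switches between the two regimes every 50 periods.
import numpy as np

rng = np.random.default_rng(3)
log_odds = 0.0  # log posterior odds of M1 versus M2, starting from even priors

for t in range(300):
    true_mean = 1.0 if (t // 50) % 2 == 0 else -1.0   # regime switch every 50 periods
    y = rng.normal(true_mean, 1.0)
    # Log-likelihood ratio of the observation under M1 (mean +1) vs M2 (mean -1)
    log_odds += -0.5 * (y - 1.0) ** 2 + 0.5 * (y + 1.0) ** 2
    if (t + 1) % 50 == 0:
        print(f"t={t+1:3d}  log posterior odds of M1: {log_odds:+8.1f}")
# The odds climb while the +1 regime operates and fall back while the -1 regime
# operates; they wander indefinitely instead of converging on either model.
```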

In short, in situations in which there is uncertainty about the true model which describes the system, it may not be possible for agents to form rational expectations. As a result, agents are uncertain about the probability distribution of potential outcomes.

It is useful to refer back briefly at this stage to the ‘4-box’ chart. The chart sets out a possible heuristic about the types of model which might best describe the behaviour of agents in different situations. As we move down the vertical axis, the attributes of the various alternatives become harder to distinguish. We gave several reasons why this might be the case in the immediate discussion of the chart. However, a very important additional one is that for choices which have implications well into the future, the potential outcomes become inherently uncertain. In situations in which there is uncertainty about the model which best describes the system under consideration, there is uncertainty about the probability distribution of potential outcomes. Agents will be unable to distinguish between alternatives because they lack knowledge of the system.

Economics needs to work with other disciplines, especially psychology, in order to understand how agents are able to make decisions under conditions of model uncertainty (see, for example, [13]). In essence, agents are motivated by a mixture of excitement and anxiety: excitement about potential gain and anxiety about potential loss. More generally, the importance of narratives, which break down the one-to-one correspondence between the inputs and output of any particular set of conditions in the economy, must be understood much better. How and why do certain narratives spread, and why do others gain little or no traction? This is of great importance for the conduct of economic policy, and as yet very little formal work has been done in this area.

4 Summary and Conclusion

Although economics has not stood still in recent decades and is by no means an empty box, the central building block of the entire theoretical edifice, the so-called rational agent, seems less and less relevant to the inter-connected cyber society of the twenty-first century.

A central challenge is to develop areas of work which have the potential to unify much of the analysis carried out in the different social sciences. The fundamental building block of all social science is the behaviour of the individual decision making unit, the ‘agent’ in the jargon of economics and computational social science.

The basic premise of mainstream economics, that explanations of phenomena which emerge at the meso or macro levels must be grounded in the micro level foundations of the rules of agent behaviour, seems sensible. However, the underlying assumptions of economics are much less relevant in the twenty-first century, and existing social science is either unable to explain events to policy makers, as during the financial crisis, or its theory gives misleading advice.

There is little point in carrying out further research which is solely based upon agents behaving as optimisers subject to constraints. The underlying hypothesis is empirically false, and this paradigm has now been tested to destruction.

In the context of agents making decisions, the key research streams to develop are:

  • Identifying the decision making rules followed by agents in the twenty-first century, in particular how and when agents imitate the decisions of others, and when they do not

  • Developing heuristics which enable us to identify which of these types of behaviour predominates in any given context

  • Expanding our knowledge of networks, in particular of how choices amongst alternatives, whether these are products, innovations, ideas or beliefs, are either spread or contained across networks. A great deal is now known about this when both the behaviour of the nodes and the topology of the network are fixed; this needs to be extended to evolving behaviours and topologies

  • Articulating the policy implications of these different modes of behaviour. In particular, if imitation is a stronger motive than incentives, the implications for policy are profound

In many situations, decision makers are faced with genuine uncertainty. Little or no information is available on the probability distribution of outcomes. This is particularly the case with decisions that are difficult to reverse, and which have consequences which extend beyond the immediate future.

In finance, standard and behavioural theories are a hindrance—perhaps a defence against the anxieties created by uncertainty and lack of control. They miss the point that although financial assets must ultimately depend on some kind of ‘fundamental’, human beings have to use their interpretive skills to infer what these values are. They have no given value in and of themselves. The value depends on buyers’ and sellers’ reflexive views as to the future income streams generated by the fundamentals of the underlying entities.

Under uncertainty, decision makers require narratives which give them the conviction to act, rather than being paralysed into inactivity. Narratives—how the facts are perceived—are crucial in many contexts. For example, in the current debate over austerity, the impact of a change in fiscal policy would depend upon how this was perceived by the markets, and how this perception translated into interest rates on government bonds.

In the context of narratives and sentiment the key research streams are:

  • fundamental theory and tools which operationalise the concept of narratives

  • computational theories of narratives, including Big Data

  • tools which develop our understanding of how narratives and sentiment either spread or are contained

  • tools to enable early prediction of narratives which have the potential to develop traction.