An Agent-based Model

Part of the New Economic Windows book series (NEW)


Reductionism, i.e. the methodology of classical mechanics adopted by analogy in neoclassical economics, can be applied only if the law of large numbers holds true, i.e. if:
  • the functional relationships among variables are linear; and

  • there is no direct interaction among agents.

Since non-linearities are pervasive, mainstream economics generally adopts the trick of linearizing functional relationships. Moreover, agents are assumed to be all alike and not to interact. An economic system can therefore be conceptualized as consisting of several identical and isolated components, each one a representative agent (RA). The optimal aggregate solution can then be obtained by a simple summation of the choices made by each optimizing agent.
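To fix ideas, here is a minimal numerical sketch in Python (the decision rules and all numbers are our own illustrative assumptions, not part of any model discussed here). With a linear rule, the sum of individual choices coincides with n times the choice of a representative agent endowed with the mean income; with a concave rule, Jensen's inequality drives a wedge between the two, so heterogeneity can no longer be averaged away.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000
    income = rng.lognormal(mean=0.0, sigma=1.0, size=n)  # heterogeneous endowments

    linear = lambda y: 0.8 * y        # linear decision rule
    concave = lambda y: np.sqrt(y)    # non-linear (concave) rule

    # Linear case: the sum of individual choices equals n times the choice
    # of a "representative" agent endowed with the mean income.
    print(linear(income).sum(), n * linear(income.mean()))

    # Concave case: by Jensen's inequality the RA overstates the aggregate,
    # so heterogeneity can no longer be averaged away.
    print(concave(income).sum(), n * concave(income.mean()))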






  1. A modeling strategy based on the representative agent is not able, by construction, to reproduce the persistent heterogeneity of economic agents, captured by the skewed distribution of several industrial variables, such as firms’ size, growth rates, etc. Stoker (1993) reviews the empirical literature at the disaggregated level, which shows that heterogeneity matters since there are systematic individual differences in economic behavior. Moreover, as Axtell (1999, p. 41) claims: “... given the power law character of actual firms’ size distribution, it would seem that equilibrium theories of the firm [...] will never be able to grasp this essential empirical regularity.”
  2. According to Hildenbrand and Kirman (1988, p. 239): “... There are no assumptions on [...] isolated individuals which will give us the properties of aggregate behavior which we need to obtain uniqueness and stability. Thus we are reduced to making assumptions at the aggregate level, which cannot be justified by the usual individualistic assumptions. This problem is usually avoided in the macroeconomic literature by assuming that the economy behaves like an individual. Such an assumption cannot be justified in the context of the standard economic model and the way to solve the problem may involve rethinking the very basis on which this model is founded.” This long quotation summarizes the conclusion drawn by Arrow (1951), Sonnenschein (1972), and Mantel (1976) on the lack of theoretical foundations for the proposition that the properties of an aggregate function reflect those of its individual components.
  3. In general equilibrium theory one can allow for as much heterogeneity as one likes, but no direct interaction among agents. Grossman and Stiglitz (1980) have shown that in this case one cannot have any sort of informational perfection, and if information is not perfect, markets cannot be efficient. Market failure leads to interaction among agents and hence to coordination failures, to emergent properties of aggregate behavior, and to business fluctuations of a pathological nature.
  4. If agents are heterogeneous, some standard procedures (e.g. cointegration, Granger causality, impulse-response functions of structural VARs) may lose their significance. Moreover, neglecting heterogeneity in aggregate equations may generate spurious evidence of dynamic structure. The difficulty of testing aggregate models based on the RA hypothesis, i.e. of imposing aggregate regularity at the individual level, has long been pointed out by Lewbel (1989) and Kirman (1992), with no impact on the mainstream (a notable exception is Carroll, 2000).
  7. Stability of the slope through time is a quite standard result in the empirical literature on Pareto’s law (see, e.g., the work by C. Gini, J. Steindl and H. Simon); a minimal sketch of how this slope can be estimated is given after these notes. Quite nicely, Steindl (1965, p. 143) defines the Pareto coefficient as “... a sediment of growth over a long time”.
  8. The biased behavior of this random process helps to explain the systematic differences (asymmetries) between expansions and contractions found in the empirical evidence. Gaffeo et al. (2003) have found systematic differences in the Pareto exponents during expansions and contractions.
  9. A recession, e.g., is more likely when firms are relatively young, small and leveraged. The RA framework is not only inconsistent with the evidence (i), but it also fails to capture the dynamical properties of the actual systems (Forni and Lippi, 1997).
  10. Discussions on this point can be found in Krugman (1996) and Blank and Solomon (2000).
  11. Schumpeter (1939) suggested that business cycle scholars should analyze “... how industries and individual firms rise and fall and how their rise and fall affect the aggregates and what we call loosely general business conditions”. This approach is reminiscent of Marshall’s parallel between the dynamics of the individual firm and the evolution of a tree in the forest.
  12. For a review of the debate on the shape of the firms’ size distribution that sprang up during the 1950s and ’60s, see the monograph by Steindl (1965).
  14. A comprehensive reference is Samorodnitsky and Taqqu (1994).
  15. The first author to conjecture it explicitly was Mandelbrot (1960).
  17. Recall that in a sequential economy (Hahn, 1982) spot markets open at given dates, while future markets do not operate.
  23. For evidence on the effects exerted by firms’ size and creditworthiness on banks’ loan policies, see e.g. Kroszner and Strahan (1999).
  24. Delli Gatti et al. (2003) provide an extensive analysis of the relationship between entries and exits and aggregate fluctuations in a model very similar to this one.
  28. According to Sornette (2000, p. 94), self-similarity occurs when “... arbitrary sub-parts are statistically similar to the whole, provided a suitable magnification is performed along all directions”.
  29. We use a kernel density estimation (Härdle, 1990) with a Gaussian kernel; a minimal from-scratch sketch of such an estimator is given after these notes.
  35. For instance, in the industrial dynamics literature this kind of information arises naturally as one considers the evolution of firm sizes. The asymptotic distribution of firm sizes associated with alternative models of firm growth is usually log-normal or power law (Axtell, 2001; Sutton, 1997). As is well known, power-law distributions may not possess second moments. In such situations we need to work with transformed variables (e.g., logarithmic transformations) to use the above approach; a short numerical illustration is given after these notes. Obviously, results must be read carefully, since they may hold only qualitatively.
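As anticipated in note 7, the following is a hedged sketch of the standard rank-size (log-log) regression used to estimate the Pareto exponent of a firms' size distribution; the synthetic sample and the assumed exponent alpha_true are purely illustrative and not taken from the chapter's data.

    import numpy as np

    rng = np.random.default_rng(1)
    alpha_true = 1.1                              # assumed tail exponent
    sizes = rng.pareto(alpha_true, 5_000) + 1.0   # synthetic firm sizes, minimum size 1

    sizes = np.sort(sizes)[::-1]                  # descending: rank 1 = largest firm
    rank = np.arange(1, sizes.size + 1)

    # In the Pareto tail, log(rank) ~ const - alpha * log(size),
    # so minus the OLS slope estimates the Pareto coefficient.
    slope, _ = np.polyfit(np.log(sizes), np.log(rank), 1)
    print("estimated Pareto exponent:", -slope)

The stability result quoted from Steindl would correspond, in this setting, to re-running the regression on successive cross-sections and finding a roughly constant slope.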
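Note 29 mentions kernel density estimation with a Gaussian kernel. A minimal from-scratch version is sketched below; the bandwidth follows Silverman's rule of thumb, which is our assumption and not necessarily the choice made in the original estimation.

    import numpy as np

    def gaussian_kde(sample, grid):
        """Kernel density estimate with a Gaussian kernel, evaluated on grid."""
        n = sample.size
        h = 1.06 * sample.std(ddof=1) * n ** (-1 / 5)   # Silverman's rule of thumb
        u = (grid[:, None] - sample[None, :]) / h       # scaled distances to each point
        return (np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)).mean(axis=1) / h

    rng = np.random.default_rng(2)
    data = rng.normal(size=500)
    grid = np.linspace(-4.0, 4.0, 201)
    density = gaussian_kde(data, grid)
    print(density.max())   # close to 0.4, the peak of the standard normal density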
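Finally, the point in note 35 about missing second moments can be illustrated numerically: for a Pareto tail with exponent below 2, the sample variance of raw sizes never settles down as the sample grows, while log sizes are exponentially distributed and all their moments exist. The exponent and sample sizes below are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(3)
    alpha = 1.5                                # tail exponent below 2: infinite variance
    sizes = rng.pareto(alpha, 1_000_000) + 1.0

    # The sample variance of raw sizes does not settle down as n grows...
    for n in (10_000, 100_000, 1_000_000):
        print(n, sizes[:n].var())

    # ...while the variance of log sizes converges to 1 / alpha**2 ≈ 0.44.
    print(np.log(sizes).var())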

Copyright information

© Springer-Verlag Italia 2008
