How transparent about its inflation target should a central bank be?
Abstract
Using an agent-based model, this paper revisits the merits for a central bank of announcing its inflation target. The model preserves the main transmission channels of monetary policy used in stochastic dynamic general equilibrium models, namely the consumption and the expectation channels, while allowing for agents’ heterogeneity in both expectations and behavior. We find that, in a rather stable environment such as the Great Moderation period, announcing the target allows for the emergence of a loop between credibility and success: if the target is credible, inflation expectations remain anchored at the target, which helps stabilize inflation, and, in turn, reinforces the central bank’s credibility. We then tune the degree of heterogeneity in agents’ behavior and the individual learning process to introduce inflationary pressures, accompanied or not by uncertainty affecting the real transmission channel of monetary policy. Even if learning and heterogeneity would a priori lead to thinking favorably about transparency, we show that this virtuous circle is not robust, as transparency may expose the central bank to a risk of credibility loss. In this case, we discuss the potential benefits from partial announcements.
Keywords
Monetary policy · Inflation targeting · Credibility · Expectations · Agent-based model

JEL Classification

C61 · C63 · E52 · E58

1 Introduction
Over the past three decades, inflation targeting (IT hereafter) has been adopted by an increasing number of countries. Under an IT regime, the central bank (CB hereafter) puts a strong emphasis on communication, especially by announcing the inflation target to the public. Following this trend, an important strand of the academic literature has investigated the macroeconomic benefits that may be expected from adopting IT.^{1} Most of the related studies point to the impact on inflation expectations as the key stabilization mechanism under this regime.
Two properties of an explicit inflation target have been emphasized (see, notably, Demertzis and Viegi 2008, 2009). First, an explicit, numerical target is a natural candidate for a focal point around which potentially heterogeneous inflation expectations can coordinate. Second, the announced target becomes a natural reference point for assessing the inflation performance of the monetary authorities, and, hence, for judging the credibility of the announcement itself. As a consequence, this credibility stands as a key factor for determining whether the target can become an effective anchoring device for inflation expectations.^{2} In a dynamic perspective, Demertzis and Viegi (2009) show that the anchoring properties of IT arise through the emergence of a self-reinforcing credibility-success loop: the more credible the monetary authorities, the more likely are inflation expectations to be anchored on the target, and the more likely is inflation to be stabilized around the target, which consolidates the initially favorable credibility assessment, and so on.
The current paper aims at revisiting the stabilization properties of the inflation target in an economic setting characterized by a collection of fully heterogeneous agents who behave, interact and learn under bounded rationality. We depart, in that regard, from the aforementioned literature that has addressed the role of the target in a context where heterogeneity essentially pertains to the formation of inflation expectations, while considering agents as homogeneous and fully rational players in the coordination game. In this paper, agents may not only differ regarding the formation of their inflation expectations but also concerning other dimensions of their economic behavior. This heterogeneity is, in turn, likely to complicate the coordination process between agents in a bounded rationality context.
In such a specific environment, the question naturally arises of whether the anchoring properties of the inflation target can be exploited as an efficient stabilization tool by the monetary authorities. The answer depends on the interplay between the inflation expectations dynamics, which may be influenced by the publicity of the target, on the one hand, and the learning dynamics that drive the coordination between agents, on the other hand. In particular, the control of the inflation rate is likely to be anything but a simple matter, as interactions between agents and learning shape the transmission mechanisms of monetary policy in the economy. As a consequence, the announcement of the target can pose a more pronounced credibility challenge for the policymaker than the one arising in representations of IT within representative-agent settings.
Given the features of the economy on which we want to focus, an agent-based model (ABM) seems a well-suited framework. This framework acknowledges the heterogeneity between agents, and the modalities of their learning behavior at the individual level, without being constrained by the assumptions of intertemporal optimization and aggregation requirements through representative agents.^{3} The price to pay for that flexibility is the absence of any tractable representation of the model, which must then be assessed through numerical simulations of the emerging dynamics, and a dependence of the outcomes on the parameter values, which must therefore be chosen with caution.
In the following paper, we elaborate on the ABM developed in Salle et al. (2013). This ABM is deliberately constructed so as to retain the basic structure of standard macroeconomic models dealing with monetary policy (such as the NK model). Specifically, this model encompasses the consumption and the expectation channels of monetary policy to inflation. We extend this ABM by explicitly accounting for an inflation expectations formation process that acknowledges the twofold status of the inflation target under IT, playing both as a focal point and as a reference point for credibility. On those aspects, we adapt the approach developed by Demertzis and Viegi (2009) to the case of agents’ heterogeneity and bounded rationality. We examine under which conditions a credibility-success loop may emerge under IT, and make this regime an advantageous choice for the monetary authorities in the economic environment we have considered.
Our main results can be summarized as follows. The announcement of an inflation target may allow for the emergence of a credibility-success loop if a relatively wide radius of tolerance is coupled with that target. However, this loop arises only in stable macroeconomic environments. In volatile macroeconomic environments, especially those characterized by a significant variability in the expectation channel of monetary policy, tying one’s hands by announcing an inflation target turns out to be problematic. Macroeconomic volatility impairs the ability of the CB to deliver its official inflation commitment, and a credibility problem emerges. As a result, a reverse credibility loop can set in. In that case, the economy may benefit from partial announcements about the inflation target, which provide a clear signal to anchor inflation expectations for the part of the public that is reached by the announcement, while allowing for a less tightly defined objective for the remaining part. Overall, our results suggest that fully revealing the inflation target can deteriorate macroeconomic performance, even in a setting with core features – i.e. learning and heterogeneity – which would a priori lead one to think favorably about CB’s transparency. The remainder of the paper is organized as follows. Section 2 presents the ABM. Section 3 explains the simulation protocol and gives insights into the main mechanisms at work in the model. Section 4 discusses the results and Section 5 concludes.
2 The model
2.1 General features
This model elaborates on the macroeconomic ABM first introduced in Salle et al. (2013). This ABM shares several general features of the baseline NK framework (see Woodford 2003, Chap. 4). Labor is the only input, used to produce a perishable good, and the goods market operates under imperfect competition. The price/wage adjustments are characterized by nominal rigidities. Inflation is driven by both aggregate demand and inflation expectations, in line with the NK Phillips curve. The two usual transmission channels of monetary policy then result: the consumption and the expectation channels. The CB uses a Taylor rule to set the interest rate.
The economy is populated by n households, indexed by i ∈ [1,n], a single firm summarizing the supply side, and a CB. The sequence of events is as follows. First, the labor market allocates households’ labor supplies to the firm. The quantity of hired labor determines the unemployment rate, the firm’s goods supply, its labor costs and the corresponding price, as well as households’ labor income. Second, households choose their consumption and savings/debt strategy. In a third step, the goods market determines the allocation of the goods supply to each household. This allocation dictates the firm’s profit and each household’s utility. Fourth, agents update their individual behavior and inflation expectations. Finally, the CB sets the nominal interest rate for the next period, and the story starts all over again.
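The sequence of events above can be sketched as a single simulation period. Everything below is a deliberately stylized illustration: the function names, the excess-demand price adjustment and the exact form of the Taylor-type rule are our assumptions, not the paper's equations.

```python
# Stylized one-period loop mirroring the event sequence of Section 2.1.
# All functional forms here are illustrative assumptions.

def taylor_rule(pi, u, pi_target=0.02, phi_pi=1.5, phi_u=0.2,
                u_target=0.05, r_nat=0.02):
    """Taylor-type rule in the spirit of Section 2.4 (assumed form):
    react to the inflation gap and the unemployment gap, floored at zero."""
    return max(0.0, r_nat + pi + phi_pi * (pi - pi_target)
               - phi_u * (u - u_target))

def simulate_period(n, labor_supply, labor_demand,
                    goods_demand, goods_supply, pi_prev):
    # 1) labor market: hiring determines the unemployment rate
    hired = min(labor_supply, labor_demand)
    u = (n - hired) / n
    # 2)-3) consumption and goods-market confrontation
    # (stylized excess-demand pricing, not the paper's Phillips curve)
    pi = pi_prev + 0.1 * (goods_demand - goods_supply) / goods_supply
    # 4) agents would update behavior and expectations here (Section 2.2.2)
    # 5) the CB sets the nominal rate for the next period
    i = taylor_rule(pi, u)
    return u, pi, i
```

The point of the sketch is the ordering of the steps, which matters for the feedback between expectations, inflation and the policy rate.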
2.2 Households
Households supply labor and consume according to two simple rules of thumb. To implement those two rules, they need to forecast inflation. Moreover, they adapt those rules according to a social learning process, and update their inflation forecasts on the basis of the realization of inflation and the CB’s announcements.
2.2.1 Individual behavior
Labor supply
Consumption
2.2.2 Adaptation through social learning
Following the assumption of perpetual learning, the two strategies \(\gamma ^{w}_{i,t}\) and \(\gamma ^{d}_{i,t}\) are updated at the end of each period through a simple form of a genetic algorithm involving two learning operators: a social learning mechanism (imitation) and random experiments in the strategy space.^{9}
With a probability P_{ m u t }, a household can also perform a random experiment, in order to discover potentially better strategies than those already present in the households’ population. In this case, it draws a new \(\gamma ^{w}_{i,t + 1}\) coefficient from a normal distribution with the mean equal to the average of the coefficients \(\gamma ^{w}_{i,t} \) across all households, and a given standard deviation σ_{ w }: \(\mathcal {N} \left (\frac {{\sum }_{l = 1}^{n} \gamma ^{w}_{l,t}}{n}, \sigma _{w} \right )\). We truncate the draw at zero, as negative indexation coefficients are not relevant. The new strategy \(\gamma ^{d}_{i,t + 1}\) is also drawn from a normal distribution, with a given standard deviation σ_{ d }: \(\mathcal {N} \left (\frac {{\sum }_{l = 1}^{n} \gamma ^{d}_{l,t}}{n}, \sigma _{d} \right ) \), but this draw does allow for negative coefficients, as both substitution and income effects are plausible (see Eq. 5).
With a probability 1 − P_{ i m i t } − P_{ m u t }, the household keeps its strategies \((\gamma ^{w}_{i,t}, \gamma ^{d}_{i,t})\) unchanged for the next period t + 1.
Parameters σ_{ d } and σ_{ w } can be interpreted in terms of shocks: they control the endogenous variability in the model, which arises from the heterogeneity in the individual behavior and its evolution through the learning process.
High values of σ_{ d } are associated with a high level of uncertainty about the way monetary policy transmits to aggregate demand (see Eq. 5). That situation is akin to model uncertainty in the related literature about monetary policy under uncertainty and we refer to it as such in the paper.^{10} Variability induced by σ_{ w } directly translates into variability in the inflation process through the wage-indexation scheme, and leads to similar effects on inflation dynamics as cost-push shocks (see Eq. 1). Values of σ_{ w } higher than unity generate second-round effects that fuel a wage-price inflation spiral, and may give rise to a stabilization trade-off between the level of inflation and the level of output.
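The learning operators described above (imitation with probability P_imit, random experiments with probability P_mut, inertia otherwise) can be sketched as follows. The choice of the best-performing household as the imitation target is an assumption of ours, since the paper relegates the imitation details to a footnote; the mutation step follows the truncated-normal draws described in the text.

```python
import numpy as np

def update_strategies(gamma_w, gamma_d, fitness, p_imit=0.1, p_mut=0.02,
                      sigma_w=0.15, sigma_d=0.15, rng=None):
    """Social-learning update sketch: imitation, mutation, or inertia.
    Imitating the best performer is an assumed imitation rule."""
    if rng is None:
        rng = np.random.default_rng()
    n = len(gamma_w)
    new_w, new_d = gamma_w.copy(), gamma_d.copy()
    best = int(np.argmax(fitness))            # assumed imitation target
    mean_w, mean_d = gamma_w.mean(), gamma_d.mean()
    for i in range(n):
        r = rng.random()
        if r < p_imit:                        # imitate
            new_w[i], new_d[i] = gamma_w[best], gamma_d[best]
        elif r < p_imit + p_mut:              # random experiment
            new_w[i] = max(0.0, rng.normal(mean_w, sigma_w))  # truncated at 0
            new_d[i] = rng.normal(mean_d, sigma_d)            # sign-free
        # else: keep current strategy, probability 1 - p_imit - p_mut
    return new_w, new_d
```

Note how σ_w and σ_d enter only through the mutation step: this is where the "learning shocks" discussed in the text originate.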
From the preceding, it should be clear that households’ inflation expectations play a central role in the economic dynamics in our model: i) they determine the ex ante real interest rate, through which the CB affects aggregate demand, and ii) they feed the inflation dynamics, and can endogenously drive the inflation process. For these reasons, it becomes important that the CB acts as a manager of expectations (Woodford 2003).
2.2.3 Inflation expectations and CB’s announcements
We assume an inflation expectation formation mechanism that integrates jointly credibility and coordination issues, as in Demertzis and Viegi (2009).^{11} We distinguish between two regimes: IT, in which the CB announces to all households the inflation target π^{ T } and the radius of tolerance around it + / − ζ, and non-IT, in which none of these parameters is announced. Unlike Demertzis and Viegi (2009), our ABM explicitly models heterogeneous expectations, and the question of coordination naturally arises.
Under IT,
The second case of Eq. 8 corresponds to naive (noisy) expectations,^{12} which account well for the unanchoring of inflation expectations when credibility is weak.
Under non-IT,
The choice of the inflation expectations formation process under non-IT is made for various reasons. First, it allows for a credibility-success loop as under IT, although it does not provide an anchoring device around the target as such (we follow on that point an extension of their model that is suggested by Demertzis and Viegi 2009, p. 31). Second, Eq. 9 translates the lack of anchor in the absence of an explicit inflation target. In our model, if the CB meets its target, the average past inflation remains close to the target, and non-IT resembles IT. However, a series of failures in keeping inflation close to the target pulls average inflation away from the implicit inflation target, and contributes to further deviations of inflation expectations from that target. It should be further noted, as illustrated in Section 3.2, that this design of non-IT results by construction in the same amount of heterogeneity in expectations under IT and non-IT (for given values of P_{ t a r g e t } and ζ), which allows for a fair comparison of their relative performances. Third, as stressed in an experimental study by Roos and Schmidt (2012), past trends of macroeconomic variables are a key determinant of forecasts when laypeople, such as households, are concerned. Finally, the specification we use translates the Keynesian notion of “market sentiment”, which has been modelled in the context of monetary policy by Canzian (2009) or De Grauwe (2011).
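A loose paraphrase of the two expectation regimes might look as follows. This is a hedged sketch of the logic of Eqs. 8-9, not their exact specification: the additive noise term and the boolean `credible` indicator are simplifications of the credibility assessment described in the text.

```python
import numpy as np

def expected_inflation(credible, pi_target, pi_hist, regime="IT",
                       noise_sd=0.004, rng=None):
    """Stylized expectation rule (paraphrase of Eqs. 8-9, not exact).
    Under IT, a household believing the announcement expects the target;
    otherwise it extrapolates last inflation (naive, noisy).
    Under non-IT the anchor is the average of past inflation."""
    if rng is None:
        rng = np.random.default_rng()
    noise = rng.normal(0.0, noise_sd)
    if regime == "IT" and credible:
        return pi_target + noise      # anchored on the announced target
    if regime == "IT":
        return pi_hist[-1] + noise    # naive (noisy) expectations
    return np.mean(pi_hist) + noise   # non-IT: moving anchor
```

The sketch makes the "moving anchor" property discussed above visible: under non-IT the reference point drifts with realized inflation, whereas under IT it switches discretely between the target and naive extrapolation.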
From the preceding, it ensues that, in our set-up, the benefit from announcing the target mainly arises from the potential anchoring effect on households’ inflation expectations. Other economic effects of transparency have been considered in the literature; we do not take them into consideration as such in this paper, however. For example, the role of policy objective announcements as an implicit commitment device has been stressed in models where the CB has an incentive to create inflation surprises (see, for example, Walsh 1995), which is not the case in the framework we consider. Furthermore, in our model, households do not rely on interest rate changes to forecast inflation in the absence of an explicit inflation target, so that we cannot address the so-called opacity bias (see Walsh 2010). Finally, in our model, coordination is not attractive per se, because the utility function depends only on consumption; but households’ expectations indirectly influence other households’ consumption.^{14} They therefore have a collective interest in coordinating their inflation expectations. Coordination could also be assessed with respect to the performance of agents’ learning. As strategies \(\gamma ^{w}_{i,t}\) and \(\gamma ^{d}_{i,t}\) are directly related to individual inflation expectations, one could expect the social learning mechanism to yield better performance if it takes place in an environment where agents hold comparable beliefs about the future. Coordination could thus favor learning.
We now turn to the description of the rest of the model.
2.3 The firm
2.3.1 Production and price setting behavior
2.3.2 Adaptation of the goods supply
2.4 Monetary authority
2.5 Aggregation and dynamics
Markets do not necessarily clear because price and wage strategies are not set a priori so as to make agents’ strategies mutually consistent. Markets instead confront aggregate supply and aggregate demand according to rationing mechanisms.
2.5.1 Labor market
The corresponding unemployment rate is computed as \(u_{t}=\frac {n - H_{t}}{n}\). The real wage rate is given by \(\omega \equiv \frac {W_{t}}{P_{t}} = \frac {(1-\alpha )}{(1 + \mu )} H_{t}^{-\alpha }\), decreasing with H.
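The two closed-form expressions above translate directly into code; α and μ default to the baseline values reported in Table 1.

```python
def unemployment_rate(n, H):
    """u_t = (n - H_t) / n, as in Section 2.5.1."""
    return (n - H) / n

def real_wage(H, alpha=0.25, mu=0.1):
    """omega = (1 - alpha) / (1 + mu) * H**(-alpha), decreasing in H.
    Defaults are the baseline parameter values of Table 1."""
    return (1 - alpha) / (1 + mu) * H ** (-alpha)
```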
2.5.2 Goods market
Aggregate goods supply \({Y^{s}_{t}}\) is given by the production function (10) and the aggregate goods demand is given by the sum of individual ones (see Eq. 4). The two are confronted according to an efficient rationing mechanism: households are ranked by decreasing goods demand, so that the firm first faces the highest demand. This mechanism stands for the counterpart of the standard assumption of households aiming at maximizing their utility, derived from their consumption. If a household is rationed, it buys bonds b with its remaining cash. Inflation π_{ t } is computed as \( \pi _{t} = \frac {P_{t} - P_{t-1}}{P_{t-1}} \).
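The efficient rationing mechanism can be sketched as follows. The treatment of the rationed remainder as bond purchases follows the text; the function signature and the representation of demands as a plain list are our own.

```python
def ration_goods(demands, supply):
    """Efficient rationing sketch (Section 2.5.2): serve households in
    decreasing order of demand until supply is exhausted; the unserved
    part of each demand is spent on bonds with the remaining cash."""
    order = sorted(range(len(demands)), key=lambda i: -demands[i])
    served = [0.0] * len(demands)
    remaining = supply
    for i in order:
        served[i] = min(demands[i], remaining)
        remaining -= served[i]
        if remaining <= 0:
            break
    bonds = [d - s for d, s in zip(demands, served)]  # rationed amounts
    return served, bonds
```

Serving the largest demands first is what makes the rationing "efficient" in the sense described in the text: the allocation mimics utility maximization when utility is derived from consumption.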
2.5.3 Inflation dynamics
3 Model simulations and emerging dynamics
The ABM outcomes are analyzed through a large number of computer simulations that are implemented over different sets of parameter values. In this section, we first describe the method we have adopted to determine these parameter values. We then carry out a first assessment of the mechanisms at play in the ABM on the basis of the emerging dynamics and salient features that arise from the computer simulations. We finally perform an exercise of empirical validation, and show to which extent the ABM is able to account for the stylized facts that are key for the issues covered in this paper.
3.1 Parameter setting and simulation protocol
The structural parameters that underlie the microfoundations of the economy have been set according to standard values in the NK literature (see, notably, Woodford 2003). As for the consumption bounds (\(\underline {d}\) and \(\bar {d}\)), the adjustment rate of the firm 𝜖, the number of households n and the length of the simulations T, we set their values by relying on results of intensive sensitivity analyses performed on the model to allow for a first screening of the parameter space.^{21} This screening has been performed by following the validation procedure proposed by Klügl (2008), which consists in a successive sub-sampling of parameter values and systematic analyses of the plausibility of emergent dynamics vis-à-vis the specific research question at hand.^{22} This procedure results in a so-called minimal model, i.e., a model that incorporates the minimum set of assumptions and parameters to design consistent mechanisms regarding a specific research question. In particular, we checked whether the choice of specific parameter values (or ranges of values) significantly affected the dynamics generated at the micro or macro level, and whether the simulation of the model led to degenerate patterns that reflect an inconsistent behavior of the agents or the economy as a whole.^{23} During this step, we specifically observe that i) the size of the macroeconomic variables is plausible, ii) aggregate welfare is increasing and stabilizes, indicating that learning is efficient, iii) explosive dynamics of real variables are ruled out. Accordingly, we use n = 500 households and T = 800 periods. We further set \(\sigma _{\xi }\equiv \frac {\sigma _{w}}{40}\), meaning that the variance of the noise ξ is related to the variance of the proxy for supply shocks σ_{ w } (where 40 is a scaling parameter).
It is a rather intuitive modelling device: the more unstable the economy (i.e., the bigger the shocks affecting the inflation rate are), the further from the objective the inflation rate is likely to be and the more difficult it is to stabilize inflation expectations. This assumption is also made for the sake of parsimony in the parameter set. Importantly, this feature is identical under IT or under non-IT, so that the noise in inflation expectations does not vary exogenously under the two regimes.
Following that stage, we are left with the determination of the values to be taken by eight parameters, namely window, P_{ m u t }, P_{ i m i t }, σ_{ d }, σ_{ w }, ϕ_{ π }, ϕ_{ u } and ζ. This is no coincidence: those parameters are key to the interplay between the learning environment, the inflation expectations dynamics and the monetary policy strategy upon which we want to focus. We use a design of experiments (DoE)^{24} to cover the space of those remaining parameters and set their values accordingly. Large sampling methods such as Monte Carlo simulations indeed come at a computational cost when there are numerous parameters with large experiment domains, which is a priori our case. DoE allows us to minimize the sample size under a constraint of representativeness. We use the design proposed by Cioppa (2002) and provided by Sanchez (2005), which efficiently combines space-filling properties with non-correlation criteria between parameter configurations, avoiding multicollinearity issues in the analysis of the results. The design is reported in Table 5 in Appendix A. Each given set of parameter values (i.e., each configuration or experiment) is simulated 30 times, in order to account for the non-deterministic nature of the model. The simulation setting is kept the same for IT and non-IT, in order to provide a relevant comparison of the outcomes and dynamics of the model over those two regimes.
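The replication protocol (each DoE configuration simulated 30 times with distinct seeds, then averaged) can be sketched generically. Here `run_model` stands in for one full ABM simulation returning a loss value; it is an assumed placeholder, not part of the paper.

```python
import statistics

def evaluate_design(configs, run_model, replications=30):
    """Run each parameter configuration `replications` times with
    distinct seeds and average the outcome, as in the simulation
    protocol. `run_model(config, seed)` is a stand-in for one full
    ABM simulation returning, e.g., the CB's loss."""
    results = {}
    for k, config in enumerate(configs):
        losses = [run_model(config, seed) for seed in range(replications)]
        results[k] = statistics.mean(losses)
    return results
```

Averaging over seeded replications is what allows a stochastic ABM to be compared across parameter configurations on equal footing.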
3.2 Emerging dynamics and salient features
The results indicate that the distinction between IT and non-IT matters for the stabilization outcomes obtained by the monetary authorities. However, the stabilizing role of IT appears to be also affected by other parameters, namely the two monetary policy coefficients ϕ_{ π } and ϕ_{ u }, the bandwidth of the range around the target ζ as well as the features of the learning dynamics via the importance of the learning shocks (especially σ_{ w }).^{25} In Section 4, we therefore focus on those parameters specifically (thus fixing the other ones) and examine attentively their interplay with the features of IT and non-IT.
More precisely, the regression tree (Fig. 2) indicates that an IT regime coupled with a relatively large radius (higher than 0.75%) yields overall the lowest expected average loss (0.085), while an IT regime coupled with a narrow range (no more than 0.75%) yields the highest loss (0.509) in a strongly volatile inflationary environment (i.e. with a value of σ_{ w } higher than 0.32). These two results suggest that the magnitude of the radius bears a strong influence on the stabilization performance of an IT regime, which, however, also depends on the volatility arising from the learning environment.
One of the main objectives of this section is to identify the main mechanisms that underlie the impact of IT on the economy for different parameter configurations. We establish two main results.
Our second result shows that IT does not necessarily allow for the emergence of a credibility-success loop, so that a non-IT regime can outperform an IT regime. Figure 4 illustrates this problem using Experiment 3. This experiment is characterized by a strong volatility of the learning process regarding the wage indexation coefficients (σ_{ w } > 0.32), which directly impinges on the inflation process (and thus acts like a cost-push shock). Moreover, the radius around the target is low (ζ < 0.75%). According to the regression tree in Fig. 2, under that configuration, a non-IT regime outperforms an IT regime.
As Fig. 4 clearly shows, inflation lies well above the target under both regimes at the beginning of the simulations. We observe more volatile substitution coefficients under IT than under non-IT; moreover, they are lower, and can even be negative. In that case, the income effect dominates the impact of interest rate changes on consumption, which means that the usual consumption channel of monetary policy breaks down. In such a context, and all the more so if the radius of tolerance around the target is narrow, the CB quickly loses its credibility and inflation expectations become unanchored. This unanchoring in turn amplifies the volatility of the learning behavior, which feeds back negatively onto the stabilization of the inflation process, preventing the CB from benefiting from the credibility-success loop. This explains why, under IT, the CB fails to bring inflation back within the targeted range.
By contrast, under the non-IT regime, the monetary authorities manage to drive over time inflation expectations within the range, and the loss values are limited.
The outperformance of non-IT can be explained as follows: as the reference point of households’ expectations under non-IT is the average of past inflation, it works as a moving anchor which, in Experiment 3, decreases along the disinflationary path implemented by the CB. This allows the channelling of inflation expectations, despite a narrow range of tolerance ζ. By contrast, under IT, inflation expectations are only driven by naive expectations that extrapolate the decreasing path of inflation along the disinflationary path. At the micro level, the variability of agents’ behavior appears much lower under non-IT than under IT, despite the fact that the shocks σ_{ d } and σ_{ w } are the same. This means that the learning process stabilizes the emerging substitution coefficients \(\gamma ^{d}_{i,t}\) at higher levels, with much less volatility, under non-IT than under IT, which makes the consumption channel more powerful. Indexation coefficients are also markedly less heterogeneous under non-IT, and stabilize below unity, which contributes to stabilizing inflation close to the target.
The bottom panel of Fig. 5 reports the average consumption rate d_{i,t} among households as a function of the expected real interest rate. Under IT, it is clear that negative real interest rates yield higher-than-unity consumption rates (meaning that households are debtors), while positive interest rates drive the consumption rate towards the lower bound \(\underline {d}\), as households take advantage of the higher return on savings and decrease their current consumption. This translates into positive coefficients \(\gamma ^{d}_{i,t}\) (see Eq. 5), and indicates that the consumption channel of monetary policy is operational. Under non-IT, we observe a similar, even if less clear-cut, pattern: more observations than under IT show a lower-than-unity (resp. higher-than-unity) consumption rate even though real interest rates are expected to be negative (resp. positive). Again referring to the consumption rule (5), this translates into more negative \(\gamma ^{d}_{i,t}\) coefficients under non-IT than under IT, meaning that the consumption channel is occasionally less effective under non-IT than under IT. However, it should be noted that Fig. 5 pools all observations of the DoE in Table 5 together, embedding a variety of situations, as illustrated by Experiments 3 and 13 in Fig. 4.
In conclusion, this overview of the model performance indicates that the inflation target announcement is not an unconditionally powerful tool to stabilize the economy. Its effectiveness depends on the volatility stemming from the learning environment, the radius of tolerance around the target and the monetary policy rule. We provide a detailed examination of how those elements interact in Section 4.
3.3 Empirical validation of the model
Parameter values under the baseline scenario
| n | T | P_{imit} | P_{mut} | σ_{d} | ϕ_{u} | ϕ_{π} | π^{T} |
|---|---|---|---|---|---|---|---|
| 500 | 800 | 0.1 | 0.02 | 0.15 | 0.2 | 1.5 | 0.02 |

| 𝜖 | α | μ | window | σ_{w} | σ_{ξ} | \(\bar {d}\) | \(\underline {d}\) |
|---|---|---|---|---|---|---|---|
| 0.01 | 0.25 | 0.1 | 20 | 0.15 | σ_{w}/40 | 1.5 | 0.5 |
Simulated data statistics, average over 100 runs of 800 periods, discarding the first 100 periods (in order to rule out initialization effects); p-values refer to one-sided t-tests (positive mean for the first three statistics, negative mean for the credibility correlation)

| | cor(π, π^{e}) | skewness(π) | kurtosis(π) | cor(credibility, π − π^{T}) |
|---|---|---|---|---|
| ABM (baseline scenario), mean | 0.923 | 0.711 | 1.57 | − 0.384 |
| t-test p-value | 0.0000 | 0.0000 | 0.0119 | 0.0000 |
| UK (1997Q2-2013Q1) | 0.825 | 0.9622 | 0.2344 | − 0.682 |
| NZ (1987Q1-2012Q4) | 0.875 | 1.535 | 2.082 | − 0.54 |
Finally, it should be noted that non-normality is an emergent property of the model; we do not assume it beforehand. In our model, volatility results from the learning shocks, which are obtained using normal draws, but the non-linear and decentralized nature of the model generates non-normal aggregate dynamics from these normal disturbances.
We conclude that our ABM is able to account for the stylized facts that are central to our research question.
4 Optimal monetary policy under IT
Section 3 shows that the merits of IT are contingent upon the level of volatility conveyed by the learning behavior of agents. We now focus on this issue, and distinguish various environments in terms of macroeconomic volatility so as to analyse optimal monetary policy in each configuration. We characterize those environments using ranges of values for parameters σ_{ d } and σ_{ w }. We first define a stable environment by setting σ_{ d } = σ_{ w } = 0.05. We then consider two levels of model uncertainty – a moderate level, by setting σ_{ d } = 0.25, and a high level by setting σ_{ d } = 0.4 – as well as two levels of inflationary shocks – a moderate one with σ_{ w } = 0.25 and a strong one with σ_{ w } = 0.4.^{30} Unless stated otherwise, the other parameters are kept at their baseline values, reported in Table 1. We measure the CB’s performance with a loss function as given in Eq. 20. The entire methodology we use to map the loss function values to the monetary policy parameters in our non-linear ABM is detailed in Appendix B, and is based on Roustant et al. (2010) and Salle and Yıldızoğlu (2014).
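The kriging step can be illustrated with a minimal Gaussian-process interpolator over a toy loss surface. This is only a stand-in for the methodology of Roustant et al. (2010): the quadratic loss below is an assumption replacing the simulated ABM losses, and the fixed kernel length is not tuned.

```python
import numpy as np

def fit_predict_gp(X, y, Xstar, length=1.0, nugget=1e-6):
    """Minimal kriging-style interpolator with an RBF kernel:
    fit on sampled (phi_pi, phi_u) -> loss points, predict elsewhere."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / length ** 2)
    K = k(X, X) + nugget * np.eye(len(X))   # nugget for numerical stability
    alpha = np.linalg.solve(K, y)
    return k(Xstar, X) @ alpha

# Assumed toy loss surface over (phi_pi, phi_u) in [0,4] x [0,2];
# in the paper these values come from averaged ABM simulation losses.
rng = np.random.default_rng(0)
X = rng.uniform([0.0, 0.0], [4.0, 2.0], size=(40, 2))
y = (X[:, 0] - 2.5) ** 2 + 0.5 * (X[:, 1] - 0.5) ** 2

# Predict the loss on a grid and pick the minimizing policy coefficients.
grid = np.stack(np.meshgrid(np.linspace(0, 4, 41),
                            np.linspace(0, 2, 21)), -1).reshape(-1, 2)
pred = fit_predict_gp(X, y, grid)
phi_star = grid[np.argmin(pred)]   # optimized (phi_pi*, phi_u*) on the grid
```

The appeal of the metamodel approach is that the expensive ABM only needs to be simulated at the design points, while the optimization runs on the cheap interpolated surface.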
4.1 Transparency, variability and optimal monetary policy rule
Optimal monetary policy \((\phi _{\pi }^{*}, \phi _{u}^{*})\) and associated minimum loss L^{∗}, based on the optimization of the kriging model of the CB’s loss as a function of ϕ_{ π } and ϕ_{ u }
| | ζ = 0.001 – IT | ζ = 0.01 – IT | ζ = 0.001 – Non-IT | ζ = 0.01 – Non-IT |
|---|---|---|---|---|
| Stable scenario: {σ_{w}, σ_{d}} = {0.05, 0.05} | | | | |
| \(\phi _{\pi }^{*}\) | 2.66 | 4 | 4 | 2.91 |
| \(\phi _{u}^{*}\) | 0.117 | 0 | 0 | 0 |
| L^{∗} | 0.0129 | 0.0018 | 0.0061 | 0.0162 |
| Moderate model uncertainty: {σ_{w}, σ_{d}} = {0.05, 0.25} | | | | |
| \(\phi _{\pi }^{*}\) | 4 | 0 | 4 | 3.8 |
| \(\phi _{u}^{*}\) | 0 | 2 | 0.97 | 1.49 |
| L^{∗} | 0.0137 | 0.008 | 0.011 | 0.0142 |
| Strong model uncertainty: {σ_{w}, σ_{d}} = {0.05, 0.4} | | | | |
| \(\phi _{\pi }^{*}\) | 4 | 0 | 2.31 | 2.79 |
| \(\phi _{u}^{*}\) | 0 | 1.94 | 0.82 | 0.14 |
| L^{∗} | 0.0172 | 0.0123 | 0.0179 | 0.0284 |
| Strong model uncertainty and moderate inflationary shocks: {σ_{w}, σ_{d}} = {0.25, 0.4} | | | | |
| \(\phi _{\pi }^{*}\) | 0.68 | 1.21 | 4 | 4 |
| \(\phi _{u}^{*}\) | 1.2 | 0.32 | 0 | 1.49 |
| L^{∗} | 0.092 | 0.0742 | 0.0252 | 0.0376 |
| Moderate inflationary shocks: {σ_{w}, σ_{d}} = {0.25, 0.05} | | | | |
| \(\phi _{\pi }^{*}\) | 4 | 4 | 2.08 | 2.98 |
| \(\phi _{u}^{*}\) | 0 | 0 | 1.69 | 0.03 |
| L^{∗} | 0.072 | 0.041 | 0.022 | 0.025 |
| Strong inflationary shocks: {σ_{w}, σ_{d}} = {0.4, 0.05} | | | | |
| \(\phi _{\pi }^{*}\) | 4 | 2.68 | 4 | 3.74 |
| \(\phi _{u}^{*}\) | 2 | 1.22 | 1.36 | 1.21 |
| L^{∗} | 0.0697 | 0.0572 | 0.0408 | 0.0487 |
| Strong inflationary shocks and moderate model uncertainty: {σ_{w}, σ_{d}} = {0.4, 0.25} | | | | |
| \(\phi _{\pi }^{*}\) | 2.87 | 2.66 | 3.29 | 2.84 |
| \(\phi _{u}^{*}\) | 0.75 | 0 | 0.4 | 0.6 |
| L^{∗} | 0.08 | 0.075 | 0.0393 | 0.0374 |
| Moderate inflationary shocks and model uncertainty: {σ_{w}, σ_{d}} = {0.25, 0.25} | | | | |
| \(\phi _{\pi }^{*}\) | 2.38 | 0.68 | 4 | 3.23 |
| \(\phi _{u}^{*}\) | 1.2 | 1.85 | 0 | 0 |
| L^{∗} | 0.0484 | 0.0472 | 0.045 | 0.029 |
Table 6 in Appendix A gives the design of experiments of the ϕ_{ π } and ϕ_{ u } values that we have used to estimate and validate the kriging metamodels that underlie our quantitative analysis, and Table 8 in Appendix B reports the details of the estimation.
In the stable environment, an IT regime with a relatively broad range (ζ = 1%) outperforms a non-IT strategy. In the absence of adverse shocks, the CB may benefit from the credibility/success loop and stabilize inflation expectations and inflation. Nevertheless, with a very tight objective (ζ = 0.1%), credibility and success cannot be ensured, the IT regime loses its attractiveness, and its performance lies in the same range as those obtained under a non-IT regime. The benefits that can be reaped from announcing a vague objective have been emphasized notably by Stein (1989) and Garfinkel and Oh (1995), but in a theoretical framework that incorporates time inconsistency issues. In such a setting, the CB can create surprise inflation by announcing a wide range, which allows it to depart from the target without losing its credibility. In our model, the CB has no incentive to create inflation surprises, but the learning environment may cause deviations of inflation and unemployment from their targets and put the CB’s credibility at risk.
We interpret the values of the optimal rule coefficients in terms of a trade-off between the two objectives of the CB. As long as the target is credible, expectations are anchored, and movements of inflation reflect changes in production (cf. second term of Eq. 19). The two objectives of the CB then move in the same direction, and reacting to the deviation of one from its target simultaneously moves the other one towards its target. In this case, there is no trade-off between the two objectives, and the optimal monetary policy prescribes a one-sided solution (with either \(\phi _{\pi }^{*}\) or \(\phi _{u}^{*}\) being zero). Because the expectation channel plays a strongly dominant role in our ABM, a credible inflation target acts as a powerful anchoring device under IT, effectively serving as a second monetary policy instrument that stabilizes inflation (see Svensson 2010 for a similar interpretation). The monetary policy reaction to inflation can then become redundant, which explains why the optimal reaction to inflation is zero in some cases under IT with a wide range.^{31}
By contrast, if expectations become unanchored, they drive inflation away from the target (cf. first component of Eq. 19), and movements in inflation no longer reflect changes in production. In this case, a trade-off arises between stabilizing inflation and the level of activity. The optimal monetary policy is then likely to be a two-sided solution, according to which the CB has to react to both objectives.
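The contrast between the anchored and unanchored cases can be illustrated with a toy recursion. This is our own stylized sketch, not the paper's Eq. 19: average expected inflation is a credibility-weighted mix of the announced target and a naive backward-looking forecast, and inflation simply follows expectations.

```python
def simulate(credibility, target=0.02, periods=50, shock=0.03):
    # Stylized dynamics (not the paper's Eq. 19): expected inflation is a
    # credibility-weighted average of the announced target and a naive
    # (last observed) forecast; inflation then follows expectations.
    pi = target
    path = []
    for t in range(periods):
        expected = credibility * target + (1 - credibility) * pi
        pi = expected + (shock if t == 0 else 0.0)  # one-off inflationary shock
        path.append(pi)
    return path

anchored = simulate(credibility=0.9)    # shock dies out almost immediately
unanchored = simulate(credibility=0.1)  # shock propagates persistently
```

With high credibility the deviation from target shrinks by a factor (1 − credibility) each period, so the shock is quickly reabsorbed; with low credibility the naive component dominates and the deviation persists, which is the unanchoring mechanism described above in miniature.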
Introducing higher variability in the consumption channel (i.e. increasing σ_{d}) deteriorates the performance of the CB across all regimes. For instance, under IT with a wide range, the loss value of the CB is more than four times higher when σ_{d} = 0.25 (\(\mathcal {L}^{*}= 0.008\)) than when σ_{d} = 0.05, and almost seven times higher when σ_{d} = 0.4 (\(\mathcal {L}^{*}= 0.0123\)). Following our interpretation of the optimal coefficients in terms of trade-off, model uncertainty does not create a trade-off between the two objectives of the CB under IT, as the CB optimally adopts a one-sided reaction. With a widely defined objective (ζ = 1%), the credibility/success loop stabilizes inflation expectations better than with a narrow objective (ζ = 0.1%), and the CB then only needs to react to deviations of unemployment.
Overall, under a moderate or high degree of uncertainty concerning the real transmission channel of monetary policy, an IT strategy slightly outperforms a non-IT strategy, but only if it is implemented with a wide range (ζ = 1%). With a tight objective (ζ = 0.1%), loss values are roughly the same as under non-IT.
Under inflationary shock dominance (i.e. moderate or high σ_{w}), macroeconomic outcomes strongly deteriorate. As explained in Section 2.2.3 (see also Eq. 19), this kind of volatility directly feeds into the inflation process. The CB is therefore highly likely to miss its inflation objective and lose its credibility. Consequently, expectations become unanchored, and inflation becomes mostly driven by naive expectations, no longer in line with the aggregate demand stance. This phenomenon creates a trade-off between the CB’s objectives, which translates into the optimal monetary policy: as soon as inflationary shocks are strong enough (i.e. for σ_{w} = 0.4), the optimal rule implies a strong two-sided reaction to both inflation and unemployment – see also Alichi et al. (2009) for a similar analysis in the presence of cost-push shocks. In this case, a non-IT strategy clearly outperforms an IT regime. The worst performances are obtained under IT with a tight objective: the loop between credibility and success is strongly impaired, and loss values are much lower when the CB does not announce its target.
If both model uncertainty and inflationary shocks coexist, performances again deteriorate compared to the cases with only one type of shock (i.e. either σ_{w} > 0.05 or σ_{d} > 0.05), and non-IT outperforms IT, especially when IT comes with a tight objective. The trade-off between the two objectives seems to be mitigated under non-IT as long as the two shocks are moderate (in the case where σ_{w} = σ_{d} = 0.25, the optimal reaction under non-IT is a one-sided strategy).
Finally, in all the cases we have considered, the optimal monetary policy rule is always an aggressive one. This result is in line with previous findings about optimal monetary policy under uncertainty (see Schmidt-Hebbel and Walsh 2009 for a review). Model uncertainty, i.e. uncertainty concerning the parameters that depict the transmission mechanisms of monetary policy, characterizes our environment. It is a case of multiplicative uncertainty, as shocks on agents’ behavior translate to inflation and economic activity in a non-linear way.^{32} There is no consensus on optimal monetary policy in such a context. The conservatism principle first established by Brainard (1967) prescribes a moderate rule. However, when shocks and parameters are correlated, as is the case in our model, the Brainard principle does not hold. Other contributions call for an aggressive rule under other cases of “Brainardian” uncertainty, for example when the CB cannot accurately estimate how inflation responds to inflation expectations (see Söderström 2002). Moreover, when radical uncertainty surrounds the economic model and is tackled through the tools of robust control theory, optimal monetary policy rules are hawkish ones (Giannoni 2007), especially when the CB cannot identify which parameters are uncertain (Tetlow and von zur Muehlen 2001). We also conclude in favor of aggressive rules under model uncertainty, which, in our case, primarily stems from learning.
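The Brainard logic, and why correlation overturns it, can be made concrete with the textbook one-period problem: the CB chooses an instrument u to offset a gap s through an uncertain transmission coefficient a, minimizing E[(s − a·u + e)²]. Parameter variance attenuates the optimal response, but a positive covariance between the coefficient and the additive shock can restore, or even reverse, the attenuation.

```python
def brainard_policy(gap, a_mean, a_var, cov_ae=0.0):
    # Textbook one-period problem: choose u to minimize E[(gap - a*u + e)^2],
    # where the transmission coefficient a has mean a_mean and variance a_var,
    # and cov_ae = Cov(a, e) with the additive shock e (E[e] = 0).
    # The first-order condition gives:
    #   u* = (a_mean * gap + cov_ae) / (a_mean**2 + a_var)
    return (a_mean * gap + cov_ae) / (a_mean ** 2 + a_var)

certainty = brainard_policy(1.0, 0.5, 0.0)          # 2.0: fully offsets the gap
attenuated = brainard_policy(1.0, 0.5, 0.25)        # 1.0: Brainard attenuation
correlated = brainard_policy(1.0, 0.5, 0.25, 0.75)  # 2.5: attenuation reversed
```

With zero variance the certainty-equivalent rule fully offsets the gap; adding variance halves the response; adding a sufficiently positive covariance pushes the optimal response above the certainty-equivalent one, consistent with the claim that the conservatism principle fails when shocks and parameters are correlated.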
4.2 Can partial announcements outperform pure IT or non-IT regimes?
Some contributions to the debate about the optimal degree of transparency have analysed partial announcements (see, among others, Cornand and Heinemann 2008 and Walsh 2009). In those works, it is assumed that only a fraction P ∈ [0,1] of agents receives the CB’s signal, i.e. the so-called “degree of publicity” P can be lower than one. According to Cornand and Heinemann (2008), this is the case when the CB chooses to provide news only to certain communities, or in a language that is understood only by some. Furthermore, public announcements are generally released through media, but each agent follows a given medium only with some probability, so that a CB can choose the degree of publicity by selecting appropriate media for publication. Agents may also have limited ability to process information, or may face costs to acquire it, so that an immediate release is not necessarily incorporated into all agents’ decisions. Partial dissemination of precise public information may then be an optimal communication strategy, as it combines the positive effects of valuable information for the agents who receive it with containment of the threat of overreaction, by limiting the number of receivers (Cornand and Heinemann 2008). Walsh (2007b) also shows that the optimal degree of economic transparency depends on the existence of cost-push or demand shocks.
Table 4 Optimal announcement policy (degree of publicity P^{∗} and range ζ^{∗}) and associated minimum loss L^{∗}, based on the optimization of the kriging model of the CB’s loss as a function of ζ and P (ϕ_{π} = 1.5, ϕ_{u} = 0.5)
| | {σ_{w}, σ_{d}} = {0.05, 0.05} | {σ_{w}, σ_{d}} = {0.25, 0.05} | {σ_{w}, σ_{d}} = {0.4, 0.05} | {σ_{w}, σ_{d}} = {0.4, 0.25} |
|---|---|---|---|---|
| ζ^{∗} | 0.0068 | 0.0035 | 0.0064 | 0.0029 |
| P^{∗} | 0.36 | 0.91 | 0.51 | 0.6 |
| L^{∗} | 0.0015 | 0.0465 | 0.0522 | 0.0533 |

| | {σ_{w}, σ_{d}} = {0.05, 0.25} | {σ_{w}, σ_{d}} = {0.05, 0.4} | {σ_{w}, σ_{d}} = {0.25, 0.4} | {σ_{w}, σ_{d}} = {0.25, 0.25} |
|---|---|---|---|---|
| ζ^{∗} | 0.01 | 0.009 | 0.0051 | 0.0089 |
| P^{∗} | 0.91 | 0.65 | 0.45 | 0.87 |
| L^{∗} | 0.0107 | 0.0276 | 0.0444 | 0.0413 |
In the stable scenario (i.e. {σ_{w},σ_{d}} = {0.05, 0.05}), a low degree of publicity of the target (P^{∗} = 0.36), coupled with a medium range (about 0.7%), is optimal. However, the minimum loss obtained (0.0015) is close to the one obtained under a pure IT regime (i.e. when P = 1, ζ = 1% and {ϕ_{π},ϕ_{u}} = {4, 0}, see Table 3). We thus confirm the result of Demertzis and Viegi (2009): the publicity of the target is superfluous in a weakly volatile environment. This result is in line with what has been observed during the Great Moderation period, during which developed countries, whether under IT or non-IT, experienced low macroeconomic variability (Geraats 2009). The performance of IT in these countries thus appears, at most, “non-negative” (Walsh 2009).
When introducing shocks (increasing σ_{d} or σ_{w}, or both simultaneously), the expected loss increases, and is minimized with partial announcements (i.e. P < 1).^{33} The stronger the shocks (whether σ_{d} or σ_{w}), the lower the optimal degree of publicity. Intuitively, partial dissemination of the target balances the risk of losing credibility in the face of inflation variability against the gain from coordinating inflation expectations at the targeted level.
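This trade-off can be illustrated with a hypothetical reduced-form loss in the degree of publicity P. The functional form below is our own stylized assumption, not derived from the ABM: the anchoring gain rises with P, while the expected cost of a visible target miss rises with P and with the volatility of inflationary shocks.

```python
def expected_loss(P, shock_vol, anchoring_gain=1.0, credibility_risk=2.0):
    # Hypothetical reduced form (assumed coefficients): a wider audience
    # anchors more expectations (first term falls in P), but raises the cost
    # of a visible target miss (second term rises with P and with volatility).
    return anchoring_gain * (1 - P) ** 2 + credibility_risk * shock_vol * P ** 2

def best_P(shock_vol, grid=101):
    # Grid search over the degree of publicity P in [0, 1].
    return min((p / (grid - 1) for p in range(grid)),
               key=lambda P: expected_loss(P, shock_vol))
```

This quadratic form has the closed-form optimum P^{∗} = 1/(1 + credibility_risk × shock_vol), so the optimal publicity is interior (P^{∗} < 1) and decreasing in shock volatility, which mirrors the qualitative pattern just described.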
The stronger the inflationary shocks (i.e. the higher σ_{w}), the higher the optimal range ζ to be communicated around the target, but the optimal range values remain below the one under the stable scenario (0.0068). Conversely, the higher the model uncertainty (i.e. the higher σ_{d}), the lower the optimal range values. Yet, these optimal range values are high (typically above 0.5%), and higher than those under the stable scenario or under an environment with strong inflationary shocks.
Note that this result contradicts Walsh (2007a), who establishes that complete transparency is optimal in the face of demand shocks. Those demand shocks are comparable to the disturbances associated with a high level of σ_{d} in our model: in both frameworks, they correspond to the CB’s control error on demand through the nominal interest rate. However, the two models work differently. In Walsh’s set-up, transparency on the target allows firms to infer the kind of shocks (demand or supply) that the CB is expecting while, in case of opacity, firms set their forecasts using the CB’s instrument only, and the so-called opacity bias arises. As firms only adjust their prices in reaction to supply shocks, a demand shock contaminates inflation if firms misinterpret the change in the interest rate in reaction to the demand shock as a change in response to a supply shock. In our model, the gain (or loss) from being transparent stems from the benefit of being credible (or the cost of losing credibility). Credibility, in turn, anchors the heterogeneous private inflation expectations, and reduces macroeconomic volatility through more favorable micro behavior (i.e. lower-than-unity indexation coefficients \(\gamma ^{w}_{i,t}\) and high positive values of the substitution coefficients \(\gamma ^{d}_{i,t}\)). In our set-up, partial dissemination of the target then limits the CB’s risk of losing credibility, while maintaining a partial anchorage in case of success. In that respect, the optimal range around the target is relatively high, close to 1%. This result indicates that insurance against a credibility loss turns out to be the primary concern of a CB facing high model uncertainty.
By contrast, the need to provide a clear focal point to coordinate expectations – through an explicit inflation target associated with a moderate range around it – is of critical importance when volatility comes mainly from the inflation process per se – see also Libich (2011) for a similar argument in a context of wage inflation. Accordingly, a lower range minimizes the expected loss under σ_{w}-led than under σ_{d}-led volatility.
5 Conclusion
This paper revisits the virtues of transparency in an inflation targeting regime using an agent-based model. By transparency, we mean the announcement of the numerical value of the inflation target together with a range around it. The agent-based perspective offers a comprehensive way of modelling heterogeneity and bounded rationality from a collection of interacting agents, while preserving the main monetary policy transmission mechanisms underlying the dynamics of the baseline NK model. In particular, our ABM incorporates the consumption (real interest rate) channel and the expectation channel of monetary policy.
In our setting, the benefit from announcing the target arises from the emergence of a virtuous circle through a loop between credibility and success. Accordingly, inflation expectations may remain anchored at the CB’s inflation target and inflation may be stabilized around the target. The trade-off that the CB faces between the inflation objective and the level of activity may be loosened. Our results confirm that this mechanism prevails in a rather stable environment, such as the so-called Great Moderation period. However, this virtuous circle is not robust to the introduction of strong inflationary pressures, even when coupled with uncertainty affecting the real transmission channel of monetary policy. This is because inflationary shocks feed back into the inflation dynamics and may produce a self-defeating mechanism.
In this case, partial dissemination of the target may limit the CB’s risk of losing credibility, while maintaining a partial anchorage in case of success. We find that providing a clear signal to anchor the inflation expectations of one part of the public, while allowing for a less tightly defined objective for the remaining part, achieves an optimal management of expectations when inflation and inflation expectations display a high degree of volatility. In the face of model uncertainty, insurance against the loss of credibility through the announcement of a wide range appears of primary importance.
Overall, our results point to a lack of robustness of a fully transparent inflation targeting regime in volatile economic environments.
Footnotes
- 1.
The literature is impressive on these crossing issues. For a recent survey on IT, see Svensson (2010) and the references therein. A useful reference is Walsh (2009). On the impact of transparency and communication of the CB, see, among others, Geraats (2002, 2009) and Woodford (2005). Empirical evidence on the effects of IT on expectations has been provided by Johnson (2002, 2003).
- 2.
Empirical evidence supports the view that the credibility of the CB, inherited from past inflation performances, acts as a primary determinant of inflation expectations (Blinder et al. 2008).
- 3.
- 4.
Lower case symbols stand for individual variables, and upper case symbols for aggregate ones. s and d superscripts indicate respectively, supply and demand variables.
- 5.
- 6.
- 7.
In DSGE models, transversality conditions are imposed to avoid explosive dynamics in the bond accumulation process. Such restrictions cannot be set in our model, in which we have to impose period-by-period constraints. In that respect, we impose an upper limit \(\bar {d} > 1\) on the consumption adjustment rate d, in order to rule out excessive debt and household default, and a lower bound \(\underline {\textit {d}} > 0 \) to ensure minimal subsistence consumption at each period. This way, consumption cannot be driven to zero.
- 8.
We depart from the behavioral rules introduced in the literature on learning about consumption (see the seminal contribution of Allen and Carroll 2001). This is because this literature seeks to explain how households may learn to smooth their consumption path over time assuming a constant nominal interest rate and a zero-inflation world, while we aim here at specifying the consumption channel of monetary policy through changes in the real interest rate.
- 9.
- 10.
- 11.
On the credibility issue, our expectation model shares also common features with Bomfim and Rudebusch (2000), Alichi et al. (2009) and Libich (2011). See also Arifovic et al. (2010) for a private inflation expectations formation process that is partially based on adaptive learning and takes the announcement of an inflation target by the central bank into account.
- 12.
See De Grauwe (2011) for a comparable mechanism.
- 13.
We could have considered a noisy target but our focus is on credibility issues of the announcement and the way the CB can use it to manage expectations; that is why we do not want to add issues of clarity, which have been tackled in Salle et al. (2013).
- 14.
For instance, if most agents anticipate a rise in inflation, actual inflation will rise next period through the expectation channel. Agents who did not expect that rise may lose purchasing power, both through a misassessment of the real rate of return of their savings and an under-indexation of their reservation wage.
- 15.
We assume a deterministic natural production level and set A_{ t } = 1, ∀t (the long run value of the technology assumed by Woodford 2003, p. 225).
- 16.
Normally, the mark-up is computed over the average cost, and not the marginal cost, but we select here the latter in order to keep the analogy with the elasticity rule of the standard NK model, and the comparability with the DSGE literature in general (see also Rotemberg and Woodford 1999).
- 17.
If we overlook potential rationing, having a labor demand or a good supply strategy is equivalent from the firm’s point of view, as labor is the only input (see Eq. 10). Through the mark-up price setting (12), adjusting price is also equivalent to adjusting quantities, so that the firm has actually only one decision-making variable, expressed here in terms of labor demand.
- 18.
- 19.
We consider the non-linear form of the rule rather than the log-linearized version, given the non-linear dynamics of our framework; see Ashraf and Howitt (2012) for a comparable specification.
- 20.
Households are then either fully employed, i.e. h_{i,t} = 1, or fully unemployed, i.e. h_{i,t} = 0, except for the last hired, who can be partially employed, i.e. 0 < h_{i,t} < 1.
- 21.
Results are not displayed here but the whole validation procedure is detailed in the PhD thesis of the main author, see Salle (2012).
- 22.
- 23.
For this reason, we rule out parameter values such that 𝜖 > 0.05, \(\bar {d}>2\) and \(\underline {d}< 0.2\). Other values of these parameters have been found to have little, if any, influence on aggregate emergent dynamics.
- 24.
See, for example, Goupy and Creighton (2007) for an introduction. This method is widely used in computer simulations in areas such as industry, chemistry, computer science, biology, etc.
- 25.
A special case obtains when the number of past observations used to forecast inflation (window) does not exceed five periods (the lower bound of the interval we have retained). This is the case in three among the thirty-three configurations. In that case, the expected loss is high, probably because the expectations formation process is very reactive to changes in the inflation process.
- 26.
- 27.
An inflation target was first announced in September 1992 in the UK, but the operational responsibility, which implies greater independence and credibility, was given to the Bank of England only in May 1997.
- 28.
We have also shown that inflation time series in our model display a significant autocorrelation pattern. This result seems natural, as the micro behavioral rules in the ABM prescribe adjustments of past behavior, which by construction introduces inertia in the model dynamics. These additional results are available upon request.
- 29.
More precisely, we use the credibility index of de Mendonca (2007), which accounts for the range of tolerance around the target.
We apply the same index to both countries to make the comparison easier, although the UK does not announce a range around the target. However, an implicit range of ± 1% may prevail, as the Governor is held to account through an open letter to the Chancellor if the target is missed by more than 1%.
- 30.
Those values belong to the range that has been analysed in Section 3, and, as shown below in the results of the numerical simulations, those σ_{ d } and σ_{ w } values are high enough to imply significant deviations from the stable case σ_{ d } = σ_{ w } = 0.05 in terms of loss function values.
- 31.
It should be noted that the dynamics arising from the ABM are not exposed to determinacy issues as in RE models, since ABMs simulate trajectories that are multiple by nature. Moreover, the hypotheses underlying the construction of our ABM rule out the possibility of sunspot equilibria, as inflation expectations only depend on realized past inflation. Consequently, our results should not be directly compared to those stressing the necessity of complying with the Taylor principle in the related literature on NK models (see, e.g. Bullard and Mitra 2002).
- 32.
In the case of multiplicative uncertainty, shocks impact the parameters of the model, so that the noise propagates in proportion to the variables concerned, in contrast to the additive case, in which shocks enter the model as a term that is simply added to the model equations.
- 33.
It should be noted that the loss values are higher in the case of partial announcements than in Table 3 under pure IT or non-IT regimes. However, in the present exercise, the monetary policy coefficients are fixed, while they constitute a degree of freedom of monetary policy in Table 3. As a result, we should be cautious in directly comparing loss values between Tables 3 and 4 and in concluding that a pure non-IT regime outperforms any form of partial announcements.
- 34.
- 35.
We have n = 17 observation points of \(\mathcal {L}\) over D (see DoE Table 6, Appendix A). As the model is stochastic, we repeat each simulation 30 times. The kriging estimation is then performed by averaging \(\mathcal {L}\) values over the 30 repetitions (van Beers and Kleijnen 2004). This results in n = 17 observations of \(\mathcal {L}\) over D.
- 36.
More complex forms would involve more parameters to be estimated, besides \({\sigma ^{2}_{L}}\), \(\theta _{\phi _{\pi }}\) and \(\theta _{\phi _{u}}\).
Acknowledgments
We are grateful to Jouko Vilmunen for interesting comments and fruitful discussions throughout the redaction of this paper, as well as to Camille Cornand and Prof. Jean-Christophe Poutineau for helpful advice, and to Prof. Paul De Grauwe, Prof. Cars Hommes, Prof. Jean-Christophe Péreau and Prof. Thomas Vallée for interesting comments on an earlier draft of this paper. We also thank participants of the 2012 SMYE at ZEW, Mannheim, Germany, of the 2012 AFSE conference in Paris, of the 2013 CEF conference in Vancouver, Canada, of the 1st Bordeaux Workshop on Macro ABM co-organized by the Bank of Finland, and of a seminar at the University of Surrey on February 26th, 2014, for interesting remarks. Isabelle Salle acknowledges financial support from the EU FP7 RAstaNEWS project (grant agreement number 320278).
Funding
Isabelle Salle acknowledges financial support from the EU FP7 RAstaNEWS project (grant number 320278)
References
- Alichi A, Chen H, Clinton K, Freedman C, Johnson M, Kamenik O, Kisinbay T, Laxton D (2009) Inflation targeting under imperfect policy credibility, IMF Working Papers 09/94, International Monetary FundGoogle Scholar
- Allen T W, Carroll C (2001) Individual learning about consumption. Macroecon Dyn 5:255–271CrossRefGoogle Scholar
- Arifovic J (2000) Evolutionary algorithms in macroeconomic models. Macroecon Dyn 4(03):373–414CrossRefGoogle Scholar
- Arifovic J, Dawid H, Deissenberg C, Kostyshyna O (2010) Learning benevolent leadership in a heterogenous agents economy. J Econ Dyn Control 34:1768–1790CrossRefGoogle Scholar
- Ashraf Q, Howitt P (2012) How inflation affects macroeconomic performance: an agent-based computational investigation NBER Working Papers 18225. National Bureau of Economic Research IncGoogle Scholar
- Assenza T, Delli Gatti D, Grazzini J (2015) Emergent dynamics of a macroeconomic agent based model with capital and credit. J Econ Dyn Control 50 (C):5–28CrossRefGoogle Scholar
- Blinder A S, Ehrmann M, Fratzscher M, De Haan J, Jansen D-J. (2008) Central bank communication and monetary policy: a survey of theroy and evidence. J Econ Lit 46(4):910–945CrossRefGoogle Scholar
- Bomfim A N, Rudebusch G D (2000) Opportunistic and deliberate disinflation under imperfect credibility. J Money Credit Bank 32(4):707–21CrossRefGoogle Scholar
- Brainard W (1967) Uncertainty and the effectiveness of policy. Amer Econ Rev Papers Proc 57:411–425Google Scholar
- Bullard M, Mitra K (2002) Learning about monetary policy rules. J Monet Econ 49(6):1105–1129CrossRefGoogle Scholar
- Canzian J (2009) Three essays in agent-based macroeconomics. Doctoral Thesis, University of Trento CIFREMGoogle Scholar
- Cioppa T (2002) Efficient nearly orthogonal and space-filling experimental designs for high-dimensional complex models. Doctoral Dissertation in philosophy in operations research, Naval postgraduate schoolGoogle Scholar
- Cornand C, Heinemann F (2008) Optimal degree of public information dissemination. Econ J 118(528):718–742CrossRefGoogle Scholar
- De Grauwe P (2011) Animal spirits and monetary policy. Econ Theory 47:423–457CrossRefGoogle Scholar
- De Grauwe P (2012) Lectures on behavioral macroeconomics. Princeton University PressGoogle Scholar
- de Mendonca H (2007) Towards credibility from inflation targeting: the Brazilian experience. Appl Econ 39:2599–2615CrossRefGoogle Scholar
- Delli Gatti D, Gaffeo E, Gallegati M, Palestrini A (2005) The apprentice wizard: monetary policy, complexity and learning. New Math Natural Comput (NMNC) 1(01):109–128CrossRefGoogle Scholar
- Demertzis M, Viegi N (2008) Inflation targets as focal points. Int J Central Bank 4(1):55–87Google Scholar
- Demertzis M, Viegi N (2009) Inflation targeting: a framework for communication. B E J Macroecon 99(1):44Google Scholar
- Dosi G, Fagiolo G, Roventini A (2010) Schumpeter meeting Keynes: a policy-friendly model of endogenous growth and business cycles. J Econ Dyn Control 34(9):1748–1767CrossRefGoogle Scholar
- Faust J, Svensson L (2001) Transparency and credibility: monetary policy with unobservable goals. Int Econ Rev 42(2):369–397CrossRefGoogle Scholar
- Friedman M (1957) A theory of the consumption function. Princeton University PressGoogle Scholar
- Gali J (2008) Monetary policy, inflation, and the business cycle: an introduction to the new Keynesian framework. Princeton University PressGoogle Scholar
- Garfinkel M, Oh S (1995) When and how much to talk, credibility and flexibility in monetary policy with private information. J Monet Econ 35(2):341–357CrossRefGoogle Scholar
- Geraats P (2002) Central bank transparency. Econ J 112(483):532–565CrossRefGoogle Scholar
- Geraats P (2009) Trends in monetary policy transparency. Int Financ 12 (2):235–268CrossRefGoogle Scholar
- Giannoni P (2007) Robust optimal monetary policy in a forward-looking model with parameter and shock uncertainty. J Appl Econ 22(1):179–213CrossRefGoogle Scholar
- Goupy J, Creighton L (2007) Introduction to design of experiments with JMP examples, 3rd edn. SAS Institute Inc., Cary, NC, USAGoogle Scholar
- Holland J, Goldberg D, Booker L (1989) Classifier systems and genetic algorithms. Artif Intell 40:235–289CrossRefGoogle Scholar
- Klügl F (2008) A validation methodology for agent-based simulations. In: Wainwright R, Haddad H (eds) Proceedings of the 2008 ACM symposium on applied computing (SAC), FortalezaGoogle Scholar
- Leijonhufvud A (2006) Agent-based macro. In: Tesfatsion L, Judd K. (eds) Handbook of computational economic. chapter 36, vol 2, North-Holland, pp 1625–1646Google Scholar
- Lengnick M (2013) Agent-based macroeconomics: a baseline model. J Econ Behav Org 86(C):102–120CrossRefGoogle Scholar
- Libich J (2011) Inflation nutters? Modelling the flexibility of inflation targeting. B E J Macroecon 11(1):1–17CrossRefGoogle Scholar
- Lux T, Schornstein S (2005) Genetic learning as an explanation of stylized facts of foreign exchange markets. J Math Econ 41(1–2):169–196CrossRefGoogle Scholar
- Matheron G (1963) Principles of geostatistics. Econ Geol 58(8):1246–1266CrossRefGoogle Scholar
- Mebane W J, Sekhon J (2011) Genetic optimization using derivatives: the rgenoud package for R. J Stat Softw 42(11):1–26CrossRefGoogle Scholar
- Oeffner M (2008) Agent-based Keynesian macroeconomics – an evolutionary model embedded in an agent-based computer simulation Doctoral dissertation/ Bayerische Julius - Maximilians Universität, WurzburgGoogle Scholar
- Orphanides A, Williams J C (2007) Inflation targeting under imperfect knowledge. In: Mishkin F S, Schmidt-Hebbel K (eds) Monetary policy under inflation targeting, vol 11 of Central Banking, Analysis, and Economic Policies book series, chapter 4. Banco Central de Chile, Santiago, pp 77–123
- R Development Core Team (2009) R: a language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. ISBN 3-900051-07-0. http://www.R-project.org
- Raberto M, Teglio A, Cincotti S (2008) Integrating real and financial markets in an agent-based economic model: an application to monetary policy design. Comput Econ 32:147–162
- Roos M W, Schmidt U (2012) The importance of time series extrapolation for macroeconomic expectations. German Econ Rev 13(2):196–210
- Rotemberg J J, Woodford M (1999) The cyclical behavior of prices and costs. In: Taylor J B, Woodford M (eds) Handbook of macroeconomics, chapter 16, vol 1. Elsevier, pp 1051–1135
- Roustant O, Ginsbourger D, Deville Y (2010) DiceKriging, DiceOptim: two R packages for the analysis of computer experiments by kriging-based metamodeling and optimization. J Stat Softw VV:II
- Sacks J, Welch W, Mitchell T, Wynn H (1989) Design and analysis of computer experiments. Stat Sci 4(4):409–423
- Salle I (2012) Learning heterogeneity and monetary policy: an application to inflation targeting regimes. PhD thesis, University of Bordeaux
- Salle I, Yıldızoğlu M (2014) Efficient sampling and metamodeling for computational economic models. Comput Econ 44(4):507–53
- Salle I, Yıldızoğlu M, Sénégas M-A (2013) Inflation targeting in a learning economy: an ABM perspective. Econ Modell 34:114–128
- Sanchez S (2005) NOLH designs spreadsheet. Software available online via http://diana.cs.nps.navy.mil/SeedLab/
- Sargent T (1993) Bounded rationality in macroeconomics. Oxford University Press
- Schmidt-Hebbel K, Walsh C (2009) Monetary policy under uncertainty and learning: an overview. In: Schmidt-Hebbel K, Walsh C E, Loayza N (eds) Monetary policy under uncertainty and learning, vol 13 of Central Banking, Analysis, and Economic Policies book series, chapter 1. Central Bank of Chile, pp 1–25
- Seppecher P (2012) Flexibility of wages and macroeconomic instability in an agent-based computational model with endogenous money. Macroecon Dyn 16(S2):284–297
- Söderström U (2002) Monetary policy with uncertain parameters. Scand J Econ 104(1):125–145
- Stein J (1989) Cheap talk and the Fed: a theory of imprecise policy announcements. Amer Econ Rev 79(1):32–42
- Svensson L (1999) Inflation targeting as a monetary policy rule. J Monet Econ 43(3):607–654
- Svensson L E (2010) Inflation targeting. In: Friedman B M, Woodford M (eds) Handbook of monetary economics, chapter 22, vol 3. Elsevier, pp 1237–1302
- Taylor J (1993) Discretion versus policy rules in practice. Carnegie-Rochester Conf Ser Public Policy 39(1):195–214
- Tetlow R J, von zur Muehlen P (2001) Robust monetary policy with misspecified models: does model uncertainty always call for attenuated policy? J Econ Dyn Control 25(6/7):911–949
- van Beers W, Kleijnen J (2004) Kriging interpolation in simulation: a survey. In: Ingalls R G, Rossetti M D, Smith J S, Peters B A (eds) Handbook of Statistics, chapter Computer experiments
- Walsh C (1995) Optimal contracts for central bankers. Amer Econ Rev 85(1):150–67
- Walsh C (2007a) Optimal economic transparency. Int J Central Bank 3(1):5–36
- Walsh C (2007b) Transparency, flexibility and inflation targeting. In: Mishkin F S, Schmidt-Hebbel K (eds) Monetary policy under inflation targeting, vol 11 of Central Banking, Analysis, and Economic Policies book series, chapter 7. Central Bank of Chile, pp 227–263
- Walsh C (2009) Inflation targeting: what have we learned? Int Financ 12(2):195–233
- Walsh C (2010) Transparency, the opacity bias and optimal flexible inflation targeting. Mimeo
- Woodford M (2003) Interest and prices: foundations of a theory of monetary policy. Princeton University Press
- Woodford M (2005) Central bank communication and policy effectiveness. In: The Greenspan era: lessons for the future. Federal Reserve Bank of Kansas City, Kansas City, pp 399–474
- Yıldızoğlu M, Sénégas M-A, Salle I, Zumpe M (2014) Learning the optimal buffer-stock consumption rule of Carroll. Macroecon Dyn 18(4):727–752
Copyright information
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.