
Large Shocks and the Business Cycle: The Effect of Outlier Adjustments

  • Research Paper
  • Journal of Business Cycle Research

Abstract

This study examines the impact of outlier-adjusted data on business cycle inference, using the coincident indicators of Japan's composite index (CI). To estimate the CI and business cycles, the study proposes a Markov switching dynamic factor model that incorporates Student's t-distributions in both the idiosyncratic noise and the factor equation. The model also includes a stochastic volatility process to identify whether a large shock is associated with the business cycle. The empirical analysis shows that both the factor and the idiosyncratic components have fat-tailed error distributions, and that the estimated CI and recession probabilities are close to those published by the Economic and Social Research Institute (ESRI). Compared with the CI estimated from the unadjusted data set, the outlier adjustment reduces the estimated depth of recessions. Moreover, a shock decomposition shows that the financial crisis in mid-2008 was driven by an increase in clustered shocks and large unexpected shocks, whereas the Great East Japan Earthquake in 2011 manifested as idiosyncratic noise and did not cause a recession. Consequently, when outliers are associated with the business cycle, it is not desirable to use the outlier-adjusted data set.


Notes

  1. The ESRI does not include data on personal income, which is a major coincident economic indicator in the United States.

  2. Coincident indicators are available at http://www.esri.cao.go.jp/en/stat/di/di-e.html.

  3. These statistics denote the Ljung–Box statistics, adjusted as in Diebold (1988), for testing the null hypothesis of no autocorrelation up to lag 24.

  4. Strictly speaking, this model is a static factor model.

  5. There has been some recent research using large data sets (Stock and Watson 2014).

  6. The index of producers’ shipment of durable consumer goods is used instead of the index of operating rates from November 2011.

  7. See Spiegelhalter et al. (2002) for details of the DIC.

  8. CD represents the p value of the test statistic for the difference between two sample means (i.e., dividing all the generated random draws into three parts, we compute the sample means of the first \(10\%\) and the last \(50\%\) of the draws), where the test statistic is asymptotically distributed as a standard normal random variable. Convergence of the MCMC draws to the target distribution is rejected when CD is less than 0.01 (see Geweke 1992 for a detailed discussion of CD).

  9. The inefficiency factor, which is an index that measures how well the chain mixes, as proposed by Chib (2001), is defined as

    $$\begin{aligned} \text{ IF } = 1 + 2 \sum _{l =1}^{\infty } \hat{\rho }_l, \end{aligned}$$

    where \(\hat{\rho }_l\) denotes the sample autocorrelation at lag l. It is the ratio of the numerical variance of the sample posterior mean to the variance of the sample mean from the hypothetical uncorrelated draws.

  10. The Japanese Government approved the consumption tax law in 1988 and implemented it in April 1989. The consumption tax was raised from 3 to 5% in April 1997 and again to 8% in April 2014.
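As a quick illustration of the inefficiency factor defined in note 9, a minimal sketch in Python (the truncation point `max_lag` and all names are our own choices, not the paper's):

```python
import numpy as np

def inefficiency_factor(draws, max_lag=100):
    """IF = 1 + 2 * sum of sample autocorrelations, truncated at max_lag."""
    x = np.asarray(draws, dtype=float)
    n = len(x)
    xc = x - x.mean()
    var = np.dot(xc, xc) / n
    # sample autocorrelation at lags 1..max_lag
    rho = [np.dot(xc[l:], xc[:n - l]) / (n * var) for l in range(1, max_lag + 1)]
    return 1.0 + 2.0 * float(np.sum(rho))
```

For i.i.d. draws the factor is close to one; strongly autocorrelated chains give much larger values.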

References

  • Aas, K. (2005). The generalized hyperbolic skew Student’s \(t\)-distribution. Journal of Financial Econometrics, 4(2), 275–309.

  • Altissimo, F., Bassanetti, A., Cristadoro, R., Forni, M., Lippi, M., Reichlin, L., & Veronese, G. (2001). EuroCoin: A real-time coincident indicator of the Euro area business cycle. CEPR working paper, no. 3108.

  • Aruoba, S. B., Diebold, F. X., & Scotti, C. (2009). Real-time measurement of business conditions. Journal of Business and Economic Statistics, 27(4), 417–427.

  • Ascari, G., Fagiolo, G., & Roventini, A. (2015). Fat-tail distributions and business-cycle models. Macroeconomic Dynamics, 19(2), 465–476.

  • Balke, N. S., & Fomby, T. B. (1994). Large shocks, small shocks, and economic fluctuations: Outliers in macroeconomic time series. Journal of Applied Econometrics, 9(2), 181–200.

  • Bräuning, F., & Koopman, S. J. (2014). Forecasting macroeconomic variables using collapsed dynamic factor analysis. International Journal of Forecasting, 30(3), 572–584.

  • Camacho, M., Quiros, G. P., & Poncela, P. (2014). Green shoots and double dips in the euro area: A real-time measure. International Journal of Forecasting, 30(3), 520–535.

  • Chan, J. C. C., & Grant, A. L. (2016). On the observed-data deviance information criterion for volatility modeling. Journal of Financial Econometrics, 14(4), 772–802.

  • Chauvet, M. (1998). An econometric characterization of business cycle dynamics with factor structure and regime switching. International Economic Review, 39(4), 969–996.

  • Chib, S. (2001). Markov chain Monte Carlo methods: Computation and inference. In J. J. Heckman & E. Leamer (Eds.), Handbook of econometrics (pp. 3569–3649). Amsterdam: Elsevier.

  • Chib, S., & Greenberg, E. (1995). Understanding the Metropolis–Hastings algorithm. American Statistician, 49(4), 327–335.

  • Christiano, L. J., & Den Haan, W. (1996). Small sample properties of GMM for business cycle analysis. Journal of Business and Economic Statistics, 14, 309–327.

  • Cúrdia, V., Del Negro, M., & Greenwald, D. L. (2014). Rare shocks, great recessions. Journal of Applied Econometrics, 29(7), 1031–1052.

  • Diebold, F. (1988). Empirical modeling of exchange rate dynamics. New York: Springer.

  • Doornik, J. A. (2006). Ox: Object oriented matrix programming language. London: Timberlake Consultants Press.

  • Durbin, J., & Koopman, S. J. (2002). A simple and efficient simulation smoother for state space time series analysis. Biometrika, 89(3), 603–616.

  • Fagiolo, G., Napoletano, M., & Roventini, A. (2008). Are output growth-rate distributions fat-tailed? Journal of Applied Econometrics, 23(5), 639–669.

  • Franses, P. H., & Paap, R. (1999). Does seasonality influence the dating of business cycle turning points? Journal of Macroeconomics, 21(1), 79–92.

  • Fukuda, S., & Onodera, T. (2001). A new composite index of coincident economic indicators in Japan: How can we improve the forecast performance? International Journal of Forecasting, 17(3), 483–498.

  • Geweke, J. (1992). Evaluating the accuracy of sampling-based approaches to the calculation of posterior moments. In J. M. Bernardo, J. O. Berger, A. P. Dawid, & A. F. M. Smith (Eds.), Bayesian statistics 4 (pp. 169–193). Oxford: Oxford University Press.

  • Hamilton, J. D. (1989). A new approach to the economic analysis of nonstationary time series and the business cycle. Econometrica, 57(2), 357–384.

  • Hamilton, J. D. (2011). Calling recessions in real time. International Journal of Forecasting, 27, 1006–1026.

  • Hayashida, M., & Hewings, G. J. D. (2009). Regional business cycles in Japan. International Regional Science Review, 32(2), 119–147.

  • Ishihara, T., & Omori, Y. (2017). Portfolio optimization using dynamic factor and stochastic volatility: Evidence on fat-tailed error and leverage. Japanese Economic Review, 68(1), 63–94.

  • Kaufmann, S. (2000). Measuring business cycles with a dynamic Markov switching factor model: An assessment using Bayesian simulation methods. Econometrics Journal, 3(1), 39–65.

  • Kim, C. J., & Nelson, C. R. (1998). Business cycle turning points, a new coincident index, and tests of duration dependence based on a dynamic factor model with regime-switching. The Review of Economics and Statistics, 80(2), 188–201.

  • Kim, C. J., & Nelson, C. R. (1999). State-space models with regime switching. Cambridge, MA: MIT Press.

  • Kim, M., & Yoo, J. (1995). New index of coincident indicators: A multivariate Markov switching factor model approach. Journal of Monetary Economics, 36(3), 607–630.

  • Mariano, R. S., & Murasawa, Y. (2003). A new coincident index of business cycles based on monthly and quarterly series. Journal of Applied Econometrics, 18(4), 427–443.

  • Matas-Mir, A., Osborn, D. R., & Lombardi, M. J. (2008). The effect of seasonal adjustment on the properties of business cycle regimes. Journal of Applied Econometrics, 23(2), 257–278.

  • Nakajima, J. (2015). Bayesian analysis of multivariate stochastic volatility with skew distribution. Econometric Reviews, 36(5), 546–562.

  • Nakajima, J., Kasuya, M., & Watanabe, T. (2011). Bayesian analysis of time-varying parameter vector autoregressive model for the Japanese economy and monetary policy. Journal of the Japanese and International Economies, 25(3), 225–245.

  • Pitt, M. K., & Shephard, N. (1999). Filtering via simulation: Auxiliary particle filters. Journal of the American Statistical Association, 94(446), 590–599.

  • Primiceri, G. E. (2005). Time-varying structural vector autoregressions and monetary policy. Review of Economic Studies, 72(3), 821–852.

  • Spiegelhalter, D. J., Best, N. G., Carlin, B. P., & van der Linde, A. (2002). Bayesian measures of model complexity and fit. Journal of the Royal Statistical Society Series B, 64(4), 583–639.

  • Stock, J. H., & Watson, M. W. (2014). Estimating turning points using large data sets. Journal of Econometrics, 178, 368–381.

  • Tierney, L. (1994). Markov chains for exploring posterior distributions. Annals of Statistics, 22(4), 1701–1762.

  • Tsay, R. S. (1988). Outliers, level shifts, and variance changes in time series. Journal of Forecasting, 7(1), 1–20.

  • Watanabe, T. (2001). On sampling the degree-of-freedom of Student’s-t disturbances. Statistics and Probability Letters, 52(2), 177–181.

  • Watanabe, T. (2003). Measuring business cycle turning points in Japan with a dynamic Markov switching factor model. Monetary and Economic Studies, Institute for Monetary and Economic Studies, Bank of Japan, 21(1), 35–68.

  • Watanabe, T. (2014). Bayesian analysis of business cycle in Japan using Markov switching model with stochastic volatility and fat-tail distribution. Economic Research at Hitotsubashi University, 65(2), 156–167.

  • Watanabe, T., & Omori, Y. (2004). A multi-move sampler for estimating non-Gaussian time series models: Comments on Shephard & Pitt (1997). Biometrika, 91(1), 246–248.

  • Yu, J., & Meyer, R. (2006). Multivariate stochastic volatility models: Bayesian estimation and model comparison. Econometric Reviews, 25, 361–384.


Acknowledgements

We gratefully acknowledge the helpful discussions and suggestions of Yasutomo Murasawa, Toshiaki Watanabe, and, especially, two anonymous referees regarding several points in the paper. This research is supported by a grant-in-aid from the Zengin Foundation for Studies on Economics and Finance.

Author information

Correspondence to Yoshihiro Ohtsuka.

Sampling Algorithm for Parameters

1.1 Sampling \(\gamma _i\)

Let \(\Delta (\psi _i ) = 1 - \psi _i L\), where L denotes the lag operator, and let \(\psi _i^* = (1 - \psi _i^2)^{\frac{1}{2}}\). Given c and s, Eq. (3) can be rewritten as

$$\begin{aligned} \bar{y}_i = \bar{x}_i \gamma _i + \sqrt{\lambda _i} \epsilon _i, \end{aligned}$$

where \(\bar{y}_i = \left( \psi _i^* y_{i1}, \Delta (\psi _i)y_{i2}, \ldots , \Delta (\psi _i) y_{iT} \right) '\), \(\bar{x}_i = \left( \psi _i^* c_1, \Delta ( \psi _i ) c_{2}, \ldots , \Delta ( \psi _i ) c_T \right) '\), \(\lambda _{i} = diag( \lambda _{i1}, \ldots , \lambda _{iT} )\), and \(\epsilon _i = ( \epsilon _{1i}, \ldots , \epsilon _{Ti} )'\). Thus, the full conditional distribution of \(\gamma _i\) is as follows:

$$\begin{aligned} \gamma _i | \theta _{- \gamma _i}, \vartheta , y \sim \mathcal {N}( \hat{\mu }_{\gamma _i}, \hat{\sigma }^2_{\gamma _i} ), \end{aligned}$$

where \(\hat{\mu }_{\gamma _i} = \hat{\sigma }^2_{\gamma _i}( \sigma ^{-2}_i \bar{x}'_i \lambda _i^{-1}\bar{y}_i + \sigma _{\gamma 0}^{-2} \gamma _0 )\) and \(\hat{\sigma }^2_{\gamma _i} = ( \sigma _i^{-2}\bar{x}_i' \lambda _i^{-1}\bar{x}_i + \sigma _{\gamma 0}^{-2} )^{-1}\).
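Since this full conditional is a standard conjugate normal update, one Gibbs draw can be sketched as follows (a hedged illustration; the argument names, and treating \(\bar{x}_i\), \(\bar{y}_i\) and the mixing weights \(\lambda _{it}\) as precomputed vectors, are our assumptions):

```python
import numpy as np

def draw_gamma(xbar, ybar, lam, sigma2, gamma0, sigma2_gamma0, rng):
    """One draw of gamma_i from N(mu_hat, s2_hat), given heteroskedastic
    weights lam (the t-mixing variables) and a N(gamma0, sigma2_gamma0) prior."""
    w = 1.0 / (sigma2 * lam)                        # diagonal of (sigma_i^2 Lambda_i)^{-1}
    s2_hat = 1.0 / (np.dot(xbar * w, xbar) + 1.0 / sigma2_gamma0)
    mu_hat = s2_hat * (np.dot(xbar * w, ybar) + gamma0 / sigma2_gamma0)
    return rng.normal(mu_hat, np.sqrt(s2_hat))
```

With an informative likelihood, repeated draws concentrate around the weighted least-squares estimate.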

1.2 Sampling \(\psi _i\) and \(\sigma ^2_i\)

To sample \(\psi _i\), we employ the Metropolis–Hastings (MH) algorithm proposed by Chib and Greenberg (1995). Let \(\hat{y}_i = (z_{i2}, \ldots , z_{iT})'\) and \(\hat{x}_i = (z_{i1}, \ldots , z_{i,T-1})'\). The full conditional distribution of \(\psi _i\) is given by

$$\begin{aligned} \pi ( \psi _i | \theta _{- \psi _i}, \vartheta , y ) \propto \psi _i^* \exp \left[ - \frac{(\psi _i^* z_{i1})^2}{2 \lambda _{i1} \sigma ^2_i} - \frac{(\psi _i - \psi _{0i})^2}{2 \sigma ^2_{\psi _{i0}}} - \frac{1}{2 \sigma _i^2}\sum _{t=2}^T \frac{(z_{it} - \psi _i z_{i, t-1} )^2}{\lambda _{it}} \right] . \end{aligned}$$

Because it is difficult to draw the parameter directly from this distribution, we generate a candidate value from the following distribution:

$$\begin{aligned} \psi _i | \theta _{- \psi _i}, \vartheta , y \sim \mathcal {TN}_{|\psi _i| < 1}( \hat{\mu }_{\psi _i}, \hat{\sigma }^2_{\psi _i} ), \end{aligned}$$

where

$$\begin{aligned} \hat{\mu }_{\psi _i} = \hat{\sigma }^{2}_{\psi _i} \left( \sum _{t=2}^T \frac{z_{it} z_{i, t-1}}{\lambda _{it}} + \frac{\psi _{0i}}{\sigma ^2_{\psi _{0i}}} \right) , \quad \text{ and }\quad \hat{\sigma }^2_{\psi _i} = \sigma ^2_i \left( \sum _{t=2}^T \frac{z_{i, t-1}^2}{\lambda _{it}} + \frac{1}{\sigma ^2_{\psi _{0i}}} \right) ^{-1} . \end{aligned}$$

Let \(\psi _i^{old}\) be the previous value. Then, we draw a candidate \(\psi ^{new}_i\) from \(\mathcal {N}( \hat{\mu }_{\psi _i}, \hat{\sigma }^2_{\psi _i} )\), truncated on \((-1, 1)\), in order to satisfy the stationary condition, and accept it with probability

$$\begin{aligned} \alpha ( \psi _i^{old}, \psi ^{new}_i ) = \min \left[ \frac{\pi (\psi ^{new}_i | \theta _{- \psi _i}, \vartheta , y)}{\pi (\psi _i^{old}| \theta _{- \psi _i}, \vartheta , y)} ,\ 1 \right] . \end{aligned}$$

Next, the full conditional distribution of \(\sigma ^2_i\) is as follows:

$$\begin{aligned} \sigma ^2_i | \theta _{- \sigma ^2_i}, \vartheta , y \sim \mathcal {IG}\left( \frac{\hat{\tau }_i}{2},\ \frac{\hat{\delta }_{i}}{2} \right) , \end{aligned}$$

where \(\hat{\tau }_i = \tau _{0i} + T\) and \(\hat{\delta }_{i} = \frac{( \psi _i^* z_{i1} )^2}{\lambda _{i1}} + \sum _{t=2}^T \frac{(z_{it} - \psi _i z_{i, t-1})^2}{\lambda _{it}} + \delta _{0i}\).
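The truncated-normal candidate and accept–reject rule above can be sketched as a single MH update (illustrative only; `log_g` stands for the log of the factor of the target that the Gaussian candidate omits, e.g. the \(\psi _i^*\) first-observation term):

```python
import numpy as np

def mh_step_psi(psi_old, log_g, mu_hat, s_hat, rng):
    """One MH step for an AR(1) coefficient: candidate from N(mu_hat, s_hat^2)
    truncated to (-1, 1); accept with probability min(g(new)/g(old), 1)."""
    psi_new = rng.normal(mu_hat, s_hat)
    while not (-1.0 < psi_new < 1.0):       # rejection-sample the truncation
        psi_new = rng.normal(mu_hat, s_hat)
    if np.log(rng.uniform()) < log_g(psi_new) - log_g(psi_old):
        return psi_new
    return psi_old
```

The truncation constant cancels in the acceptance ratio, so only `log_g` needs evaluating.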

1.3 Sampling c

We show the state space representation of the model for drawing the latent variables c. Let \(\Delta y_{it} = y_{it} - \psi _i y_{i, t-1}\), for \(i = 1, \ldots , n\), \(\Delta y_t = ( \Delta y_{1t}, \ldots , \Delta y_{nt} )'\), \(\Delta (\phi ) = 1 - \phi (L_q)\) and \(\sigma ^2_{ht} = \exp ( h_t )\). Then, the model can be represented as

$$\begin{aligned} \Delta y_t&= {} \Gamma \alpha _t + \epsilon _t, \quad \epsilon _t \sim \mathcal {N}( 0_{n}, H_t ), \end{aligned}$$
(11)
$$\begin{aligned} \alpha _{t}&= {} m_{t} + T \alpha _{t - 1} + G_t \eta _t, \quad \eta _t \sim \mathcal {N}( 0_{q+1}, I_{q+1} ), \end{aligned}$$
(12)

where \(I_{q+1}\) denotes the \((q+1) \times (q+1)\) identity matrix, and \(\Gamma \), \(\alpha _t\), \(H_t\), \(m_t\), T, and \(G_t\) are given by

$$\begin{aligned} {}&\Gamma = \left[ \begin{array}{ccc} \gamma _1 &{} - \gamma _1 \psi _1 &{} 0'_{q-1}\\ \vdots &{} \vdots &{} \vdots \\ \gamma _n &{} - \gamma _n \psi _n &{} 0'_{q-1} \end{array} \right] , \ \alpha _t = \left[ \begin{array}{c} c_t \\ \vdots \\ c_{t-q} \end{array} \right] , \ H_t = diag( \lambda _{1t} \sigma ^2_1, \ldots , \lambda _{nt}\sigma ^2_n) , \\ {}&m_t = \left[ \begin{array}{c} \Delta (\phi ) \mu _{t} \\ 0_{q} \end{array} \right] ,\ T = \left[ \begin{array}{cc} {\phi '}&{} \\ I_{q - 1} &{} 0_{q - 1} \end{array} \right] ,\ G_t = \left[ \begin{array}{cc} \sqrt{\omega _{t}} \sigma _{ht} &{} 0_{q}'\\ 0_{q} &{} 0_{q \times q} \end{array} \right] . \end{aligned}$$

Since Eqs. (11) and (12) constitute a linear Gaussian state space model, we can sample c using the efficient simulation smoother (Durbin and Koopman 2002).

1.4 Sampling \(\mu \)

Let \(\tilde{x}_t = ( 1 - s_t, s_t )\). Given c and s, Eq. (4) can be rewritten as

$$\begin{aligned} \tilde{y} = \tilde{x} \mu + \bar{\omega }^{\frac{1}{2}} \eta , \end{aligned}$$

where \(\tilde{y} = ( \Delta (\phi )c_1, \Delta (\phi )c_2, \ldots , \Delta (\phi )c_T )'\), \(\tilde{x} = ( \Delta (\phi ) x_1', \Delta (\phi )x_2', \ldots , \Delta (\phi )x_T' )'\), \(\bar{\omega } = diag( \omega _1, \ldots , \omega _T )\), and \(\eta = ( \eta _1, \ldots , \eta _T )'\). The full conditional distribution of \(\mu \) can be obtained as

$$\begin{aligned} \mu | \theta _{- \mu }, \vartheta , y \sim \mathcal {N}( \hat{\mu }, \hat{\Sigma }_{\mu } )I[ \mu ^{(0)} < \mu ^{(1)} ], \end{aligned}$$

where \(\hat{\mu } = \hat{\Sigma }_{\mu }( \tilde{x}'\bar{\omega }^{-1}\tilde{y} + \Sigma _{\mu 0}^{-1}\mu _0 )\) and \(\hat{\Sigma }_{\mu } = ( \tilde{x}'\bar{\omega }^{-1}\tilde{x} + \Sigma _{\mu 0}^{-1} )^{-1}\). If a generated value does not satisfy the inequality \(\mu ^{(0)} < \mu ^{(1)}\), it is rejected and drawn again.
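The rejection scheme for the ordering constraint \(\mu ^{(0)} < \mu ^{(1)}\) is simple to implement (a sketch under our own naming; the `max_tries` guard is our addition against a pathological posterior):

```python
import numpy as np

def draw_mu_ordered(mu_hat, Sigma_hat, rng, max_tries=10000):
    """Draw (mu0, mu1) from N(mu_hat, Sigma_hat) subject to mu0 < mu1,
    rejecting and redrawing any sample that violates the ordering."""
    chol = np.linalg.cholesky(Sigma_hat)
    for _ in range(max_tries):
        mu = mu_hat + chol @ rng.standard_normal(2)
        if mu[0] < mu[1]:
            return mu
    raise RuntimeError("ordering constraint rejected too often")
```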

1.5 Sampling \(\phi \)

Let \(\dot{c}_t = c_t - \mu _t\) and \(\tilde{c}_{t-1} = ( \dot{c}_{t-1}, \ldots , \dot{c}_{t-q} )\). Then, the full conditional distribution of \(\phi \) is given by

$$\begin{aligned} \phi | \theta _{- \phi }, \vartheta , y \sim \mathcal {N}( \hat{\mu }_{\phi }, \hat{\Sigma }_{\phi } )I[S(\phi )], \end{aligned}$$

where

$$\begin{aligned} \hat{\mu }_{\phi } = \hat{\Sigma }_{\phi } \left( \sum _{t=q}^{T} \frac{\tilde{c}'_{t-1} \dot{c}_t }{\omega _{t}\sigma ^{2}_{ht}} + \Sigma _{\phi _0}^{-1}\phi _0 \right) ,\quad \text{ and }\quad \hat{\Sigma }_{\phi } = \left( \sum _{t=q}^{T} \frac{\tilde{c}_{t-1} \tilde{c}_{t-1}'}{\omega _{t} \sigma ^{2}_{ht}} + \Sigma _{\phi _0}^{-1} \right) ^{-1} \end{aligned}$$

Let \(\phi ^{old}\) be the previous value. Then, we draw a candidate \(\phi ^{new}\) from \(\mathcal {N}( \hat{\mu }_{\phi }, \hat{\Sigma }_{\phi } )\), truncated on \(S(\phi )\) in order to satisfy the stationarity condition, and accept it with probability

$$\begin{aligned} \alpha ( \phi ^{old}, \phi ^{new}) = \min \left[ \frac{\pi (\phi ^{new} | \theta _{- \phi }, \vartheta , y)}{\pi (\phi ^{old}| \theta _{- \phi }, \vartheta , y)} ,\ 1 \right] . \end{aligned}$$

1.6 Sampling p

Following Watanabe (2014), we employ an acceptance–rejection algorithm for sampling \(p_{0}\) and \(p_{1}\). The full conditional distribution of p is as follows:

$$\begin{aligned} \pi ( p | \theta _{- p}, \vartheta , y )&\propto {} \frac{(1 - p_{0})^{s_1}( 1- p_{1})^{1 - s_1}}{2 - p_{0} - p_{1}} \\&\quad \times \, p_{0}^{\iota _{00} + n_{00}} (1 - p_{0})^{\iota _{01} + n_{01}} p_{1}^{\iota _{11} + n_{11}} (1 - p_{1})^{\iota _{10} + n_{10}} \\ {}&= {} g_{p}( p ) \times p_{0}^{\iota _{00} + n_{00}} (1 - p_{0})^{\iota _{01} + n_{01}} p_{1}^{\iota _{11} + n_{11}} (1 - p_{1})^{\iota _{10} + n_{10}}, \end{aligned}$$

where \(n_{ij}\) denotes the number of transitions from state i to state j. We sample proposed values \(p_{0}^{new}\) and \(p_{1}^{new}\) from the following independent beta distributions:

$$\begin{aligned} p_{0}^{new} \sim Beta( \iota _{00} + n_{00}, \iota _{01} + n_{01} ) ,\quad p_{1}^{new} \sim Beta( \iota _{11} + n_{11}, \iota _{10} + n_{10} ) . \end{aligned}$$

Since \(0< g_{p}( p^{new} ) <1\), we employ the MH step. Finally, we accept the proposed values with probability

$$\begin{aligned} \alpha ( p^{old}, p^{new} ) = \min \left[ \frac{g_{p}( p^{new} )}{g_{p}( p^{old} )}, 1 \right] . \end{aligned}$$
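A sketch of this step in Python (names and the \(2\times 2\) layout are our own; `iota` holds the beta prior parameters \(\iota _{ij}\) and `n` the transition counts \(n_{ij}\)):

```python
import numpy as np

def draw_p(s1, n, iota, p_old, rng):
    """One draw of (p0, p1): independent Beta proposals, then an MH
    correction by the g_p(p) factor from the initial-state term."""
    def g(p0, p1):
        return (1 - p0) ** s1 * (1 - p1) ** (1 - s1) / (2 - p0 - p1)
    p0 = rng.beta(iota[0, 0] + n[0, 0], iota[0, 1] + n[0, 1])
    p1 = rng.beta(iota[1, 1] + n[1, 1], iota[1, 0] + n[1, 0])
    alpha = min(g(p0, p1) / g(*p_old), 1.0)
    return (p0, p1) if rng.uniform() < alpha else p_old
```

Because the beta proposal already matches the likelihood-plus-prior factor, only the bounded ratio \(g_p(p^{new})/g_p(p^{old})\) enters the acceptance probability.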

1.7 Sampling s

For sampling s, we employ the multi-move sampler (Kim and Nelson 1998, 1999). The joint conditional distribution of s is as follows:

$$\begin{aligned} f( s | \theta , \vartheta _{- s}, y ) = f( s_T | \theta , \vartheta _{- s_{T}}, y ) \prod _{t=1}^{T-1} f(s_t | s_{t+1}, \theta , \vartheta _{- s_t}, y), \end{aligned}$$
(13)

First, we sample \(s_T\) from the first term on the right-hand side of Eq. (13). Given \(s_{t+1}\), we can then proceed backwards in time using

$$\begin{aligned} f( s_{t} | s_{t+1}, \theta , \vartheta _{- s_t}, y) \propto f( s_{t+1} | s_{t} ) f(s_{t} | \theta , \vartheta _{- s_t}, y), \end{aligned}$$

where \(f( s_{t+1} | s_{t} )\) denotes the transition probability. The filtered probabilities \(f(s_{t} | \theta , \vartheta _{- s_t}, y)\), for \(t=1, \ldots , T\), are calculated with the Hamilton (1989) filter. Thus, \(s_{T}\) is drawn from \(f(s_{T} | \theta , \vartheta _{- s_T}, y)\), and \(s_{t}\), for \(t = T-1, \ldots , 1\), is drawn backwards in time from the decomposition above.
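Given the filtered probabilities from the Hamilton filter and the transition matrix, the backward draw can be sketched as follows (illustrative; `filt[t]` is assumed to store the filtered probability of regime 1 at time t):

```python
import numpy as np

def sample_states(filt, P, rng):
    """Backward multi-move draw of s. filt[t] = Pr(s_t = 1 | data up to t),
    P[i, j] = Pr(s_{t+1} = j | s_t = i)."""
    T = len(filt)
    s = np.empty(T, dtype=int)
    s[-1] = rng.uniform() < filt[-1]
    for t in range(T - 2, -1, -1):
        # Pr(s_t = j | s_{t+1}, data) is proportional to P[j, s_{t+1}] * filt_t(j)
        w1 = P[1, s[t + 1]] * filt[t]
        w0 = P[0, s[t + 1]] * (1.0 - filt[t])
        s[t] = rng.uniform() < w1 / (w0 + w1)
    return s
```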

1.8 Sampling \(\lambda _{it}\) and \(\nu _i\)

Given c, s and \(\omega \), the full conditional distribution of \(\lambda _{it}\) is as follows:

$$\begin{aligned} \lambda _{it} | \theta , \vartheta _{- \lambda _{it}}, y \sim \mathcal {IG}\left( \frac{\hat{a}_{it}}{2}, \frac{\hat{b}_{it}}{2}\right) . \end{aligned}$$

where \(\hat{a}_{it} = \nu _{i} + 1\) and \(\hat{b}_{it} = \nu _i + \sigma _i^{-2} ( z_{it} - \psi _i z_{i, t-1})^2\).

Finally, the full conditional distribution of \(\nu _i\) is given by

$$\begin{aligned} \pi ( \nu _i | \lambda _i ) \propto \nu _i^{A_{i0} - 1} \exp ( - B_{i0} \nu _i ) \prod _{t=1}^{T} \frac{\left( \frac{\nu _i}{2}\right) ^{\frac{\nu _i}{2}}}{\Gamma \left( \frac{\nu _i}{2} \right) } \lambda _{it}^{-\left( \frac{\nu _i}{2} + 1\right) } \exp \left( - \frac{\nu _i}{2\lambda _{it}} \right) . \end{aligned}$$

To sample the degrees-of-freedom parameter, we employ the AR-MH algorithm extended by Watanabe (2001). The AR-MH algorithm was proposed by Tierney (1994) (see also Chib and Greenberg (1995) for details). This algorithm samples the parameter using an acceptance–rejection (AR) step followed by an MH step. Suppose there is a candidate density \(h(\nu _i)\) that can be sampled directly, and let \(f(\nu _i)\) denote the target density. Then, the AR step proceeds as follows:

  1. 1.

    Sample the candidate \(\nu _i\) from \(h(\nu _i)\) and u from the uniform distribution \(\mathcal {U}[0, 1]\).

  2. 2.

    If \(u \le \frac{f( \nu _i )}{h( \nu _i )}\), return \(\nu _i^{new} = \nu _i\). Else, go to 1.

This step is repeated until a candidate draw is accepted. In this study, we use a normal distribution as the candidate function. Let \(p^* ( \nu _i )\) denote \( \pi ( \nu _i | \lambda _i )\) without its normalizing constant; the log of \(p^* ( \nu _i )\) is given by

$$\begin{aligned} \ln p^*( \nu _i ) = \frac{T}{2}\nu _i \ln \left( \frac{\nu _i}{2}\right) - T \ln \Gamma \left( \frac{\nu _i}{2}\right) - J_i \nu _i + (A_{i0} - 1)\ln ( \nu _i ), \end{aligned}$$
(14)

where

$$\begin{aligned} J_i = \frac{1}{2} \sum _{t =1}^{T} \left\{ \ln ( \lambda _{it} ) + \frac{1}{\lambda _{it}} \right\} + B_{i0}. \end{aligned}$$

We apply the second-order Taylor expansion around \(\nu _i = \nu _i^*\) to (14), which yields

$$\begin{aligned} \ln p^*( \nu _i )&\approx {} \ln p^*( \nu _i^* ) + C_i' ( \nu _i - \nu _i^* ) + \frac{C_i''}{2}( \nu _i - \nu _i^* )^2\\&= {} h(\nu _i), \end{aligned}$$

where

$$\begin{aligned} C_i'&= {} \left. \frac{d \ln p^*(\nu _i)}{d\nu _i} \right| _{\nu _i = \nu _i^*} = \frac{T}{2} \left\{ \ln \left( \frac{\nu _i^*}{2}\right) + 1 - \psi \left( \frac{\nu _i^*}{2}\right) \right\} - J_i + \frac{A_{i0} - 1}{\nu _i^*},\\ C_i''&= {} \left. \frac{d^2 \ln p^*(\nu _i)}{d \nu _i^2} \right| _{\nu _i = \nu _i^*} = \frac{T }{2} \left\{ \frac{1}{\nu _i^*} - \frac{1}{2} \psi ' \left( \frac{\nu _i^*}{2}\right) \right\} - \frac{A_{i0} - 1}{\nu _i^{*2}}, \end{aligned}$$

with \(\psi ( \nu _i )\) and \(\psi '( \nu _i )\) denoting a digamma function defined by \(\psi ( \nu _i ) = \frac{d \ln \Gamma ( \nu _i )}{d \nu _i}\), and a trigamma function defined by \(\psi '( \nu _i ) = \frac{d \psi ( \nu _i )}{d \nu _i}\). Then, the normalized version of \(h(\nu _i )\) has a normal density with mean \(\nu _i^* - \frac{C_i'}{C_i''}\) and variance \(- \frac{1}{C_i''}\).
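The mean and variance of the resulting Gaussian candidate follow directly from \(C_i'\) and \(C_i''\). A sketch using SciPy's digamma/trigamma (the argument `J` plays the role of \(J_i\), and the expansion point `nu_star` is assumed to lie where \(C_i'' < 0\)):

```python
import numpy as np
from scipy.special import digamma, polygamma

def candidate_normal_for_nu(T, J, A0, nu_star):
    """Gaussian AR-MH candidate for nu from a second-order Taylor expansion
    of log p*(nu) around nu_star: mean nu* - C1/C2, variance -1/C2."""
    C1 = (0.5 * T * (np.log(nu_star / 2.0) + 1.0 - digamma(nu_star / 2.0))
          - J + (A0 - 1.0) / nu_star)
    C2 = (0.5 * T * (1.0 / nu_star - 0.5 * polygamma(1, nu_star / 2.0))
          - (A0 - 1.0) / nu_star ** 2)
    return nu_star - C1 / C2, -1.0 / C2
```

When `nu_star` is the mode, \(C_i' = 0\) and the candidate is centered exactly at the mode.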

Next, let the previous sampled value of \(\nu _i\) be \(\bar{\nu _i}\). Then, the MH step proceeds as follows:

  1. 1.

    Calculate the acceptance probability \(q(\nu _i)\)

    • If \(p^*( \bar{\nu _i} ) < \kappa _{\nu _i} h( \bar{\nu _i} )\), then set \(q(\nu _i) = 1\);

    • If \(p^*( \bar{\nu _i} ) \ge \kappa _{\nu _i} h( \bar{\nu _i} )\) and \(p^*( \nu _i^{new} ) < \kappa _{\nu _i} h( \nu _i^{new} )\), then set \(\displaystyle q(\nu _i) = \frac{\kappa _{\nu _i} h(\bar{\nu _i} )}{p^*( \bar{\nu _i} )}\);

    • If \(p^*( \bar{\nu _i} ) \ge \kappa _{\nu _i} h( \bar{\nu _i} )\) and \(p^*( \nu _i^{new} ) \ge \kappa _{\nu _i} h( \nu _i^{new} )\), then set \(q(\nu _i) = \min \left[ \frac{p^*( \nu _i^{new} )h(\bar{\nu _i})}{p^*( \bar{\nu _i} )h(\nu _i^{new})}, 1 \right] \);

  2. 2.

    Sample a value u from the uniform distribution \(\mathcal {U}[0, 1]\).

  3. 3.

    If \(u \le q(\nu _i)\), return \(\nu _i = \nu _i^{new}\). Else, return \(\nu _i = \bar{\nu _i}\).

In this step, the candidate value is accepted with probability \(q(\nu _i)\) and otherwise rejected. If the candidate is rejected, the previously sampled value is retained. In the empirical analysis, we set \(\kappa _{\nu _i} = 1\).

1.9 Sampling \(\omega _t\) and \(\nu _f\)

Since the full conditional distributions of the \(\omega _t\) are mutually independent, it is straightforward to sample each \(\omega _t\). The full conditional distribution of \(\omega _t\) is given by

$$\begin{aligned} \left. \frac{\eta _t^2 / \exp (h_t) + \nu _f - 2}{\omega _t}\right| \theta , \vartheta _{- \omega _t},y \sim \chi ^2 ( \nu _f + 1 ),\quad t =1, \ldots , T. \end{aligned}$$

Finally, we sample \(\nu _f\) using the AR-MH algorithm, as in sampling \(\nu _i\).
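Sampling each \(\omega _t\) thus amounts to one scaled chi-square draw (a vectorized sketch under our own naming; `eta` and `h` stand for the factor residuals and log-volatilities):

```python
import numpy as np

def draw_omega(eta, h, nu_f, rng):
    """Draw omega_t from its full conditional:
    (eta_t^2 / exp(h_t) + nu_f - 2) / omega_t ~ chi2(nu_f + 1)."""
    scale = eta ** 2 / np.exp(h) + nu_f - 2.0
    return scale / rng.chisquare(nu_f + 1, size=np.shape(eta))
```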

1.10 Sampling \(\beta \) and \(\xi ^2\)

Given h, the full conditional distribution of \(\beta \) is given by

$$\begin{aligned} \pi ( \beta | \theta , \vartheta _{- \beta }, y ) \propto (1 - \beta ^2)^{\frac{1}{2}} \exp \left[ - \frac{(1 - \beta ^2)h_{1}^2}{2\xi ^2} - \frac{(\beta - \mu _{\beta _0})^2}{2\sigma ^2_{\beta _0}} \right] \prod _{t=2}^{T} \exp \left[ - \frac{(h_{t} - \beta h_{t-1})^2}{2 \xi ^2 }\right] . \end{aligned}$$

Because it is difficult to draw the parameter directly from this distribution, we generate a candidate value from the following distribution:

$$\begin{aligned} \beta | \theta _{- \beta }, \vartheta , y \sim \mathcal {TN}_{|\beta | < 1}( \hat{\mu }_{\beta }, \hat{\sigma }^2_{\beta } ), \end{aligned}$$

where

$$\begin{aligned} \hat{\mu }_{\beta } = \hat{\sigma }^{2}_{\beta } \left( \xi ^{-2} \sum _{t=2}^T h_{t} h_{t-1} + \sigma ^{-2}_{\beta _0}\mu _{\beta _0} \right) , \quad \text{ and }\quad \hat{\sigma }^2_{\beta } = \left( \xi ^{-2}\sum _{t=2}^T h_{t-1}^2 + \sigma ^{-2}_{\beta _0} \right) ^{-1}. \end{aligned}$$

Let \(\beta ^{old}\) be the previous value. Then, we draw a candidate \(\beta ^{new}\) from \(\mathcal {N}( \hat{\mu }_{\beta }, \hat{\sigma }^2_{\beta } )\), truncated on \((-1, 1)\), in order to satisfy the stationary condition, and accept it with probability

$$\begin{aligned} \alpha ( \beta ^{old}, \beta ^{new} ) = \min \left[ \frac{\pi (\beta ^{new} | \theta _{- \beta }, \vartheta , y)}{\pi (\beta ^{old}| \theta _{- \beta }, \vartheta , y)},\ 1 \right] . \end{aligned}$$

Finally, the full conditional distribution of \(\xi ^2\) is as follows:

$$\begin{aligned} \xi ^2 | \theta _{- \xi ^2}, \vartheta , y \sim \mathcal {IG}\left( \frac{\hat{\tau }_h}{2},\ \frac{\hat{\delta }_{h}}{2} \right) , \end{aligned}$$

where \(\hat{\tau }_h = \tau _{0h} + T\) and \(\hat{\delta }_{h} = (1 - \beta ^2) h_{1}^2 + \sum _{t=2}^T (h_{t} - \beta h_{t-1})^2 + \delta _{0h}\).

1.11 Sampling h

For sampling the latent variable h, we employ the multi-move sampler extended by Watanabe and Omori (2004). First, we divide h into \(K + 1\) blocks, \((h_{k_{l-1}+1}, \ldots , h_{k_l})\) for \(l = 1, \ldots , K+1\), with \(k_0 = 0\) and \(k_{K+1} = T\). The K knots (\(k_{1}, \ldots , k_{K}\)) are randomly drawn from

$$\begin{aligned} k_l = int \left[ T \times \frac{ l + U_{l}}{K+2} \right] , \end{aligned}$$

where the \(U_l\) are independent uniforms on [0, 1] and “int” denotes the integer part. Following Pitt and Shephard (1999), we draw the error terms \((\zeta _{k_{l-1}},\ldots , \zeta _{k_l - 1}\)) instead of \((h_{k_{l-1} + 1}, \ldots , h_{k_l})\) from their full conditional distribution,

$$\begin{aligned} \pi \left( \zeta _{t-1}, \ldots , \zeta _{t+k-1} | h_{t-1}, h_{t+ k+1}, \theta , \vartheta _{- h(k)}, y \right). \end{aligned}$$
(15)
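The stochastic-knot draw can be coded in a few lines. We use the Shephard–Pitt form \(k_l = int[T(l + U_l)/(K+2)]\), which guarantees nondecreasing knots (our reading of the intended formula):

```python
import numpy as np

def draw_knots(T, K, rng):
    """Random block knots k_l = int(T * (l + U_l) / (K + 2)), l = 1..K.
    Since l + U_l is strictly increasing in l, the knots are nondecreasing."""
    U = rng.uniform(size=K)
    l = np.arange(1, K + 1)
    return (T * (l + U) / (K + 2)).astype(int)
```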

Next, let \(k_{l - 1} = t - 1\), \(k_{l} = t + k\), and \(h(k) = \{ h_j \}_{j=t-1}^{t+k}\). Then, we construct a candidate distribution in order to sample the error vector. The log of the posterior density (15) is as follows:

$$\begin{aligned} {}&\log \pi ( \zeta _{t - 1}, \ldots , \zeta _{t+k} | h_{t-1}, h_{t+k+1}, \theta , \vartheta _{- h(k)}, y)\\&\quad = \text{ const. } - \frac{1}{2\xi ^2}\sum _{j = t}^{t+k} \zeta _{j}^2 - \frac{1}{2} \sum _{j=t}^{t+k} \left\{ h_j + \frac{e^{*2}_{j} }{\omega _j} \exp ( - h_j )\right\} - \frac{1}{2 \xi ^2} (h_{t+k +1} - \beta h_{t+k})^2, \end{aligned}$$

where \(e^*_j\) means the residual of Eq. (4). Then, we evaluate this logarithm of the posterior density using the Taylor expansion of the log-likelihood,

$$\begin{aligned} l (h_{j}) = - \frac{1}{2}h_j - \frac{1}{2} \frac{e^{*2}_{j}}{\omega _j} \exp ( - h_j ), \end{aligned}$$

around the mode \(\hat{h}_j\), as follows:

$$\begin{aligned} {}&\log \pi (\zeta _{t-1}, \ldots , \zeta _{t+k - 1} | \theta , \vartheta _{- h(k)}, y)\\ {}&\quad \approx \text{ const. } - \frac{1}{2\xi ^2}\sum _{j = t}^{t+k} \zeta _{j}^2 + \sum _{j=t}^{t+k} \left\{ l (\hat{h}_j) + l' (\hat{h}_j) ( h_j - \hat{h}_j ) + \frac{1}{2}l''(\hat{h}_j)(h_j - \hat{h}_j)^2 \right\} \\ {}&\quad \quad - \frac{1}{2 \xi ^2} (h_{t+k +1} - \beta h_{t+k})^2\\ {}&\quad \equiv \log g_{h}(\zeta _{t-1}, \ldots , \zeta _{t+k-1}), \end{aligned}$$

where

$$\begin{aligned} l' (\hat{h}_j)&\equiv {} \frac{\partial l(\hat{h}_j)}{\partial h_{j}} = - \frac{1}{2} + \frac{1}{2}\frac{e_{j}^{*2}}{\omega _j} \exp ( - \hat{h}_j),\\ l'' (\hat{h}_j)&\equiv {} \frac{\partial ^2 l(\hat{h}_j)}{\partial h_{j}^2} = - \frac{1}{2}\frac{e_{j}^{*2}}{\omega _j}\exp ( - \hat{h}_j). \\ \end{aligned}$$
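The measurement-density term and its first two derivatives used in this expansion are simple to code (a sketch; `e2_over_omega` stands for \(e_j^{*2}/\omega _j\)):

```python
import numpy as np

def sv_loglik_terms(h, e2_over_omega):
    """Return l(h), l'(h), l''(h) for l(h) = -h/2 - (e*^2/omega) exp(-h)/2."""
    w = e2_over_omega * np.exp(-h)
    return -0.5 * h - 0.5 * w, -0.5 + 0.5 * w, -0.5 * w
```

At \(h_j = \ln (e_j^{*2}/\omega _j)\) the gradient vanishes and the curvature is negative, which is what makes the Gaussian approximation around the mode well defined.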

We sample the error terms from this candidate distribution with the simulation smoother and then apply the AR-MH algorithm. To find the posterior mode \(\hat{h}_j\), we apply the Kalman filter and disturbance smoother (Watanabe and Omori 2004).

About this article

Cite this article

Ohtsuka, Y. Large Shocks and the Business Cycle: The Effect of Outlier Adjustments. J Bus Cycle Res 14, 143–178 (2018). https://doi.org/10.1007/s41549-018-0027-z
