
Assessing the Accuracy of the Probability Distributions

  • Michael P. Clements
Chapter
Part of the Palgrave Texts in Econometrics book series (PTEC)

Abstract

Ways of assessing the quality of the forecast densities (reported in the form of histograms) provided by the survey respondents are described and applied to the US SPF. Forecast densities can be assessed in absolute terms, by asking whether they could have generated the observed data: that is, whether they differ significantly from the assumed (but unknown) actual densities which gave rise to those data. They can also be compared to rival density forecasts, even if they are found wanting in absolute terms. In the reported assessment of the SPF aggregate and individual densities, the benchmarks are constructed to spotlight a particular aspect of the SPF densities: whether the respondents are able to adequately capture the time-varying uncertainty that characterized output growth and inflation. Rather than evaluating the whole densities, specific regions of interest can be considered, and this is illustrated. Finally, some scoring rules may be better suited than others when, as here, the densities are presented as histograms.
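As a concrete illustration of a scoring rule suited to histogram densities, the ranked probability score (Epstein 1969) compares cumulative forecast probabilities with a cumulative outcome indicator, so it rewards placing probability mass in bins close to the realised bin, not just in the bin itself. A minimal sketch (the function name and toy numbers are illustrative, not from the chapter):

```python
def ranked_probability_score(bin_probs, outcome_bin):
    """Ranked probability score for a histogram forecast.

    bin_probs   -- forecast probabilities over ordered bins (sum to 1)
    outcome_bin -- index of the bin containing the realised outcome

    RPS is the sum over bins of the squared difference between the
    cumulative forecast probability and the cumulative outcome
    indicator (0 below the realised bin, 1 from it onwards).
    Lower is better; 0 is a perfect point-mass forecast.
    """
    cum_p = 0.0
    rps = 0.0
    for k, p in enumerate(bin_probs):
        cum_p += p
        cum_o = 1.0 if k >= outcome_bin else 0.0
        rps += (cum_p - cum_o) ** 2
    return rps

# A forecaster concentrating mass near the realised bin (index 1)
# scores better than a uniform forecaster over the same four bins.
sharp = ranked_probability_score([0.1, 0.6, 0.2, 0.1], outcome_bin=1)
flat = ranked_probability_score([0.25, 0.25, 0.25, 0.25], outcome_bin=1)
```

Unlike the quadratic (Brier) score, which treats bins as unordered categories, the cumulative construction above penalises mass placed far from the outcome more heavily, which is one reason it is often preferred for ordered histogram bins of the kind the SPF reports.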


Copyright information

© The Author(s) 2019

Authors and Affiliations

  • Michael P. Clements
  1. ICMA Centre, Henley Business School, University of Reading, Wheatley, UK
