
Partial Orderings of Default Predictions

  • Walter Krämer
  • Peter N. Posch
Chapter
Part of the Studies in Classification, Data Analysis, and Knowledge Organization book series (STUDIES CLASS)

Abstract

We compare and generalize various partial orderings of probability forecasters according to the quality of their predictions. It appears that the calibration requirement is quite at odds with the possibility of any such ordering. However, if the requirements of calibration and identical sets of debtors are relaxed, comparability obtains more easily. Taking default predictions in the credit rating industry as an example, we show for a database of 5333 (Moody's) and 6505 (S&P) 10-year default predictions that Moody's and S&P can be ordered neither according to their grade distributions given default or non-default, nor according to their Gini curves, but that Moody's dominates S&P with respect to the ROC criterion.
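As an illustration of the ROC criterion invoked in the abstract, the following minimal Python sketch (ours, not the authors' code) checks ROC dominance: rater A dominates rater B if A's curve of hit rates lies weakly above B's at every false-alarm rate. The grades below are simulated stand-ins, not the Moody's or S&P data.

import numpy as np

def roc_points(defaulters, nondefaulters):
    # ROC points (false-alarm rate, hit rate) for a scorer where
    # higher scores signal higher default risk.
    thresholds = np.unique(np.concatenate([defaulters, nondefaulters]))[::-1]
    fpr = np.array([0.0] + [np.mean(nondefaulters >= t) for t in thresholds])
    tpr = np.array([0.0] + [np.mean(defaulters >= t) for t in thresholds])
    # Keep one point per false-alarm rate (the best hit rate) so that
    # linear interpolation along the curve is well defined.
    uniq = np.unique(fpr)
    return uniq, np.array([tpr[fpr == f].max() for f in uniq])

def dominates(curve_a, curve_b, grid=np.linspace(0.0, 1.0, 201)):
    # True if ROC curve A lies weakly above ROC curve B on a common grid.
    return bool(np.all(np.interp(grid, *curve_a) >= np.interp(grid, *curve_b) - 1e-12))

# Hypothetical integer grades, 1 (safest) to 7 (riskiest); risk score = grade.
rng = np.random.default_rng(0)
a_def, a_non = rng.integers(4, 8, 300), rng.integers(1, 6, 3000)
b_def, b_non = rng.integers(3, 8, 300), rng.integers(1, 7, 3000)

curve_a = roc_points(a_def, a_non)
curve_b = roc_points(b_def, b_non)
print("A dominates B:", dominates(curve_a, curve_b))
print("B dominates A:", dominates(curve_b, curve_a))

In the chapter's application, the two score samples would be the rating grades assigned to eventual defaulters and non-defaulters. Note that the two raters need not share the same set of debtors, which is precisely the relaxation of the identical-debtors requirement discussed above.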

References

  1. Bischl, B., Schiffner, J., & Weihs, C. (2013). Benchmarking local classification methods. Computational Statistics, 28, 2599–2619.
  2. DeGroot, M., & Fienberg, S. E. (1983). The comparison and evaluation of forecasters. The Statistician, 32, 12–22.
  3. Engelmann, B., Hayden, E., & Tasche, D. (2003). Testing rating accuracy. Discussion paper, Deutsche Bundesbank.
  4. Krämer, W. (2005). On the ordering of probability forecasts. Sankhya: The Indian Journal of Statistics, 67, 662–669.
  5. Krämer, W. (2006). Evaluating probability forecasts in terms of refinement and strictly proper scoring rules. Journal of Forecasting, 25, 223–226.
  6. Krämer, W. (2017). On assessing the relative performance of default predictions. Journal of Forecasting, 36, 854–858.
  7. Krämer, W., & Güttler, A. (2008). On comparing the accuracy of default predictions in the rating industry. Empirical Economics, 34, 343–356.
  8. Moody's (2015). Annual default study: Corporate default and recovery rates 1920–2014. Moody's Investors Service.
  9. Schervish, M. (1989). A general method for comparing probability assessors. The Annals of Statistics, 17, 1856–1879.
  10. Vardeman, S., & Meeden, G. (1983). Calibration, sufficiency, and domination considerations for Bayesian probability assessors. Journal of the American Statistical Association, 78, 808–816.
  11. Vazza, D., et al. (2015). Annual global corporate default study and rating transitions. Standard & Poor's RatingsDirect.
  12. Weihs, C., Ligges, U., Mörchen, F., & Müllensiefen, D. (2007). Classification in music research. Advances in Data Analysis and Classification, 1, 255–291.
  13. Winkler, R. L. (1996). Scoring rules and the evaluation of probabilities. Test, 5, 1–60.

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Fakultät Statistik, Technische Universität Dortmund, Dortmund, Germany
  2. Fakultät WiSo, Technische Universität Dortmund, Dortmund, Germany
