
Model-Based Measurement of Name Concentration Risk in Credit Portfolios

Part of the book series: Contributions to Economics (CE)

Abstract

This chapter deals with the measurement of name concentrations. This type of concentration risk occurs if the weights of individual credits in the portfolio do not converge to zero; thus, the individual risk component cannot be completely diversified away. The main research questions on name concentrations considered in this chapter are:

  • In which cases are the assumptions of the ASRF framework critical concerning the credit portfolio size?

  • In which cases are currently discussed adjustments for the VaR-measurement able to overcome the shortcomings of the ASRF model?

Concerning the first question, it is analyzed how many credits are at least necessary so that neglecting undiversified individual risk is not problematic. Since analytical formulas exist – the so-called granularity adjustment – which approximate these risks, it is further determined in which cases these formulas lead to the desired results.


Notes

  1. 1.

    Another solution to the problem of a violation of assumption (A) or (B) might be to abandon risk quantification under the IRB Approach and to use internal models instead. However, this solution is not envisaged in Basel II.

  2. 2.

    Cf. Sect. 3.2.

  3. 3.

    Gordy (2003) comes to the conclusion that the granularity adjustment works well for risk buckets of more than 200 loans in the case of low credit quality and of more than 1,000 loans in the case of high credit quality. However, he uses the CreditRisk+ framework of Credit Suisse Financial Products (1997) rather than the Vasicek model that forms the basis of Basel II, and he does not analyze the effect of the different correlation factors assumed in Basel II.

  4. 4.

    This question is also interesting when analyzing the Basel II formula because the add-on factor designated for a potential violation of assumption (A) was dropped between the second and the third consultative document; see BCBS (2001a, 2003a). Thus, we only prove under which conditions assumption (A) of the Vasicek model is fulfilled. Of course, this model may suffer from other restrictive assumptions, such as the distributional assumption on standardized returns. However, since we only want to address the topic of concentration risk, this focus appears reasonable. Additionally, the distributional assumptions do not seem to have a strong impact on the measured VaR; see Koyluoglu and Hickman (1998a, b), Gordy (2000), or Hamerle and Rösch (2005a, b, 2006).

  5. 5.

    Wilde (2001) calls this “the granularity adjustment to first order in the unsystematic variance”.

  6. 6.

    This procedure can be motivated by the fact that for market risk quantification of nonlinear exposures, two terms of the Taylor series (first and second order) are commonly used to achieve a higher accuracy; see e.g. Crouhy et al. (2001) or Jorion (2001). This might be appropriate for credit risk as well. Furthermore, the higher-order derivatives of VaR given by Wilde (2003) make it possible to derive such a formula systematically.

  7. 7.

    The Basel Committee on Banking Supervision has already stated that, in principle, the effect of portfolio size on credit risk is well understood, but that practical analyses are lacking; see BCBS (2005b).

  8. 8.

    Additionally, this study contributes to the ongoing research analyzing differences between Basel II capital requirements and banks' internal “true” risk capital measurement approaches. Since the harmonization of regulatory capital requirements with banks' internal estimates of portfolio credit risk is often stated as the major benefit of Basel II, see e.g. Hahn (2005), p. 127, but is often not observed in practice, this task might be of relevance in the future.

  9. 9.

    See Appendix 4.5.2.

  10. 10.

    This is valid because the added risk of the portfolio is unsystematic; see Martin and Wilde (2002) for further explanations.

  11. 11.

    See Appendix 4.5.3.

  12. 12.

    Cf. the identity 2.90.

  13. 13.

    The notation n* refers to the effective number of credits as introduced in (2.87).

  14. 14.

    The equivalent term for heterogeneous portfolios is \( O\left( {\sum\limits_{i = 1}^n {w_i^3} } \right) \).

  15. 15.

    The mth moment of a random variable \( \tilde{X} \) about the mean \( {\eta_m}(\tilde{X}) \) is defined as \( {\eta_m}(\tilde{X}): = \mathbb{E}({[\tilde{X} - \mathbb{E}(\tilde{X})]^m}) \); cf. Abramowitz and Stegun (1972), 26.1.6.

  16. 16.

    Cf. Appendix 4.5.4.

  17. 17.

    This assumption can be critical for real-world portfolios. In particular, it is often assumed in ongoing research on credit portfolio modeling that the LGD depends on the systematic factor. However, the granularity adjustment formula would become significantly more complicated, as neither the ELGD nor the VLGD could be treated as constant in the derivatives. Against this background, this assumption will be retained for the derivation.

  18. 18.

    Cf. Appendix 4.5.4. Pykhtin and Dev (2002) corrected the formulas of Wilde (2001), who neglected the last term of the following conditional variance.

  19. 19.

    Cf. Appendix 4.5.5.

  20. 20.

    Gordy (2003) observes the concavity of the granularity add-on for a high-quality portfolio (A-rated) up to a portfolio size of 1,000 debtors.

  21. 21.

    See Gordy (2004), p. 112, footnote 5, for a similar suggestion.

  22. 22.

    See Appendix 4.5.8 for details regarding the order of these elements.

  23. 23.

    Cf. (4.236) of Appendix 4.5.8.

  24. 24.

    Precisely, the element \( {\eta_{3,c}} \) is the third conditional moment centered about the mean whereas the conditional skewness is the “normalized” third moment, defined as the third conditional moment about the mean divided by the conditional standard deviation to the power of three.

  25. 25.

    Cf. (4.14).

  26. 26.

    See Appendix 4.5.10.

  27. 27.

    Cf. BCBS (2006), p. 10.

  28. 28.

    The chosen portfolio exhibits high unsystematic risk and therefore serves as a good example to explain the differences between the four solutions. However, we evaluated several portfolios and the results are essentially the same. Additionally, we claim that the general statements can also be applied to heterogeneous portfolios.

  29. 29.

    See Rau-Bredow (2005) for a counter-example with very unusual parameter values. This problem can be attributed to the use of VaR as a risk measure, which does not guarantee sub-additivity; cf. Sect. 2.2.3.

  30. 30.

    By contrast, we had expected a significant improvement from using the second-order adjustment, as mentioned in Gordy (2004), p. 112, footnote 5.

  31. 31.

    To refer to the minimum number after which the target tolerance holds permanently, we have to add the condition “for all \( N \geq n \)” because the coarse-grained VaR exhibits jumps depending on the number of credits.

  32. 32.

    Beside some adjustments on the correlation parameter, these were the major changes of the IRB-formula from the second to the third consultative document; see BCBS (2001a, 2003a).

  33. 33.

    See Sect. 2.7 for details. In both tables, (rounded) parameters ρ due to Basel II are marked.

  34. 34.

    The case of heterogeneous portfolios will be analyzed in Sect. 4.2.2.5.

  35. 35.

    Cf. Deutsche Bundesbank (2009).

  36. 36.

    This is true not only for the first five derivatives but also for all following derivatives; see the general formula for all derivatives of VaR in (4.213).

  37. 37.

    However, we also have to take into consideration that the Taylor series is potentially not convergent at all or does not converge to the correct value. For a further discussion see Martin and Wilde (2002) and Wilde (2003).

  38. 38.

    The portfolio used is based on Overbeck (2000), see also Overbeck and Stahl (2003), but is reduced to 20 loans to obtain more test portfolios with a small number of credits.

  39. 39.

    Due to the high number of trials, which corresponds to 3,000 hits in the tail for a confidence level of 0.999, the simulation noise should be negligible.

  40. 40.

    Cf. Sect. 2.6.

  41. 41.

    This is true for a violation of both the granularity and the single risk factor assumption.

  42. 42.

    See e.g. Heitfield et al. (2006), Cespedes et al. (2006), Düllmann (2006), as well as Düllmann and Masschelein (2007).

  43. 43.

    See Albanese and Lawi (2004), p. 215, for this property of a reasonable risk measure.

  44. 44.

    Of course, the definition of the VaR does not allow a negative deviation; instead, the VaR jumps to a higher value.

  45. 45.

    See Appendix 4.5.11.

  46. 46.

    See Appendix 4.5.12.

  47. 47.

    As mentioned in Sect. 2.6, the VaR is exactly additive and therefore unproblematic in the context of the ASRF framework.

  48. 48.

    We use the idealized default rates from Standard & Poor's, see Brand and Bahar (2001), ranging from 0.01% to 18.27%, but the results do not differ widely for other values.

  49. 49.

    The portfolios with high, average, low, and very low quality are taken from Gordy (2000). We added a portfolio with very high quality.

  50. 50.

    The derivatives of ES are derived in Appendix 4.5.13 and 4.5.14.

  51. 51.

    Cf. (4.8).

  52. 52.

    The explanations regarding the order of the derivatives of VaR in Appendix 4.5.8 are valid for the derivatives of ES, too.

  53. 53.

    See also Wilde (2003).

  54. 54.

    See Appendix 4.5.14.

  55. 55.

    Even though the calculations are based on the portfolio gross loss and thus on an LGD of 100%, the results remain identical for every constant LGD, as the numerator and the denominator of the analyzed expressions are affected to the same degree.

  56. 56.

    Cf. Schuermann (2005), p. 22, footnote 8.

  57. 57.

    Cf. Schuermann (2005), p. 22, footnote 11.

  58. 58.

    The data used to generate the figure probably did not include workout costs and therefore underestimate the true economic loss. Furthermore, the choice of the discount rate influences the effect of negative LGDs: if the recovery cash flows are discounted by the contractual rate, as required by IFRS and as proposed by the Basel II framework, a complete recovery without workout costs leads to a recovery rate of 100%, which shows that negative LGDs are not relevant at all.

  59. 59.

    The issue of interconnections between LGDs and PDs via a systematic factor is not in the scope of this analysis.

  60. 60.

    Cf. Altman et al. (2005), p. 46.

  61. 61.

    Cf. Gupton et al. (1997), p. 80.

  62. 62.

    See also Sect. 2.3.

  63. 63.

    Cf. Bronshtein et al. (2007), p. 760, (16.80).

  64. 64.

    Cf. Schönbucher (2003), p. 147 f.

  65. 65.

    The aggregated data correspond to Fig. 4.8.

  66. 66.

    The critical number of credits in a portfolio which leads to equality of the different parameter settings of the Basel consultative documents is not of interest in the subsequent analyses regarding the ES as both rely on the VaR.

  67. 67.

    See Sect. 4.3.1.

  68. 68.

    As the ASRF solution is constant and the coarse-grained solution is monotonically decreasing in n for the ES (this is a result of the “monotonicity in specific risk” property, cf. Sect. 4.3.1), the inequality also holds for every number above the first number that satisfies it. Thus, the condition “for all \( N \geq n \)”, which had to be included in the corresponding analysis for the VaR, can be omitted.

  69. 69.

    The corresponding value for deterministic LGDs is 91.64%.

  70. 70.

    The omission of the zeroth-order terms could be foreseen as only the deviation from the systematic loss quantile is analyzed.

  71. 71.

    For functions f, g with \( \mathop {{\lim }}\limits_{x \to {x_0}} f(x) = \mathop {{\lim }}\limits_{x \to {x_0}} g(x) = 0 \) or \( \mathop {{\lim }}\limits_{x \to {x_0}} f(x) = \mathop {{\lim }}\limits_{x \to {x_0}} g(x) = \infty \), it holds that \( \mathop {{\lim }}\limits_{x \to {x_0}} \frac{{f(x)}}{{g(x)}} = \mathop {{\lim }}\limits_{x \to {x_0}} \frac{{f'(x)}}{{g'(x)}} \) if \( \mathop {{\lim }}\limits_{x \to {x_0}} \frac{{f'(x)}}{{g'(x)}} \) exists; cf. Bronshtein et al. (2007), p. 54, (2.26).

  72. 72.

    Cf. Wilde (2001).

  73. 73.

    Cf. (2.14). The slightly different expressions compared to Rau-Bredow (2002) result from α instead of (1–α) representing the confidence level.

  74. 74.

    Cf. Pitman (1999), p. 416.

  75. 75.

    Cf. Rau-Bredow (2004), p. 66.

  76. 76.

    Cf. Roussas (2007), p. 236.

  77. 77.

    Pykhtin and Dev (2002) corrected the formulas of Wilde (2001), who neglected the last term of the following conditional variance.

  78. 78.

    Cf. Bronshtein et al. (2007), p. 710, (15.5).

  79. 79.

    Cf. Bronshtein et al. (2007), p. 710, (15.8).

  80. 80.

    Weisstein (2009a).

  81. 81.

    Cf. Bronshtein et al. (2007), p. 672, Sect. 14.1.2.1.

  82. 82.

    Cf. Bronshtein et al. (2007), p. 688, (14.41).

  83. 83.

    Cf. Bronshtein et al. (2007), p. 691, (14.49).

  84. 84.

    Cf. Bronshtein et al. (2007), p. 692, (14.51), and Spiegel (1999), p. 144.

  85. 85.

    Cf. Bronshtein et al. (2007), p. 692 f., Sect. 14.3.5.1.

  86. 86.

    Cf. Bronshtein et al. (2007), p. 694, (14.56).

  87. 87.

    Cf. Rowland and Weisstein (2009).

  88. 88.

    Cf. Wilde (2003), p. 3 f.

  89. 89.

    See Martin and Wilde (2002), p. 124 f., and Wilde (2003), p. 2 f.

  90. 90.

    Cf. Billingsley (1995), p. 146 ff., for details about moment generating functions.

  91. 91.

    Cf. Miller and Childers (2004), p. 118.

  92. 92.

    For ease of notation, the derivatives \( {{{\partial G}} \left/ {{\partial z}} \right.} \) and \( {{{\partial G}} \left/ {{\partial w}} \right.} \) will be abbreviated to \( {G_z} \) and \( {G_w} \), respectively. The function G is not associated with a random variable, so confusion should not arise with respect to the similar notation \( {F_{Y + \lambda Z}}(y) \), where the subscript of the distribution function F denotes the corresponding random variable.

  93. 93.

    Cf. Wilde (2003), p. 7.

  94. 94.

    See Abramowitz and Stegun (1972), Sect. 24.1.2(C). The notation \( p \prec m \) indicates that p is a partition of m, cf. Sect. 4.5.6.1.3.

  95. 95.

    See Weisstein (2009b).

  96. 96.

    Cf. Wilde (2003), p. 8.

  97. 97.

    The relation between a partition \( u \) and \( \hat{u} \) is explained in Sect. 4.5.6.1.3.

  98. 98.

    In order to demonstrate that the resulting formula is also valid for \( m = 1 \), the summand for partition \( \{ {1^1}\} \), which equals zero due to argument (4.216), is still considered.

  99. 99.

    For ease of notation, the arguments \( \lambda = 0 \) of the left-hand as well as \( y = {q_\alpha }(\tilde{Y}) \) at the right-hand side are omitted.

  100. 100.

    Cf. (4.213). The notation \( g \circ y \) means that a function g is composed with y.

  101. 101.

    To illustrate that the first identity holds, an example is demonstrated for \( m = 5 \). Furthermore, see (4.9) for the switch between the systematic loss y and the systematic factor x.

  102. 102.

    See (4.14).

  103. 103.

    Cf. Wilde (2003), p. 11.

  104. 104.

    Cf. Appendix 4.5.3.

References

  • Abramowitz M, Stegun I (1972) Handbook of mathematical functions: with formulas, graphs, and mathematical tables, 10th edn. Dover, New York


  • Albanese C, Lawi S (2004) Spectral risk measures for credit portfolios. In: Szegö G (ed) Risk measures for the 21st century. Wiley, Chichester, pp 209–226


  • Altman E, Resti A, Sironi A (2005) Loss given default: a review of the literature. In: Altman E, Resti A, Sironi A (eds) Recovery risk – the next challenge in risk management. Risk Books, London, pp 41–59


  • Basel Committee on Banking Supervision (2001a) Basel II: The New Basel capital accord, Second Consultative Paper. Bank for International Settlements, Basel


  • Basel Committee on Banking Supervision (2003a) Basel II: The New Basel capital accord, Third Consultative Paper. Bank for International Settlements, Basel


  • Basel Committee on Banking Supervision (2005b) Workshop ‘Concentration risk in credit portfolios’: background information. Bank for International Settlements, Basel. http://www.bis.org/bcbs/events/rtf05background.htm. Accessed 18 Aug 2009

  • Basel Committee on Banking Supervision (2006) Studies on credit risk concentration: an overview of the issues and a synopsis of the results from the research task force project. BCBS Working Paper No. 15, Bank for International Settlements, Basel


  • Billingsley P (1995) Probability and measure, 3rd edn. Wiley, New York


  • Brand L, Bahar R (2001) Corporate defaults: will things get worse before they get better? S&P Special Report, pp 5–40


  • Bronshtein IN, Semendyayev KA, Musiol G, Muehlig H (2007) Handbook of mathematics, 5th edn. Springer, Heidelberg


  • Cespedes J, de Juan Herrero J, Kreinin A, Rosen D (2006) A simple multi-factor “factor adjustment” for the treatment of diversification in credit capital rules. J Credit Risk 2(3):57–85


  • Credit Suisse Financial Products (1997). CreditRisk+ – A credit risk management framework. Credit Suisse, London.


  • Crouhy M, Galai D, Mark R (2001) Risk management. McGraw-Hill, New York


  • Deutsche Bundesbank (2009) Credit register of loans of €1.5 million or more. http://www.bundesbank.de/bankenaufsicht/bankenaufsicht_kredit_evidenz.en.php. Accessed 11 Dec 2009

  • Düllmann K (2006). Measuring business sector concentration by an infection model. Discussion Paper, Series 2: Banking and financial studies. Deutsche Bundesbank (3)


  • Düllmann K, Erdelmeier M (2009) Stress testing German banks in a downturn in the automobile industry. Discussion Paper, Series 2: Banking and Financial Studies. Deutsche Bundesbank (2)


  • Düllmann K, Masschelein N (2007) A tractable model to measure sector concentration risk in credit portfolios. J Fin Serv Res 32(1):55–79


  • Emmer S, Tasche D (2005) Calculating credit risk capital charges with the one-factor model. J Risk 7(2):85–103


  • Frye J (2000) Depressing recoveries. Risk 13(11):108–111


  • Gordy MB (2000) A comparative anatomy of credit risk models. J Bank Fin 24(1–2):119–149


  • Gordy MB (2001) A risk-factor model foundation for rating-based capital rules. Working Paper. Board of Governors of the Federal Reserve System, Washington DC


  • Gordy MB (2003) A risk-factor model foundation for rating-based capital rules. J Fin Intermediation 12(3):199–232


  • Gordy MB (2004) Granularity adjustment in portfolio credit risk measurement. In: Szegö G (ed) Risk measures for the 21st century. Wiley, New York, pp 109–121


  • Gouriéroux C, Laurent J, Scaillet O (2000) Sensitivity analysis of values at risk. J Empir Fin 7(3–4):225–245


  • Gupton GM, Finger CC, Bhatia M (1997) CreditMetrics™ – Technical document. Morgan Guaranty Trust Co, New York


  • Gürtler M, Heithecker D, Hibbeln M (2008a) Concentration risk under Pillar 2: when are credit portfolios infinitely fine grained? Kredit Kapital 41(1):79–124


  • Gürtler M, Hibbeln M, Olboeter S (2008b) Design of collateralized debt obligations: the impact of target ratings on the first loss piece. In: Gregoriou GN, Ali P (eds) The credit derivatives handbook. McGraw-Hill, New York, pp 203–228


  • Hahn F (2005) The effect of bank capital on bank credit creation. Kredit Kapital 38(1):103–127


  • Hamerle A, Rösch D (2005a) Bankinterne Parametrisierung und empirischer Vergleich von Kreditrisikomodellen. Die Betriebswirtschaft 65(2):179–196


  • Hamerle A, Rösch D (2005b) Misspecified copulas in credit risk models: how good is Gaussian? J Risk 8(1):41–59


  • Hamerle A, Rösch D (2006) Parameterizing credit risk models. J Credit Risk 2(4):101–122


  • Heitfield E, Burton S, Chomsisengphet S (2006) Systematic and idiosyncratic risk in syndicated loan portfolios. J Credit Risk 2(3):3–31


  • Jorion P (2001) Value at risk, 2nd edn. McGraw-Hill, New York


  • Koyluoglu HU, Hickman A (1998a) A generalized framework for credit portfolio models. Working Paper, http://www.defaultrisk.com

  • Koyluoglu HU, Hickman A (1998b) Reconcilable differences. Risk 11(10):56–62


  • Martin R, Wilde T (2002) Unsystematic credit risk. Risk 15(11):123–128


  • Miller SL, Childers DG (2004) Probability and random processes: with applications to signal processing and communications, 2nd edn. Academic Press, Amsterdam


  • Overbeck L (2000) Allocation of economic capital in loan portfolios. In: Franke J, Härdle W, Stahl G (eds) Measuring risk in complex stochastic systems. Springer, New York, pp 1–17


  • Overbeck L, Stahl G (2003) Stochastic essentials for the risk management of credit portfolios. Kredit Kapital 36(1):52–81


  • Pitman J (1999) Probability. Springer, New York


  • Pykhtin M (2003) Unexpected recovery risk. Risk 16(8):74–78


  • Pykhtin M, Dev A (2002) Analytical approach to credit risk modelling. Risk 15(3):26–32


  • Rau-Bredow H (2002) Credit portfolio modelling, marginal risk contributions, and granularity adjustment. Working Paper, Würzburg


  • Rau-Bredow H (2004) Value at risk, expected shortfall, and marginal risk contribution. In: Szegö G (ed) Risk measures for the 21st century. Wiley, Chichester, pp 61–68


  • Rau-Bredow H (2005) Unsystematic credit risk and coherent risk measures. Working Paper, Würzburg


  • Roussas GG (2007) Introduction to probability. Academic Press, Amsterdam


  • Rowland T, Weisstein EW (2009) Complex residue, from mathworld – a Wolfram web resource. http://mathworld.wolfram.com/ComplexResidue.html. Accessed 18 Aug 2009

  • Schönbucher P (2003) Credit derivatives pricing models: models, pricing and implementation. Wiley, Chichester


  • Schuermann T (2005) What do we know about loss given default? In: Altman E, Sironi A (eds) Recovery risk – the next challenge in risk management. Risk Books, London, pp 3–24


  • Spiegel MR (1999) Schaum’s outline of theory and problems of complex variables: with an introduction to conformal mapping and its applications. McGraw-Hill, New York


  • Weisstein EW (2009a) Delta function, from MathWorld – a Wolfram Web resource. http://mathworld.wolfram.com/DeltaFunction.html. Accessed 18 Aug 2009

  • Weisstein EW (2009b) Leibniz identity, from MathWorld – a Wolfram web resource. http://mathworld.wolfram.com/LeibnizIdentity.html. Accessed 18 Aug 2009

  • Wilde T (2001) Probing granularity. Risk 14(8):103–106


  • Wilde T (2003) Derivatives of VaR and CVaR. Working Paper, CSFB


  • Gürtler M, Hibbeln M, Vöhringer C (2010) Measuring concentration risk for regulatory purposes. J Risk 12(3):69–104



Author information

Correspondence to Martin Hibbeln.

Appendix

4.1.1 Alternative Derivation of the First-Order Granularity Adjustment

With reference to Wilde (2001), the granularity adjustment will be derived as an approximation of the difference \( \Delta q \) between the true VaR of a granular portfolio \( {q^{(n)}} \) and the approximation \( {q^{(\infty )}} \) that results if infinite granularity is assumed to hold:

$$ \Delta q = q_\alpha^{(n)} - q_\alpha^{(\infty )}. $$
(4.104)

Instead of determining the add-on \( \Delta q \) directly, it will be analyzed how much the confidence level α will be overestimated or the probability \( p: = 1 - \alpha \) of exceeding the VaR will be underestimated if the portfolio is assumed to be infinitely granular. Thus, the probability

$$ \Delta p = {p^{(\infty )}} - p = \alpha - {\alpha^{(\infty )}} $$
(4.105)

refers to the overestimation of the confidence level if only the systematic loss is considered. Here, α is the specified “target” confidence level, and by definition also the probability that the systematic loss will not exceed \( q_\alpha^{(\infty )} \):

$$ 1 - p = \alpha : = \mathbb{P}\left( {\tilde{L} \leq q_\alpha^{(n)}} \right) = \mathbb{P}\left( {\mathbb{E}\left[ {\tilde{L}|\tilde{x}} \right] \leq q_\alpha^{(\infty )}} \right). $$
(4.106)

By contrast, \( {\alpha^{(\infty )}} \) is the actual confidence level if the VaR is approximated by the ASRF model:

$$ 1 - {p^{(\infty )}} = {\alpha^{(\infty )}}: = \mathbb{P}\left( {\tilde{L} \leq q_\alpha^{(\infty )}} \right). $$
(4.107)

Subsequent to the derivation of \( \Delta p \), the result will be transformed into a shift of the loss quantile \( \Delta q \).

Analogous to Appendix 2.8.3, the unconditional probability \( {p^{(\infty )}} \) can be expressed in terms of the conditional probability. Then, the substitution \( y: = q_\alpha^{(\infty )} + t \) is performed to center the integration at \( q_\alpha^{(\infty )} \):

$$ \begin{array} {c} p + \Delta p = \mathbb{P}\left( {\tilde{L} \geq q_\alpha^{(\infty )}} \right) = \int\limits_{y = - \infty }^\infty {\mathbb{P}\left( {\tilde{L} \geq q_\alpha^{(\infty )}|\tilde{Y} = y} \right)} \,{f_Y}(y)dy \\ = \int\limits_{t = - \infty }^\infty {\mathbb{P}\left( {\tilde{L} \geq q_\alpha^{(\infty )}|\tilde{Y} = q_\alpha^{(\infty )} + t} \right)} \,{f_Y}\left( {q_\alpha^{(\infty )} + t} \right)dt, \\ \end{array} $$
(4.108)

with the shorter notation \( \tilde{Y}: = \mathbb{E}\left( {\tilde{L}|\tilde{x}} \right) \) for the conditional expectation. According to (4.106), the probability p can be written as

$$ p = \mathbb{P}\left( {\tilde{Y} \geq q_\alpha^{(\infty )}} \right) = \int\limits_{y = q_\alpha^{(\infty )}}^\infty {{f_Y}(y)\,dy} = \int\limits_{t = 0}^\infty {{f_Y}\left( {q_\alpha^{(\infty )} + t} \right)\,dt} $$
(4.109)

using the substitution \( y: = q_\alpha^{(\infty )} + t \) again, so that \( t(y = q_\alpha^{(\infty )}) = 0 \) and \( t(y = \infty ) = \infty \). Hence, (4.108) can be expressed as

$$ \begin{array} {c} \Delta p = \int\limits_{t = - \infty }^\infty {\mathbb{P}\left( {\tilde{L} \geq q_\alpha^{(\infty )}|\tilde{Y} = q_\alpha^{(\infty )} + t} \right)} \,{f_Y}\left( {q_\alpha^{(\infty )} + t} \right)dt - \int\limits_{t = 0}^\infty {{f_Y}\left( {q_\alpha^{(\infty )} + t} \right)\,dt} \\ = \int\limits_{t = - \infty }^0 {\mathbb{P}\left( {\tilde{L} \geq q_\alpha^{(\infty )}|\tilde{Y} = q_\alpha^{(\infty )} + t} \right)} \,{f_Y}\left( {q_\alpha^{(\infty )} + t} \right)dt \\ + \int\limits_{t = 0}^\infty {\left[ {\mathbb{P}\left( {\tilde{L} \geq q_\alpha^{(\infty )}|\tilde{Y} = q_\alpha^{(\infty )} + t} \right) - 1} \right]{f_Y}\left( {q_\alpha^{(\infty )} + t} \right)dt} . \\ \end{array} $$
(4.110)

The following transformations simplify the integrand so that the integral can be solved. A realization of the systematic loss implies a realization of the systematic factor. As the credit loss events are assumed to be independent conditional on a realization of the systematic factor, the conditional credit losses follow a binomial distribution, which can be approximated by a normal distribution for a sufficiently large number of credits. This leads to

$$ \begin{array} {c} \mathbb{P}\left( {\tilde{L} \geq q_\alpha^{(\infty )}|\tilde{Y} = q_\alpha^{(\infty )} + t} \right) = 1 - \mathbb{P}\left( {\tilde{L} < q_\alpha^{(\infty )}|\tilde{Y} = q_\alpha^{(\infty )} + t} \right) \\ \approx 1 - \Phi \left( {\frac{{q_\alpha^{(\infty )} - \mathbb{E}\left( {\tilde{L}|\tilde{Y} = q_\alpha^{(\infty )} + t} \right)}}{{\sqrt {{\mathbb{V}\left( {\tilde{L}|\tilde{Y} = q_\alpha^{(\infty )} + t} \right)}} }}} \right). \\ \end{array} $$
(4.111)

As \( \mathbb{E}(\tilde{L}) = \mathbb{E}(\mathbb{E}(\tilde{L}|\tilde{x})) = \mathbb{E}(\tilde{Y}) \), which is due to the law of iterated expectations, the conditional expectation in (4.111) equals

$$ \mathbb{E}\left( {\tilde{L}|\tilde{Y} = q_\alpha^{(\infty )} + t} \right) = \mathbb{E}\left( {\tilde{Y}|\tilde{Y} = q_\alpha^{(\infty )} + t} \right) = q_\alpha^{(\infty )} + t. $$
(4.112)

With the symmetry \( 1 - \Phi ( - x) = \Phi (x) \) and defining \( {\sigma^2}(y): = \mathbb{V}(\tilde{L}|\tilde{Y} = y) \), (4.111) results in

$$ \mathbb{P}\left( {\tilde{L} \geq q_\alpha^{(\infty )}|\tilde{Y} = q_\alpha^{(\infty )} + t} \right) \approx 1 - \Phi \left( {\frac{{q_\alpha^{(\infty )} - q_\alpha^{(\infty )} - t}}{{\sigma \left( {q_\alpha^{(\infty )} + t} \right)}}} \right) = \Phi \left( {\frac{t}{{\sigma \left( {q_\alpha^{(\infty )} + t} \right)}}} \right), $$
(4.113)

so that (4.110) can be written as

$$ \Delta p = \int\limits_{t = - \infty }^0 {\Phi \left( {\frac{t}{{\sigma \left( {q_\alpha^{(\infty )} + t} \right)}}} \right)} \,{f_Y}\left( {q_\alpha^{(\infty )} + t} \right)dt + \int\limits_{t = 0}^\infty {\left[ {\Phi \left( {\frac{t}{{\sigma \left( {q_\alpha^{(\infty )} + t} \right)}}} \right) - 1} \right]{f_Y}\left( {q_\alpha^{(\infty )} + t} \right)dt} . $$
(4.114)

Subsequently, several linear approximations will be performed, relying on the assumption that the loss quantile of the granular portfolio is close to the systematic loss quantile, so that the linearizations lead only to minor errors. Linearizing the density function at \( q_\alpha^{(\infty )} \) leads to

$$ {f_Y}\left( {q_\alpha^{(\infty )} + t} \right) \approx {f_Y}\left( {q_\alpha^{(\infty )}} \right) + t \cdot {\left. {\frac{{d{f_Y}(y)}}{{dy}}} \right|_{y = q_\alpha^{(\infty )}}}. $$
(4.115)

The argument of the normal distribution can be approximated as

$$ \begin{array} {c} t \cdot \left( {\frac{1}{{\sigma \left( {q_\alpha^{(\infty )} + t} \right)}}} \right) \approx t \cdot \left( {\frac{1}{{\sigma \left( {q_\alpha^{(\infty )}} \right)}} + t \cdot {{\left[ {\frac{d}{{dt}}\frac{1}{{\sigma \left( {q_\alpha^{(\infty )} + t} \right)}}} \right]}_{t = 0}}} \right) \\ = t \cdot \left( {\frac{1}{{\sigma \left( {q_\alpha^{(\infty )}} \right)}} + t \cdot {{\left[ { - \frac{1}{{{\sigma^2}\left( {q_\alpha^{(\infty )} + t} \right)}}\frac{d}{{dt}}\sigma \left( {q_\alpha^{(\infty )} + t} \right)} \right]}_{t = 0}}} \right) \\ = \left( {\frac{t}{{\sigma \left( {q_\alpha^{(\infty )}} \right)}} - \frac{{{t^2}}}{{{\sigma^2}\left( {q_\alpha^{(\infty )}} \right)}}{{\left[ {\frac{d}{{dt}}\sigma \left( {q_\alpha^{(\infty )} + t} \right)} \right]}_{t = 0}}} \right). \\ \end{array} $$
(4.116)

With the substitution \( y: = q_\alpha^{(\infty )} + t \), so \( dy/dt = 1 \) and \( y(t = 0) = q_\alpha^{(\infty )} \), the derivative of the conditional standard deviation can be rewritten as

$$ \frac{d}{{dt}}\sigma {\left. {\left( {q_\alpha^{(\infty )} + t} \right)} \right|_{t = 0}} = \frac{d}{{dy}}\sigma {\left. {(y)} \right|_{y = q_\alpha^{(\infty )}}}. $$
(4.117)

Inserting (4.115)–(4.117) in (4.114) leads to

$$ \begin{array} {c} \Delta p = \left( {\int\limits_{t = - \infty }^0 {\Phi \left( {\frac{t}{{\sigma \left( {q_\alpha^{(\infty )}} \right)}} - \frac{{{t^2}}}{{{\sigma^2}\left( {q_\alpha^{(\infty )}} \right)}}{{\left. {\frac{{d\sigma (y)}}{{dy}}} \right|}_{y = q_\alpha^{(\infty )}}}} \right)} \cdot \left[ {{f_Y}\left( {q_\alpha^{(\infty )}} \right) + t \cdot {{\left. {\frac{{d{f_Y}(y)}}{{dy}}} \right|}_{y = q_\alpha^{(\infty )}}}} \right]dt} \right) \\ - \left( { - \int\limits_{t = 0}^\infty {\left[ {\Phi \left( {\frac{t}{{\sigma \left( {q_\alpha^{(\infty )}} \right)}} - \frac{{{t^2}}}{{{\sigma^2}\left( {q_\alpha^{(\infty )}} \right)}}{{\left. {\frac{{d\sigma (y)}}{{dy}}} \right|}_{y = q_\alpha^{(\infty )}}}} \right) - 1} \right] \cdot \left[ {{f_Y}\left( {q_\alpha^{(\infty )}} \right) + t \cdot {{\left. {\frac{{d{f_Y}(y)}}{{dy}}} \right|}_{y = q_\alpha^{(\infty )}}}} \right]dt} } \right) \\ = :\Delta {p_1} - \Delta {p_2}. \\ \end{array} $$
(4.118)

When the substitution \( t: = - t \) for the term \( \Delta {p_2} \) is performed and the symmetry of the normal distribution \( \Phi ( - x) - 1 = - \Phi (x) \) is used, both terms \( \Delta {p_1} \) and \( \Delta {p_2} \) are identical except for the algebraic signs:

$$ \begin{array} {c} \Delta {p_2} = - \int\limits_{t = 0}^{ - \infty } {\left[ {\Phi \left( { - \left[ {\frac{t}{{\sigma \left( {q_\alpha^{(\infty )}} \right)}} + \frac{{{t^2}}}{{{\sigma^2}\left( {q_\alpha^{(\infty )}} \right)}}{{\left. {\frac{{d\sigma (y)}}{{dy}}} \right|}_{y = q_\alpha^{(\infty )}}}} \right]} \right) - 1} \right]} \\ \cdot \left[ {{f_Y}\left( {q_\alpha^{(\infty )}} \right) - t \cdot {{\left. {\frac{{d{f_Y}(y)}}{{dy}}} \right|}_{y = q_\alpha^{(\infty )}}}} \right] \cdot \left( { - 1} \right)dt \\ = \int\limits_{t = - \infty }^0 {\Phi \left( {\frac{t}{{\sigma \left( {q_\alpha^{(\infty )}} \right)}} + \frac{{{t^2}}}{{{\sigma^2}\left( {q_\alpha^{(\infty )}} \right)}}{{\left. {\frac{{d\sigma (y)}}{{dy}}} \right|}_{y = q_\alpha^{(\infty )}}}} \right) \cdot \left( {{f_Y}\left( {q_\alpha^{(\infty )}} \right) - t \cdot {{\left. {\frac{{d{f_Y}(y)}}{{dy}}} \right|}_{y = q_\alpha^{(\infty )}}}} \right)dt} . \\ \end{array} $$
(4.119)

A linearization of the normal distributions in \( \Delta {p_1} \) and \( \Delta {p_2} \) results in

$$ \begin{array} {c} \Phi \left( {\frac{t}{{\sigma \left( {q_\alpha^{(\infty )}} \right)}} \mp \frac{{{t^2}}}{{{\sigma^2}\left( {q_\alpha^{(\infty )}} \right)}}{{\left. {\frac{{d\sigma (y)}}{{dy}}} \right|}_{y = q_\alpha^{(\infty )}}}} \right) \\ \approx \Phi \left( {\frac{t}{{\sigma \left( {q_\alpha^{(\infty )}} \right)}}} \right) \mp \frac{{{t^2}}}{{{\sigma^2}\left( {q_\alpha^{(\infty )}} \right)}}{\left. {\frac{{d\sigma (y)}}{{dy}}} \right|_{y = q_\alpha^{(\infty )}}}{\left. {\frac{{d\Phi (y)}}{{dy}}} \right|_{y = \frac{t}{{\sigma \left( {q_\alpha^{(\infty )}} \right)}}}} \\ = \Phi \left( {\frac{t}{{\sigma \left( {q_\alpha^{(\infty )}} \right)}}} \right) \mp \frac{{{t^2}}}{{{\sigma^2}\left( {q_\alpha^{(\infty )}} \right)}}{\left. {\frac{{d\sigma (y)}}{{dy}}} \right|_{y = q_\alpha^{(\infty )}}}\varphi \left( {\frac{t}{{\sigma \left( {q_\alpha^{(\infty )}} \right)}}} \right). \\ \end{array} $$
(4.120)

Using this approximation, the terms \( \Delta {p_1} \) and \( \Delta {p_2} \) from (4.118) can be written as

$$ \begin{array} {c} \Delta {p_{1,2}} \approx \int\limits_{t = - \infty }^0 {\underbrace {\Phi \left( {\frac{t}{{\sigma \left( {q_\alpha^{(\infty )}} \right)}}} \right)}_{ = :{\beta_0}} \cdot \left[ {\underbrace {{f_Y}\left( {q_\alpha^{(\infty )}} \right)}_{ = :{\gamma_0}}\pm \underbrace {t{{\left. {\frac{{d{f_Y}(y)}}{{dy}}} \right|}_{y = q_\alpha^{(\infty )}}}}_{ = :{\gamma_1}}} \right]dt} \\ \mp \int\limits_{t = - \infty }^0 {\underbrace {\frac{{{t^2}}}{{{\sigma^2}\left( {q_\alpha^{(\infty )}} \right)}}{{\left. {\frac{{d\sigma (y)}}{{dy}}} \right|}_{y = q_\alpha^{(\infty )}}}\varphi \left( {\frac{t}{{\sigma \left( {q_\alpha^{(\infty )}} \right)}}} \right)}_{ = :{\beta_1}} \cdot \left[ {\underbrace {{f_Y}\left( {q_\alpha^{(\infty )}} \right)}_{ = :{\gamma_0}}\pm \underbrace {t{{\left. {\frac{{d{f_Y}(y)}}{{dy}}} \right|}_{y = q_\alpha^{(\infty )}}}}_{ = :{\gamma_1}}} \right]dt} . \\ \end{array} $$
(4.121)

The summands \( {\beta_0},\;{\gamma_0} \) are the points around which the linearizations have been performed, whereas the summands \( {\beta_1},\;{\gamma_1} \) result from the first-order approximations. Using this notation, the shift in probability \( \Delta p \) of (4.118) simplifies considerably to

$$ \begin{array} {c} \Delta p \approx \Delta {p_1} - \Delta {p_2} \\ \approx \int\limits_{t = - \infty }^0 {{\beta_0}\left( {{\gamma_0} + {\gamma_1}} \right)} - {\beta_1}\left( {{\gamma_0} + {\gamma_1}} \right)dt - \int\limits_{t = - \infty }^0 {{\beta_0}\left( {{\gamma_0} - {\gamma_1}} \right)} + {\beta_1}\left( {{\gamma_0} - {\gamma_1}} \right)dt \\ = \int\limits_{t = - \infty }^0 {2{\beta_0}{\gamma_1} - 2{\beta_1}{\gamma_0}dt} . \\ \end{array} $$
(4.122)

Fortunately, both integrands are already first-order terms, whereas the cross-terms \( {\beta_1} \cdot {\gamma_1} \) vanish (footnote 70). Thus, there is no need for a further linearization. The remaining expression is

$$ \begin{array} {c} \Delta p \approx 2{\left. {\frac{{d{f_Y}(y)}}{{dy}}} \right|_{y = q_\alpha^{(\infty )}}}\int\limits_{t = - \infty }^0 {t \cdot \Phi \left( {\frac{t}{{\sigma \left( {q_\alpha^{(\infty )}} \right)}}} \right)dt} \\ - 2{\left. {\frac{{d\sigma (y)}}{{dy}}} \right|_{y = q_\alpha^{(\infty )}}}\frac{{{f_Y}\left( {q_\alpha^{(\infty )}} \right)}}{{{\sigma^2}\left( {q_\alpha^{(\infty )}} \right)}}\int\limits_{t = - \infty }^0 {{t^2} \cdot \varphi \left( {\frac{t}{{\sigma \left( {q_\alpha^{(\infty )}} \right)}}} \right)dt} . \\ \end{array} $$
(4.123)

In order to solve the integrals, the substitution \( y: = t/\sigma (q_\alpha^{(\infty )}) \) is performed, with \( dy/dt = 1/\sigma (q_\alpha^{(\infty )}) \), \( y(t = - \infty ) = - \infty \) and \( y(t = 0) = 0 \):

$$ \begin{array} {c} \Delta p \approx 2{\left. {\frac{{d{f_Y}(y)}}{{dy}}} \right|_{y = q_\alpha^{(\infty )}}}\int\limits_{y = - \infty }^0 {y \cdot \sigma \left( {q_\alpha^{(\infty )}} \right) \cdot \Phi (y) \cdot \sigma \left( {q_\alpha^{(\infty )}} \right)dy} \\ - 2{\left. {\frac{{d\sigma (y)}}{{dy}}} \right|_{y = q_\alpha^{(\infty )}}}\frac{{{f_Y}\left( {q_\alpha^{(\infty )}} \right)}}{{{\sigma^2}\left( {q_\alpha^{(\infty )}} \right)}}\int\limits_{y = - \infty }^0 {{{\left[ {y \cdot \sigma \left( {q_\alpha^{(\infty )}} \right)} \right]}^2} \cdot \varphi (y) \cdot \sigma \left( {q_\alpha^{(\infty )}} \right)dy} \\ = 2{\left. {\frac{{d{f_Y}(y)}}{{dy}}} \right|_{y = q_\alpha^{(\infty )}}}{\sigma^2}\left( {q_\alpha^{(\infty )}} \right)\underbrace {\int\limits_{y = - \infty }^0 {y \cdot \Phi (y)dy} }_* \\ - 2{\left. {\frac{{d\sigma (y)}}{{dy}}} \right|_{y = q_\alpha^{(\infty )}}}{f_Y}\left( {q_\alpha^{(\infty )}} \right) \cdot \sigma \left( {q_\alpha^{(\infty )}} \right)\underbrace {\int\limits_{y = - \infty }^0 {{y^2} \cdot \varphi (y)dy} }_{**}. \\ \end{array} $$
(4.124)

For the second integral (**), it is used that the integrand is symmetric about the y-axis. Furthermore, the definition of the variance is utilized, considering that the standard normal distribution has mean \( {\mu_Y} = 0 \) and variance \( \sigma_Y^2 = 1 \):

$$ \int\limits_{y = - \infty }^0 {{y^2} \cdot \varphi (y)dy} = \frac{1}{2}\int\limits_{y = - \infty }^\infty {{y^2} \cdot \varphi (y)dy} = \frac{1}{2}\int\limits_{y = - \infty }^\infty {{{\left( {y - {\mu_Y}} \right)}^2} \cdot \varphi (y)dy} = \frac{1}{2}{\sigma_Y}^2 = \frac{1}{2}. $$
(4.125)

The first integral (*) can be calculated with integration by parts:

$$ \int\limits_{y = - \infty }^0 {y \cdot \Phi (y)dy} = \left[ {\frac{1}{2}{y^2} \cdot \Phi (y)} \right]_{y = - \infty }^0 - \int\limits_{y = - \infty }^0 {\frac{1}{2}{y^2} \cdot \varphi (y)} \,dy. $$
(4.126)

For \( y = 0 \), the first term is zero, but for \( y \to - \infty \) the limit is not obvious. Using l’Hôpital’s rule several times leads to (footnote 71)

$$ \begin{array} {c} \mathop {{\lim }}\limits_{y \to - \infty } \frac{1}{2}{y^2} \cdot \Phi (y) = \mathop {{\lim }}\limits_{y \to \infty } \frac{1}{2}\frac{{\Phi \left( { - y} \right)}}{{{y^{ - 2}}}}\mathop { = }\limits^{{\text{l'H\^o pital}}} \mathop {{\lim }}\limits_{y \to \infty } \frac{1}{2}\frac{{ - \varphi \left( { - y} \right)}}{{ - 2{y^{ - 3}}}} = \mathop {{\lim }}\limits_{y \to \infty } \frac{1}{4}\frac{{{y^3}}}{{{e^{{{{{y^2}}} \left/ {2} \right.}}}}}\mathop { = }\limits^{{\text{l'H\^o pital}}} \mathop {{\lim }}\limits_{y \to \infty } \frac{1}{4}\frac{{3{y^2}}}{{y \cdot {e^{{{{{y^2}}} \left/ {2} \right.}}}}} \\ = \mathop {{\lim }}\limits_{y \to \infty } \frac{3}{4}\frac{y}{{{e^{{{{{y^2}}} \left/ {2} \right.}}}}}\mathop { = }\limits^{{\text{l'H\^o pital}}} \mathop {{\lim }}\limits_{y \to \infty } \frac{3}{4}\frac{1}{{y \cdot {e^{{{{{y^2}}} \left/ {2} \right.}}}}} = 0, \\ \end{array} $$
(4.127)

so that the first term of (4.126) vanishes. Using the result of the previous integration, (4.126) equals \( - {1/4} \). Hence, \( \Delta p \) from (4.124) is given as

$$ \Delta p \approx - \frac{1}{2}{\left. {\frac{{d{f_Y}(y)}}{{dy}}} \right|_{y = q_\alpha^{(\infty )}}}{\sigma^2}\left( {q_\alpha^{(\infty )}} \right) - {\left. {\frac{{d\sigma (y)}}{{dy}}} \right|_{y = q_\alpha^{(\infty )}}}{f_Y}\left( {q_\alpha^{(\infty )}} \right) \cdot \sigma \left( {q_\alpha^{(\infty )}} \right). $$
(4.128)

Because of \( \sigma \frac{{d\sigma }}{{dy}} = \frac{1}{2}\frac{{d{\sigma^2}}}{{d\sigma }}\frac{{d\sigma }}{{dy}} = \frac{1}{2}\frac{{d{\sigma^2}}}{{dy}} \), (4.128) is equivalent to

$$ \begin{array} {c} \Delta p \approx - \left[ {\frac{1}{2}{{\left. {\frac{{d{f_Y}(y)}}{{dy}}} \right|}_{y = q_\alpha^{(\infty )}}}{\sigma^2}\left( {q_\alpha^{(\infty )}} \right) + {{\left. {\frac{1}{2}\frac{{d{\sigma^2}(y)}}{{dy}}} \right|}_{y = q_\alpha^{(\infty )}}}{f_Y}\left( {q_\alpha^{(\infty )}} \right)} \right] \\ = - \frac{1}{2}{\left[ {\frac{{d{f_Y}(y)}}{{dy}}{\sigma^2}(y) + \frac{{d{\sigma^2}(y)}}{{dy}}{f_Y}(y)} \right]_{y = q_\alpha^{(\infty )}}} \\ = - \frac{1}{2}\frac{d}{{dy}}{\left. {\left( {{f_Y}(y) \cdot {\sigma^2}(y)} \right)} \right|_{y = q_\alpha^{(\infty )}}}. \\ \end{array} $$
(4.129)

This expression is the linearized deviation of the specified probability \( p = 1 - \alpha \) if only the systematic loss is considered for calculation of the loss quantile.
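The two auxiliary integrals (*) and (**) evaluated in (4.125)–(4.127) can also be checked numerically. The following minimal sketch (in Python with SciPy; it is an illustration only) confirms the values 1/2 and −1/4:

```python
# Numerical sanity check of the two auxiliary integrals used in (4.124)-(4.127):
#   (*)  int_{-inf}^{0} y   * Phi(y) dy = -1/4
#   (**) int_{-inf}^{0} y^2 * phi(y) dy =  1/2
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

star, _ = quad(lambda y: y * norm.cdf(y), -np.inf, 0)
double_star, _ = quad(lambda y: y**2 * norm.pdf(y), -np.inf, 0)

print(f"(*)  = {star:.6f}")         # approx -0.25
print(f"(**) = {double_star:.6f}")  # approx  0.50
```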

As initially noticed, the determined shift of the probability has to be transformed into a shift of the loss quantile (cf. Fig. 4.12). If the probability density function of the portfolio loss is assumed to be almost linear in a region around the quantile, the required transformation is

$$ \Delta p \approx \frac{1}{2}\left[ {{f_Y}\left( {q_\alpha^{(\infty )}} \right) + {f_Y}\left( {q_\alpha^{(\infty )} + \Delta q} \right)} \right]\Delta q. $$
(4.130)
Fig. 4.12 Relation between the shift of the probability and the loss quantile

Two final first-order approximations lead to

$$ \begin{array} {c} \Delta p \approx \frac{1}{2}\left[ {{f_Y}\left( {q_\alpha^{(\infty )}} \right) + \left( {{f_Y}\left( {q_\alpha^{(\infty )}} \right) + \Delta q{{\left. {\frac{{d{f_Y}(y)}}{{dy}}} \right|}_{y = q_\alpha^{(\infty )}}}} \right)} \right]\Delta q \\ = {f_Y}\left( {q_\alpha^{(\infty )}} \right) \cdot \Delta q + {\left. {\frac{1}{2}\frac{{d{f_Y}(y)}}{{dy}}} \right|_{y = q_\alpha^{(\infty )}}}{\left( {\Delta q} \right)^2} \\ \approx {f_Y}\left( {q_\alpha^{(\infty )}} \right) \cdot \Delta q. \\ \end{array} $$
(4.131)

Inserting (4.129) into (4.131) finally leads to

$$ \begin{array} {c} \Delta q \approx \frac{{\Delta p}}{{{f_Y}\left( {q_\alpha^{(\infty )}} \right)}} \approx - \frac{1}{2}\frac{1}{{{f_Y}(y)}}\frac{d}{{dy}}{\left. {\left( {{f_Y}(y) \cdot {\sigma^2}(y)} \right)} \right|_{y = q_\alpha^{(\infty )}}} \\ = - \frac{1}{2}\frac{1}{{{f_Y}(y)}}\frac{d}{{dy}}{\left. {\left( {{f_Y}(y) \cdot \mathbb{V}\left( {\tilde{L}|\tilde{Y} = y} \right)} \right)} \right|_{y = q_\alpha^{(\infty )}}}. \\ \end{array} $$
(4.132)

Using (4.8), this can be written as

$$ \Delta q \approx - \frac{1}{{2{f_x}(x)}}\frac{d}{{dx}}{\left. {\left( {\frac{{{f_x}(x)\mathbb{V}\left[ {\tilde{L}|\tilde{x} = x} \right]}}{{\frac{d}{{dx}}\mathbb{E}\left[ {\tilde{L}|\tilde{x} = x} \right]}}} \right)} \right|_{x = {q_{1 - \alpha }}\left( {\tilde{x}} \right)}}, $$
(4.133)

which is identical to the first-order granularity adjustment of Sect. 4.2.1.1 (footnote 72).
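To illustrate what the adjustment approximates, the following minimal sketch simulates the quantile \( q_\alpha^{(n)} \) of a finite homogeneous portfolio and compares it with the ASRF quantile \( q_\alpha^{(\infty )} \); the gap is the \( \Delta q \) of (4.104). It assumes a one-factor Vasicek portfolio with constant LGD and the usual conditional default probability \( p(x) = \Phi \left( {\left[ {{\Phi^{ - 1}}(PD) - \sqrt {\rho } \,x} \right]/\sqrt {{1 - \rho }} } \right) \); all parameter values are purely illustrative.

```python
# Minimal sketch: simulated q^(n) of a finite homogeneous Vasicek portfolio
# versus the ASRF quantile q^(inf); the difference is Delta q from (4.104).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n, PD, rho, ELGD, alpha = 100, 0.01, 0.20, 0.45, 0.999
trials = 3_000_000                       # about 3,000 hits in the tail at alpha = 0.999

x = rng.standard_normal(trials)          # systematic factor
p_x = norm.cdf((norm.ppf(PD) - np.sqrt(rho) * x) / np.sqrt(1 - rho))
defaults = rng.binomial(n, p_x)          # defaults are conditionally independent given x
loss = ELGD * defaults / n               # portfolio loss rate with constant LGD

q_n = np.quantile(loss, alpha)           # quantile of the finite (coarse-grained) portfolio
q_inf = ELGD * norm.cdf((norm.ppf(PD) + np.sqrt(rho) * norm.ppf(alpha)) / np.sqrt(1 - rho))

print(f"q^(n)   = {q_n:.4%}")
print(f"q^(inf) = {q_inf:.4%}")
print(f"Delta q = {q_n - q_inf:.4%}")    # to be approximated by the granularity adjustment
```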

4.1.2 First and Second Derivative of VaR

In the following, the derivatives of VaR will be determined on the basis of Rau-Bredow (2002, 2004). Consider two continuous random variables \( \tilde{Y} \) and \( \tilde{Z} \) with joint probability density function \( f(y,z) \) and a variable \( \lambda \in \mathbb{R} \). The VaR (the quantile) \( q: = {q_\alpha }\left( {\tilde{L}} \right) \) of \( \tilde{L} = \tilde{Y} + \lambda \tilde{Z} \) can be defined implicitly as (footnote 73)

$$ \mathbb{P}\left( {\tilde{L} \leq q} \right) = \alpha . $$
(4.134)

Furthermore, the formula for the conditional density function will be used (footnote 74):

$$ {f_{Z|Y = y}}(z) = \frac{{{f_{Y,Z}}(y,z)}}{{{f_Y}(y)}} $$
(4.135)

leading to (footnote 75)

$$ {f_{Z|Y + \lambda Z = q}}(z) = \frac{{{f_{Y + \lambda Z,Z}}\left( {q,z} \right)}}{{{f_{Y + \lambda Z}}(q)}} = \frac{{{f_{Y,Z}}\left( {q - \lambda z,z} \right)}}{{{f_{Y + \lambda Z}}(q)}}. $$
(4.136)

4.1.2.1 First Derivative

As the derivative of the constant α is zero, the derivative of (4.134) is

$$ \begin{array} {c} 0 = \frac{\partial }{{\partial \lambda }}\mathbb{P}\left( {\tilde{Y} + \lambda \tilde{Z} \leq q} \right) \\ = \frac{\partial }{{\partial \lambda }}\int\limits_{z = - \infty }^\infty {\int\limits_{y = - \infty }^{q - \lambda z} {{f_{Y,Z}}(y,z)\,dy\,dz} } \\ = \int\limits_{z = - \infty }^\infty {\frac{\partial }{{\partial \lambda }}\int\limits_{y = - \infty }^{q - \lambda z} {{f_{Y,Z}}(y,z)\,dy\,dz} } . \\ \end{array} $$
(4.137)

Performing the inner integration and the differentiation leads to

$$ 0 = \int\limits_{z = - \infty }^\infty {\left( {\frac{{dq}}{{d\lambda }} - z} \right)\,{f_{Y,Z}}\left( {q - \lambda z,z} \right)\,dz} . $$
(4.138)

Using the formula for the conditional density function (4.135) and the integral representation of the conditional expectation, we get

$$ \begin{array} {c} 0 = \int\limits_{z = - \infty }^\infty {\left( {\frac{{dq}}{{d\lambda }} - z} \right)\,{f_{Y + \lambda Z}}(q)\,{f_{Z|Y + \lambda Z = q}}(z)\,dz} \\ = {f_{Y + \lambda Z}}(q)\left( {\frac{{dq}}{{d\lambda }}\int\limits_{z = - \infty }^\infty {{f_{Z|Y + \lambda Z = q}}(z)\,dz} - \int\limits_{z = - \infty }^\infty {z\,{f_{Z|Y + \lambda Z = q}}(z)\,dz} } \right) \\ = {f_{Y + \lambda Z}}(q)\left( {\frac{{dq}}{{d\lambda }} \cdot 1 - \mathbb{E}\left[ {\tilde{Z}|\tilde{Y} + \lambda \tilde{Z} = q} \right]} \right). \\ \end{array} $$
(4.139)

This leads to the first derivative of VaR:

$$ \frac{{dVa{R_\alpha }\left( {\tilde{Y} + \lambda \tilde{Z}} \right)}}{{d\lambda }} = \mathbb{E}\left[ {\tilde{Z}|\tilde{Y} + \lambda \tilde{Z} = {q_\alpha }\left( {\tilde{Y} + \lambda \tilde{Z}} \right)} \right]. $$
(4.140)

The first derivative at \( \lambda = 0 \) is

$$ {\left. {\frac{{dVa{R_\alpha }\left( {\tilde{Y} + \lambda \tilde{Z}} \right)}}{{d\lambda }}} \right|_{\lambda = 0}} = \mathbb{E}\left[ {\tilde{Z}|\tilde{Y} = {q_\alpha }\left( {\tilde{Y}} \right)} \right]. $$
(4.141)
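Relation (4.141) can be checked by simulation: estimate the VaR of \( \tilde{Y} + \lambda \tilde{Z} \) for small λ, take a central finite difference in λ, and compare it with an estimate of \( \mathbb{E}\left[ {\tilde{Z}|\tilde{Y} \approx {q_\alpha }(\tilde{Y})} \right] \) obtained from observations close to the quantile. The following sketch uses an assumed toy distribution (Ỹ standard normal, Z̃ correlated with Ỹ) chosen purely for illustration:

```python
# Simulation sketch checking (4.141):
#   d/d(lambda) VaR_alpha(Y + lambda*Z) |_{lambda=0}  =  E[ Z | Y = q_alpha(Y) ].
import numpy as np

rng = np.random.default_rng(1)
alpha, n_sims, eps = 0.99, 2_000_000, 0.05

y = rng.standard_normal(n_sims)
z = 0.6 * y + 0.8 * rng.standard_normal(n_sims)   # corr(Y, Z) = 0.6, Var(Z) = 1

# finite-difference estimate of the derivative of VaR at lambda = 0
var_plus = np.quantile(y + eps * z, alpha)
var_minus = np.quantile(y - eps * z, alpha)
fd_derivative = (var_plus - var_minus) / (2 * eps)

# conditional-expectation estimate: average Z over observations near q_alpha(Y)
q_y = np.quantile(y, alpha)
cond_mean = z[np.abs(y - q_y) < 0.01].mean()

print(f"finite difference    : {fd_derivative:.4f}")
print(f"E[Z | Y ~ q_a(Y)]    : {cond_mean:.4f}")   # both close to 0.6 * Phi^-1(0.99)
```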

4.1.2.2 Second Derivative

Similar to (4.137), the second derivative of (4.134) is

$$ 0 = \frac{{{\partial^2}}}{{\partial {\lambda^2}}}\mathbb{P}\left( {\tilde{Y} + \lambda \tilde{Z} \leq q} \right) = \frac{{{\partial^2}}}{{\partial {\lambda^2}}}\int\limits_{z = - \infty }^\infty {\int\limits_{y = - \infty }^{q - \lambda z} {{f_{Y,Z}}(y,z)\,dy\,dz} } . $$
(4.142)

With the first derivative of (4.138) and applying the product rule, this leads to

$$ \begin{array} {c} 0 = \frac{\partial }{{\partial \lambda }}\int\limits_{z = - \infty }^\infty {\left( {\frac{{dq}}{{d\lambda }} - z} \right){f_{Y,Z}}\left( {q - \lambda z,z} \right)\,dz} \\ = \int\limits_{z = - \infty }^\infty {\left( {\frac{{{d^2}q}}{{{d^2}\lambda }}} \right){f_{Y,Z}}\left( {q - \lambda z,z} \right) + \left( {\frac{{dq}}{{d\lambda }} - z} \right)\underbrace {\frac{{\partial {f_{Y,Z}}\left( {q - \lambda z,z} \right)}}{{\partial \lambda }}}_*dz} . \\ \end{array} $$
(4.143)

The derivative (*) can be determined with the chain rule:

$$ \begin{array} {c} \frac{{\partial {f_{Y,Z}}\left( {q - \lambda z,z} \right)}}{{\partial \lambda }} = \frac{{\partial \left( {q - \lambda z} \right)}}{{\partial \lambda }}\frac{{\partial {f_{Y,Z}}\left( {q - \lambda z,z} \right)}}{{\partial \left( {q - \lambda z} \right)}}\frac{{\partial q}}{{\partial q}} \\ = \left( {\frac{{dq}}{{d\lambda }} - z} \right)\frac{{\partial {f_{Y,Z}}\left( {q - \lambda z,z} \right)}}{{\partial q}}\frac{1}{{{{{\partial \left( {q - \lambda z} \right)}} \left/ {{\partial q}} \right.}}} \\ = \left( {\frac{{dq}}{{d\lambda }} - z} \right)\frac{{\partial {f_{Y,Z}}(q - \lambda z,z)}}{{\partial q}}. \\ \end{array} $$
(4.144)

Inserting (4.144) and the conditional density (4.136) into (4.143) results in

$$ \begin{array} {c} 0 = \int\limits_{z = - \infty }^\infty {\left( {\frac{{{d^2}q}}{{{d^2}\lambda }}} \right){f_{Y,Z}}\left( {q - \lambda z,z} \right) + {{\left( {\frac{{dq}}{{d\lambda }} - z} \right)}^2}\frac{{\partial {f_{Y,Z}}\left( {q - \lambda z,z} \right)}}{{\partial q}}dz} \\ = \left( {\frac{{{d^2}q}}{{{d^2}\lambda }}} \right)\int\limits_{z = - \infty }^\infty {{f_{Y + \lambda Z}}(q){f_{Z|Y + \lambda Z = q}}(z)dz} \\ + \int\limits_{z = - \infty }^\infty {{{\left( {\frac{{dq}}{{d\lambda }} - z} \right)}^2}\frac{{\partial \left( {{f_{Y + \lambda Z}}(q)\,{f_{Z|Y + \lambda Z = q}}(z)} \right)}}{{\partial q}}dz} . \\ \end{array} $$
(4.145)

The first summand of (4.145) equals

$$ \left( {\frac{{{d^2}q}}{{{d^2}\lambda }}} \right){f_{Y + \lambda Z}}(q)\int\limits_{z = - \infty }^\infty {{f_{Z|Y + \lambda Z = q}}(z)dz} = \left( {\frac{{{d^2}q}}{{{d^2}\lambda }}} \right){f_{Y + \lambda Z}}(q). $$
(4.146)

In order to calculate the second summand of (4.145), the first derivative from (4.140) as well as the integral representation of the conditional variance is used:

$$ \begin{array} {c} \int\limits_{z = - \infty }^\infty {{{\left( {\frac{{dq}}{{d\lambda }} - z} \right)}^2}\frac{{\partial \left( {{f_{Y + \lambda Z}}(q)\,{f_{Z|Y + \lambda Z = q}}(z)} \right)}}{{\partial q}}dz} \\ = \int\limits_{z = - \infty }^\infty {{{\left( {z - \mathbb{E}\left[ {\tilde{Z}|\tilde{Y} + \lambda \tilde{Z} = q} \right]} \right)}^2}\frac{{\partial \left( {{f_{Y + \lambda Z}}(q){f_{Z|Y + \lambda Z = q}}(z)} \right)}}{{\partial q}}dz} \\ = {\left. {\frac{d}{{dy}}\left( {{f_{Y + \lambda Z}}(y)\int\limits_{z = - \infty }^\infty {{{\left( {z - \mathbb{E}\left[ {\tilde{Z}|\tilde{Y} + \lambda \tilde{Z} = q} \right]} \right)}^2}{f_{Z|Y + \lambda Z = y}}(z)dz} } \right)} \right|_{y = q}} \\ = {\left. {\frac{d}{{dy}}\left( {{f_{Y + \lambda Z}}(y)\mathbb{V}\left[ {\tilde{Z}|\tilde{Y} + \lambda \tilde{Z} = y} \right]} \right)} \right|_{y = q}}. \\ \end{array} $$
(4.147)

With these summands, (4.145) can be written as

$$ 0 = \left( {\frac{{{d^2}q}}{{{d^2}\lambda }}} \right){f_{Y + \lambda Z}}(y) + {\left. {\frac{d}{{dy}}\left( {{f_{Y + \lambda Z}}(y)\mathbb{V}\left[ {\tilde{Z}|\tilde{Y} + \lambda \tilde{Z} = y} \right]} \right)} \right|_{y = q}}. $$
(4.148)

Thus, the second derivative of VaR is equal to

$$ \frac{{{d^2}Va{R_\alpha }\left( {\tilde{Y} + \lambda \tilde{Z}} \right)}}{{{d^2}\lambda }} = - \frac{1}{{{f_{Y + \lambda Z}}(y)}} \cdot {\left. {\frac{d}{{dy}}\left( {{f_{Y + \lambda Z}}(y)\mathbb{V}\left[ {\tilde{Z}|\tilde{Y} + \lambda \tilde{Z} = y} \right]} \right)} \right|_{y = {q_\alpha }\left( {\tilde{Y} + \lambda \tilde{Z}} \right)}}. $$
(4.149)

The second derivative at \( \lambda = 0 \) is

$$ {\left. {\frac{{{d^2}Va{R_\alpha }\left( {\tilde{Y} + \lambda \tilde{Z}} \right)}}{{{d^2}\lambda }}} \right|_{\lambda = 0}} = - \frac{1}{{{f_Y}(y)}}{\left. {\frac{d}{{dy}}\left( {{f_Y}(y)\mathbb{V}\left[ {\tilde{Z}|\tilde{Y} = y} \right]} \right)} \right|_{y = {q_\alpha }\left( {\tilde{Y}} \right)}}. $$
(4.150)
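For jointly normal Ỹ and Z̃, all quantities in (4.141) and (4.150) are available in closed form, so both derivative formulas can be compared directly with the exact derivatives of \( Va{R_\alpha }(\tilde{Y} + \lambda \tilde{Z}) = {\mu_Y} + \lambda {\mu_Z} + {\Phi^{ - 1}}(\alpha )\,\sigma (\lambda ) \). The following sketch uses illustrative, assumed parameters:

```python
# Closed-form check (bivariate normal toy case) of the first and second
# derivatives of VaR, (4.141) and (4.150), at lambda = 0.
import numpy as np
from scipy.stats import norm

mu_y, mu_z, s_y, s_z, r, alpha = 0.0, 0.1, 1.0, 0.5, 0.3, 0.999
q_std = norm.ppf(alpha)

# Exact derivatives of VaR_a(Y + lambda*Z) = mu_y + lambda*mu_z + q_std*sigma(lambda),
# with sigma(lambda)^2 = s_y^2 + 2*lambda*r*s_y*s_z + lambda^2*s_z^2.
d1_exact = mu_z + q_std * r * s_z
d2_exact = q_std * s_z**2 * (1 - r**2) / s_y

# (4.141): E[Z | Y = q_a(Y)] for a bivariate normal pair.
d1_formula = mu_z + r * (s_z / s_y) * (q_std * s_y)

# (4.150): -(1/f_Y) d/dy ( f_Y(y) * V[Z|Y=y] ) at y = q_a(Y); here V[Z|Y=y] is the
# constant s_z^2*(1-r^2) and f_Y'(y)/f_Y(y) = -(y - mu_y)/s_y^2.
d2_formula = s_z**2 * (1 - r**2) * q_std / s_y

print(d1_exact, d1_formula)   # identical
print(d2_exact, d2_formula)   # identical
```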

4.1.3 Probability Density Function of Transformed Random Variables

Let \( \tilde{X} \) be a random variable with density \( {f_X}(x) \) and let \( \tilde{Y} \) be a random variable with \( \tilde{Y} = g(\tilde{X}) \). If g is strictly monotonic and differentiable, the probability density function (PDF) of \( \tilde{Y} \) can be obtained via the inverse function theorem (footnote 76):

$$ {f_Y}(y) = {f_X}\left( {{g^{ - 1}}(y)} \right) \cdot \left| {\frac{{d{g^{ - 1}}(y)}}{{dy}}} \right| $$
(4.151)

With \( {g^{ - 1}}(y) = x \), we obtain

$$ \left| {\frac{{d{g^{ - 1}}(y)}}{{dy}}} \right| = \left| {\frac{{dx}}{{dy}}} \right| = \left| {\frac{1}{{{{{dy}} \left/ {{dx}} \right.}}}} \right| $$
(4.152)

which leads to

$$ {f_Y}(y) = \frac{{{f_X}(x)}}{{\left| {{{{dy}} \left/ {{dx}} \right.}} \right|}}. $$
(4.153)
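As a small illustration of (4.151)–(4.153), consider the standard normal \( \tilde{X} \) and the strictly increasing transform \( g(x) = {e^x} \); the formula then yields the lognormal density \( {f_Y}(y) = \varphi (\ln y)/y \). The following sketch (illustrative only) cross-checks this against SciPy's lognormal density:

```python
# Sketch illustrating (4.153): density of Y = g(X) = exp(X) for standard normal X.
# With dy/dx = exp(x) = y, the formula gives f_Y(y) = phi(ln y) / y.
import numpy as np
from scipy.stats import norm, lognorm

y = np.linspace(0.1, 5.0, 5)
f_transform = norm.pdf(np.log(y)) / y          # f_X(g^{-1}(y)) / |dy/dx|
f_reference = lognorm.pdf(y, s=1.0)            # standard lognormal density as a cross-check

print(np.allclose(f_transform, f_reference))   # True
```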

4.1.4 VaR-Based First-Order Granularity Adjustment for a Normally Distributed Systematic Factor

The granularity adjustment (4.10) can be expressed as

$$ \begin{array} {c} \Delta {l_1} = {\left. { - \frac{1}{{2\varphi }}\frac{d}{{dx}}\left( {\frac{{\varphi \,{\eta_{2,c}}}}{{{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}}}} \right)} \right|_{x = {\Phi^{ - 1}}\left( {1 - \alpha } \right)}} \\ = {\left. { - \frac{1}{{2\varphi }}\left[ {\frac{d}{{dx}}\left( {\varphi \,{\eta_{2,c}}} \right)\frac{1}{{{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}}} + \varphi \,{\eta_{2,c}}\frac{d}{{dx}}\left( {\frac{1}{{{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}}}} \right)} \right]} \right|_{x = {\Phi^{ - 1}}\left( {1 - \alpha } \right)}} \\ = {\left. { - \frac{1}{2}\left[ {\frac{1}{\varphi }\frac{d}{{dx}}\left( {\varphi \,{\eta_{2,c}}} \right)\frac{1}{{{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}}} + {\eta_{2,c}}\frac{d}{{dx}}\left( {\frac{1}{{{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}}}} \right)} \right]} \right|_{x = {\Phi^{ - 1}}\left( {1 - \alpha } \right)}} \\ = {\left. { - \frac{1}{2}\left[ {\left( {\frac{{{\eta_{2,c}}}}{\varphi }\frac{{d\varphi }}{{dx}} + \frac{{d{\eta_{2,c}}}}{{dx}}} \right)\frac{1}{{{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}}} - {\eta_{2,c}}\frac{{{{{{d^2}{\mu_{1,c}}}} \left/ {{d{x^2}}} \right.}}}{{{{\left( {{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}} \right)}^2}}}} \right]} \right|_{x = {\Phi^{ - 1}}\left( {1 - \alpha } \right)}}. \\ \end{array} $$
(4.154)

Because of

$$ \frac{1}{\varphi }\frac{{d\varphi }}{{dx}} = \frac{{d(\ln \varphi )}}{{dx}} = \frac{d}{{dx}}\left( {\ln \left[ {\frac{1}{{\sqrt {{2\pi }} }}\exp \left( { - \frac{{{x^2}}}{2}} \right)} \right]} \right) = \frac{d}{{dx}}\left( {\ln \frac{1}{{\sqrt {{2\pi }} }} - \frac{{{x^2}}}{2}} \right) = - x, $$
(4.155)

the granularity adjustment (4.154) can be written as

$$ \Delta {l_1} = {\left. {\frac{1}{2}\left[ {\frac{{x \cdot {\eta_{2,c}}}}{{{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}}} - \frac{{{{{d{\eta_{2,c}}}} \left/ {{dx}} \right.}}}{{{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}}} + \frac{{{{{{\eta_{2,c}} \cdot {d^2}{\mu_{1,c}}}} \left/ {{d{x^2}}} \right.}}}{{{{\left( {{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}} \right)}^2}}}} \right]} \right|_{x = {\Phi^{ - 1}}\left( {1 - \alpha } \right)}}. $$
(4.156)

For the calculation of (4.156), the conditional expectation and variance have to be determined. Assuming stochastically independent LGDs and with ELGD and VLGD for the expectation and the variance of the LGD, respectively, the required moments are given as (footnote 77)

$$ \begin{array} {c} {\mu_{1,c}} = \mathbb{E}\left( {\sum\limits_{i = 1}^n {{w_i} \cdot \widetilde{{LG{D_i}}} \cdot {1_{\left\{ {{{\tilde{D}}_i}} \right\}}}} |\tilde{x} = x} \right) \\ = \sum\limits_{i = 1}^n {{w_i} \cdot ELG{D_i} \cdot \mathbb{E}\left( {{1_{\left\{ {{{\tilde{D}}_i}} \right\}}}|\tilde{x} = x} \right)} \\ = \sum\limits_{i = 1}^n {{w_i} \cdot ELG{D_i}} \cdot {p_i}(x), \\ \end{array} $$
(4.157)
$$ \begin{array} {c} {\eta_{2,c}} = \mathbb{V}\left( {\sum\limits_{i = 1}^n {{w_i} \cdot \widetilde{{LG{D_i}}} \cdot {1_{\left\{ {{{\tilde{D}}_i}} \right\}}}} |\tilde{x} = x} \right) \\ = \sum\limits_{i = 1}^n {w_i^2 \cdot \mathbb{V}\left( {\widetilde{{LG{D_i}}} \cdot {1_{\left\{ {{{\tilde{D}}_i}} \right\}}}|\tilde{x} = x} \right)} \\ = \sum\limits_{i = 1}^n {w_i^2 \cdot \left[ {\mathbb{E}\left( {{{\left[ {\widetilde{{LG{D_i}}} \cdot {1_{\left\{ {{{\tilde{D}}_i}} \right\}}}|\tilde{x} = x} \right]}^2}} \right) - {\mathbb{E}^2}\left( {\widetilde{{LG{D_i}}} \cdot {1_{\left\{ {{{\tilde{D}}_i}} \right\}}}|\tilde{x} = x} \right)} \right]} \\ = \sum\limits_{i = 1}^n {w_i^2 \cdot \left[ {\mathbb{E}\left( {{{\widetilde{{LG{D_i}}}}^2}} \right) \cdot \mathbb{E}\left( {{{\left[ {{1_{\left\{ {{{\tilde{D}}_i}} \right\}}}|\tilde{x} = x} \right]}^2}} \right) - {{\left( {ELG{D_i} \cdot {p_i}(x)} \right)}^2}} \right]} \\ = \sum\limits_{i = 1}^n {w_i^2 \cdot \left[ {\left( {ELGD_i^2 + VLG{D_i}} \right) \cdot {p_i}(x) - ELGD_i^2 \cdot p_i^2(x)} \right]} . \\ \end{array} $$
(4.158)
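
The conditional moments (4.157) and (4.158) are straightforward to evaluate numerically once the conditional default probabilities \( {p_i}(x) \) are specified. The following minimal Python sketch assumes the Vasicek-type conditional PD \( {p_i}(x) = \Phi \left( {\left( {{\Phi^{ - 1}}\left( {P{D_i}} \right) - \sqrt {{\rho_i}} \,x} \right)/\sqrt {{1 - {\rho_i}}} } \right) \), consistent with the quantile point used in (4.154); all function and variable names are illustrative.

```python
import numpy as np
from scipy.stats import norm

def conditional_moments(x, w, pd, rho, elgd, vlgd):
    """Conditional mean (4.157) and conditional variance (4.158) of the
    portfolio loss, given a realization x of the systematic factor.

    w, pd, rho, elgd, vlgd: arrays of exposure weights, unconditional PDs,
    asset correlations, expected LGDs and LGD variances per borrower."""
    w, pd, rho, elgd, vlgd = map(np.asarray, (w, pd, rho, elgd, vlgd))
    # Vasicek-type conditional default probability p_i(x)
    p_x = norm.cdf((norm.ppf(pd) - np.sqrt(rho) * x) / np.sqrt(1.0 - rho))
    mu_1c = np.sum(w * elgd * p_x)                                       # (4.157)
    eta_2c = np.sum(w**2 * ((elgd**2 + vlgd) * p_x - elgd**2 * p_x**2))  # (4.158)
    return mu_1c, eta_2c
```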

4.1.5 VaR-Based First-Order Granularity Adjustment for Homogeneous Portfolios

For homogeneous portfolios, the granularity adjustment formula (4.28) can be simplified to

$$ \begin{array} {c} \Delta {l_1} = \frac{1}{{2n}}\left[ {{\Phi^{ - 1}}\left( \alpha \right)\frac{{\left( {ELG{D^2} + VLGD} \right)\Phi (z) - ELG{D^2}\,{\Phi^2}(z)}}{{ELGD\left( {{{{\sqrt {\rho } }} \left/ {{\sqrt {{1 - \rho }} }} \right.}} \right)\varphi (z)}}} \right. \\ - \frac{{\left( {ELG{D^2} + VLGD} \right) - 2ELG{D^2}\,\Phi (z)}}{{ELGD}} \\ {\left. { - \frac{{\left( {ELG{D^2} + VLGD} \right)\Phi (z)z - ELG{D^2}\,{\Phi^2}(z)z}}{{ELGD \cdot \varphi (z)}}} \right]_{z = \frac{{{\Phi^{ - 1}}(PD) + \sqrt {\rho } {\Phi^{ - 1}}\left( \alpha \right)}}{{\sqrt {{1 - \rho }} }}}} \\ = \frac{1}{{2n}}\left( {\frac{{ELG{D^2} + VLGD}}{{ELGD}}\left[ {\frac{{\sqrt {{1 - \rho }} \,{\Phi^{ - 1}}\left( \alpha \right)\Phi (z)}}{{\sqrt {\rho } \,\varphi (z)}}} \right. - 1 - \left. {\frac{{\Phi (z)z}}{{\varphi (z)}}} \right]} \right. \\ {\left. { - ELGD\,\Phi (z)\left[ {\frac{{\sqrt {{1 - \rho }} \,{\Phi^{ - 1}}\left( \alpha \right)\Phi (z)}}{{\sqrt {\rho } \,\varphi (z)}}} \right. - 2 - \left. {\frac{{\Phi (z)z}}{{\varphi (z)}}} \right]} \right)_{z = \frac{{{\Phi^{ - 1}}(PD) + \sqrt {\rho } {\Phi^{ - 1}}\left( \alpha \right)}}{{\sqrt {{1 - \rho }} }}}} \\ = \frac{1}{{2n}}\left( {\frac{{ELG{D^2} + VLGD}}{{ELGD}}\left[ {\frac{{\Phi (z)}}{{\varphi (z)}}\frac{{{\Phi^{ - 1}}\left( \alpha \right)\left( {1 - 2\rho } \right) - {\Phi^{ - 1}}(PD)\sqrt {\rho } }}{{\sqrt {\rho } \sqrt {{1 - \rho }} }} - 1} \right]} \right. \\ {\left. { - ELGD \cdot \Phi (z)\left[ {\frac{{\Phi (z)}}{{\varphi (z)}}\frac{{{\Phi^{ - 1}}\left( \alpha \right)\left( {1 - 2\rho } \right) - {\Phi^{ - 1}}(PD)\sqrt {\rho } }}{{\sqrt {\rho } \sqrt {{1 - \rho }} }} - 2} \right]} \right)_{z = \frac{{{\Phi^{ - 1}}(PD) + \sqrt {\rho } {\Phi^{ - 1}}\left( \alpha \right)}}{{\sqrt {{1 - \rho }} }}}}. \\ \end{array} $$
(4.159)
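
For a homogeneous portfolio, (4.159) can be implemented in a few lines. The following Python sketch evaluates the second representation of (4.159); the function name and the parameter values in the example call are purely illustrative.

```python
import numpy as np
from scipy.stats import norm

def granularity_adjustment_homogeneous(n, pd, rho, elgd, vlgd, alpha=0.999):
    """First-order granularity adjustment (4.159) for n equally weighted
    credits with identical PD, asset correlation rho, ELGD and VLGD."""
    z = (norm.ppf(pd) + np.sqrt(rho) * norm.ppf(alpha)) / np.sqrt(1.0 - rho)
    big_phi, small_phi = norm.cdf(z), norm.pdf(z)
    # common bracket: sqrt(1-rho)*Phi^{-1}(alpha)*Phi(z)/(sqrt(rho)*phi(z)) - Phi(z)*z/phi(z)
    c = (np.sqrt(1.0 - rho) * norm.ppf(alpha) * big_phi / (np.sqrt(rho) * small_phi)
         - big_phi * z / small_phi)
    term1 = (elgd**2 + vlgd) / elgd * (c - 1.0)
    term2 = elgd * big_phi * (c - 2.0)
    return (term1 - term2) / (2.0 * n)

# illustrative parameters, not calibrated to any particular portfolio
print(granularity_adjustment_homogeneous(n=100, pd=0.01, rho=0.12,
                                          elgd=0.45, vlgd=0.0))
```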

4.1.6 Arbitrary Derivatives of VaR

The following determination of all derivatives of VaR is based on Wilde (2003). The quantile q α of \( \tilde{L} = \tilde{Y} + \lambda \tilde{Z} \) can be written as \( q(\lambda ) \) to denote that the quantile depends on the parameter λ. Using this notation, the quantile can be defined implicitly as an argument of the distribution function F by \( F(q(\lambda ),\lambda ): = b{P}\left( {\tilde{Y} + \lambda \tilde{Z} \leq {q_\alpha }(\tilde{Y} + \lambda \tilde{Z})} \right) = \alpha \). In order to calculate the derivatives of q α , first all derivatives of F are determined in Sect. 4.5.6.2.1. As the quantile is defined implicitly, the implicit derivatives of \( F(q(\lambda ),\lambda ) - \alpha = 0 \) have to be determined. This is done by application of the residue theorem in Sect. 4.5.6.2.2. As a next step, the result is expressed in combinatorial form in Sect. 4.5.6.2.3. Using the results for the derivatives of the distribution function and the implicit derivatives, it is possible to determine all derivatives of VaR. This is performed in Sect. 4.5.6.2.4. As the resulting formula is quite complex, an explicit expression for the first five derivatives of VaR is determined in Sect. 4.5.7. The mathematical basics of the Laplace transform, complex residues, and partitions, which are needed within the derivation, are presented in the following Sect. 4.5.6.1.

4.1.6.1 Mathematical Basics

4.1.6.1.1 Laplace Transform and Dirac’s Delta Function

The Laplace transform \( L \) of a function \( f(t) \) with \( t \in {b{R}^{+} } \) is given asFootnote 78

$$ \left[ {L\left\{ {f(t)} \right\}} \right](s): = \int\limits_{t = - 0}^\infty {f(t){e^{ - st}}dt} = :\Theta (s) $$
(4.160)

with \( s = c + i\omega \,\, \in \,\,b{C} \), where \( b{C} \) denotes the set of all complex numbers. The inverse Laplace transform \( {L^{ - 1}} \) can be represented asFootnote 79

$$ \left[ {{L^{ - 1}}\left\{ {\Theta (s)} \right\}} \right](t): = \frac{1}{{2\pi i}}\int\limits_{s = c - i\infty }^{c + i\infty } {\Theta (s){{\text{e}}^{st}}ds} = {L^{ - 1}}\left\{ {L\left\{ {f(t)} \right\}} \right\} = f(t). $$
(4.161)

Dirac’s delta function \( \delta (x) \) can be defined asFootnote 80

$$ \int\limits_{ - \infty }^\infty {\delta \left( {x - {x_0}} \right)f(x)dx} = f\left( {{x_0}} \right). $$
(4.162)

A more illustrative, heuristic definition of \( \delta (x) \) is given by

$$ \delta (x) = \left\{ {\begin{array}{ll} 0 & {{\text{if }}x \ne 0,} \\ \infty & {{\text{if }}x = 0,} \\ \end{array} } \right.\quad {\text{and}}\quad \int\limits_{ - \infty }^\infty {\delta (x)dx} = 1. $$
(4.163)

Using the definition of the Laplace transform and the inverse Laplace transform, Dirac’s delta function can be written as

$$ \begin{array} {c} \delta (t) = {L^{ - 1}}\left\{ {L\left\{ {\delta (t)} \right\}} \right\} = {L^{ - 1}}\left\{ {\int\limits_{t = - 0}^\infty {\delta (t){{\text{e}}^{ - st}}dt} } \right\} \\ = {L^{ - 1}}\left\{ {{{\text{e}}^{ - s \cdot 0}}} \right\} = {L^{ - 1}}\left\{ 1 \right\} = \frac{1}{{2\pi i}}\int\limits_{s = c - i\infty }^{c + i\infty } {1 \cdot {{\text{e}}^{st}}} ds. \\ \end{array} $$
(4.164)
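
A minimal sympy sketch of the transform pair (4.160) and (4.161), here for the illustrative choice \( f(t) = {e^{ - at}} \) with \( a > 0 \):

```python
import sympy as sp

t, s, a = sp.symbols('t s a', positive=True)

# Laplace transform of f(t) = exp(-a*t), cf. definition (4.160)
F = sp.laplace_transform(sp.exp(-a * t), t, s, noconds=True)
print(F)                                      # 1/(a + s)

# the inverse transform (4.161) recovers f(t) (times the unit step)
print(sp.inverse_laplace_transform(F, s, t))  # exp(-a*t)*Heaviside(t)
```
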
4.1.6.1.2 Laurent Series, Singularities, and Complex Residues

If \( f(z) \) is differentiable in all points of an open subset of the complex plane \( H \subset b{C} \), then we call \( f(z) \) holomorphic on H.Footnote 81 For a function \( f(z) \), which is holomorphic in a simply connected region H, according to the Cauchy integral theorem we haveFootnote 82

$$ \oint\limits_C {f(z)dz = 0} $$
(4.165)

with C being a closed path in H. If a function \( f(z) \) is holomorphic in z 0 and in a circular region around z 0, we can perform a Taylor series expansion, analogous to the real case:Footnote 83

$$ f(z) = \sum\limits_{n = 0}^\infty {\frac{{{f^{(n)}}({z_0})}}{{n!}}} {(z - {z_0})^n} $$
(4.166)

However, if a function \( f(z) \) is only holomorphic inside the annulus between two concentric circles with center z 0 and radii r 1 and r 2, which is the region \( H = \left\{ {z|0 \leq {r_1} < \left| {z - {z_0}} \right| < {r_2}} \right\} \), the function \( f(z) \) can be expressed as a generalized power series, the so-called Laurent series:Footnote 84

$$ f(z) = \sum\limits_{n = - \infty }^\infty {{a_n}} {(z - {z_0})^n} = \underbrace {\sum\limits_{n = - \infty }^{ - 1} {{a_n}} {{(z - {z_0})}^n}}_{\text{principal part}} + \underbrace {\sum\limits_{n = 0}^\infty {{a_n}} {{(z - {z_0})}^n}}_{\text{analytic part}} $$
(4.167)

Thus, the function only has to be holomorphic inside the annulus; it need not be holomorphic inside the inner circle or outside the outer circle.
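
As a small illustration (with sympy and an arbitrarily chosen function), the Laurent expansion of \( f(z) = {{1} \left/ {{\left( {z(1 - z)} \right)}} \right.} \) around \( {z_0} = 0 \) has the principal part \( {{1} \left/ {z} \right.} \):

```python
import sympy as sp

z = sp.symbols('z')
# Laurent expansion of 1/(z*(1 - z)) around z0 = 0:
# principal part 1/z, analytic part 1 + z + z**2 + ...
print(sp.series(1 / (z * (1 - z)), z, 0, 3))   # 1/z + 1 + z + z**2 + O(z**3)
```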

If a function \( f(z) \) is holomorphic in a neighborhood of z 0 but not in the point z 0 itself, then z 0 is called an isolated singularity of the function \( f(z) \). The concrete type of a singularity can be classified according to the principal part of the Laurent series:Footnote 85

  • The point z 0 is a removable singularity if \( {a_n} = 0\,\,\forall n < 0 \). In this case, the Laurent series is identical to the Taylor series above.

  • The point z 0 is a pole of order m if the principal part consists of a finite number of terms, i.e. \( {a_{ - m}} \ne 0 \) and \( {a_n} = 0 \) for all \( n < - m \).

  • The point z 0 is an essential singularity if the principal part consists of an infinite number of terms.

The coefficient \( {a_{ - 1}} \) of the Laurent series (4.167) around an isolated singularity z 0 is the residue of \( f(z) \) in z 0. This will subsequently be denoted by \( {\text{Re}}{{\text{s}}_{{z_0}}}(f) \). The residue can also be defined as

$$ {a_{ - 1}} = {\text{Re}}{{\text{s}}_{{z_0}}}(f) = \frac{1}{{2\pi i}} \cdot \oint\limits_C {f(z)} \,dz $$
(4.168)

where C is a contour with winding number 1 in a holomorphic region H around an isolated singularity in z 0. If the contour C encloses a finite number of isolated singularities \( {z_1},{z_2},...,{z_m} \) with corresponding residues \( {a_{ - 1}}({z_\mu }) \) \( (\mu = 1,...,m) \), we have

$$ \oint\limits_C {f(z)dz} = 2\pi i\sum\limits_{\mu = 1}^m {{a_{ - 1}}({z_\mu })} $$
(4.169)

which is the residue theorem.Footnote 86

The residue \( {\text{Re}}{{\text{s}}_{{z_0}}}(f) \) with z 0 being a pole of order m can be calculated asFootnote 87

$$ {\text{Re}}{{\text{s}}_{{z_0}}}(f) = \mathop {{\lim }}\limits_{z \to {z_0}} \frac{1}{{(m - 1)!}}\frac{{{d^{m - 1}}}}{{d{z^{m - 1}}}}\left[ {{{\left( {z - {z_0}} \right)}^m} \cdot f(z)} \right]. $$
(4.170)

For a function \( f = {{{g(z)}} \left/ {{h(z)}} \right.} \), where h has a simple zero in z 0, the residue can be determined with

$$ {\text{Re}}{{\text{s}}_{{z_0}}}(f) = \frac{{g({z_0})}}{{h'({z_0})}}. $$
(4.171)
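
Both residue formulas can be illustrated with sympy's residue function; the two example functions below are arbitrary choices:

```python
import sympy as sp

z = sp.symbols('z')

# simple zero of h at z0 = 1: Res = g(z0)/h'(z0), cf. (4.171)
g, h = sp.exp(z), z**2 - 1
print(sp.residue(g / h, z, 1))           # E/2
print((g / sp.diff(h, z)).subs(z, 1))    # E/2, the same value

# pole of order 2 at z0 = 0, cf. (4.170) with m = 2
print(sp.residue(sp.cos(z) / z**2, z, 0))  # 0, the coefficient a_{-1} of the Laurent series
```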
4.1.6.1.3 Partitions

A partition p of a positive integer m is a way to express m as a sum of positive integers in non-decreasing order. A partition p of m will be denoted by \( p \prec m \). A partition p can be indicated by \( p = {1^{{e_1}}},{2^{{e_2}}},...,{m^{{e_m}}} \), where \( {e_i} \) is the frequency of the number i in the partition. The number of summands of p is expressed by \( \left| p \right| \), which is the sum \( \left| p \right| = {e_1} + {e_2} + ... + {e_m} \). The notation \( \hat{p} \) indicates the partition which results if each summand of a partition p is increased by 1. This means that for \( p \prec m \) the partition \( \hat{p} \) refers to a specific partition of \( m + \left| p \right| \).Footnote 88 The notation is illustrated in the following example and in the short sketch thereafter.

Example

  • For \( m = 5 \), there exist seven partitions \( p \prec m \): \( p \prec m = \left\{ {1 + 1 + 1 + 1 + 1,\;1 + 1 + 1 + 2,\;1 + 2 + 2,\;1 + 1 + 3,\;2 + 3,\;1 + 4,\;5} \right\}. \) Thus, a concrete partition for \( m = 5 \) is \( p = 3 + 1 + 1 \).

  • This partition can also be denoted by \( p = {1^{{e_1}}}\,{2^{{e_2}}}\,...\,{m^{{e_m}}} = {1^2}{3^1} \), leading to \( {e_1} = 2, \) \( {e_2} = 0, \) \( {e_3} = 1, \) \( {e_4} = 0 \), and \( {e_5} = 0 \). Thus, the number m results from: \( m = 1 \cdot {e_1} + 2 \cdot {e_2} + ... + m \cdot {e_m} = 1 \cdot 2 + 3 \cdot 1 = 5 \).

  • The number of summands of this partition is \( \left| {p = {1^2}{3^1}} \right| = {e_1} + {e_2} + ... + {e_m} = 2 + 1 = 3 \).

  • The partition \( \hat{p} \) associated with the partition \( p = 3 + 1 + 1 \) is \( \hat{p} = 4 + 2 + 2 \), which is a specific partition of \( m + \left| p \right| = 5 + 3 = 8 \).
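
The following short Python sketch uses sympy's partition generator to list all partitions of \( m = 5 \) in the exponent notation, together with \( \left| p \right| \) and \( \hat{p} \); the variable names are illustrative.

```python
from sympy.utilities.iterables import partitions

# each partition is returned as a dict {summand i: frequency e_i}
for p in partitions(5):
    size = sum(p.values())                    # |p| = e_1 + e_2 + ... + e_m
    p_hat = {i + 1: e for i, e in p.items()}  # every summand increased by 1 -> partition of m + |p|
    print(dict(p), size, p_hat)
```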

4.1.6.2 Determination of the Derivatives

4.1.6.2.1 Derivatives of the Distribution Function

Proposition

The derivatives of the distribution function of losses \( {F_{Y + \lambda Z}}(y) = b{P}(\tilde{Y} + \lambda \tilde{Z} \leq y) \) at \( \lambda = 0 \) are given asFootnote 89

$$ {\left. {\frac{{{\partial^m}}}{{\partial {\lambda^m}}}{F_{Y + \lambda Z}}(y)} \right|_{\lambda = 0}} = {( - 1)^m}\frac{{{d^{m - 1}}}}{{d{y^{m - 1}}}}\left( {b{E}\left( {{{\tilde{Z}}^m}|\tilde{Y} = y} \right){f_Y}(y)} \right). $$
(4.172)

Proof

Using the definition of the Laplace transform (4.160) and recognizing that the loss \( \tilde{L} = \tilde{Y} + \lambda \tilde{Z} \) cannot go below zero so that the probability density function is \( {f_{Y + \lambda Z}}(y) = 0 \) for all \( y < 0 \), we get for the Laplace transform of \( {f_{Y + \lambda Z}}(y) \)

$$ L\left\{ {{f_{Y + \lambda Z}}(y)} \right\} = \int\limits_{y = - 0}^\infty {{{\text{e}}^{ - sy}}{f_{Y + \lambda Z}}(y)dy} = \int\limits_{y = - \infty }^\infty {{{\text{e}}^{ - sy}}{f_{Y + \lambda Z}}(y)dy} . $$
(4.173)

With the definition of the expectation operator

$$ b{E}\left( {g\left( {\tilde{X}} \right)} \right) = \int\limits_{x = - \infty }^\infty {g(x){f_X}(x)dx} . $$
(4.174)

(4.173) is equivalent to

$$ L\left\{ {{f_{Y + \lambda Z}}(y)} \right\} = \int\limits_{y = - \infty }^\infty {{{\text{e}}^{ - sy}}{f_{Y + \lambda Z}}(y)dy} = b{E}\left( {{{\text{e}}^{ - s\left( {\tilde{Y} + \lambda \tilde{Z}} \right)}}} \right) $$
(4.175)

Applying the definition of the inverse Laplace transform (4.161) and using the moment generating function M of \( \tilde{Y} + \lambda \tilde{Z} \), which is defined asFootnote 90

$$ {M_{Y + \lambda Z}}(s) = b{E}\left( {{{\text{e}}^{s\left( {\tilde{Y} + \lambda \tilde{Z}} \right)}}} \right) $$
(4.176)

the probability density function equalsFootnote 91

$$ {f_{Y + \lambda Z}}(y) = {L^{ - 1}}\left\{ {L\left\{ {{f_{Y + \lambda Z}}(y)} \right\}} \right\} = {L^{ - 1}}\left\{ {{M_{Y + \lambda Z}}\left( { - s} \right)} \right\} = \frac{1}{{2\pi i}}\int\limits_{s = c - i\infty }^{c + i\infty } {{M_{Y + \lambda Z}}(s){{\text{e}}^{ - sy}}ds} . $$
(4.177)

Thus, the derivatives of the probability density function at \( \lambda = 0 \) can be determined using the approach

$$ {\left. {\frac{{{\partial^m}}}{{\partial {\lambda^m}}}{f_{Y + \lambda Z}}(y)} \right|_{\lambda = 0}} = \frac{1}{{2\pi i}}{\left. {\int\limits_{s = c - i\infty }^{c + i\infty } {\frac{{{\partial^m}}}{{\partial {\lambda^m}}}{M_{Y + \lambda Z}}(s){{\text{e}}^{ - sy}}} ds} \right|_{\lambda = 0}}. $$
(4.178)

Applying definition (4.176), we obtain for the derivatives of M

$$ \begin{array} {c} {\left. {\frac{{{\partial^m}{M_{Y + \lambda Z}}(s)}}{{\partial {\lambda^m}}}} \right|_{\lambda = 0}} = {\left. {\frac{{{\partial^m}}}{{\partial {\lambda^m}}}b{E}\left( {{{\text{e}}^{s\left( {\tilde{Y} + \lambda \tilde{Z}} \right)}}} \right)} \right|_{\lambda = 0}} \\ = {\left. {b{E}\left( {\frac{{{\partial^m}}}{{\partial {\lambda^m}}}{{\text{e}}^{s\left( {\tilde{Y} + \lambda \tilde{Z}} \right)}}} \right)} \right|_{\lambda = 0}} \\ = {\left. {b{E}\left( {{s^m}{{\tilde{Z}}^m}{{\text{e}}^{s\left( {\tilde{Y} + \lambda \tilde{Z}} \right)}}} \right)} \right|_{\lambda = 0}} \\ = b{E}\left( {{s^m}{{\tilde{Z}}^m}{{\text{e}}^{s\tilde{Y}}}} \right). \\ \end{array} $$
(4.179)

With (4.179) and \( {s^m}{{\text{e}}^{s\left( {\tilde{Y} - y} \right)}} = {( - 1)^m}\frac{{{\partial^m}}}{{\partial {y^m}}}{{\text{e}}^{s\left( {\tilde{Y} - y} \right)}} \), (4.178) is equivalent to

$$ \begin{array} {c} {\left. {\frac{{{\partial^m}}}{{\partial {\lambda^m}}}{f_{Y + \lambda Z}}(y)} \right|_{\lambda = 0}} = \frac{1}{{2\pi i}}\int\limits_{s = c - i\infty }^{c + i\infty } {b{E}\left( {{s^m}{{\tilde{Z}}^m}{{\text{e}}^{s\tilde{Y}}}} \right){e^{ - sy}}} ds \\ = b{E}\left( {\frac{1}{{2\pi i}}{{\tilde{Z}}^m}\int\limits_{s = c - i\infty }^{c + i\infty } {{s^m}{{\text{e}}^{s\left( {\tilde{Y} - y} \right)}}} ds} \right) \\ = {( - 1)^m}\frac{{{d^m}}}{{d{y^m}}}b{E}\left( {{{\tilde{Z}}^m}\frac{1}{{2\pi i}}\int\limits_{s = c - i\infty }^{c + i\infty } {{{\text{e}}^{s\left( {\tilde{Y} - y} \right)}}} ds} \right). \\ \end{array} $$
(4.180)

According to (4.164), Dirac’s delta function can be written as

$$ \delta (t) = \frac{1}{{2\pi i}}\int\limits_{s = c - i\infty }^{c + i\infty } {1 \cdot {{\text{e}}^{st}}} ds $$
(4.181)

which leads to

$$ \delta \left( {\tilde{Y} - y} \right) = \frac{1}{{2\pi i}}\int\limits_{s = c - i\infty }^{c + i\infty } {1 \cdot {{\text{e}}^{s\left( {\tilde{Y} - y} \right)}}} ds $$
(4.182)

for \( t = \tilde{Y} - y \). Hence, (4.180) is equivalent to

$$ {\left. {\frac{{{\partial^m}}}{{\partial {\lambda^m}}}{f_{Y + \lambda Z}}(y)} \right|_{\lambda = 0}} = {( - 1)^m}\frac{{{d^m}}}{{d{y^m}}}b{E}\left( {{{\tilde{Z}}^m}\delta \left( {\tilde{Y} - y} \right)} \right). $$
(4.183)

With \( b{E}[{\tilde{Z}^m}\delta (\tilde{Y} - y)] = b{E}[{\tilde{Z}^m}|\tilde{Y} = y] \cdot {f_Y}(y) \), the derivatives of the distribution function result after integration of (4.183):

$$ {\left. {\frac{{{\partial^m}}}{{\partial {\lambda^m}}}{F_{Y + \lambda Z}}(y)} \right|_{\lambda = 0}} = {( - 1)^m}\frac{{{d^{m - 1}}}}{{d{y^{m - 1}}}}\left( {b{E}\left( {{{\tilde{Z}}^m}|\tilde{Y} = y} \right){f_Y}(y)} \right), $$
(4.184)

which is proposition (4.172). In order to determine the derivatives of the quantile \( {{{{d^m}q}} \left/ {{d{\lambda^m}}} \right.} \), the implicit derivatives of \( F(q(\lambda ),\lambda ) - \alpha = 0 \) with \( F(q(\lambda ),\lambda ): = {F_{\tilde{Y} + \lambda \tilde{Z}}}({q_\alpha }(\tilde{Y} + \lambda \tilde{Z})) \) \( = b{P}\left( {\tilde{Y} + \lambda \tilde{Z} \leq {q_\alpha }(\tilde{Y} + \lambda \tilde{Z})} \right) \) will be calculated in the following.
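
As a quick plausibility check of (4.172), take \( \tilde{Y} \) and \( \tilde{Z} \) as independent standard normal variables, so that \( {F_{Y + \lambda Z}}(y) = \Phi \left( {{{y} \left/ {{\sqrt {{1 + {\lambda^2}}} }} \right.}} \right) \), \( b{E}(\tilde{Z}|\tilde{Y} = y) = 0 \), and \( b{E}({\tilde{Z}^2}|\tilde{Y} = y) = 1 \). The following sympy sketch confirms the proposition for \( m = 2 \):

```python
import sympy as sp

y, lam = sp.symbols('y lam')
phi = sp.exp(-y**2 / 2) / sp.sqrt(2 * sp.pi)   # standard normal density f_Y(y)

# Y, Z independent standard normal: F_{Y + lam*Z}(y) = Phi(y / sqrt(1 + lam^2))
F = (1 + sp.erf(y / sp.sqrt(1 + lam**2) / sp.sqrt(2))) / 2

lhs = sp.diff(F, lam, 2).subs(lam, 0)   # second derivative with respect to lambda at 0
rhs = sp.diff(1 * phi, y)               # (-1)^2 * d/dy( E(Z^2|Y=y) * f_Y(y) ) with E(Z^2|Y=y) = 1
print(sp.simplify(lhs - rhs))           # 0
```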

4.1.6.2.2 Implicit Derivatives: Complex Residue Form

Consider a function \( G(z,w) \) of two variables \( z,w \in b{C} \). Suppose there exists an analytic function \( w = w(z) \) in a region around a point \( z = {z_0} \) with \( w({z_0}) = {w_0} \), such that \( G(z,w(z)) = 0 \). The first derivative \( {{{dw}} \left/ {{dz}} \right.} \) can be determined as follows:Footnote 92

$$ \begin{array} {c} \\ 0 = \frac{{\partial G}}{{\partial z}} + \frac{{\partial G}}{{\partial w}} \cdot \frac{{dw}}{{dz}} \\ \Leftrightarrow \frac{{dw}}{{dz}} = - \frac{{\partial G/\partial z}}{{\partial G/\partial w}} = : - \frac{{{G_z}}}{{{G_w}}}. \\ \end{array} $$
(4.185)

Proposition

For \( {G_w}({z_0},{w_0}) \ne 0 \), the derivatives \( {{{{d^m}w}} \left/ {{d{z^m}}} \right.} \) are given as

$$ \frac{{{d^m}w}}{{d{z^m}}} = - {\text{Re}}{{\text{s}}_{{w_0}}}\left[ {{{\left. {\frac{{{\partial^{m - 1}}}}{{\partial {z^{m - 1}}}}\left( {\frac{{{G_z}(z,w)}}{{G(z,w)}}} \right)} \right|}_{z = {z_0}}}} \right]. $$
(4.186)

Proof

According to (4.186), the first derivative is

$$ \frac{{dw}}{{dz}} = - {\text{Re}}{{\text{s}}_{{w_0}}}\left[ {{{\left. {\left( {\frac{{{G_z}(z,w)}}{{G(z,w)}}} \right)} \right|}_{z = {z_0}}}} \right] = - {\text{Re}}{{\text{s}}_{{w_0}}}\left[ {\frac{{{G_z}({z_0},w)}}{{G({z_0},w)}}} \right]. $$
(4.187)

As \( {w_0} \) is a simple zero of \( G({z_0},w) \), i.e. \( G({z_0},{w_0}) = 0 \) and \( {G_w}({z_0},{w_0}) \ne 0 \), an application of (4.171) leads to

$$ \frac{{dw}}{{dz}} = - {\text{Re}}{{\text{s}}_{{w_0}}}\left[ {\frac{{{G_z}({z_0},w)}}{{G({z_0},w)}}} \right] = - \frac{{{G_z}}}{{{G_w}}}. $$
(4.188)

which is equal to (4.185). This shows that the formula is correct for \( m = 1 \).

Applying the residue theorem (4.169)

$$ \sum\limits_{\mu = 1}^m {{a_{ - 1}}({z_\mu })} = \frac{1}{{2\pi i}}\oint\limits_C {f(z)dz} $$
(4.189)

and recognizing that the only singularity of the integrand enclosed by C lies at \( w = {w_0} \) leads to

$$ \frac{{dw}}{{dz}} = - {\text{Re}}{{\text{s}}_{{w_0}}}\left[ {{{\left. {\frac{{{G_z}(z,w)}}{{G(z,w)}}} \right|}_{z = {z_0}}}} \right] = - \frac{1}{{2\pi i}}\oint\limits_C {{{\left. {\frac{{{G_z}(z,w)}}{{G(z,w)}}} \right|}_{z = {z_0}}}} dw. $$
(4.190)

Differentiating and applying the residue theorem again results in

$$ \begin{array} {c} \frac{{{d^m}w}}{{d{z^m}}} = \frac{{{\partial^{m - 1}}}}{{\partial {z^{m - 1}}}}\left( { - \frac{1}{{2\pi i}}\oint\limits_C {{{\left. {\frac{{{G_z}(z,w)}}{{G(z,w)}}} \right|}_{z = {z_0}}}} dw} \right) \\ = - \frac{1}{{2\pi i}}\oint\limits_C {\frac{{{\partial^{m - 1}}}}{{\partial {z^{m - 1}}}}{{\left. {\frac{{{G_z}(z,w)}}{{G(z,w)}}} \right|}_{z = {z_0}}}} dw \\ = - {\text{Re}}{{\text{s}}_{{w_0}}}\left[ {\frac{{{\partial^{m - 1}}}}{{\partial {z^{m - 1}}}}{{\left. {\frac{{{G_z}(z,w)}}{{G(z,w)}}} \right|}_{z = {z_0}}}} \right], \\ \end{array} $$
(4.191)

which is the proposition presented in (4.186). This result is a generalization of the Lagrange inversion theorem.Footnote 93

4.1.6.2.3 Implicit Derivatives: Combinatorial Form

In order to express the implicit derivatives (4.191) in combinatorial form, Faà di Bruno’s formula will be used. According to this formula, the following equation holds for a function \( g = g(y) \) with \( y = y(x) \):Footnote 94

$$ \frac{{{d^m}g}}{{d{x^m}}} = \sum\limits_{p \prec m} {{\alpha_p}\frac{{{d^{\left| p \right|}}g}}{{d{y^{\left| p \right|}}}}} \frac{{{d^p}y}}{{d{x^p}}} $$
(4.192)

with \( {\alpha_p} = \frac{{m!}}{{{{(1!)}^{{e_1}}} \cdot {e_1}!\, \cdot ... \cdot {{(m!)}^{{e_m}}} \cdot {e_m}!}} \), \( \frac{{{d^{\left| p \right|}}g}}{{d{y^{\left| p \right|}}}} \) as ordinary \( {\left| p \right|^{\text{th}}} \) derivative, and

$$ \frac{{{d^p}y}}{{d{x^p}}}: = {\left( {\frac{{dy}}{{dx}}} \right)^{{e_{p1}}}} \cdot {\left( {\frac{{{d^2}y}}{{d{x^2}}}} \right)^{{e_{p2}}}} \cdot ... \cdot {\left( {\frac{{{d^m}y}}{{d{x^m}}}} \right)^{{e_{pm}}}} = {\prod\limits_{i = 1}^m {\left( {\frac{{{d^i}y}}{{d{x^i}}}} \right)}^{{e_{pi}}}}. $$
(4.193)
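
For the concrete, arbitrarily chosen functions \( g = \exp \) and \( y = \sin \), Faà di Bruno's formula for \( m = 3 \) reads \( {{{{d^3}g}} \left/ {{d{x^3}}} \right.} = g'''{(y')^3} + 3g''y'y'' + g'y''' \) (partitions \( {1^3},\;{1^1}{2^1},\;{3^1} \) with \( {\alpha_p} = 1,\;3,\;1 \)), which the following sympy sketch verifies:

```python
import sympy as sp

x, u = sp.symbols('x u')
y = sp.sin(x)   # inner function y(x)
g = sp.exp      # outer function g(.)

lhs = sp.diff(g(y), x, 3)

gp = lambda k: sp.diff(g(u), u, k).subs(u, y)   # g^{(k)} evaluated at y(x)
yp = lambda k: sp.diff(y, x, k)                 # k-th derivative of y(x)

# Faa di Bruno (4.192) for m = 3
rhs = gp(3) * yp(1)**3 + 3 * gp(2) * yp(1) * yp(2) + gp(1) * yp(3)
print(sp.simplify(lhs - rhs))   # 0
```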

Proposition

Equation (4.191) is equivalent to

$$ \frac{{{d^m}w}}{{d{z^m}}} = \sum\limits_{ p \prec m, \\ u \prec s \leq |p| - 1 } {{\alpha_p}{\alpha_{\hat{u}}}\frac{{{{( - 1)}^{\left| p \right| + \left| u \right|}}\left( {\left| p \right| + \left| u \right| - 1} \right)!}}{{\left( {s + \left| u \right|} \right)!\left( {\left| p \right| - 1 - s} \right)!}}{{\left. {{G_w}^{ - \left| p \right| - \left| u \right|}\frac{{{\partial^{\hat{u}}}G}}{{\partial {w^{\hat{u}}}}}\frac{{{\partial^{\left| p \right| - 1 - s}}}}{{\partial {w^{\left| p \right| - 1 - s}}}}\frac{{{\partial^p}G}}{{\partial {z^p}}}} \right|}_{z,w = 0}}} . $$
(4.194)

Proof

For ease of notation, it will be assumed that \( {z_0} = {w_0} = 0 \), so that \( G(0,0) = 0 \). With \( {{{\partial \ln G}} \left/ {{\partial z}} \right.} = {{{{G_z}}} \left/ {G} \right.} \), (4.191) is equivalent to

$$ \frac{{{d^m}w}}{{d{z^m}}} = - {\text{Re}}{{\text{s}}_{{w_0}}}\left[ {{{\left. {\frac{{{\partial^{m - 1}}}}{{\partial {z^{m - 1}}}}\left( {\frac{{{G_z}}}{G}} \right)} \right|}_{z = 0}}} \right] = - {\text{Re}}{{\text{s}}_{{w_0}}}\left[ {{{\left. {\frac{{{\partial^m}}}{{\partial {z^m}}}\ln G} \right|}_{z = 0}}} \right]. $$
(4.195)

The mth derivative of \( \ln G \) can be calculated using Faà di Bruno’s formula:

$$ \begin{array} {c} \frac{{{\partial^m}}}{{\partial {z^m}}}\ln G = \sum\limits_{p \prec m} {{\alpha_p}\frac{{{d^{\left| p \right|}}\ln G}}{{d{G^{\left| p \right|}}}}} \frac{{{\partial^p}G}}{{\partial {z^p}}} = \sum\limits_{p \prec m} {{\alpha_p}\frac{{{d^{\left| p \right| - 1}}}}{{d{G^{\left| p \right| - 1}}}}\left( {\frac{1}{G}} \right)} \frac{{{\partial^p}G}}{{\partial {z^p}}} \\ = \sum\limits_{p \prec m} {{\alpha_p} \cdot {{( - 1)}^{\left| p \right| - 1}} \cdot \left( {\left| p \right| - 1} \right)! \cdot {G^{ - \left| p \right|}} \cdot {G_{z,p}}}, \\ \end{array} $$
(4.196)

with \( {{{{\partial^p}G}} \left/ {{\partial {z^p}}} \right.} = :{G_{z,p}} \). This leads to

$$ \frac{{{d^m}w}}{{d{z^m}}} = - {\text{Re}}{{\text{s}}_{{w_0}}}\left[ {{{\left. {\frac{{{\partial^m}}}{{\partial {z^m}}}\ln G} \right|}_{z = 0}}} \right] = - {\text{Re}}{{\text{s}}_{{w_0}}}\left[ {{{\left. {\sum\limits_{p \prec m} {{\alpha_p} \cdot {{( - 1)}^{\left| p \right| - 1}} \cdot \left( {\left| p \right| - 1} \right)! \cdot {G^{ - \left| p \right|}} \cdot {G_{z,p}}} } \right|}_{z = 0}}} \right]. $$
(4.197)

According to (4.170), the residue of a function h(w) in w 0, with w 0 being a pole of order r, can be calculated as

$$ {\text{Re}}{{\text{s}}_{{w_0}}}\left[ {h(w)} \right] = \mathop {{\lim }}\limits_{w \to {w_0}} \frac{1}{{\left( {r - 1} \right)!}}\frac{{{d^{r - 1}}}}{{d{w^{r - 1}}}}\left( {{{\left( {w - {w_0}} \right)}^r} \cdot h(w)} \right). $$
(4.198)

With \( r = \left| p \right| \), we obtain for the derivative (4.197)

$$ \begin{array} {c} \frac{{{d^m}w}}{{d{z^m}}} = - {\text{Re}}{{\text{s}}_{{w_0}}}\left[ {{{\left. {\sum\limits_{p \prec m} {{\alpha_p} \cdot {{( - 1)}^{\left| p \right| - 1}} \cdot \left( {\left| p \right| - 1} \right)! \cdot {G^{ - \left| p \right|}} \cdot {G_{z,p}}} } \right|}_{z = 0}}} \right] \\ = - \frac{1}{{\left( {\left| p \right| - 1} \right)!}}{\left. {\frac{{{\partial^{\left| p \right| - 1}}}}{{\partial {w^{\left| p \right| - 1}}}}\left[ {{{\left. {{w^{\left| p \right|}} \cdot \sum\limits_{p \prec m} {{\alpha_p} \cdot {{( - 1)}^{\left| p \right| - 1}} \cdot \left( {\left| p \right| - 1} \right)! \cdot {G^{ - \left| p \right|}} \cdot {G_{z,p}}} } \right|}_{z = 0}}} \right]} \right|_{w = 0}} \\ = - {\left. {\sum\limits_{p \prec m} {{\alpha_p} \cdot {{( - 1)}^{\left| p \right| - 1}} \cdot \frac{{{\partial^{\left| p \right| - 1}}}}{{\partial {w^{\left| p \right| - 1}}}}\left( {{{\left. {{{\left( {\frac{G}{w}} \right)}^{ - \left| p \right|}} \cdot {G_{z,p}}} \right|}_{z = 0}}} \right)} } \right|_{w = 0}}. \\ \end{array} $$
(4.199)

Using the Leibniz identity for arbitrary-order derivatives of products of functions, we get:Footnote 95

$$ \begin{array} {c} \frac{{{d^m}w}}{{d{z^m}}} = {\left. { - \sum\limits_{p \prec m} {{\alpha_p} \cdot {{( - 1)}^{\left| p \right| - 1}} \cdot \frac{{{\partial^{\left| p \right| - 1}}}}{{\partial {w^{\left| p \right| - 1}}}}\left( {{{\left. {{{\left( {\frac{G}{w}} \right)}^{ - \left| p \right|}} \cdot {G_{z,p}}} \right|}_{z = 0}}} \right)} } \right|_{w = 0}} \\ = {\left. { - \sum\limits_{p \prec m} {{\alpha_p} \cdot {{( - 1)}^{\left| p \right| - 1}} \cdot \sum\limits_{s = 0}^{\left| p \right| - 1} {\left( {\begin{array} {c}{*{20}{c}} {\left| p \right| - 1} \\ s \\ \end{array} } \right)\frac{{{\partial^s}}}{{\partial {w^s}}}{{\left( {\frac{{G(0,w)}}{w}} \right)}^{ - \left| p \right|}} \cdot \frac{{{\partial^{\left| p \right| - 1 - s}}}}{{\partial {w^{\left| p \right| - 1 - s}}}}\left( {{G_{z,p}}(0,w)} \right)} } } \right|_{w = 0}}. \\ \end{array} $$
(4.200)

As a next step, the derivative \( \frac{{{\partial^s}}}{{\partial {w^s}}}{\left( {\frac{{G(0,w)}}{w}} \right)^{ - \left| p \right|}} \) contained in (4.200) will be calculated. Performing a Taylor series expansion of \( G(0,w) \) at \( w = 0 \), we have

$$ \begin{array} {c} G(0,w) = G(0,0) + \frac{w}{{1!}} \cdot \frac{\partial }{{\partial w}}G(0,0) + \frac{{{w^2}}}{{2!}} \cdot \frac{{{\partial^2}}}{{\partial {w^2}}}G(0,0) + \frac{{{w^3}}}{{3!}} \cdot \frac{{{\partial^3}}}{{\partial {w^3}}}G(0,0) + ... \\ = 0 + w \cdot {G_w}(0,0) + \sum\limits_{r \geq 2} {\frac{{{w^r}}}{{r!}} \cdot \frac{{{\partial^r}}}{{\partial {w^r}}}G(0,0)} \\ = w \cdot {G_w}(0,0) + \sum\limits_{r \geq 1} {\frac{{{w^{r + 1}}}}{{(r + 1)!}} \cdot \frac{{{\partial^{r + 1}}}}{{\partial {w^{r + 1}}}}G(0,0)} \\ = w \cdot {G_w}(0,0) + w \cdot {G_w}(0,0) \cdot \sum\limits_{r \geq 1} {\frac{{{w^r}}}{{(r + 1)!}} \cdot \frac{{{\partial^{r + 1}}}}{{\partial {w^{r + 1}}}}G(0,0) \cdot \frac{1}{{{G_w}(0,0)}}} \\ = w \cdot {G_w}(0,0) \cdot \left( {1 + \sum\limits_{r \geq 1} {\frac{{{w^r}}}{{r!}} \cdot \frac{1}{{r + 1}} \cdot \frac{{\frac{{{\partial^{r + 1}}}}{{\partial {w^{r + 1}}}}G(0,0)}}{{\frac{\partial }{{\partial w}}G(0,0)}}} } \right). \\ \end{array} $$
(4.201)

Thus, for \( {{{G(0,w)}} \left/ {w} \right.} \), we obtain

$$ \begin{array} {c} \frac{{G(0,w)}}{w} = {G_w}(0,0) \cdot \left( {1 + \sum\limits_{r \geq 1} {\frac{{{w^r}}}{{r!}} \cdot \frac{1}{{r + 1}} \cdot \frac{{\frac{{{\partial^{r + 1}}}}{{\partial {w^{r + 1}}}}G(0,0)}}{{\frac{\partial }{{\partial w}}G(0,0)}}} } \right) \\ = {G_w}(0,0) \cdot \left( {1 + \sum\limits_{r \geq 1} {\frac{{{w^r}}}{{r!}} \cdot {\varphi_r}} } \right), \\ \end{array} $$
(4.202)

with \( {\varphi_r} = \frac{1}{{r + 1}} \cdot \frac{{{{{{\partial^{r + 1}}G(0,0)}} \left/ {{\partial {w^{r + 1}}}} \right.}}}{{{{{\partial G(0,0)}} \left/ {{\partial w}} \right.}}} \). Another application of Faà di Bruno's formula results in:Footnote 96

$$ \begin{array} {c} \frac{{{\partial^s}}}{{\partial {w^s}}}{\left( {\frac{{G(0,w)}}{w}} \right)^{ - \left| p \right|}} = G_w^{ - \left| p \right|}(0,0) \cdot \frac{{{\partial^s}}}{{\partial {w^s}}}{\left( {1 + \sum\limits_{r \geq 1} {{\varphi_r} \cdot \frac{{{w^r}}}{{r!}}} } \right)^{ - \left| p \right|}} \\ = G_w^{ - \left| p \right|}(0,0) \cdot \sum\limits_{u \prec s} {{\alpha_u} \cdot {\varphi_u} \cdot {{( - 1)}^{\left| u \right|}} \cdot \frac{{\left( {\left| p \right| + \left| u \right| - 1} \right)!}}{{\left( {\left| p \right| - 1} \right)!}}}, \\ \end{array} $$
(4.203)

withFootnote 97

$$ {\alpha_u} \cdot {\varphi_u} = \frac{{s!}}{{\left( {s + \left| u \right|} \right)!}} \cdot {\alpha_{\hat{u}}} \cdot \frac{{{\partial^{\hat{u}}}}}{{\partial {w^{\hat{u}}}}}G\left( {0,0} \right) \cdot {G_w}^{ - \left| u \right|}\left( {0,0} \right). $$
(4.204)

Applying (4.203) and (4.204) to (4.200) leads to

$$ \begin{array} {c} \frac{{{d^m}w}}{{d{z^m}}} = {\left. { - \sum\limits_{p \prec m} {{\alpha_p} \cdot {{( - 1)}^{\left| p \right| - 1}} \cdot \sum\limits_{s = 0}^{\left| p \right| - 1} {\left( {\begin{array} {c}{*{20}{c}} {\left| p \right| - 1} \\ s \\ \end{array} } \right)\frac{{{\partial^s}}}{{\partial {w^s}}}{{\left( {\frac{{G(0,w)}}{w}} \right)}^{ - \left| p \right|}} \cdot \frac{{{\partial^{\left| p \right| - 1 - s}}}}{{\partial {w^{\left| p \right| - 1 - s}}}}\left( {{G_{z,p}}(0,w)} \right)} } } \right|_{w = 0}} \\ = - \sum\limits_{p \prec m} {{\alpha_p} \cdot {{( - 1)}^{\left| p \right| - 1}} \cdot \sum\limits_{s = 0}^{\left| p \right| - 1} {\left( {\begin{array} {c}{*{20}{c}} {\left| p \right| - 1} \\ s \\ \end{array} } \right) \cdot G_w^{ - \left| p \right|}(0,0) \cdot \sum\limits_{u \prec s} {\frac{{s!}}{{\left( {s + \left| u \right|} \right)!}} \cdot {\alpha_{\hat{u}}} \cdot \frac{{{\partial^{\hat{u}}}}}{{\partial {w^{\hat{u}}}}}G\left( {0,0} \right)} } } \\ \cdot {G_w}^{ - \left| u \right|}\left( {0,0} \right) \cdot {( - 1)^{\left| u \right|}} \cdot \frac{{\left( {\left| p \right| + \left| u \right| - 1} \right)!}}{{\left( {\left| p \right| - 1} \right)!}} \cdot {\left. {\frac{{{\partial^{\left| p \right| - 1 - s}}}}{{\partial {w^{\left| p \right| - 1 - s}}}}\left( {{G_{z,p}}(0,w)} \right)} \right|_{w = 0}} \\ = - \sum\limits_{p \prec m} {{\alpha_p} \cdot {{( - 1)}^{\left| p \right| - 1}} \cdot \sum\limits_{s = 0}^{\left| p \right| - 1} {\left( {\begin{array} {c}{*{20}{c}} {\left| p \right| - 1} \\ s \\ \end{array} } \right) \cdot \sum\limits_{u \prec s} {{\alpha_{\hat{u}}} \cdot {{( - 1)}^{\left| u \right|}} \cdot G_w^{ - \left| p \right| - \left| u \right|}(0,0)} } } \\ \cdot \frac{{s!\, \cdot \left( {\left| p \right| + \left| u \right| - 1} \right)!}}{{\left( {s + \left| u \right|} \right)!\, \cdot \left( {\left| p \right| - 1} \right)!}} \cdot \frac{{{\partial^{\hat{u}}}}}{{\partial {w^{\hat{u}}}}}G\left( {0,0} \right) \cdot {\left. {\frac{{{\partial^{\left| p \right| - 1 - s}}}}{{\partial {w^{\left| p \right| - 1 - s}}}}\left( {{G_{z,p}}(0,w)} \right)} \right|_{w = 0}}. \\ \end{array} $$
(4.205)

Combining the sums, using \( ( - 1) \cdot {( - 1)^{\left| p \right| - 1}} \cdot {( - 1)^{\left| u \right|}} = {( - 1)^{\left| p \right| + \left| u \right|}} \), and

$$ \begin{array} {c} \left( {\begin{array} {c}{*{20}{c}} {\left| p \right| - 1} \\ s \\ \end{array} } \right) \cdot \frac{{s!}}{{\left( {\left| p \right| - 1} \right)!}} \cdot \frac{{\left( {\left| p \right| + \left| u \right| - 1} \right)!}}{{\left( {s + \left| u \right|} \right)!}} = \frac{{\left( {\left| p \right| - 1} \right)!}}{{s! \cdot \left( {\left| p \right| - 1 - s} \right)!}} \cdot \frac{{s!}}{{\left( {\left| p \right| - 1} \right)!}} \cdot \frac{{\left( {\left| p \right| + \left| u \right| - 1} \right)!}}{{\left( {s + \left| u \right|} \right)!}} \\ = \frac{{\left( {\left| p \right| + \left| u \right| - 1} \right)!}}{{\left( {\left| p \right| - 1 - s} \right)!\, \cdot \left( {s + \left| u \right|} \right)!}}, \\ \end{array} $$
(4.206)

(4.205) can be simplified to

$$\begin{array}{lll} \frac{{{d^m}w}}{{d{z^m}}} = & \sum\limits_{ p \prec m, u \prec s \leq |p| - 1 } {{\alpha_p} \cdot {\alpha_{\hat{u}}} \cdot {{( - 1)}^{\left| p \right| + \left| u \right|}} \cdot \frac{{\left( {\left| p \right| + \left| u \right| - 1} \right)!}}{{\left( {\left| p \right| - 1 - s)! \cdot (s + \left| u \right|} \right)!}}}\\ & \cdot G_w^{ - \left| p \right| - \left| u \right|}(0,0) \cdot \frac{{{\partial^{\hat{u}}}}}{{\partial {w^{\hat{u}}}}}G\left( {0,0} \right)\\ & \cdot {\left. {\frac{{{\partial^{\left| p \right| - 1 - s}}}}{{\partial {w^{\left| p \right| - 1 - s}}}}\left( {{G_{z,p}}(0,w)} \right)} \right|_{w = 0}},\end{array}$$
(4.207)

which concludes the proof.

4.1.6.2.4 Completion of the Derivation

Equation (4.207) can now be used to determine the derivatives of a quantile. With \( F(q(\lambda ),\lambda ) - \alpha = 0 = G(w(z),z) \), the derivatives are given as

$$ {\left. {\frac{{{d^m}q}}{{d{\lambda^m}}}} \right|_{\lambda = 0}} = {\left. {\frac{{{d^m}w}}{{d{z^m}}}} \right|_{z = 0}}, $$
(4.208)

where the right-hand side can be determined with (4.207). The derivatives of G contained in (4.207) can be calculated with (4.172):

$$ \begin{array} {c} {\left. {\frac{{{\partial^{r + s}}G}}{{\partial {w^r}\partial {z^s}}}} \right|_{z = 0}} = {\left. {\frac{{{\partial^{r + s}}F}}{{\partial {y^r}\partial {\lambda^s}}}} \right|_{\lambda = 0}} = \frac{{{\partial^r}}}{{\partial {y^r}}}{\left. {\left( {\frac{{{\partial^s}F}}{{\partial {\lambda^s}}}} \right)} \right|_{\lambda = 0}} \\ = \frac{{{d^r}}}{{d{y^r}}}\left( {{{( - 1)}^s}\frac{{{d^{s - 1}}}}{{d{y^{s - 1}}}}\left( {b{E}\left( {{{\tilde{Z}}^s}|\tilde{Y} = y} \right){f_Y}(y)} \right)} \right) \\ = {( - 1)^s}\frac{{{d^{r + s - 1}}}}{{d{y^{r + s - 1}}}}\left( {b{E}\left( {{{\tilde{Z}}^s}|\tilde{Y} = y} \right){f_Y}(y)} \right) \\ = {( - 1)^s}\frac{{{d^{r + s - 1}}}}{{d{y^{r + s - 1}}}}\left( {{\mu_{s,c}}f} \right), \\ \end{array} $$
(4.209)

where we define \( {\mu_{s,c}}: = b{E}({\tilde{Z}^s}|\tilde{Y} = y) \) and \( f: = {f_Y}(y) \) for convenience. Using definition (4.193) for the pth derivative with \( p \prec m \), this leads to

$$ {\left. {\frac{{{\partial^p}G}}{{\partial {z^p}}}} \right|_{z = 0}} = {\left. {{{\prod\limits_{i = 1}^m {\left( {\frac{{{\partial^i}G}}{{\partial {z^i}}}} \right)} }^{{e_{pi}}}}} \right|_{z = 0}} = {\prod\limits_{i = 1}^m {\left( {{{( - 1)}^i}\frac{{{d^{i - 1}}({\mu_{i,c}}f)}}{{d{y^{i - 1}}}}} \right)}^{{e_{pi}}}} = {( - 1)^m}{\prod\limits_{i = 1}^m {\left( {\frac{{{d^{i - 1}}({\mu_{i,c}}f)}}{{d{y^{i - 1}}}}} \right)}^{{e_{pi}}}}. $$
(4.210)

Similarly the \( {\hat{u}^{\text{th}}} \) derivative can be determined with \( u \prec s \). It has to be considered that for each partition u the elements of the corresponding partition \( \hat{u} \) are increased by 1. Thus, the smallest number is 2 and the largest is \( s + 1 \). Hence, we obtain

$$ \frac{{{\partial^{\hat{u}}}G}}{{\partial {w^{\hat{u}}}}} = {\prod\limits_{i = 2}^{s + 1} {\left( {\frac{{{\partial^i}G}}{{\partial {w^i}}}} \right)}^{{e_{\hat{u}i}}}} = {\prod\limits_{i = 2}^{s + 1} {\left( {\frac{{{\partial^i}G}}{{\partial {w^i}}}} \right)}^{{e_{u(i - 1)}}}} = {\prod\limits_{i = 1}^s {\left( {\frac{{{\partial^{i + 1}}G}}{{\partial {w^{i + 1}}}}} \right)}^{{e_{ui}}}} = {\prod\limits_{i = 1}^s {\left( {\frac{{{\partial^{i + 1}}F}}{{\partial {y^{i + 1}}}}} \right)}^{{e_{ui}}}} = {\prod\limits_{i = 1}^s {\left( {\frac{{{d^i}f}}{{d{y^i}}}} \right)}^{{e_{ui}}}}. $$
(4.211)

Furthermore, we have \( {G_w} = {{{dF}} \left/ {{dy}} \right.} = f \) and \( {( - 1)^{\left| p \right| + \left| u \right|}} \cdot {f^{\left| p \right| + \left| u \right|}} = {( - f)^{\left| p \right| + \left| u \right|}} \). Using these formulas, we finally get for (4.207) or (4.208):

$$ \begin{array} {lll} {\left. {\frac{{{d^m}q}}{{d{\lambda^m}}}} \right|_{\lambda = 0}} &= {( - 1)^m}\left[ {\sum\limits_{ p \prec m, u \prec s \leq |p| - 1 } {\frac{{{\alpha_p}{\alpha_{\hat{u}}}\left( {\left| p \right| + \left| u \right| - 1} \right)!}}{{\left( {s + \left| u \right|} \right)!\left( {\left| p \right| - 1 - s} \right)!}}} \cdot {{\left( { - f} \right)}^{ - \left| p \right| - \left| u \right|}}} \right. \cdot \left( {\prod\limits_{i = 1}^s {{{\left[ {\frac{{{d^i}f}}{{d{y^i}}}} \right]}^{{e_{ui}}}}} } \right) \\ &{\left. { \cdot \frac{{{d^{\left| p \right| - 1 - s}}}}{{d{y^{\left| p \right| - 1 - s}}}}\left( {\prod\limits_{i = 1}^m {{{\left[ {\frac{{{d^{i - 1}}\left( {{\mu_{i,c}}f} \right)}}{{d{y^{i - 1}}}}} \right]}^{{e_{pi}}}}} } \right)} \right]_{y = {q_\alpha }\left( {\tilde{Y}} \right)}}, \\ \end{array} $$
(4.212)

which is the formula for arbitrary derivatives of VaR. Written without abbreviations this is

$$ \begin{array} {c} {\left. {\frac{{{d^m}Va{R_\alpha }\left( {\tilde{Y} + \lambda \tilde{Z}} \right)}}{{d{\lambda^m}}}} \right|_{\lambda = 0}} = {( - 1)^m}\left[ {\sum\limits_{p \prec m,\;u \prec s \leq \left| p \right| - 1} {\frac{{{\alpha_p}{\alpha_{\hat{u}}}\left( {\left| p \right| + \left| u \right| - 1} \right)!}}{{\left( {s + \left| u \right|} \right)!\left( {\left| p \right| - 1 - s} \right)!}}} \cdot {{\left( { - {f_Y}(y)} \right)}^{ - \left| p \right| - \left| u \right|}} \cdot \left( {\prod\limits_{i = 1}^s {{{\left[ {\frac{{{d^i}{f_Y}(y)}}{{d{y^i}}}} \right]}^{{e_{ui}}}}} } \right)} \right. \\ {\left. { \cdot \frac{{{d^{\left| p \right| - 1 - s}}}}{{d{y^{\left| p \right| - 1 - s}}}}\left( {\prod\limits_{i = 1}^m {{{\left[ {\frac{{{d^{i - 1}}\left( {b{E}\left( {{{\tilde{Z}}^i}|\tilde{Y} = y} \right){f_Y}(y)} \right)}}{{d{y^{i - 1}}}}} \right]}^{{e_{pi}}}}} } \right)} \right]_{y = {q_\alpha }\left( {\tilde{Y}} \right)}}, \\ \end{array} $$
(4.213)

with \( {\alpha_p} = \frac{{m!}}{{{{(1!)}^{{e_{p1}}}}\,{e_{p1}}!\, \cdot ... \cdot {{(m!)}^{{e_{pm}}}}\,{e_{pm}}!}} \).

4.1.7 Determination of the First Five Derivatives of VaR

The general form of the mth derivative of VaR is given by (4.213). Subsequently, the first five derivatives will be determined with this formula. For each derivative, we have summands for all partitions \( p \prec m \) and \( u \prec s \leq \left| p \right| - 1 \). For the considered cases \( 1 \leq m \leq 5 \), the following partitions \( p \prec m \) exist:

$$ \begin{array} {c} p \prec 1 = \left\{ {{1^1}} \right\}; \\ p \prec 2 = \left\{ {{1^2},\;{2^1}} \right\}; \\ p \prec 3 = \left\{ {{1^3},\;{1^1}{2^1},\;{3^1}} \right\}; \\ p \prec 4 = \left\{ {{1^4},\;{1^2}{2^1},\;{2^2},\;{1^1}{3^1},\;{4^1}} \right\}; \\ p \prec 5 = \left\{ {{1^5},\;{1^3}{2^1},\;{1^1}{2^2},\;{1^2}{3^1},\;{2^1}{3^1},\;{1^1}{4^1},\;{5^1}} \right\}. \\ \end{array} $$
(4.214)

By construction, the expectation of the unsystematic loss is zero:

$$ {\mu_{1,c}}(y) = b{E}\left( {{{\tilde{Z}}^1}|\tilde{Y} = y} \right) = 0, $$
(4.215)

which is called the “granularity adjustment condition”. Consequently, for all partitions with \( {e_{p1}} \ne 0 \), the summands of (4.213) are zero, too:

$$ \prod\limits_{i = 1}^m {{{\left[ {\frac{{{d^{i - 1}}\left( {{\mu_{i,c}}f} \right)}}{{d{y^{i - 1}}}}} \right]}^{{e_{pi}}}}} = {0^{{e_{p1}}}} \cdot \prod\limits_{i = 2}^m {{{\left[ {\frac{{{d^{i - 1}}\left( {{\mu_{i,c}}f} \right)}}{{d{y^{i - 1}}}}} \right]}^{{e_{pi}}}}} = \left\{ {\begin{array}{ll} {\prod\limits_{i = 2}^m {{{\left[ {\frac{{{d^{i - 1}}\left( {{\mu_{i,c}}f} \right)}}{{d{y^{i - 1}}}}} \right]}^{{e_{pi}}}}} } & {{\text{if }}{e_{p1}} = 0,} \\ 0 & {{\text{if }}{e_{p1}} \ne 0.} \\ \end{array} } \right. $$
(4.216)

Hence, the only relevant partitions \( p \prec m \) of (4.214) with non-zero terms and the corresponding numbers \( \left| p \right| \) are given asFootnote 98

$$ \begin{array} {c} p \prec 1 = \left\{ {{1^1}} \right\}\quad {\text{with }}\left| {p = {1^1}} \right| = 1, \\ p \prec 2 = \left\{ {{2^1}} \right\}\quad {\text{with }}\left| {p = {2^1}} \right| = 1, \\ p \prec 3 = \left\{ {{3^1}} \right\}\quad {\text{with }}\left| {p = {3^1}} \right| = 1, \\ p \prec 4 = \left\{ {{4^1},\;{2^2}} \right\}\quad {\text{with }}\left| {p = {4^1}} \right| = 1,\;\left| {p = {2^2}} \right| = 2, \\ p \prec 5 = \left\{ {{5^1},\;{2^1}{3^1}} \right\}\quad {\text{with }}\left| {p = {5^1}} \right| = 1,\;\left| {p = {2^1}{3^1}} \right| = 2. \\ \end{array} $$
(4.217)

For the associated terms

$$ {\alpha_p} = \frac{{m!}}{{{{(1!)}^{{e_{p1}}}}\,{e_{p1}}!\, \cdot ... \cdot {{(m!)}^{{e_{pm}}}}\,{e_{pm}}!}}, $$
(4.218)

we obtain

$$ \begin{array} {c} {\alpha_{{1^1}}} = \frac{{1!}}{{{{(1!)}^1} \cdot 1!}} = 1, \\ {\alpha_{{2^1}}} = \frac{{2!}}{{{{(2!)}^1} \cdot 1!}} = 1, \\ {\alpha_{{3^1}}} = \frac{{3!}}{{{{(3!)}^1} \cdot 1!}} = 1, \\ {\alpha_{{4^1}}} = \frac{{4!}}{{{{(4!)}^1} \cdot 1!}} = 1,\;\;\;\;{\alpha_{{2^2}}} = \frac{{4!}}{{{{(2!)}^2} \cdot 2!}} = \frac{{24}}{8} = 3, \\ {\alpha_{{5^1}}} = \frac{{5!}}{{{{(5!)}^1} \cdot 1!}} = 1,\;\;\;\;{\alpha_{{2^1}{3^1}}} = \frac{{5!}}{{{{(2!)}^1} \cdot 1! \cdot {{(3!)}^1} \cdot 1!}} = \frac{{120}}{{12}} = 10. \\ \end{array} $$
(4.219)
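
These coefficients can be reproduced mechanically; the small Python helper below evaluates (4.218) for a partition given in the exponent notation (the function name is illustrative):

```python
from math import factorial

def alpha(partition, m):
    """Coefficient alpha_p of (4.218); partition is a dict {summand i: frequency e_i}."""
    denom = 1
    for i, e in partition.items():
        denom *= factorial(i) ** e * factorial(e)
    return factorial(m) // denom

print(alpha({4: 1}, 4), alpha({2: 2}, 4))        # 1 3
print(alpha({5: 1}, 5), alpha({2: 1, 3: 1}, 5))  # 1 10
```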

According to (4.217), we only have \( \left| p \right| = 1 \) and \( \left| p \right| = 2 \), leading to the following partitions \( u \prec s \leq \left| p \right| - 1 \):

$$ \begin{array} {c} \left| p \right| = 1: \\ u \prec \left( {s = 0} \right) = \left\{ 0 \right\} \\ \left| p \right| = 2: \\ u \prec \left\{ {s = 0,\;s = 1} \right\} = \left\{ {0,\;{1^1}} \right\}. \\ \end{array} $$
(4.220)

As we have one summand for each \( p \prec m \) and \( u \prec s \leq (|p| - 1) \), we obtain one summand for \( m = 1,\;2,\;3 \) and three summands for \( m = 4,\;5 \):

$$ {\left. {\frac{{{d^m}q}}{{d{\lambda^m}}}} \right|_{\lambda = 0}} = \left\{ {\begin{array}{ll} {(I)} & {{\text{if }}m = 1,\;2,\;3,} \\ {(I) + (II) + (III)} & {{\text{if }}m = 4,\;5,} \\ \end{array} } \right. $$
(4.221)

where the summands are determined with the following variables:

$$ \begin{array} {c} (I)\quad m = 1,\;...,\;5:p = {m^1},\quad \left| p \right| = 1,\;u \prec \left( {s = 0} \right) = \left\{ 0 \right\}, \\ (II)\quad \begin{array} {c}{*{20}{c}} {m = 4:} \\ {m = 5:} \\ \end{array} \left. {\begin{array} {c}{*{20}{c}} {p = {2^2},} \\ {p = {2^1}{3^1},} \\ \end{array} } \right\}\quad \left| p \right| = 2,\;u \prec \left( {s = 0} \right) = \left\{ 0 \right\}, \\ (III)\quad \begin{array} {c}{*{20}{c}} {m = 4:} \\ {m = 5:} \\ \end{array} \left. {\begin{array} {c}{*{20}{c}} {p = {2^2},} \\ {p = {2^1}{3^1},} \\ \end{array} } \right\}\quad \left| p \right| = 2,\;u \prec \left( {s = 1} \right) = \left\{ {{1^1}} \right\}. \\ \end{array} $$
(4.222)

The first summand (I), with \( p = {m^1},\;\left| p \right| = 1,\;s = 0,\;u = 0,\left| u \right| = 0,\;\hat{u} = {1^1} \), \( {e_{pm}} = 1 \), and \( {e_{pi}} = 0 \) for all \( i \ne m \), equals:Footnote 99

$$ \begin{array} {c} (I) = \frac{{{\alpha_p}{\alpha_{\hat{u}}}\left( {\left| p \right| + \left| u \right| - 1} \right)!}}{{\left( {s + \left| u \right|} \right)!\left( {\left| p \right| - 1 - s} \right)!}}{\left( { - f} \right)^{ - \left| p \right| - \left| u \right|}}\left( {\prod\limits_{i = 1}^s {{{\left[ {\frac{{{d^i}f}}{{d{y^i}}}} \right]}^{{e_{ui}}}}} } \right) \cdot \frac{{{d^{\left| p \right| - 1 - s}}}}{{d{y^{\left| p \right| - 1 - s}}}}\left( {\prod\limits_{i = 1}^m {{{\left[ {\frac{{{d^{i - 1}}\left( {{\mu_{i,c}}f} \right)}}{{d{y^{i - 1}}}}} \right]}^{{e_{pi}}}}} } \right) \\ = \frac{{1 \cdot 1 \cdot \left( {1 + 0 - 1} \right)!}}{{\left( {0 + 0} \right)!\left( {1 - 1 - 0} \right)!}}{\left( { - f} \right)^{ - 1 - 0}}\left( {\prod\limits_{i = 1}^0 {{{\left[ {\frac{{{d^i}f}}{{d{y^i}}}} \right]}^{{e_{ui}}}}} } \right) \cdot \frac{{{d^{1 - 1 - 0}}}}{{d{y^{1 - 1 - 0}}}}\left( {\prod\limits_{i = 1}^m {{{\left[ {\frac{{{d^{i - 1}}\left( {{\mu_{i,c}}f} \right)}}{{d{y^{i - 1}}}}} \right]}^{{e_{pi}}}}} } \right) \\ = - \frac{1}{f} \cdot \frac{{{d^{m - 1}}\left( {{\mu_{m,c}}f} \right)}}{{d{y^{m - 1}}}}. \\ \end{array} $$
(4.223)

For \( m = 4 \), the second summand (II.[4]), with values \( p = {2^2},\;\left| p \right| = 2,\;s = 0,\;u = 0,\left| u \right| = 0,\;\hat{u} = {1^1} \), \( {e_{p2}} = 2 \), and \( {e_{pi}} = 0 \) for all \( i \ne 2 \), is equivalent to

$$ \begin{array} {c} II.[4] = \frac{{{\alpha_p}{\alpha_{\hat{u}}}\left( {\left| p \right| + \left| u \right| - 1} \right)!}}{{\left( {s + \left| u \right|} \right)!\left( {\left| p \right| - 1 - s} \right)!}}{\left( { - f} \right)^{ - \left| p \right| - \left| u \right|}}\left( {\prod\limits_{i = 1}^s {{{\left[ {\frac{{{d^i}f}}{{d{y^i}}}} \right]}^{{e_{ui}}}}} } \right) \\ \cdot \frac{{{d^{\left| p \right| - 1 - s}}}}{{d{y^{\left| p \right| - 1 - s}}}}\left( {\prod\limits_{i = 1}^m {{{\left[ {\frac{{{d^{i - 1}}\left( {{\mu_{i,c}}f} \right)}}{{d{y^{i - 1}}}}} \right]}^{{e_{pi}}}}} } \right) \\ = \frac{{3 \cdot 1 \cdot \left( {2 + 0 - 1} \right)!}}{{\left( {0 + 0} \right)!\left( {2 - 1 - 0} \right)!}}{\left( { - f} \right)^{ - 2 - 0}}\left( {\prod\limits_{i = 1}^0 {{{\left[ {\frac{{{d^i}f}}{{d{y^i}}}} \right]}^{{e_{ui}}}}} } \right) \cdot \frac{{{d^{2 - 1 - 0}}}}{{d{y^{2 - 1 - 0}}}}\left( {\prod\limits_{i = 1}^4 {{{\left[ {\frac{{{d^{i - 1}}\left( {{\mu_{i,c}}f} \right)}}{{d{y^{i - 1}}}}} \right]}^{{e_{pi}}}}} } \right) \\ = 3 \cdot \frac{1}{{{f^2}}} \cdot \frac{d}{{dy}}{\left[ {\frac{{d\left( {{\mu_{2,c}}f} \right)}}{{dy}}} \right]^2}. \\ \end{array} $$
(4.224)

For \( m = 5 \), we have \( p = {2^1}{3^1},\;\left| p \right| = 2,\;s = 0,\;u = 0,\left| u \right| = 0,\;\hat{u} = {1^1},\;{e_{p2}} = 1,\;{e_{p3}} = 1, \) and \( {e_{pi}} = 0 \) for all \( i \ne 2,3 \), leading to

$$ \begin{array} {c} II.[5] = \frac{{{\alpha_p}{\alpha_{\hat{u}}}\left( {\left| p \right| + \left| u \right| - 1} \right)!}}{{\left( {s + \left| u \right|} \right)!\left( {\left| p \right| - 1 - s} \right)!}}{\left( { - f} \right)^{ - \left| p \right| - \left| u \right|}}\left( {\prod\limits_{i = 1}^s {{{\left[ {\frac{{{d^i}f}}{{d{y^i}}}} \right]}^{{e_{ui}}}}} } \right) \\ \cdot \frac{{{d^{\left| p \right| - 1 - s}}}}{{d{y^{\left| p \right| - 1 - s}}}}\left( {\prod\limits_{i = 1}^m {{{\left[ {\frac{{{d^{i - 1}}\left( {{\mu_{i,c}}f} \right)}}{{d{y^{i - 1}}}}} \right]}^{{e_{pi}}}}} } \right) \\ = \frac{{10 \cdot 1 \cdot \left( {2 + 0 - 1} \right)!}}{{\left( {0 + 0} \right)!\left( {2 - 1 - 0} \right)!}}{\left( { - f} \right)^{ - 2 - 0}}\left( {\prod\limits_{i = 1}^0 {{{\left[ {\frac{{{d^i}f}}{{d{y^i}}}} \right]}^{{e_{uj}}}}} } \right) \cdot \frac{{{d^{2 - 1 - 0}}}}{{d{y^{2 - 1 - 0}}}}\left( {\prod\limits_{i = 1}^5 {{{\left[ {\frac{{{d^{i - 1}}\left( {{\mu_{i,c}}f} \right)}}{{d{y^{i - 1}}}}} \right]}^{{e_{pi}}}}} } \right) \\ = 10 \cdot \frac{1}{{{f^2}}} \cdot \frac{d}{{dy}}\left( {\left[ {\frac{{d\left( {{\mu_{2,c}}f} \right)}}{{dy}}} \right]\left[ {\frac{{{d^2}\left( {{\mu_{3,c}}f} \right)}}{{d{y^2}}}} \right]} \right). \\ \end{array} $$
(4.225)

The third summand for \( m = 4 \) (III.[4]), with \( p = {2^2},\;\left| p \right| = 2,\;s = 1,\;u = {1^1},\;\left| u \right| = 1,\;\hat{u} = {2^1},\;{e_{p2}} = 2, \) \( {e_{pi}} = 0 \) for all \( i \ne 2 \), and \( {e_{u1}} = 1 \) equals

$$ \begin{array} {c} III.[4] = \frac{{{\alpha_p}{\alpha_{\hat{u}}}\left( {\left| p \right| + \left| u \right| - 1} \right)!}}{{\left( {s + \left| u \right|} \right)!\left( {\left| p \right| - 1 - s} \right)!}}{\left( { - f} \right)^{ - \left| p \right| - \left| u \right|}}\left( {\prod\limits_{i = 1}^s {{{\left[ {\frac{{{d^i}f}}{{d{y^i}}}} \right]}^{{e_{ui}}}}} } \right) \\ \cdot \frac{{{d^{\left| p \right| - 1 - s}}}}{{d{y^{\left| p \right| - 1 - s}}}}\left( {\prod\limits_{i = 1}^m {{{\left[ {\frac{{{d^{i - 1}}\left( {{\mu_{i,c}}f} \right)}}{{d{y^{i - 1}}}}} \right]}^{{e_{pi}}}}} } \right) \\ = \frac{{3 \cdot 1 \cdot \left( {2 + 1 - 1} \right)!}}{{\left( {1 + 1} \right)!\left( {2 - 1 - 1} \right)!}}{\left( { - f} \right)^{ - 2 - 1}}\left( {\prod\limits_{i = 1}^1 {{{\left[ {\frac{{{d^i}f}}{{d{y^i}}}} \right]}^1}} } \right) \cdot \frac{{{d^{2 - 1 - 1}}}}{{d{y^{2 - 1 - 1}}}}\left( {\prod\limits_{i = 1}^4 {{{\left[ {\frac{{{d^{i - 1}}\left( {{\mu_{i,c}}f} \right)}}{{d{y^{i - 1}}}}} \right]}^{{e_{pi}}}}} } \right) \\ = - 3 \cdot \frac{1}{{{f^3}}} \cdot \frac{{df}}{{dy}} \cdot {\left[ {\frac{{d\left( {{\mu_{2,c}}f} \right)}}{{dy}}} \right]^2}. \\ \end{array} $$
(4.226)

For \( m = 5 \), we have \( p = {2^1}{3^1},\;\left| p \right| = 2,\;s = 1,\;u = {1^1},\;\left| u \right| = 1,\;\hat{u} = {2^1},\;{e_{p2}} = 1,\;{e_{p3}} = 1, \) \( {e_{pi}} = 0 \) for all \( i \ne 2,3 \), and \( {e_{u1}} = 1 \). Hence, we get

$$ \begin{array} {c} III.[5] = \frac{{{\alpha_p}{\alpha_{\hat{u}}}\left( {\left| p \right| + \left| u \right| - 1} \right)!}}{{\left( {s + \left| u \right|} \right)!\left( {\left| p \right| - 1 - s} \right)!}}{\left( { - f} \right)^{ - \left| p \right| - \left| u \right|}}\left( {\prod\limits_{i = 1}^s {{{\left[ {\frac{{{d^i}f}}{{d{y^i}}}} \right]}^{{e_{ui}}}}} } \right) \\ \cdot \frac{{{d^{\left| p \right| - 1 - s}}}}{{d{y^{\left| p \right| - 1 - s}}}}\left( {\prod\limits_{i = 1}^m {{{\left[ {\frac{{{d^{i - 1}}\left( {{\mu_{i,c}}f} \right)}}{{d{y^{i - 1}}}}} \right]}^{{e_{pi}}}}} } \right) \\ = \frac{{10 \cdot 1 \cdot \left( {2 + 1 - 1} \right)!}}{{\left( {1 + 1} \right)!\left( {2 - 1 - 1} \right)!}}{\left( { - f} \right)^{ - 2 - 1}}\left( {\prod\limits_{i = 1}^1 {{{\left[ {\frac{{{d^i}f}}{{d{y^i}}}} \right]}^1}} } \right) \cdot \frac{{{d^{2 - 1 - 1}}}}{{d{y^{2 - 1 - 1}}}}\left( {\prod\limits_{i = 1}^5 {{{\left[ {\frac{{{d^{i - 1}}\left( {{\mu_{i,c}}f} \right)}}{{d{y^{i - 1}}}}} \right]}^{{e_{pi}}}}} } \right) \\ = - 10 \cdot \frac{1}{{{f^3}}} \cdot \frac{{df}}{{dy}} \cdot \left[ {\frac{{d\left( {{\mu_{2,c}}f} \right)}}{{dy}}} \right] \cdot \left[ {\frac{{{d^2}\left( {{\mu_{3,c}}f} \right)}}{{d{y^2}}}} \right]. \\ \end{array} $$
(4.227)

Summing up the relevant elements from (4.223) to (4.227) and multiplying by \( {( - 1)^m} \) leads to

$$ {\left. {\frac{{dq}}{{d\lambda }}} \right|_{\lambda = 0}} = {( - 1)^1} \cdot \left( { - \frac{1}{f}} \right) \cdot \frac{{{d^{1 - 1}}\left( {{\mu_{1,c}}f} \right)}}{{d{y^{1 - 1}}}} = {\mu_{1,c}} = 0 $$
(4.228)
$$ {\left. {\frac{{{d^2}q}}{{d{\lambda^2}}}} \right|_{\lambda = 0}} = {( - 1)^2} \cdot \left( { - \frac{1}{f}} \right) \cdot \frac{{{d^{2 - 1}}\left( {{\mu_{2,c}}f} \right)}}{{d{y^{2 - 1}}}} = - \frac{1}{f} \cdot \frac{{d\left( {{\mu_{2,c}}f} \right)}}{{dy}} $$
(4.229)
$$ {\left. {\frac{{{d^3}q}}{{d{\lambda^3}}}} \right|_{\lambda = 0}} = {\left( { - 1} \right)^3} \cdot \left( { - \frac{1}{f}} \right) \cdot \frac{{{d^{3 - 1}}\left( {{\mu_{3,c}}f} \right)}}{{d{y^{3 - 1}}}} = \frac{1}{f} \cdot \frac{{{d^2}\left( {{\mu_{3,c}}f} \right)}}{{d{y^2}}} $$
(4.230)
$$ \begin{array} {c} {\left. {\frac{{{d^4}q}}{{d{\lambda^4}}}} \right|_{\lambda = 0}} = {\left( { - 1} \right)^4} \cdot \left[ {\left( { - \frac{1}{f}} \right) \cdot \frac{{{d^{4 - 1}}\left( {{\mu_{4,c}}f} \right)}}{{d{y^{4 - 1}}}} + 3 \cdot \frac{1}{{{f^2}}} \cdot \frac{d}{{dy}}{{\left( {\frac{{d\left( {{\mu_{2,c}}f} \right)}}{{dy}}} \right)}^2}} \right. \\ \left. { - 3 \cdot \frac{1}{{{f^3}}} \cdot \frac{{df}}{{dy}} \cdot {{\left( {\frac{{d\left( {{\mu_{2,c}}f} \right)}}{{dy}}} \right)}^2}} \right] \\ = \left( { - \frac{1}{f}} \right) \cdot \left( {\frac{{{d^3}\left( {{\mu_{4,c}}f} \right)}}{{d{y^3}}} - 3 \cdot \frac{d}{{dy}}\left[ {\frac{1}{f}{{\left( {\frac{{d\left( {{\mu_{2,c}}f} \right)}}{{dy}}} \right)}^2}} \right]} \right), \\ \end{array} $$
(4.231)

and

$$ \begin{array} {c} {\left. {\frac{{{d^5}q}}{{d{\lambda^5}}}} \right|_{\lambda = 0}} = {\left( { - 1} \right)^5} \cdot \left[ {\left( { - \frac{1}{f}} \right) \cdot \frac{{{d^{5 - 1}}\left( {{\mu_{5,c}}f} \right)}}{{d{y^{5 - 1}}}} + 10 \cdot \frac{1}{{{f^2}}} \cdot \frac{d}{{dy}}\left( {\left[ {\frac{{d\left( {{\mu_{2,c}}f} \right)}}{{dy}}} \right]\left[ {\frac{{{d^2}\left( {{\mu_{3,c}}f} \right)}}{{d{y^2}}}} \right]} \right)} \right. \\ \left. { - 10 \cdot \frac{1}{{{f^3}}} \cdot \frac{{df}}{{dy}} \cdot \left[ {\frac{{d\left( {{\mu_{2,c}}f} \right)}}{{dy}}} \right] \cdot \left[ {\frac{{{d^2}\left( {{\mu_{3,c}}f} \right)}}{{d{y^2}}}} \right]} \right] \\ = \frac{1}{f} \cdot \left[ {\frac{{{d^4}\left( {{\mu_{5,c}}f} \right)}}{{d{y^4}}} - 10 \cdot \frac{d}{{dy}}\left( {\frac{1}{f} \cdot \frac{{d\left( {{\mu_{2,c}}f} \right)}}{{dy}}\frac{{{d^2}\left( {{\mu_{3,c}}f} \right)}}{{d{y^2}}}} \right)} \right]. \\ \end{array} $$
(4.232)

Comparing these terms, we find that the derivatives for \( m = 1,\;...,\;5 \) can be written as

$$ {\left. {\frac{{{d^m}q}}{{d{\lambda^m}}}} \right|_{\lambda = 0}} = {\left( { - 1} \right)^m}\left( { - \frac{1}{f}} \right)\left[ {\frac{{{d^{m - 1}}\left( {{\mu_{m,c}}f} \right)}}{{d{y^{m - 1}}}}} \right. - \kappa (m) \cdot \left. {\frac{d}{{dy}}\left( {\frac{1}{f} \cdot \frac{{d\left( {{\mu_{2,c}}f} \right)}}{{dy}}\frac{{{d^{m - 3}}\left( {{\mu_{m - 2,c}}f} \right)}}{{d{y^{m - 3}}}}} \right)} \right] $$
(4.233)

or without abbreviations as

$$ \begin{array} {c} {\left. {\frac{{{d^m}Va{R_\alpha }\left( {\tilde{Y} + \lambda \tilde{Z}} \right)}}{{d{\lambda^m}}}} \right|_{\lambda = 0}} = {\left( { - 1} \right)^m}\left( { - \frac{1}{{{f_Y}(y)}}} \right)\left[ {\frac{{{d^{m - 1}}\left( {{\mu_m}\left( {\tilde{Z}|\tilde{Y} = y} \right){f_Y}(y)} \right)}}{{d{y^{m - 1}}}}} \right. \\ - \kappa (m) \cdot {\left. {\frac{d}{{dy}}\left( {\frac{1}{{{f_Y}(y)}} \cdot \frac{{d\left( {{\mu_2}\left( {\tilde{Z}|\tilde{Y} = y} \right){f_Y}(y)} \right)}}{{dy}}\frac{{{d^{m - 3}}\left( {{\mu_{m - 2}}\left( {\tilde{Z}|\tilde{Y} = y} \right){f_Y}(y)} \right)}}{{d{y^{m - 3}}}}} \right)} \right]_{y = {q_\alpha }\left( {\tilde{Y}} \right)}}, \\ \end{array} $$
(4.234)

with \( \kappa (1) = \kappa (2) = 0,\,\,\kappa (3) = 1,\,\,\kappa (4) = 3 \), and \( \kappa (5) = 10 \), which is the result of Wilde (2003).
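
As a simple plausibility check of (4.228) and (4.229), consider independent standard normal \( \tilde{Y} \) and \( \tilde{Z} \), so that \( b{E}(\tilde{Z}|\tilde{Y} = y) = 0 \), \( b{E}({\tilde{Z}^2}|\tilde{Y} = y) = 1 \), and \( Va{R_\alpha }(\tilde{Y} + \lambda \tilde{Z}) = \sqrt {{1 + {\lambda^2}}} \,{\Phi^{ - 1}}(\alpha ) \). Direct differentiation gives \( {{{dVa{R_\alpha }}} \left/ {{d\lambda }} \right.}{|_{\lambda = 0}} = 0 \) and \( {{{{d^2}Va{R_\alpha }}} \left/ {{d{\lambda^2}}} \right.}{|_{\lambda = 0}} = {\Phi^{ - 1}}(\alpha ) \), and formula (4.229) yields the same value, since \( - \frac{1}{{\varphi (y)}}\frac{{d\varphi (y)}}{{dy}} = y \) equals \( {\Phi^{ - 1}}(\alpha ) \) at \( y = {q_\alpha }(\tilde{Y}) \).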

4.1.8 Order of the Derivatives of VaR

For any \( m \in b{N} \), the (m+1)th element of the Taylor series can be written asFootnote 100

$$ \frac{{{\lambda^m}}}{{m!}}{\left[ {\frac{{{\partial^m}Va{R_\alpha }(\tilde{Y} + \lambda \tilde{Z})}}{{\partial {\lambda^m}}}} \right]_{\lambda = 0}} = g \circ {\left. {\left( {\frac{{{\lambda^m}}}{{m!}}\sum\limits_{p \prec m} {\prod\limits_{i = 1}^m {{{\left( {{\mu_i}\left[ {\tilde{Z}|\tilde{Y} = y} \right]} \right)}^{{e_{pi}}}}} } } \right)} \right|_{y = {q_\alpha }(\tilde{Y})}}, $$
(4.235)

with g being a function that is independent of the number of credits n. With \( {\mu_i} \) as the ith moment about the origin and \( {\eta_i} \) as the ith moment about the mean, it is possible to writeFootnote 101

$$ \begin{array} {c} \lambda \cdot \sum\limits_{p \prec 5} {\prod\limits_{i = 1}^5 {{{\left( {{\mu_i}\left( {\tilde{Z}} \right)} \right)}^{{e_{pi}}}}} } = \lambda \cdot \left( {{\mu_5}\left( {\tilde{Z}} \right) + {\mu_4}\left( {\tilde{Z}} \right) \cdot {\mu_1}\left( {\tilde{Z}} \right) + {\mu_3}\left( {\tilde{Z}} \right) \cdot {{\left( {{\mu_1}\left( {\tilde{Z}} \right)} \right)}^2}} \right. \\ + {\mu_3}\left( {\tilde{Z}} \right) \cdot {\mu_2}\left( {\tilde{Z}} \right) + {\mu_2}\left( {\tilde{Z}} \right) \cdot {\left( {{\mu_1}\left( {\tilde{Z}} \right)} \right)^3} + {\mu_2}{\left( {\tilde{Z}} \right)^2} \cdot {\mu_1}\left( {\tilde{Z}} \right)\left. { + {{\left( {{\mu_1}\left( {\tilde{Z}} \right)} \right)}^5}} \right) \\ = {\mu_5}\left( {\lambda \tilde{Z}} \right) + {\mu_4}\left( {\lambda \tilde{Z}} \right) \cdot {\mu_1}\left( {\lambda \tilde{Z}} \right) + {\mu_3}\left( {\lambda \tilde{Z}} \right) \cdot {\left( {{\mu_1}\left( {\lambda \tilde{Z}} \right)} \right)^2} \\ + {\mu_3}\left( {\lambda \tilde{Z}} \right) \cdot {\mu_2}\left( {\lambda \tilde{Z}} \right) + {\mu_2}\left( {\lambda \tilde{Z}} \right) \cdot {\left( {{\mu_1}\left( {\lambda \tilde{Z}} \right)} \right)^3} + {\mu_2}{\left( {\lambda \tilde{Z}} \right)^2} \cdot {\mu_1}\left( {\lambda \tilde{Z}} \right) \\ + {\left( {{\mu_1}\left( {\lambda \tilde{Z}} \right)} \right)^5}. \\ \end{array} $$
(4.236)

for each m. Thus, the derivatives are given as

$$ \begin{array} {c} {\lambda^m}\sum\limits_{p \prec m} {\prod\limits_{i = 1}^m {{{\left. {{{\left( {{\mu_i}\left[ {\tilde{Z}|\tilde{Y} = y} \right]} \right)}^{{e_{pi}}}}} \right|}_{y = {q_\alpha }\left( {\tilde{Y}} \right)}}} } = \sum\limits_{p \prec m} {\prod\limits_{i = 1}^m {{{\left. {{{\left( {{\mu_i}\left[ {\lambda \tilde{Z}|\tilde{Y} = y} \right]} \right)}^{{e_{pi}}}}} \right|}_{y = {q_\alpha }\left( {\tilde{Y}} \right)}}} } \\ = \sum\limits_{p \prec m} {{{\prod\limits_{i = 1}^m {\left. {{{\left( {{\mu_i}\left[ {\tilde{L} - b{E}\left( {\tilde{L}|\tilde{x}} \right)|\tilde{x} = x} \right]} \right)}^{{e_{pi}}}}} \right|} }_{x = {q_{1 - \alpha }}\left( {\tilde{x}} \right)}}} \\ = \sum\limits_{p \prec m} {\prod\limits_{i = 1}^m {{{\left. {{{\left( {{\mu_i}\left[ {\left( {\tilde{L}|\tilde{x} = x} \right) - b{E}\left( {\tilde{L}|\tilde{x} = x} \right)} \right]} \right)}^{{e_{pi}}}}} \right|}_{x = {q_{1 - \alpha }}\left( {\tilde{x}} \right)}}} } \\ = \sum\limits_{p \prec m} {{{\prod\limits_{i = 1}^m {\left. {{{\left( {{\eta_i}\left[ {\tilde{L}|\tilde{x} = x} \right]} \right)}^{{e_{pi}}}}} \right|} }_{x = {q_{1 - \alpha }}\left( {\tilde{x}} \right)}}} \\ = \sum\limits_{p \prec m} {\prod\limits_{i = 1}^m {{{\left. {{{\left( {{\eta_i}\left[ {\tilde{L}|\tilde{Y} = y} \right]} \right)}^{{e_{pi}}}}} \right|}_{y = {q_\alpha }\left( {\tilde{Y}} \right)}}} } \\ \end{array} $$
(4.237)

Due toFootnote 102

$$ \frac{{{\lambda^m}}}{{m!}}{\left[ {\frac{{{\partial^m}Va{R_\alpha }(\tilde{Y} + \lambda \tilde{Z})}}{{\partial {\lambda^m}}}} \right]_{\lambda = 0}} = g \circ {\left. {\left( {\frac{1}{{m!}}\sum\limits_{p \prec m} {\prod\limits_{i = 1}^m {{{\left( {{\eta_i}\left[ {\tilde{L}|\tilde{Y} = y} \right]} \right)}^{{e_{pi}}}}} } } \right)} \right|_{y = {q_\alpha }(\tilde{Y})}}. $$

with \( {\eta_i}\left( {\tilde{L}|\tilde{x} = x} \right) = {\eta_i}^*(x) \cdot \sum\limits_{j = 1}^n {{w_j}^i} \leq {\eta_i}^*(x) \cdot {\left( {\frac{b}{a}} \right)^i} \cdot \frac{1}{{{n^{i - 1}}}} = O\left( {\frac{1}{{{n^{i - 1}}}}} \right), \) for all i, and revisiting (4.235) and (4.236), it is straightforward to see that only for \( 0 < a \leq EA{D_i} \leq b \) and \( m = 3 \) there exist terms which are at maximum of order O(1/n 2):

$$ \begin{array} {c} \sum\limits_{p \prec 3} {\prod\limits_{i = 1}^3 {{{\left( {{\eta_i}\left[ {\tilde{L}|\tilde{Y} = y} \right]} \right)}^{{e_{pi}}}}} } = {\eta_3}\left[ {\tilde{L}|\tilde{Y} = y} \right] = O\left( {\frac{1}{{{n^2}}}} \right), \\ \sum\limits_{p \prec 4} {\prod\limits_{i = 1}^4 {{{\left( {{\eta_i}\left[ {\tilde{L}|\tilde{Y} = y} \right]} \right)}^{{e_{pi}}}}} } = {\eta_4}\left[ {\tilde{L}|\tilde{Y} = y} \right] + {\left( {{\eta_2}\left[ {\tilde{L}|\tilde{Y} = y} \right]} \right)^2} = O\left( {\frac{1}{{{n^3}}}} \right) + O\left( {\frac{1}{{{n^2}}}} \right). \\ \end{array} $$
(4.238)

All terms with higher derivatives of VaR are at least of order \( O(1/{n^3}) \).
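
To make this counting argument transparent, the following small Python sketch (an illustrative addition; products containing \( {\eta_1} \) vanish because the conditional first central moment is zero) enumerates the partitions of m without parts of size 1 and reports the order \( {n^{ - (m - \left| p \right|)}} \) of the corresponding product of conditional moments:

```python
def partitions(m, min_part=2):
    """Integer partitions of m with all parts >= min_part (parts of size 1 drop out since eta_1 = 0)."""
    if m == 0:
        return [[]]
    result = []
    for part in range(min_part, m + 1):
        for rest in partitions(m - part, part):
            result.append([part] + rest)
    return result

for m in range(3, 7):
    for p in partitions(m):
        # eta_i is of order O(1/n^(i-1)), so a product over a partition p of m
        # is of order O(1/n^(sum_i (i-1))) = O(1/n^(m - |p|)).
        print(f"m={m}, partition={p}, order O(1/n^{m - len(p)})")
```

Only m = 3 and m = 4 produce a term of order \( O(1/{n^2}) \), in line with (4.238).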

4.1.9 VaR-Based Second-Order Granularity Adjustment for a Normally Distributed Systematic Factor

For convenience, the summands of the second-order granularity add-on \( \Delta {l_2} \) will be calculated separately:

$$ \begin{array} {c} \Delta {l_2} = \frac{1}{{6\varphi }}\frac{d}{{dx}}\left( {\frac{1}{{{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}}}\frac{d}{{dx}}\left[ {\frac{{{\eta_{3,c}}\varphi }}{{{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}}}} \right]} \right) \\ + \frac{1}{{8\varphi }}\frac{d}{{dx}}{\left. {\left[ {\frac{1}{\varphi }\frac{1}{{{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}}}{{\left( {\frac{d}{{dx}}\left[ {\frac{{{\eta_{2,c}}\varphi }}{{{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}}}} \right]} \right)}^2}} \right]} \right|_{x = {\Phi^{ - 1}}(1 - \alpha )}} \\ = :{\left. {\Delta {l_{2,1}} + \Delta {l_{2,2}}} \right|_{x = {\Phi^{ - 1}}(1 - \alpha )}}. \\ \end{array} $$
(4.239)

The term \( \Delta {l_{2,1}} \) equals

$$ \begin{array} {c} \Delta {l_{2,1}} = \frac{1}{6}\left[ {\frac{d}{{dx}}\left( {\frac{1}{{{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}}}} \right)\frac{1}{\varphi }\frac{d}{{dx}}\left( {\frac{{{\eta_{3,c}}\varphi }}{{{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}}}} \right) + \frac{1}{{{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}}}\frac{1}{\varphi }\frac{{{d^2}}}{{d{x^2}}}\left( {\frac{{{\eta_{3,c}}\varphi }}{{{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}}}} \right)} \right] \\ = \frac{1}{6}\left[ {\frac{d}{{dx}}\left( {\frac{1}{{{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}}}} \right)\left( {\underbrace {\frac{1}{\varphi }\frac{d}{{dx}}\left( {{\eta_{3,c}}\varphi } \right)}_{ = :A}\frac{1}{{{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}}} + {\eta_{3,c}}\frac{d}{{dx}}\left( {\frac{1}{{{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}}}} \right)} \right)} \right. \\ + \frac{1}{{{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}}}\frac{1}{\varphi }\frac{d}{{dx}}\left[ {\underbrace {\frac{d}{{dx}}\left( {{\eta_{3,c}}\varphi } \right)\frac{1}{{{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}}}}_{ = :B} + \underbrace {{\eta_{3,c}}\varphi \frac{d}{{dx}}\left( {\frac{1}{{{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}}}} \right)}_{ = :C}} \right]. \\ \end{array} $$
(4.240)

For the calculation, we need the first and second derivative of the density function \( \varphi \). As the systematic factor is assumed to be normally distributed, we have

$$ \varphi = \frac{1}{{\sqrt {{2\pi }} }}{e^{ - \frac{{{x^2}}}{2}}} $$
(4.241)
$$ \frac{{d\varphi }}{{dx}} = \left( { - x} \right)\frac{1}{{\sqrt {{2\pi }} }}{e^{ - \frac{{{x^2}}}{2}}} = - x\,\varphi $$
(4.242)
$$ \frac{{{d^2}\varphi }}{{d{x^2}}} = \left( { - 1} \right)\frac{1}{{\sqrt {{2\pi }} }}{e^{ - \frac{{{x^2}}}{2}}} - x\left( { - x} \right)\frac{1}{{\sqrt {{2\pi }} }}{e^{ - \frac{{{x^2}}}{2}}} = ({x^2} - 1)\varphi . $$
(4.243)
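
Both derivative identities can be checked symbolically; a minimal sympy sketch (illustrative only):

```python
import sympy as sp

x = sp.symbols('x')
phi = sp.exp(-x**2 / 2) / sp.sqrt(2 * sp.pi)   # standard normal density

# cf. (4.242) and (4.243)
assert sp.simplify(sp.diff(phi, x) + x * phi) == 0
assert sp.simplify(sp.diff(phi, x, 2) - (x**2 - 1) * phi) == 0
print("derivative identities verified")
```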

Furthermore, we need the derivative

$$ \frac{d}{{dx}}\left( {\frac{1}{{{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}}}} \right) = - \frac{{{{{{d^2}{\mu_{1,c}}}} \left/ {{d{x^2}}} \right.}}}{{{{\left( {{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}} \right)}^2}}}. $$
(4.244)

Herewith, the term A from (4.240) can easily be calculated:

$$ A = \frac{1}{\varphi }\frac{d}{{dx}}\left( {{\eta_{3,c}}\varphi } \right) = \frac{{d{\eta_{3,c}}}}{{dx}} + \frac{{{\eta_{3,c}}}}{\varphi }\frac{{d\varphi }}{{dx}} = \frac{{d{\eta_{3,c}}}}{{dx}} - {\eta_{3,c}}x. $$
(4.245)

Furthermore, \( {{{dB}} \left/ {{dx}} \right.} \) is equal to

$$ \begin{array} {c} \frac{{dB}}{{dx}} = \frac{d}{{dx}}\left( {\frac{d}{{dx}}\left( {{\eta_{3,c}}\varphi } \right)\frac{1}{{{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}}}} \right) \\ = \frac{{{d^2}}}{{d{x^2}}}\left( {{\eta_{3,c}}\varphi } \right)\frac{1}{{{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}}} + \frac{d}{{dx}}\left( {{\eta_{3,c}}\varphi } \right)\frac{d}{{dx}}\left( {\frac{1}{{{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}}}} \right) \\ = \frac{d}{{dx}}\left( {\frac{{d{\eta_{3,c}}}}{{dx}}\varphi + {\eta_{3,c}}\frac{{d\varphi }}{{dx}}} \right)\frac{1}{{{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}}} + \left( {\frac{{d{\eta_{3,c}}}}{{dx}}\varphi + {\eta_{3,c}}\frac{{d\varphi }}{{dx}}} \right)\left( { - \frac{{{{{{d^2}{\mu_{1,c}}}} \left/ {{d{x^2}}} \right.}}}{{{{\left( {{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}} \right)}^2}}}} \right) \\ = \left( {\frac{{{d^2}{\eta_{3,c}}}}{{d{x^2}}}\varphi + 2\frac{{d{\eta_{3,c}}}}{{dx}}\frac{{d\varphi }}{{dx}} + {\eta_{3,c}}\frac{{{d^2}\varphi }}{{d{x^2}}}} \right)\frac{1}{{{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}}} \\ - \left( {\frac{{d{\eta_{3,c}}}}{{dx}}\varphi + {\eta_{3,c}}\frac{{d\varphi }}{{dx}}} \right)\frac{{{{{{d^2}{\mu_{1,c}}}} \left/ {{d{x^2}}} \right.}}}{{{{\left( {{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}} \right)}^2}}}. \\ \end{array} $$
(4.246)

Similarly, \( {{{dC}} \left/ {{dx}} \right.} \) is equivalent to

$$ \begin{array} {c} \frac{{dC}}{{dx}} = \frac{d}{{dx}}\left( {{\eta_{3,c}}\varphi \left( { - \frac{{{{{{d^2}{\mu_{1,c}}}} \left/ {{d{x^2}}} \right.}}}{{{{\left( {{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}} \right)}^2}}}} \right)} \right) \\ = - \frac{d}{{dx}}\left( {{\eta_{3,c}}\varphi } \right)\frac{{{{{{d^2}{\mu_{1,c}}}} \left/ {{d{x^2}}} \right.}}}{{{{\left( {{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}} \right)}^2}}} - {\eta_{3,c}}\varphi \frac{d}{{dx}}\left( {\frac{{{{{{d^2}{\mu_{1,c}}}} \left/ {{d{x^2}}} \right.}}}{{{{\left( {{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}} \right)}^2}}}} \right) \\ = \left( { - \frac{{d{\eta_{3,c}}}}{{dx}}\varphi - {\eta_{3,c}}\frac{{d\varphi }}{{dx}}} \right)\frac{{{{{{d^2}{\mu_{1,c}}}} \left/ {{d{x^2}}} \right.}}}{{{{\left( {{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}} \right)}^2}}} \\ - {\eta_{3,c}}\varphi \left( {\frac{{{{\left( {{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}} \right)}^2}\left( {{{{{d^3}{\mu_{1,c}}}} \left/ {{d{x^3}}} \right.}} \right) - 2\left( {{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}} \right){{\left( {{{{{d^2}{\mu_{1,c}}}} \left/ {{d{x^2}}} \right.}} \right)}^2}}}{{{{\left( {{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}} \right)}^4}}}} \right). \\ \end{array} $$
(4.247)

Using these terms, \( \Delta {l_{2,1}} \) results in

$$ \begin{array} {c} \Delta {l_{2,1}} = \frac{1}{6}\left[ { - \frac{{{{{{d^2}{\mu_{1,c}}}} \left/ {{d{x^2}}} \right.}}}{{{{\left( {{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}} \right)}^2}}}\left( {\frac{{{{{d{\eta_{3,c}}}} \left/ {{dx}} \right.}}}{{{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}}} - \frac{{{\eta_{3,c}}x}}{{{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}}} - {\eta_{3,c}}\frac{{{{{{d^2}{\mu_{1,c}}}} \left/ {{d{x^2}}} \right.}}}{{{{\left( {{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}} \right)}^2}}}} \right)} \right. \\ + \frac{1}{{{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}}}\frac{1}{\varphi }\left[ {\left( {\frac{{{d^2}{\eta_{3,c}}}}{{d{x^2}}}\varphi + 2\frac{{d{\eta_{3,c}}}}{{dx}}\frac{{d\varphi }}{{dx}} + {\eta_{3,c}}\frac{{{d^2}\varphi }}{{d{x^2}}}} \right)\frac{1}{{{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}}}} \right. \\ - 2\left( {\frac{{d{\eta_{3,c}}}}{{dx}}\varphi + {\eta_{3,c}}\frac{{d\varphi }}{{dx}}} \right)\frac{{{{{{d^2}{\mu_{1,c}}}} \left/ {{d{x^2}}} \right.}}}{{{{\left( {{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}} \right)}^2}}} \\ \left. { - {\eta_{3,c}}\varphi \left( {\frac{{{{\left( {{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}} \right)}^2}\left( {{{{{d^3}{\mu_{1,c}}}} \left/ {{d{x^3}}} \right.}} \right) - 2\left( {{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}} \right){{\left( {{{{{d^2}{\mu_{1,c}}}} \left/ {{d{x^2}}} \right.}} \right)}^2}}}{{{{\left( {{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}} \right)}^4}}}} \right)} \right]. \\ \end{array} $$
(4.248)

Applying the derivatives of \( \varphi \) from (4.242) and (4.243) leads to

$$ \begin{array} {c} \Delta {l_{2,1}} = \frac{1}{6}\left[ { - 3\frac{{\left( {{{{d{\eta_{3,c}}}} \left/ {{dx}} \right.}} \right)\left( {{{{{d^2}{\mu_{1,c}}}} \left/ {{d{x^2}}} \right.}} \right)}}{{{{\left( {{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}} \right)}^3}}} + 3\frac{{{\eta_{3,c}}x\left( {{{{{d^2}{\mu_{1,c}}}} \left/ {{d{x^2}}} \right.}} \right)}}{{{{\left( {{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}} \right)}^3}}} + 3{\eta_{3,c}}\frac{{{{\left( {{{{{d^2}{\mu_{1,c}}}} \left/ {{d{x^2}}} \right.}} \right)}^2}}}{{{{\left( {{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}} \right)}^4}}}} \right. \\ + \frac{{{{{{d^2}{\eta_{3,c}}}} \left/ {{d{x^2}}} \right.}}}{{{{\left( {{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}} \right)}^2}}} - 2x\frac{{{{{d{\eta_{3,c}}}} \left/ {{dx}} \right.}}}{{{{\left( {{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}} \right)}^2}}} + \frac{{{\eta_{3,c}}\left( {{x^2} - 1} \right)}}{{{{\left( {{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}} \right)}^2}}}\left. { - {\eta_{3,c}}\frac{{{{{{d^3}{\mu_{1,c}}}} \left/ {{d{x^3}}} \right.}}}{{{{\left( {{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}} \right)}^3}}}} \right] \\ = \frac{1}{{6{{\left( {{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}} \right)}^2}}}\left[ {{\eta_{3,c}}\left( {{x^2} - 1 - \frac{{{{{{d^3}{\mu_{1,c}}}} \left/ {{d{x^3}}} \right.}}}{{{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}}} + \frac{{3x\left( {{{{{d^2}{\mu_{1,c}}}} \left/ {{d{x^2}}} \right.}} \right)}}{{{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}}} + \frac{{3{{\left( {{{{{d^2}{\mu_{1,c}}}} \left/ {{d{x^2}}} \right.}} \right)}^2}}}{{{{\left( {{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}} \right)}^2}}}} \right)} \right. \\ \left. { + \frac{{d{\eta_{3,c}}}}{{dx}}\left( { - 2x - \frac{{3\left( {{{{{d^2}{\mu_{1,c}}}} \left/ {{d{x^2}}} \right.}} \right)}}{{{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}}}} \right) + \frac{{{d^2}{\eta_{3,c}}}}{{d{x^2}}}} \right]. \\ \end{array} $$
(4.249)

Henceforward, the summand \( \Delta {l_{2,2}} \) will be simplified:

$$ \begin{array} {c} \Delta {l_{2,2}} = \frac{1}{{8\varphi }}\frac{d}{{dx}}\left[ {\frac{1}{\varphi }\frac{1}{{{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}}}{{\left( {\frac{d}{{dx}}\left[ {\frac{{{\eta_{2,c}}\varphi }}{{{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}}}} \right]} \right)}^2}} \right] \\ = \frac{1}{{8\varphi }}\frac{d}{{dx}}\left( {\frac{\varphi }{{{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}}}{{\underbrace {\left( {\frac{1}{\varphi }\frac{d}{{dx}}\left[ {\frac{{{\eta_{2,c}}\varphi }}{{{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}}}} \right]} \right)}_*}^2}} \right). \\ \end{array} $$
(4.250)

The term (*) equals minus two times the first-order granularity adjustment, so that we can use the resulting equation (4.18). This leads to

$$ \begin{array} {c} \Delta {l_{2,2}} = \frac{1}{{8\varphi }}\frac{d}{{dx}}\left( {\frac{\varphi }{{{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}}}{{\left[ { - \frac{{x\,{\eta_{2,c}}}}{{{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}}} + \frac{{{{{d{\eta_{2,c}}}} \left/ {{dx}} \right.}}}{{{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}}} - \frac{{{\eta_{2,c}}{{{{d^2}{\mu_{1,c}}}} \left/ {{d{x^2}}} \right.}}}{{{{\left( {{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}} \right)}^2}}}} \right]}^2}} \right) \\ = \frac{1}{8}\left[ {\underbrace {\frac{1}{\varphi }\frac{d}{{dx}}\left( {\frac{\varphi }{{{{\left( {{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}} \right)}^3}}}} \right)}_{ = :(I)}{{\left( { - x\,{\eta_{2,c}} + \frac{{d{\eta_{2,c}}}}{{dx}} - \frac{{{\eta_{2,c}}{{{{d^2}{\mu_{1,c}}}} \left/ {{d{x^2}}} \right.}}}{{{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}}}} \right)}^2}} \right. \\ \left. { + \frac{1}{{{{\left( {{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}} \right)}^3}}}\underbrace {\frac{d}{{dx}}\left( {{{\left[ { - x\,{\eta_{2,c}} + \frac{{d{\eta_{2,c}}}}{{dx}} - \frac{{{\eta_{2,c}}{{{{d^2}{\mu_{1,c}}}} \left/ {{d{x^2}}} \right.}}}{{{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}}}} \right]}^2}} \right)}_{ = :(II)}} \right]. \\ \end{array} $$
(4.251)

Using the derivative of a normal distribution, \( d\varphi /dx = - x\,\varphi \), the term (I) is equivalent to

$$ \begin{array} {c} (I) = \frac{1}{\varphi }\frac{d}{{dx}}\left( {\frac{\varphi }{{{{\left( {{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}} \right)}^3}}}} \right) \\ = \frac{1}{\varphi }\frac{{d\varphi }}{{dx}}\frac{1}{{{{\left( {{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}} \right)}^3}}} + \frac{d}{{dx}}\left( {\frac{1}{{{{\left( {{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}} \right)}^3}}}} \right) \\ = \frac{{ - x}}{{{{\left( {{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}} \right)}^3}}} - 3\frac{{\left( {{{{{d^2}{\mu_{1,c}}}} \left/ {{d{x^2}}} \right.}} \right)}}{{{{\left( {{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}} \right)}^4}}}. \\ \end{array} $$
(4.252)

Term (II) can be written as

$$ \begin{array} {c} (II) = \frac{d}{{dx}}\left( {{{\left[ { - x\,{\eta_{2,c}} + \frac{{d{\eta_{2,c}}}}{{dx}} - \frac{{{\eta_{2,c}}{{{{d^2}{\mu_{1,c}}}} \left/ {{d{x^2}}} \right.}}}{{{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}}}} \right]}^2}} \right) \\ = 2\left( { - x\,{\eta_{2,c}} + \frac{{d{\eta_{2,c}}}}{{dx}} - \frac{{{\eta_{2,c}}{{{{d^2}{\mu_{1,c}}}} \left/ {{d{x^2}}} \right.}}}{{{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}}}} \right)\left( { - {\eta_{2,c}} - x\frac{{d{\eta_{2,c}}}}{{dx}} + \frac{{{d^2}{\eta_{2,c}}}}{{d{x^2}}}} \right. \\ \left. { - \frac{d}{{dx}}\left( {{\eta_{2,c}}\frac{{{d^2}{\mu_{1,c}}}}{{d{x^2}}}} \right)\frac{1}{{{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}}} - {\eta_{2,c}}\frac{{{d^2}{\mu_{1,c}}}}{{d{x^2}}}\frac{d}{{dx}}\left( {\frac{1}{{{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}}}} \right)} \right) \\ = 2\left( { - x{\eta_{2,c}} + \frac{{d{\eta_{2,c}}}}{{dx}} - \frac{{{\eta_{2,c}}{{{{d^2}{\mu_{1,c}}}} \left/ {{d{x^2}}} \right.}}}{{{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}}}} \right)\left( { - {\eta_{2,c}} - x\frac{{d{\eta_{2,c}}}}{{dx}} + \frac{{{d^2}{\eta_{2,c}}}}{{d{x^2}}}} \right. \\ \left. { - \frac{{d{\eta_{2,c}}}}{{dx}}\frac{{{{{{d^2}{\mu_{1,c}}}} \left/ {{d{x^2}}} \right.}}}{{{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}}} - {\eta_{2,c}}\frac{{{{{{d^3}{\mu_{1,c}}}} \left/ {{d{x^3}}} \right.}}}{{{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}}} + {\eta_{2,c}}\frac{{{d^2}{\mu_{1,c}}}}{{d{x^2}}}\frac{{{{{{d^2}{\mu_{1,c}}}} \left/ {{d{x^2}}} \right.}}}{{{{\left( {{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}} \right)}^2}}}} \right). \\ \end{array} $$
(4.253)

Using these expressions, \( \Delta {l_{2,2}} \) from (4.251) is equal to

$$ \begin{array} {c} \Delta {l_{2,2}} = \frac{1}{8}\left[ {\left( {\frac{{ - x}}{{{{\left( {{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}} \right)}^3}}} - 3\frac{{\left( {{{{{d^2}{\mu_{1,c}}}} \left/ {{d{x^2}}} \right.}} \right)}}{{{{\left( {{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}} \right)}^4}}}} \right){{\left( { - x\,{\eta_{2,c}} + \frac{{d{\eta_{2,c}}}}{{dx}} - \frac{{{\eta_{2,c}}{{{{d^2}{\mu_{1,c}}}} \left/ {{d{x^2}}} \right.}}}{{{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}}}} \right)}^2}} \right. \\ + \frac{2}{{{{\left( {{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}} \right)}^3}}}\left( { - x\,{\eta_{2,c}} + \frac{{d{\eta_{2,c}}}}{{dx}} - \frac{{{\eta_{2,c}}{{{{d^2}{\mu_{1,c}}}} \left/ {{d{x^2}}} \right.}}}{{{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}}}} \right)\left( { - {\eta_{2,c}} - x\frac{{d{\eta_{2,c}}}}{{dx}} + \frac{{{d^2}{\eta_{2,c}}}}{{d{x^2}}}} \right. \\ \left. {\left. { - \frac{{d{\eta_{2,c}}}}{{dx}}\frac{{{{{{d^2}{\mu_{1,c}}}} \left/ {{d{x^2}}} \right.}}}{{{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}}} - {\eta_{2,c}}\frac{{{{{{d^3}{\mu_{1,c}}}} \left/ {{d{x^3}}} \right.}}}{{{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}}} + {\eta_{2,c}}\frac{{{d^2}{\mu_{1,c}}}}{{d{x^2}}}\frac{{{{{{d^2}{\mu_{1,c}}}} \left/ {{d{x^2}}} \right.}}}{{{{\left( {{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}} \right)}^2}}}} \right)} \right], \\ \end{array} $$
(4.254)

which leads to

$$ \begin{array} {c} \Delta {l_{2,2}} = \frac{1}{{8{{\left( {{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}} \right)}^3}}}\left[ {\left( { - x - 3\frac{{{{{{d^2}{\mu_{1,c}}}} \left/ {{d{x^2}}} \right.}}}{{{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}}}} \right){{\left( {{\eta_{2,c}}\left[ { - x - \frac{{{{{{d^2}{\mu_{1,c}}}} \left/ {{d{x^2}}} \right.}}}{{{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}}}} \right] + \frac{{d{\eta_{2,c}}}}{{dx}}} \right)}^2}} \right. \\ + 2\left( {{\eta_{2,c}}\left[ {x + \frac{{{{{{d^2}{\mu_{1,c}}}} \left/ {{d{x^2}}} \right.}}}{{{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}}}} \right] - \frac{{d{\eta_{2,c}}}}{{dx}}} \right)\left( {{\eta_{2,c}}\left[ {1 + \frac{{{{{{d^3}{\mu_{1,c}}}} \left/ {{d{x^3}}} \right.}}}{{{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}}} - \frac{{{{\left( {{{{{d^2}{\mu_{1,c}}}} \left/ {{d{x^2}}} \right.}} \right)}^2}}}{{{{\left( {{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}} \right)}^2}}}} \right]} \right. \\ \left. {\left. { + \frac{{d{\eta_{2,c}}}}{{dx}}\left[ {x + \frac{{{{{{d^2}{\mu_{1,c}}}} \left/ {{d{x^2}}} \right.}}}{{{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}}}} \right] - \frac{{{d^2}{\eta_{2,c}}}}{{d{x^2}}}} \right)} \right]. \\ \end{array} $$
(4.255)

Adding the terms \( \Delta {l_{2,2}} \) and \( \Delta {l_{2,1}} \) together results in

$$ \begin{array} {c} \Delta {l_2} = \frac{1}{{6{{\left( {{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}} \right)}^2}}}\left[ {{\eta_{3,c}}\left( {{x^2} - 1 - \frac{{{{{{d^3}{\mu_{1,c}}}} \left/ {{d{x^3}}} \right.}}}{{{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}}} + \frac{{3x\left( {{{{{d^2}{\mu_{1,c}}}} \left/ {{d{x^2}}} \right.}} \right)}}{{{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}}} + \frac{{3{{\left( {{{{{d^2}{\mu_{1,c}}}} \left/ {{d{x^2}}} \right.}} \right)}^2}}}{{{{\left( {{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}} \right)}^2}}}} \right)} \right. \\ \left. { + \frac{{d{\eta_{3,c}}}}{{dx}}\left( { - 2x - \frac{{3\left( {{{{{d^2}{\mu_{1,c}}}} \left/ {{d{x^2}}} \right.}} \right)}}{{{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}}}} \right) + \frac{{{d^2}{\eta_{3,c}}}}{{d{x^2}}}} \right] \\ + \frac{1}{{8{{\left( {{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}} \right)}^3}}}\left[ {\left( { - x - 3\frac{{{{{{d^2}{\mu_{1,c}}}} \left/ {{d{x^2}}} \right.}}}{{{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}}}} \right){{\left( {{\eta_{2,c}}\left[ { - x - \frac{{{{{{d^2}{\mu_{1,c}}}} \left/ {{d{x^2}}} \right.}}}{{{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}}}} \right] + \frac{{d{\eta_{2,c}}}}{{dx}}} \right)}^2}} \right. \\ + 2\left( {{\eta_{2,c}}\left[ {x + \frac{{{{{{d^2}{\mu_{1,c}}}} \left/ {{d{x^2}}} \right.}}}{{{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}}}} \right] - \frac{{d{\eta_{2,c}}}}{{dx}}} \right)\left( {{\eta_{2,c}}\left[ {1 + \frac{{{{{{d^3}{\mu_{1,c}}}} \left/ {{d{x^3}}} \right.}}}{{{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}}} - \frac{{{{\left( {{{{{d^2}{\mu_{1,c}}}} \left/ {{d{x^2}}} \right.}} \right)}^2}}}{{{{\left( {{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}} \right)}^2}}}} \right]} \right. \\ {\left. {\left. {\left. { + \frac{{d{\eta_{2,c}}}}{{dx}}\left[ {x + \frac{{{{{{d^2}{\mu_{1,c}}}} \left/ {{d{x^2}}} \right.}}}{{{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}}}} \right] - \frac{{{d^2}{\eta_{2,c}}}}{{d{x^2}}}} \right)} \right]} \right|_{x = {\Phi^{ - 1}}\left( {1 - \alpha } \right)}}. \\ \end{array} $$
(4.256)

4.1.10 Third Conditional Moment of Losses

Subsequently, the third conditional moment of the portfolio's loss about the mean, \( {\eta_{3,c}} = {\eta_3}(\tilde{L}|\tilde{x} = x) \), shall be expressed in terms of the moments of the separated factors \( \widetilde{{LG{D_i}}} \) and \( {1_{\left\{ {{{\tilde{D}}_i}} \right\}}} \). With

$$ \begin{array} {c} {\eta_{3,c}} = {\eta_3}\left( {\tilde{L}|\tilde{x} = x} \right) \\ = {\eta_3}\left( {\sum\limits_{i = 1}^n {{w_i} \cdot \widetilde{{LG{D_i}}} \cdot {1_{\left\{ {{{\tilde{D}}_i}} \right\}}}} |\tilde{x} = x} \right) \\ = \sum\limits_{i = 1}^n {{w_i}^3 \cdot {\eta_3}\left( {\widetilde{{LG{D_i}}} \cdot {1_{\left\{ {{{\tilde{D}}_i}} \right\}}}|\tilde{x} = x} \right)}, \\ \end{array} $$
(4.257)

which is due to the conditional independence property, we need to determine \( {\eta_3}(\widetilde{{LG{D_i}}} \cdot {1_{\left\{ {{{\tilde{D}}_i}} \right\}}}|\tilde{x}) \). In general, the third moment about the mean is equal to

$$ \begin{array} {c} {\eta_3}\left( {\tilde{X}} \right) = \mathbb{E}\left( {{{\left[ {\tilde{X} - \mathbb{E}\left( {\tilde{X}} \right)} \right]}^3}} \right) \\ = \mathbb{E}\left[ {{{\tilde{X}}^3} - 3{{\tilde{X}}^2}\mathbb{E}\left( {\tilde{X}} \right) + 3\tilde{X}{\mathbb{E}^2}\left( {\tilde{X}} \right) - {\mathbb{E}^3}\left( {\tilde{X}} \right)} \right] \\ = \mathbb{E}\left( {{{\tilde{X}}^3}} \right) - 3\mathbb{E}\left( {{{\tilde{X}}^2}} \right)\mathbb{E}\left( {\tilde{X}} \right) + 3\mathbb{E}\left( {\tilde{X}} \right){\mathbb{E}^2}\left( {\tilde{X}} \right) - {\mathbb{E}^3}\left( {\tilde{X}} \right) \\ = \mathbb{E}\left( {{{\tilde{X}}^3}} \right) - 3\mathbb{E}\left( {{{\tilde{X}}^2}} \right)\mathbb{E}\left( {\tilde{X}} \right) + 2{\mathbb{E}^3}\left( {\tilde{X}} \right). \\ \end{array} $$
(4.258)
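
This expansion of the third central moment can be checked numerically; a quick Monte Carlo sketch with an arbitrary test distribution (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.beta(2.0, 5.0, size=1_000_000)     # any test distribution will do

third_central = np.mean((x - x.mean())**3)
expanded = np.mean(x**3) - 3 * np.mean(x**2) * np.mean(x) + 2 * np.mean(x)**3
print(third_central, expanded)             # agree up to sampling noise
```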

Thus, the conditional moment \( {\eta_3}(\widetilde{{LG{D_i}}} \cdot {1_{\left\{ {{{\tilde{D}}_i}} \right\}}}|\tilde{x}) \) can be written as

$$ \begin{array} {c} {\eta_3}\left( {\widetilde{{LG{D_i}}} \cdot {1_{\left\{ {{{\tilde{D}}_i}} \right\}}}|\tilde{x}} \right) = \mathbb{E}\left( {{{\left[ {\widetilde{{LGD}} \cdot {1_{\left\{ {{{\tilde{D}}_i}} \right\}}}|\tilde{x}} \right]}^3}} \right) - 3\,\mathbb{E}\left( {{{\left[ {\widetilde{{LGD}} \cdot {1_{\left\{ {{{\tilde{D}}_i}} \right\}}}|\tilde{x}} \right]}^2}} \right) \cdot \mathbb{E}\left( {\widetilde{{LG{D_i}}} \cdot {1_{\left\{ {{{\tilde{D}}_i}} \right\}}}|\tilde{x}} \right) \\ + 2\,{\mathbb{E}^3}\left( {\widetilde{{LG{D_i}}} \cdot {1_{\left\{ {{{\tilde{D}}_i}} \right\}}}|\tilde{x}} \right). \\ \end{array} $$
(4.259)

Using the conditional independence property again, considering that the LGDs are assumed to be stochastically independent of each other, and with \( \mathbb{E}[{({1_{\left\{ {{{\tilde{D}}_i}} \right\}}}|\tilde{x})^i}] = \mathbb{E}[({1_{\left\{ {{{\tilde{D}}_i}} \right\}}}|\tilde{x})] = p(\tilde{x}) \), we have

$$ \begin{array} {c} {\eta_3}\left( {\widetilde{{LG{D_i}}} \cdot {1_{\left\{ {{{\tilde{D}}_i}} \right\}}}|\tilde{x}} \right) = \mathbb{E}\left( {{{\left[ {\widetilde{{LGD}}|\tilde{x}} \right]}^3}} \right)p\left( {\tilde{x}} \right) - 3\,\mathbb{E}\left( {{{\left[ {\widetilde{{LGD}}|\tilde{x}} \right]}^2}} \right)\mathbb{E}\left( {\widetilde{{LGD}}|\tilde{x}} \right){p^2}\left( {\tilde{x}} \right) \\ + 2\,{\mathbb{E}^3}\left( {\widetilde{{LGD}}|\tilde{x}} \right){p^3}\left( {\tilde{x}} \right) \\ = \mathbb{E}\left( {{{\widetilde{{LGD}}}^3}} \right)p\left( {\tilde{x}} \right) - 3\,\mathbb{E}\left( {{{\widetilde{{LGD}}}^2}} \right)\mathbb{E}\left( {\widetilde{{LGD}}} \right){p^2}\left( {\tilde{x}} \right) \\ + 2\,{\mathbb{E}^3}\left( {\widetilde{{LGD}}} \right){p^3}\left( {\tilde{x}} \right). \\ \end{array} $$
(4.260)

With the abbreviations \( ELGD = \mathbb{E}(\widetilde{{LGD}}) \), \( VLGD = \mathbb{V}(\widetilde{{LGD}}) \), as well as \( SLGD = {\eta_3}(\widetilde{{LGD}}) \), and using (4.258) again, we obtain

$$ \mathbb{E}\left( {{{\widetilde{{LGD}}}^2}} \right) = ELG{D^2} + VLGD $$
(4.261)
$$ \begin{array} {c} \mathbb{E}\left( {{{\widetilde{{LGD}}}^3}} \right) = SLGD + 3(ELG{D^2} + VLGD)ELGD - 2\,ELG{D^3} \\ = ELG{D^3} + 3\,ELGD \cdot VLGD + SLGD. \\ \end{array} $$
(4.262)
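
The moment identity (4.262) can be verified for any LGD distribution with a finite third moment; a short numerical sketch using a beta-distributed stand-in for the LGD (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)
lgd = rng.beta(2.0, 3.0, size=1_000_000)     # stand-in LGD distribution

elgd, vlgd = lgd.mean(), lgd.var()
slgd = np.mean((lgd - elgd)**3)              # third central moment, SLGD

lhs = np.mean(lgd**3)
rhs = elgd**3 + 3 * elgd * vlgd + slgd       # cf. (4.262)
print(lhs, rhs)                              # agree up to sampling noise
```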

Consequently, (4.260) is equivalent to

$$ \begin{array} {c} {\eta_3}\left( {\widetilde{{LG{D_i}}} \cdot {1_{\left\{ {{{\tilde{D}}_i}} \right\}}}|\tilde{x}} \right) = \left( {ELG{D^3} + 3\,ELGD \cdot VLGD + SLGD} \right)p\left( {\tilde{x}} \right) \\ - 3\,\left( {ELG{D^3} + ELGD \cdot VLGD} \right){p^2}\left( {\tilde{x}} \right) + 2\,ELG{D^3}{p^3}\left( {\tilde{x}} \right). \\ \end{array} $$
(4.263)

Thus, the conditional moment of the portfolio loss (4.257) can finally be written as

$$ \begin{array} {c} {\eta_{3,c}} = \sum\limits_{i = 1}^n {{w_i}^3 \cdot {\eta_3}\left( {\widetilde{{LG{D_i}}} \cdot {1_{\left\{ {{{\tilde{D}}_i}} \right\}}}\left| {\widetilde{x} = x} \right.} \right)} \\ = \sum\limits_{i = 1}^n {{w_i}^3\left[ {\left( {ELG{D_i}^3 + 3 \cdot ELG{D_i} \cdot VLG{D_i} + SLG{D_i}} \right) \cdot {p_i}(x)} \right.} \\ \left. { - 3 \cdot \left( {ELG{D_i}^3 + ELG{D_i} \cdot VLG{D_i}} \right) \cdot {p_i}^2(x) + 2 \cdot ELG{D_i}^3 \cdot {p_i}^3(x)} \right]. \\ \end{array} $$
(4.264)
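
For a concrete evaluation of (4.264), the conditional default probabilities of the Vasicek model can be plugged in; a minimal Python sketch under assumed illustrative parameter values (the function and variable names are not from the original text):

```python
import numpy as np
from scipy.stats import norm

def eta3_c(x, w, elgd, vlgd, slgd, pd, rho):
    """Third conditional central moment of the portfolio loss, cf. (4.264)."""
    w, elgd, vlgd, slgd, pd, rho = map(np.asarray, (w, elgd, vlgd, slgd, pd, rho))
    # conditional PD of the one-factor (Vasicek) model
    p_x = norm.cdf((norm.ppf(pd) - np.sqrt(rho) * x) / np.sqrt(1.0 - rho))
    term = ((elgd**3 + 3 * elgd * vlgd + slgd) * p_x
            - 3 * (elgd**3 + elgd * vlgd) * p_x**2
            + 2 * elgd**3 * p_x**3)
    return np.sum(w**3 * term)

# homogeneous example portfolio of n = 100 credits, evaluated at x = Phi^{-1}(1 - alpha)
n, alpha = 100, 0.999
print(eta3_c(norm.ppf(1 - alpha), [1 / n] * n, [0.45] * n, [0.05] * n,
             [0.0] * n, [0.02] * n, [0.12] * n))
```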

4.1.11 Difference Between the VaR Definitions

For the case of homogeneous credits and with \( LGD = 1 \), the possible realizations of losses are

$$ l \in \left\{ {0,\;\frac{1}{n},\;\frac{2}{n},\;...,\;\frac{{n - 1}}{n},\;1} \right\} $$
(4.265)

which implies

$$ \mathbb{P}\left[ {\tilde{L} \leq l} \right] = \mathbb{P}\left[ {\tilde{L} < \left( {l + {{1} \left/ {n} \right.}} \right)} \right]. $$
(4.266)

If we define \( {l_2}: = {l_1} + {{1} \left/ {n} \right.} \), we get

$$ \begin{array} {c} VaR_\alpha^{( - )}\left( {\tilde{L}} \right) = \sup \left\{ {{l_1} \in \mathbb{R}|\mathbb{P}\left[ {\tilde{L} \leq {l_1}} \right] < \alpha } \right\} \\ = \sup \left\{ {{l_1} \in \mathbb{R}|\mathbb{P}\left[ {\tilde{L} < \left( {{l_1} + \frac{1}{n}} \right)} \right] < \alpha } \right\} \\ = \sup \left\{ {\left( {{l_2} - \frac{1}{n}} \right) \in \mathbb{R}|\mathbb{P}\left[ {\tilde{L} < {l_2}} \right] < \alpha } \right\} \\ = \sup \left\{ {{l_2} \in \mathbb{R}|\mathbb{P}\left[ {\tilde{L} < {l_2}} \right] < \alpha } \right\} - \frac{1}{n} \\ = VaR_\alpha^{( + )}\left( {\tilde{L}} \right) - \frac{1}{n}. \\ \end{array} $$
(4.267)
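
The effect of the two quantile definitions can be illustrated with a simple discrete loss distribution on the grid {0, 1/n, ..., 1}; the following sketch uses a binomial stand-in for the loss distribution (illustrative only, not the conditional-independence model of the text):

```python
import numpy as np
from scipy.stats import binom

n, pd, alpha = 50, 0.02, 0.999
grid = np.arange(n + 1) / n                        # loss realizations l = k/n

cdf_leq = binom.cdf(np.arange(n + 1), n, pd)       # P[L <= k/n]
cdf_lt = np.concatenate(([0.0], cdf_leq[:-1]))     # P[L <  k/n]

var_minus = grid[cdf_leq < alpha].max()            # sup{l : P[L <= l] < alpha}
var_plus = grid[cdf_lt < alpha].max()              # sup{l : P[L <  l] < alpha}
print(var_minus, var_plus, var_plus - var_minus)   # difference equals 1/n, cf. (4.267)
```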

4.1.12 Identity of ES Within the Basel Framework

Using the result of the ASRF framework (2.93), the definition of the ES (2.19), the integral representation of the conditional expectation, and the identity of the condition as in (4.9), the ES of the portfolio loss equals

$$ \begin{array} {c} ES_\alpha^{{\text{(Basel)}}}\left( {\tilde{L}} \right) = E{S_\alpha }\left[ {\mathbb{E}\left( {\tilde{L}|\tilde{x}} \right)} \right] \\ = E{S_\alpha }\left[ {{\mu_{1,c}}\left( {\tilde{x}} \right)} \right] \\ = \frac{1}{{1 - \alpha }}\left[ {\mathbb{E}\left( {{\mu_{1,c}}\left( {\tilde{x}} \right)|{\mu_{1,c}}\left( {\tilde{x}} \right) \geq {q_\alpha }\left( {{\mu_{1,c}}\left( {\tilde{x}} \right)} \right)} \right)} \right] \\ = \frac{1}{{1 - \alpha }}\left[ {\mathbb{E}\left( {{\mu_{1,c}}\left( {\tilde{x}} \right)|\tilde{x} \leq {\Phi^{ - 1}}\left( {1 - \alpha } \right)} \right)} \right] \\ = \frac{1}{{1 - \alpha }}\int\limits_{ - \infty }^{{\Phi^{ - 1}}\left( {1 - \alpha } \right)} {{\mu_{1,c}}(x)\varphi (x)dx} . \\ \end{array} $$
(4.268)

With the conditional independence property as in (2.92), the conditional PD of the Vasicek model (2.66), the integral representation (2.126), and the symmetry of the normal distribution, the ES can be written as

$$ \begin{array} {c} ES_\alpha^{{\text{(Basel)}}}\left( {\tilde{L}} \right) = \frac{1}{{1 - \alpha }}\int\limits_{ - \infty }^{{\Phi^{ - 1}}\left( {1 - \alpha } \right)} {\sum\limits_{i = 1}^n {\mathbb{E}\left( {{w_i} \cdot {{\widetilde{{LGD}}}_i} \cdot {1_{\left\{ {{D_i}} \right\}}}|x} \right)\varphi (x)dx} } \\ = \frac{1}{{1 - \alpha }}\sum\limits_{i = 1}^n {{w_i} \cdot ELG{D_i} \cdot \int\limits_{ - \infty }^{{\Phi^{ - 1}}\left( {1 - \alpha } \right)} {{p_i}(x)\varphi (x)dx} } \\ = \frac{1}{{1 - \alpha }}\sum\limits_{i = 1}^n {{w_i} \cdot ELG{D_i} \cdot \int\limits_{ - \infty }^{{\Phi^{ - 1}}\left( {1 - \alpha } \right)} {\Phi \left( {\frac{{{\Phi^{ - 1}}(P{D_i}) - \sqrt {{{\rho_i}}} \cdot x}}{{\sqrt {{1 - {\rho_i}}} }}} \right)\varphi (x)dx} } \\ = \frac{1}{{1 - \alpha }}\sum\limits_{i = 1}^n {{w_i} \cdot ELG{D_i} \cdot {\Phi_2}\left( {{\Phi^{ - 1}}(1 - \alpha ),{\Phi^{ - 1}}(P{D_i}),\sqrt {{{\rho_i}}} } \right)} \\ = \frac{1}{{1 - \alpha }}\sum\limits_{i = 1}^n {{w_i} \cdot ELG{D_i} \cdot {\Phi_2}\left( { - {\Phi^{ - 1}}(\alpha ),{\Phi^{ - 1}}(P{D_i}),\sqrt {{{\rho_i}}} } \right)} . \\ \end{array} $$
(4.269)
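
The last line of (4.269) can be evaluated directly with the bivariate normal distribution function; a minimal numerical sketch, assuming scipy is available (names and parameter values are illustrative):

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

def es_basel(alpha, w, elgd, pd, rho):
    """ES of the ASRF portfolio loss, cf. the last line of (4.269)."""
    es = 0.0
    for w_i, elgd_i, pd_i, rho_i in zip(w, elgd, pd, rho):
        cov = [[1.0, np.sqrt(rho_i)], [np.sqrt(rho_i), 1.0]]
        # Phi_2(-Phi^{-1}(alpha), Phi^{-1}(PD_i), sqrt(rho_i))
        phi2 = multivariate_normal(mean=[0.0, 0.0], cov=cov).cdf(
            [-norm.ppf(alpha), norm.ppf(pd_i)])
        es += w_i * elgd_i * phi2
    return es / (1.0 - alpha)

print(es_basel(0.999, w=[0.5, 0.5], elgd=[0.45, 0.45], pd=[0.01, 0.02], rho=[0.12, 0.12]))
```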

4.1.13 Arbitrary Derivatives of ES

According to (2.20), the ES can be written as

$$ E{S_\alpha }\left( {\tilde{L}} \right) = \frac{1}{{1 - \alpha }}\int\limits_\alpha^1 {{q_u}\left( {\tilde{L}} \right)du} . $$
(4.270)

Thus, for continuous distributions, all derivatives of ES can be expressed as

$$ \frac{{{d^m}E{S_\alpha }}}{{d{\lambda^m}}} = \frac{{{d^m}}}{{d{\lambda^m}}}\left( {\frac{1}{{1 - \alpha }}\int\limits_\alpha^1 {{q_u}du} } \right) = \frac{1}{{1 - \alpha }}\int\limits_\alpha^1 {\frac{{{d^m}{q_u}}}{{d{\lambda^m}}}du} . $$
(4.271)
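
For a continuous loss distribution, the quantile-integral representation (4.270) coincides with the conditional tail expectation; a quick numerical cross-check with a beta-distributed stand-in (illustrative only):

```python
from scipy.stats import beta
from scipy.integrate import quad

alpha, dist = 0.95, beta(2, 5)       # stand-in continuous loss distribution

es_from_quantiles = quad(dist.ppf, alpha, 1.0)[0] / (1.0 - alpha)   # cf. (4.270)
es_from_tail = dist.expect(lb=dist.ppf(alpha)) / (1.0 - alpha)      # tail expectation above q_alpha
print(es_from_quantiles, es_from_tail)                              # the two values coincide
```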

The derivative of VaR is a function of \( {f_Y}(y) \) and \( {\mu_{i,c}}(y) \) evaluated at \( {q_u}(\tilde{Y}) \). The substitution \( u = {F_Y}(y) \), so that \( {{{du}} \left/ {{dy}} \right.} = {f_Y}(y) \), \( y(u = \alpha ) = F_Y^{ - 1}(\alpha ) = {q_\alpha }(\tilde{Y}) \), and \( y(u = 1) = F_Y^{ - 1}(1) = \infty \), leads to (see footnote 103):

$$ {\left. {\frac{{{d^m}E{S_\alpha }}}{{d{\lambda^m}}}} \right|_{\lambda = 0}} = \frac{1}{{1 - \alpha }}\int\limits_{u = \alpha }^1 {{{\left. {\frac{{{d^m}{q_u}}}{{d{\lambda^m}}}} \right|}_{\lambda = 0}}du} = \frac{1}{{1 - \alpha }}\int\limits_{y = {q_\alpha }\left( {\tilde{Y}} \right)}^\infty {{{\left. {\frac{{{d^m}{q_u}}}{{d{\lambda^m}}}} \right|}_{\lambda = 0}}{f_Y}dy}, $$
(4.272)

where the expression resulting from the derivative of VaR simply has to be evaluated at y since \( {q_u}(\tilde{Y}) = y \). Using the derivatives of VaR from (4.212), this leads to

$$ \begin{array}{ll} \left.\frac{{d^m} E{S_\alpha }}{{d{\lambda^m}}}\right|_{\lambda = 0} & = \frac{1}{{1 - \alpha }} \int\limits_{y = {q_\alpha }\left( {\tilde{Y}} \right)}^\infty{{( - 1)}^m} \left[\sum\limits_{ p \prec m, u \prec s \leq |p| - 1 } \right. {\frac{{{\alpha_p}{\alpha_{\hat{u}}}\left( {\left| p \right| + \left| u \right| - 1} \right)!}}{{\left( {s + \left| u \right|} \right)!\left( {\left| p \right| - 1 - s} \right)!}}} \\ & \cdot{{\left( { - f} \right)}^{ - \left| p \right| - \left| u \right|}} \cdot \left( {\prod\limits_{i = 1}^s {{{\left[ {\frac{{{d^i}f}}{{d{y^i}}}} \right]}^{{e_{ui}}}}} } \right)\left. { \cdot \frac{{{d^{\left| p \right| - 1 - s}}}}{{d{y^{\left| p \right| - 1 - s}}}}\left( {\prod\limits_{i = 1}^m {{{\left[ {\frac{{{d^{i - 1}}\left( {{\mu_{i,c}}f} \right)}}{{d{y^{i - 1}}}}} \right]}^{{e_{pi}}}}} } \right)} \right]f\,dy,\end{array} $$
(4.273)

with \( {\alpha_p} = \frac{{m!}}{{{{(1!)}^{{e_{p,1}}}}{e_{p,1}}! \cdot ... \cdot {{(m!)}^{{e_{p,m}}}}{e_{p,m}}!}} \).

4.1.14 Determination of the First Five Derivatives of ES

Instead of solving the integral (4.272) for each of the derivatives of VaR (4.228)–(4.232), we will directly evaluate the integral for the first five derivatives. Using the expression for the first five derivatives of VaR (4.233), we obtain

$$ \begin{array} {c} {\left. {\frac{{{d^m}ES}}{{d{\lambda^m}}}} \right|_{\lambda = 0}} = \frac{1}{{1 - \alpha }}\int\limits_{y = {q_\alpha }\left( {\tilde{Y}} \right)}^\infty {\frac{{{d^m}q}}{{d{\lambda^m}}}{f_Y}dy} \\ = \frac{1}{{1 - \alpha }}\int\limits_{y = {q_\alpha }\left( {\tilde{Y}} \right)}^\infty {{{\left( { - 1} \right)}^m}\left( { - \frac{1}{f}} \right)\left[ {\frac{{{d^{m - 1}}\left( {{\mu_{m,c}}f} \right)}}{{d{y^{m - 1}}}}} \right.} \\ - \kappa (m) \cdot \left. {\frac{d}{{dy}}\left( {\frac{1}{f} \cdot \frac{{d\left( {{\mu_{2,c}}f} \right)}}{{dy}}\frac{{{d^{m - 3}}\left( {{\mu_{m - 2,c}}f} \right)}}{{d{y^{m - 3}}}}} \right)} \right]f\,dy. \\ \end{array} $$
(4.274)

This term is equal to

$$ \begin{array} {c} {\left. {\frac{{{d^m}ES}}{{d{\lambda^m}}}} \right|_{\lambda = 0}} = {\left( { - 1} \right)^m} \cdot \frac{1}{{1 - \alpha }} \cdot \left( {\int\limits_{y = {q_\alpha }\left( {\tilde{Y}} \right)}^\infty {\left( { - \frac{{{d^{m - 1}}\left( {{\mu_{m,c}}f} \right)}}{{d{y^{m - 1}}}}} \right)dy} } \right. \\ + \left. {\kappa (m) \cdot \int\limits_{y = {q_\alpha }\left( {\tilde{Y}} \right)}^\infty {\frac{d}{{dy}}\left( {\frac{1}{f} \cdot \frac{{d\left( {{\mu_{2,c}}f} \right)}}{{dy}} \cdot \frac{{{d^{m - 3}}\left( {{\mu_{m - 2,c}}f} \right)}}{{d{y^{m - 3}}}}} \right)dy} } \right) \\ = {\left( { - 1} \right)^m} \cdot \frac{1}{{1 - \alpha }} \cdot \left( {\left[ { - \frac{{{d^{m - 2}}\left( {{\mu_{m,c}}f} \right)}}{{d{y^{m - 2}}}}} \right]_{{q_\alpha }\left( {\tilde{Y}} \right)}^\infty } \right. \\ \left. { + \kappa (m) \cdot \left[ {\frac{1}{f} \cdot \frac{{d\left( {{\mu_{2,c}}f} \right)}}{{dy}} \cdot \frac{{{d^{m - 3}}\left( {{\mu_{m - 2,c}}f} \right)}}{{d{y^{m - 3}}}}} \right]_{y = {q_\alpha }\left( {\tilde{Y}} \right)}^\infty } \right) \\ = {\left( { - 1} \right)^m} \cdot \frac{1}{{1 - \alpha }} \cdot \left( {\frac{{{d^{m - 2}}\left( {{\mu_{m,c}}f} \right)}}{{d{y^{m - 2}}}}} \right. \\ {\left. {\left. { - \kappa (m) \cdot \left[ {\frac{1}{f} \cdot \frac{{d\left( {{\mu_{2,c}}f} \right)}}{{dy}} \cdot \frac{{{d^{m - 3}}\left( {{\mu_{m - 2,c}}f} \right)}}{{d{y^{m - 3}}}}} \right]} \right)} \right|_{y = {q_\alpha }\left( {\tilde{Y}} \right)}}, \\ \end{array} $$
(4.275)

or written without abbreviations as

$$ \begin{array} {c} {\left. {\frac{{{d^m}E{S_\alpha }\left( {\tilde{Y} + \lambda \tilde{Z}} \right)}}{{d{\lambda^m}}}} \right|_{\lambda = 0}} = {\left( { - 1} \right)^m} \cdot \frac{1}{{1 - \alpha }} \cdot \left( {\frac{{{d^{m - 2}}\left( {{\mu_m}\left( {\tilde{Z}|\tilde{Y} = y} \right){f_Y}(y)} \right)}}{{d{y^{m - 2}}}}} \right. \\ - \kappa (m) \cdot \left[ {\frac{1}{{{f_Y}(y)}}} \right.{\left. {\left. {\left. { \cdot \frac{{d\left( {{\mu_2}\left( {\tilde{Z}|\tilde{Y} = y} \right){f_Y}(y)} \right)}}{{dy}} \cdot \frac{{{d^{m - 3}}\left( {{\mu_{m - 2}}\left( {\tilde{Z}|\tilde{Y} = y} \right){f_Y}(y)} \right)}}{{d{y^{m - 3}}}}} \right]} \right)} \right|_{y = {q_\alpha }\left( {\tilde{Y}} \right)}}, \\ \end{array} $$
(4.276)

with \( \kappa (1) = \kappa (2) = 0,\,\,\kappa (3) = 1,\,\,\kappa (4) = 3 \), and \( \kappa (5) = 10 \). This is the result of Wilde (2003), except that the algebraic signs in Wilde (2003) seem to be wrong.

4.1.15 ES-Based Second-Order Granularity Adjustment for a Normally Distributed Systematic Factor

The summands of the second-order granularity add-on \( \Delta {l_2} \) can be expressed as

$$ \begin{array} {c} \Delta {l_2} = \frac{1}{{6\left( {1 - \alpha } \right)}}\frac{1}{{{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}}}\frac{d}{{dx}}\left( {\frac{{{\eta_{3,c}}\varphi }}{{{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}}}} \right) \\ + \frac{1}{{8\left( {1 - \alpha } \right)}}\frac{1}{\varphi }\frac{1}{{{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}}}{\left. {{{\left[ {\frac{d}{{dx}}\left( {\frac{{{\eta_{2,c}}\varphi }}{{{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}}}} \right)} \right]}^2}} \right|_{x = {\Phi^{ - 1}}(1 - \alpha )}} \\ = :{\left. {\Delta {l_{2,1}} + \Delta {l_{2,2}}} \right|_{x = {\Phi^{ - 1}}(1 - \alpha )}}. \\ \end{array} $$
(4.277)

Using the derivative of the normal distribution (4.242), the summand \( \Delta {l_{2,1}} \) equals

$$ \begin{array} {c} \Delta {l_{2,1}} = \frac{1}{{6\left( {1 - \alpha } \right)}}\frac{1}{{{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}}}\frac{d}{{dx}}\left( {\frac{{{\eta_{3,c}}\varphi }}{{{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}}}} \right) \\ = \frac{1}{{6\left( {1 - \alpha } \right)}}\frac{1}{{{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}}}\left[ {\frac{d}{{dx}}\left( {{\eta_{3,c}}\varphi } \right)\frac{1}{{{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}}} + {\eta_{3,c}}\varphi \frac{d}{{dx}}\left( {\frac{1}{{{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}}}} \right)} \right] \\ = \frac{1}{{6\left( {1 - \alpha } \right)}}\frac{1}{{{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}}}\left[ {\left( {\frac{{d{\eta_{3,c}}}}{{dx}}\varphi + {\eta_{3,c}}\frac{{d\varphi }}{{dx}}} \right)\frac{1}{{{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}}} - {\eta_{3,c}}\varphi \frac{{{{{{d^2}{\mu_{1,c}}}} \left/ {{d{x^2}}} \right.}}}{{{{\left( {{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}} \right)}^2}}}} \right] \\ = \frac{1}{{6\left( {1 - \alpha } \right)}}\frac{\varphi }{{{{\left( {{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}} \right)}^2}}}\left[ {\frac{{d{\eta_{3,c}}}}{{dx}} - {\eta_{3,c}}\left( {x - \frac{{{{{{d^2}{\mu_{1,c}}}} \left/ {{d{x^2}}} \right.}}}{{{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}}}} \right)} \right]. \\ \end{array} $$
(4.278)

Using the same transformations, the summand \( \Delta {l_{2,2}} \) is equivalent to

$$ \begin{array} {c} \Delta {l_{2,2}} = \frac{1}{{8\left( {1 - \alpha } \right)}}\frac{1}{\varphi }\frac{1}{{{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}}}{\left[ {\frac{d}{{dx}}\left( {\frac{{{\eta_{2,c}}\varphi }}{{{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}}}} \right)} \right]^2} \\ = \frac{1}{{8\left( {1 - \alpha } \right)}}\frac{1}{\varphi }\frac{1}{{{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}}}{\left[ {\frac{1}{{{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}}}\left[ {\frac{{d{\eta_{2,c}}}}{{dx}}\varphi - {\eta_{2,c}}\varphi \left( {x - \frac{{{{{{d^2}{\mu_{1,c}}}} \left/ {{d{x^2}}} \right.}}}{{{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}}}} \right)} \right]} \right]^2} \\ = \frac{1}{{8\left( {1 - \alpha } \right)}}\frac{\varphi }{{{{\left( {{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}} \right)}^3}}}{\left[ {\frac{{d{\eta_{2,c}}}}{{dx}} - {\eta_{2,c}}\left( {x - \frac{{{{{{d^2}{\mu_{1,c}}}} \left/ {{d{x^2}}} \right.}}}{{{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}}}} \right)} \right]^2}, \\ \end{array} $$
(4.279)

leading to a second-order adjustment of

$$ \begin{array} {c} \Delta {l_2} = \frac{1}{{6\left( {1 - \alpha } \right)}}\frac{\varphi }{{{{\left( {{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}} \right)}^2}}}\left[ {\frac{{d{\eta_{3,c}}}}{{dx}} - {\eta_{3,c}}\left( {x - \frac{{{{{{d^2}{\mu_{1,c}}}} \left/ {{d{x^2}}} \right.}}}{{{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}}}} \right)} \right] \\ + \frac{1}{{8\left( {1 - \alpha } \right)}}\frac{\varphi }{{{{\left( {{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}} \right)}^3}}}{\left. {{{\left[ {\frac{{d{\eta_{2,c}}}}{{dx}} - {\eta_{2,c}}\left( {x - \frac{{{{{{d^2}{\mu_{1,c}}}} \left/ {{d{x^2}}} \right.}}}{{{{{d{\mu_{1,c}}}} \left/ {{dx}} \right.}}}} \right)} \right]}^2}} \right|_{x = {\Phi^{ - 1}}(1 - \alpha )}}. \\ \end{array} $$
(4.280)

4.1.16 Probability Density Function of the Logit-Normal Distribution

The derivation of the density function is based on the inverse function theorem (see footnote 104)

$$ {f_Y}(y) = {f_X}\left( {{g^{ - 1}}(y)} \right) \cdot \left| {\frac{{d{g^{ - 1}}(y)}}{{dy}}} \right|. $$
(4.281)

For the logit function \( \tilde{Y} = {{{{e^{\tilde{X}}}}} \left/ {{(1 + {e^{\tilde{X}}})}} \right.} \), we have

$$ \begin{array} {c} g(x) = y = \frac{{{e^x}}}{{1 + {e^x}}} = \frac{1}{{{e^{ - x}} + 1}} \\ \Leftrightarrow \\ {e^{ - x}} = \frac{1}{y} - 1 \\ \Leftrightarrow \\ {g^{ - 1}}(y) = x = - \ln \left( {\frac{1}{y} - 1} \right) \\ \end{array} $$
(4.282)

and

$$ \frac{{d{g^{ - 1}}(y)}}{{dy}} = \frac{d}{{dy}}\left( { - \ln \left( {\frac{1}{y} - 1} \right)} \right) = - \frac{1}{{\frac{1}{y} - 1}} \cdot \left( { - \frac{1}{{{y^2}}}} \right) = \frac{1}{{y\left( {1 - y} \right)}}. $$
(4.283)

Using the density of a normal distribution (4.82) for \( {f_X} \) and recognizing that y is bounded in the interval [0, 1], we get

$$ {f_Y}(y) = {f_X}\left( {\ln \frac{y}{{1 - y}}} \right) \cdot \frac{1}{{y\left( {1 - y} \right)}} = \frac{1}{{\sqrt {{2\pi }} \,\sigma }}\exp \left( { - \frac{{{{\left( {\ln \left( {{{y} \left/ {{(1 - y)}} \right.}} \right) - \mu } \right)}^2}}}{{2{\sigma ^2}}}} \right) \cdot \frac{1}{{y\left( {1 - y} \right)}}, \quad y \in (0,1). $$
(4.284)
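
As a sanity check, the resulting logit-normal density can be verified to integrate to one; a short numerical sketch with assumed illustrative parameters mu and sigma of the underlying normal distribution:

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

mu, sigma = -1.0, 0.8    # illustrative parameters of the underlying normal f_X

def f_logitnormal(y):
    # f_Y(y) = f_X(ln(y/(1-y))) / (y*(1-y)), cf. (4.281)-(4.284)
    return norm.pdf(np.log(y / (1.0 - y)), loc=mu, scale=sigma) / (y * (1.0 - y))

print(quad(f_logitnormal, 0.0, 1.0)[0])   # close to 1, i.e. a proper density on (0, 1)
```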
