The distribution and percentiles of channel capacity for multiple arrays

Abstract

We show that the channel capacity of N-transmitter, M-receiver antenna systems is approximately normal in both Rayleigh-fading and Ricean environments, whether or not the antennas are correlated. We give the distribution and percentiles of capacity as a power series in \((MN)^{-1/2}\) when M or M/N is fixed, both for the case of fixed total power transmitted and for the case where the total power transmitted increases with N.

References

  1. Yan Q N and Yue D W 2009 Matrix variate distributions and MIMO channel capacity. In: Recent Advance in Statistics Application and Related Areas, pp. 386–394

  2. Trigui I, Laourine A, Affes S and Stephenne A 2012 The inverse Gaussian distribution in wireless channels: Second-order statistics and channel capacity. IEEE Transactions on Communications 60: 3167–3173

  3. Withers C S and Nadarajah S 2012 The distribution of Foschini's lower bound for channel capacity. Advances in Applied Probability 44: 260–269

  4. Dadamahalleh K A and Hodtani G A 2013 A general upper bound for FSO channel capacity with input-dependent Gaussian noise and the corresponding optimal input distribution. In: Proceedings of the 2013 IEEE International Symposium on Information Theory, pp. 1700–1704

  5. Sousa I, Queluz M P and Rodrigues A 2013 MIMO channel capacity spatial distribution in a microcell environment. In: Proceedings of the 2013 IEEE Wireless Communications and Networking Conference, pp. 3197–3202

  6. Zhang L, Wu Y Y, Li W, Kim H M, Park S I, Angueira P, Montalban J and Velez M 2014 Channel capacity distribution of layer-division-multiplexing system for next generation digital broadcasting transmission. In: Proceedings of the 2014 IEEE International Symposium on Broadband Multimedia Systems and Broadcasting

  7. Foschini G J 1996 Layered space-time architecture for wireless communication in a fading environment when using multi-element antennas. Bell Labs Technical Journal, pp. 41–59

  8. Foschini G J and Gans M J 1998 On limits of wireless communications in a fading environment when using multiple antennas. Wireless Personal Communications 6: 311–335

  9. Winters J H 1987 On the capacity of radio communication systems with diversity in a Rayleigh fading environment. IEEE Journal on Selected Areas in Communications SAC-5: 871–878

  10. Withers C S and Nadarajah S 2011 Reciprocity for MIMO systems. European Transactions on Telecommunications 22: 276–281

  11. Billingsley P 1968 Convergence of probability measures. New York: John Wiley and Sons

  12. Driessen P F and Foschini G J 1999 On the capacity formula for multiple input multiple output wireless channels: a geometric interpretation. IEEE Transactions on Communications 47: 173–176

  13. Telatar I E 1999 Capacity of multi-antenna Gaussian channels. European Transactions on Telecommunications 10: 585–595

  14. Marzetta T L and Hochwald B M 1999 Capacity of a mobile multiple-antenna communication link in Rayleigh flat fading. http://mars.bell-labs.com/

  15. Withers C S 1984 Asymptotic expansions for distributions and quantiles with power series cumulants. Journal of the Royal Statistical Society B 46: 389–396

  16. Withers C S 2000 A simple expression for the multivariate Hermite polynomials. Statistics and Probability Letters 47: 165–169

  17. Withers C S 1982 The distribution and quantiles of a function of parameter estimates. Annals of the Institute of Statistical Mathematics A 34: 55–68

  18. Withers C S 1987 Bias reduction by Taylor series. Communications in Statistics—Theory and Methods 16: 2369–2383

  19. Henderson H V and Searle S R 1981 On deriving the inverse of a sum of matrices. SIAM Review 23: 53–60

  20. Wooding R A 1956 The multivariate distribution of complex normal variables. Biometrika 43: 212–215

  21. Turin G L 1960 The characteristic function of Hermitian quadratic forms in complex normal variables. Biometrika 47: 199–201

  22. Goodman N R 1963 Statistical analysis based on a certain multivariate complex Gaussian distribution (an introduction). Annals of Mathematical Statistics 34: 152–177

  23. Reed I S 1962 On a moment theorem for complex Gaussian processes. IRE Transactions on Information Theory IT-8: 194–195

  24. Maiwald D and Kraus D 2000 Calculation of moments of complex Wishart and complex inverse Wishart distributed matrices. IEE Proceedings Radar, Sonar and Navigation 147: 162–168

  25. Comtet L 1974 Advanced combinatorics. Dordrecht: Reidel

Corresponding author

Correspondence to Saralees Nadarajah.

Appendices

Appendix A

Here, we give some results for the complex normal, introduced by Wooding [20], and for the central and non-central complex Wishart. We write

$$\begin{aligned} \displaystyle \mathbf{Z} = \mathbf{X} + j\mathbf{Y} \sim \mathcal{CN}_M \left( \mathbf{0}, \mathbf{V} \right) \end{aligned}$$

if

$$\begin{aligned} \begin{pmatrix} \mathbf{X} \\ \mathbf{Y} \end{pmatrix} \sim \mathcal{N}_{2M} \left( \mathbf{0}, \begin{pmatrix} \mathbf{A} & \mathbf{B} \\ -\mathbf{B} & \mathbf{A} \end{pmatrix} \right) \end{aligned}$$

with \(\mathbf{A}\) and \(\mathbf{B}\) both \(M \times M\). This is sometimes referred to as the circular complex normal to distinguish it from the case of arbitrary cov\(\left( {\mathbf{X} \atopwithdelims ()\mathbf{Y}} \right) \). So,

$$\begin{aligned}&\mathbf{V} = E\mathbf{Z} \mathbf{Z}^+ = 2 (\mathbf{A} - j\mathbf{B}), \\&\det \begin{pmatrix} \mathbf{A} & \mathbf{B} \\ -\mathbf{B} & \mathbf{A} \end{pmatrix} = {\det }^2 \left( \mathbf{A} - j\mathbf{B}\right) = 2^{-2M} {\det }^2 \mathbf{V}, \end{aligned}$$

and \(\mathbf{Z}\) has density

$$\begin{aligned} \displaystyle f_\mathbf{Z} (\mathbf{z}) = \pi ^{-M} \left( \det \mathbf{V}\right) ^{-1} \exp \left( -\mathbf{z}^+ \mathbf{V}^{-1} \mathbf{z} \right) \end{aligned}$$

for \(\mathbf{z}\) in \(\mathcal {C}^M\). Also for \(\mathbf{r}\), \(\mathbf{s}\) in \(\mathcal {C}^M\) with transposes \(\mathbf{r}^T\), \(\mathbf{s}^T\), if \(\mathbf{t} = \mathbf{r}+j\mathbf{s}\), \(T = \mathbf{r}^T \mathbf{X} + \mathbf{s}^T \mathbf{Y} = \left( \mathbf{t}^+ \mathbf{Z} + \mathbf{Z}^+ \mathbf{t} \right) / 2\) then

$$\begin{aligned} \displaystyle E\exp {T} = \exp \left( \mathbf{t}^+ \mathbf{V} \mathbf{t}/4\right) \end{aligned}$$

and so

$$\begin{aligned} \displaystyle E \exp \left( \mathbf{t}^+ \mathbf{Z} + \mathbf{Z}^+ \mathbf{t} \right) = \exp \left( \mathbf{t}^+ \mathbf{Vt} \right) . \end{aligned}$$
(A.1)
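
As a quick numerical illustration of this definition (an added sketch, not part of the original derivation), the following NumPy snippet draws samples of \(\mathbf{Z}\) from the real \(2M\)-dimensional representation above and checks that \(E\mathbf{Z}\mathbf{Z}^+ \approx \mathbf{V}\) and \(E\mathbf{Z}\mathbf{Z}^T \approx \mathbf{0}\) (circularity). The test covariance, sample size and seed are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
M, n = 3, 200_000

# Arbitrary Hermitian positive-definite test covariance V.
G = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
V = G @ G.conj().T / M + np.eye(M)

# Real 2M-dimensional representation: cov([X; Y]) = [[A, B], [-B, A]],
# with A = Re(V)/2 and B = -Im(V)/2, so that V = 2(A - jB).
A, B = V.real / 2, -V.imag / 2
cov = np.block([[A, B], [-B, A]])

XY = rng.multivariate_normal(np.zeros(2 * M), cov, size=n)
Z = XY[:, :M] + 1j * XY[:, M:]

print(np.abs(Z.T @ Z.conj() / n - V).max())  # E ZZ^+ = V: difference should be small
print(np.abs(Z.T @ Z / n).max())             # E ZZ^T = 0: circularity
```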

We write \(\mathbf{Z} + {{\varvec{\mu }}} \sim \mathcal{CN}_M \left( {{\varvec{\mu }}}, \mathbf{V} \right) \). Turin [21] showed that for \(\mathbf{Q}^+ = \mathbf{Q}\) in \(\mathcal{C}^{M \times M}\) and t in \(\mathcal C\),

$$\begin{aligned}&\displaystyle E \exp \left\{ \left( \mathbf{Z} + {{\varvec{\mu }}} \right) ^+ \mathbf{Q} \left( \mathbf{Z} + {{\varvec{\mu }}}\right) t \right\} \\&\quad = \det \left( \mathbf{I}_M - t \mathbf{QV} \right) ^{-1} \exp (-\gamma ), \end{aligned}$$

where

$$\begin{aligned} \gamma = {{\varvec{\mu }}}^+ \mathbf{V}^{-1} \left[ \mathbf{I} - \left( \mathbf{I} - t \mathbf{VQ} \right) ^{-1} \right] {{\varvec{\mu }}} = t {{\varvec{\mu }}}^+ \left( \mathbf{Q}^{-1} - t \mathbf{V} \right) ^{-1} {{\varvec{\mu }}}. \end{aligned}$$

Although not stated, this requires that \(\lambda _1 \mathrm{Re} (t) <1\), where \(\lambda _1\) is the maximum eigenvalue of \(\mathbf{V}^{1/2} \mathbf{Q} \mathbf{V}^{1/2}\).
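
As a sanity check of Turin's formula (an illustration added here, with hypothetical test matrices), the Monte Carlo sketch below uses the first expression for \(\gamma \) above; \(\mathbf{Q}\) is taken positive semi-definite and \(t<0\) so that the expectation clearly exists.

```python
import numpy as np

rng = np.random.default_rng(1)
M, n, t = 2, 400_000, -0.3                     # t < 0 and Q >= 0 keep E exp(t q) finite

G = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
V = G @ G.conj().T / M + np.eye(M)             # Hermitian positive-definite V
H = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
Q = H @ H.conj().T / M                         # Hermitian positive semi-definite Q
mu = rng.standard_normal(M) + 1j * rng.standard_normal(M)

# Z ~ CN_M(0, V): each row is L w with w having i.i.d. CN(0, 1) entries.
L = np.linalg.cholesky(V)
w = (rng.standard_normal((n, M)) + 1j * rng.standard_normal((n, M))) / np.sqrt(2)
Z = w @ L.T

q = np.einsum('ni,ij,nj->n', (Z + mu).conj(), Q, Z + mu).real
lhs = np.exp(t * q).mean()

I = np.eye(M)
gamma = (mu.conj() @ np.linalg.inv(V) @ (I - np.linalg.inv(I - t * V @ Q)) @ mu).real
rhs = np.exp(-gamma) / np.linalg.det(I - t * Q @ V).real
print(lhs, rhs)                                # should agree to Monte Carlo accuracy
```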

For \(\mathbf{Z}_1, \ldots , \mathbf{Z}_N\) independent \(\mathcal{CN}_M \left( \mathbf{0}, \mathbf{V} \right) \) the (central) complex Wishart is defined as

$$\begin{aligned} \displaystyle \mathbf{W}_N = \sum ^N_{n=1} \mathbf{Z}_n \mathbf{Z}_n^+. \end{aligned}$$
(A.2)
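
A direct way to simulate \(\mathbf{W}_N\) (a sketch with arbitrary test values, not taken from the paper) and to confirm the immediate consequence \(E\mathbf{W}_N = N\mathbf{V}\):

```python
import numpy as np

rng = np.random.default_rng(2)
M, N, n = 3, 5, 50_000

G = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
V = G @ G.conj().T / M + np.eye(M)              # arbitrary Hermitian positive-definite V
L = np.linalg.cholesky(V)

# n independent draws of W_N = sum_{k=1}^N Z_k Z_k^+ with Z_k ~ CN_M(0, V).
Z = (rng.standard_normal((n, N, M)) + 1j * rng.standard_normal((n, N, M))) / np.sqrt(2) @ L.T
W = np.einsum('kna,knb->kab', Z, Z.conj())

print(np.abs(W.mean(axis=0) - N * V).max())     # E W_N = N V, so this should be small
```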

Goodman [22] proved

Theorem A.1

Suppose \(\mathbf{T}^+ = \mathbf{T}\) lies in \(\mathcal{C}^{M \times M}\). Then

$$\begin{aligned}&\displaystyle E \exp \hbox {trace} \left( \mathbf{T} \mathbf{W}_N \right) \nonumber \\&\quad = \left\{ \det \left( \mathbf{I} - \mathbf{T} \mathbf{V}^{-1} \right) \right\} ^{-N}. \end{aligned}$$
(A.3)

Again, the condition \(\lambda _1 < 1\) is implicit, where now \(\lambda _1\) is the maximum eigenvalue of \(\mathbf{V}^{-1/2} \mathbf{T} \mathbf{V}^{-1/2}\). He also gave parallels to real theory for complex multiple coherence, correlation and conditional coherence. His Section 6 considers the case of \(\mathbf{X}(t) : \mathcal{R} \rightarrow \mathcal{R}^M\), a stationary Gaussian process with mean zero; define its Fourier transform \(\mathbf{Z} (\omega ) : \mathcal{R} \rightarrow \mathcal{C}^M\) by

$$\begin{aligned} \displaystyle \mathbf{X} (t) = \int \exp \left( j \omega t \right) d\mathbf{Z}(\omega ). \end{aligned}$$

For any given \(\omega \), \(\mathbf{Z}(\omega ) \sim \mathcal{CN} \left( \mathbf{0}, \mathbf{V}(\omega ) \right) \) for a certain \(\mathbf{V}(\omega )\). For \(\omega _1 \ne \omega _2\), \(\mathbf{Z} \left( \omega _1 \right) \) and \(\mathbf{Z} \left( \omega _2 \right) \) are independent.

For z a scalar, set \(\overline{z} = z^+\), the complex conjugate. From a version of (A.1), Reed [23] (using \(\mathbf{H} = \mathbf{V}^T / 2\)) proved

Theorem A.2

We have

$$\begin{aligned}&\displaystyle E Z_{a_1} \cdots Z_{a_r} \overline{Z}_{b_1} \cdots \overline{Z}_{b_s} \nonumber \\&\quad = I(r = s) \sum ^{r!}_{p_1 \cdots p_r} V_{a_1 p_1} \cdots V_{a_r p_r} \end{aligned}$$
(A.4)

summed over all r! permutations \(p_1 \cdots p_r\) of \(b_1 \cdots b_r\).

He noted as a corollary that

$$\begin{aligned}&\displaystyle E|Z|^{2n} = n! \left( E|Z|^2\right) ^n,\\&\displaystyle E \left( Z_1 \overline{Z}_2 \right) ^n = n! \left( EZ_1 \overline{Z}_2\right) ^n, \\&\displaystyle E Z_1 Z_2 \overline{Z}_3 \overline{Z}_4 = \sum ^2_{34} E Z_1 \overline{Z}_3 EZ_2 \overline{Z}_4. \end{aligned}$$

Setting

$$\begin{aligned} \displaystyle v_{ij} = V_{a_i b_j}, \ \displaystyle m_{1 \cdots r} = EZ_{a_1} \cdots Z_{a_r} \overline{Z}_{b_1} \cdots \overline{Z}_{b_r}, \end{aligned}$$

(A.4) gives

$$\begin{aligned} m_1&= v_{11}, \quad m_{12} = v_{11} v_{22} + v_{12} v_{21}, \\ m_{123}&= v_{11} v_{22} v_{33} + \sum ^3_{123} v_{11} v_{23} v_{32} + \sum ^2_{23} v_{12} v_{23} v_{31}, \end{aligned}$$

and so on. Note that, when \(\mathbf{V} = \mathbf{I}_M\), \(|\mathbf{Z}|^2 = \chi ^2_{2M} /2 = G_M\), where \(G_M\) is a gamma random variable with mean M and density \(x^{M-1} \exp (-x) / (M-1)!\) on \((0, \infty )\).
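
For a single circularly symmetric complex Gaussian component Z, the first of these corollaries is easy to check numerically; the sketch below (added here, with a hypothetical value of \(E|Z|^2\)) compares Monte Carlo estimates of \(E|Z|^{2n}\) with \(n!\,(E|Z|^2)^n\).

```python
import numpy as np
from math import factorial

rng = np.random.default_rng(3)
v = 2.0                                    # hypothetical value of E|Z|^2
n_samples = 500_000
Z = np.sqrt(v / 2) * (rng.standard_normal(n_samples) + 1j * rng.standard_normal(n_samples))

for k in (1, 2, 3, 4):
    mc = np.mean(np.abs(Z) ** (2 * k))     # Monte Carlo estimate of E|Z|^{2k}
    exact = factorial(k) * v ** k          # Reed's corollary: k! (E|Z|^2)^k
    print(k, round(mc, 3), exact)
```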

From (A.3) with \(N=1\) one can prove

Theorem A.3

We have

$$\begin{aligned}&\displaystyle \kappa \left( Z_{a_1} \overline{Z}_{b_1}, \ldots , Z_{a_r} \overline{Z}_{b_r} \right) \nonumber \\&\quad = \sum ^{(r-1)!}_{C \left( p_1 \cdots p_r\right) } V_{a_1 p_1} \cdots V_{a_r p_r} \end{aligned}$$
(A.5)

summed over all \((r-1)!\) permutations \(p_1 \cdots p_r\) of \(b_1 \cdots b_r\) giving connected expressions. (By connected we mean that the permutation does not break \(1 \cdots r\) into two or more groups. For example, \(V_{a_1 b_1} V_{a_2 b_2}\) breaks 12 into 1 and 2; \(V_{a_1 b_2} V_{a_2 b_1} V_{a_3 b_4} V_{a_4 b_3}\) breaks 1234 into 12 and 34.)

Here, the joint cumulants for complex random variables \(U_1, U_2, \ldots \) are defined as for real random variables. For example, \(\kappa \left( U_1, U_2 \right) = EU_1 U_2 - EU_1 EU_2\) not \(EU_1 \overline{U}_2 - EU_1 E\overline{U}_2\). Setting \(k_{1 \cdots r}\) equal to the left hand side of (A.5), this gives

$$\begin{aligned} k_1&= v_{11}, \quad k_{12} = v_{12} v_{21}, \\ k_{123}&= v_{12} v_{23} v_{31} + v_{13} v_{32} v_{21}, \\ k_{1234}&= v_{12} \left( v_{23} v_{34} v_{41} + v_{24} v_{43} v_{31}\right) + v_{13} \left( v_{34} v_{42} v_{21} + v_{32} v_{24} v_{41}\right) + v_{14} \left( v_{43} v_{32} v_{21} + v_{42} v_{23} v_{31}\right) . \end{aligned}$$
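
A small numerical check of the \(r=2\) case (an illustration added here, with arbitrary test indices and covariance): by (A.5), \(\kappa \left( Z_{a_1}\overline{Z}_{b_1}, Z_{a_2}\overline{Z}_{b_2}\right) = V_{a_1 b_2}V_{a_2 b_1}\).

```python
import numpy as np

rng = np.random.default_rng(4)
M, n = 3, 500_000

G = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
V = G @ G.conj().T / M + np.eye(M)               # arbitrary test covariance
L = np.linalg.cholesky(V)
Z = (rng.standard_normal((n, M)) + 1j * rng.standard_normal((n, M))) / np.sqrt(2) @ L.T

a1, b1, a2, b2 = 0, 1, 2, 0                      # arbitrary index choices
x1 = Z[:, a1] * Z[:, b1].conj()
x2 = Z[:, a2] * Z[:, b2].conj()
k12 = (x1 * x2).mean() - x1.mean() * x2.mean()   # kappa(x1, x2), no conjugation
print(k12, V[a1, b2] * V[a2, b1])                # k_12 = v_12 v_21
```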

We now give a ‘brute force’ extension of Theorem A.2 from \(\mathbf{Z}\) to \(\mathbf{H} = {{\varvec{\mu }}} + \mathbf{Z}\), giving the cumulants of \(\mathbf{X} = \mathbf{H} \mathbf{H}^+\).

Theorem A.4

Set

$$\begin{aligned} \displaystyle x_i = X_{a_i b_i} = H_{a_i} \overline{H}_{b_i} =\sum ^2_{j=0} x_{ij} \end{aligned}$$

for

$$\begin{aligned} \displaystyle x_{i0}= & {} \mu _{a_i} \overline{\mu }_{b_i}, \\ \displaystyle x_{i1}= & {} Z_{a_i} \overline{\mu }_{b_i} + \mu _{a_i} \overline{Z}_{b_i}, \\ \displaystyle x_{i2}= & {} Z_{a_i} \overline{Z}_{b_i}. \end{aligned}$$

Then

$$\begin{aligned}&\displaystyle Ex_1 = x_{10}+v_{11}, \nonumber \\&\displaystyle \kappa \left( x_1, \ldots , x_r \right) = \sum ^{2r}_{j=r} K_{rj} \end{aligned}$$
(A.6)

for \(r \ge 2\), where

$$\begin{aligned} \small {K_{rj} = \left\{ \begin{array}{ll} \displaystyle \sum _{i_1 + \cdots + i_r = j} \kappa \left( x_{1i_1}, \ldots , x_{ri_r}\right) , &{} \hbox {if }j\hbox { is even,}\\ \displaystyle 0, &{} \hbox {if }j\hbox { is odd} \end{array} \right. } \end{aligned}$$

follows by (A.4) with \(r \ne s\). So,

$$\begin{aligned} \displaystyle K_{r, 2r} = \kappa \left( x_{12}, \ldots , x_{r2} \right) , \end{aligned}$$

given by (A.5), and

$$\begin{aligned}&\displaystyle K_{r,r} = \kappa \left( x_{11}, \ldots , x_{r1}\right) , \\&\displaystyle K_{r, r+1} = \sum ^r_{1 \cdots r} \kappa \left( x_{{p_1}2}, x_{{p_2}1}, x_{{p_3}1}, \ldots , x_{{p_r}1} \right) \end{aligned}$$

summed over all r permutations \(p_1 \cdots p_r\) of \(1 \cdots r\) giving distinct terms and so on for \(K_{r, r+2}, \ldots , K_{r, 2r-1}\). In particular,

$$\begin{aligned}&\displaystyle K_{22} = \sum ^2_{12} \mu _{a_1} \overline{\mu }_{b_2} v_{21}, \end{aligned}$$
(A.7)
$$\begin{aligned}&\displaystyle K_{34} = \sum ^6_{123} \mu _{a_1} \overline{\mu }_{b_2} T_{123}, \end{aligned}$$
(A.8)
$$\begin{aligned}&\displaystyle K_{44} = \sum ^6 \overline{\mu }_{b_1} \overline{\mu }_{b_2} \mu _{a_3} \mu _{a_4} T_{1234} \end{aligned}$$
(A.9)

for \(T_{123} = \kappa \left( \overline{Z}_{b_1}, Z_{a_2}, Z_{a_3} \overline{Z}_{b_3} \right) = v_{23} v_{31}\) and \(T_{1234} = \kappa \left( Z_{a_1}, Z_{a_2}, \overline{Z}_{b_3}, \overline{Z}_{b_4} \right) = 0\) by (A.5). Also,

$$\begin{aligned} \displaystyle K_{46} = \sum ^6_{1122} \sum ^2_{12} \mu _{a_1} \overline{\mu }_{b_2} \sum ^2_{34} v_{23} v_{34} v_{41}. \end{aligned}$$
(A.10)

This is enough to give \(\kappa \left( H_{a_1} \overline{H}_{b_1}, \ldots , H_{a_r} \overline{H}_{b_r} \right) \) for \(1 \le r \le 4\). Other values may be derived from (A.6) similarly.

Proof

Note that (A.7)–(A.9) follow from (A.4) and (A.5). To prove (A.10), note that

$$\begin{aligned} K_{46}&= k \left( 1^2 2^2\right) + k (1212) + k (1221) + k(2112) + k(2121) + k \left( 2^2 1^2\right) \\&= \sum ^6_{1122} k\left( 1^2 2^2\right) \end{aligned}$$

say, for \(k \left( i_1 \cdots i_r \right) = \kappa \left( x_{1 i_1}, \ldots , x_{ri_r} \right) \). So,

$$\begin{aligned} k \left( 1^22^2\right) = \kappa \left( x_{11}, x_{21}, x_{32}, x_{42}\right) = \sum _{12}^2 \mu _{a_1}\overline{\mu }_{b_2}\kappa _{1234} \end{aligned}$$

for

$$\begin{aligned} \kappa _{1234} = \kappa \left( \overline{Z}_{b_1}, Z_{a_2}, Z_{a_3} \overline{Z}_{b_3}, Z_{a_4} \overline{Z}_{b_4} \right) = \mu _{1234} - \sum ^3 \mu _{12} \mu _{34} \end{aligned}$$

say,

$$\begin{aligned} \mu _{1234}&= E \left( Z_{a_2} \overline{Z}_{b_1} - v_{21} + v_{21} \right) \left( Z_{a_3} \overline{Z}_{b_3} - v_{33} \right) \left( Z_{a_4} \overline{Z}_{b_4} - v_{44} \right) \\&= \sum ^2_{34} v_{23} v_{34} v_{41} + v_{21} v_{34} v_{43} \end{aligned}$$

by (A.5), and \(\mu _{12} = v_{21}\), \(\mu _{34} = v_{34}v_{43}\), \(\mu _{13} = \mu _{14} = 0\). So,

$$\begin{aligned} \displaystyle \kappa _{1234} = \sum ^2_{34} v_{23} v_{34} v_{41}, \end{aligned}$$

so that (A.10) holds. \(\square \)
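
To illustrate Theorem A.4 numerically (an added sketch with arbitrary test values): for \(r=2\), (A.6) reduces to \(\kappa (x_1, x_2) = K_{22} + K_{24}\) with \(K_{23} = 0\), \(K_{22}\) given by (A.7) and \(K_{24} = v_{12}v_{21}\).

```python
import numpy as np

rng = np.random.default_rng(5)
M, n = 3, 500_000

G = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
V = G @ G.conj().T / M + np.eye(M)                # arbitrary test covariance
mu = rng.standard_normal(M) + 1j * rng.standard_normal(M)
L = np.linalg.cholesky(V)
H = mu + (rng.standard_normal((n, M)) + 1j * rng.standard_normal((n, M))) / np.sqrt(2) @ L.T

a1, b1, a2, b2 = 0, 1, 2, 0                       # arbitrary index choices
x1 = H[:, a1] * H[:, b1].conj()
x2 = H[:, a2] * H[:, b2].conj()
mc = (x1 * x2).mean() - x1.mean() * x2.mean()     # kappa(x1, x2)

K22 = mu[a1] * mu[b2].conj() * V[a2, b1] + mu[a2] * mu[b1].conj() * V[a1, b2]  # (A.7)
K24 = V[a1, b2] * V[a2, b1]                       # (A.5) applied to x_{12}, x_{22}
print(mc, K22 + K24)                              # should agree to Monte Carlo accuracy
```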

Maiwald and Kraus [24] have obtained the first four non-central moments of \(\mathbf{W}_N / N\) for \(\mathbf{W}_N\) the complex Wishart of (A.2) by differentiating (A.3). We now give a simple method which gives the central and non-central moments of the Wishart up to 12th order.

Theorem A.5

Suppose

$$\begin{aligned} \displaystyle S = \sum ^N_{n=1} X_n, \end{aligned}$$

where the \(X_n\) are independently distributed as X in \(\mathcal C\) with cumulants \(\left\{ \kappa _r \right\} \) defined by

$$\begin{aligned} \displaystyle \ln E \exp (tX) = \sum ^\infty _{r=1} t^r \kappa _r / r! = K(t) \end{aligned}$$
(A.11)

say for t in \(\mathcal{C}\). Then the non-central moments of S are

$$\begin{aligned} \displaystyle ES^r = \sum ^r_{i=1} N^i B_{ri} \end{aligned}$$
(A.12)

for \(r \ge 1\), where

$$\begin{aligned} \displaystyle K(t)^i / i! = \sum ^\infty _{r=i} B_{ri} t^r / r!, \end{aligned}$$
(A.13)

where \(B_{ri} = B_{ri} \left( {{\varvec{\kappa }}} \right) \) are the exponential Bell polynomials in \({{\varvec{\kappa }}} = \left( \kappa _1, \kappa _2, \ldots \right) \), tabled on page 307 of Comtet [25] up to \(r=12\). Putting \(\kappa _1 = 0\) gives the central moment

$$\begin{aligned} \displaystyle \mu _r (S) = E \left( S - ES\right) ^r = \sum ^r_{1\le i \le r/2} N^i B_{ri0}, \end{aligned}$$

where \(B_{ri0} = B_{ri}|_{\kappa _1 = 0}\) since \(B_{ri0} = 0\) for \(i > r/2\). For example,

$$\begin{aligned} \displaystyle ES^4 = \sum ^4_{i=1} N^i B_{4i}, \ \displaystyle \mu _4 (S) = \sum ^2_{i=1} N^i B_{4i0}, \end{aligned}$$

where \(B_{41} = \kappa _4\), \(B_{42} = 4 \kappa _1 \kappa _3 + 3 \kappa _2^2\), \(B_{43} = 6 \kappa _1^2 \kappa _2\), \(B_{44} = \kappa _1^4\), and \(B_{410} = \kappa _4, B_{420} = 3 \kappa _2^2\). For \(\mathbf{X}\) in \(\mathcal{C}^p\) these become

$$\begin{aligned} \displaystyle E S_{a_1} \cdots S_{a_r} = \sum ^r_{i=1} N^i B_i^{a_1 \cdots a_r}, \end{aligned}$$

and

$$\begin{aligned} \mu \left( S_{a_1}, \ldots , S_{a_r} \right) = E \left( S_{a_1} - ES_{a_1} \right) \cdots \left( S_{a_r} - ES_{a_r} \right) = \sum _{1 \le i \le r/2} N^i B_{i0}^{a_1 \cdots a_r}, \end{aligned}$$

where \(B_i^{a_1 \cdots a_r}\) and \(B_{i0}^{a_1 \cdots a_r}\) can be written down immediately from \(B_{ri}\) and \(B_{ri0}\). For example,

$$\begin{aligned} \displaystyle ES_{a_1} \cdots S_{a_4} = \sum ^4_{i=1} N^i B_i^{a_1 \cdots a_4}, \end{aligned}$$

and

$$\begin{aligned} \displaystyle \mu \left( S_{a_1} \cdots S_{a_4} \right) = \sum ^2_{i=1} N^i B_{i0}^{a_1 \cdots a_4}, \end{aligned}$$

where

$$\begin{aligned}&\displaystyle B_1^{a_1 \cdots a_4} = B_{10}^{a_1 \cdots a_4} = \kappa ^{a_1 \cdots a_4}, \\&\displaystyle B_2^{a_1 \cdots a_4} = \sum ^4 \kappa ^{a_1} \kappa ^{a_2 a_3 a_4} + \sum ^3 \kappa ^{a_1 a_2} \kappa ^{a_3 a_4}, \\&\displaystyle B_{20}^{a_1 \cdots a_4} = \sum ^3 \kappa ^{a_1 a_2} \kappa ^{a_3 a_4},\\&\displaystyle B_3^{a_1 \cdots a_4} = \sum ^6 \kappa ^{a_1} \kappa ^{a_2} \kappa ^{a_3 a_4}, \\&\displaystyle B_4^{a_1 \cdots a_4} = \kappa ^{a_1} \cdots \kappa ^{a_4}, \end{aligned}$$

where \(\displaystyle \sum ^M\) sums over all M permutations of indices \(1 \cdots 4\) giving distinct terms, and

$$\begin{aligned} \displaystyle \kappa ^{a_1 \cdots a_r} = \kappa \left( X_{a_1}, \ldots , X_{a_r} \right) \end{aligned}$$

for \(1 \le a_i \le p\), \(i=1, \ldots , r\). For \(\mathbf{X}\) in \(\mathcal{C}^{p \times q}\) these become

$$\begin{aligned} \displaystyle ES_{a_1 b_1} \cdots S_{a_r b_r} = \sum ^r_{i=1} N^i B_i^{a_1 b_1 \cdots a_r b_r}, \end{aligned}$$
(A.14)

and

$$\begin{aligned}&\displaystyle \mu \left( S_{a_1 b_1}, \ldots , S_{a_r b_r} \right) \nonumber \\&\quad = \sum _{1 \le i \le r/2} N^i B_{i0}^{a_1 b_1 \cdots a_r b_r}, \end{aligned}$$
(A.15)

where \(B_i^{a_1 b_1 \cdots a_r b_r}\) is \(B_i^{a_1 \cdots a_r}\) with \(a_i\) replaced by \(\left( a_i b_i \right) \) and similarly for \(B_{i0}^{a_1 b_1 \cdots a_r b_r}\), these being given in terms of

$$\begin{aligned} \displaystyle \kappa ^{a_1 b_1 \cdots a_r b_r} = \kappa \left( X_{a_1 b_1}, \ldots , X_{a_r b_r} \right) \end{aligned}$$

for \(1 \le a_i \le p\), \(1 \le b_i \le q\) and \(i = 1, \ldots , r\). For example,

$$\begin{aligned} \displaystyle ES_{a_1 b_1} \cdots S_{a_4 b_4} = \sum ^4_{i=1} N^i B_i^{a_1 b_1 \cdots a_4 b_4} \end{aligned}$$

and

$$\begin{aligned} \displaystyle \mu \left( S_{a_1 b_1}, \ldots , S_{a_4 b_4} \right) = \sum ^2_{i=1} N^i B_{i0}^{a_1 b_1 \cdots a_4 b_4}, \end{aligned}$$

where

$$\begin{aligned} B_1^{a_1 b_1 \cdots a_4 b_4}&= B_{10}^{a_1 b_1 \cdots a_4 b_4} = \kappa ^{a_1 b_1 \cdots a_4 b_4}, \\ B_2^{a_1 b_1 \cdots a_4 b_4}&= \sum ^4 \kappa ^{a_1 b_1} \kappa ^{a_2 b_2, a_3 b_3, a_4 b_4} + B_{20}^{a_1 b_1 \cdots a_4 b_4}, \\ B_{20}^{a_1 b_1 \cdots a_4 b_4}&= \sum ^3 \kappa ^{a_1 b_1, a_2 b_2} \kappa ^{a_3 b_3, a_4 b_4}, \\ B_4^{a_1 b_1 \cdots a_4 b_4}&= \kappa ^{a_1 b_1} \cdots \kappa ^{a_4 b_4}. \end{aligned}$$
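
Before turning to the weighted version, here is a small exact check of (A.12) that is not in the paper: take X real with the Exponential(1) distribution (a special case of X in \(\mathcal C\)), so that \(\kappa _r = (r-1)!\) and S has the Gamma(N, 1) distribution with \(ES^4 = N(N+1)(N+2)(N+3)\).

```python
from math import factorial

# Cumulants of a standard exponential: kappa_r = (r - 1)!.
k1, k2, k3, k4 = (factorial(r - 1) for r in (1, 2, 3, 4))

# Exponential Bell polynomials B_{4i} as listed above.
B41, B42, B43, B44 = k4, 4 * k1 * k3 + 3 * k2 ** 2, 6 * k1 ** 2 * k2, k1 ** 4

for N in (1, 2, 5, 10):
    es4 = N * B41 + N ** 2 * B42 + N ** 3 * B43 + N ** 4 * B44   # (A.12) with r = 4
    exact = N * (N + 1) * (N + 2) * (N + 3)                      # 4th moment of Gamma(N, 1)
    print(N, es4, exact)                                         # the two columns agree
```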

Now consider the weighted version:

$$\begin{aligned} \displaystyle S_P = \sum ^N_{n=1} P_n X_n, \end{aligned}$$

where \(\left\{ P_n \right\} \) are constants in \(\mathcal C\) and \(\left\{ X_n \right\} \) are independent copies of X in \(\mathcal C\) with cumulants \(\left\{ \kappa _r \right\} \). Then

$$\begin{aligned} K_{S_P} (t) = \sum ^N_{n=1} K_X \left( P_n t\right) = N \sum ^\infty _{r=1} \kappa _r P_{rN} t^r / r! = N K(t) \end{aligned}$$

say, where \(P_{rN} = N^{-1} \sum ^N_{n=1} P_n^r\), so

$$\begin{aligned} \displaystyle ES_P^r = \sum ^r_{i=1} N^i B_{ri} ({{\varvec{\alpha }}}) \end{aligned}$$

for \(r \ge 1\), where \(\alpha _r = P_{rN} \kappa _r\). Similarly, for \(\mathbf{X}\) in \(\mathcal{C}^{p \times q}\), (A.14)–(A.15) hold for

$$\begin{aligned} \displaystyle \mathbf{S} = \mathbf{S}_P \hbox { with } \kappa ^{a_1 b_1 \cdots a_r b_r} \hbox { multiplied by } P_{rN}. \end{aligned}$$
(A.16)

To apply (A.14) and (A.15) to \(\mathbf{S} = \mathbf{W}_N\), the central complex Wishart of (A.2), put \(p = q = M\) and substitute the \(\kappa ^{a_1 b_1 \cdots a_r b_r}\) of (A.5). Finally, for the non-central Wishart defined by

$$\begin{aligned} \displaystyle \mathbf{W}_N = \sum ^N_{n=1} \mathbf{H}_n \mathbf{H}_n^+, \end{aligned}$$

where \(\mathbf{H}_1, \ldots , \mathbf{H}_N\) are independent \(\mathcal{CN}_M \left( {{\varvec{\mu }}}, \mathbf{V}\right) \), the moments of \(\mathbf{W}_N\) are given by (A.14), (A.15) with \(p=q=M\) and \(\kappa ^{a_1 b_1 \cdots a_r b_r}\) of (A.6).

Proof

It follows from (A.11) that

$$\begin{aligned} \displaystyle E \exp (t S) = \exp \left\{ N K(t) \right\} = \sum ^\infty _{i=0} N^i K(t)^i / i!. \end{aligned}$$

So, (A.12) follows from (A.13) since \(B_{r0} = 0\) for \(r \ne 0\). \(\square \)
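
As a final Monte Carlo illustration of this recipe (added here, with arbitrary test values): for the central complex Wishart of (A.2) and \(r=2\), (A.15) gives \(\kappa \left( W_{a_1 b_1}, W_{a_2 b_2}\right) = N V_{a_1 b_2} V_{a_2 b_1}\), the joint cumulant again taken without conjugation.

```python
import numpy as np

rng = np.random.default_rng(6)
M, N, n = 3, 4, 200_000

G = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
V = G @ G.conj().T / M + np.eye(M)              # arbitrary test covariance
L = np.linalg.cholesky(V)

Z = (rng.standard_normal((n, N, M)) + 1j * rng.standard_normal((n, N, M))) / np.sqrt(2) @ L.T
W = np.einsum('kna,knb->kab', Z, Z.conj())      # n draws of the central Wishart W_N

a1, b1, a2, b2 = 0, 1, 2, 0                     # arbitrary index choices
w1, w2 = W[:, a1, b1], W[:, a2, b2]
mc = (w1 * w2).mean() - w1.mean() * w2.mean()   # kappa(W_{a1 b1}, W_{a2 b2})
print(mc, N * V[a1, b2] * V[a2, b1])            # (A.15) with r = 2
```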

Appendix B

Theorem B.1

Suppose that \(\mathbf{A}\) in \(\mathcal{C}^{M \times M}\) is a non-singular matrix, \({{\varvec{\tau }}}\) lies in \(\mathcal{C}^M\) and \(\rho \) lies in \(\mathcal C\). Set

$$\begin{aligned}&\displaystyle \mathbf{B} = \mathbf{A} + \rho {{\varvec{\tau }}} {{\varvec{\tau }}}^+,\\&\displaystyle f_r = {{\varvec{\tau }}}^+ \mathbf{A}^{-r} {{\varvec{\tau }}}, \ \displaystyle \Delta = 1 + \rho f_1, \ \displaystyle d = \rho / \Delta . \end{aligned}$$

Then

$$\begin{aligned}&\displaystyle \mathbf{B}^{-1} = \mathbf{A}^{-1} - d \mathbf{A}^{-1} {{\varvec{\tau }}} {{\varvec{\tau }}}^+ \mathbf{A}^{-1}, \end{aligned}$$
(B.1)
$$\begin{aligned}&\displaystyle \det \mathbf{B} = \Delta \det \mathbf{A}, \end{aligned}$$
(B.2)

and, for \(r = 1,2,\ldots \),

$$\begin{aligned} \displaystyle \gamma _r = \hbox {trace } \mathbf{B}^{-r} \end{aligned}$$

is given in terms of

$$\begin{aligned} \displaystyle a_r = \hbox {trace } \mathbf{A}^{-r} \end{aligned}$$

by

$$\begin{aligned} \displaystyle \gamma _r= & {} \displaystyle a_r + \sum ^r_{i=1} \begin{pmatrix} r \\ i \end{pmatrix} (-d)^i f_2^{i-1} f_{r+2-i} \end{aligned}$$
(B.3)
$$\begin{aligned}= & {} \displaystyle a_r - f_2^{-1} f_{r+2} + f_2^{-1} {{\varvec{\tau }}}^+ \mathbf{A}^{-1} \nonumber \\&\quad \left( \mathbf{A}^{-1} - df_2 \mathbf{I}_M \right) ^r \mathbf{A}^{-1} {{\varvec{\tau }}}. \end{aligned}$$
(B.4)

So,

$$\begin{aligned} \gamma _1&= a_1 - f_2 d, \\ \gamma _2&= a_2 - 2 f_3 d + f_2^2 d^2, \\ \gamma _3&= a_3 - 3f_4 d + 3f_2 f_3 d^2 - f_2^3 d^3,\\ \gamma _4&= a_4 - 4 f_5 d + 6f_2 f_4 d^2 - 4f_2^2 f_3 d^3 + f_2^4 d^4, \\ \gamma _5&= a_5 - 5f_6 d + 10f_2 f_5 d^2 - 10 f_2^2 f_4 d^3 + 5 f_2^3 f_3 d^4 - f_2^5 d^5, \\ \gamma _6&= a_6 - 6f_7 d + 15f_2 f_6 d^2 - 20 f_2^2 f_5 d^3 + 15 f_2^3 f_4 d^4 - 6f_2^4 f_3 d^5 + f_2^6 d^6. \end{aligned}$$

Also \(e_r = {{\varvec{\tau }}}^+ \mathbf{B}^{-r} {{\varvec{\tau }}}\) is given by

$$\begin{aligned} e_r&= \Delta ^{-r} f_r \hbox { for } r = 1, 2, \\ e_3&= \Delta ^{-2} \left( f_3 - f_2^2 d\right) , \\ e_4&= \Delta ^{-2} \left( f_4 - 2f_2 f_3 d + f_2^3 d^2\right) , \\ e_5&= \Delta ^{-2} \left\{ f_5 - \left( 2f_2 f_4 + f_3^2 \right) d + 3f_2^2 f_3 d^2 - f_2^4 d^3 \right\} , \\ e_6&= \Delta ^{-2} \left\{ f_6 - \left( 2f_2 f_5 + 2f_3 f_4\right) d + \left( 3f_2^2 f_4 + 3f_2 f_3^2 \right) d^2 - 4f_2^3 f_3 d^3 + f_2^5 d^4 \right\} . \end{aligned}$$

For \(r \ge 2\) the general formula is

$$\begin{aligned} \displaystyle e_r = \Delta ^{-2} \sum _{i=0}^{r-2} (-d)^i \left[ \widehat{B}_{i+r, i+1} (\mathbf{f}) \right] _{f_1 = 0}, \end{aligned}$$
(B.5)

where for \(\mathbf{f} = \left( f_1, f_2, \ldots \right) \), \(\widehat{B}_{ri} (\mathbf{f})\) is the ordinary Bell polynomial tabled on page 309 of Comtet [25], and defined by

$$\begin{aligned} \displaystyle \left( \sum ^\infty _{r=1} z^r f_r \right) ^i = \sum ^\infty _{r=i} z^r \widehat{B}_{ri} (\mathbf{f}) \end{aligned}$$

for z in \(\mathcal{C}\).

Proof

Note that (B.1) holds by equation (3) of Henderson and Searle [19], and (B.2) follows. Set \(\mathbf{a} = \mathbf{A}^{-1}\) and \(\mathbf{b} = \mathbf{A}^{-1} {{\varvec{\tau }}} {{\varvec{\tau }}}^+ \mathbf{A}^{-1}\), so

$$\begin{aligned} \mathbf{B}^{-1}&= \mathbf{a} - d \mathbf{b},\nonumber \\ \left( \mathbf{a} - d\mathbf{b}\right) ^r&= \mathbf{a}^r - d \sum ^r \mathbf{a}^{r-1} \mathbf{b} + d^2 \sum ^{\binom{r}{2}} \mathbf{a}^{r-2} \mathbf{b}^2 - \cdots , \end{aligned}$$
(B.6)

where \(\displaystyle \sum ^m \mathbf{a}^i \mathbf{b}^j\) sums over all m permutations of the \(i+j\) elements of \(\mathbf{a}^i \mathbf{b}^j\) giving distinct terms. So,

$$\begin{aligned} \hbox {trace } \left( \mathbf{a}-d\mathbf{b}\right) ^r = \sum ^r_{i=0} \begin{pmatrix} r \\ i \end{pmatrix} (-d)^i \, \hbox {trace } \left( \mathbf{a}^{r-i} \mathbf{b}^i \right) . \end{aligned}$$

Set \({{\varvec{\tau }}}_r = \mathbf{A}^{-r} {{\varvec{\tau }}}\) so that

$$\begin{aligned} {{\varvec{\tau }}}_r^+ {{\varvec{\tau }}}_s = f_{r +s}, \qquad \hbox {trace } \left( \mathbf{a}^j \mathbf{b}^i \right) = {{\varvec{\tau }}}_1^+ \mathbf{a}^j {{\varvec{\tau }}}_1 f_2^{i-1} \end{aligned}$$

for \(i \ge 1\), and (B.3), (B.4) follow.

Set

$$\begin{aligned} \displaystyle \mathbf{S}_r^m = \sum _r^m {{\varvec{\tau }}}_i {{\varvec{\tau }}}_j^+ \end{aligned}$$

summed over \(\left\{ i + j = r, i> 0, j > 0 \right\} \), where m is the number of terms: \(\mathbf{S}_2^1 = {{\varvec{\tau }}}_1 {{\varvec{\tau }}}_1^+\), \(\mathbf{S}_3^2 = {{\varvec{\tau }}}_1 {{\varvec{\tau }}}_2^+ + {{\varvec{\tau }}}_2 {{\varvec{\tau }}}_1^+\), \(\mathbf{S}_4^3 = {{\varvec{\tau }}}_1 {{\varvec{\tau }}}_3^+ + {{\varvec{\tau }}}_2 {{\varvec{\tau }}}_2^+ + {{\varvec{\tau }}}_3 {{\varvec{\tau }}}_1^+\), and so on. By (B.6),

$$\begin{aligned} \displaystyle \mathbf{B}^{-2}= & {} \mathbf{a}^2 - d \mathbf{S}_3^2 + d^2 f_2 \mathbf{S}_2^1, \\ \displaystyle \mathbf{B}^{-3}= & {} \mathbf{a}^3 - d \mathbf{S}_4^3 + d^2 \left( f_3 \mathbf{S}_2^1 + f_2 \mathbf{S}_3^2 \right) - d^3 f_2^2 \mathbf{S}_2^1, \\ \displaystyle \mathbf{B}^{-4}= & {} \mathbf{a}^4 - d \mathbf{S}_5^4 + d^2 \left( f_4 \mathbf{S}_2^1 + f_3 \mathbf{S}_3^2 + f_2 \mathbf{S}_4^3 \right) \\&- d^3 \left( 2 f_2 f_3 \mathbf{S}_2^1 + f_2^2 \mathbf{S}_3^2 \right) + d^4 f_2^3 \mathbf{S}_2^1, \\ \displaystyle \mathbf{B}^{-5}= & {} \mathbf{a}^5 - d \mathbf{S}_6^5 + d^2 \sum ^4_{i=1} f_{5-i} \mathbf{S}^i_{i+1} \\&- d^3 \left( 2f_2 f_4 \mathbf{S}_2^1 + 2 f_2 f_3 \mathbf{S}_3^2 + f_2^2 \mathbf{S}_4^3 \right) \\&+d^4 \left( 3f_2^2 f_3 \mathbf{S}_2^1 + f_2^3 \mathbf{S}_3^2 \right) \\&- d^5 f_2^4 \mathbf{S}_2^1, \end{aligned}$$

and so on. From these, (B.5) and the expressions for \(e_1, \ldots , e_6\) above follow. \(\square \)

As a special case we have

$$\begin{aligned} \displaystyle {{\varvec{\tau }}}^+ \left( \mathbf{I}_M + {{\varvec{\tau }}} {{\varvec{\tau }}}^+ \right) ^{-r} {{\varvec{\tau }}} = \left| {{\varvec{\tau }}} \right| ^2 \left( 1 + \left| {{\varvec{\tau }}} \right| ^2 \right) ^{-r}. \end{aligned}$$
(B.7)
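
The identities of Theorem B.1 are easy to spot-check numerically. The sketch below (added here, with arbitrary test values; \(\rho \) is taken real and positive for simplicity so that \(\mathbf{B}\) stays Hermitian) verifies (B.1), (B.2) and the listed expressions for \(e_1, \ldots , e_4\).

```python
import numpy as np

rng = np.random.default_rng(7)
M, rho = 4, 0.7                                   # rho real > 0: arbitrary test value

G = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
A = G @ G.conj().T / M + np.eye(M)                # Hermitian positive-definite A
tau = rng.standard_normal(M) + 1j * rng.standard_normal(M)

B = A + rho * np.outer(tau, tau.conj())
Ainv, Binv = np.linalg.inv(A), np.linalg.inv(B)
f = {r: (tau.conj() @ np.linalg.matrix_power(Ainv, r) @ tau).real for r in range(1, 5)}
Delta = 1 + rho * f[1]
d = rho / Delta

# (B.1): B^{-1} = A^{-1} - d A^{-1} tau tau^+ A^{-1}
print(np.abs(Binv - (Ainv - d * Ainv @ np.outer(tau, tau.conj()) @ Ainv)).max())
# (B.2): det B = Delta det A
print(np.linalg.det(B).real, Delta * np.linalg.det(A).real)

# e_r = tau^+ B^{-r} tau against the closed forms listed in the theorem
e = {r: (tau.conj() @ np.linalg.matrix_power(Binv, r) @ tau).real for r in range(1, 5)}
closed = {1: f[1] / Delta,
          2: f[2] / Delta ** 2,
          3: (f[3] - f[2] ** 2 * d) / Delta ** 2,
          4: (f[4] - 2 * f[2] * f[3] * d + f[2] ** 3 * d ** 2) / Delta ** 2}
print({r: (round(e[r], 6), round(closed[r], 6)) for r in closed})
```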

Cite this article

Withers, C.S., Nadarajah, S. The distribution and percentiles of channel capacity for multiple arrays. Sādhanā 45, 155 (2020). https://doi.org/10.1007/s12046-020-01388-0
