
In the previous chapter, several lower and upper bounds were established for various classes of graphs, among which bipartite graphs are of particular interest, but only a few graphs attain equality in these bounds. In [105], an exact estimate of the energy of the random graph G n (p) was established for any probability p, by means of the Wigner semicircle law. Furthermore, the energy of random multipartite graphs was investigated in [105] by considering a generalization of the Wigner matrix, and some estimates of this energy were obtained.

6.1 The Energy of G n (p)

In this section, we formulate an exact estimate of the energy of almost all graphs by means of the Wigner semicircle law.

We start by recalling the Erdős–Rényi random graph model \({\mathcal{G}}_{n}(p)\) (see [38]), consisting of all graphs with vertex set [n] = { 1, 2, …, n} in which the edges are chosen independently with probability p = p(n). Evidently, the adjacency matrix A(G n (p)) of the random graph \({G}_{n}(p) \in {\mathcal{G}}_{n}(p)\) is a random matrix, and thus, one can readily evaluate the energy of G n (p) once the spectral distribution of the random matrix A(G n (p)) is known.

In fact, the study of the spectral distributions of random matrices is abundant and active, and can be traced back to [493]. We refer the readers to [25, 100, 369] for an overview and some spectacular progress in this field. One important achievement in this field is the Wigner semicircle law, which characterizes the limiting behavior of the empirical spectral distribution of the eigenvalues for a class of random matrices.

In order to characterize the statistical properties of the wave functions of quantum mechanical systems, Wigner in the 1950s investigated the spectral distribution for a class of random matrices, so-called Wigner matrices,

$${\bf{X}}_{n} := ({x}_{ij}),\quad 1 \leq i,j \leq n$$

which satisfy the following conditions:

  • The x ij ’s are independent random variables with x ij  = x ji  .

  • The x ii ’s have the same distribution F 1, whereas the x ij ’s (i ≠ j) have the same distribution F 2 .

  • \(\mathrm{Var}({x}_{ij}) = {\sigma }_{2}^{2} < \infty \) for all 1 ≤ i < j ≤ n.

We denote the eigenvalues of X n by λ 1, n , λ 2, n , …, λ n, n and their empirical spectral distribution (ESD) by

$${\Phi }_{{\bf{X}}_{n}}(x) = \frac{1} {n} \cdot \#\{{\lambda }_{i,n}\mid {\lambda }_{i,n} \leq x,\,i = 1,2,\ldots,n\}.$$
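
In computational terms, the ESD is straightforward to evaluate. The following Python sketch is an added illustration, not part of the original text; numpy is assumed, and the function name is ours:

    import numpy as np

    def esd(X, x):
        # Illustrative sketch (not from the original text); numpy assumed.
        # Empirical spectral distribution of a real symmetric matrix X at x:
        # the fraction of eigenvalues that are <= x.
        eigenvalues = np.linalg.eigvalsh(X)
        return np.mean(eigenvalues <= x)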

Wigner [491, 492] considered the limiting spectral distribution (LSD) of X n and obtained his semicircle law as follows:

Theorem 6.1.

Let X n be a Wigner matrix. Then

$$\lim\limits_{n\rightarrow \infty }{\Phi }_{{n}^{-1/2}\,{\bf{X}}_{n}}(x) = \Phi (x)\ \ \mbox{ a.s. }$$

i.e., with probability 1, the ESD \({\Phi }_{{n}^{-1/2}{\bf{X}}_{n}}(x)\) converges weakly to a distribution Φ(x) as n tends to infinity, where Φ(x) has the density

$$ \phi (x) = \frac{1} {2\pi {\sigma }_{2}^{2}}\sqrt{4{\sigma }_{2 }^{2 } - {x}^{2}}\ {\mathbf{1}}_{\vert x\vert \leq 2{\sigma }_{2}}. \square$$
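
As a quick numerical illustration of Theorem 6.1 (an added sketch, not part of the original argument; numpy and scipy are assumed), one can compare the ESD of \({n}^{-1/2}\,{\bf{X}}_{n}\) for a Gaussian Wigner matrix with σ 2  = 1 against the semicircle distribution:

    import numpy as np
    from scipy.integrate import quad

    # Illustrative sketch (not from the original text); numpy/scipy assumed.
    rng = np.random.default_rng(0)
    n = 2000
    G = rng.standard_normal((n, n))
    X = np.triu(G) + np.triu(G, 1).T            # symmetrize: x_ij = x_ji, sigma_2 = 1
    lam = np.linalg.eigvalsh(X / np.sqrt(n))    # spectrum of n^{-1/2} X_n

    # Compare the ESD with the semicircle CDF at a few points.
    density = lambda t: np.sqrt(max(4.0 - t * t, 0.0)) / (2.0 * np.pi)
    for x in (-1.0, 0.0, 1.0):
        print(x, np.mean(lam <= x), quad(density, -2.0, x)[0])

The two printed columns agree to a few decimal places already for n in the low thousands.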

Remark 6.1.

One of the classical methods to prove the above theorem is the moment approach. Employing this method, we get more information about the LSD of the Wigner matrix. Set μ i  =  ∫x dF i (i = 1, 2) and \({\overline{\bf{X}}}_{n} ={ \bf{X}}_{n} - {\mu }_{1}\,{\bf{I}}_{n} - {\mu }_{2}\,({\bf{J}}_{n} -{\bf{I}}_{n})\), where I n is the unit matrix of order n and J n is the matrix of order n in which all entries are equal to 1. It is easily seen that the random matrix \({\overline{\bf{X}}}_{n}\) is also a Wigner matrix. By means of Theorem 6.1, we have

$$\lim\limits_{n\rightarrow \infty }{\Phi }_{{n}^{-1/2}\,{\overline{\bf{X}}}_{ n}}(x) = \Phi (x)\ \ \mbox{ a.s.}$$
(6.1)

Evidently, each entry of \({\overline{\bf{X}}}_{n}\) has mean 0. Furthermore, using the moment approach, Wigner [491, 492] showed that for each positive integer k,

$$\lim\limits_{n\rightarrow \infty }\int \nolimits \nolimits {x}^{k}\,\mathrm{d}{\Phi }_{{ n}^{-1/2}\,{\overline{\bf{X}}}_{n}}(x) = \int \nolimits \nolimits {x}^{k}\,\mathrm{d}\Phi (x)\ \ \mbox{ a.s.}$$
(6.2)

It is interesting that the existence of the second moment of the off-diagonal entries is a necessary and sufficient condition for the semicircle law, while there is no moment requirement on the diagonal entries. For further comments on the moment approach and the Wigner semicircle law, we refer the readers to the seminal survey by Bai [25].

We say that almost every (a.e.) graph in \({\mathcal{G}}_{n}(p)\) has a certain property Q (see [38]) if the probability that a random graph G n (p) has the property Q converges to 1 as n tends to infinity. Occasionally, we write almost all instead of almost every. It is easy to see that if F 1 is a point mass at 0, i.e., F 1(x) = 1 for x ≥ 0 and F 1(x) = 0 for x < 0, and F 2 is the Bernoulli distribution with mean p, then the Wigner matrix X n coincides with the adjacency matrix A(G n (p)) of the random graph G n (p). Obviously, \({\sigma }_{2} = \sqrt{p(1 - p)}\) in this case.

In order to establish the exact estimate of the energy ℰ(G n (p)) for a.e. graph G n (p), we first present some notation. In what follows, for convenience, we use A to denote the adjacency matrix A(G n (p)). Set

$$\overline{\bf{A}} = \bf{A} - p({\bf{J}}_{n} -{\bf{I}}_{n}).$$

It is easy to check that each entry of \(\overline{\bf{A}}\) has mean 0. We define the energy ℰ(M) of a matrix M as the sum of the absolute values of the eigenvalues of M (for details, see Sect. 11.3). By virtue of the following two lemmas, we shall formulate an estimate of the energy \(\mathcal{E}(\overline{\bf{A}})\) and then establish the exact estimate of ℰ(A) = ℰ(G n (p)) by using Theorem 4.17 (Ky Fan's theorem). Let I be the interval [ − 1, 1].
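
Before the lemmas, the objects just defined can be rendered computationally as follows (an added sketch; the helper names are ours, and numpy is assumed):

    import numpy as np

    # Illustrative sketch (not from the original text); numpy assumed.
    def matrix_energy(M):
        # Energy of a symmetric matrix: sum of the absolute values of its eigenvalues.
        return np.abs(np.linalg.eigvalsh(M)).sum()

    def adjacency_gnp(n, p, rng):
        # Adjacency matrix of G_n(p): each edge chosen independently with probability p.
        upper = np.triu(rng.random((n, n)) < p, 1).astype(float)
        return upper + upper.T

    def centered(A, p):
        # A_bar = A - p(J_n - I_n): every entry then has mean 0.
        n = A.shape[0]
        return A - p * (np.ones((n, n)) - np.eye(n))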

Lemma 6.1.

Let I c be the set ℝ ∖ I. Then

$$\lim\limits_{n\rightarrow \infty }{\int \nolimits \nolimits }_{{I}^{c}}{x}^{2}\,\mathrm{d}{\Phi }_{{ n}^{-1/2}\,\overline{\bf{A}}}(x) ={ \int \nolimits \nolimits }_{{I}^{c}}{x}^{2}\,\mathrm{d}\Phi (x)\ \ \mbox{ a.s.}$$

Proof.

Suppose that \({\phi }_{{n}^{-1/2}\,\overline{\bf{A}}}(x)\) is the density of \({\Phi }_{{n}^{-1/2}\,\overline{\bf{A}}}(x)\). According to Eq. (6.1), with probability 1, \({\phi }_{{n}^{-1/2}\,\overline{\bf{A}}}(x)\) converges to ϕ(x) almost everywhere as n tends to infinity. Since ϕ(x) is bounded on I, it follows that with probability 1, \({x}^{2}{\phi }_{{n}^{-1/2}\,\overline{\bf{A}}}(x)\) is bounded almost everywhere on I. Then the bounded convergence theorem yields

$$\lim\limits_{n\rightarrow \infty }{\int \nolimits \nolimits }_{I}{x}^{2}\,\mathrm{d}{\Phi }_{{ n}^{-1/2}\,\overline{\bf{A}}}(x) ={ \int \nolimits \nolimits }_{I}{x}^{2}\,\mathrm{d}\Phi (x)\ \ \mbox{ a.s.}$$

Combining the above fact with Eq. (6.2), we get

$$ \begin{array}{rcl}\lim\limits_{n\rightarrow \infty }{\int \nolimits \nolimits }_{{I}^{c}}{x}^{2}\,\mathrm{d}{\Phi }_{{ n}^{-1/2}\,\overline{\bf{A}}}(x)& =& \lim\limits_{n\rightarrow \infty }\left (\int \nolimits \nolimits {x}^{2}\,\mathrm{d}{\Phi }_{{ n}^{-1/2}\,\overline{\bf{A}}}(x) -{\int \nolimits \nolimits }_{I}{x}^{2}\,\mathrm{d}{\Phi }_{{ n}^{-1/2}\,\overline{\bf{A}}}(x)\right ) \\ & =& \lim\limits_{n\rightarrow \infty }\int \nolimits \nolimits {x}^{2}\,\mathrm{d}{\Phi }_{{ n}^{-1/2}\,\overline{\bf{A}}}(x) -\lim\limits_{n\rightarrow \infty }{\int \nolimits \nolimits }_{I}{x}^{2}\,\mathrm{d}{\Phi }_{{ n}^{-1/2}\,\overline{\bf{A}}}(x) \\ & =& \int \nolimits \nolimits {x}^{2}\,\mathrm{d}\Phi (x) -{\int \nolimits \nolimits }_{I}{x}^{2}\mathrm{d}\Phi (x)\ \ \mbox{ a.s.} \\ & =& {\int \nolimits \nolimits }_{{I}^{c}}{x}^{2}\,\mathrm{d}\Phi (x)\ \ \mbox{ a.s.} \square\\ \end{array}$$

Lemma 6.2.

[34, p. 219]. Let μ be a measure. Suppose that the functions a n , b n , and f n converge almost everywhere to functions a, b, and f, respectively, and that a n ≤ f n ≤ b n almost everywhere. If ∫ a n d μ →∫ a d μ and ∫ b n d μ →∫ b d μ, then ∫ f n d μ →∫ f d μ. ■

We now turn to the estimate of the energy \(\mathcal{E}(\overline{\bf{A}})\). To this end, we first investigate the convergence of \(\int \nolimits \nolimits \vert x\vert \,\mathrm{d}{\Phi }_{{n}^{-1/2}\,\overline{\bf{A}}}(x)\). According to Eq.  (6.1) and the bounded convergence theorem, by an argument similar to the first part of the proof of Lemma 6.1, we deduce that

$$\lim\limits_{n\rightarrow \infty }{\int \nolimits \nolimits }_{I}\vert x\vert \,\mathrm{d}{\Phi }_{{n}^{-1/2}\,\overline{\bf{A}}}(x) ={ \int \nolimits \nolimits }_{I}\vert x\vert \,\mathrm{d}\Phi (x)\ \ \mbox{ a.s.}$$

Obviously, | x | ≤ x 2 if x ∈ I c = ℝ ∖ I. Set a n (x) = 0, \({b}_{n}(x) = {x}^{2}\,{\phi }_{{n}^{-1/2}\,\overline{\bf{A}}}(x)\), and \({f}_{n}(x) = \vert x\vert \,{\phi }_{{n}^{-1/2}\,\overline{\bf{A}}}(x)\). Employing Lemmas 6.1 and 6.2, we have

$$\lim\limits_{n\rightarrow \infty }{\int \nolimits \nolimits }_{{I}^{c}}\vert x\vert \,\mathrm{d}{\Phi }_{{n}^{-1/2}\,\overline{\bf{A}}}(x) ={ \int \nolimits \nolimits }_{{I}^{c}}\vert x\vert \,\mathrm{d}\Phi (x)\ \ \mbox{ a.s.}$$

Consequently,

$$\lim\limits_{n\rightarrow \infty }\int \nolimits \nolimits \vert x\vert \,\mathrm{d}{\Phi }_{{n}^{-1/2}\,\overline{\bf{A}}}(x) = \int \nolimits \nolimits \vert x\vert \,\mathrm{d}\Phi (x)\ \ \mbox{ a.s.}$$
(6.3)

Suppose that \({\overline{\lambda }}_{1},\ldots,{\overline{\lambda }}_{n}\) and \({\overline{\lambda }}_{1}^{\prime},\ldots,{\overline{\lambda }}_{n}^{\prime}\) are the eigenvalues of \(\overline{\bf{A}}\) and \({n}^{-1/2}\,\overline{\bf{A}}\), respectively. Clearly,

$$\sum\limits_{i=1}^{n}\vert {\overline{\lambda }}_{ i}\vert = {n}^{1/2}\, \sum\limits_{i=1}^{n}\vert {\overline{\lambda }}_{ i}^{\prime}\vert.$$

By Eq. (6.3), we deduce that

$$\begin{array}{rcl} \mathcal{E}\left (\overline{\bf{A}}\right )/{n}^{3/2}& =& \frac{1} {{n}^{3/2}} \sum\limits _{i=1}^{n}\vert {\overline{\lambda }}_{ i}\vert = \frac{1} {n}\sum\limits_{i=1}^{n}\vert {\overline{\lambda }}_{ i}^{\prime}\vert = \int \nolimits \nolimits \vert x\vert \,\mathrm{d}{\Phi }_{{n}^{-1/2}\,\overline{\bf{A}}}(x) \\ & \rightarrow & \int \nolimits \nolimits \vert x\vert \,\mathrm{d}\Phi (x)\ \ \mbox{ a.s. }\ \ (n \rightarrow \infty ) \\ & =& \frac{1} {2\pi {\sigma }_{2}^{2}} \int\limits_{-2{\sigma }_{2}}^{2{\sigma }_{2} }\vert x\vert \sqrt{4{\sigma }_{2 }^{2 } - {x}^{2}}\,\mathrm{d}x = \frac{8} {3\pi }{\sigma }_{2} = \frac{8} {3\pi }\,\sqrt{p(1 - p)}.\end{array}$$

Therefore, the energy \(\mathcal{E}\left (\overline{\bf{A}}\right )\) satisfies a.s. the equation

$$\mathcal{E}\left (\overline{\bf{A}}\right ) = {n}^{3/2}\left ( \frac{8} {3\pi }\sqrt{p(1 - p)} + o(1)\right ).$$

It is not difficult to verify that the eigenvalues of the matrix J n  − I n are n − 1 (with multiplicity 1) and − 1 (with multiplicity n − 1). Consequently, \(\mathcal{E}({\bf{J}}_{n} -{\bf{I}}_{n}) = 2(n - 1)\). One readily sees that \(\mathcal{E}\left(p({\bf{J}}_{n} -{\bf{I}}_{n})\right) = p\mathcal{E}({\bf{J}}_{n} -{\bf{I}}_{n})\). Thus, \(\mathcal{E}\left(p({\bf{J}}_{n} -{\bf{I}}_{n})\right) = 2p(n - 1)\). Since \(\bf{A} = \overline{\bf{A}} + p({\bf{J}}_{n} -{\bf{I}}_{n})\), it follows from Theorem 4.17 (Ky Fan's theorem) that with probability 1,

$$\begin{array}{rcl} \mathcal{E}(\bf{A})& \leq & \mathcal{E}\left (\overline{\bf{A}}\right ) + \mathcal{E}(p({\bf{J}}_{n} -{\bf{I}}_{n})) \\ & =& {n}^{3/2}\left ( \frac{8} {3\pi }\sqrt{p(1 - p)} + o(1)\right ) + 2p(n - 1).\end{array}$$

Consequently,

$$ \limsup\limits_{n\rightarrow \infty }\frac{\mathcal{E}(\bf{A})} {{n}^{3/2}} \leq \frac{8} {3\pi }\sqrt{p(1 - p)}\ \ \mbox{ a.s.}$$
(6.4)

On the other hand, since \(\overline{\bf{A}} = \bf{A} + p\left( - ({\bf{J}}_{n} -{\bf{I}}_{n})\right)\), by Theorem 4.17, we deduce that with probability 1,

$$\begin{array}{rcl} \mathcal{E}(\bf{A})& \geq & \mathcal{E}\left (\overline{\bf{A}}\right ) -\mathcal{E}\left (p\left( - ({\bf{J}}_{n} -{\bf{I}}_{n})\right)\right ) \\ & =& \mathcal{E}\left (\overline{\bf{A}}\right ) -\mathcal{E}(p({\bf{J}}_{n} -{\bf{I}}_{n})) \\ & =& {n}^{3/2}\left ( \frac{8} {3\pi }\sqrt{p(1 - p)} + o(1)\right ) - 2p(n - 1).\end{array}$$

Consequently,

$$\liminf\limits_{n\rightarrow \infty }\frac{\mathcal{E}(\bf{A})} {{n}^{3/2}} \geq \frac{8} {3\pi }\sqrt{p(1 - p)}\ \ \mbox{ a.s.}$$
(6.5)

Combining Ineq. (6.4) with Ineq. (6.5), we obtain

$$\mathcal{E}(\bf{A}) = {n}^{3/2}\left ( \frac{8} {3\pi }\sqrt{p(1 - p)} + o(1)\right )\ \mbox{ a.s.}$$

Recalling that A is the adjacency matrix of G n (p), we thus obtain:

Theorem 6.2.

Almost every graph G in \({\mathcal{G}}_{n}(p)\) satisfies:

$$\mathcal{E}(G) = {n}^{3/2}\left ( \frac{8} {3\pi }\sqrt{p(1 - p)} + o(1)\right ).$$
(6.6)
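
As an added illustration (reusing the hypothetical helpers matrix_energy and adjacency_gnp sketched earlier in this section), one can watch the ratio between ℰ(G n (p)) and the estimate (6.6) approach 1:

    import numpy as np

    # Illustrative sketch (not from the original text); reuses the assumed helpers above.
    rng = np.random.default_rng(1)
    n, p = 2000, 0.3
    A = adjacency_gnp(n, p, rng)
    predicted = (8 / (3 * np.pi)) * np.sqrt(p * (1 - p)) * n**1.5
    print(matrix_energy(A) / predicted)   # tends to 1 as n grows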

6.2 The Energy of the Random Multipartite Graph

We begin with the definition of the random multipartite graph. We use \({K}_{n;{\nu }_{1},\ldots,{\nu }_{m}}\) to denote the complete m-partite graph with vertex set [n] whose parts V 1, …, V m (m = m(n) ≥ 2) are such that \(\vert {V }_{i}\vert = n{\nu }_{i} = n{\nu }_{i}(n)\), i = 1, …, m. Let \({\mathcal{G}}_{n;{\nu }_{1}\ldots {\nu }_{m}}(p)\) be the set of random m-partite graphs with vertex set [n] in which the edges are chosen independently with probability p from the set of edges of \({K}_{n;{\nu }_{1},\ldots,{\nu }_{m}}\). We further introduce two classes of random m-partite graphs. Denote by \({\mathcal{G}}_{n,m}(p)\) and \({\mathcal{G}^{\prime}}_{n,m}(p)\) the sets of random m-partite graphs satisfying, respectively, the following conditions:

$$\lim\limits_{n\rightarrow \infty }\max \{{\nu }_{1}(n),\ldots,{\nu }_{m}(n)\} > 0\ \mbox{ and }\ \lim\limits_{n\rightarrow \infty }\frac{{\nu }_{i}(n)} {{\nu }_{j}(n)} = 1\ \mbox{ for all }1 \leq i,j \leq m$$
(6.7)

and

$$\lim\limits_{n\rightarrow \infty }\max \{{\nu }_{1}(n),\ldots,{\nu }_{m}(n)\} = 0.$$
(6.8)

One can easily see that in order to obtain an estimate of the energy of the random multipartite graph \({G}_{n;{\nu }_{1}\ldots {\nu }_{m}}(p) \in {\mathcal{G}}_{n;{\nu }_{1}\ldots {\nu }_{m}}(p)\), we need to investigate the spectral distribution of the random matrix \(\bf{A}({G}_{n;{\nu }_{1}\ldots {\nu }_{m}}(p))\). It is not difficult to verify that \(\bf{A}({G}_{n;{\nu }_{1}\ldots {\nu }_{m}}(p))\) is a special case of a random matrix \({\bf{X}}_{n}({\nu }_{1},\ldots,{\nu }_{m})\) (or X n, m for short), called a random multipartite matrix, which has the following properties:

  • The x ij ’s are independent random variables with x ij  = x ji  .

  • The x ij ’s have the same distribution F 1 if i, j ∈ V k , whereas the x ij ’s have the same distribution F 2 if i ∈ V k and j ∈ [n] ∖ V k , where V 1, …, V m are the parts of \({K}_{n;{\nu }_{1},\ldots,{\nu }_{m}}\) and k is an integer such that 1 ≤ k ≤ m.

  •  | x ij  | ≤ K for some constant K.

Evidently, if F 1 is a point mass at 0 and F 2 is a Bernoulli distribution with mean p, then the random matrix X n, m coincides with the adjacency matrix \(\bf{A}({G}_{n;{\nu }_{1}\ldots {\nu }_{m}}(p))\). Thus, we can readily evaluate the energy \(\mathcal{E}({G}_{n;{\nu }_{1}\ldots {\nu }_{m}}(p))\) once we obtain the spectral distribution of X n, m . In fact, the random matrix X n, m is a special case of the random matrix considered by Anderson and Zeitouni [19] in a more general setting called the band matrix model which may be regarded as a generalization of the Wigner matrix. We shall employ their results to deal with the spectral distribution of X n, m .

The rest of this section is divided into three parts. In the first part, we present exact estimates of the energies of the random graphs \({G}_{n,m}(p) \in {\mathcal{G}}_{n,m}(p)\) and \({G^{\prime}}_{n,m}(p) \in {\mathcal{G}^{\prime}}_{n,m}(p)\) by exploring the spectral distribution of the band matrix. In the second part, we establish lower and upper bounds on the energy of the random multipartite graph \({G}_{n;{\nu }_{1}\ldots {\nu }_{m}}(p)\). In the third part, we obtain an exact estimate of the energy of the random bipartite graph \({G}_{n;{\nu }_{1},{\nu }_{2}}(p)\).

6.2.1 The Energy of G n, m (p) and G′ n, m (p)

Here we formulate exact estimates of the energy of the random graphs G n, m (p) and G′ n, m (p). For this purpose, we establish the following theorem. In order to state our result, we first present some notation. Let I n, m  = (i p, q ) n ×n be a quasi-unit matrix such that

$$\begin{array}{rcl}{ i}_{p,q} = \left \{\begin{array}{ll} 1&\mbox{ if }p,q \in {V }_{k} \\ 0&\mbox{ if }p \in {V }_{k}\mbox{ and }q \in [n] \setminus {V }_{k} \end{array} \right.& & \\ \end{array}$$

where V 1, …, V m are the parts of \({K}_{n;{\nu }_{1},\ldots,{\nu }_{m}}\) and k is an integer such that 1 ≤ k ≤ m. Set μ i  =  ∫x dF i (i = 1, 2) and

$${\overline{\bf{X}}}_{n,m} ={ \bf{X}}_{n,m} - {\mu }_{1}{\bf{I}}_{n,m} - {\mu }_{2}({\bf{J}}_{n} -{\bf{I}}_{n,m}).$$

Evidently, \({\overline{\bf{X}}}_{n,m}\) is a random multipartite matrix as well, in which each entry has mean 0. In order to make our statement concise, we define \({\Delta }^{2} = ({\sigma }_{1}^{2} + (m - 1){\sigma }_{2}^{2})/m\), where \({\sigma }_{i}^{2}\) denotes the variance of F i (i = 1, 2).

Theorem 6.3.

  • If condition (6.7) holds, then

    $${\Phi }_{{n}^{-1/2}\,{\overline{\bf{X}}}_{ n,m}}(x) {\rightarrow }_{P}\Psi (x)\mbox{ as }n \rightarrow \infty $$

    i.e., the ESD \({\Phi }_{{n}^{-1/2}\,{\overline{\bf{X}}}_{ n,m}}(x)\) converges weakly to a probability distribution Ψ(x) as n tends to infinity, where Ψ(x) has the density

    $$\psi (x) = \frac{1} {2\pi {\Delta }^{2}}\sqrt{4{\Delta }^{2 } - {x}^{2}}\ {\mathbf{1}}_{\vert x\vert \leq 2\Delta }.$$
  •  If condition (6.8) holds, then \({\Phi }_{{n}^{-1/2}\,{\overline{\bf{X}}}_{ n,m}}(x) {\rightarrow }_{P}\Phi (x)\) as n →∞ . ■

Our theorem can be proven by a result of Anderson and Zeitouni [19]. We begin with a brief introduction of the band matrix model, defined by Anderson and Zeitouni [19], from which one can readily see that a random multipartite matrix is a band matrix.

We fix a nonempty set \(\mathcal{C} =\{ {c}_{1},{c}_{2},\ldots,{c}_{m}\}\) which is finite or countably infinite. The elements of \(\mathcal{C}\) are called colors. Let κ be a surjection from [n] to the color set \(\mathcal{C}\). Then we say that κ(i) is the color of i. Naturally, we can obtain a partition V 1, , V m of [n] according to the colors of its elements, i.e., two elements i and i′ in [n] belong to the same part V j if and only if their colors are identical. We next define the probability measure θ m on the color set as:

$${\theta }_{m}(C) = {\theta }_{m(n)}(C) = \vert {\kappa }^{-1}(C)\vert /n$$

where \(C \subseteq \mathcal{C}\) and \({\kappa }^{-1}(C) =\{ x \in [n] : \kappa (x) \in C\}\). Evidently, the probability space \((\mathcal{C},{2}^{\mathcal{C}},{\theta }_{m})\) is discrete. Set \(\theta =\lim\limits_{n\rightarrow \infty }{\theta }_{m}\). For each positive integer k, we fix a bounded nonnegative function d (k) on the color set and a symmetric bounded nonnegative function s (k) on the product of two copies of the color set. We make the following assumptions:

  1. d (k) is constant for k ≠ 2.

  2. s (k) is constant for k ∉ {2, 4}.

Let {ξ ij } i, j = 1 n be a family of independent real-valued mean zero random variables. Suppose that for all 1 ≤ i, j ≤ n, and positive integers k,

$$\mathbb{E}(\vert {\xi }_{ij}{\vert }^{k}) \leq \left \{\begin{array}{ll} {s}^{(k)}(\kappa (i),\kappa (j))&\mbox{ if }i\neq j \\ {d}^{(k)}(\kappa (i)) &\mbox{ if }i = j, \end{array} \right.$$

and moreover, assume that equality holds above whenever one of the conditions (a) and (b) holds: (a) k = 2; (b) i ≠ j and k = 4.

In other words, the rule is to enforce equality whenever the not-necessarily-constant functions d (2), s (2), or s (4) are involved but otherwise merely to impose a bound.

We are now ready to present the random symmetric matrix Y n , called a band matrix, in which the entries are the random variables ξ ij . Evidently, Y n is the same as \({\overline{\bf{X}}}_{n,m}\) provided that

$${ s}^{(2)}(\kappa (i),\kappa (j)) = \left \{\begin{array}{ll} {\sigma }_{1}^{2} & \mbox{ if }\kappa (i) = \kappa (j) \\ {\sigma }_{2}^{2} & \mbox{ if }\kappa (i)\neq \kappa (j)\\ \end{array} \right.\mbox{ and }{d}^{(2)}(\kappa (i)) = {\sigma }_{ 1}^{2},1 \leq i,j \leq n.$$
(6.9)

So the random multipartite matrix \({\overline{\bf{X}}}_{n,m}\) is a special case of the band matrix Y n .

Define the standard semicircle distribution Φ 0, 1 of zero mean and unit variance to be the compactly supported measure on the real line with density

$${\phi }_{0,1}(x) = \frac{1} {2\pi }\sqrt{4 - {x}^{2}}\ {\mathbf{1}}_{\vert x\vert \leq 2}.$$

Anderson and Zeitouni investigated the LSD of Y n and proved the following result (Theorem 3.5 in [19]):

Lemma 6.3.

If ∫ s (2) (c,c′)θ(d c′) ≡ 1, then \({\Phi }_{{n}^{-1/2}\,{\mathbf{Y}}_{n}}(x)\) converges weakly to the standard semicircle probability distribution Φ 0,1 as n tends to infinity. ■

Remark 6.2.

The main approach employed by Anderson and Zeitouni to prove the assertion is a combinatorial enumeration scheme for the different types of terms that contribute to the expectation of products of traces of powers of the matrices. It is worthwhile to point out that by an analogous method, the moment approach, one can readily obtain a stronger assertion for \({\overline{\bf{X}}}_{n,m}\), namely that the convergence holds with probability 1. Moreover, Anderson and Zeitouni [19] showed that for each positive integer k,

$$\lim\limits_{n\rightarrow \infty }\int \nolimits \nolimits {x}^{k}\,\mathrm{d}{\Phi }_{{n}^{-1/2}\,{\overline{\bf{X}}}_{n,m}}(x) = \left \{\begin{array}{ll} \int \nolimits \nolimits {x}^{k}\,\mathrm{d}\Psi (x)\ \ \mbox{ a.s.}&\mbox{ if condition (6.7) holds,} \\ \int \nolimits \nolimits {x}^{k}\,\mathrm{d}\Phi (x)\ \ \mbox{ a.s.} &\mbox{ if condition (6.8) holds.} \end{array} \right.$$
(6.10)

However, we shall not present the proof of Eq. (6.10) here since the arguments of the two methods are similar and the calculation of the moment approach is rather tedious. We refer the readers to Bai’s survey [25] for details.

To prove Theorem 6.3 by means of Lemma 6.3, we just need to verify that

$$\int \nolimits \nolimits {s}^{(2)}(c,c^{\prime})\theta (\mathrm{d}c^{\prime}) \equiv 1.$$

For Theorem 6.3(i), we consider the matrix \({\Delta }^{-1}{\overline{\bf{X}}}_{n,m}\) where

$${\Delta }^{2} = ({\sigma }_{ 1}^{2} + (m - 1){\sigma }_{ 2}^{2})/m.$$

Note that condition (6.7) implies that θ m (c i ) → 1 ∕ m as n → ∞, 1 ≤ i ≤ m. By means of condition (6.9), it is readily seen that for the random matrix \({\Delta }^{-1}{\overline{\bf{X}}}_{n,m}\),

$$\int \nolimits \nolimits {s}^{(2)}(c,c^{\prime})\theta (\mathrm{d}c^{\prime}) = \frac{1} {{\Delta }^{2}}\left (\frac{{\sigma }_{1}^{2}} {m} + \frac{(m - 1){\sigma }_{2}^{2}} {m} \right ) \equiv 1.$$

Consequently, Lemma 6.3 implies that

$${\Phi }_{{n}^{-1/2}\,{\Delta }^{-1}{\overline{\bf{X}}}_{ n,m}} {\rightarrow }_{P}{\Phi }_{0,1}\mbox{ as }n \rightarrow \infty.$$

Therefore,

$${\Phi }_{{n}^{-1/2}\,{\overline{\bf{X}}}_{ n,m}} {\rightarrow }_{P}\Psi (x)\mbox{ as }n \rightarrow \infty $$

and thus, the first part of Theorem 6.3 follows.

For the second part of Theorem 6.3, we consider the matrix \({\sigma }_{2}^{-1}{\overline{\bf{X}}}_{n,m}\). Note that condition (6.8) implies that \(\theta ({c}_{i}) =\lim\limits_{n\rightarrow \infty }{\theta }_{m}({c}_{i}) =\lim\limits_{n\rightarrow \infty }{\nu }_{i}(n) = 0\), 1 ≤ i ≤ m. By virtue of condition (6.9), if c ≠ c′, then s (2)(c, c′) = 1. Consequently, for the random matrix \({\sigma }_{2}^{-1}{\overline{\bf{X}}}_{n,m}\), we have

$$\begin{array}{rcl} \int \nolimits \nolimits {s}^{(2)}(c,c^{\prime})\theta (\mathrm{d}c^{\prime})& =& \int \nolimits \nolimits {s}^{(2)}(c,c^{\prime}){\chi }_{{}_{ \mathcal{C}\setminus \{c\}}}\theta (\mathrm{d}c^{\prime}) \\ & =& \int \nolimits \nolimits {\chi }_{{}_{\mathcal{C}\setminus \{c\}}}\theta (\mathrm{d}c^{\prime}) = \theta (\mathcal{C}\setminus \{ c\}) \equiv 1.\end{array}$$

As a result, Lemma 6.3 implies that

$${\Phi }_{{n}^{-1/2}\,{\sigma }_{ 2}^{-1}{\overline{\bf{X}}}_{n,m}} {\rightarrow }_{P}{\Phi }_{0,1}\mbox{ as }n \rightarrow \infty.$$

Therefore,

$${\Phi }_{{n}^{-1/2}\,{\overline{\bf{X}}}_{ n,m}} {\rightarrow }_{P}\Phi (x)\mbox{ as }n \rightarrow \infty $$

and thus, the second part follows.

We now employ Theorem 6.3 to estimate the energy of \({G}_{n;{\nu }_{1}\ldots {\nu }_{m}}(p)\) under condition (6.7) or (6.8). For convenience, we use A n, m to denote the adjacency matrix A(G n, m (p)). One readily sees that if a random multipartite matrix X n, m satisfies condition (6.7), F 1 is a point mass at 0, and F 2 is a Bernoulli distribution with mean p, then X n, m coincides with the adjacency matrix A n, m . Set

$${ \overline{\bf{A}}}_{n,m} ={ \bf{A}}_{n,m} - p({\bf{J}}_{n} -{\bf{I}}_{n,m})$$
(6.11)

where I n, m is the quasi-unit matrix whose parts are the same as A n, m . Evidently, each entry of \({\overline{\bf{A}}}_{n,m}\) has mean 0. It follows from the first part of Theorem 6.3 that

$${\Phi }_{{n}^{-1/2}\,{\overline{\bf{A}}}_{ n,m}} {\rightarrow }_{P}\Psi (x)\mbox{ as }n \rightarrow \infty.$$

Since the density of Ψ(x) is bounded with finite support, we can use a method similar to the one used for obtaining Eq. (6.3) to prove that

$$\int \nolimits \nolimits \vert x\vert \,\mathrm{d}{\Phi }_{{n}^{-1/2}\,{\overline{\bf{A}}}_{ n,m}}(x) {\rightarrow }_{P} \int \nolimits \nolimits \vert x\vert \,\mathrm{d}\Psi (x)\mbox{ as }n \rightarrow \infty.$$

Consequently,

$$\begin{array}{rcl} \mathcal{E}\left ({\overline{\bf{A}}}_{n,m}\right )/{n}^{3/2}& = & \int \nolimits \nolimits \vert x\vert \,\mathrm{d}{\Phi }_{{n}^{-1/2}\,{\overline{\bf{A}}}_{ n,m}}(x) \\ & {\rightarrow }_{P}& \int \nolimits \nolimits \vert x\vert \,\mathrm{d}\Psi (x)\mbox{ as }n \rightarrow \infty \\ & = & \frac{m} {2\pi (m - 1){\sigma }_{2}^{2}}{ \int \nolimits }_{-2\sqrt{\frac{m-1} {m}} {\sigma }_{2}}^{2\sqrt{\frac{m-1} {m}} {\sigma }_{2}}\vert x\vert \sqrt{4\frac{(m - 1){\sigma }_{2 }^{2 }} {m} - {x}^{2}}\,\mathrm{d}x \\ & = & \frac{8} {3\pi }\sqrt{\frac{m - 1} {m}} \,{\sigma }_{2} = \frac{8} {3\pi }\,\sqrt{\frac{m - 1} {m} \,p(1 - p)}.\end{array}$$

Therefore, a.e. random matrix \({\overline{\bf{A}}}_{n,m}\) satisfies:

$$\mathcal{E}\left ({\overline{\bf{A}}}_{n,m}\right ) = {n}^{3/2}\left ( \frac{8} {3\pi }\sqrt{\frac{m - 1} {m} p(1 - p)} + o(1)\right ).$$

We now turn to the estimate of the energy ℰ(A n, m ) = ℰ(G n, m (p)). Evidently,

$${\bf{J}}_{n} -{\bf{I}}_{n,m} = ({\bf{J}}_{n} -{\bf{I}}_{n}) + ({\bf{I}}_{n} -{\bf{I}}_{n,m}).$$

By virtue of Theorem 4.17, we arrive at

$$\mathcal{E}({\bf{J}}_{n} -{\bf{I}}_{n,m}) \leq \mathcal{E}({\bf{J}}_{n} -{\bf{I}}_{n}) + \mathcal{E}({\bf{I}}_{n} -{\bf{I}}_{n,m}).$$

Recalling the definition of the quasi-unit matrix I n, m and the fact that \(\mathcal{E}({\bf{J}}_{n} -{\bf{I}}_{n}) = 2(n - 1)\), we have ℰ(J n  − I n, m ) = O(n). According to Eq. (6.11), we can use an argument similar to the one that led from \(\mathcal{E}(\overline{\bf{A}})\) to the estimate of ℰ(A), to show that a.e. random matrix A n, m satisfies the following equation:

$$\mathcal{E}({\bf{A}}_{n,m}) = {n}^{3/2}\left ( \frac{8} {3\pi }\sqrt{\frac{m - 1} {m} p(1 - p)} + o(1)\right ).$$

Since the random matrix A n, m is the adjacency matrix of G n, m (p), we thus have:

Theorem 6.4.

Almost every graph G in \({\mathcal{G}}_{n,m}(p)\) satisfying condition (6.7) obeys

$$\mathcal{E}(G) = {n}^{3/2}\left ( \frac{8} {3\pi }\sqrt{\frac{m - 1} {m} p(1 - p)} + o(1)\right ).$$
(6.12)
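
For a balanced m-partite graph, with ν i  = 1∕m, condition (6.7) holds, and Theorem 6.4 can be checked by a simulation along the same lines (an added sketch, not part of the original text; numpy is assumed):

    import numpy as np

    # Illustrative sketch (not from the original text); numpy assumed.
    rng = np.random.default_rng(2)
    n, m, p = 2000, 4, 0.5
    part = np.repeat(np.arange(m), n // m)      # balanced parts: nu_i = 1/m
    allowed = part[:, None] != part[None, :]    # edges only between different parts
    upper = np.triu((rng.random((n, n)) < p) & allowed, 1).astype(float)
    A = upper + upper.T

    energy = np.abs(np.linalg.eigvalsh(A)).sum()
    predicted = (8 / (3 * np.pi)) * np.sqrt((m - 1) / m * p * (1 - p)) * n**1.5
    print(energy / predicted)                   # approaches 1 as n grows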

In what follows, we use A′ n, m to denote the adjacency matrix A(G′ n, m (p)). It is easily seen that if a random multipartite matrix X n, m satisfies condition (6.8), if F 1 is a point mass at 0, and if F 2 is a Bernoulli distribution with mean p, then X n, m coincides with the adjacency matrix A′ n, m . Set

$${\overline{\bf{A}^{\prime}}}_{n,m} ={ \bf{A}^{\prime}}_{n,m} - p({\bf{J}}_{n} -{\bf{I}^{\prime}}_{n,m})$$

where I′ n, m is the quasi-unit matrix whose parts are the same as those of A′ n, m . One can readily check that each entry in \({\overline{\bf{A}^{\prime}}}_{n,m}\) has mean 0. It follows from the second part of Theorem 6.3 that

$${\Phi }_{{n}^{-1/2}\,{\overline{\mathbf{A^{\prime}}}}_{ n,m}}(x) {\rightarrow }_{P}\Phi (x)\mbox{ as }n \rightarrow \infty.$$

Employing arguments analogous to those used to estimate ℰ(p(J n  − I n, m )), \(\mathcal{E}({\overline{\bf{A}}}_{n,m})\), and ℰ(A n, m ), one can evaluate, respectively, ℰ(p(J n  − I′ n, m )), \(\mathcal{E}({\overline{\bf{A}^{\prime}}}_{n,m})\), and ℰ(A′ n, m ), and finally arrive at:

Theorem 6.5.

Almost every graph G in \({\mathcal{G}^{\prime}}_{n,m}(p)\) satisfying condition (6.8) obeys:

$$\mathcal{E}(G) = {n}^{3/2}\left ( \frac{8} {3\pi }\sqrt{p(1 - p)} + o(1)\right ).$$
(6.13)

6.2.2 The Energy of G n; ν 1 …ν m (p)

In this subsection we give an estimate of the energy of the random multipartite graph \({G}_{n;{\nu }_{1}\ldots {\nu }_{m}}(p)\) satisfying the condition:

$$\lim\limits_{n\rightarrow \infty }\max \{{\nu }_{1}(n),\ldots,{\nu }_{m}(n)\} > 0\ \mbox{ and there exist }{\nu }_{i},\ {\nu }_{j}\ \mbox{ such that }\lim\limits_{n\rightarrow \infty }\frac{{\nu }_{i}(n)} {{\nu }_{j}(n)} < 1.$$
(6.14)

Moreover, for random bipartite graphs \({G}_{n;{\nu }_{1},{\nu }_{2}}(p)\) satisfying \(\lim\limits_{n\rightarrow \infty }{\nu }_{i}(n) > 0\) (i = 1, 2), we formulate an exact estimate of the energy.

Anderson and Zeitouni [19] established the existence of the LSD of X n, m with partitions satisfying condition (6.14). Unfortunately, they did not obtain the exact form of the LSD, which appears to be a much harder and more complicated task. However, we can establish lower and upper bounds for the energy \(\mathcal{E}({G}_{n;{\nu }_{1}\ldots {\nu }_{m}}(p))\) in another way.

Here, we still denote the adjacency matrix of the multipartite graph satisfying condition (6.14) by A n, m . Without loss of generality, we assume that for some r ≥ 1, | V 1 | , …, | V r  | are of order O(n) while | V r + 1 | , …, | V m  | are of order o(n). Let A′ n, m be a random symmetric matrix such that

$${\bf{A}^{\prime}}_{n,m}(ij) = \left \{\begin{array}{ll} {\bf{A}}_{n,m}(ij)&\mbox{ if }i\mbox{ or }j\notin {V }_{s},1 \leq s \leq r \\ {t}_{ij} &\mbox{ if }i,j \in {V }_{s},1 \leq s \leq r\mbox{ and }i > j \\ 0 &\mbox{ if }i,j \in {V }_{s}(r + 1 \leq s \leq m)\mbox{ or }i = j\\ \end{array} \right.$$

where the t ij ’s are independent Bernoulli r.v. with mean p. Evidently, A′ n, m is a random multipartite matrix. By means of Eq. (6.13), we have

$$\mathcal{E}({\bf{A}^{\prime}}_{n,m}) = \left ( \frac{8} {3\pi }\sqrt{p(1 - p)} + o(1)\right ){n}^{3/2}.$$

Set

$${ \bf{D}}_{n} ={ \bf{A}^{\prime}}_{n,m}-{\bf{A}}_{n,m} ={ \left (\begin{array}{lllllll} {\bf{K}}_{1} & & & & \\ & {\bf{K}}_{2} & & & \\ & & \ddots & & \\ & & & {\bf{K}}_{r}& \\ & && &\mathbf{0}\\ & & & &&\ddots&\\ & & & & & &\mathbf{0} \end{array} \right )}_{n\times n}.$$
(6.15)

Then one can readily see that K i  (i = 1, …, r) is a Wigner matrix, and thus, a.e. K i satisfies:

$$\mathcal{E}({\bf{K}}_{i}) = \left ( \frac{8} {3\pi }\,\sqrt{p(1 - p)} + o(1)\right ){({\nu }_{i}\,n)}^{3/2}.$$

Consequently, a.e. matrix D n satisfies:

$$\mathcal{E}({\mathbf{D}}_{n}) = \left ( \frac{8} {3\pi }\sqrt{p(1 - p)} + o(1)\right )\left ({\nu }_{1}^{3/2} + \cdots + {\nu }_{ r}^{3/2}\right ){n}^{3/2}.$$

By Eq. (6.15), \({\bf{A}}_{n,m} +{ \mathbf{D}}_{n} ={ \bf{A}^{\prime}}_{n,m}\) and \({\bf{A}^{\prime}}_{n,m} + (-{\mathbf{D}}_{n}) ={ \bf{A}}_{n,m}\). Employing Theorem 4.17, we deduce

$$\mathcal{E}({\bf{A}^{\prime}}_{n,m}) -\mathcal{E}({\mathbf{D}}_{n}) \leq \mathcal{E}({\bf{A}}_{n,m}) \leq \mathcal{E}({\bf{A}^{\prime}}_{n,m}) + \mathcal{E}({\mathbf{D}}_{n}).$$

Recalling that A n, m is the adjacency matrix of \({G}_{n;{\nu }_{1}\ldots {\nu }_{m}}(p)\), we arrive at the following result:

Theorem 6.6.

Almost every graph G in \({\mathcal{G}}_{n;{\nu }_{1}\ldots {\nu }_{m}}(p)\) satisfies the inequalities:

$$\begin{array}{rcl} \left (1 -\sum\limits_{i=1}^{r}{\nu }_{ i}^{3/2}\right ){n}^{3/2}& \leq & \mathcal{E}(G){\left ( \frac{8} {3\pi }\sqrt{p(1 - p)} + o(1)\right )}^{-1} \\ & \leq & \left (1 + \sum\limits_{i=1}^{r}{\nu }_{ i}^{3/2}\right ){n}^{3/2}. \square\\ \end{array}$$

Remark 6.3.

Since ν 1 , …, ν r are positive real numbers with ∑ i = 1 rν i  ≤ 1 and each ν i  < 1 (as m ≥ 2), we have \({\sum \nolimits }_{i=1}^{r}{\nu }_{i}(1 - {\nu }_{i}^{1/2}) > 0\). Therefore, \({\sum \nolimits }_{i=1}^{r}{\nu }_{i} >{ \sum \nolimits }_{i=1}^{r}{\nu }_{i}^{3/2}\), and thus, \(1 >{ \sum \nolimits }_{i=1}^{r}{\nu }_{i}^{3/2}\). Hence, the above theorem implies that a.e. random graph \({G}_{n;{\nu }_{1}\ldots {\nu }_{m}}(p)\) obeys:

$$\mathcal{E}({G}_{n;{\nu }_{1}\ldots {\nu }_{m}}(p)) = O({n}^{3/2}).$$
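
As an added numerical illustration (not part of the original text; numpy is assumed), one can check that a simulated unbalanced tripartite graph falls between the two bounds of Theorem 6.6:

    import numpy as np

    # Illustrative sketch (not from the original text); numpy assumed.
    rng = np.random.default_rng(5)
    n, p = 3000, 0.5
    nu = np.array([0.5, 0.3, 0.2])              # unbalanced parts: condition (6.14) holds, r = 3
    part = np.repeat(np.arange(3), (nu * n).astype(int))
    allowed = part[:, None] != part[None, :]
    upper = np.triu((rng.random((n, n)) < p) & allowed, 1).astype(float)
    A = upper + upper.T

    ratio = np.abs(np.linalg.eigvalsh(A)).sum() / ((8 / (3 * np.pi)) * np.sqrt(p * (1 - p)) * n**1.5)
    s = (nu**1.5).sum()
    print(1 - s, ratio, 1 + s)                  # the ratio lies between the two bounds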

6.2.3 The Energy of Random Bipartite Graphs

In this subsection, we investigate the energy of random bipartite graphs \({G}_{n;{\nu }_{1},{\nu }_{2}}(p)\) satisfying \(\lim\limits_{n\rightarrow \infty }{\nu }_{i}(n) > 0\) (i = 1, 2) and present the precise estimate of \(\mathcal{E}({G}_{n;{\nu }_{1},{\nu }_{2}}(p))\) by employing the Marčenko–Pastur Law.

For convenience, set n 1 = ν1n and n 2 = ν2n. Let I n, 2 be a quasi-unit matrix with the same partition as A n, 2. Set

$${ \overline{\bf{A}}}_{n,2} ={ \bf{A}}_{n,2}-p({\bf{J}}_{n}-{\bf{I}}_{n,2}) = \left [\begin{array}{ll} \bf{O}&{\bf{X}}^{\mathrm{T}} \\ \bf{X}&\bf{O} \end{array} \right ]$$
(6.16)

where X is a random matrix of order n 2 ×n 1 in which the entries X(ij) are iid. with mean zero and variance \({\sigma }^{2} = p(1 - p)\). By

$$\left (\begin{array}{ll} \lambda {\bf{I}}_{{n}_{1}} & \mathbf{0} \\ -\bf{X}&\lambda {\bf{I}}_{{n}_{2}} \end{array} \right )\left (\begin{array}{ll} \lambda {\bf{I}}_{{n}_{1}} & -{\bf{X}}^{\mathrm{T}} \\ \mathbf{0} &\lambda {\bf{I}}_{{n}_{2}} - {\lambda }^{-1}\bf{X}{\bf{X}}^{\mathrm{T}} \end{array} \right ) = \lambda \left (\begin{array}{ll} \lambda {\bf{I}}_{{n}_{1}} & -{\bf{X}}^{\mathrm{T}} \\ -\bf{X}&\lambda {\bf{I}}_{{n}_{2}} \end{array} \right ),$$

we have

$${\lambda }^{n} \cdot {\lambda }^{{n}_{1} }\vert \lambda {\bf{I}}_{{n}_{2}} - {\lambda }^{-1}\bf{X}{\bf{X}}^{\mathrm{T}}\vert = {\lambda }^{n}\vert \lambda {\bf{I}}_{ n} -{\overline{\bf{A}}}_{n,2}\vert $$

and, consequently,

$${\lambda }^{{n}_{1} }\vert {\lambda }^{2}{\bf{I}}_{{ n}_{2}} -\bf{X}{\bf{X}}^{\mathrm{T}}\vert = {\lambda }^{{n}_{2} }\vert \lambda {\bf{I}}_{n} -{\overline{\bf{A}}}_{n,2}\vert.$$

Thus, the eigenvalues of \({\overline{\bf{A}}}_{n,2}\) are symmetric with respect to the origin, and moreover, \(\overline{\lambda }\) is an eigenvalue of \(\frac{1} {\sqrt{{n}_{1}}} \,{\overline{\bf{A}}}_{n,2}\) if and only if \({\overline{\lambda }}^{2}\) is an eigenvalue of \(\frac{1} {{n}_{1}} \,\bf{X}{\bf{X}}^{\mathrm{T}}\). Therefore, we can characterize the spectrum of \({\overline{\bf{A}}}_{n,2}\) by means of the spectrum of XX T. Bai formulated the LSD of \(\frac{1} {{n}_{1}} \,\bf{X}{\bf{X}}^{\mathrm{T}}\) (Theorem 2.5 in [25]) by the moment approach.

Lemma 6.4 (Marčenko–Pastur Law [25]). 

Suppose that the X(ij)’s are iid. with mean zero and variance \({\sigma }^{2} = p(1 - p)\) and ν2∕ν1 → y ∈ (0,∞). Then, with probability 1, the ESD \({\Phi }_{ \frac{1} {{n}_{1}} \bf{X}{\bf{X}}^{\mathrm{T}}}\) converges weakly to the Marčenko–Pastur Law Fy as n →∞ where Fy has the density

$${f}_{y}(x) = \frac{1} {2\pi p(1 - p)xy}\sqrt{(b - x)(x - a)}\ {\mathbf{1}}_{a\leq x\leq b}$$

and has a point mass \(1 - 1/y\) at the origin if y > 1, where \(a = p(1 - p){(1 -\sqrt{y})}^{2}\) and \(b = p(1 - p){(1 + \sqrt{y})}^{2}\). ■
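
A hedged numerical check of Lemma 6.4 (an added sketch, not from the original text; numpy assumed) can be done through the first moment: both \(\int x\,\mathrm{d}{\Phi }_{ \frac{1} {{n}_{1}} \bf{X}{\bf{X}}^{\mathrm{T}}}(x)\) and ∫x dF y (x) equal p(1 − p), the former up to o(1):

    import numpy as np

    # Illustrative sketch (not from the original text); numpy assumed.
    rng = np.random.default_rng(3)
    p, n1, n2 = 0.5, 1800, 1200                     # y = n2/n1 = 2/3
    X = (rng.random((n2, n1)) < p) - p              # centered Bernoulli(p), variance p(1-p)
    lam = np.linalg.eigvalsh(X @ X.T / n1)          # spectrum of (1/n1) X X^T
    print(lam.mean(), p * (1 - p))                  # both approximately 0.25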

By the symmetry of the eigenvalues of \(\frac{1} {\sqrt{{n}_{1}}} \,{\overline{\bf{A}}}_{n,2}\), in order to evaluate the energy \(\mathcal{E}( \frac{1} {\sqrt{{n}_{1}}} \,{\overline{\bf{A}}}_{n,2})\), we just need to consider the positive eigenvalues. Define \({\Theta }_{{n}_{2}}(x) = \frac{1}{{n}_{2}}\sum \nolimits {\mathbf{1}}_{\overline{\lambda } <x}\), where the summation runs over the eigenvalues \(\overline{\lambda }\) of \(\frac{1} {\sqrt{{n}_{1}}} \,{\overline{\bf{A}}}_{n,2}\). One can see that the sum of the positive eigenvalues of \(\frac{1} {\sqrt{{n}_{1}}} \,{\overline{\bf{A}}}_{n,2}\) is equal to \({n}_{2}{ \int \nolimits \nolimits }_{0}^{\infty }x\,\mathrm{d}{\Theta }_{{n}_{2}}(x)\). Suppose that 0 < x 1 < x 2. Then we have

$${\Theta }_{{n}_{2}}({x}_{2}) - {\Theta }_{{n}_{2}}({x}_{1}) = {\Phi }_{ \frac{1} {{n}_{1}} \bf{X}{\bf{X}}^{\mathrm{T}}}({x}_{2}^{2}) - {\Phi }_{ \frac{ 1} {{n}_{1}} \bf{X}{\bf{X}}^{\mathrm{T}}}({x}_{1}^{2}).$$

It follows that

$$\int\limits_{0}^{\infty }x\,\mathrm{d}{\Theta }_{{ n}_{2}}(x) = \int\limits_{0}^{\infty }\sqrt{x}\,\mathrm{d}{\Phi }_{ \frac{ 1} {{n}_{1}} \bf{X}{\bf{X}}^{\mathrm{T}}}(x).$$

Note that all eigenvalues of \(\frac{1} {{n}_{1}} \,\bf{X}{\bf{X}}^{\mathrm{T}}\) are nonnegative. By the moment approach (see [25] for instance), we get

$$\begin{array}{rcl} \int \nolimits \nolimits {x}^{2}\,\mathrm{d}{\Phi }_{ \frac{ 1} {{n}_{1}} \bf{X}{\bf{X}}^{\mathrm{T}}}(x)& =& \int\limits_{0}^{\infty }{x}^{2}\,\mathrm{d}{\Phi }_{ \frac{ 1} {{n}_{1}} \bf{X}{\bf{X}}^{\mathrm{T}}}(x) \\ & \rightarrow & \int\limits_{0}^{\infty }{x}^{2}\,\mathrm{d}{F}_{ y}(x)\ \ \mbox{ a.s. }(n \rightarrow \infty ) \\ & =& \int \nolimits \nolimits {x}^{2}\,\mathrm{d}{F}_{ y}(x).\end{array}$$

Analogous to the proof of Eq. (6.3), we deduce that

$$\lim\limits_{n\rightarrow \infty }\int\limits_{0}^{\infty }\sqrt{x}\,\mathrm{d}{\Phi }_{ \frac{ 1} {{n}_{1}} \bf{X}{\bf{X}}^{\mathrm{T}}}(x) = \int\limits_{0}^{\infty }\sqrt{x}\,\mathrm{d}{F}_{ y}(x)\ \ \mbox{ a.s. }$$

Therefore,

$$\lim\limits_{n\rightarrow \infty }\int\limits_{0}^{\infty }x\,\mathrm{d}{\Theta }_{{ n}_{2}}(x) = \int\limits_{\sqrt{a}}^{\sqrt{b}} \frac{1} {\pi p(1 - p)y}\,\sqrt{(b - {x}^{2 } )({x}^{2 } - a)}\,\mathrm{d}x\ \ \mbox{ a.s. }$$

Let

$$\Lambda = \int\limits_{\sqrt{a}}^{\sqrt{b}} \frac{1} {\pi p(1 - p)y}\sqrt{(b - {x}^{2 } )({x}^{2 } - a)}\,\mathrm{d}x.$$

Then we obtain that for a.e. \({\overline{\bf{A}}}_{n,2}\), the sum of the positive eigenvalues is \((\Lambda + o(1))\,{n}_{2}\,\sqrt{{n}_{1}}\). Thus, a.e. \({\overline{\bf{A}}}_{n,2}\) satisfies:

$$\mathcal{E}({\overline{\bf{A}}}_{n,2}) = (2\Lambda + o(1)){n}_{2}\sqrt{{n}_{1}}.$$

Furthermore,

$$\Lambda = \frac{\sqrt{b}[(a + b)Ep (1 - a/b) - 2aEk (1 - a/b)]} {3\pi p(1 - p)y}$$

where Ek(t) is the complete elliptic integral of the first kind and Ep(t) is the complete elliptic integral of the second kind. For t ∈ [0, 1], these are defined as

$$Ek (t) = \int\limits_{0}^{\pi /2} \frac{\mathrm{d}\theta } {\sqrt{1 - t{\sin }^{2}\theta }}\quad \mbox{ and }\quad Ep (t) = \int\limits_{0}^{\pi /2}\sqrt{1 - t{\sin }^{2}\theta }\,\mathrm{d}\theta.$$

For any t, the numerical values of Ek(t) and Ep(t) are readily computed by appropriate software.
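
Concretely (an added sketch; scipy is assumed, and its ellipk and ellipe functions use exactly the parameter convention t above), Λ can be evaluated both from the closed form and by direct quadrature of its defining integral:

    import numpy as np
    from scipy.integrate import quad
    from scipy.special import ellipe, ellipk

    # Illustrative sketch (not from the original text); numpy/scipy assumed.
    def Lam(p, y):
        # Closed form of Lambda and a direct quadrature of its defining integral.
        a = p * (1 - p) * (1 - np.sqrt(y))**2
        b = p * (1 - p) * (1 + np.sqrt(y))**2
        closed = np.sqrt(b) * ((a + b) * ellipe(1 - a / b) - 2 * a * ellipk(1 - a / b)) \
                 / (3 * np.pi * p * (1 - p) * y)
        f = lambda x: np.sqrt(max((b - x * x) * (x * x - a), 0.0)) / (np.pi * p * (1 - p) * y)
        return closed, quad(f, np.sqrt(a), np.sqrt(b))[0]

    print(Lam(0.5, 2 / 3))   # the two returned values agree
    # Note: for y = 1 we have a = 0 and ellipk(1) diverges numerically;
    # the a = 0 limit of Lambda is 8 * sqrt(p * (1 - p)) / (3 * pi).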

Employing Eq. (6.16) and Theorem 4.17, we have

$$\mathcal{E}({\overline{\bf{A}}}_{n,2}) -\mathcal{E}(p({\bf{J}}_{n} -{\bf{I}}_{n,2})) \leq \mathcal{E}({\bf{A}}_{n,2}) \leq \mathcal{E}({\overline{\bf{A}}}_{n,2}) + \mathcal{E}(p({\bf{J}}_{n} -{\bf{I}}_{n,2})).$$

Together with the fact that \(\mathcal{E}(p({\bf{J}}_{n} -{\bf{I}}_{n,2})) = 2p\sqrt{{\nu }_{1 } \,{\nu }_{2}}\,n\) and \({n}_{2}\sqrt{{n}_{1}} = {\nu }_{2}\sqrt{{\nu }_{1}}\,{n}^{3/2}\), we get

$$\mathcal{E}({\bf{A}}_{n,2}) = (2{\nu }_{2}\sqrt{{\nu }_{1}}\,\Lambda + o(1)){n}^{3/2}.$$

Therefore, we arrive at the following theorem:

Theorem 6.7.

Almost every bipartite graph G in \({\mathcal{G}}_{n;{\nu }_{1},{\nu }_{2}}(p)\) with ν 2 ∕ν 1 → y satisfies

$$ \mathcal{E}(G) = (2{\nu }_{2}\sqrt{{\nu }_{1}}\,\Lambda + o(1)){n}^{3/2}. \square$$
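
The estimate of Theorem 6.7 can again be compared with a simulation (an added sketch reusing the hypothetical Lam function above; numpy is assumed):

    import numpy as np

    # Illustrative sketch (not from the original text); reuses the assumed Lam above.
    rng = np.random.default_rng(4)
    p, nu1, nu2, n = 0.5, 0.6, 0.4, 3000
    n1, n2 = int(nu1 * n), int(nu2 * n)
    B = (rng.random((n2, n1)) < p).astype(float)    # biadjacency block of G_{n;nu1,nu2}(p)
    A = np.block([[np.zeros((n1, n1)), B.T],
                  [B, np.zeros((n2, n2))]])

    energy = np.abs(np.linalg.eigvalsh(A)).sum()
    predicted = 2 * nu2 * np.sqrt(nu1) * Lam(p, nu2 / nu1)[0] * n**1.5
    print(energy / predicted)                       # close to 1 for large n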

We now compare the above estimate of the energy \(\mathcal{E}({G}_{n;{\nu }_{1},{\nu }_{2}}(p))\) with the bounds obtained in Theorem 6.6 for \(p = 1/2\). For the upper bound, Theorem 5.9 established the upper bound \(\frac{n} {2} (\sqrt{n} + 1)\) on the energy ℰ(G) for simple graphs G. It is easy to see that for \(p = 1/2\), this upper bound is better than ours. So we turn our attention to comparing the estimate of \(\mathcal{E}({G}_{n;{\nu }_{1},{\nu }_{2}}(1/2))\) in Theorem 6.7 with the lower bound in Theorem 6.6. By numerical computation (see the table below), the energy \(\mathcal{E}({G}_{n;{\nu }_{1},{\nu }_{2}}(1/2))\) of a.e. random bipartite graph \({G}_{n;{\nu }_{1},{\nu }_{2}}(1/2)\) is found to be close to our lower bound.

Table 1  Numerical comparison of the estimate of \(\mathcal{E}({G}_{n;{\nu }_{1},{\nu }_{2}}(1/2))\) in Theorem 6.7 with the lower bound in Theorem 6.6