
Generalized Elias schemes for efficient harvesting of truly random bits

Regular Contribution · International Journal of Information Security

Abstract

The problem of generating a sequence of true random bits (suitable for cryptographic applications) from random discrete or analog sources is considered. A generalized version, including vector quantization, of the classical approach by Elias for the generation of truly random bits is introduced, and its performance is analyzed, both in the finite case and asymptotically. The theory allows us to provide an alternative proof of the optimality of the original Elias’ scheme. We also consider the problem of deriving random bits from measurements of a Poisson process and from vectors of iid Gaussian variables. The comparison with the scheme of Elias, applied to geometric-like non-binary vectors, originally based on the iso-probability property of permutations of iid variables, confirms the potential of the generalized scheme proposed in our work.


Notes

  1. We consider here non-binary vectors. One could exploit the fact that geometric variables can be equivalently represented by sparse binary strings and use the original Elias’ scheme based on permutations.

References

  1. von Neumann, J.: Various techniques used in connection with random digits. Appl. Math. Ser., Notes by G. E. Forsythe, Nat. Bur. Stand. 12, 36–38 (1951)


  2. Hoeffding, W., Simon, G.: Unbiased coin tossing with a biased coin. Ann. Math. Stat. 41, 341–352 (1970)

  3. Elias, P.: The efficient construction of an unbiased random sequence. Ann. Math. Stat. 43, 865–870 (1972)


  4. Stout, Q., Warren, B.: Unbiased coin tossing with a biased coin. Ann. Probab. 12, 212–222 (1984)


  5. Peres, Y.: Unbiased coin tossing with a biased coin. Ann. Stat. 20, 590–597 (1992)


  6. Zhou, H., Bruck, J.: Efficient generation of random bits from finite state Markov chains. Inf. Theory IEEE Trans. 58(4), 2490–2506 (2012)


  7. Elias, P.: The efficient construction of an unbiased random sequence. Ann. Math. Stat. 43(3), 865–870 (1972)


  8. Stănică, P.: Good lower and upper bounds on binomial coefficients. J. Inequal. Pure Appl. Math. 2(3), Paper No. 30, 5 pp. (2001). http://www.emis.de/journals/JIPAM/images/043_00_JIPAM/043_00.pdf

  9. Leopardi, P.: Diameter bounds for equal area partitions of the unit sphere. Electron. Trans. Numer. Anal. 35, 1–16 (2009)

  10. Leopardi, P.: A partition of the unit sphere into regions of equal area and small diameter. Electron. Trans. Numer. Anal. 25, 309–327 (2006)

  11. Inglot, T.: Inequalities for quantiles of the chi-square distribution. Probab. Math. Stat. 30(2), 339–351 (2010)


  12. Qi, F.: Bounds for the ratio of two gamma functions. J. Inequal. Appl. (2010)

  13. Laforgia, A., Natalini, P.: On some inequalities for the gamma function. Adv. Dyn. Syst. Appl. 8(2), 261–267 (2013)



Corresponding author

Correspondence to Riccardo Bernardini.

Additional information

A reduced 4-page version of this manuscript, with only the claims of the major results, has been submitted for presentation at ICASSP 2014.

Appendix: Proofs and technical details


1.1 Formal definition of output process \(b_k\)

In order to make the exposition simpler, we will extend the set of bit values with a symbol \(\varLambda \) to be used as an “undefined bit.” As a first step, we define \(b_k^{(N)}\), the k-th output bit after processing block number N, as

$$\begin{aligned} b_k^{(N)} = \begin{cases} S_{N,k} &{} \text {if } \ell _{N} > k\\ \varLambda &{} \text {if } \ell _{N} \le k \end{cases} \end{aligned}$$

In other words, if after processing the N-th block the k-th bit has been generated, \(b_k^{(N)}\) is equal to the generated bit; otherwise, its value is the undefined bit value \(\varLambda \). Note that \(b_k^{(N)}\) is “stable” in the following sense: if \(b_k^{(N)} \ne \varLambda \) and \(M>N\), then \(b_k^{(N)} = b_k^{(M)}\).

We will define \({\mathscr {N}}{{({k})}} \in {\mathbb N}\cup \{\infty \}\) as the smallest N such that \(\ell _{N} > k\); if no such N exists (that is, \(\ell _{N} \le k\) for every N), we define \({\mathscr {N}}{{({k})}} = \infty \).

Now we can define \(b_k\), the k-th bit of the output process, as

$$\begin{aligned} b_k = \begin{cases} S_{{\mathscr {N}}(k),k} &{} \text {if } {\mathscr {N}}(k) < \infty \\ \varLambda &{} \text {if } {\mathscr {N}}(k) = \infty \end{cases} \end{aligned}$$

In words, if at some time the k-th bit is generated, \(b_k\) is the generated value, while if the number of generated bits never exceeds k, then \(b_k\) is undefined.

Remark 5

It is intuitive that the case \({\mathscr {N}}{{({k})}} =\infty \) is very unlikely (indeed, we will show that it is a zero-probability event); nevertheless, we must take it into account, since one can find realizations of the input process \({X}\) that correspond to a finite string of output bits; for example, if the conditioner uses the von Neumann algorithm, any realization ending with an infinite sequence of zeros produces a finite-length output.
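As a concrete illustration, here is a minimal sketch of the output-process construction above for a von Neumann conditioner on two-bit blocks; the conditioner choice and the use of Python's None as a stand-in for the undefined bit \(\varLambda \) are ours, for illustration only.

```python
from typing import List, Optional

def von_neumann(block: str) -> str:
    """Map a two-bit block to at most one output bit:
    01 -> 0, 10 -> 1, while 00 and 11 emit the empty string."""
    return {"01": "0", "10": "1", "00": "", "11": ""}[block]

def output_bit(blocks: List[str], k: int) -> Optional[str]:
    """Return b_k, the k-th bit of the output process, or None (our
    stand-in for the undefined bit Lambda) if it is never generated."""
    out = ""
    for block in blocks:
        out += von_neumann(block)   # ell_N grows by mu_N at each block
        if len(out) > k:            # the first N with ell_N > k is N(k)
            return out[k]
    return None                     # N(k) "infinite" for this realization

# A realization ending in zeros (as in Remark 5): after the first block,
# every "00" block contributes nothing, so b_0 is defined but b_1 is not.
print(output_bit(["10", "00", "00", "00"], 0))  # -> 1
print(output_bit(["10", "00", "00", "00"], 1))  # -> None
```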

Property 6

If \(\mathbb {E}\left[ {\mu _{0}}\right] > 0\), then for every \(k\in {\mathbb N}\), the event \({{{{b}_{k}}}} \ne \varLambda \) (or equivalently, \({\mathscr {N}}{{({k})}} < \infty \)) happens almost surely.

Remark 6

Note that \(\mathbb {E}\left[ {\mu _{0}}\right] = 0\) only if the conditioner produces the empty string for every possible input block.

Proof

It suffices to show that for every \(k\in {\mathbb N}\) the probability of having \(\ell _{N} \le k\) for all \(N\in {\mathbb N}\) is zero. Let \(E_{N,k}\) denote the event \(\ell _{N} \le k\) and observe that, since \(\ell _{N}\) is non-decreasing in N, \(E_{1,k} \supseteq E_{2,k} \supseteq \cdots \). Therefore, it suffices to show that \(\lim _{N\rightarrow \infty } P[E_{N,k}]=0\).

We will use the Chebyshev inequality. Observe that \(\ell _{N}\) is a random variable with mean \(N \mathbb {E}\left[ {\mu _{0}}\right] \) and variance \(N\sigma ^2\), where \(\sigma ^2\) is the variance of \(\mu _{0}\). If \(\ell _{N} \le k\), then

$$\begin{aligned} \ell _{N} - \mathbb {E}\left[ {\ell _{N}}\right] = \ell _{N} - N \mathbb {E}\left[ {\mu _{0}}\right] \le k - N\mathbb {E}\left[ {\mu _{0}}\right] \end{aligned}$$
(38)

When \(N > k/\mathbb {E}\left[ {\mu _{0}}\right] \), we have \(k - N\mathbb {E}\left[ {\mu _{0}}\right] < 0\), and event (38) implies (that is, is contained in) \(\left|{\ell _{N} - \mathbb {E}\left[ {\ell _{N}}\right] }\right| \ge N\mathbb {E}\left[ {\mu _{0}}\right] -k\), so that

$$\begin{aligned} P[E_{N,k}]&\le P\left[ \left|{\ell _{N} - \mathbb {E}\left[ {\ell _{N}}\right] }\right| \ge N\mathbb {E}\left[ {\mu _{0}}\right] -k\right] \nonumber \\&\le \frac{N \sigma ^2}{(N\mathbb {E}\left[ {\mu _{0}}\right] - k)^2} = \left( \frac{\sigma }{\sqrt{N} \mathbb {E}\left[ {\mu _{0}}\right] - k/\sqrt{N}}\right) ^2 \end{aligned}$$
(39)

and the last term in (39) goes to zero when \(N\rightarrow \infty \). \(\square \)
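The Chebyshev bound (39) is loose but easy to check numerically. The sketch below estimates \(P[E_{N,k}]\) by Monte Carlo for the von Neumann conditioner on iid fair input bits (our illustrative choice, for which \(\mathbb {E}[\mu _{0}] = 1/2\) and \(\sigma ^2 = 1/4\)) and compares it with the right-hand side of (39).

```python
import random

# For the von Neumann conditioner fed with iid fair bits, each two-bit
# block emits one output bit with probability 1/2.
def ell(N: int, rng: random.Random) -> int:
    return sum(rng.random() < 0.5 for _ in range(N))

k, N, trials = 10, 100, 20_000
rng = random.Random(0)
p_hat = sum(ell(N, rng) <= k for _ in range(trials)) / trials
bound = (N * 0.25) / (N * 0.5 - k) ** 2   # right-hand side of (39)
print(p_hat, bound)                       # the estimate sits well below the bound
```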

1.2 Admissibility is stronger than UBP

In this appendix we are going to show that if a conditioner is admissible according to Definition 1, then the output process is a UBP. Although this result is quite intuitive, its formal proof involves a few technicalities. First, however, we prove that scheme (5) yields a UBP, providing a counterexample which shows that UBP does not imply admissibility.

Proof 9.1

One can see in several ways that scheme (5) yields a UBP. An easy, albeit informal, approach is to observe that mapping (5) is actually the Huffman code for \({{{X}_{n}}}\), which in this case is optimal (and not only asymptotically optimal) since the symbol probabilities are of the form \(2^{-\ell }\). This implies that the average number of bits per input symbol generated by the conditioner is equal to the source entropy, so the resulting bit-string \({{b}}\) is incompressible and iid (i.e., \({{b}}\) is a UBP).

A more precise proof is the following.

Consider bit \(b_k\) and let \(n_k\) be the index of the input symbol that generates \(b_k\). Note that it must be \(j_k :=\ell _{n_k-1} \in \{k-1, k-2\}\). We compute \(P[b_k =0]\) by conditioning with respect to \(j_k\):

$$\begin{aligned} P[b_k =0] = P[b_k =0, j_k =k-1] + P[b_k =0, j_k =k-2] \end{aligned}$$
(40)

If \(j_k = k-1\) and \(b_k=0\), it must be \({{{X}_{n_k}}} \in \{a,b\}\). Since \(P[{{{X}_{n_k}}} \in \{a,b\}]=1/2\), the first conditional probability is 1/2. If \(j_k = k-2\) and \(b_k=0\)...\(\square \)
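The Huffman argument can also be checked empirically. The sketch below assumes, for concreteness, a dyadic source with \(P(a)=P(b)=1/4\), \(P(c)=1/2\) and code \(a \mapsto 00\), \(b \mapsto 01\), \(c \mapsto 1\) (codeword lengths 1 and 2, consistent with \(j_k \in \{k-1,k-2\}\); the exact alphabet of scheme (5) is our assumption, not taken from the paper): the output stream is unbiased and its two-bit patterns are uniform.

```python
import random
from collections import Counter

CODE = {"a": "00", "b": "01", "c": "1"}   # a dyadic Huffman code (assumed)

rng = random.Random(1)
symbols = rng.choices("abc", weights=[1, 1, 2], k=200_000)
bits = "".join(CODE[s] for s in symbols)

print(Counter(bits))  # zeros and ones near 50/50
print(Counter(bits[i:i + 2] for i in range(0, len(bits) - 1, 2)))
# all four 2-bit patterns near 25% each, as expected of a UBP
```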

Our first step will be to prove that (6), used in the definition of an admissible conditioner, implies a more general relation.

Lemma 4

If \({\mathscr {C}}\) is an admissible conditioner, then the following holds

$$\begin{aligned}&\forall N \in {\mathbb N},\; G \ge 1,\; {\mathbf k}\in {\mathbb N}^{G},\; m > k_G,\; m \in {\text {Im}}({\ell _{N}}),\; {\mathbf b}\in \{0,1\}^G\nonumber \\&\quad P[{{{S}_{N,{\mathbf k}}}} = {\mathbf b}\mid \ell _{N} = m] = 2^{-G} \end{aligned}$$
(41)

Remark 7

The difference between (6) and (41) is that (6) takes into account only a single block, while (41) is relative to the output after processing the N-th block.

Proof

The proof is complicated by the fact that two different bits of \({{{S}_{N}}}\), say \({{{S}_{N,k}}}\) and \({{{S}_{N,j}}}\), can derive from the same block or from different blocks, depending on the specific realization of \({X}\). To account for this, we condition on the specific sequence of lengths \(\mu _{0}\), \(\mu _{1}\), ..., \(\mu _{N}\) that gave rise to \(\ell _{N} = m\).

In order to simplify the manipulations, it is worth introducing some notation. With \(\mu _{{0}:{N}}\) we denote the vector \([\mu _{0}, \mu _{1}, \ldots , \mu _{N}]\); with \({\mathfrak M}\) we denote the set of vectors \(\mu _{{0}:{N}}\) such that \(\ell _{N} = m\); formally,

$$\begin{aligned} {\mathfrak M} :=\left\{ {\mathbf u}\in {\text {Im}}({\mu _{0}})^{N+1} : \sum _{k=0}^{N} {\mathbf u}_k = m \right\} \end{aligned}$$
(42)

Note that the condition \({\mathbf u}\in {\text {Im}}({\mu _{0}})^{N+1}\) in (42) forces every component of \({\mathbf u}\) to be a possible length for an output block.

It is clear that the event \(\ell _{N} = m\) is equal to the event \(\mu _{{0}:{N}} \in {\mathfrak M}\), so that the following holds

$$\begin{aligned} P[{{{S}_{N,{\mathbf k}}}} = {\mathbf b}\mid \ell _{N} = m] = \sum _{{\mathbf u}\in {\mathfrak M}} P[{{{S}_{N,{\mathbf k}}}} = {\mathbf b}\mid \mu _{{0}:{N}} = {\mathbf u}]\; P[\mu _{{0}:{N}} = {\mathbf u}\mid \ell _{N} = m] \end{aligned}$$
(43)

Therefore, if we prove that \(P[{{{S}_{N,{\mathbf k}}}} = {\mathbf b}| \mu _{{0}:{N}} = {\mathbf u}] = 2^{-G}\) for every \({\mathbf u}\in {\mathfrak M}\), the thesis will follow.

Given \({\mathbf u}\in {\mathfrak M}\), split \({\mathbf k}\) into \(N+1\) pieces \(\xi _j\), \(j=0, \ldots , N\), as follows

$$\begin{aligned} \xi _j = \{ o \in {\mathbf k}: \ell _{j-1} \le o < \ell _{j}\} \end{aligned}$$
(44)

that is, \(\xi _j\) contains the indexes in \({\mathbf k}\) that correspond to bits output after processing block number j.

Define \({\mathbf b}_j\) as the subword of \({\mathbf b}\) corresponding to the indices in \(\xi _j\) and observe that

$$\begin{aligned} P[{{{S}_{N,{\mathbf k}}}} = {\mathbf b}\mid \mu _{{0}:{N}} = {\mathbf u}]&=P\left[ \bigwedge _{j=0}^N {{{S}_{N,\xi _j}}} = {\mathbf b}_j \,\Big |\, \mu _{{0}:{N}} = {\mathbf u}\right] \nonumber \\&=\prod _{j=0}^N P\left[ {{{S}_{N,\xi _j}}} = {\mathbf b}_j \,\Big |\, \mu _{{0}:{N}} = {\mathbf u}\right] \nonumber \\&=\prod _{j=0}^N P\left[ {{{S}_{N,\xi _j}}} = {\mathbf b}_j \,\Big |\, \mu _{j} = {\mathbf u}_j\right] \nonumber \\&=\prod _{j=0}^N 2^{-|\xi _j|}\nonumber \\&=2^{-\sum _{j=0}^N |\xi _j|} = 2^{-G} \end{aligned}$$
(45)

where we exploited the fact that different blocks are independent at the second and third steps and the admissibility hypothesis at the last one. \(\square \)
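Lemma 4 can be verified exhaustively on a toy admissible conditioner. The sketch below uses the von Neumann map on pairs of iid biased bits (our illustrative choice) and checks that, conditioned on each total output length m, every m-bit string is equally likely, which gives (41) for any index set \({\mathbf k}\).

```python
from itertools import product
from fractions import Fraction
from collections import defaultdict

p = Fraction(1, 3)                                # input bit bias, P[1] = p
VN = {"00": "", "01": "0", "10": "1", "11": ""}   # von Neumann conditioner

def block_prob(blk: str) -> Fraction:
    return (p if blk[0] == "1" else 1 - p) * (p if blk[1] == "1" else 1 - p)

joint = defaultdict(Fraction)                     # P[output string s]
for blks in product(VN, repeat=3):                # three blocks, exhaustively
    out = "".join(VN[b] for b in blks)
    pr = Fraction(1)
    for b in blks:
        pr *= block_prob(b)
    joint[out] += pr

for m in range(4):                                # condition on ell_N = m
    group = {s: q for s, q in joint.items() if len(s) == m}
    total = sum(group.values())
    print(m, {s: q / total for s, q in group.items()})
    # every m-bit string gets exact probability 2^-m, hence any
    # G-bit sub-pattern has conditional probability 2^-G
```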

Theorem 4

If \({\mathscr {C}}\) is admissible, then process \({{b}}\) is a UBP.

Proof

We need to prove that for every \(G \ge 1\), every \({\mathbf k}\in \mathbf {{\mathbb N}}^{G}\) and every \({\mathbf b}\in \{0,1\}^G\), the following equality holds

$$\begin{aligned} P[{{{{b}_{{\mathbf k}}}}} = {\mathbf b}]=2^{-G} \end{aligned}$$
(46)

We are going to compute probability (46) by conditioning over the values assumed by \({\mathscr {N}}{{({k_G})}}\). It is

$$\begin{aligned} P[{{{{b}_{{\mathbf k}}}}} = {\mathbf b}] = \sum _{N \in {\mathbb N}} P[{{{{b}_{{\mathbf k}}}}} = {\mathbf b}| {\mathscr {N}}{{({k_G})}}=N] P[{\mathscr {N}}{{({k_G})}}=N] \end{aligned}$$
(47)

where we take the convention that the terms with \(P[{\mathscr {N}}{{({k_G})}}=N]=0\) are removed from sum (47) (otherwise the conditional probability \(P[{{{{b}_{{\mathbf k}}}}} = {\mathbf b}\mid {\mathscr {N}}{{({k_G})}}=N]\) would not make sense). By Property 6, \({\mathscr {N}}{{({k_G})}}\) is finite almost surely, so the probabilities \(P[{\mathscr {N}}{{({k_G})}}=N]\) sum to one; since by Lemma 4 every conditional probability in (47) is equal to \(2^{-G}\), the thesis follows. \(\square \)

1.3 Proof of Property 1: (7) implies (6)

Proof

Let G, \({\mathbf k}\), m and \({\mathbf b}\) be chosen according to (6). Let \(\overline{{\mathbf k}}\in \mathbf {{\mathbb N}}^{m-G}\) be the “complement” of \({\mathbf k}\) in the sense that the union of \({\mathbf k}\) and \(\overline{{\mathbf k}}\) is \(J_{m} = \{0, 1, \ldots , m-1\}\) and \({\mathbf k}\) and \(\overline{{\mathbf k}}\) are disjoint. Moreover, if \({\mathbf b}\in \{0,1\}^G\) and \({\mathbf a}\in \{0,1\}^{m-G}\), we will denote with \(\,[{{\mathbf b}}_{{\mathbf k}} , {{\mathbf a}}_{\overline{{\mathbf k}}}]\,\) the m-bit bit-string equal to \({\mathbf b}\) in the positions specified by \({\mathbf k}\) and equal to \({\mathbf a}\) in the positions specified by \(\overline{{\mathbf k}}\).

Note that the event \({{{S}_{0,{\mathbf k}}}}={\mathbf b}\wedge \mu _{0} = m\) can be written as

$$\begin{aligned} \bigvee _{{\mathbf a}\in \{0,1\}^{m-G}} ({{{S}_{0,{\mathbf k}}}}={\mathbf b}) \wedge ({{{S}_{0,\overline{{\mathbf k}}}}} = {\mathbf a}) \wedge (\mu _{0} = m) \end{aligned}$$
(48)

and that all the events in (48) are disjoint. By using (48) one can write

$$\begin{aligned}&P[{{{S}_{0,{\mathbf k}}}}={\mathbf b}| \mu _{0} = m]\nonumber \\&\quad = \frac{P[{{{S}_{0,{\mathbf k}}}}={\mathbf b}, \mu _{0} = m]}{P[\mu _{0} = m]}\nonumber \\&\quad = \frac{\sum _{{{\mathbf a}\in \{0,1\}^{m-G}}} P[{{{S}_{0,{\mathbf k}}}}={\mathbf b}, {{{S}_{0,\overline{{\mathbf k}}}}} = {\mathbf a}, \mu _{0} = m]}{P[\mu _{0} = m]}\nonumber \\&\quad = \sum _{{{\mathbf a}\in \{0,1\}^{m-G}}} P[{{{S}_{0,{\mathbf k}}}}={\mathbf b}, {{{S}_{0,\overline{{\mathbf k}}}}} = {\mathbf a}| \mu _{0} = m]\nonumber \\&\quad = \sum _{{{\mathbf a}\in \{0,1\}^{m-G}}} P[{{{S}_{0}}}=\,[{{\mathbf b}}_{{\mathbf k}} , {{\mathbf a}}_{\overline{{\mathbf k}}}]\,| \mu _{0} = m]\nonumber \\&\quad = \sum _{{{\mathbf a}\in \{0,1\}^{m-G}}} 2^{-m}\nonumber \\&\quad = 2^{m-G} 2^{-m} = 2^{-G} \end{aligned}$$
(49)

where we exploited decomposition (48) at the second step and hypothesis (7) at the last one. \(\square \)
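The marginalization step in (49) is easy to reproduce by enumeration. The following sketch assumes only hypothesis (7): it assigns probability \(2^{-m}\) to every m-bit block output and checks that an arbitrary set of G positions carries a uniform pattern.

```python
from itertools import product

m, k = 4, (0, 2)                          # block length and index set (G = 2)
patterns = {}
for s in product("01", repeat=m):         # each s has conditional prob 2^-m
    key = "".join(s[i] for i in k)
    patterns[key] = patterns.get(key, 0) + 2 ** -m
print(patterns)                           # {'00': 0.25, '01': 0.25, ...} = 2^-G
```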

1.4 Proof of bounds (19)

The key observation used to derive bounds (19) is given by the following general lemma.

Lemma 5

Let A and B be two finite sets. Let \({\mathscr {P}} :=\{Q_1, Q_2, \ldots , Q_M\}\) be a partition of A and let \(\pi : A \rightarrow \mathscr {P}\) be the corresponding projection, that is, the function mapping every \(x\in A\) to the element of \(\mathscr {P}\) that contains x.

Let \(f : A \rightarrow B\) be a map such that, for every \(k=1, \ldots , M\), its restriction \(f|_{Q_k}\) to \(Q_k\) is injective. Finally, let X be a random variable assuming values in A and let \(Y=f(X)\) and \(U=\pi (X)\).

The following relations hold

$$\begin{aligned} H(Y)&= H(X) - H(U|Y) \nonumber \\&\ge H(X) - H(U) \nonumber \\&\ge H(X) - \log _2 |{\mathscr {P}}| \end{aligned}$$
(50)

Proof

The key observation is that, under the lemma's hypotheses, the map \(x \mapsto (\pi (x), f(x))\) is injective. Indeed, if \(x,y \in A\) and \((\pi (x), f(x)) = (\pi (y), f(y))\), then x and y must belong to the same set \(Q = \pi (x)\) of \(\mathscr {P}\) (because \(\pi (x)=\pi (y)\)). Then \(f(x)=f(y)\) implies \(f|_{Q}(x) = f|_{Q}(y)\), which in turn implies \(x=y\), since f restricted to Q is injective.

By exploiting the injectivity of \(x \mapsto (\pi (x), f(x))\) one deduces \(H(X) = H(Y,U)\). By observing that \(H(Y,U) = H(Y) + H(U|Y)\), (50) follows. \(\square \)
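A small numeric illustration of (50), on a toy distribution of our choosing: A has six elements split into \(M = 2\) cells, f is injective on each cell, and the identity \(H(Y) = H(X) - H(U|Y)\) holds exactly.

```python
from math import log2
from collections import defaultdict

pX = {x: w / 21 for x, w in zip("abcdef", [1, 2, 3, 4, 5, 6])}
cell = {"a": 0, "b": 0, "c": 0, "d": 1, "e": 1, "f": 1}   # the partition P
f = {"a": 0, "b": 1, "c": 2, "d": 0, "e": 1, "f": 2}      # injective per cell

def H(dist):
    return -sum(q * log2(q) for q in dist.values() if q > 0)

pY, pU, pYU = defaultdict(float), defaultdict(float), defaultdict(float)
for x, q in pX.items():
    pY[f[x]] += q
    pU[cell[x]] += q
    pYU[(f[x], cell[x])] += q

H_U_given_Y = H(pYU) - H(pY)                  # chain rule H(U|Y) = H(Y,U) - H(Y)
print(H(pY), H(pX) - H_U_given_Y)             # equal, the identity in (50)
print(H(pX) - H(pU), H(pX) - log2(2))         # the two lower bounds of (50)
```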

The following lemma allows us to apply Lemma 5 to the specific case of GES.

Lemma 6

If \({\mathscr {C}}: {{\mathscr {Q}}_{L}}\rightarrow \{0,1\}^*\) is an indexing map, then its restriction to any \(P_{k}\) is injective, that is, for every \(k \in \{1, \ldots , {\mathscr {P}}\}\), if \(x,y \in P_{k}\) and \({\mathscr {C}}(x) = {\mathscr {C}}(y)\), then \(x=y\).

Proof

Suppose \(x,y \in P_{k}\) with \({\mathscr {C}}(x) = {\mathscr {C}}(y)\). Since \({\mathscr {C}}(x)\) and \({\mathscr {C}}(y)\) have the same length, both x and y must belong to the same \(V_{k,i}\), so that

$$\begin{aligned} \phi _{k,i}(x) = {\mathscr {C}}(x) = {\mathscr {C}}(y) = \phi _{k,i}(y) \end{aligned}$$
(51)

Since \(\phi _{k,i}\) is bijective, \(x=y\) follows. \(\square \)

By using Lemmas 5 and 6, it is easy to prove the following corollary.

Corollary 2

In a GES the following inequality holds.

$$\begin{aligned} H({{V}_{0}}) \;\;\ge \;\; H({{{S}_{0}}}) \;\;\ge \;\; H({{V}_{0}}) - \log _2 {\mathscr {P}}\end{aligned}$$
(52)

Corollary 2 is interesting because it shows that a GES with a small number of sets in partition (13) has the potential to be more efficient than a scheme with a larger number of sets.

Corollary 3

In a GES inequality (19a) holds.

Proof

Use (11) in (52). \(\square \)

Finally, we are going to prove bound (19b).

Lemma 7

For every GES (19b) holds. In particular, for every GES \(\lim _{L\rightarrow \infty } H(\mu _{0})/L= 0\).

Proof

Observe that \({\text {Im}}({\mu _{0}})\) is the set of the lengths of the bit-strings in the image of \({\mathscr {C}}\) and that

$$\begin{aligned} H(\mu _{0}) \le \log _2 |{{\text {Im}}({\mu _{0}})}| \end{aligned}$$
(53)

The maximum value in \({\text {Im}}({\mu _{0}})\) is \(\max _{k,i} \nu _{k,i}\), so that

$$\begin{aligned} |{{\text {Im}}({\mu _{0}})}| \le \max _{k,i} \nu _{k,i} = \max _{k,i} \log _2 |{V_{k,i}}| \le \log _2 |{{{\mathscr {Q}}_{L}}}| \end{aligned}$$
(54)

where the second inequality follows from \(V_{k,i} \subseteq {{\mathscr {Q}}_{L}}\). Using (54) in (53) we obtain

$$\begin{aligned} \frac{H(\mu _{0})}{L} \le \frac{\log _2 |{{\text {Im}}({\mu _{0}})}|}{L} \le \frac{\log _2\log _2 |{{{\mathscr {Q}}_{L}}}|}{L} \end{aligned}$$
(55)

\(\square \)
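To see how fast bound (55) vanishes, assume for instance \(|{{{\mathscr {Q}}_{L}}}| = 2^L\) (e.g., a scheme whose input blocks are L-bit strings; this concrete size is our illustrative assumption); the bound then reads \(\log _2(L)/L\).

```python
from math import log2

# Bound (55) with |Q_L| = 2^L (assumed), i.e. log2(log2(|Q_L|))/L = log2(L)/L.
for L in [8, 64, 512, 4096]:
    print(L, log2(L) / L)   # 0.375, 0.09375, 0.0176..., 0.0029... -> 0
```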

1.5 Bounds on VQ size for the Gaussian case

Proof 9.2

(Proof of Lemma 2) Define, for notational convenience, \(\theta = \sqrt{\pi }\). As is well known, the surface area of an \(L\)-dimensional sphere of radius r is

$$\begin{aligned} \frac{2 \pi ^{L/2} {r}^{L-1}}{\varGamma \left( \frac{L}{2} \right) } = \frac{2}{r} \frac{\theta ^L{r}^L}{\varGamma \left( \frac{L}{2} \right) } \end{aligned}$$
(56)

Therefore, the number of pieces is

$$\begin{aligned} W_{L,r} = \frac{2D}{\beta {r}} \frac{\theta ^L{r}^L}{D^L \varGamma \left( \frac{L}{2}\right) } \end{aligned}$$
(57)

Let, for notational convenience, \(C=\ln (2D/(\beta {r}))\). Recalling Stirling's approximation

$$\begin{aligned} \ln \varGamma (x) = x \ln x - x + O(\ln x) \end{aligned}$$
(58)

it follows that

$$\begin{aligned} \frac{\ln W_{L,r}}{L}&= \underbrace{\frac{C}{L}}_{o(1)} + \ln \frac{\theta {r}}{D} - \frac{1}{2} \ln \frac{L}{2} + \frac{1}{2} + \underbrace{\frac{O(\ln L)}{L}}_{o(1)} \nonumber \\&= \ln \sqrt{2\pi e} - \ln D+ \ln \frac{r}{\sqrt{L}} + o(1) \nonumber \\&= \ln \sqrt{2\pi e}+\ln \frac{\sigma }{D}+ o(1) \end{aligned}$$
(59)

The thesis follows by multiplying (59) by \(\log _2 e\). \(\square \)
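The convergence in (59) can be checked numerically. The sketch below assumes \(\beta = 1\), \(\sigma = 1\) and \(D = 1/2\) (arbitrary illustrative values) and evaluates \(\ln W_{L,r}/L\) from (57) at \(r = \sigma \sqrt{L}\).

```python
from math import lgamma, log, pi, sqrt, e

D = 0.5
limit = log(sqrt(2 * pi * e) / D)   # ln sqrt(2 pi e) + ln(sigma / D), sigma = 1
for L in [10, 100, 1000, 10000]:
    r = sqrt(L)                     # r = sigma * sqrt(L) with sigma = 1
    ln_W = log(2 * D / r) + L * log(sqrt(pi) * r / D) - lgamma(L / 2)
    print(L, ln_W / L, limit)       # the two columns converge as L grows
```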

Proof 9.3

(Proof of (36a)) It is well known that the r.v. \(\Vert {{\mathscr {X}}_{0}}\Vert _{}^2/\sigma ^2\) is a chi-squared r.v. with \(L\) degrees of freedom. The requirement that the overflow probability be equal to \(\epsilon \) can be written as

$$\begin{aligned} \epsilon&= P[\Vert {{\mathscr {X}}_{0}}\Vert _{}>R_{\text {max}}] \nonumber \\&= P[\Vert {{\mathscr {X}}_{0}}\Vert _{}^2/\sigma ^2>R_{\text {max}}^2/\sigma ^2] \nonumber \\&= P[\chi _L^2 >R_{\text {max}}^2/\sigma ^2] \end{aligned}$$
(60)

With the notation of [11], from (60) one deduces \(u(\epsilon ,L) = R_{\text {max}}^2/\sigma ^2\). By using the bound [11]

$$\begin{aligned} u(\epsilon , L) \le L-2\ln \epsilon + 2\sqrt{-L\ln \epsilon } \end{aligned}$$
(61)

one deduces

$$\begin{aligned} R_{\text {max}}\le \sigma \sqrt{L-2\ln \epsilon + 2\sqrt{-L\ln \epsilon }} \end{aligned}$$
(62)

The final step is to upper bound the argument of the first square root in (62) by \(2L\). This happens if

$$\begin{aligned} -2\ln \epsilon + 2\sqrt{-L\ln \epsilon } \le L\end{aligned}$$
(63)

which is equivalent to

$$\begin{aligned} \frac{-\ln \epsilon }{L} + 2\sqrt{\frac{-\ln \epsilon }{L}} \le 1 \end{aligned}$$
(64)

It is immediate to verify that the left-hand side of (64) is monotone decreasing in \(L\) and goes to zero as \(L\rightarrow \infty \). Therefore, there exists \(L_0\) such that (64) holds for every \(L> L_0\), so that (36a) is true for \(L> L_0\) with \(C_1= \sigma \sqrt{2}\). \(\square \)
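Bound (61) and the threshold \(L_0\) in (63) can be checked against exact chi-squared quantiles, for example with SciPy (the values of \(\epsilon \) and L below are arbitrary).

```python
from math import log, sqrt
from scipy.stats import chi2

eps = 1e-6
for L in [10, 100, 1000]:
    exact = chi2.ppf(1 - eps, L)                       # u(eps, L) exactly
    bound = L - 2 * log(eps) + 2 * sqrt(-L * log(eps)) # right side of (61)
    print(L, exact, bound, bound <= 2 * L)             # (63) holds once L > L_0
```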

Proof 9.4

(Proof of (36b)) For the sake of simplicity, we suppose that \(D\) was chosen so as to make the ratio in (65) an integer. Since every shell has thickness \(D\) and every shell is an element of the partition, the partition size is the total number of shells, that is,

$$\begin{aligned} {{{\mathscr {P}}_{L}}} = K+1= \frac{R_{\text {max}}}{D}+1 \le \left( \frac{\sqrt{2} \sigma }{D}+1\right) \sqrt{L} \end{aligned}$$
(65)

which is (36b) with \(C_2 = \sqrt{2}\sigma /D+1\). \(\square \)

Proof 9.5

(Proof of (36c)) According to (57), with \( W_{L,1}=N_{1}\), the number of regions in the VQ can be upper bounded as

$$\begin{aligned} |{{{\mathscr {Q}}_{L}}}| = (K+1) N_{1}&\le (C_2 \sqrt{L}) \frac{2D}{\sigma \sqrt{L}\beta } \frac{\theta ^L\sigma ^LL^{L/2}}{D^L \varGamma \left( \frac{L}{2}\right) }\nonumber \\&\le A \left( \frac{\theta \sigma \sqrt{L}}{De^{\gamma /2}} \right) ^L= A \left( B \sqrt{L}\right) ^L\nonumber \\&\le \left( A B \sqrt{L}\right) ^L\end{aligned}$$
(66)

with the obvious meaning of A and B. In (66) we used the bound \(\varGamma (x+1) > \exp (\gamma x)\), valid for all sufficiently large x [12, 13], where \(\gamma \approx 0.577\) is the Euler–Mascheroni constant. Equation (66) is (36c) with \(C_3 = AB\). \(\square \)


Cite this article

Bernardini, R., Rinaldo, R. Generalized Elias schemes for efficient harvesting of truly random bits. Int. J. Inf. Secur. 17, 67–81 (2018). https://doi.org/10.1007/s10207-016-0358-5
