On the Complexity of Breaking Pseudoentropy

  • Conference paper
  • In: Theory and Applications of Models of Computation (TAMC 2017)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 10185)

Abstract

Pseudoentropy has found many important applications in cryptography and complexity theory. In this paper we focus on a foundational problem that has not been investigated so far: by how much does pseudoentropy (the amount of entropy seen by computationally bounded attackers) differ from its information-theoretic counterpart (seen by unbounded observers), given certain limits on the attacker's computational power?

We provide the following answer for HILL pseudoentropy, which exhibits a threshold behavior around a circuit size exponential in the entropy amount:

  • If the attacker size (s) and advantage (\(\epsilon \)) satisfy \(s \gg 2^k\epsilon ^{-2}\) where k is the claimed amount of pseudoentropy, then the pseudoentropy boils down to the information-theoretic smooth entropy.

  • If \(s \ll 2^k\epsilon ^2\) then pseudoentropy can be arbitrarily larger than the information-theoretic smooth entropy.

Besides answering the posed question, we show an elegant application of our result to complexity theory: it implies the classical result of Pippenger on the existence of functions that are hard to approximate. Our approach relies on non-constructive techniques: linear programming duality and the probabilistic method.
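The two regimes above can be made concrete with a small numeric illustration. The parameters below (k = 20, ε = 2⁻⁷) are hypothetical and not taken from the paper; the point is only the size of the window between the two bounds.

```python
# Illustrative only (not from the paper): the two regimes around the
# threshold, for hypothetical parameters k = 20 and eps = 2**-7.
k = 20            # claimed amount of pseudoentropy
eps = 2.0 ** -7   # attacker's advantage

collapse_size = 2 ** k * eps ** -2    # s >> 2^k * eps^-2: collapses to smooth entropy
separation_size = 2 ** k * eps ** 2   # s << 2^k * eps^2: arbitrary separation possible

# The unresolved window between the two bounds spans a factor of eps^-4.
gap = collapse_size / separation_size
assert gap == eps ** -4
```

Note that the window shrinks as the allowed advantage ε grows, which matches the threshold picture: the bounds meet (up to polynomial factors) only for constant ε.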

The paper is available (with updates) at https://eprint.iacr.org/2016/1186.pdf.

M. Skorski—Supported by the European Research Council Consolidator Grant (682815-TOCNeT).


Notes

  1. We consider here the most popular notion of HILL pseudoentropy.

  2. This matches the definition of pseudorandomness when k is the length of X.

  3. As this complexity is enough to compute every boolean function.

  4. f is \(\delta \)-hard for size s if every circuit of size s fails to predict f with probability at least \(\frac{1+\delta }{2}\).

  5. If the domain consists of n-bit strings, it is enough to assume \(s > 2^n\), as every function over n bits has complexity at most \(2^n\).

  6. We use the convention in which \(\delta =1\) corresponds to a completely unpredictable function. Some works substitute \(1-\delta \) in place of \(\delta \).

References

  • Barak, B., Shaltiel, R., Wigderson, A.: Computational analogues of entropy. In: Arora, S., Jansen, K., Rolim, J.D.P., Sahai, A. (eds.) APPROX/RANDOM -2003. LNCS, vol. 2764, pp. 200–215. Springer, Heidelberg (2003). doi:10.1007/978-3-540-45198-3_18

  • Chung, K.-M., Kalai, Y.T., Liu, F.-H., Raz, R.: Memory delegation. In: Rogaway, P. (ed.) CRYPTO 2011. LNCS, vol. 6841, pp. 151–168. Springer, Heidelberg (2011). doi:10.1007/978-3-642-22792-9_9

  • Dziembowski, S., Pietrzak, K.: Leakage-resilient cryptography in the standard model. IACR Cryptology ePrint Arch. 2008, 240 (2008)

  • De, A., Trevisan, L., Tulsiani, M.: Time space tradeoffs for attacks against one-way functions and PRGs. In: Rabin, T. (ed.) CRYPTO 2010. LNCS, vol. 6223, pp. 649–665. Springer, Heidelberg (2010). doi:10.1007/978-3-642-14623-7_35

  • Fuller, B., O’Neill, A., Reyzin, L.: A unified approach to deterministic encryption: new constructions and a connection to computational entropy. J. Cryptol. 28(3), 671–717 (2015)

  • Goldreich, O.: Foundations of Cryptography: Volume 1. Cambridge University Press, New York (2006)

  • Gentry, C., Wichs, D.: Separating succinct non-interactive arguments from all falsifiable assumptions. In: Proceedings of the 43rd ACM Symposium on Theory of Computing, STOC 2011, San Jose, CA, USA, 6–8 June 2011, pp. 99–108 (2011)

  • Håstad, J., Impagliazzo, R., Levin, L.A., Luby, M.: Pseudo-random generation from one-way functions. In: Proceedings of the 20th STOC, pp. 12–24 (1988)

  • Hsiao, C.-Y., Lu, C.-J., Reyzin, L.: Conditional computational entropy, or toward separating pseudoentropy from compressibility. In: Naor, M. (ed.) EUROCRYPT 2007. LNCS, vol. 4515, pp. 169–186. Springer, Heidelberg (2007). doi:10.1007/978-3-540-72540-4_10

  • Hoeffding, W.: Probability inequalities for sums of bounded random variables. J. Am. Stat. Assoc. 58(301), 13–30 (1963)

  • Janson, S.: Large deviations for sums of partly dependent random variables. Random Struct. Algorithms 24(3), 234–248 (2004)

  • Pietrzak, K.: A leakage-resilient mode of operation. In: Joux, A. (ed.) EUROCRYPT 2009. LNCS, vol. 5479, pp. 462–482. Springer, Heidelberg (2009). doi:10.1007/978-3-642-01001-9_27

  • Pippenger, N.: Information theory and the complexity of boolean functions. Math. Syst. Theory 10(1), 129–167 (1976)

  • Reingold, O., Trevisan, L., Tulsiani, M., Vadhan, S.: Dense subsets of pseudorandom sets. In: Proceedings of the 49th Annual IEEE Symposium on Foundations of Computer Science (Washington, DC, USA), FOCS 2008, pp. 76–85. IEEE Computer Society (2008)

  • Renner, R., Wolf, S.: Simple and tight bounds for information reconciliation and privacy amplification. In: Roy, B. (ed.) ASIACRYPT 2005. LNCS, vol. 3788, pp. 199–216. Springer, Heidelberg (2005). doi:10.1007/11593447_11

  • Serfling, R.J.: Probability inequalities for the sum in sampling without replacement. Ann. Stat. 2(1), 39–48 (1974)

  • Skórski, M., Golovnev, A., Pietrzak, K.: Condensed unpredictability. In: Halldórsson, M.M., Iwama, K., Kobayashi, N., Speckmann, B. (eds.) ICALP 2015. LNCS, vol. 9134, pp. 1046–1057. Springer, Heidelberg (2015). doi:10.1007/978-3-662-47672-7_85

  • Skorski, M.: Metric pseudoentropy: characterizations, transformations and applications. In: Lehmann, A., Wolf, S. (eds.) ICITS 2015. LNCS, vol. 9063, pp. 105–122. Springer, Heidelberg (2015). doi:10.1007/978-3-319-17470-9_7

  • Vadhan, S., Zheng, C.J.: Characterizing pseudoentropy and simplifying pseudorandom generator constructions. In: Proceedings of the 44th symposium on Theory of Computing (New York, NY, USA), STOC 2012, pp. 817–836. ACM (2012)

  • Yu, Y., Li, X., Weng, J.: Pseudorandom generators from regular one-way functions: new constructions with improved parameters. In: Sako, K., Sarkar, P. (eds.) ASIACRYPT 2013. LNCS, vol. 8270, pp. 261–279. Springer, Heidelberg (2013). doi:10.1007/978-3-642-42045-0_14

Author information

Correspondence to Maciej Skorski.

Appendices

A Proof of Theorem 1

Proof

We start by proving a weaker result, namely that for Metric pseudoentropy (a weaker notion) the threshold equals \(2^k\).

Lemma 4

(The complexity of breaking Metric pseudoentropy). If \(\mathbf {H}_{\infty }^{\epsilon }(X) = k\) then also \({\mathbf {H}}^{\mathrm{Metric}}_{s,\epsilon }\left( X\right) = k\) for \(s> n 2^k\).

Proof

(Proof of Lemma 4 ). We will show the following claim which, by Proposition 1, implies the statement.

Claim

If \(s > n2^k\) and \(s'=\infty \) then \({\mathbf {H}}^{\mathrm{Metric}}_{s,\epsilon }\left( X\right) = {\mathbf {H}}^{\mathrm{Metric}}_{s',\epsilon }\left( X\right) \)

Proof

(Proof of Claim). It suffices to show only \({\mathbf {H}}^{\mathrm{Metric}}_{s,\epsilon }\left( X\right) \leqslant {\mathbf {H}}^{\mathrm{Metric}}_{s',\epsilon }\left( X\right) \), as the other inequality is trivial. Our strategy is to show that any distinguisher \({\mathsf {D}}\) that violates the definition of Metric entropy can be implemented in size \(n 2^k\).

Suppose that \({\mathbf {H}}^{\mathrm{Metric}}_{s',\epsilon }\left( X\right) < k\). This means that for some \({\mathsf {D}}\) of size \(s'\) and all Y of min-entropy at least k we have \(|{\mathbb {E}}{\mathsf {D}}(X) - {\mathbb {E}}{\mathsf {D}}(Y)| \geqslant \epsilon \). Since the set of all Y of min-entropy at least k is convex, the range of the expression \({\mathbb {E}}{\mathsf {D}}(X) - {\mathbb {E}}{\mathsf {D}}(Y)\) is an interval; hence either \({\mathbb {E}}{\mathsf {D}}(X) - {\mathbb {E}}{\mathsf {D}}(Y) > \epsilon \) for all such Y, or \({\mathbb {E}}{\mathsf {D}}(X) - {\mathbb {E}}{\mathsf {D}}(Y) < -\epsilon \) for all such Y. Without loss of generality assume the first possibility (otherwise we proceed in the same way with the negation \({\mathsf {D}}'(x) = 1-{\mathsf {D}}(x)\)). Thus

$$ {\mathbb {E}}{\mathsf {D}}(X) - {\mathbb {E}}{\mathsf {D}}(Y) > \epsilon \quad \text { for all} \, n\, \text {bit} \, Y \text { of min-entropy } k$$

where by Remark 2 we can assume that \({\mathsf {D}}\) is boolean. In particular, the set \(\{x:\ {\mathsf {D}}(x) = 1\}\) cannot have more than \(2^k\) elements: otherwise we could take Y uniform over \(\{x:\ {\mathsf {D}}(x)=1\}\) and get \({\mathbb {E}}{\mathsf {D}}(X)-1> \epsilon > 0\), which contradicts the fact that \({\mathsf {D}}\) is boolean. But if \({\mathsf {D}}\) is boolean and outputs 1 at most \(2^k\) times, it can be implemented in size \(n 2^k\) by hardcoding this set and outputting 0 everywhere else. This means precisely that \({\mathbf {H}}^{\mathrm{Metric}}_{s,\epsilon }\left( X\right) < k\). Now by Proposition 1 we see that also \(\mathbf {H}_{\infty }^{\epsilon }(X) < k\), which proves that \({\mathbf {H}}^{\mathrm{Metric}}_{s,\epsilon }\left( X\right) \leqslant \mathbf {H}_{\infty }^{\epsilon }(X)\) and finishes the proof of the claim.
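The hardcoding step can be sketched in code. The following is a minimal illustration (not the paper's construction, and the parameters are hypothetical): a boolean distinguisher whose accepting set has at most \(2^k\) elements is just a membership table, the software analogue of a circuit of size on the order of \(n 2^k\).

```python
def hardcode_distinguisher(accepting_set):
    """Boolean D that outputs 1 exactly on a hardcoded set.

    If |accepting_set| <= 2**k and inputs are n-bit strings, the circuit
    analogue of this lookup table has size on the order of n * 2**k.
    """
    table = frozenset(accepting_set)
    return lambda x: 1 if x in table else 0

# Hypothetical usage: k = 3, so D accepts at most 8 of the 16 4-bit inputs.
D = hardcode_distinguisher({0b0001, 0b0110, 0b1011})
assert D(0b0110) == 1 and D(0b1111) == 0
```

The point of the argument is exactly this: once the accepting set is small, the distinguisher needs no computational power beyond storing the set.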

Having proven Lemma 4, we obtain the statement for HILL pseudoentropy by applying the transformation from Lemma 1.

B Proof of Theorem 2

Proof

(Proof of Theorem 2). Let \(\mathcal {X}\) be a random subset of \(\mathcal {S}\) of cardinality \(2^{k-C}\). Let \(x_{1},\ldots ,x_{2^{k-C}}\) be all the elements of \(\mathcal {X}\) enumerated in lexicographic order. Define the following random variables \(\xi (x)\)

$$\begin{aligned} \xi (x) = \left\{ \begin{array}{rrl} \text {random element from } \{-1,1\}, &{} \;\; x = x_{2i-1} &{}\text { for some }i \\ -\xi (x_{2i-1}), &{} \;\; x = x_{2i} &{}\text { for some }i \\ \end{array} \right. \end{aligned}$$
(1)

for every \(x\in \mathcal {X}\). Once the choice of \(\xi (x)\) is fixed, consider the distribution

$$\begin{aligned} \Pr [X=x] = \left\{ \begin{aligned} 2^{-k}+2\epsilon \cdot 2^{-k}\cdot \xi (x),&\quad x\in \mathcal {X} \\ 0,&\quad x\not \in \mathcal {X} \\ \end{aligned} \right. \end{aligned}$$
(2)
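The pairing in Eq. (1) guarantees that the signs cancel out over \(\mathcal {X}\), so the perturbation in Eq. (2) leaves the total probability mass unchanged. The following sanity check of that cancellation is a sketch with hypothetical parameters, not code from the paper:

```python
import random

def sample_signs(elements):
    """Draw xi as in Eq. (1): a fresh +/-1 on odd positions, its negation on even."""
    xi = {}
    for i in range(0, len(elements), 2):
        s = random.choice([-1, 1])
        xi[elements[i]] = s       # xi(x_{2i-1}): a fresh random sign
        xi[elements[i + 1]] = -s  # xi(x_{2i}) = -xi(x_{2i-1})
    return xi

# Hypothetical parameters: |X| = 2**6 points drawn from a 2**12-element domain.
chi = sorted(random.sample(range(1 << 12), 1 << 6))  # lexicographic order
xi = sample_signs(chi)
assert sum(xi.values()) == 0  # pairwise cancellation: total mass is unchanged
```

This pairing is also the source of the "degree of dependence 1" property used below: each sign interacts with exactly one other.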

The rest of the proof splits into the following two claims:

Claim

( X has small smooth min-entropy). For any choice of \(\mathcal {X}\) and \(\xi (x)\), we have \(\mathbf {H}_{\infty }^{\epsilon }(X) \leqslant k-C+\log \left( \frac{1}{1-2\epsilon }\right) \).

Claim

( X has large metric pseudo-entropy). We have \({\mathbf {H}}^{\mathrm{Metric}}_{\mathcal {D},\epsilon }\left( X\right) = k\).

Proof

(Small smooth min-entropy). Note that by Eq. (2) the distribution of X is \(\epsilon \)-close to the uniform distribution over \(\mathcal {X}\). By Corollary 2 (note that k is replaced by \(\log |\mathcal {X}| = k-C\)), this means that the \(\epsilon \)-smooth min-entropy of X is at most \(k-C+\log \left( \frac{1}{1-2\epsilon }\right) \).

Proof

(Large metric entropy). Note that for any \({\mathsf {D}}\) we have

$$\begin{aligned} {\mathbb {E}}{\mathsf {D}}(X)&= \sum _{x\in \mathcal {X}} {\mathsf {D}}(x)\left( 2^{-k}+\xi (x)2^{-k}\cdot 2\epsilon \right) \\&= {\mathbb {E}}{\mathsf {D}}(U_{\mathcal {X}}) + 2^{-k}\cdot 2\epsilon \cdot \sum _{x\in \mathcal {X}}{\mathsf {D}}(x)\xi (x) \end{aligned}$$

In the next step we observe that the random variables \(\xi (x)\) have degree of dependence \(\varDelta = 1\). Indeed, by the construction in Eq. (1), for any fixed x the random variables \(\xi (x')\) are independent of \(\xi (x)\) except for at most one value of \(x'\). Now, by Lemma 2 applied to the random variables \({\mathsf {D}}(x)\xi (x)\) we obtain

$$\begin{aligned} \Pr _{}\left[ 2^{-k}\sum _{x\in \mathcal {X}}{\mathsf {D}}(x)\xi (x) > \delta \right] \leqslant \exp \left( -2^{k-1}\delta ^2 \right) \end{aligned}$$

for any \(\delta > 0\), where the probability is over the choice of \(\xi (x)\) after fixing the set \(\mathcal {X}\). In other words, we have

$$\begin{aligned} {\mathop {\Pr }\limits _{\xi (x)}}\left[ {\mathbb {E}}{\mathsf {D}}(X) \leqslant {\mathbb {E}}{\mathsf {D}}(U_{\mathcal {X}}) + 2\delta \epsilon \right] \geqslant 1- \exp \left( -2^{k-1}\delta ^2 \right) \end{aligned}$$
(3)

for any fixed choice of the set \(\mathcal {X}\).

In the last step, we observe that since the choice of the set \(\mathcal {X}\) is random, we have \({\mathbb {E}}{\mathsf {D}}(U_{\mathcal {X}}) \approx {\mathbb {E}}{\mathsf {D}}(U_{\mathcal {S}})\) with high probability. Indeed, by the Hoeffding bound for sampling without replacement (see Remark 3) we have

$$\begin{aligned} {\mathop {\Pr }\limits _{\mathcal {X}}}\left[ {\mathbb {E}}{\mathsf {D}}(U_{\mathcal {X}}) \leqslant {\mathbb {E}}{\mathsf {D}}(U) + 2 \delta \epsilon \right] \geqslant 1-\exp (- 2^{k-C+3}\delta ^2\epsilon ^2 ) \end{aligned}$$
(4)
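The concentration of \({\mathbb {E}}{\mathsf {D}}(U_{\mathcal {X}})\) around \({\mathbb {E}}{\mathsf {D}}(U_{\mathcal {S}})\) can be checked empirically. The sketch below uses illustrative parameters (a \(2^{12}\)-element domain, subsets of size \(2^8\)); it is not code from the paper:

```python
import random

random.seed(0)
domain = range(1 << 12)
# A fixed boolean test D accepting half of the domain, so E D(U_S) = 0.5.
accept = set(random.sample(domain, (1 << 12) // 2))

deviations = []
for _ in range(200):
    chi = random.sample(domain, 1 << 8)  # subset drawn without replacement
    mean = sum(x in accept for x in chi) / (1 << 8)
    deviations.append(abs(mean - 0.5))

# With 2**8 samples the standard deviation is about 0.03, so deviations far
# beyond that should be exponentially rare, as Eq. (4) predicts.
assert max(deviations) < 0.2
```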

By combining Eqs. (3) and (4) for any \({\mathsf {D}}\) and any \(\epsilon < \frac{1}{4}\) we obtain

$$\begin{aligned} {\mathop {\Pr }\limits _{\mathcal {X},\xi (x)}}\left[ {\mathbb {E}}{\mathsf {D}}(X) \leqslant {\mathbb {E}}{\mathsf {D}}(U_{\mathcal {S}}) + 4\delta \epsilon \right] \geqslant 1-2\exp (-2^{k-C+3}\delta ^2\epsilon ^2 ). \end{aligned}$$
(5)

Replacing \(\delta \) with \(\delta /4\) and applying the union bound over \(\mathcal {D}\) we see that

$$\begin{aligned} \Pr _{\mathcal {X},\xi (x)}\left[ \forall {\mathsf {D}}\in \mathcal {D}: \ {\mathbb {E}}{\mathsf {D}}(X) \leqslant {\mathbb {E}}{\mathsf {D}}(U_{\mathcal {S}}) + \delta \epsilon \right] \geqslant 1-2|\mathcal {D}|\exp (-2^{k-C-1}\delta ^2\epsilon ^2 ). \end{aligned}$$

and thus there exists a distribution X such that

$$\begin{aligned} \forall {\mathsf {D}}\in \mathcal {D}: \ {\mathbb {E}}{\mathsf {D}}(X) \leqslant {\mathbb {E}}{\mathsf {D}}(U_{\mathcal {S}}) + \delta \epsilon \end{aligned}$$
(6)

as long as

$$\begin{aligned} 2|\mathcal {D}| < 2^{2^{k-C-1}\delta ^2\epsilon ^2}. \end{aligned}$$
(7)

Finally, note that by adding to the class \(\mathcal {D}\) all negations (functions \({\mathsf {D}}'(x) = 1-{\mathsf {D}}(x)\)) we have \({\mathbb {E}}{\mathsf {D}}(X) \leqslant {\mathbb {E}}{\mathsf {D}}(U_{\mathcal {S}}) + \delta \epsilon \) as well as \( {\mathbb {E}}{\mathsf {D}}(X) \geqslant {\mathbb {E}}{\mathsf {D}}(U_{\mathcal {S}}) - \delta \epsilon \), for every \({\mathsf {D}}\in \mathcal {D}\). In particular, we have

$$\begin{aligned} \forall {\mathsf {D}}\in \mathcal {D}: |{\mathbb {E}}{\mathsf {D}}(X) - {\mathbb {E}}{\mathsf {D}}(U_{\mathcal {S}}) | < \delta \epsilon \end{aligned}$$
(8)

as long as

$$\begin{aligned} 4|\mathcal {D}| < 2^{2^{k-C-1}\delta ^2\epsilon ^2}. \end{aligned}$$
(9)

It remains to observe that for every \(\mathcal {X}\) the probability mass function of X takes two values on two halves of \(\mathcal {X}\).
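To see what the counting condition in Eq. (9) buys concretely, the following back-of-the-envelope computation (with hypothetical parameters, not taken from the paper) evaluates how large a distinguisher class the construction tolerates:

```python
# Hypothetical parameters: k = 40, C = 2, delta = 1/2, eps = 2**-5.
k, C, delta, eps = 40, 2, 0.5, 2.0 ** -5

# Eq. (9) requires 4*|D| < 2**(2**(k-C-1) * delta**2 * eps**2); i.e. the
# class may contain up to roughly 2**budget distinguishers.
budget = 2.0 ** (k - C - 1) * delta ** 2 * eps ** 2
max_log2_D = budget - 2  # the factor 4 costs two bits

assert max_log2_D > 2 ** 24  # room for a class of more than 2**(2**24) tests
```

Since a class of all circuits of size s has \(\log _2\) cardinality roughly \(O(s\log s)\), a doubly-exponential budget of this kind comfortably covers every bounded-size distinguisher class of interest.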


Copyright information

© 2017 Springer International Publishing AG

Cite this paper

Skorski, M. (2017). On the Complexity of Breaking Pseudoentropy. In: Gopal, T., Jäger, G., Steila, S. (eds.) Theory and Applications of Models of Computation. TAMC 2017. Lecture Notes in Computer Science, vol. 10185. Springer, Cham. https://doi.org/10.1007/978-3-319-55911-7_43

  • DOI: https://doi.org/10.1007/978-3-319-55911-7_43

  • Print ISBN: 978-3-319-55910-0

  • Online ISBN: 978-3-319-55911-7