Limits of Extractability Assumptions with Distributional Auxiliary Input
Abstract
Extractability, or “knowledge,” assumptions have recently gained popularity in the cryptographic community, leading to the study of primitives such as extractable one-way functions, extractable hash functions, succinct non-interactive arguments of knowledge (SNARKs), and (public-coin) differing-inputs obfuscation ((PC)\(di\mathcal {O}\)), and spurring the development of a wide spectrum of new applications relying on these primitives. For most of these applications, it is required that the extractability assumption holds even in the presence of attackers receiving some auxiliary information that is sampled from some fixed efficiently computable distribution \(\mathcal {Z}\).

We show that, for any such distribution \(\mathcal {Z}\), either:

PC\(di\mathcal {O}\) for Turing machines does not exist, or

extractable one-way functions w.r.t. auxiliary input \(\mathcal {Z}\) do not exist.

Similarly, either:

SNARKs for \(\mathsf {NP}\) w.r.t. auxiliary input \(\mathcal {Z}\) do not exist, or

PC\(di\mathcal {O}\) for \(NC^1\) circuits does not exist.
To achieve our results, we develop a “succinct punctured program” technique, mirroring the powerful punctured program technique of Sahai and Waters (STOC ’14), and present several other applications of this new technique. In particular, we construct succinct perfect zero-knowledge SNARGs and give a universal instantiation of random oracles in full-domain hash applications, based on PC\(di\mathcal {O}\).
As a final contribution, we demonstrate that even in the absence of auxiliary input, care must be taken when making use of extractability assumptions. We show that (standard) \(di\mathcal {O}\) w.r.t. any distribution \(\mathcal {D}\) over programs and bounded-length auxiliary input is directly implied by any obfuscator that satisfies the weaker indistinguishability obfuscation (i\(\mathcal {O}\)) security notion and \(di\mathcal {O}\) for a slightly modified distribution \(\mathcal {D}'\) of programs (of slightly greater size) and no auxiliary input. As a consequence, we directly obtain negative results for (standard) \(di\mathcal {O}\) in the absence of auxiliary input.
Keywords
Hash Function · Turing Machine · Random Oracle · Function Family · Efficient Distribution
1 Introduction

Extractable OWF: An extractable family of one-way (resp. collision-resistant) functions [14, 15, 27] is a family of one-way (resp. collision-resistant) functions \(\{f_i\}\) such that any attacker who, given the index i, outputs an element y in the range of a randomly chosen function \(f_i\) must “know” a preimage x of y (i.e., \(f_i(x) = y\)). This is formalized by requiring, for every adversary \(\mathcal {A}\), the existence of an “extractor” \(\mathcal {E}\) that (with overwhelming probability), given the view of \(\mathcal {A}\), outputs a preimage x whenever \(\mathcal {A}\) outputs an element y in the range of the function. For example, the “knowledge-of-exponent” assumption of Damgård [15] stipulates the existence of a particular such extractable one-way function.

SNARKs: Succinct non-interactive arguments of knowledge (SNARKs) [5, 32, 35] are communication-efficient (i.e., “short” or “succinct”) arguments for \(\mathsf {NP}\) with the property that if a prover generates an accepting (short) proof, it must “know” a corresponding (potentially long) witness for the statement proved, and this witness can be efficiently “extracted” from the prover.

Differing-Inputs Obfuscation [1, 2, 10]: A differing-inputs obfuscator \(\mathcal {O}\) for a program-pair distribution \(\mathcal {D}\) is an efficient procedure that ensures the following: if any efficient attacker \(\mathcal {A}\) can distinguish obfuscations \(\mathcal {O}(C_0)\) and \(\mathcal {O}(C_1)\) of programs \(C_0,C_1\) generated via \(\mathcal {D}\), then it must “know” an input x such that \(C_0(x) \ne C_1(x)\), and this input can be efficiently “extracted” from \(\mathcal {A}\). A recently proposed (weaker) variant known as public-coin differing-inputs obfuscation [30] additionally provides the randomness used to sample the programs \((C_0,C_1) \leftarrow \mathcal {D}\) to the extraction algorithm (and to the attacker \(\mathcal {A}\)).
The above primitives have proven extremely useful in constructing cryptographic tools for which instantiations under complexity-theoretic hardness assumptions are not known (e.g., [1, 5, 10, 16, 24, 27, 30]).
Extraction with (Distribution-Specific) Auxiliary Input. In all of these applications, we require a notion of an auxiliary-input extractable one-way function [14, 27], where both the attacker and the extractor may receive an auxiliary input. The strongest formulation requires extractability in the presence of an arbitrary auxiliary input. Yet, as informally discussed already in the original work of Hada and Tanaka [27], extractability w.r.t. an arbitrary auxiliary input is an “overly strong” (or, in the language of [27], “unreasonable”) assumption. Indeed, a recent result of Bitansky, Canetti, Paneth and Rosen [7] (formalizing earlier intuitions from [5, 27]) demonstrates that, assuming the existence of indistinguishability obfuscators for the class of polynomial-size circuits,^{1} there cannot exist auxiliary-input extractable one-way functions that remain secure for an arbitrary auxiliary input.
However, for most of the above applications, we actually do not require extractability to hold w.r.t. an arbitrary auxiliary input. Rather, as proposed by Bitansky et al. [5, 6], it often suffices to consider extractability with respect to specific distributions \(\mathcal {Z}\) of auxiliary input.^{2} More precisely, it would suffice to show that for every desired output length \(\ell (\cdot )\) and distribution \(\mathcal {Z}\), there exists a function family \(\mathcal {F}_{\mathcal {Z}}\) (which, in particular, may be tailored to \(\mathcal {Z}\)) such that \(\mathcal {F}_{\mathcal {Z}}\) is a family of extractable one-way (or collision-resistant) functions \(\{0,1\}^k \rightarrow \{0,1\}^{\ell (k)}\) with respect to \(\mathcal {Z}\). In fact, for some of these results (e.g., [5, 6]), it suffices to assume that extraction works just for the uniform distribution.
In contrast, the result of [7] can be interpreted as saying that (assuming \(i\mathcal {O}\)) there do not exist extractable one-way functions secure with respect to every distribution of auxiliary input: that is, for every candidate extractable one-way function family \(\mathcal {F}\), there exists some distribution \(\mathcal {Z}_{\mathcal {F}}\) of auxiliary input that breaks it.
Our Results. In this paper, we show limitations of extractability primitives with respect to distribution-specific auxiliary input (assuming the existence of public-coin collision-resistant hash functions (CRHFs) [29]). Our main result shows a conflict between public-coin differing-inputs obfuscation for Turing machines [30] and extractable one-way functions.
Theorem 1
(Informal). Assume the existence of public-coin collision-resistant hash functions. Then, for every polynomial \(\ell \), there exists an efficient, uniformly samplable distribution \(\mathcal {Z}\) such that at least one of the following does not exist:

extractable one-way functions \(\{0,1\}^k \!\rightarrow \! \{0,1\}^{\ell (k)}\) w.r.t. auxiliary input from \(\mathcal {Z}\), or

public-coin differing-inputs obfuscation for Turing machines.
By combining our main theorem with results from [5, 30], we obtain the following corollary:
Theorem 2
(Informal). Assume the existence of public-coin collision-resistant hash functions. Then there exists an efficient, uniformly samplable distribution \(\mathcal {Z}\) such that at least one of the following does not exist:

SNARKs w.r.t. auxiliary input from \(\mathcal {Z}\), or

public-coin differing-inputs obfuscation for \(NC^1\) circuits.
To prove our results, we develop a new proof technique, which we refer to as the “succinct punctured program” technique, extending the “punctured program” paradigm of Sahai and Waters [34]; see Sect. 1.1 for more details. This technique has several other interesting applications, as we discuss in Sect. 1.3.
As a final contribution, we demonstrate that even in the absence of auxiliary input, care must be taken when making use of extractability assumptions. Specifically, we show that differing-inputs obfuscation (\(di\mathcal {O}\)) for any distribution \(\mathcal {D}\) of programs and bounded-length auxiliary inputs is directly implied by any obfuscator that satisfies a weaker indistinguishability obfuscation (\(i\mathcal {O}\)) security notion (which is not an extractability assumption) and \(di\mathcal {O}\) security for a related distribution \(\mathcal {D}'\) of programs (of slightly greater size) which does not contain auxiliary input. Thus, negative results ruling out the existence of \(di\mathcal {O}\) with bounded-length auxiliary input directly imply negative results for \(di\mathcal {O}\) in a setting without auxiliary input.
Theorem 3
(Informal). Let \(\mathcal {D}\) be a distribution over pairs of programs and \(\ell \)-bounded auxiliary input, i.e., over \(\mathcal {P}\times \mathcal {P}\times \{0,1\}^\ell \). There exists \(di\mathcal {O}\) with respect to \(\mathcal {D}\) if there exists an obfuscator satisfying \(i\mathcal {O}\) as well as \(di\mathcal {O}\) with respect to a modified distribution \(\mathcal {D}'\) over \(\mathcal {P}' \times \mathcal {P}'\), for a slightly enriched program class \(\mathcal {P}'\) and no auxiliary input.
Our transformation applies to a recent result of Garg et al. [20], which shows that, based on a new assumption (pertaining to special-purpose obfuscation of Turing machines), general-purpose \(di\mathcal {O}\) w.r.t. auxiliary input cannot exist; they do so by constructing a distribution over circuits and bounded-length auxiliary inputs for which no obfuscator can be \(di\mathcal {O}\)-secure. Our resulting conclusion is that, assuming such special-purpose obfuscation exists, general-purpose \(di\mathcal {O}\) cannot exist even in the absence of auxiliary input.
We view this as evidence that public-coin differing-inputs obfuscation may be the “right” approach definitionally, as restrictions on auxiliary input without regard to the programs themselves will not suffice.
Interpretation of Our Results. Our results suggest that one must take care when making extractability assumptions, even in the presence of specific distributions of auxiliary inputs, and in certain cases even in the absence of auxiliary input. In particular, we must develop a way to distinguish “good” distributions of instances and auxiliary inputs (for which extractability assumptions may make sense) from “bad” ones (for which extractability assumptions are unlikely to hold). As mentioned above, for some applications of extractability assumptions, it in fact suffices to consider a particularly simple distribution of auxiliary inputs—namely the uniform distribution.^{4} We emphasize that our results do not present any limitations of extractable one-way functions in the presence of uniform auxiliary input, and as such, this still seems like a plausible assumption.
Comparison to [20]. An interesting subsequent^{5} work of Garg et al. [19, 20] contains a related study of differing-inputs obfuscation. In [20], the authors propose a new “special-purpose” circuit obfuscation assumption and demonstrate, based on this assumption, an auxiliary input distribution (whose size grows with the size of the circuits to be obfuscated) for which general-purpose \(di\mathcal {O}\) cannot exist. Using techniques of hashing and obfuscating Turing machines similar to those in the current work, they further conclude that if the new obfuscation assumption holds also for Turing machines, then the “bad” auxiliary input distribution can have bounded length (irrespective of the circuit size).
Garg et al. [20] show that the “special-purpose” obfuscation assumption is falsifiable (in the sense of [33]) and is implied by virtual black-box obfuscation for the relevant restricted class of programs, but the plausibility of the notion in relation to other primitives is otherwise unknown. In contrast, our results provide a direct relation between existing, studied notions (namely, \(di\mathcal {O}\), EOWFs, and SNARKs). Even in the case that the special-purpose obfuscation assumption does hold, our primary results provide conclusions for public-coin \(di\mathcal {O}\), whereas Garg et al. [20] consider (stronger) standard \(di\mathcal {O}\) with respect to auxiliary input.
Utilizing our final observation (which occurred subsequent to [20]), we show that, based on their same special-purpose obfuscation assumption for Turing machines, we can in fact rule out general-purpose \(di\mathcal {O}\) for circuits even in the absence of auxiliary input.
1.1 Proof Techniques
To explain our techniques, let us first explain earlier arguments against the plausibility of extractable one-way functions with auxiliary input. For simplicity of notation, we focus on extractable one-way functions mapping \(\{0,1\}^k \rightarrow \{0,1\}^{k}\) (as opposed to \(\{0,1\}^k \rightarrow \{0,1\}^{\ell (k)}\) for some polynomial \(\ell \)), but emphasize that the described approach directly extends to the more general setting.
Early Intuitions. As mentioned above, already the original work of Hada and Tanaka [27], which introduced auxiliary-input extractable one-way functions (EOWFs) (for the specific case of exponentiation), argued the “unreasonableness” of such functions, reasoning informally that the auxiliary input could contain a program that evaluates the function, so that a corresponding extractor must be able to “reverse-engineer” any such program. Bitansky et al. [5] made this idea more explicit: given some candidate EOWF family \(\mathcal {F}\), consider the distribution \(\mathcal {Z}_{\mathcal {F}}\) over auxiliary input formed by “obfuscating” a program \(\varPi ^s(\cdot )\) for uniformly chosen s, where \(\varPi ^s(\cdot )\) takes as input a function index i from the alleged EOWF family \(\mathcal {F}=\{f_i\}\), applies a pseudorandom function (PRF) with hardcoded seed s to the index i, and then outputs the evaluation \(f_i(\mathsf{PRF}_s(i))\). Now, consider an attacker \(\mathcal {A}\) who, given an index i, simply runs the obfuscated program to obtain a “random” point in the range of \(f_i\). If it were possible to obfuscate \(\varPi ^s\) in a “virtual black-box” (VBB) way (as in [2]), then it easily follows that any extractor \(\mathcal {E}\) for this particular attacker \(\mathcal {A}\) can invert \(f_i\). Intuitively, the VBB-obfuscated program hides the PRF seed s (revealing, in essence, only black-box access to \(\varPi ^s\)), and so if \(\mathcal {E}\) can invert \(f_i\) on \(\mathcal {A}\)’s output \(f_i(\mathsf{PRF}_s(i))\), i.e., on a pseudorandom input \(\mathsf{PRF}_s(i)\), it must also be able to invert on a truly random input. Formally, given an index i and a random point y in the image of \(f_i\), we can “program” the output of \(\varPi ^s(i)\) to simply be y, and thus \(\mathcal {E}\) will be forced to invert y.
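As a toy illustration of the roles of the pieces in this argument (with the obfuscation step omitted entirely — hiding s is exactly what obfuscation would have to provide — and a keyed hash standing in for the PRF; all names here are hypothetical), the program \(\varPi ^s\) and its programmed variant might be sketched as:

```python
import hashlib

def prf(s: bytes, x: bytes) -> bytes:
    # Keyed-hash stand-in for a PRF with hardcoded seed s (illustration only).
    return hashlib.sha256(s + x).digest()

def make_pi(s: bytes, family: dict):
    # Pi^s: on index i, evaluate the candidate EOWF member f_i
    # at the pseudorandom point PRF_s(i).
    def pi(i: bytes) -> bytes:
        return family[i](prf(s, i))
    return pi

def make_pi_programmed(s: bytes, family: dict, i_star: bytes, y: bytes):
    # Programmed variant: identical to Pi^s except that on the single
    # index i* it outputs the hardcoded challenge value y.
    def pi(i: bytes) -> bytes:
        if i == i_star:
            return y
        return family[i](prf(s, i))
    return pi
```

When \(y\) is chosen as \(f_{i^*}(\mathsf{PRF}_s(i^*))\), the two programs are functionally identical, which is exactly the property the hybrid arguments below exploit.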
The problem with this argument is that, as shown by Barak et al. [2], VBB program obfuscation simply does not exist for large classes of functions.
The Work of [7] and the “Punctured Program” Paradigm of [34]. Intriguingly, Bitansky, Canetti, Paneth and Rosen [7] show that, by using a particular PRF and instead relying on indistinguishability obfuscation, the above argument still applies! To do so, they rely on the powerful “punctured-program” paradigm of Sahai and Waters [34] (and the closely related work of Hohenberger, Sahai and Waters [28] on “instantiating random oracles”). Roughly speaking, the punctured program paradigm shows that if we use indistinguishability obfuscation to obfuscate (a function of) a special kind of “puncturable” PRF^{6} [8, 11, 31], we can still “program” the output of the program on one input (which was used in [28, 34] to show various applications of indistinguishability obfuscation). Bitansky et al. [7] show that, using this approach, from any alleged extractor \(\mathcal {E}\) we can construct a one-way function inverter Inv by “programming” the output of the program \(\varPi ^s\) at the input i with the challenge value y. More explicitly, mirroring [28, 34], they consider a hybrid experiment where \(\mathcal {E}\) is executed with fake (but indistinguishable) auxiliary input, formed by obfuscating a “punctured” variant \(\varPi ^{s}_{i,y}\) of the program \(\varPi ^s\) that contains an i-punctured PRF seed \(s^*\) (enabling evaluation of \(\mathsf{PRF}_s(j)\) for any \(j\ne i\)) and directly outputs the hardcoded value \(y := f_{i}(\mathsf{PRF}_s(i))\) on input i: indistinguishability of this auxiliary input follows from the security of indistinguishability obfuscation, since the programs \(\varPi ^{s}_{i,y}\) and \(\varPi ^{s}\) are functionally equivalent when \(y = f_{i}(\mathsf{PRF}_s(i)) = \varPi ^s(i)\). In a second hybrid experiment, the “correct” hardcoded value y is replaced by a random evaluation \(f_i(u)\) for uniform u; here, indistinguishability of the auxiliary inputs follows directly from the security of the punctured PRF.
Finally, by indistinguishability of the three distributions of auxiliary input across the three experiments, \(\mathcal {E}\) must extract an inverse of y with non-negligible probability in each hybrid; but in the final experiment this implies the ability to invert a random evaluation, breaking the one-wayness of the EOWF.
The Problem: Dependence on \(\mathcal {F}\). Note that in the above approach, the auxiliary input distribution is selected as a function of the family \(\mathcal {F}= \{f_j\}\) of (alleged) extractable one-way functions. Indeed, the obfuscated program \(\varPi ^s\) must be able to evaluate \(f_j\) given j. One may attempt to mitigate this situation by instead obfuscating a universal circuit that takes as input both \(\mathcal {F}\) and the index j, and appropriately evaluates \(f_j\). But the size of the universal circuit must still be greater than the running time of \(f_j\), and thus such an auxiliary input distribution would only rule out EOWFs with a priori bounded running time. This does not suffice for what we aim to achieve: in particular, it still leaves open the possibility that for every distribution of auxiliary inputs, there may exist a family of extractable one-way functions that remains secure for that particular auxiliary input distribution (provided the running time of the extractable one-way function is greater than the length of the auxiliary input).
A First Idea: Using Turing Machine Obfuscators. At first sight, it would appear this problem could be solved if we could obfuscate Turing machines: namely, by obfuscating a universal Turing machine in place of the universal circuit in the construction above, the resulting program \(\varPi ^s\) would depend only on the size of the PRF seed s, and not on the running time of \(f_j \in \mathcal {F}\).
But there is a catch. To rely on the punctured program paradigm, we must be able to obfuscate the program \(\varPi ^s\) in such a way that the result is indistinguishable from an obfuscation of a related “punctured” program \(\varPi ^{s}_{i,y}\); in particular, the size of the obfuscation must be at least as large as that of \(\varPi ^s_{i,y}\). Whereas the size of \(\varPi ^s\) is now bounded by a polynomial in the size of the PRF seed s, the description of the punctured program must specify a punctured input i (corresponding to an index of the candidate EOWF family \(\mathcal {F}\)) and a hardcoded output value y, and hence must grow with the size of \(\mathcal {F}\). We thus run into a similar wall: even with obfuscation of Turing machines, the resulting auxiliary input distribution \(\mathcal {Z}\) would only rule out EOWFs with a priori bounded index length.
Our “Succinct Punctured Program” Technique. To deal with this issue, we develop a “succinct punctured program” technique. That is, we show how to make the size of the obfuscation independent of the length of the input, while still retaining its usability as an obfuscator. The idea is twofold. First, we modify the program \(\varPi ^s\) to hash the input before applying the PRF, using a collision-resistant hash function h; that is, we now consider a program \(\varPi ^{h,s}(j) = f_j(\mathsf{PRF}_s(h(j)))\). Second, we make use of differing-inputs obfuscation, as opposed to just indistinguishability obfuscation. Specifically, our constructed auxiliary input distribution \(\mathcal {Z}\) will sample a uniform s and a random hash function h (from some appropriate collection of collision-resistant hash functions) and then output a differing-inputs obfuscation of \(\varPi ^{h,s}\).
To prove that this “universal” distribution \(\mathcal {Z}\) over auxiliary input breaks all alleged extractable one-way functions \(\{0,1\}^k \rightarrow \{0,1\}^k\), we define a one-way function inverter Inv just as before, except that we now feed the EOWF extractor \(\mathcal {E}\) the obfuscation of the “punctured” variant \(\varPi ^{h,s}_{i,y}\), which contains a PRF seed punctured at the point h(i). The program \(\varPi ^{h,s}_{i,y}\) proceeds just as \(\varPi ^{h,s}\) except on inputs j such that h(j) equals the special value h(i); on those inputs it simply outputs the hardcoded value y. (Note that the index i is no longer needed to specify the function \(\varPi ^{h,s}_{i,y}\)—rather, just its hash h(i)—but it is included for notational convenience.) As before, consider a hybrid experiment where y is selected as \(y := \varPi ^{h,s}(i)\).
Before, the punctured program was functionally equivalent to the original, and thus indistinguishability of the auxiliary inputs in the different experiments followed from the definition of indistinguishability obfuscation. Here, however, it is no longer the case that if \(y = \varPi ^{h,s}(i)\) then \(\varPi ^{h,s}_{i,y}\) is equivalent to \(\varPi ^{h,s}\)—in fact, the two programs may differ on many points. More precisely, they may differ on all points j such that \(h(j) = h(i)\) but \(j \ne i\) (since \(f_j\) and \(f_{i}\) may differ on the input \(\mathsf{PRF}_s(h(i))\)). Thus, we can no longer rely on indistinguishability obfuscation to argue indistinguishability of these two hybrids.
We resolve this issue by relying on differing-inputs obfuscation instead of just indistinguishability obfuscation. Intuitively, if obfuscations of \(\varPi ^{h,s}\) and \(\varPi ^{h,s}_{i,y}\) can be distinguished when y is set to \(\varPi ^{h,s}(i)\), then we can efficiently recover some input j on which the two programs differ. But, by construction, this must be a point j for which \(h(j) = h(i)\) (or else the two programs are the same) and \(j \ne i\) (since we chose the hardcoded value \(y = \varPi ^{h,s}(i)\) to be consistent with \(\varPi ^{h,s}\) on input i). Thus, if the obfuscations can be distinguished, we can find a collision in h, contradicting its collision resistance.
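A sketch of the hashed program and its punctured variant makes the collision structure explicit. As before, SHA-256 is a purely hypothetical stand-in for both the CRHF and the PRF, and the obfuscation step is omitted:

```python
import hashlib

def h(j: bytes) -> bytes:
    # Stand-in for a public-coin collision-resistant hash function.
    return hashlib.sha256(b"crhf" + j).digest()

def prf(s: bytes, x: bytes) -> bytes:
    # Stand-in for a puncturable PRF (illustration only).
    return hashlib.sha256(s + x).digest()

def pi(s: bytes, j: bytes, f_j) -> bytes:
    # Pi^{h,s}: hash the index first, so the program's description size is
    # independent of the index length.
    return f_j(prf(s, h(j)))

def pi_punctured(s: bytes, i: bytes, y: bytes, j: bytes, f_j) -> bytes:
    # Pi^{h,s}_{i,y}: on any index hashing to h(i), output the hardcoded y;
    # elsewhere, behave as Pi^{h,s} (using the seed punctured at h(i)).
    if h(j) == h(i):
        return y
    return f_j(prf(s, h(j)))
```

Any input j on which the two programs differ necessarily satisfies \(h(j) = h(i)\) and \(j \ne i\), so a PC\(di\mathcal {O}\) distinguisher immediately yields a collision-finder for h.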
To formalize this argument using just public-coin \({di\mathcal {O}}\), we require that h be a public-coin collision-resistant hash function [29].
1.2 Removing Auxiliary Input in \(di\mathcal {O}\)
The notion of public-coin \({di\mathcal {O}}\) is weaker than “general” (not necessarily public-coin) \({di\mathcal {O}}\) in two respects: (1) the programs \(M_0\), \(M_1\) are sampled using only public randomness, and (2) only a very specific auxiliary input is given to the attacker—namely, the randomness of the sampling procedure.
In this section, we explore another natural restriction of \({di\mathcal {O}}\), where we simply disallow auxiliary input but allow for “private” sampling of \(M_0\), \(M_1\). We show that “bad side information” cannot be circumvented simply by disallowing auxiliary input; rather, such information can reappear in the input-output behavior of the programs to be obfuscated.
More precisely, we show that for any distribution \(\mathcal {D}\) over \(\mathcal {P}\times \mathcal {P}\times \{0,1\}^{\ell }\) of programs and bounded-length auxiliary input, the existence of \(di\mathcal {O}\) w.r.t. \(\mathcal {D}\) is directly implied by the existence of any indistinguishability obfuscator (\(i\mathcal {O}\)) that is \(di\mathcal {O}\)-secure for a slightly enriched distribution \(\mathcal {D}'\) of programs over \(\mathcal {P}' \times \mathcal {P}'\), without auxiliary input.
Intuitively, the transformation works by embedding the “bad auxiliary input” into the input-output behavior of the circuits to be obfuscated themselves. That is, the new distribution \(\mathcal {D}'\) is formed by first sampling a triple \((P_0,P_1,z)\) of programs and auxiliary input from the original distribution \(\mathcal {D}\), and then considering the tweaked programs \(P^z_0,P^z_1\) that have a special additional input \(x^*\) (denoted later as “\(\mathsf{mode}= *\)”) on which \(P^z_0(x^*) = P^z_1(x^*)\) is defined to be z. This introduces no new differing inputs to the original program pair \(P_0,P_1\), but now there is no hope of preventing the adversary from learning z without sacrificing correctness of the obfuscation scheme.
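A minimal sketch of this embedding (the name `MODE_STAR` is a hypothetical stand-in for the special input \(x^*\)):

```python
MODE_STAR = object()  # the special additional input x*

def embed_aux(P, z):
    # P^z: identical input-output behavior to P, except that the special
    # input x* returns the embedded auxiliary string z.
    def P_z(x):
        return z if x is MODE_STAR else P(x)
    return P_z
```

Since \(P^z_0\) and \(P^z_1\) both return the same z on \(x^*\), the set of differing inputs is unchanged, yet any correct obfuscation of \(P^z_b\) hands the adversary z by evaluation.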
A technical challenge arises in the security reduction, however: we must modify the obfuscation of the z-embedded program \(P^z_b\) to “look like” an obfuscation of the original program \(P_b\). Interestingly, this issue is solved by making use of a second layer of obfuscation, and this is where the \(i\mathcal {O}\) security of the obfuscator is required. We refer the reader to the full version of this work for details.
1.3 Other Applications of the “Succinct Punctured Program” Technique
In the full version [12], we show that the succinct punctured program technique further yields:

A “succinct” perfect zero-knowledge non-interactive universal argument system (with communication complexity \(k^{\epsilon }\) for every \(\epsilon \)), relying on the non-succinct perfect NIZK construction of [34].

A universal instantiation of random oracles, for which the full-domain hash (FDH) signature paradigm [4] is (selectively) secure for every trapdoor (one-to-one) function (provided one hashes not only the message but also the index of the trapdoor function), relying on the results of [28] showing how to provide a trapdoor-function-specific instantiation of the random oracle in FDH.^{7}
1.4 Overview of Paper
We focus in this extended abstract on the primary result: the conflict between public-coin differing-inputs obfuscation and extractable OWFs (and SNARKs). Further preliminaries, applications of our succinct punctured program technique, and our transformation removing auxiliary input from differing-inputs obfuscation are deferred to the full version [12].
2 Preliminaries
2.1 PublicCoin DifferingInputs Obfuscation
The notion of public-coin differing-inputs obfuscation (PC\({di\mathcal {O}}\)) was introduced by Ishai et al. [30] as a refinement of (standard) differing-inputs obfuscation [2] that excludes certain cases whose feasibility has been called into question. (Note that we also consider “standard” differing-inputs obfuscation, as described in Sect. 1.2. For a full treatment of the notion and our result, we refer the reader to the full version of this work [12].)
We now present the PC\({di\mathcal {O}}\) definition of [30], focusing only on Turing machine obfuscation; the definition extends easily to circuits.
Definition 1
Definition 2

Correctness: For every \(k \in \mathbb {N}\), every \(M \in \mathcal {M}_k\), and every x, we have that \(\Pr [\tilde{M} \leftarrow \mathcal {O}(1^k,M) : \tilde{M}(x) = M(x)] = 1\).
 Security: For every public-coin differing-inputs sampler \(\mathsf{Samp}= \{\mathsf{Samp}_k\}\) for the ensemble \(\mathcal {M}\) and every efficient non-uniform distinguishing algorithm \(\mathcal {D}= \{\mathcal {D}_k\}\), there exists a negligible function \(\epsilon \) such that for all k,$$\begin{aligned} \big | \Pr [ r \leftarrow \{0,1\}^*; (M_0,M_1) \leftarrow \mathsf{Samp}_k(r); \tilde{M} \leftarrow \mathcal {O}(1^k,M_0) :&~\mathcal {D}_k(r,\tilde{M}) = 1] \\ - \Pr [ r \leftarrow \{0,1\}^*; (M_0,M_1) \leftarrow \mathsf{Samp}_k(r); \tilde{M} \leftarrow \mathcal {O}(1^k,M_1) :&~\mathcal {D}_k(r,\tilde{M}) = 1] \big | \le \epsilon (k). \end{aligned}$$
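One run of each side of this security experiment can be sketched as follows (all names are hypothetical; `secrets` stands in for the sampler's public random tape, truncated to k bytes for illustration):

```python
import secrets

def pcdio_round(sampler, obfuscate, distinguisher, k: int):
    # The distinguisher receives the sampler's public coins r together with
    # an obfuscation of M_0 (left side) or M_1 (right side). PC-diO security
    # says the acceptance probabilities of the two sides differ negligibly.
    r = secrets.token_bytes(k)
    M0, M1 = sampler(k, r)
    left = distinguisher(r, obfuscate(k, M0))
    right = distinguisher(r, obfuscate(k, M1))
    return left, right
```

Note that, unlike standard \(di\mathcal {O}\), the distinguisher here sees the full sampling randomness r rather than a privately chosen auxiliary input.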
2.2 Extractable OneWay Functions
We present a non-uniform version of the definition, in which both one-wayness and extractability hold with respect to non-uniform polynomial-time adversaries.
Definition 3
 One-wayness: For every non-uniform polynomial-time \(\mathcal {A}\) and all sufficiently large \(k \in \mathbb {N}\),$$\begin{aligned} \Pr \big [ z \leftarrow \mathcal {Z}_k;~ i \leftarrow \mathcal{K_F}(1^k);~ x \leftarrow \{0,1\}^{k};~x'&\leftarrow \mathcal {A}(i, f_i(x); z) \\&: f_i(x') = f_i(x) \big ] \le \mathsf{negl}(k). \end{aligned}$$
 Extractability: For every non-uniform polynomial-time adversary \(\mathcal {A}\), there exists a non-uniform polynomial-time extractor \(\mathcal {E}\) such that, for all sufficiently large security parameters \(k \in \mathbb {N}\):$$\begin{aligned} \Pr \big [ z \leftarrow \mathcal {Z}_k;~ i \leftarrow \mathcal{K_F}(1^k);~ y&\leftarrow \mathcal {A}(i;z);~ x' \leftarrow \mathcal {E}(i;z) \\&: \exists x ~\text {s.t.}~ f_i(x)=y \wedge f_i(x') \ne y \big ] \le \mathsf{negl}(k). \end{aligned}$$
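For tiny parameters, the extractability experiment can be run literally, brute-forcing image membership over the domain \(\{0,1\}^k\) (everything below is an illustrative stand-in; at real parameter sizes the membership check is of course infeasible and appears only inside the probability statement):

```python
from itertools import product

def extractor_wins(Z, keygen, f, adversary, extractor, k: int) -> bool:
    # One run of the extractability experiment over domain {0,1}^k.
    # The extractor loses only if A's output y lies in the image of f_i
    # yet the extracted x' is not a preimage of y.
    z = Z(k)
    i = keygen(k)
    y = adversary(i, z)
    x_prime = extractor(i, z)
    domain = ("".join(bits) for bits in product("01", repeat=k))
    in_image = any(f(i, x) == y for x in domain)
    return (not in_image) or f(i, x_prime) == y
```

The paper's main theorem exhibits a distribution \(\mathcal {Z}\) and universal adversary for which, assuming PC\(di\mathcal {O}\) and public-coin CRHFs, every candidate extractor must lose this game noticeably.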
2.3 Succinct NonInteractive Arguments of Knowledge (SNARKs)
We restrict attention to publicly verifiable succinct arguments. We consider succinct non-interactive arguments of knowledge (SNARKs) with adaptive soundness in Sect. 3.2, in the case of specific distributional auxiliary input.
Definition 4
 Completeness: For any \((x,w) \in \mathcal {R}\),$$\begin{aligned} \Pr [ \mathsf{crs}\leftarrow \mathsf{CRSGen}(1^k); \pi \leftarrow \mathsf{Prove}(x,w,\mathsf{crs}) : \mathsf{Verify}(x,\pi ,\mathsf{crs})=1 ] =1. \end{aligned}$$In addition, \(\mathsf{Prove}(x,w,\mathsf{crs})\) runs in time \(\mathsf{poly}(k,y,t)\).

Succinctness: The length of the proof \(\pi \) output by \(\mathsf{Prove}(x,w,\mathsf{crs})\), as well as the running time of \(\mathsf{Verify}(x,\pi ,\mathsf{crs})\), is bounded by \(p(k+|x|)\), where p is a universal polynomial that does not depend on \(\mathcal {R}\). In addition, \(\mathsf{CRSGen}(1^k)\) runs in time \(\mathsf{poly}(k)\); in particular, \(\mathsf{crs}\) has length \(\mathsf{poly}(k)\).
 Adaptive Proof of Knowledge: For any non-uniform polynomial-size prover \(P^*\) there exists a non-uniform polynomial-size extractor \(\mathcal {E}_{P^*}\) such that, for all sufficiently large \(k \in \mathbb {N}\) and auxiliary input \(z \leftarrow \mathcal {Z}\), it holds that
In the full version of this work, we obtain, as an application of our succinct punctured program technique, zero-knowledge (ZK) succinct non-interactive arguments (SNARGs) without the extraction property. We refer the reader to [12] for a full treatment.
2.4 Puncturable PRFs
Our result makes use of puncturable PRFs: PRFs with the additional capability to generate keys that allow one to evaluate the function on all bit strings of a certain length except for a polynomial-size set of inputs. We focus on the simple case of puncturing a PRF at a single point: given a key \(s^*\) punctured at input x, one can efficiently evaluate the PRF at all points except x, while the evaluation at x remains pseudorandom. We refer the reader to [34] for a formal definition.
As observed in [8, 11, 31], the GGM tree-based PRF construction [22] yields puncturable PRFs based on any one-way function.
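The GGM puncturing idea is concrete enough to sketch directly: the punctured key consists of the seeds of the subtrees hanging off the path to the punctured point (here a SHA-256-based length-doubling PRG is a hypothetical stand-in for the PRG in the GGM construction):

```python
import hashlib

def prg(seed: bytes) -> tuple[bytes, bytes]:
    # Length-doubling PRG stand-in (illustration only, not a proof-grade PRG).
    return (hashlib.sha256(seed + b"0").digest(),
            hashlib.sha256(seed + b"1").digest())

def prf(seed: bytes, x: str) -> bytes:
    # GGM tree PRF: descend left/right according to the bits of x.
    node = seed
    for bit in x:
        node = prg(node)[int(bit)]
    return node

def puncture(seed: bytes, x: str) -> dict[str, bytes]:
    # Punctured key: the sibling seed at each step along the path to x,
    # keyed by its prefix (the path prefix with the last bit flipped).
    key, node = {}, seed
    for i, bit in enumerate(x):
        left, right = prg(node)
        key[x[:i] + ("1" if bit == "0" else "0")] = right if bit == "0" else left
        node = left if bit == "0" else right
    return key

def prf_punctured(pkey: dict[str, bytes], x: str) -> bytes:
    # Any point other than the punctured one extends exactly one stored
    # prefix; resume the GGM descent from that subtree's seed.
    for prefix, node in pkey.items():
        if x.startswith(prefix):
            for bit in x[len(prefix):]:
                node = prg(node)[int(bit)]
            return node
    raise ValueError("evaluation at the punctured point is not possible")
```

Evaluation from the punctured key agrees with the real PRF everywhere except the punctured point, whose value remains hidden given only the sibling seeds.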
3 PublicCoin DifferingInputs Obfuscation or Extractable OneWay Functions
In this section, we present our main result: a conflict between extractable one-way functions (EOWFs) w.r.t. a particular distribution of auxiliary information and public-coin differing-inputs obfuscation (“\(\mathsf{PC}di\mathcal {O}\)”) for Turing machines.
3.1 From PC\(di\mathcal {O}\) to Impossibility of \(\mathcal {Z}\)AuxiliaryInput EOWF
We demonstrate a bounded polynomial-time uniformly samplable distribution \(\mathcal {Z}\) (with bounded polynomial output length) and a public-coin differing-inputs sampler for Turing machines \(\mathcal {D}\) (over \(\mathsf{TM}\times \mathsf{TM}\)) such that if there exist public-coin differing-inputs obfuscation for Turing machines (and, in particular, for the program sampler \(\mathcal {D}\)) and public-coin collision-resistant hash functions (CRHFs), then there do not exist extractable one-way functions (EOWFs) w.r.t. auxiliary information sampled from the distribution \(\mathcal {Z}\). In our construction, \(\mathcal {Z}\) consists of an obfuscated Turing machine.
We emphasize that we provide a single distribution \(\mathcal {Z}\) of auxiliary inputs for which all candidate EOWF families \(\mathcal {F}\) with a given output length will fail. This is in contrast to the result of [7], which shows, for each candidate family \(\mathcal {F}\), that there exists a tailored distribution \(\mathcal {Z}_\mathcal {F}\) (whose size grows with \(\mathcal {F}\)) for which \(\mathcal {F}\) will fail.
Theorem 5
For every polynomial \(\ell \), there exists an efficient, uniformly samplable distribution \(\mathcal {Z}\) such that, assuming the existence of public-coin collision-resistant hash functions and public-coin differing-inputs obfuscation for Turing machines, there cannot exist \(\mathcal {Z}\)-auxiliary-input extractable one-way functions \(\{f_i : \{0,1\}^{k} \rightarrow \{0,1\}^{\ell (k)}\}\).
Proof
We construct an adversary \(\mathcal {A}\) and a desired distribution \(\mathcal {Z}\) on auxiliary inputs, such that for any alleged EOWF family \(\mathcal {F}\), there cannot exist an efficient extractor corresponding to \(\mathcal {A}\) given auxiliary input from \(\mathcal {Z}\) (assuming public-coin CRHFs and \(\mathsf{PC}di\mathcal {O}\)).
The Universal Adversary \(\mathcal {A}\) . We consider a universal PPT adversary \(\mathcal {A}\) that, given \((i,z) \in \{0,1\}^{\mathsf{poly}(k)} \times \{0,1\}^{n(k)}\), parses z as a Turing machine and returns z(i). Note that in our setting, i corresponds to the index of the selected function \(f_i \in \mathcal {F}\), and (looking ahead) the auxiliary input z will contain an obfuscated program.
The machines \(\varPi ^{h,s}_{i,y}\) perform a similar task, except that instead of having the entire PRF seed s hardcoded, they only have a punctured seed \(s^*\), derived from s by puncturing it at the point h(i) (i.e., enabling evaluation of the PRF at all points except h(i)). In addition, they have a hardwired output y to replace the punctured result. More specifically, on input a circuit description \(f_j\) (with explicitly specified index j), the program \(\varPi ^{h,s}_{i,y}\) first computes the hash value h(j); for any \(h(j) \ne h(i)\) it continues the computation as usual using the punctured PRF key, and for \(h(j) = h(i)\) it skips the PRF and \(U_k\) evaluation steps and directly outputs y. Note that because h is not injective, this puncturing may change the value of the program on multiple inputs \(f_j\) (corresponding to functions \(f_j \in \mathcal {F}\) with \(h(j)=h(i)\)). When the hardcoded value y is set to \(y = f_{i}(\mathsf{PRF}_s(h(i)))\), the program \(\varPi ^{h,s}_{i,y}\) agrees with \(\varPi ^{h,s}\) additionally on the input \(f_{i}\), but not necessarily on the other inputs \(f_j\) for which \(h(j)=h(i)\). (Indeed, although the hashes of their indices collide, so that the corresponding PRF outputs \(\mathsf{PRF}_s(h(j))\) agree, the final step applies different functions \(f_j\) to this value.)
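The control flow of the modified programs can be summarized by the following toy sketch (ours; `make_punctured_program`, `h`, and `punctured_eval_fn` are hypothetical stand-ins, not the actual Turing machines of Fig. 3):

```python
def make_punctured_program(h, punctured_eval_fn, hi, y):
    # Models Pi^{h,s}_{i,y}: like Pi^{h,s}, which outputs f_j(PRF_s(h(j))),
    # but carrying only an h(i)-punctured PRF key plus a hardwired output y
    # that is returned whenever the input's hash equals h(i).
    def program(f_j, j):
        t = h(j)
        if t == hi:
            return y  # patched point: skip the PRF and f_j evaluation steps
        return f_j(punctured_eval_fn(t))
    return program
```

The third assertion below highlights the point made above: an index j that merely collides with i under h also hits the patched branch, so the program's value changes there.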
We remark that indistinguishability obfuscation arguments will thus not apply to this scenario, since we are modifying the computed functionality. In contrast, differing-inputs obfuscation would guarantee that the two obfuscated programs are indistinguishable, since otherwise we could efficiently find one of the disagreeing inputs, which would correspond to a collision in the CRHF. Most importantly, this argument holds even if the randomness used to sample the program pair \((\varPi ^{h,s},\varPi ^{h,s}_{i,y})\) is revealed. Namely, we consider a program sampler that generates pairs \((\varPi ^{h,s},\varPi ^{h,s}_{i,y})\) from the corresponding distribution; this amounts to sampling a hash function h, an EOWF challenge index i, a PRF seed s, and an h(i)-puncturing \(s^*\) of the seed. All remaining values specifying the programs, such as \(y = f_i(\mathsf{PRF}_s(h(i)))\), are deterministically computed from \((h,i,s,s^*)\). Now, since \(\mathcal {H}\) is a public-coin CRHF family, revealing the randomness used to sample \(h \leftarrow \mathcal {H}\) is not detrimental to its collision resistance. And the values i, s, and \(s^*\) are completely independent of the CRHF security (i.e., a CRHF reduction could simply generate them on its own in order to break h). Therefore, we ultimately need only rely on public-coin \(di\mathcal {O}\).
\(\mathcal {A}\) Has No Extractor. We show that, based on the assumed security of the underlying tools, the constructed adversary \(\mathcal {A}\), given auxiliary input from the constructed distribution \(\mathcal {Z}= \{Z_k\}\), cannot have an extractor \(\mathcal {E}\) satisfying Definition 3:
Proposition 1
Proof
Step 1: Replace \(\mathcal {Z}\) with the “punctured” distribution \(\mathcal {Z}(i,y)\) . For every index i of the EOWF family \(\mathcal {F}\) and every \(k \in \mathbb {N}\), consider an alternative distribution \(\mathcal {Z}_k(i,y)\) that, instead of sampling and obfuscating a Turing machine \(\varPi ^{h,s}\) from the class \(\mathcal {M}\) as is done for \(\mathcal {Z}\), samples and obfuscates a Turing machine \(\varPi ^{h,s}_{i,y}\in \mathcal {M}^*\), as follows. First, it samples a hash function \(h \leftarrow \mathcal {H}_k\) and a PRF seed s as usual. It then generates a punctured PRF key \(s^*\leftarrow \mathsf{Punct}(s, h(i))\) that enables evaluation of the PRF at all points except the value h(i). For the specific index i, it computes the correct full evaluation \(y := f_{i}(\mathsf{PRF}_s(h(i)))\). Finally, \(\mathcal {Z}_k(i,y)\) outputs an obfuscation of the program \(\varPi ^{h,s}_{i,y}\) constructed as specified in Fig. 3 from the values \((h,s^*,y)\): i.e., \(\tilde{M} \leftarrow \mathsf{PC}di\mathcal {O}(\varPi ^{h,s}_{i,y})\). See Fig. 4 for a full description of \(\mathcal {Z}(i,y)\).
We now argue that the extractor \(\mathcal {E}\) must also succeed in extracting a preimage when given a value \(z^* \leftarrow \mathcal {Z}_k(i,y)\) from this modified distribution instead of \(\mathcal {Z}_k\).
We first argue that, by the (public-coin) collision resistance of the hash family \(\mathcal {H}\), the sampler algorithm \(\mathsf{Samp}\) is a public-coin differing-inputs sampler, as per Definition 1.
Claim
Proof
 CRHF adversary \(\mathcal {A}_\mathsf{CR}(1^k,h,r_h)\):
 1.
Imitate the remaining steps of \(\mathsf{Samp}\). That is, sample an EOWF index \(i = K_\mathcal {F}(1^k;r_i)\); a PRF seed \(s = K_\mathsf{PRF}(1^k;r_s)\); and a punctured PRF seed \(s^* = \mathsf{Punct}(s,h(i); r_*)\). Define \(y = f_i(\mathsf{PRF}_s(h(i)))\) and \(r = (r_h,r_i,r_s,r_*)\), and let \(M_0 = \varPi ^{h,s}\) and \(M_1 = \varPi ^{h,s}_{i,y}\).
 2.
Run \(\mathcal {A}_\mathsf{PC}(1^k,r)\) on the collection of randomness r used above. In response, \(\mathcal {A}_\mathsf{PC}\) returns a pair \((x,1^t)\).
 3.
\(\mathcal {A}_\mathsf{CR}\) outputs the pair (i, x) as an alleged collision in the challenge hash function h.
Now, by assumption, the value x generated by \(\mathcal {A}_\mathsf{PC}\) satisfies (in particular) that \(M_0(x) \ne M_1(x)\). From the definition of \(M_0,M_1\) (i.e., \(\varPi ^{h,s},\varPi ^{h,s}_{i,y}\)), this must mean that \(h(i) = h(x)\) (since all values with \(h(x) \ne h(i)\) were not changed from \(\varPi ^{h,s}\) to \(\varPi ^{h,s}_{i,y}\)), and that \(i \ne x\) (since \(\varPi ^{h,s}_{i,y}(i)\) was specifically “patched” to the correct output value \(\varPi ^{h,s}(i)\)). That is, \(\mathcal {A}_\mathsf{CR}\) successfully identifies a collision with the same probability \(\alpha (k)\), which must thus be negligible.
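The collision logic of this last step can be checked on a toy instance (our illustration; the deliberately colliding `h` and dictionary-based “programs” are trivial stand-ins for the actual construction):

```python
def pi(h, prf, f_table, patch=None):
    # Toy model of Pi^{h,s} (patch=None) and Pi^{h,s}_{i,y} (patch=(h(i), y)):
    # on index j, output f_j applied to the PRF value at h(j), except that the
    # patched program returns y whenever the hash equals h(i).
    def program(j):
        t = h(j)
        if patch is not None and t == patch[0]:
            return patch[1]
        return f_table[j](prf[t])
    return program

def differing_inputs(p0, p1, domain):
    # All inputs on which the two programs disagree.
    return [x for x in domain if p0(x) != p1(x)]
```

As the assertions below show, every disagreeing input x satisfies \(h(x)=h(i)\) with \(x \ne i\), i.e., it is exactly a collision with the challenge index.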
We now show that this implies, by the security of the public-coin \(di\mathcal {O}\), that our original EOWF extractor \(\mathcal {E}\) must succeed with nearly equivalent probability in the EOWF challenge when, instead of receiving (real) auxiliary input from \(\mathcal {Z}_k\), both \(\mathcal {E}\) and \(\mathcal {A}\) are given auxiliary input from the fake distribution \(\mathcal {Z}_k(i,y)\). (Recall that \(\epsilon \) is defined as \(\mathcal {E}\)’s success probability in the same experiment as below, but with \(z \leftarrow \mathcal {Z}_k\) instead of \(z^* \leftarrow \mathcal {Z}_k(i,y)\).)
Lemma 1
Proof
Note that given \(z^* \leftarrow \mathcal {Z}_k(i,y)\) (which corresponds to an obfuscated program of the form \(\varPi ^{h,s}_{i,y}\)) our EOWF adversary \(\mathcal {A}\) indeed will still output \(\varPi ^{h,s}_{i,y}(i) = y := f_i(\mathsf{PRF}_s(h(i)))\) (see Figs. 3,4).
Now, suppose there exists a non-negligible function \(\alpha (k)\) for which the probability in Eq. (2) is less than \(\epsilon (k)-\alpha (k)\). We directly use such an \(\mathcal {E}\) to design another adversary \(\mathcal {A}_{di\mathcal {O}}\) that contradicts the security of the public-coin \(di\mathcal {O}\) with respect to the program pair sampler \(\mathsf{Samp}\) (which we showed in Claim 3.1 to be a valid public-coin differing-inputs sampler). Recall that the \(di\mathcal {O}\) challenger samples a program pair \((\varPi ^{h,s},\varPi ^{h,s}_{i,y}) \leftarrow \mathsf{Samp}(1^k,r)\), selects a random \(M \leftarrow \{\varPi ^{h,s},\varPi ^{h,s}_{i,y}\}\) to obfuscate as \(\tilde{M} \leftarrow \mathsf{PC}di\mathcal {O}(1^k,M)\), and gives as a challenge the pair \((r, \tilde{M})\) consisting of the randomness used by \(\mathsf{Samp}\) and the obfuscated program. Define \(\mathcal {A}_{di\mathcal {O}}\) (who wishes to distinguish which program was selected) as follows.
 PC\({di\mathcal {O}}\) adversary \(\mathcal {A}_{di\mathcal {O}}(1^k,r,\tilde{M})\):
 1.
Parse the given randomness r used in \(\mathsf{Samp}\) as \(r = (r_h,r_i,r_s,r_*)\) (see Fig. 5).
 2.
Recompute the “challenge index” \(i = K_\mathcal {F}(1^k;r_i)\). Let \(z^* = \tilde{M}\).
 3.
Run the extractor algorithm \(\mathcal {E}(i;z^*)\), and receive an alleged preimage \(x'\).
 4.
Recompute \(h=\mathcal {H}_k(r_h)\) and \(s = K_\mathsf{PRF}(1^k;r_s)\), again using the randomness from r.
 5.
If \(f_i(x') = f_i(\mathsf{PRF}_s(h(i)))\) — i.e., if \(\mathcal {E}\) succeeded in extracting a preimage — then \(\mathcal {A}_{di\mathcal {O}}\) outputs 1. Otherwise, \(\mathcal {A}_{di\mathcal {O}}\) outputs 0.
Now, if \(\tilde{M}\) is an obfuscation of \(\varPi ^{h,s}\), then this experiment corresponds directly to the EOWF challenge in which \(\mathcal {E}\) (and \(\mathcal {A}\)) is given auxiliary input \(z \leftarrow \mathcal {Z}_k\). On the other hand, if \(\tilde{M}\) is an obfuscation of \(\varPi ^{h,s}_{i,y}\), then the experiment corresponds directly to the same challenge in which \(\mathcal {E}\) (and \(\mathcal {A}\)) is given auxiliary input \(z^* \leftarrow \mathcal {Z}_k(i,y)\). Thus, \(\mathcal {A}_{di\mathcal {O}}\) succeeds in distinguishing these two cases with probability at least \(\epsilon (k) - [\epsilon (k)-\alpha (k)] = \alpha (k)\). By the security of \(\mathsf{PC}di\mathcal {O}\), it hence follows that \(\alpha (k)\) must be negligible.
Step 2: Replace the “correct” hardcoded y in \(\mathcal {Z}(i,y)\) with a random \(f_{i}\) evaluation. Next, we consider another experiment in which \(\mathcal {Z}_k(i,y)\) is altered to a nearly identical distribution \(\mathcal {Z}_k(i,u)\) where, instead of hardcoding the “correct” i-evaluation value \(y=f_{i}(\mathsf{PRF}_s(h(i)))\) in the generated “punctured” program \(\varPi ^{h,s}_{i,y}\), the distribution \(\mathcal {Z}_k(i,u)\) simply samples a random \(f_i\) output \(y = f_{i}(u)\) for an independent uniform \(u \leftarrow \{0,1\}^{k}\). We claim that the original EOWF extractor \(\mathcal {E}\) still succeeds in finding a preimage when given this new auxiliary input distribution:
Lemma 2
Proof
This follows from the fact that \(\mathsf{PRF}_s(h(i))\) is pseudorandom, even given the h(i)-punctured key \(s^*\).
Formally, consider an algorithm \(\mathcal {A}^0_\mathsf{PRF}\) which, on input the security parameter \(1^k\), a pair of values i, h, and a pair \(s^*, x\) (that will eventually correspond to a challenge punctured PRF key, and either \(\mathsf{PRF}_s(h(i))\) or random u), performs the following steps.
 Algorithm \(\mathcal {A}^0_\mathsf{PRF}(1^k,i,h,s^*,x)\):
 1.
Take \(y = f_{i}(x)\), and obfuscate the associated program \(\varPi ^{h,s}_{i,y}\): i.e., \(z^{**} \leftarrow \mathsf{PC}di\mathcal {O}(1^k,\varPi ^{h,s}_{i,y})\).
 2.
Run the EOWF extractor given index i and auxiliary input \(z^{**}\): \(x' \leftarrow \mathcal {E}(i;z^{**})\).
 3.
Output 0 if \(\mathcal {E}\) succeeds in extracting a valid preimage: i.e., if \(f_{i}(x') = y = f_{i}(x)\). Otherwise, output a random bit \(b \leftarrow \{0,1\}\).
Consider a non-uniform punctured-PRF adversary \(\mathcal {A}^I_\mathsf{PRF}\) (with the ensemble \(I=\{i_k,h_k\}\) hardcoded) that first selects the challenge point \(h_k(i_k)\); receives the PRF challenge information \((s^*,x)\) for this point; executes \(\mathcal {A}^0_\mathsf{PRF}\) on input \((1^k,i_k,h_k,s^*,x)\); and outputs the corresponding bit b output by \(\mathcal {A}^0_\mathsf{PRF}\). Then, by (5), it follows that \(\mathcal {A}^I_\mathsf{PRF}\) breaks the security of the punctured PRF.
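To see why outputting a random bit upon failure yields a distinguisher, a short calculation helps (included here for intuition; the shorthands \(p_{\mathsf{real}}, p_{\mathsf{rand}}\) are ours). Writing \(p_x\) for the probability that \(\mathcal {E}\) extracts a valid preimage when the challenge value is x, we have

```latex
\Pr\big[\mathcal{A}^0_{\mathsf{PRF}} \text{ outputs } 0 \,\big|\, x\big]
  \;=\; p_x + \tfrac{1}{2}\,(1 - p_x)
  \;=\; \tfrac{1}{2} + \tfrac{p_x}{2},
```

so the advantage in distinguishing \(x = \mathsf{PRF}_s(h(i))\) from \(x = u\) is \(\tfrac{1}{2}\,|p_{\mathsf{real}} - p_{\mathsf{rand}}|\), which is non-negligible precisely when the extractor's success probabilities under the two distributions differ non-negligibly.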
Step 3: Such an extractor breaks the one-wayness of the EOWF. Finally, we observe that this means that \(\mathcal {E}\) can be used to break the one-wayness of the original function family \(\mathcal {F}\). Indeed, given a random key i and a challenge output \(y = f_i(u)\), an inverter can simply sample a hash function h and an h(i)-punctured PRF seed \(s^*\) on its own, construct the program \(\varPi ^{h,s}_{i,y}\) with its challenge y hardcoded in, and sample an obfuscation \(z^{**} \leftarrow \mathsf{PC}di\mathcal {O}(\varPi ^{h,s}_{i,y})\). Finally, it runs \(\mathcal {E}(i,z^{**})\) to invert y, succeeding with probability \(\epsilon (k)-\mathsf{negl}(k)\).
This concludes the proof of Theorem 5.
3.2 PC\({di\mathcal {O}}\) or SNARKs
 1.
Assuming SNARKs for \(\mathsf {NP}\), there exists an efficient distribution \(\mathcal {Z}\) such that public-coin differing-inputs obfuscation for \(NC^1\) implies that there cannot exist PEOWFs \(\{f : \{0,1\}^k \rightarrow \{0,1\}^k\}\) w.r.t. \(\mathcal {Z}\).
 2.
PEOWFs \(\{f : \{0,1\}^k \rightarrow \{0,1\}^k\}\) w.r.t. this auxiliary input distribution \(\mathcal {Z}\) are implied by the existence of SNARKs for \(\mathsf {NP}\) secure w.r.t. a second efficient auxiliary input distribution \(\mathcal {Z}'\), as shown in [5].
 3.
Thus, one of these conflicting hypotheses must be false. That is, there exists an efficient distribution \(\mathcal {Z}'\) such that, assuming the existence of FHE with decryption in \(NC^1\) and collision-resistant hash functions, either: (1) public-coin differing-inputs obfuscation for \(NC^1\) does not exist, or (2) SNARKs for \(\mathsf {NP}\) w.r.t. \(\mathcal {Z}'\) do not exist.
Note that we focus on the specific case of PEOWFs with k-bit inputs and k-bit outputs, as this suffices to derive the desired contradiction; however, the theorems that follow extend also to the case of general PEOWF output length (demonstrating an efficient distribution \(\mathcal {Z}\) that rules out each potential output length \(\ell (k)\)).
Proximity EOWFs. We begin by defining Proximity EOWFs.
Proximity Extractable One-Way Functions (PEOWFs).
In a Proximity EOWF (PEOWF), the extractable function family \(\{f_i\}\) is associated with a “proximity” equivalence relation \(\sim \) on the range of \(f_i\), and the one-wayness and extractability properties are modified with respect to this relation. One-wayness is strengthened: not only must it be hard to find an exact preimage of v, it must also be hard to find a preimage of any equivalent value \(v' \sim v\). The extractability requirement is weakened accordingly: the extractor need not output an exact preimage of v, but only a preimage of some equivalent value \(v' \sim v\).
As an example, consider functions of the form \(f : x \mapsto (f_1(x),f_2(x))\) and the equivalence relation under which range elements \((a,b)\sim (a,b')\) are equivalent whenever their first components agree. Then the proximity extraction property requires, for any adversary \(\mathcal {A}\) who outputs an image element \((a,b) \in Range(f)\), that there exist an extractor \(\mathcal {E}\) finding an input x such that \(f(x) = (a,b')\) for some \(b'\) not necessarily equal to b.
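A toy instance of this example, together with its public tester, can be sketched as follows (our illustration; the hash-based components are placeholders, not candidate one-way functions):

```python
import hashlib

def f(x: bytes) -> tuple[bytes, bytes]:
    # Toy f : x -> (f1(x), f2(x)); the two components are domain-separated
    # hashes, purely to make the sketch concrete.
    return (hashlib.sha256(b"1" + x).digest(),
            hashlib.sha256(b"2" + x).digest())

def related(y1: tuple, y2: tuple) -> bool:
    # Public tester T for the relation ~: two images are equivalent
    # iff their first components agree.
    return y1[0] == y2[0]
```

Here `related` is deterministic and efficient, matching the "publicly testable relation" requirement below: anyone can check \(y \sim y'\) without secret information.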
In this work, we allow the relation \(\sim \) to depend on the function index i, but require that \(\sim \) be publicly (and efficiently) testable. We further consider non-uniform adversaries and extraction algorithms, and (in line with this work) auxiliary inputs coming from a specified distribution \(\mathcal {Z}\).
Definition 5
 (Strengthened) One-wayness: For every non-uniform polynomial-time \(\mathcal {A}\) and sufficiently large security parameter \(k \in \mathbb {N}\),$$\begin{aligned} \Pr \Big [ z \leftarrow \mathcal {Z}_k;~ i \leftarrow \mathcal{K_F}(1^k);~ x \leftarrow \{0,1\}^{k};~ x' \leftarrow \mathcal {A}(i, f_i(x); z)&\\ : f_i(x') \sim f_i(x)&\Big ] \le \mathsf{negl}(k). \end{aligned}$$
 (Weakened) Extractability: For any non-uniform polynomial-time adversary \(\mathcal {A}\), there exists a non-uniform polynomial-time extractor \(\mathcal {E}\) such that, for sufficiently large security parameter \(k \in \mathbb {N}\),

Publicly Testable Relation: There exists a deterministic polynomial-time machine \(\mathcal {T}\) such that, given the function index i, \(\mathcal {T}\) accepts \(y,y' \in \{0,1\}^{\ell (k)}\) if and only if \(y \sim _k y'\).
( PC \({di\mathcal {O}}\) for \(NC^1\) + PC-CRHF + FHE + SNARK ) \(\Rightarrow \) No \(\mathcal {Z}\) -PEOWF. We now show that, assuming the existence of public-coin collision-resistant hash functions (CRHFs) and fully homomorphic encryption (FHE) with decryption in \(NC^1\),^{8} there exist efficiently computable distributions \(\mathcal {Z}_\mathsf{SNARK}, \mathcal {Z}_\mathsf{PEOWF}\) such that if there exist public-coin differing-inputs obfuscators for \(NC^1\) circuits and SNARKs w.r.t. auxiliary input \(\mathcal {Z}_\mathsf{SNARK}\), then there cannot exist PEOWFs w.r.t. auxiliary input \(\mathcal {Z}_\mathsf{PEOWF}\). This takes place in two steps.
First, we remark that a proof identical to that of Theorem 5 rules out the existence of \(\mathcal {Z}\)-auxiliary-input proximity EOWFs, in addition to standard EOWFs, based on the same assumptions: namely, public-coin differing-inputs obfuscation for Turing machines and public-coin collision-resistant hash functions. Indeed, assuming the existence of a PEOWF extractor \(\mathcal {E}\) for the adversary \(\mathcal {A}\) and auxiliary input distribution \(\mathcal {Z}\) (which extracts a “related” preimage of the target value), the same procedure yields a PEOWF inverter that similarly extracts a “related” preimage of any challenge output. In the reduction, it is merely required that the success of \(\mathcal {E}\) be efficiently and publicly testable (this is used to construct a distinguishing adversary for the differing-inputs obfuscation scheme in Step 1). But this is directly implied by the public testability of the PEOWF relation \(\sim \), as specified in Definition 5.
Theorem 6
There exists an efficient, uniformly samplable distribution \(\mathcal {Z}\) such that, assuming the existence of public-coin collision-resistant hash functions and public-coin differing-inputs obfuscation for polynomial-size Turing machines, there cannot exist (publicly testable) \(\mathcal {Z}\)-auxiliary-input PEOWFs \(\{f_i : \{0,1\}^k \rightarrow \{0,1\}^k\}\).
Now, in [30], it was shown that public-coin differing-inputs obfuscation for the class of all polynomial-time Turing machines can be achieved by bootstrapping up from public-coin differing-inputs obfuscation for circuits in the class \(NC^1\), assuming the existence of FHE with decryption in \(NC^1\), public-coin CRHFs, and public-coin SNARKs for \(\mathsf {NP}\).
Putting this together with Theorem 6, we thus have the following corollary.
Corollary 1
There exists an efficient, uniformly samplable distribution \(\mathcal {Z}\) such that, assuming the existence of public-coin SNARKs, FHE with decryption in \(NC^1\), and public-coin differing-inputs obfuscation for \(NC^1\), there cannot exist PEOWFs \(\{f_i : \{0,1\}^k \rightarrow \{0,1\}^k\}\) w.r.t. auxiliary input \(\mathcal {Z}\).
( SNARK + CRHF ) \(\Rightarrow \) \(\mathcal {Z}\) -PEOWF. As shown in [5], Proximity EOWFs (PEOWFs) with respect to an auxiliary input distribution \(\mathcal {Z}\) are implied by collision-resistant hash functions (CRHFs) and SNARKs secure with respect to a related auxiliary input distribution \(\mathcal {Z}'\).^{9}
Now (as proved in [5]), the resulting function family will be a PEOWF with respect to auxiliary input \(\mathcal {Z}\) if the underlying SNARK system is secure with respect to an augmented auxiliary input distribution \(\mathcal {Z}_\mathsf{SNARK}:= (\mathcal {Z},h)\), formed by concatenating a sample from \(\mathcal {Z}\) with a function index h sampled from the collision-resistant hash function family. (Note that we will be considering public-coin CRHFs, in which case h is uniform.)
Theorem 7
([5]). There exist efficient, uniformly samplable distributions \(\mathcal {Z},\mathcal {Z}_\mathsf{SNARK}\) such that, assuming the existence of collision-resistant hash functions and SNARKs for \(\mathsf {NP}\) secure w.r.t. the auxiliary input distribution \(\mathcal {Z}_\mathsf{SNARK}\), there exist PEOWFs \(\{f_i : \{0,1\}^k \rightarrow \{0,1\}^k\}\) w.r.t. \(\mathcal {Z}\).
Reaching a Standoff. Observe that the conclusions of Corollary 1 and Theorem 7 are in direct contradiction. Thus, it must be that one of the two sets of assumptions is false. Namely,
Corollary 2
One of the following does not exist:
SNARKs w.r.t. auxiliary input distribution \(\mathcal {Z}_\mathsf{SNARK}\).

Public-coin differing-inputs obfuscation for \(NC^1\).
More explicitly, we have that \(\mathcal {Z}_\mathsf{SNARK}= (\mathcal {Z},U)\), where \(\mathcal {Z}\) consists of an obfuscated program, and U is a uniform string (corresponding to a randomly sampled index from a public-coin CRHF family).
Footnotes
 1.
The notion of indistinguishability obfuscation [2] requires that obfuscations \(\mathcal {O}(C_1)\) and \(\mathcal {O}(C_2)\) of any two equivalent circuits \(C_1\) and \(C_2\) (i.e., whose outputs agree on all inputs) from some class \(\mathcal C\) are computationally indistinguishable. A candidate construction for general-purpose indistinguishability obfuscation was recently given by Garg et al. [18].
 2.
As far as we know, the only exceptions are in the context of zero-knowledge simulation, where the extractor is used in the simulation (as opposed to being used as part of a reduction), and we require simulation w.r.t. arbitrary auxiliary inputs. Nevertheless, as pointed out in the works on zero knowledge [26, 27], to achieve “plain” zero knowledge [3, 25] (where the verifier does not receive any auxiliary input), weaker “bounded” auxiliary input assumptions suffice.
 3.
 4.
 5.
 6.
That is, a PRF where we can surgically remove one point in the domain of the PRF, keeping the rest of the PRF intact, and yet, even if we are given the seed of the punctured PRF, the value of the original PRF on the surgically removed point remains computationally indistinguishable from random.
 7.
That is, [28] shows that for every trapdoor one-to-one function, there exists some way to instantiate the random oracle so that the resulting scheme is secure. In contrast, our result shows that there exists a single instantiation that works no matter what the trapdoor function is.
 8.
 9.
[5] consider the setting of arbitrary auxiliary input; however, their construction directly implies similar results for specific auxiliary input distributions.
Acknowledgements
The authors would like to thank Kai-Min Chung for several insightful discussions.
References
 1. Ananth, P., Boneh, D., Garg, S., Sahai, A., Zhandry, M.: Differing-inputs obfuscation and applications. Cryptology ePrint Archive, Report 2013/689 (2013)
 2. Barak, B., Goldreich, O., Impagliazzo, R., Rudich, S., Sahai, A., Vadhan, S.P., Yang, K.: On the (im)possibility of obfuscating programs. J. ACM 59(2), Article No. 6 (2012)
 3. Barak, B., Lindell, Y., Vadhan, S.P.: Lower bounds for non-black-box zero knowledge. J. Comput. Syst. Sci. 72(2), 321–391 (2006)
 4. Bellare, M., Rogaway, P.: Random oracles are practical: a paradigm for designing efficient protocols. In: ACM Conference on Computer and Communications Security, pp. 62–73 (1993)
 5. Bitansky, N., Canetti, R., Chiesa, A., Tromer, E.: From extractable collision resistance to succinct non-interactive arguments of knowledge, and back again. In: ITCS, pp. 326–349 (2012)
 6. Bitansky, N., Canetti, R., Chiesa, A., Tromer, E.: Recursive composition and bootstrapping for SNARKs and proof-carrying data. In: STOC, pp. 111–120 (2013)
 7. Bitansky, N., Canetti, R., Paneth, O., Rosen, A.: On the existence of extractable one-way functions. In: STOC 2014, pp. 505–514 (2014)
 8. Boneh, D., Waters, B.: Constrained pseudorandom functions and their applications. In: Sako, K., Sarkar, P. (eds.) ASIACRYPT 2013, Part II. LNCS, vol. 8270, pp. 280–300. Springer, Heidelberg (2013)
 9. Boneh, D., Zhandry, M.: Multiparty key exchange, efficient traitor tracing, and more from indistinguishability obfuscation. In: Garay, J.A., Gennaro, R. (eds.) CRYPTO 2014, Part I. LNCS, vol. 8616, pp. 480–499. Springer, Heidelberg (2014)
 10. Boyle, E., Chung, K.-M., Pass, R.: On extractability obfuscation. In: Lindell, Y. (ed.) TCC 2014. LNCS, vol. 8349, pp. 52–73. Springer, Heidelberg (2014)
 11. Boyle, E., Goldwasser, S., Ivan, I.: Functional signatures and pseudorandom functions. In: Krawczyk, H. (ed.) PKC 2014. LNCS, vol. 8383, pp. 501–519. Springer, Heidelberg (2014)
 12. Boyle, E., Pass, R.: Limits of extractability assumptions with distributional auxiliary input. Cryptology ePrint Archive, Report 2013/703 (2013)
 13. Brakerski, Z., Vaikuntanathan, V.: Lattice-based FHE as secure as PKE. In: Innovations in Theoretical Computer Science, ITCS 2014, pp. 1–12 (2014)
 14. Canetti, R., Dakdouk, R.R.: Towards a theory of extractable functions. In: Reingold, O. (ed.) TCC 2009. LNCS, vol. 5444, pp. 595–613. Springer, Heidelberg (2009)
 15. Damgård, I.B.: Towards practical public key systems secure against chosen ciphertext attacks. In: Feigenbaum, J. (ed.) CRYPTO 1991. LNCS, vol. 576, pp. 445–456. Springer, Heidelberg (1992)
 16. Damgård, I., Faust, S., Hazay, C.: Secure two-party computation with low communication. In: Cramer, R. (ed.) TCC 2012. LNCS, vol. 7194, pp. 54–74. Springer, Heidelberg (2012)
 17. Garg, S., Gentry, C., Halevi, S., Raykova, M.: Two-round secure MPC from indistinguishability obfuscation. In: Lindell, Y. (ed.) TCC 2014. LNCS, vol. 8349, pp. 74–94. Springer, Heidelberg (2014)
 18. Garg, S., Gentry, C., Halevi, S., Raykova, M., Sahai, A., Waters, B.: Candidate indistinguishability obfuscation and functional encryption for all circuits. In: FOCS (2013)
 19. Garg, S., Gentry, C., Halevi, S., Raykova, M., Sahai, A., Wichs, D.: On the implausibility of differing-inputs obfuscation and extractable witness encryption with auxiliary input. Cryptology ePrint Archive, Report 2013/860 (2013)
 20. Garg, S., Gentry, C., Halevi, S., Wichs, D.: On the implausibility of differing-inputs obfuscation and extractable witness encryption with auxiliary input. In: Garay, J.A., Gennaro, R. (eds.) CRYPTO 2014, Part I. LNCS, vol. 8616, pp. 518–535. Springer, Heidelberg (2014)
 21. Gentry, C., Sahai, A., Waters, B.: Homomorphic encryption from learning with errors: conceptually-simpler, asymptotically-faster, attribute-based. In: Canetti, R., Garay, J.A. (eds.) CRYPTO 2013, Part I. LNCS, vol. 8042, pp. 75–92. Springer, Heidelberg (2013)
 22. Goldreich, O., Goldwasser, S., Micali, S.: How to construct random functions. J. ACM 33(4), 792–807 (1986)
 23. Goldwasser, S., Kalai, Y.T., Popa, R.A., Vaikuntanathan, V., Zeldovich, N.: How to run Turing machines on encrypted data. In: Canetti, R., Garay, J.A. (eds.) CRYPTO 2013, Part II. LNCS, vol. 8043, pp. 536–553. Springer, Heidelberg (2013)
 24. Goldwasser, S., Lin, H., Rubinstein, A.: Delegation of computation without rejection problem from designated verifier CS-proofs. Cryptology ePrint Archive, Report 2011/456 (2011)
 25. Goldwasser, S., Micali, S., Rackoff, C.: The knowledge complexity of interactive proof systems. SIAM J. Comput. 18(1), 186–208 (1989)
 26. Gupta, D., Sahai, A.: On constant-round concurrent zero-knowledge from a knowledge assumption. In: Progress in Cryptology – INDOCRYPT 2014, pp. 71–88 (2014)
 27. Hada, S., Tanaka, T.: On the existence of 3-round zero-knowledge protocols. In: Krawczyk, H. (ed.) CRYPTO 1998. LNCS, vol. 1462, pp. 408–423. Springer, Heidelberg (1998)
 28. Hohenberger, S., Sahai, A., Waters, B.: Replacing a random oracle: full domain hash from indistinguishability obfuscation. In: Nguyen, P.Q., Oswald, E. (eds.) EUROCRYPT 2014. LNCS, vol. 8441, pp. 201–220. Springer, Heidelberg (2014)
 29. Hsiao, C.-Y., Reyzin, L.: Finding collisions on a public road, or do secure hash functions need secret coins? In: Franklin, M. (ed.) CRYPTO 2004. LNCS, vol. 3152, pp. 92–105. Springer, Heidelberg (2004)
 30. Ishai, Y., Pandey, O., Sahai, A.: Public-coin differing-inputs obfuscation and its applications. In: Dodis, Y., Nielsen, J.B. (eds.) TCC 2015, Part II. LNCS, vol. 9015, pp. 668–697. Springer, Heidelberg (2015)
 31. Kiayias, A., Papadopoulos, S., Triandopoulos, N., Zacharias, T.: Delegatable pseudorandom functions and applications. In: CCS 2013, pp. 669–684 (2013)
 32. Micali, S.: CS proofs (extended abstract). In: FOCS, pp. 436–453 (1994)
 33. Naor, M.: On cryptographic assumptions and challenges. In: Boneh, D. (ed.) CRYPTO 2003. LNCS, vol. 2729, pp. 96–109. Springer, Heidelberg (2003)
 34. Sahai, A., Waters, B.: How to use indistinguishability obfuscation: deniable encryption, and more. In: STOC 2014, pp. 475–484 (2014)
 35. Valiant, P.: Incrementally verifiable computation or proofs of knowledge imply time/space efficiency. In: Canetti, R. (ed.) TCC 2008. LNCS, vol. 4948, pp. 1–18. Springer, Heidelberg (2008)