
1 Introduction

In the theory of point-function obfuscation (PO), there are many different goals and definitions. It is (at least to us) hard territory to navigate. Meanwhile, there are few constructions; indeed, there are fewer constructions than there are definitions. And the ones that exist use strong assumptions. We try to bring some structure and unity to this area via a parameterized definitional framework, generic constructions and relations between definitions.

1.1 The State of Point-Function Obfuscation

A point function with target \(k\in \{0,1\}^*\) is the circuit \(\mathbf {I}_k\) that on input \(k'\in \{0,1\}^{|k|}\) returns 1 if \(k'=k\) and 0 otherwise. A point-function obfuscator \(\mathsf {Obf}\) takes input \(\mathbf {I}_k\) and returns another circuit \(\overline{\mathrm {P}}\) that is functionally equivalent to \(\mathbf {I}_k\), meaning on input \(k'\in \{0,1\}^{|k|}\) it also returns 1 if \(k'=k\) and 0 otherwise. Security requires that \(\overline{\mathrm {P}}\) hides k. We now discuss the state of the area with regard to both definitions and constructions.
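To fix intuition, here is a minimal Python sketch of a point function and of a toy obfuscator. The salted SHA-256 hash stands in for a random oracle, so this only illustrates functional equivalence and what "hiding \(k\)" means; it is not one of the constructions studied in this paper, and the names `make_point_function` and `toy_obfuscate` are ours.

```python
import hashlib
import secrets

def make_point_function(k: bytes):
    """The point function I_k: on input k' of the same length, return 1 iff k' = k."""
    return lambda kp: 1 if kp == k else 0

def toy_obfuscate(k: bytes):
    """Toy obfuscator: store a salted hash of k rather than k itself. The returned
    circuit is functionally equivalent to I_k but does not contain k in the clear.
    SHA-256 stands in for a random oracle here (an illustrative assumption)."""
    salt = secrets.token_bytes(16)
    h = hashlib.sha256(salt + k).digest()
    return lambda kp: 1 if hashlib.sha256(salt + kp).digest() == h else 0
```

For example, `P = toy_obfuscate(k)` satisfies `P(k) == 1` and, except with negligible probability, `P(kp) == 0` for `kp != k`.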

Definitions. The theory of PO contains a large number of different goals and definitions. Sometimes there is auxiliary information [14, 21, 35], other times not [23, 27, 39, 47]. Sometimes security pertains to a single target, other times to many [25]. Sometimes the formalization is a VBB-style simulation based one, other times indistinguishability based. Within each category, there are variants, for example, for indistinguishability, whether the necessary unpredictability condition on targets should be for polynomial-time or unbounded adversaries, and with negligible or sub-exponential advantage. And this list is not complete.

While from one perspective there are too many definitions, from other perspectives there are too few. Think of different elements that have been considered (for example whether or not auxiliary information is present, one target or many, polynomial-time or unbounded predictability adversaries, ..., in the context of an indistinguishability-based definition) as dimensions or axes in a multi-dimensional space. Then definitions in the literature can be seen as capturing some points in this space. But there is no systematic attempt to look in some unified way at all the points in this space. There is a connection that does not seem to have been explicitly made and pursued, namely that definitionally, there is little to no difference between PO and deterministic public-key encryption (DPKE) [3, 4, 17] or other forms of entropic security [30, 36]. Existing systematic and in-depth consideration of DPKE definitions and relations between them [4, 17] can be exploited to obtain semantic-security formalizations of PO that address issues with current definitions, and also to obtain definitional relations.

Constructions. Existing constructions use strong assumptions and achieve only some of the goals. A primary construction is from the AI-DHI (Auxiliary-Input Diffie-Hellman Inversion) assumption [14, 23]. Calling it a construction is a bit of a stretch; the security just amounts to the assumption. The latter cannot co-exist with VGBO (Virtual Grey Box Obfuscation) [10]. That doesn’t mean it is wrong (perhaps VGBO does not exist) but it would be preferable to base PO on assumptions not in contention with VGBO. Wee [47] provides a construction based on a fixed permutation about which a novel, strong uninvertibility assumption is made. He only proves security in the absence of auxiliary information, and GK [35] show that the construction does not in fact provide security in the presence of auxiliary information. However BP [14] specify an extension of Wee’s construction with a family of permutations rather than a fixed one, and show, under a novel assumption called Assumption 2.1 in their paper, that it achieves security with targets that are hard to predict given the auxiliary information. BP [14] explain that Assumption 2.1 asks for (a weak form of) extractability, making it a strong assumption in light of the impossibility of related extractable primitives [13]. DKL [29] use a novel assumption they call LSN to give a construction for targets that are exponentially hard to predict given the auxiliary information. BHK [6] give a construction for statistically hard to predict targets and no auxiliary information based on a multi-key version of their UCE assumption. There are simple constructions in the ROM [39].

In summary, there are few (standard-model) constructions and those that exist all use strong and sometimes novel assumptions. Also, each construction achieves a different variant of the goal and it is hard to visualize, or say in a concise way, what has been done. The framework that we now discuss provides language to do this.

1.2 Contributions in Brief

We pick one, simple indistinguishability-based definitional template. Using this, we provide a framework parameterized by a class \(\mathbf {X}\) of objects we call target generators, giving a definition of what it means for a point-function obfuscator to be \(\mathrm {IND}[\mathbf {X}]\) secure. This allows us to recover and explain different notions in the literature as each corresponding to a choice of \(\mathbf {X}\), and also obtain many natural new ones, points in the above-mentioned multi-dimensional space that had not been explicitly considered.

This taxonomy leads to a compelling and general new question: Is it possible to find a generic construction, meaning a compiler that given an arbitrary \(\mathbf {X}\) returns a point-function obfuscator secure relative to it? We answer this in the affirmative by providing three such generic constructions. As a consequence we obtain new constructions for both old and new forms of PO.

We then step back to consider other definitions of PO. These include existing simulation and indistinguishability style notions, as well as new, semantic security style ones emanating from the above-mentioned connection to DPKE. We formulate these also in a parameterized framework and then provide relations (implications and separations) between these notions and our IND notion.

We now look at these three contributions in more detail.

1.3 Definitional Framework

Recall that a point-function obfuscator \(\mathsf {Obf}\) takes input \(\mathbf {I}_{k}\) and returns another circuit \(\overline{\mathrm {P}}\) that is functionally equivalent to \(\mathbf {I}_k\). Security requires that \(\overline{\mathrm {P}}\) hides \(k\). We define a target generator \(\mathsf {X}\) as a polynomial-time algorithm that on input the security parameter returns a vector \(\mathbf {k}\) of target points together with auxiliary information \(a\). We measure security of a candidate point-function obfuscator \(\mathsf {Obf}\) relative to \(\mathsf {X}\). To do this, we associate to an adversary \(\mathcal{A}\) its advantage \(\mathsf {Adv}^{\mathsf {ind}}_{\mathsf {Obf},\mathsf {X},\mathcal{A}}(\cdot )\) in guessing the challenge bit b in the following game. We run \(\mathsf {X}\) to get \((\mathbf {k},a)\). We let \(\varvec{\overline{\mathrm {P}}}\) be the vector obtained by independently obfuscating \(\mathbf {I}_{k}\) for each of the targets \(k\) from \(\mathbf {k}\) (\(b=1\)) or by obfuscating the same number of random, independent targets (\(b=0\)). The input to \(\mathcal{A}\) is \(\varvec{\overline{\mathrm {P}}}\) and \(a\). Now we let \(\mathbf {X}\) be a class (set) of target generators \(\mathsf {X}\) and say that obfuscator \(\mathsf {Obf}\) is \(\mathrm {IND}[\mathbf {X}]\)-secure if \(\mathsf {Adv}^{\mathsf {ind}}_{\mathsf {Obf},\mathsf {X},\mathcal{A}}(\cdot )\) is negligible for all polynomial time \(\mathcal{A}\) and all \(\mathsf {X}\in \mathbf {X}\). See Sect. 3 for a formal definition.
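The game just described can be sketched in Python as follows. Here `obf` and `sample_X` are placeholder stand-ins for \(\mathsf {Obf}\) and \(\mathsf {X}\) (our names, not the paper's) that serve only to show the structure of the IND game.

```python
import secrets
from typing import Callable, List, Tuple

Circuit = Callable[[bytes], int]

def obf(k: bytes) -> Circuit:
    # Placeholder obfuscator; a real one would hide k. Used only for game structure.
    return lambda kp: 1 if kp == k else 0

def sample_X(tl: int = 16, n: int = 2) -> Tuple[List[bytes], bytes]:
    # Placeholder target generator X: n independent targets, empty auxiliary info.
    return [secrets.token_bytes(tl) for _ in range(n)], b""

def ind_game(adversary, b: int) -> int:
    """One run of the IND game with challenge bit b."""
    ks, a = sample_X()
    if b == 0:  # replace each target by a fresh random one of the same length
        ks = [secrets.token_bytes(len(k)) for k in ks]
    P = [obf(k) for k in ks]   # independently obfuscate each target
    return adversary(P, a)     # adversary outputs its guess b'
```

The advantage of an adversary is then the usual gap between its probability of outputting 1 when `b = 1` and when `b = 0`.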

What we have here is a notion of point-function obfuscation parameterized by a class of target generators. We view the latter as knobs. By turning these knobs (defining specific classes) we can capture specific restrictions, and by intersecting classes we can combine them, allowing us to speak precisely yet concisely about different variant notions that are unified in this way.

\(\mathrm {IND}[\mathbf {X}]\)-security is not achievable for all \(\mathbf {X}\). For example, \(\mathsf {X}\) could pick \(\mathbf {k}[1]\) to be the string of all zeroes, and the adversary could test whether or not \(\overline{\mathrm {P}}\) returns 1 on input that string. The minimal requirement for security is that the target points produced by \(\mathsf {X}\) are unpredictable given \(a\). In Sect. 3 we formalize a prediction game and advantage so that we can define the classes \(\mathbf {X}^{\text {cup}},\mathbf {X}^{\text {seup}}\) and \(\mathbf {X}^{\text {sup}}\) of computationally, sub-exponentially and statistically unpredictable target generators. We let \(\mathbf {X}^{q(\cdot )}\) denote the class of target generators outputting \(q(\cdot )\) target points and \(\mathbf {X}^{\varepsilon }\) the class of target generators that produce no auxiliary information. (Formally it is the empty string.)
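The necessity of unpredictability can be seen concretely: against a generator that always outputs the all-zeroes target, the obvious adversary wins. A self-contained sketch follows, where `obf` is a trivial stand-in obfuscator (any functionally correct obfuscator behaves the same on these queries).

```python
import secrets

def obf(k: bytes):
    # Any functionally correct obfuscator must answer 1 on the target itself.
    return lambda kp: 1 if kp == k else 0

ZERO = b"\x00" * 16  # a predictable target: the all-zeroes string

def adversary(P):
    """Distinguisher for a generator X that always outputs ZERO: query P at the
    known target. With b=1 this returns 1; with b=0 (a fresh random target) it
    returns 1 only with probability 2^-128."""
    return P[0](ZERO)

# Real world: the target is ZERO, so the adversary outputs 1.
assert adversary([obf(ZERO)]) == 1
# Random world: a fresh random target, so the adversary outputs 0 (w.o.p.).
assert adversary([obf(secrets.token_bytes(16))]) == 0
```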

Already we can characterize prior work in a precise way. \(\mathrm {IND}[\mathbf {X}^{\text {cup}}\cap \mathbf {X}^{\varepsilon }\cap \mathbf {X}^{1}]\) is plain point-function obfuscation [23, 27, 39, 47], where there is just one target point, no auxiliary information, and unpredictability is computational. \(\mathrm {IND}[\mathbf {X}^{\text {cup}}\cap \mathbf {X}^{1}]\) is AIPO [14, 20], where there is again one target point, but auxiliary information is now present, while unpredictability continues to be computational. \(\mathrm {IND}[\mathbf {X}^{\text {cup}}]\) is composable AIPO [25], where there are many arbitrarily correlated target points, auxiliary information is present, and unpredictability is computational. DKL [29] achieve \(\mathrm {IND}[\mathbf {X}^{\text {sup}}\cap \mathbf {X}^{1}]\), where there is a single target that is statistically hard to predict given the auxiliary information. BHK [6] achieve \(\mathrm {IND}[\mathbf {X}^{\text {sup}}\cap \mathbf {X}^{\varepsilon }]\), where there are multiple targets, unpredictability is statistical, and there is no auxiliary information. Other prior notions can be captured in similar ways, and many natural new notions emerge as well.

1.4 Generic Constructions

As we saw above, constructions so far have been ad hoc, targeting different security goals and using strong, novel assumptions to achieve them. The above framework allows us to frame a compelling question, namely whether there are generic constructions. By this we mean that we are handed an arbitrary class \(\mathbf {X}\) of target generators and asked to craft an obfuscator that is \(\mathrm {IND}[\mathbf {X}]\)-secure. If we can do this, we can, in one unified swoop, obtain constructions for a wide variety of forms of PO, not only ones considered in the past, but also new ones.

In this paper we provide three such generic constructions. The first is based on indistinguishability obfuscation, the second on deterministic public-key encryption and the third on (multi-key) UCE.

One natural objection at this point is that we know that \(\mathrm {IND}[\mathbf {X}]\) is not achievable for some choices of \(\mathbf {X}\). For example, assuming iO, this is true for \(\mathbf {X}= \mathbf {X}^{\text {cup}}\), meaning composable PO. (This follows by combining [20, 24].) So how can our constructions achieve \(\mathrm {IND}[\mathbf {X}]\) for any given \(\mathbf {X}\)? In fact, they do, and this, interestingly, yields new negative results, ruling out the primitives we start from for those particular values of \(\mathbf {X}\). We will explain further below.

PO from iO. The emergence of candidate constructions for iO (indistinguishability obfuscation) [12, 33, 34, 43] raised a natural hope, namely that one could obtain PO from iO. But this has not happened. Despite the many powerful applications of iO, constructing point-function obfuscation from it has, surprisingly, remained elusive.

We show that iO plus an OWF yields PO. More precisely, we show \(\mathrm {iO}+\mathrm {OWF}[\mathbf {X}] \Rightarrow \mathrm {IND}[\mathbf {X}]\): Given iO and a family of functions that is one-way relative to \(\mathbf {X}\) as defined in Sect. 5.1 we can construct an obfuscator that is \(\mathrm {IND}[\mathbf {X}]\)-secure. The construction, result and proof are in Sect. 5.1. The idea is that to obfuscate \(\mathbf {I}_{k}\) we pick at random a key \(fk\) for the OWF \(\mathsf {F}\) (formally, the latter is a family of functions) and let \(y \leftarrow \mathsf {F.Ev}(1^\lambda ,fk,k)\). We consider the circuit \(\mathrm {C}\) that hardwires \(fk,y\) and on input \(k'\) returns 1 if \(\mathsf {F.Ev}(1^\lambda ,fk,k')=y\) and 0 otherwise. We then apply an indistinguishability obfuscator to \(\mathrm {C}\) to produce the obfuscated point function. The security proof is a sequence of hybrids. Although we assume only iO, we exploit diO [1, 2, 16] in the proof in a manner similar to [9]. We will need it for circuits that differ only on one input, and in this case the result of BCP [16] says that an iO-secure obfuscator is also diO-secure, so the assumption remains iO. As part of the proof we state and prove a lemma reducing (d)iO on polynomially-many, related circuits to the usual single-circuit case. We note that to guarantee the usual (perfect) correctness condition of a PO, we require the OWF to be injective.
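A sketch of this construction, with keyed SHA-256 as an assumed stand-in for the one-way family \(\mathsf {F}\) (it is not provably injective) and the identity map as a placeholder iO; a real indistinguishability obfuscator would re-randomize the circuit description.

```python
import hashlib
import secrets
from typing import Callable

def F_kg() -> bytes:
    # Stand-in key generation for the OWF family F (assumption: keyed SHA-256
    # models a one-way family for this sketch).
    return secrets.token_bytes(16)

def F_ev(fk: bytes, x: bytes) -> bytes:
    return hashlib.sha256(fk + x).digest()

def iO(circuit: Callable) -> Callable:
    # Placeholder indistinguishability obfuscator: the identity map on circuits.
    return circuit

def po_from_io_owf(k: bytes) -> Callable[[bytes], int]:
    fk = F_kg()                                   # fresh OWF key
    y = F_ev(fk, k)                               # image of the target
    C = lambda kp: 1 if F_ev(fk, kp) == y else 0  # circuit hardwiring (fk, y)
    return iO(C)                                  # obfuscate C with the iO
```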

We highlight the simplest case of this result as still being novel and of interest. Namely, given iO and an ordinary injective OWF, we achieve plain point-function obfuscation, \(\mathrm {IND}[\mathbf {X}^{\text {cup}}\cap \mathbf {X}^{\varepsilon }\cap \mathbf {X}^{1}]\) in our notation. Previous constructions have been under assumptions that at this point seem less accepted than iO, and Wee [47] gives various arguments as to why this goal is hard under standard assumptions. Also on the negative side, combining our result with [20, 24] allows us, under iO, to rule out \(\mathrm {OWF}[\mathbf {X}^{\text {cup}}]\) (one-way functions secure for polynomially-many, computationally unpredictable correlated inputs), at least in the injective case.

PO from DPKE. Deterministic public key encryption (DPKE) [3] was motivated by applications to efficient searchable encryption [3]. It cannot provide IND-CPA security. Instead, BBO [3] provide a definition of a goal called PRIV which captures the best-possible security that encryption can provide subject to being deterministic. At this point many constructions of DPKE are known for various variant goals [35, 15, 17, 32, 38, 42, 45, 48, 50].

We show how to leverage these for point-function obfuscation via our second generic construction. We show that \(\mathrm {{PRIV1}}[\mathbf {X}] \Rightarrow \mathrm {IND}[\mathbf {X}]\). That is, given a deterministic public-key encryption scheme that is PRIV1 secure relative to \(\mathbf {X}\) we can build a point-function obfuscator secure relative to the same class in a simple and natural way. Namely, to obfuscate \(\mathbf {I}_{k}\) we pick at random a public key \(pk\) and the associated secret key \(sk\) for the DPKE scheme and let c be the encryption of k under \(pk\). The point-function obfuscation is the circuit \(\mathrm {C}\) that hardwires \(pk\) and c, and on input \(k'\), returns 1 if the encryption of \(k'\) under \(pk\) equals c, and 0 otherwise. The fact that the encryption is deterministic is used crucially to define the circuit. (The latter must be deterministic.) The secret key is discarded and not used in the construction. We note that we only require security of the DPKE scheme for a single message (PRIV1) so the negative result of Wichs [49] does not apply. The construction, result and proof are in Sect. 5.2.
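A sketch of the DPKE-based construction. Since the secret key is discarded and decryption is never run, we model only the deterministic public encryption map, here an assumed keyed-SHA-256 stand-in (our naming; not a real DPKE scheme).

```python
import hashlib
import secrets
from typing import Callable, Tuple

def dpke_kg() -> Tuple[bytes, bytes]:
    # Stand-in DPKE key generation; the secret key is never used by the
    # construction, so we model only the public encryption map.
    pk = secrets.token_bytes(16)
    sk = b""  # discarded below
    return pk, sk

def dpke_enc(pk: bytes, m: bytes) -> bytes:
    # Deterministic encryption stand-in (assumption: keyed SHA-256 models a
    # PRIV1-secure deterministic map on unpredictable messages).
    return hashlib.sha256(pk + m).digest()

def po_from_dpke(k: bytes) -> Callable[[bytes], int]:
    pk, _sk = dpke_kg()      # the secret key is discarded
    c = dpke_enc(pk, k)      # ciphertext of the target
    # The circuit hardwires (pk, c); determinism of encryption lets it
    # re-encrypt the candidate input and compare against c.
    return lambda kp: 1 if dpke_enc(pk, kp) == c else 0
```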

From the LTDF-based DPKE scheme of BFO [15] and LTDFs from [31, 37, 44, 48, 51] we now get \(\mathrm {IND}[\mathbf {X}^{\text {sup}}\cap \mathbf {X}^{\varepsilon }\cap \mathbf {X}^{1}]\)-secure obfuscators under a large number of standard assumptions. We also get \(\mathrm {IND}[\mathbf {X}^{\text {seup}}\cap \mathbf {X}^{1}]\)-secure obfuscators under the DLIN, Subgroup Indistinguishability and LWE assumptions via [17, 48, 50]. On the negative side we can rule out \(\mathrm {{PRIV1}}[\mathbf {X}^{\text {cup}}]\)-secure DPKE under iO via [20, 24].

PO from UCE. UCE [6] is a class of assumptions on function families crafted to allow instantiation of random oracles in certain settings. UCE security is parameterized so that we have \(\mathrm {UCE}[\mathbf {S}]\) security of a family of functions for different choices of classes \(\mathbf {S}\) of algorithms called sources. The parameterization is necessary because security is not achievable for the class of all sources. Different applications rely on UCE relative to different classes of sources [5, 6, 18, 21, 28, 41].

In this work we use the multi-key version of UCE, abbreviated mUCE [6]. We show how to associate to any given class \(\mathbf {X}\) of target generators a class \(\mathbf {S}^{\mathbf {X}}\) of sources such that \(\mathrm {mUCE}[\mathbf {S}^{\mathbf {X}}] \Rightarrow \mathrm {IND}[\mathbf {X}]\), meaning we can build a point-function obfuscator secure for \(\mathbf {X}\) given a family of functions that is \(\mathrm {mUCE}[\mathbf {S}^{\mathbf {X}}]\)-secure. The definition of \(\mathbf {S}^{\mathbf {X}}\) is given in Sect. 5.3. But what is most relevant here is that the strength of UCE-framework assumptions is very sensitive to the choice of class of sources that parameterizes the particular assumption, and \(\mathbf {S}^{\mathbf {X}}\) has good properties in this regard. The sources are what are called “split” in [6], and they inherit the unpredictability attributes of the target generators. \(\mathrm {mUCE}[\mathbf {S}^{\mathbf {X}}]\)-security is not achievable for all choices of \(\mathbf {X}\) but the assumption is valid as far as we know for many choices of \(\mathbf {X}\), yielding new constructions.

1.5 Alternative Notions and Relations Between Notions

Above, we fixed one, basic definitional template, which we called \(\mathrm {IND}\), and then parameterized it by classes \(\mathbf {X}\) of target generators to get notions \(\mathrm {IND}[\mathbf {X}]\). However, there are other possible choices for the basic template, some emanating from the literature, and others from the definitional similarity of PO with DPKE. We consider parameterized versions of some of these and relate them to each other and to \(\mathrm {IND}\). Specifically we define and consider the following (see Sect. 6 for formal definitions):

  • \(\mathrm {SIM}[\mathbf {X}]\): (Simulation) The first definitions for PO simply restricted VBB security [2] to the class of point functions [25, 35, 39, 47]. With \(\mathrm {SIM}[\mathbf {X}]\) we give an \(\mathbf {X}\)-parameterized version of this.

  • \(\mathrm {SIND}[\mathbf {X}]\): (Strong Indistinguishability) Recall that in \(\mathrm {IND}[\mathbf {X}]\), the adversary decision bit is produced as a function of the vector \(\varvec{\overline{\mathrm {P}}}\) of obfuscated point functions and the auxiliary information \(a\). In \(\mathrm {SIND}[\mathbf {X}]\), this bit is not the final decision, but is passed to another adversary who produces the final decision based on it and the target vector itself. This is a parameterized version of the definition of [23].

  • \(\mathrm {CSS}[\mathbf {X}]\): (Comparison-based semantic security) This is an analogue, for PO, of comparison-based semantic security for boolean functions for DPKE [4], in which the adversary needs to compute some predicate on the target vector and auxiliary information.

  • \(\mathrm {SSS}[\mathbf {X}]\): (Simulation-based semantic security) This is an analogue, for PO, of simulation-based semantic security for boolean functions for DPKE [4], in which a simulator with an oracle for the point functions must compute a predicate on the target vectors and auxiliary information.

Figure 8 shows the relations between five parameterized notions of PO, namely the four above and our original \(\mathrm {IND}[\mathbf {X}]\).

1.6 Discussion and Further Related Work

In concurrent and independent work, BM3 [22] take first steps towards a parameterized definition for point-function obfuscation, with separate definitions for the basic and composable cases. They also show that injective mUCE-secure function families for strongly unpredictable sources making one oracle query per key imply composable AIPO (both for computational and statistical unpredictability), which is a special case of our mUCE result.

Multi-bit auxiliary-input point-function obfuscation (MB-AIPO) [11, 25, 40] allows one to obfuscate the circuit \(\mathbf {I}_{k,m}\) that on input \(k'\) returns m if \(k=k'\) and \(\bot \) otherwise, where \(k,m\) are strings. CD [25] show that composable AIPO implies MB-AIPO. MB-AIPO was subsequently used in BP [14] and MH [40]. BM1 [20] show that if iO is possible then MB-AIPO is not. MB-AIPO seems to be quite a bit stronger than AIPO itself and in particular this result does not rule out AIPO.

In Sect. 5.1 we define \(\mathrm {OWF}[\mathbf {X}]\), one-wayness of a function family relative to a class of target generators, the targets here being the inputs to the OWF. We note that \(\mathrm {OWF}[\mathbf {X}^{\text {sup}}\cap \mathbf {X}^{\varepsilon }]\) (inputs are statistically unpredictable and there is no auxiliary information) is the notion of a one-way correlation intractable hash (CIH) function family as per GOR [36].

Our parameterized \(\mathrm {{PRIV1}}[\mathbf {X}]\) notions of security for DPKE schemes apply equally to function families and thus recover, via particular choices of \(\mathbf {X}\), some of the security notions for CIH function families from GOR [36]. In these cases, since our DPKE-based constructions of PO do not require that decryption in the DPKE scheme is polynomial-time, CIH function families meeting the corresponding notions suffice as well.

Seeing that prior work can be characterized in terms of intersections of certain basic classes in our framework makes apparent that so far the literature has considered only a few points from the larger space of all possible intersections. A systematic consideration of the full space (which is lacking) would surface other notions of interest and give a coherent picture of the area.

2 Notation and Standard Definitions

Notation. We denote by \(\lambda \in {{\mathbb N}}\) the security parameter and by \(1^\lambda \) its unary representation. We let \(\varepsilon \) denote the empty string. If s is an integer then \(\mathsf {Pad}_s(\mathrm {C})\) denotes circuit \(\mathrm {C}\) padded to have size s. We say that circuits \(\mathrm {C}_0,\mathrm {C}_1\) are equivalent, written \(\mathrm {C}_0 \equiv \mathrm {C}_1\), if they agree on all inputs. If \(\mathbf {x}\) is a vector then \(|\mathbf {x}|\) denotes the number of its coordinates and \(\mathbf {x}[i]\) denotes its i-th coordinate. We write \(x\in \mathbf {x}\) as shorthand for \(x\in \{\mathbf {x}[1],\ldots ,\mathbf {x}[|\mathbf {x}|]\}\). If \({X}\) is a finite set, we let \(x \mathop {\leftarrow }\limits ^{\scriptscriptstyle \$}\,{X}\) denote picking an element of \({X}\) uniformly at random and assigning it to x. Algorithms may be randomized unless otherwise indicated. Running time is worst case. “PT” stands for “polynomial-time,” whether for randomized algorithms or deterministic ones. If A is an algorithm, we let \(y \leftarrow A(x_1,\ldots ;r)\) denote running A with random coins r on inputs \(x_1,\ldots \) and assigning the output to y. We let \(y \mathop {\leftarrow }\limits ^{\scriptscriptstyle \$}\, A(x_1,\ldots )\) be the result of picking r at random and letting \(y \leftarrow A(x_1,\ldots ;r)\). We let \([A(x_1,\ldots )]\) denote the set of all possible outputs of A when invoked with inputs \(x_1,\ldots \). We say that \(f{:\;\;}{{\mathbb N}}\rightarrow {{\mathbb R}}\) is negligible if for every positive polynomial p, there exists \(\lambda _p \in {{\mathbb N}}\) such that \(f(\lambda ) < 1/p(\lambda )\) for all \(\lambda > \lambda _p\). We use the code-based game-playing framework of [7]. (See Fig. 3 for an example.) 
By \(\mathrm {G}^\mathcal{A}(\lambda )\) we denote the event that the execution of game \(\mathrm {G}\) with adversary \(\mathcal{A}\) and security parameter \(\lambda \) results in the game returning \(\mathsf {true}\).

Obfuscators. An obfuscator is a PT algorithm \(\mathsf {Obf}\) that on input \(1^\lambda \) and a circuit \(\mathrm {C}\) returns a circuit \(\overline{\mathrm {C}}\). If \(\varvec{\mathrm {C}}\) is an n-vector of circuits then \(\mathsf {Obf}(1^\lambda ,\varvec{\mathrm {C}})\) denotes the vector \((\mathsf {Obf}(1^\lambda ,\varvec{\mathrm {C}}[1]),\ldots ,\) \(\mathsf {Obf}(1^\lambda ,\varvec{\mathrm {C}}[n]))\) formed by applying \(\mathsf {Obf}\) independently to each coordinate of \(\varvec{\mathrm {C}}\). The correctness condition of obfuscator \(\mathsf {Obf}\) requires that for every circuit \(\mathrm {C}\), every \(\lambda \in {{\mathbb N}}\) and every \(\overline{\mathrm {C}}\in [\mathsf {Obf}(1^\lambda , \mathrm {C})]\) we have \(\overline{\mathrm {C}}\equiv \mathrm {C}\) (meaning \(\overline{\mathrm {C}}(x) = \mathrm {C}(x)\) for all x). We also call the latter a perfect correctness condition and we require that it holds for all obfuscators. We consider various notions of security for obfuscators, namely indistinguishability obfuscation and variants of point-function obfuscation, including AIPO.

Indistinguishability Obfuscation. Although our results need only iO, we use diO [1, 2, 16] in the proof, applying BCP [16] to then reduce the assumption to iO. To give the definitions compactly, we use the definitional framework of BST [9] which allows us to capture iO variants (including diO) via classes of circuit samplers. Let \(\mathsf {Obf}\) be an obfuscator. A sampler in this context is a PT algorithm \(\mathsf {S}\) that on input \(1^\lambda \) returns a triple \((\mathrm {C}_0,\mathrm {C}_1, aux )\) where \(\mathrm {C}_0,\mathrm {C}_1\) are circuits of the same size, number of inputs and number of outputs, and \( aux \) is a string. If \(\mathcal{O}\) is an adversary and \(\lambda \in {{\mathbb N}}\) we let \(\mathsf {Adv}^{\mathsf {io}}_{\mathsf {Obf},\mathsf {S},\mathcal{O}}(\lambda )=2\Pr [ \mathrm {IO}_{\mathsf {Obf},\mathsf {S}}^{\mathcal{O}}(\lambda )]-1\) where game \(\mathrm {IO}_{\mathsf {Obf},\mathsf {S}}^{\mathcal{O}}(\lambda )\) is defined in Fig. 1. Now let \(\mathbf {S}\) be a class (set) of circuit samplers. We say that \(\mathsf {Obf}\) is \(\mathbf {S}\)-secure if \(\mathsf {Adv}^{\mathsf {io}}_{\mathsf {Obf},\mathsf {S},\mathcal{O}}(\cdot )\) is negligible for every PT adversary \(\mathcal{O}\) and every circuit sampler \(\mathsf {S}\in \mathbf {S}\). We say that circuit sampler \(\mathsf {S}\) produces equivalent circuits if there exists a negligible function \(\nu \) such that \(\Pr [\mathrm {C}_0 \not \equiv \mathrm {C}_1 \,:\, (\mathrm {C}_0,\mathrm {C}_1, aux ) \mathop {\leftarrow }\limits ^{\scriptscriptstyle \$}\, \mathsf {S}(1^\lambda )] \le \nu (\lambda )\) for all \(\lambda \in {{\mathbb N}}\). Let \(\mathbf {S}^{\mathrm {eq}}\) be the class of all circuit samplers that produce equivalent circuits. We say that \(\mathsf {Obf}\) is an indistinguishability obfuscator if it is \(\mathbf {S}^{\mathrm {eq}}\)-secure [2, 33, 46].

Fig. 1. Games defining difference-security of circuit sampler \(\mathsf {S}\) and iO-security of obfuscator \(\mathsf {Obf}\) relative to circuit sampler \(\mathsf {S}\).

We say that a circuit sampler \(\mathsf {S}\) is difference secure if \(\mathsf {Adv}^{\mathsf {diff}}_{\mathsf {S},\mathcal{D}}(\cdot )\) is negligible for every PT adversary \(\mathcal{D}\), where \(\mathsf {Adv}^{\mathsf {diff}}_{\mathsf {S},\mathcal{D}}(\lambda )=\Pr [ \mathrm {DIFF}_{\mathsf {S}}^{\mathcal{D}}(\lambda )]\) and game \(\mathrm {DIFF}_{\mathsf {S}}^{\mathcal{D}}( \lambda )\) is defined in Fig. 1. Difference security of \(\mathsf {S}\) means that given \(\mathrm {C}_0,\mathrm {C}_1, aux \) it is hard to find an input on which the circuits differ [1, 2, 16]. Let \(\mathbf {S}^{\mathrm {diff}}\) be the class of all difference-secure circuit samplers. We say that circuit sampler \(\mathsf {S}\) produces d-differing circuits, where \(d{:\;\;}{{\mathbb N}}\rightarrow {{\mathbb N}}\), if for all \(\lambda \in {{\mathbb N}}\) circuits \(\mathrm {C}_0\) and \(\mathrm {C}_1\) differ on at most \(d(\lambda )\) inputs with an overwhelming probability over \((\mathrm {C}_0,\mathrm {C}_1, aux ) \mathop {\leftarrow }\limits ^{\scriptscriptstyle \$}\, \mathsf {S}(1^\lambda )\). Let \(\mathbf {S}^{\mathrm {diff}}_{d}\) be the class of all difference-secure circuit samplers that produce d-differing circuits, so that \(\mathbf {S}^{\mathrm {diff}}_{d} \subseteq \mathbf {S}^{\mathrm {diff}}\). The interest of this definition is the following result of BCP [16] that we use:

Proposition 1

If d is a polynomial then any \(\mathbf {S}^{\mathrm {eq}}\)-secure circuit obfuscator is also an \(\mathbf {S}^{\mathrm {diff}}_{d}\)-secure circuit obfuscator.

Function Families. A family of functions \(\mathsf {F}\) specifies the following. PT key generation algorithm \(\mathsf {F.Kg}\) takes \(1^\lambda \) to return a key \(fk \in \{0,1\}^{\mathsf {F.kl}(\lambda )}\), where \(\mathsf {F.kl}{:\;\;}{{\mathbb N}}\rightarrow {{\mathbb N}}\) is the key length function associated to \(\mathsf {F}\). Deterministic, PT evaluation algorithm \(\mathsf {F.Ev}\) takes \(1^\lambda \), key \(fk\) and an input \(x\in \{0,1\}^{\mathsf {F.il}(\lambda )}\) to return an output \(y \in \{0,1\}^{\mathsf {F.ol}(\lambda )}\), where \(\mathsf {F.il},\mathsf {F.ol}{:\;\;}{{\mathbb N}}\rightarrow {{\mathbb N}}\) are the input and output length functions associated to \(\mathsf {F}\), respectively. We say that \(\mathsf {F}\) is injective if the function \(\mathsf {F.Ev}(1^\lambda ,fk,\cdot )\) is injective for every \(\lambda \in {{\mathbb N}}\) and every \(fk \in [\mathsf {F.Kg}(1^\lambda )]\). Notions of security for function families that we use are \(\mathrm {mUCE}\) and \(\mathrm {OWF}\), the latter defined in Sect. 5.1.

UCE Framework. We recall the Universal Computational Extractor (UCE) framework of BHK [6]. We will use what BHK call the multi-key version of UCE (\(\mathrm {mUCE}\)). It is an extension of the more commonly used UCE notion for a single key, meaning that it implies the latter. Meanwhile, no implications in the other direction (from single-key to multi-key) are known.

Let \(\mathsf {H}\) be a family of functions. Let \(\mathcal{S}\) be an adversary called the source and \(\mathcal{D}\) an adversary called the distinguisher. Consider game \(\mathrm {mUCE}_{\mathsf {H}}^{\mathcal{S},\mathcal{D}}(\lambda )\) in the left panel of Fig. 2. Associated to \(\mathcal{S}\) is a polynomial \(\mathcal{S}.\mathsf {nk}\) that indicates how many keys \(\mathcal{S}\) uses. The source has access to an oracle \(\textsc {HASH}\). A query to \(\textsc {HASH}\) consists of an index i of a key and the actual input \(x\), which is a string required to have length \(\mathsf {H.il}(\lambda )\). When the challenge bit b is 1 (the “real” case) the oracle responds via \(\mathsf {H.Ev}\) under a key \(\mathbf {hk}[i]\) that is chosen by the game and not given to the source. When \(b=0\) (the “random” case) it responds as a random oracle. The source then leaks a string L to its accomplice distinguisher. The latter does get the key vector \(\mathbf {hk}\) as input and must now return its guess \(b'\in \{0,1\}\) for b. The game returns \(\mathsf {true}\) iff \(b'=b\). The advantage of \((\mathcal{S},\mathcal{D})\) against the \(\mathrm {mUCE}\) security of \(\mathsf {H}\) is defined for \(\lambda \in {{\mathbb N}}\) via \(\mathsf {Adv}^{\mathsf {m\text {-}uce}}_{\mathsf {H},\mathcal{S}, \mathcal{D}}(\lambda ) = 2\Pr [\mathrm {mUCE}_{\mathsf {H}}^{\mathcal{S}, \mathcal{D}}(\lambda )]-1\). If \(\mathbf {S}\) is a class (set) of sources, we say that \(\mathsf {H}\) is \(\mathrm {mUCE}[\mathbf {S}]\)-secure if \(\mathsf {Adv}^{\mathsf {m\text {-}uce}}_{\mathsf {H},\mathcal{S}, \mathcal{D}}(\cdot )\) is negligible for all sources \(\mathcal{S}\in \mathbf {S}\) and all PT distinguishers \(\mathcal{D}\).
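The game can be sketched as follows, with keyed SHA-256 standing in for \(\mathsf {H.Ev}\) (an assumption for illustration) and the random case implemented by lazy sampling; `mUCE_game` is our name for the sketch.

```python
import hashlib
import secrets

def mUCE_game(source, distinguisher, nk: int, b: int) -> bool:
    """One run of the mUCE game with challenge bit b."""
    hk = [secrets.token_bytes(16) for _ in range(nk)]  # keys, hidden from source
    rand_table = {}                                    # lazily sampled random oracle

    def HASH(i: int, x: bytes) -> bytes:
        if b == 1:  # "real" case: evaluate H under the hidden key hk[i]
            return hashlib.sha256(hk[i] + x).digest()
        if (i, x) not in rand_table:  # "random" case: fresh value per new query
            rand_table[(i, x)] = secrets.token_bytes(32)
        return rand_table[(i, x)]

    L = source(HASH)            # the source sees only the oracle and leaks L
    bp = distinguisher(L, hk)   # the distinguisher sees the leakage and the keys
    return bp == b              # game returns true iff the guess is correct
```

For instance, a source that simply leaks one of its oracle answers lets the distinguisher win in both worlds, illustrating why unrestricted sources make the notion unachievable.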

Fig. 2. Games defining \(\mathrm {mUCE}\) security of function family \(\mathsf {H}\) and unpredictability of source \(\mathcal{S}\).

It is easy to see that \(\mathrm {mUCE}[\mathbf {S}]\)-security is not achievable if \(\mathbf {S}\) is the class of all PT sources [6]. To obtain meaningful notions of security, BHK [6] impose restrictions on the source. A central restriction is unpredictability. A source is unpredictable if it is hard to guess the source’s \(\textsc {HASH}\) queries even given the leakage, in the random case of the \(\mathrm {mUCE}\) game. Formally, let \(\mathcal{S}\) be a source and \(\mathcal{P}\) an adversary called a predictor and consider game \(\mathrm {mSPRED}_{\mathcal{S}}^{\mathcal{P}}(\lambda )\) in Fig. 2. For \(\lambda \in {{\mathbb N}}\) we let \(\mathsf {Adv}^{\mathsf {m\text {-}spred}}_{\mathcal{S},\mathcal{P}}(\lambda ) = \Pr [\mathrm {mSPRED}_{\mathcal{S}}^{\mathcal{P}}(\lambda )]\). We say that \(\mathcal{S}\) is computationally unpredictable if \(\mathsf {Adv}^{\mathsf {m\text {-}spred}}_{\mathcal{S},\mathcal{P}}(\cdot )\) is negligible for all PT predictors \(\mathcal{P}\), and let \(\mathbf {S}^{\mathrm {cup}}\) be the class of all PT computationally unpredictable sources. We say that \(\mathcal{S}\) is statistically unpredictable if \(\mathsf {Adv}^{\mathsf {m\text {-}spred}}_{\mathcal{S},\mathcal{P}}(\cdot )\) is negligible for all (not necessarily PT) predictors \(\mathcal{P}\), and let \(\mathbf {S}^{\mathrm {sup}}\subseteq \mathbf {S}^{\mathrm {cup}}\) be the class of all PT statistically unpredictable sources. We say that \(\mathcal{S}\) is sub-exponentially unpredictable if there is an \(\epsilon >0\) such that for any PT predictor \(\mathcal{P}\) there is a \(\lambda _{\mathcal{P}}\) such that \(\mathsf {Adv}^{\mathsf {m\text {-}spred}}_{\mathcal{S},\mathcal{P}}(\lambda ) \le 2^{-\lambda ^{\epsilon }}\) for all \(\lambda \ge \lambda _{\mathcal{P}}\), and let \(\mathbf {S}^{\mathrm {seup}}\subseteq \mathbf {S}^{\mathrm {cup}}\) be the class of all PT sub-exponentially unpredictable sources.
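The unpredictability game can be sketched similarly: run the source in the random case, record its queries, and ask the predictor to output one of them given only the leakage. A minimal sketch under our naming (`mSPRED_game`):

```python
import secrets

def mSPRED_game(source, predictor) -> bool:
    """Sketch of the unpredictability game: the predictor wins if, given only
    the source's leakage, it outputs one of the source's oracle queries."""
    queries = set()
    rand_table = {}

    def HASH(i: int, x: bytes) -> bytes:
        queries.add((i, x))  # record the query for the winning condition
        if (i, x) not in rand_table:  # random-oracle answers, lazily sampled
            rand_table[(i, x)] = secrets.token_bytes(32)
        return rand_table[(i, x)]

    L = source(HASH)       # source runs in the random case and leaks L
    guess = predictor(L)   # predictor outputs a candidate (key index, input)
    return guess in queries
```

A source is unpredictable when every (PT or unbounded, depending on the class) predictor wins this game with only negligible probability.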

BFM [18] show that UCE-framework security notions (both single-key and multi-key) are not achievable for \(\mathbf {S}^{\mathrm {cup}}\) assuming that indistinguishability obfuscation exists. This has led applications to impose further restrictions on the source, by using either \(\mathbf {S}^{\mathrm {sup}}\) or subsets of \(\mathbf {S}^{\mathrm {cup}}\). Assumptions based on \(\mathbf {S}^{\mathrm {sup}}\), introduced in [6, 18], at this point seem viable. To restrict the computational case, one can consider split sources as defined in BHK [6]. Such sources can leak information about oracle queries and answers separately, but not together. We let \(\mathbf {S}^{\mathrm {splt}}\) denote the class of split sources. Another way to restrict a source is to limit the number of queries it can make. Let \(\mathbf {S}^{n,q}\) be the class of sources \(\mathcal{S}\) such that \(\mathcal{S}.\mathsf {nk}(\cdot ) \le n(\cdot )\) and \(\mathcal{S}\) makes at most \(q(\cdot )\) queries to each key. In particular, \(\mathbf {S}^{1,1}\) is the class of sources that use only one key and make only one query to it.

3 Point-Function Obfuscation Framework

The literature considers many different variants of point function obfuscation. Here we provide a definitional framework that unifies these concepts and allows us to obtain not just known but also new variants of point function obfuscation as special cases. The framework parameterizes the security of a point-obfuscator by a class of algorithms we call target generators. Different notions of point obfuscation then correspond to different choices of this class. We start by defining target generators.

Target Generators. A target generator \(\mathsf {X}\) specifies a PT algorithm \(\mathsf {\mathsf {X}.Ev}\) that takes \(1^\lambda \) to return a target vector \(\mathbf {k}\) and auxiliary information \(a\in \{0,1\}^*\). The entries of \(\mathbf {k}\) are the targets, each of length \(\mathsf {\mathsf {X}.tl}(\lambda )\), and the vector itself has length \(\mathsf {\mathsf {X}.vl}(\lambda )\), where \(\mathsf {\mathsf {X}.tl},\mathsf {\mathsf {X}.vl}:{{\mathbb N}}\rightarrow {{\mathbb N}}\) are the target length and target-vector length functions associated to \(\mathsf {X}\), respectively.

Fig. 3.
figure 3

Games defining \(\mathrm {IND}\) security of point-function obfuscator \(\mathsf {Obf}\) relative to target generator \(\mathsf {X}\), unpredictability of target generator \(\mathsf {X}\) and triviality of target generator \(\mathsf {X}\).

Point-Function Obfuscation. If \(k\) is a bit-string then \(\mathbf {I}_{k}{:\;\;}\{0,1\}^{|k|}\rightarrow \{0,1\}\) denotes a canonical representation of the circuit that on input \(k'\in \{0,1\}^{|k|}\) returns 1 if \(k=k'\) and 0 otherwise. It is assumed that given \(\mathbf {I}_{k}\), one can compute \(k\) in time linear in \(|k|\). A circuit \(\mathrm {C}\) is called a point circuit if there is a \(k\), called the circuit target, such that \(\mathrm {C}\equiv \mathbf {I}_{k}\). If \(\mathbf {k}\) is an n-vector of strings then we let \(\mathbf {I}_{\mathbf {k}} = (\mathbf {I}_{\mathbf {k}[1]},\ldots ,\mathbf {I}_{\mathbf {k}[n]})\).
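As a toy illustration of \(\mathbf {I}_{k}\) (a Python closure standing in for the canonical circuit; the names are ours):

```python
def point_function(k: bytes):
    """Toy stand-in for the canonical point circuit I_k on {0,1}^{|k|}:
    returns 1 iff the input equals the target k."""
    def I(kp: bytes) -> int:
        assert len(kp) == len(k)   # inputs have length |k|
        return 1 if kp == k else 0
    return I

I = point_function(b"\x01\x02")
assert I(b"\x01\x02") == 1 and I(b"\xff\x02") == 0
```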

Let \(\mathsf {Obf}\) be an obfuscator, as defined in Sect. 2. Its correctness condition guarantees that on input \(1^\lambda ,\mathbf {I}_{k}\), it returns a point circuit with target \(k\), which is the condition for calling it a point-function obfuscator. We say that \(\mathsf {Obf}\) has target length \(\mathsf {\mathsf {Obf}.tl}{:\;\;}{{\mathbb N}}\rightarrow {{\mathbb N}}\) if its correctness condition is only required on inputs \(\mathbf {I}_{k}\) with \(k\in \{0,1\}^{\mathsf {\mathsf {Obf}.tl}(\lambda )}\).

Security of Point-Function Obfuscation. We now define security of a point-function obfuscator relative to a class of target generators. We will then consider various choices of these classes.

Consider game \(\mathrm {IND}\) of Fig. 3 associated to a point-function obfuscator \(\mathsf {Obf}\), a target generator \(\mathsf {X}\) and an adversary \(\mathcal{A}\), such that \(\mathsf {\mathsf {Obf}.tl}= \mathsf {\mathsf {X}.tl}\). For \(\lambda \in {{\mathbb N}}\) let \(\mathsf {Adv}^{\mathsf {ind}}_{\mathsf {Obf},\mathsf {X},\mathcal{A}}(\lambda )=2\Pr [\mathrm {IND}_{\mathsf {Obf},\mathsf {X}}^{\mathcal{A}}(\lambda )]-1\). The game generates a target vector \(\mathbf {k}_1\) and corresponding auxiliary information \(a_1\) via \(\mathsf {X}\). It also samples a target vector \(\mathbf {k}_0\) uniformly at random, containing \(\mathsf {\mathsf {X}.vl}(\lambda )\) elements each of length \(\mathsf {\mathsf {X}.tl}(\lambda )\). It then obfuscates the targets in the challenge vector \(\mathbf {k}_b\) independently via \(\mathsf {Obf}\) to produce \(\varvec{\overline{\mathrm {P}}}\), which, as per our notation, is the vector \((\mathsf {Obf}(1^\lambda ,\mathbf {I}_{\mathbf {k}_b[1]}),\ldots , \mathsf {Obf}(1^\lambda ,\mathbf {I}_{\mathbf {k}_b[\mathsf {\mathsf {X}.vl}(\lambda )]}))\). Given \(\varvec{\overline{\mathrm {P}}}\) and \(a_1\), adversary \(\mathcal{A}\) outputs a bit \(b'\), and wins the game if this equals b, meaning it guesses whether the target vector that was obfuscated was the one corresponding to auxiliary information \(a_1\) or one independent of it.
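To make the game mechanics concrete, here is a toy Monte Carlo harness (our own sketch: Python closures and dicts stand in for circuits, and all names are ours). It instantiates the game with an "obfuscator" that leaks its target, which an adversary holding even one leaked byte of auxiliary information distinguishes with advantage close to 1.

```python
import random

def ind_advantage(obf, gen, adversary, lam=128, trials=200, seed=1):
    """Empirical IND advantage: b random; (k1, a1) from the generator;
    k0 uniform of the same shape; the entries of k_b are obfuscated
    independently and handed to the adversary together with a1."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        b = rng.randrange(2)
        k1, a1 = gen(lam, rng)
        k0 = [bytes(rng.randrange(256) for _ in range(len(k))) for k in k1]
        kb = k1 if b == 1 else k0
        P = [obf(lam, k, rng) for k in kb]
        wins += (adversary(lam, P, a1) == b)
    return 2 * wins / trials - 1

def gen(lam, rng):
    k = bytes(rng.randrange(256) for _ in range(lam // 8))
    return [k], k[:1]            # auxiliary information leaks one byte of k

def leaky_obf(lam, k, rng):
    return {"target": k}         # "obfuscation" that leaks k outright

def adversary(lam, P, a1):
    # Guess b = 1 iff the leaked target is consistent with the aux byte.
    return int(P[0]["target"][:1] == a1)

adv = ind_advantage(leaky_obf, gen, adversary)  # close to 1
```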

Let \(\mathbf {X}\) be a class (set) of target generators. We say that \(\mathsf {Obf}\) is \(\mathrm {IND}[\mathbf {X}]\)-secure if \(\mathsf {Adv}^{\mathsf {ind}}_{\mathsf {Obf},\mathsf {X},\mathcal{A}}(\cdot )\) is negligible for every PT \(\mathcal{A}\) and every \(\mathsf {X}\in \mathbf {X}\). We now capture different notions in the literature, as well as new ones, by considering particular classes \(\mathbf {X}\). At the end of this section we will present what we call the triviality theorem, showing how the definition is vacuous for some classes, and discuss its implications. We will further discuss alternative security definitions for point-function obfuscation in Sect. 6.

Classes of Target Generators. One important (and necessary) condition on a target generator is unpredictability. To define this, consider game \(\mathrm {PRED}\) of Fig. 3 associated to \(\mathsf {X}\) and a predictor adversary \(\mathcal{Q}\). For \(\lambda \in {{\mathbb N}}\) let \( \mathsf {Adv}^{\mathsf {pred}}_{\mathsf {X},\mathcal{Q}}(\lambda )=\Pr [\mathrm {PRED}_{\mathsf {X}}^{\mathcal{Q}}(\lambda )]\). The game generates a target vector \(\mathbf {k}\) and associated auxiliary information \(a\). The adversary \(\mathcal{Q}\) gets \(a\) and wins if it can predict any entry of the vector \(\mathbf {k}\).

The first dimension along which point-function obfuscators are classified is the type of unpredictability, encompassing two sub-dimensions: the success probability of predictors (which may be required to be negligible or sub-exponentially small) and their computational power (PT and computationally unbounded are the popular choices, but one could also consider sub-exponential time). Some relevant classes are the following:

  • \(\mathbf {X}^{\text {cup}}\) — Class of computationally unpredictable target generators — \(\mathsf {X}\in \mathbf {X}^{\text {cup}}\) if \(\mathsf {Adv}^{\mathsf {pred}}_{\mathsf {X},\mathcal{Q}}(\cdot )\) is negligible for all PT predictor adversaries \(\mathcal{Q}\).

  • \(\mathbf {X}^{\text {seup}}\) — Class of sub-exponentially unpredictable target generators — \(\mathsf {X}\in \mathbf {X}^{\text {seup}}\) if there exists \(0 < \epsilon < 1\) such that for every PT predictor adversary \(\mathcal{Q}\) there is a \(\lambda _{\mathcal{Q}}\) such that \(\mathsf {Adv}^{\mathsf {pred}}_{\mathsf {X},\mathcal{Q}}(\lambda ) \le 2^{-\lambda ^{\epsilon }}\) for all \(\lambda \ge \lambda _{\mathcal{Q}}\).

  • \(\mathbf {X}^{\text {sup}}\) — Class of statistically unpredictable target generators — \(\mathsf {X}\in \mathbf {X}^{\text {sup}}\) if \(\mathsf {Adv}^{\mathsf {pred}}_{\mathsf {X},\mathcal{Q}}(\cdot )\) is negligible for all (even computationally unbounded) predictor adversaries \(\mathcal{Q}\).
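As a conjectural separating example of our own (not from the text): a generator whose auxiliary information is a hash of the target is plausibly in \(\mathbf {X}^{\text {cup}}\) yet not in \(\mathbf {X}^{\text {sup}}\), since an unbounded predictor can recover the target from the hash by exhaustive search.

```python
import hashlib, os

def x_ev(lam: int):
    """Toy target generator X.Ev(1^lam): one uniform target of lam/8 bytes,
    with auxiliary information a = SHA-256(k). Plausibly computationally
    unpredictable; NOT statistically unpredictable, since an unbounded
    predictor can invert the hash by brute force."""
    k = os.urandom(lam // 8)
    a = hashlib.sha256(k).digest()
    return [k], a               # (target vector of length 1, aux info)

ks, a = x_ev(128)
```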

Another dimension is the number of target points in the target vector, to capture which, for any polynomial \(q{:\;\;}{{\mathbb N}}\rightarrow {{\mathbb N}}\), we let

  • \(\mathbf {X}^{q(\cdot )}\) — Class of generators producing \(q(\cdot )\) target points — \(\mathsf {X}\in \mathbf {X}^{q(\cdot )}\) if \(\mathsf {\mathsf {X}.vl}= q\). An important special case is \(q(\cdot )=1\).

Another important dimension is auxiliary information, which may be present or absent (the latter formally meaning that it is the empty string), to capture which we let

  • \(\mathbf {X}^{\varepsilon }\) — Class of generators with no auxiliary information — \(\mathsf {X}\in \mathbf {X}^{\varepsilon }\) if \(a=\varepsilon \) for all \((\mathbf {k},a)\in [\mathsf {\mathsf {X}.Ev}(1^\lambda )]\) and all \(\lambda \in {{\mathbb N}}\).

We can recover notions from the literature as follows:

  • \(\mathrm {IND}[\mathbf {X}^{\text {cup}}\cap \mathbf {X}^{\varepsilon }\cap \mathbf {X}^{1}]\) — This is basic point-function obfuscation, secure for a single computationally unpredictable target point, and no auxiliary information is allowed. It is achieved in [23, 27, 39, 47].

  • \(\mathrm {IND}[\mathbf {X}^{\text {cup}}\cap \mathbf {X}^{1}]\) — This is AIPO [14, 35], secure for a single computationally unpredictable target point in the presence of auxiliary information. It is achieved under the AI-DHI assumption by Canetti [23], and by BP [14] using an extension of the construction of Wee [47].

  • \(\mathrm {IND}[\mathbf {X}^{\text {cup}}]\) — This is composable AIPO [25], meaning that it is secure for arbitrarily many correlated target points that are computationally unpredictable in the presence of auxiliary information. BM1 [20] showed that this notion cannot co-exist with iO in the presence of OWFs.

  • \(\mathrm {IND}[\mathbf {X}^{\text {sup}}\cap \mathbf {X}^{\varepsilon }]\) — This is composable point-function obfuscation, secure for arbitrarily many correlated target points that are statistically unpredictable, and no auxiliary information is allowed. It is achieved from \(\mathrm {mUCE}[\mathbf {S}^{\mathrm {sup}}]\) in BHK [6].

Furthermore, DKL [29] achieve \(\mathrm {IND}[\mathbf {X}^{\text {sup}}\cap \mathbf {X}^{1}]\) from the LSN (i.e. auxiliary-input LPN) assumption, and BM3 [22] build \(\mathrm {IND}[\mathbf {X}^{\text {sup}}]\) from \(\mathrm {mUCE}[\mathbf {S}^{\mathrm {s\text{- }sup}}\cap \mathbf {X}^{1}]\). Here \(\mathbf {S}^{\mathrm {s\text{- }sup}}\) denotes a subclass of \(\mathbf {S}^{\mathrm {sup}}\cap \mathbf {S}^{\mathrm {splt}}\) consisting of sources with “strong statistical unpredictability”, as defined in BM2 [21]. We note that some of the above results achieve notions stronger than IND. Such notions are discussed and defined in Sect. 6.

Triviality Theorem. The \(\mathrm {IND}[\mathbf {X}]\) definition has the peculiar property of trivializing for some choices of \(\mathbf {X}\). For example, let \(\mathsf {X}\) be a target generator that returns a vector of random, independent targets and auxiliary information \(a= \varepsilon \), the empty string. Then any point-function obfuscator \(\mathsf {Obf}\) is \(\mathrm {IND}[\{\mathsf {X}\}]\)-secure. This is true because game \(\mathrm {IND}\) in this case samples \(\mathbf {k}_0, \mathbf {k}_1\) from the same distribution, so the information provided to the adversary \(\mathcal{A}\) is independent of the challenge bit. Before discussing and assessing what this means for the definition, we provide a general triviality theorem that characterizes for which choices of \(\mathbf {X}\) this phenomenon happens.

Consider game \(\mathrm {TRIV}\) of Fig. 3 associated to a target generator \(\mathsf {X}\) and an adversary \(\mathcal{A}\). For \(\lambda \in {{\mathbb N}}\) let \(\mathsf {Adv}^{\mathsf {triv}}_{\mathsf {X},\mathcal{A}}(\lambda )=2\Pr [\mathrm {TRIV}_{\mathsf {X}}^{\mathcal{A}}(\lambda )]-1\). We say that \(\mathsf {X}\) is trivial if \(\mathsf {Adv}^{\mathsf {triv}}_{\mathsf {X},\mathcal{A}}(\cdot )\) is negligible for every PT \(\mathcal{A}\). An example of trivial \(\mathsf {X}\) is the one given above. Let \(\mathbf {X}^{\mathrm {triv}}\) be the class of all trivial target generators, and say that a class \(\mathbf {X}\) is trivial if \(\mathbf {X}\subseteq \mathbf {X}^{\mathrm {triv}}\). The proof of the following triviality theorem follows directly from the definitions of games \(\mathrm {IND}\) and \(\mathrm {TRIV}\) and is omitted.

Theorem 2

Let \(\mathbf {X}\subseteq \mathbf {X}^{\mathrm {triv}}\) be a class of target generators. Let \(\mathsf {Obf}\) be any point-function obfuscator. Then \(\mathsf {Obf}\) is \(\mathrm {IND}[\mathbf {X}]\)-secure.

This can be viewed as a defect of the \(\mathrm {IND}\) definition, but whether or not this is true is debatable. The \(\mathrm {IND}\) definition has been successfully employed in applications [14, 21]. In these cases, \(\mathbf {X}= \mathbf {X}^{\text {cup}}\cap \mathbf {X}^{1}\), a class to which Theorem 2 does not apply. This indicates that the classes of target generators arising in applications are naturally not trivial. And the constructions we give in Sect. 5 cover such non-trivial classes. Thus we are on the whole unsure whether or not Theorem 2 should be viewed as a definitional weakness. In Sect. 6 we will provide alternative security definitions for PO that avoid this type of triviality theorem and are meaningful for all choices of target generators. But if an application can be obtained via \(\mathrm {IND}\), then it seems preferable, since this definition is simpler and easier to use and, from Sect. 5, we have more constructions for it.

4 (d)iO for Multi-circuit Samplers

Fig. 4.
figure 4

Games defining difference-security of multi-circuit sampler \(\mathsf {S}\) and iO-security of obfuscator \(\mathsf {Obf}\) relative to multi-circuit sampler \(\mathsf {S}\).

We state and prove a lemma we will use that may be of independent interest. We extend the standard definition of circuit samplers from Sect. 2 to get multi-circuit samplers, which are samplers that may produce a vector of circuit pairs (but still only a single auxiliary information string). We also extend the security definition of differing-inputs obfuscation to work with respect to multi-circuit samplers. We then use a hybrid argument to show that the security of the latter is implied by the standard definition of differing-inputs obfuscation for circuit samplers that produce only a single pair of circuits. This result will be used for our iO-based construction of a point-function obfuscator, BCP [16] being applied to move from diO to iO. (We stress that diO is used as a tool but not as an assumption in our results.)

iO for Multi-circuit Samplers. A multi-circuit sampler is a PT algorithm \(\mathsf {S}\) with an associated circuit-vector length function \(\mathsf {\mathsf {S}.vl}:{{\mathbb N}}\rightarrow {{\mathbb N}}\). Algorithm \(\mathsf {S}\) on input \(1^\lambda \) returns a triple \((\varvec{\mathrm {C}}_0,\varvec{\mathrm {C}}_1, aux )\) where \( aux \) is a string and \(\varvec{\mathrm {C}}_0,\varvec{\mathrm {C}}_1\) are circuit vectors of length \(\mathsf {\mathsf {S}.vl}(\lambda )\), such that circuits \(\varvec{\mathrm {C}}_0[i]\) and \(\varvec{\mathrm {C}}_1[i]\) are of the same size, number of inputs and number of outputs for every \(i\in \{1,\ldots ,\mathsf {\mathsf {S}.vl}(\lambda )\}\).

Consider game \(\mathrm {MIO}\) of Fig. 4 associated to an obfuscator \(\mathsf {Obf}\), a multi-circuit sampler \(\mathsf {S}\) and an adversary \(\mathcal{O}\). For \(\lambda \in {{\mathbb N}}\) let \(\mathsf {Adv}^{\mathsf {m\text {-}io}}_{\mathsf {Obf},\mathsf {S},\mathcal{O}}(\lambda )=2\Pr [\mathrm {MIO}_{\mathsf {Obf},\mathsf {S}}^{\mathcal{O}}(\lambda )]-1\). Let \(\mathbf {S}\) be a class of multi-circuit samplers. We say that \(\mathsf {Obf}\) is \(\mathrm {MIO}[\mathbf {S}]\)-secure if \(\mathsf {Adv}^{\mathsf {m\text {-}io}}_{\mathsf {Obf},\mathsf {S},\mathcal{O}}(\cdot )\) is negligible for every multi-circuit sampler \(\mathsf {S}\in \mathbf {S}\) and every PT adversary \(\mathcal{O}\).

Consider game \(\mathrm {MDIFF}\) of Fig. 4 associated to a multi-circuit sampler \(\mathsf {S}\) and an adversary \(\mathcal{D}\). For \(\lambda \in {{\mathbb N}}\) let \(\mathsf {Adv}^{\mathsf {m\text {-}diff}}_{\mathsf {S},\mathcal{D}}(\lambda )=\Pr [\mathrm {MDIFF}_{\mathsf {S}}^{\mathcal{D}}(\lambda )]\). We say that a multi-circuit sampler \(\mathsf {S}\) is difference secure if \(\mathsf {Adv}^{\mathsf {m\text {-}diff}}_{\mathsf {S},\mathcal{D}}(\cdot )\) is negligible for every PT adversary \(\mathcal{D}\). Let \(d{:\;\;}{{\mathbb N}}\rightarrow {{\mathbb N}}\). We say that a multi-circuit sampler \(\mathsf {S}\) produces d-differing circuits if circuits \(\varvec{\mathrm {C}}_0[i]\) and \(\varvec{\mathrm {C}}_1[i]\) differ on at most \(d(\lambda )\) inputs with overwhelming probability over \((\varvec{\mathrm {C}}_0, \varvec{\mathrm {C}}_1, aux )\in [\mathsf {S}(1^\lambda )]\), for all \(\lambda \in {{\mathbb N}}\) and all \(i\in \{1,\ldots ,\mathsf {\mathsf {S}.vl}(\lambda )\}\). Of particular interest below is the class of difference-secure multi-circuit samplers that produce d-differing circuits. The proof of the following lemma is provided in [8].

Lemma 3

Let \(d:{{\mathbb N}}\rightarrow {{\mathbb N}}\). Let \(\mathsf {Obf}\) be an obfuscator that is secure with respect to difference-secure samplers producing a single pair of d-differing circuits. Then \(\mathsf {Obf}\) is also secure with respect to all difference-secure multi-circuit samplers that produce d-differing circuits.
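For intuition on d-differing circuits, here is a toy check of our own: a point circuit and the all-rejecting circuit differ on exactly one input, i.e., they are 1-differing. This is exactly the situation that arises in the proof of Theorem 4, where the sampler's circuits are shown to be 1-differing.

```python
def differing_inputs(C0, C1, nbits: int) -> int:
    """Count the inputs in {0,...,2^nbits - 1} on which C0 and C1 differ
    (exhaustive enumeration, so feasible only for toy input lengths)."""
    return sum(C0(x) != C1(x) for x in range(2 ** nbits))

target = 42
point_circuit = lambda x: int(x == target)   # I_target
reject_all    = lambda x: 0                  # constant-0 circuit

d = differing_inputs(point_circuit, reject_all, 8)  # d == 1
```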

5 Generic Constructions of PO

Prior constructions have targeted \(\mathrm {IND}[\mathbf {X}]\) for specific choices of \(\mathbf {X}\) in ad hoc ways and used non-standard assumptions. In this section we provide constructions that are generic. This means they take an arbitrary, given class \(\mathbf {X}\) of target generators and return a point-function obfuscator that is \(\mathrm {IND}[\mathbf {X}]\)-secure.

5.1 PO from iO

OWFs. Consider game \(\mathrm {OWF}\) of Fig. 5 associated to a function family \(\mathsf {F}\), a target generator \(\mathsf {X}\) with \(\mathsf {\mathsf {X}.tl}= \mathsf {F.il}\), and an adversary \(\mathcal{F}\). For \(\lambda \in {{\mathbb N}}\) let \(\mathsf {Adv}^{\mathsf {owf}}_{\mathsf {F}, \mathsf {X},\mathcal{F}}(\lambda ) = \Pr [\mathrm {OWF}_{\mathsf {F}, \mathsf {X}}^{\mathcal{F}}(\lambda )]\). Let \(\mathbf {X}\) be a class of target generators with target length \(\mathsf {F.il}\). Let \(\mathsf {X}^{\mathsf {1ur}}\) be the target generator with \(\mathsf {X}^{\mathsf {1ur}}.\mathsf {vl}(\cdot ) = 1\) and \(\mathsf {X}^{\mathsf {1ur}}.\mathsf {tl} = \mathsf {F.il}\), where the target is sampled from a uniform distribution and the auxiliary information is always empty, meaning \(a= \varepsilon \). We say that \(\mathsf {F}\) is \(\mathrm {OWF}[\mathbf {X}]\)-secure if \(\mathsf {Adv}^{\mathsf {owf}}_{\mathsf {F}, \mathsf {X},\mathcal{F}}(\cdot )\) is negligible for all PT adversaries \(\mathcal{F}\) and all \(\mathsf {X}\in \mathbf {X}\cup \{\mathsf {X}^{\mathsf {1ur}}\}\). Relevant classes \(\mathbf {X}\) are the same as for PO. The standard notion of an OWF is recovered as \(\mathbf {X}=\emptyset \), meaning that \(\mathsf {F}\) is secure only with respect to \(\mathsf {X}^{\mathsf {1ur}}\).

Fig. 5.
figure 5

Games defining one-wayness of function family \(\mathsf {F}\) relative to target generator \(\mathsf {X}\) and PRIV1-security of deterministic public-key encryption scheme \(\mathsf {DPKE}\) relative to target generator \(\mathsf {X}\).

The definition of CD [24] is the special case of ours with vectors of length one. That of FOR [32], like ours, considers evaluations of the function on multiple inputs, but in their case the key for the evaluations is the same and there is no auxiliary input, while in our case the key is independently chosen for each evaluation and auxiliary inputs may be present. We stress that we require only one-wayness; we do not require extractability. The latter is a much stronger assumption [13].

We now show that indistinguishability obfuscation can be used to build an \(\mathrm {IND}[\mathbf {X}]\)-secure point-function obfuscator for an arbitrary target-generator class \(\mathbf {X}\) from any \(\mathrm {OWF}[\mathbf {X}]\)-secure function family.

Construction. Let \(\mathsf {F}\) be a family of functions. Let \(\mathsf {Obf}_{\mathsf {io}}\) be an obfuscator. We construct a point-function obfuscator \(\mathsf {Obf}\) with \(\mathsf {\mathsf {Obf}.tl}=\mathsf {F.il}\) as follows:

figure a

Theorem 4

Let \(\mathsf {F}\) be an injective family of functions. Let \(\mathbf {X}\) be a class of target generators with target length \(\mathsf {F.il}\). Assume that \(\mathsf {F}\) is \(\mathrm {OWF}[\mathbf {X}]\)-secure. Let \(\mathsf {Obf}_{\mathsf {io}}\) be an indistinguishability obfuscator. Then \(\mathsf {Obf}\) constructed above from \(\mathsf {F}\) and \(\mathsf {Obf}_{\mathsf {io}}\) is an \(\mathrm {IND}[\mathbf {X}]\)-secure point-function obfuscator.

Fig. 6.
figure 6

Games for proof of Theorem 4.

Proof

(Theorem 4). The injectivity of \(\mathsf {F}\) implies that \(\mathsf {Obf}\) satisfies the correctness condition of a point-function obfuscator. We now prove security.

Let \(\mathsf {X}\in \mathbf {X}\) be a target generator. Let \(\mathcal{A}\) be a PT adversary. Consider the games and the associated circuits of Fig. 6, where \(s\) is defined as follows. For any \(\lambda \) let \(s(\lambda )\) be a polynomial upper bound on the size of the circuits of Fig. 6, where the maximum is taken over all keys of \(\mathsf {F}\) and all \(y \in \{0,1\}^{\mathsf {F.ol}(\lambda )}\). Lines not annotated with comments are common to all games.

Game \(\mathrm {G}_0\) is equivalent to \(\mathrm {IND}_{\mathsf {Obf},\mathsf {X}}^{\mathcal{A}}(\lambda )\). The inputs to adversary \(\mathcal{A}\) in game \(\mathrm {G}_1\) do not depend on the challenge bit b, so we have \(\Pr [\mathrm {G}_1] = 1/2\). It follows that

figure b

The first equality holds by the definition of \(\mathrm {IND}\), and the second holds because \(\Pr [\mathrm {G}_1] = 1/2\). We now show that \(\Pr [\mathrm {G}_0] - \Pr [\mathrm {G}_1]\) is negligible, meaning that \(\mathsf {Adv}^{\mathsf {ind}}_{\mathsf {Obf},\mathsf {X},\mathcal{A}}(\cdot )\) is also negligible. This proves the theorem.

We construct a multi-circuit sampler \(\mathsf {S}\) and a PT iO-adversary \(\mathcal{O}\) as follows:

figure c

We have \(\Pr [\mathrm {G}_0] - \Pr [\mathrm {G}_1] = \mathsf {Adv}^{\mathsf {m\text {-}io}}_{\mathsf {Obf}_{\mathsf {io}},\mathsf {S},\mathcal{O}}(\lambda )\) by construction. Next, we show that \(\mathsf {S}\) is a difference-secure multi-circuit sampler that produces 1-differing circuits. According to Proposition 1 (the result of BCP [16]), any indistinguishability obfuscator is also a secure obfuscator with respect to difference-secure samplers producing a single pair of circuits that differ on at most polynomially many inputs. And according to Lemma 3, any such obfuscator is also secure with respect to difference-secure multi-circuit samplers producing d-differing circuits. It follows that \(\mathsf {Adv}^{\mathsf {m\text {-}io}}_{\mathsf {Obf}_{\mathsf {io}},\mathsf {S},\mathcal{O}}(\cdot )\) is negligible by the iO-security of \(\mathsf {Obf}_{\mathsf {io}}\).

Let \(\mathsf {X}^{\mathsf {ur}}\) be the target generator with \(\mathsf {X}^{\mathsf {ur}}.\mathsf {vl} = \mathsf {\mathsf {X}.vl}\) and \(\mathsf {X}^{\mathsf {ur}}.\mathsf {tl} = \mathsf {F.il}\), where the targets are sampled independently and uniformly at random, and the auxiliary information is always \(a= \varepsilon \). Given any PT difference adversary \(\mathcal{D}\) against multi-circuit sampler \(\mathsf {S}\), we build PT adversaries \(\mathcal{F}_0\) and \(\mathcal{F}_1\) against the \(\mathrm {OWF}\)-security of \(\mathsf {F}\) relative to target generators \(\mathsf {X}^{\mathsf {ur}}\) and \(\mathsf {X}\), respectively. The constructions are as follows:

figure d

Let d denote the value sampled by multi-circuit sampler \(\mathsf {S}\) in game \(\mathrm {MDIFF}_{\mathsf {S}}^{\mathcal{D}}(\lambda )\). Then we have

$$\begin{aligned}&{\Pr }[\,\mathrm {MDIFF}_{\mathsf {S}}^{\mathcal{D}}(\lambda )\,\left| \right. \,d=0\,] = \Pr [\mathrm {OWF}_{\mathsf {F}, \mathsf {X}^{\mathsf {ur}}}^{\mathcal{F}_0}(\lambda )], \\&{\Pr }[\,\mathrm {MDIFF}_{\mathsf {S}}^{\mathcal{D}}(\lambda )\,\left| \right. \,d=1\,] = \Pr [\mathrm {OWF}_{\mathsf {F}, \mathsf {X}}^{\mathcal{F}_1}(\lambda )]. \end{aligned}$$

and \(\mathsf {Adv}^{\mathsf {m\text {-}diff}}_{\mathsf {S},\mathcal{D}}(\lambda ) = \frac{1}{2} (\mathsf {Adv}^{\mathsf {owf}}_{\mathsf {F},\mathsf {X}^{\mathsf {ur}},\mathcal{F}_0}(\lambda ) + \mathsf {Adv}^{\mathsf {owf}}_{\mathsf {F},\mathsf {X},\mathcal{F}_1}(\lambda ))\). Note that \(\mathrm {OWF}[\mathbf {X}]\)-security of \(\mathsf {F}\) requires that \(\mathsf {Adv}^{\mathsf {owf}}_{\mathsf {F},\mathsf {X}^{\mathsf {1ur}},\mathcal{F}}(\cdot )\) is negligible for all PT adversaries \(\mathcal{F}\). One can use the latter with a standard hybrid argument to further prove that \(\mathsf {Adv}^{\mathsf {owf}}_{\mathsf {F},\mathsf {X}^{\mathsf {ur}},\mathcal{F}_0}(\cdot )\) is also negligible for all PT adversaries \(\mathcal{F}_0\). It follows that the multi-circuit sampler \(\mathsf {S}\) is difference-secure. The injectivity of \(\mathsf {F}\) also implies that \(\mathsf {S}\) produces 1-differing circuits. Therefore, \(\mathsf {S}\) is a difference-secure multi-circuit sampler producing 1-differing circuits, which completes the proof.

5.2 PO from DPKE

Our next generic construction is based on deterministic public-key encryption [3]. As before we aim to provide point-function obfuscation secure for any given class of target generators. We are able to do this assuming the existence of a deterministic public-key encryption scheme that is secure relative to the same class viewed as a class of message generators. We can then exploit known constructions of deterministic public-key encryption to get a slew of point-function obfuscators based on standard assumptions. We begin with a parameterized definition of security for deterministic public-key encryption.

DPKE. A deterministic public-key encryption scheme \(\mathsf {DPKE}\) [3] specifies the following. PT key generation algorithm \(\mathsf {DPKE.Kg}\) takes \(1^\lambda \) to return a public encryption key \( ek \) and a secret decryption key \( dk \). Deterministic PT encryption algorithm \(\mathsf {DPKE.Enc}\) takes \(1^\lambda \), \( ek \) and a plaintext message \(k\in \{0,1\}^{\mathsf {DPKE.ml}(\lambda )}\) to return a ciphertext c, where \(\mathsf {DPKE.ml}{:\;\;}{{\mathbb N}}\rightarrow {{\mathbb N}}\) is the message length function associated to \(\mathsf {DPKE}\). Deterministic decryption algorithm \(\mathsf {DPKE.Dec}\) takes \(1^\lambda \), \( dk \), c to return plaintext message \(k\). We do not require the decryption algorithm to be PT but we do require decryption correctness, namely that for all \(\lambda \in {{\mathbb N}}\), all \(( ek , dk )\in [\mathsf {DPKE.Kg}(1^\lambda )]\) and all \(k\in \{0,1\}^{\mathsf {DPKE.ml}(\lambda )}\) we have \(\mathsf {DPKE.Dec}(1^\lambda , dk , \mathsf {DPKE.Enc}(1^\lambda , ek , k)) = k\).

Now consider game \(\mathrm {{PRIV1}}\) of Fig. 5 associated to a deterministic public-key encryption scheme \(\mathsf {DPKE}\), a target generator \(\mathsf {X}\) satisfying \(\mathsf {\mathsf {X}.tl}= \mathsf {DPKE.ml}\), and an adversary \(\mathcal{A}\). For \(\lambda \in {{\mathbb N}}\) let \(\mathsf {Adv}^{\mathsf {priv1}}_{\mathsf {DPKE},\mathsf {X},\mathcal{A}}(\lambda ) = 2\Pr [\mathrm {{PRIV1}}_{\mathsf {DPKE},\mathsf {X}}^{\mathcal{A}}(\lambda )] - 1\). If \(\mathbf {X}\) is a class of target generators then we say that \(\mathsf {DPKE}\) is \(\mathrm {{PRIV1}}[\mathbf {X}]\)-secure if \(\mathsf {Adv}^{\mathsf {priv1}}_{\mathsf {DPKE},\mathsf {X},\mathcal{A}}(\cdot )\) is negligible for all PT adversaries \(\mathcal{A}\) and all \(\mathsf {X}\in \mathbf {X}\).

This definition reflects what BBO [3] call the multi-user setting, where there are many independent public keys. However, in our case, only a single message is encrypted under each key. The single-key version of this is called PRIV1 in the literature, so we retain the name in moving to the multi-user setting. The definition is in the indistinguishability style of [4, 15] rather than the semantic-security style of [3]. These definitions, however, did not allow auxiliary inputs; we allow them, following BS [17]. Finally, while prior definitions require unpredictability of the message distribution, ours is simply parameterized by the latter. Prior definitions are captured as special cases, meaning they can be recovered as \(\mathrm {{PRIV1}}[\mathbf {X}]\) for some choice of \(\mathbf {X}\).

Construction. Let \(\mathsf {DPKE}\) be a deterministic public-key encryption scheme. We construct an obfuscator \(\mathsf {Obf}\) with \(\mathsf {\mathsf {Obf}.tl}= \mathsf {DPKE.ml}\) as follows:

figure e

The construction is simple. To obfuscate \(\mathbf {I}_k\) we pick a new key pair for the deterministic public-key encryption scheme and return a circuit that embeds the public key as well as the encryption c of the target point k. The circuit, given a candidate target point \(k'\), re-encrypts it under the embedded public key and checks that the ciphertext so obtained matches the embedded ciphertext c. Note that the determinism of \(\mathsf {DPKE.Enc}\) is used crucially to ensure that the circuit is deterministic. For randomized encryption, one cannot check that a message corresponds to a ciphertext by re-encryption. The secret key is discarded and not used in the construction, but its existence will guarantee correctness of the point-function obfuscator.
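The re-encryption check can be sketched as follows. We stress that this sketch of ours uses textbook (unpadded) RSA purely as a stand-in deterministic scheme: it is deterministic and decryption-correct, but not PRIV1-secure, so it only illustrates the mechanics of the construction, not a secure instantiation; all parameters are toy values.

```python
def rsa_kg():
    # Toy parameters; real instantiations need large random primes.
    p, q, e = 10007, 10009, 65537
    n = p * q
    d = pow(e, -1, (p - 1) * (q - 1))   # modular inverse (Python 3.8+)
    return (n, e), (n, d)               # (ek, dk)

def rsa_enc(ek, m: int) -> int:         # deterministic encryption
    n, e = ek
    return pow(m, e, n)

def obf(k: int):
    """PO from DPKE (sketch): embed ek and c = Enc(ek, k); discard dk."""
    ek, _dk = rsa_kg()                  # dk only guarantees correctness
    c = rsa_enc(ek, k)
    def P(kp: int) -> int:              # re-encrypt and compare
        return int(rsa_enc(ek, kp) == c)
    return P

P = obf(123456)
assert P(123456) == 1 and P(654321) == 0
```

Determinism of encryption is what makes the comparison inside \(P\) well defined: with randomized encryption, re-encrypting \(k'\) would almost never reproduce the embedded ciphertext.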

Result. We show that this is a generic construction. Namely, a point-function obfuscator for a given class \(\mathbf {X}\) of target generators can be obtained if we have a deterministic public-key encryption scheme secure for the same class.

Theorem 5

Let \(\mathsf {DPKE}\) be a deterministic public-key encryption scheme and \(\mathbf {X}\) a class of target generators such that \(\mathsf {\mathsf {X}.tl}= \mathsf {DPKE.ml}\) for all \(\mathsf {X}\in \mathbf {X}\). Assume \(\mathsf {DPKE}\) is \(\mathrm {{PRIV1}}[\mathbf {X}]\)-secure. Let \(\mathsf {Obf}\) be as defined above. Then \(\mathsf {Obf}\) is an \(\mathrm {IND}[\mathbf {X}]\)-secure point-function obfuscator.

Proof

(Theorem 5). The correctness of \(\mathsf {Obf}\) follows from the decryption correctness of \(\mathsf {DPKE}\), and it does not require the decryption algorithm \(\mathsf {DPKE.Dec}\) to be PT. We now prove that \(\mathsf {Obf}\) is \(\mathrm {IND}[\mathbf {X}]\)-secure.

Let \(\mathsf {X}\in \mathbf {X}\) be a target generator with \(\mathsf {\mathsf {X}.tl}= \mathsf {DPKE.ml}\). Let \(\mathcal{A}\) be a PT adversary against the \(\mathrm {IND}\) security of \(\mathsf {Obf}\) relative to \(\mathsf {X}\). We construct a PT adversary \(\mathcal{B}\) against the \(\mathrm {{PRIV1}}\) security of \(\mathsf {DPKE}\) relative to \(\mathsf {X}\) as follows:

figure f

We have \(\mathsf {Adv}^{\mathsf {priv1}}_{\mathsf {DPKE}, \mathsf {X},\mathcal{B}}(\lambda )=\mathsf {Adv}^{\mathsf {ind}}_{\mathsf {Obf},\mathsf {X},\mathcal{A}}(\lambda )\) by construction. Hence, for any \(\mathsf {X}\in \mathbf {X}\) the \(\mathrm {IND}\)-security of \(\mathsf {Obf}\) relative to \(\mathsf {X}\) follows from the assumed \(\mathrm {{PRIV1}}\)-security of \(\mathsf {DPKE}\) relative to \(\mathsf {X}\).

In applying Theorem 5 to get point function obfuscators, the first case of interest is \(\mathbf {X}= \mathbf {X}^{\text {sup}}\cap \mathbf {X}^{\varepsilon }\cap \mathbf {X}^{1}\). In this case, \(\mathrm {{PRIV1}}[\mathbf {X}]\)-secure deterministic public-key encryption is a standard form of the latter for which many constructions are known. The central construction, due to BFO [15], is from lossy trapdoor functions (LTDFs). But the latter can be built from a wide variety of standard assumptions [31, 37, 44, 48, 51]. Thus we get \(\mathrm {IND}[\mathbf {X}^{\text {sup}}\cap \mathbf {X}^{\varepsilon }\cap \mathbf {X}^{1}]\)-secure point-function obfuscators under the same assumptions. The second case of interest is \(\mathbf {X}=\mathbf {X}^{\text {seup}}\cap \mathbf {X}^{1}\). Unlike in the first case, there is now auxiliary information, but it leaves the targets sub-exponentially unpredictable. Constructions of \(\mathrm {{PRIV1}}[\mathbf {X}]\)-secure deterministic public-key encryption are known under standard assumptions including DLIN, Subgroup Indistinguishability and LWE [17, 48, 50]. Accordingly we get \(\mathrm {IND}[\mathbf {X}^{\text {seup}}\cap \mathbf {X}^{1}]\)-secure point-function obfuscators under the same assumptions. BH [5] obtain PRIV-secure DPKE from \(\mathrm {UCE}[\mathbf {S}^{\mathrm {sup}}]\), which via Theorem 5 yields \(\mathrm {IND}[\mathbf {X}^{\text {sup}}\cap \mathbf {X}^{\varepsilon }\cap \mathbf {X}^{1}]\) under \(\mathrm {UCE}[\mathbf {S}^{\mathrm {sup}}]\).

Theorem 5 also yields negative results. Assume iO exists. Then there do not exist point-function obfuscators that are \(\mathrm {IND}[\mathbf {X}^{\text {cup}}]\)-secure [20]. Theorem 5 then implies that there also do not exist deterministic public-key encryption schemes that are \(\mathrm {{PRIV1}}[\mathbf {X}^{\text {cup}}]\)-secure.

CIH function families as per GOR [36] do not seem to have a unique associated security notion. Rather, the authors discuss a few choices. Our parameterized PRIV definitions above apply to function families as well and can be viewed as providing further security notions for CIH function families. These function families can also be used in our PO construction above as long as they are injective.

5.3 PO from UCE

Our next generic construction is based on UCE, a class of assumptions on function families from [6]. We use the multi-key version of the UCE assumption, denoted \(\mathrm {mUCE}\). As before we aim to provide point-function obfuscation secure for any given class of target generators. We are able to do this with \(\mathrm {mUCE}\) by associating to the class of target generators a class of sources. The existence of an \(\mathrm {mUCE}\)-secure function family relative to the latter suffices to construct a point-function obfuscator secure relative to the former.

Construction. Let \(\mathsf {H}\) be a family of functions. Associate to it a point-function obfuscator \(\mathsf {Obf}\) defined as follows. Let \(\mathsf {\mathsf {Obf}.tl}= \mathsf {H.il}\), and

figure g

The construction is simple and natural. The point-function obfuscation of \(\mathbf {I}_k\) is a circuit that embeds the hash y of target \(k\) under a freshly-chosen hash key, also embedded in the circuit, and, given a candidate target \(k'\), checks whether its hash under that key equals the embedded hash value.
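As an illustration, the construction can be sketched in Python. HMAC-SHA256 stands in for the keyed family \(\mathsf {H}\) purely for concreteness; the theorem requires an injective, \(\mathrm {mUCE}\)-secure family, which HMAC is not claimed to be:

```python
import hashlib
import hmac
import os

def H(hk: bytes, x: bytes) -> bytes:
    # Stand-in for the function family H; illustrative only.
    return hmac.new(hk, x, hashlib.sha256).digest()

def obfuscate(k: bytes):
    """Obf(1^lambda, I_k): pick a fresh hash key hk, embed hk and y = H(hk, k)."""
    hk = os.urandom(32)                      # freshly chosen key, embedded
    y = H(hk, k)                             # embedded hash of the target
    def P_bar(k_prime: bytes) -> int:
        # Returns 1 iff the candidate's hash under hk equals the embedded y;
        # functionally equivalent to I_k when H(hk, .) is injective.
        return 1 if hmac.compare_digest(H(hk, k_prime), y) else 0
    return P_bar

P = obfuscate(b"secret-target")
assert P(b"secret-target") == 1 and P(b"wrong-guess") == 0
```

Note that the obfuscated circuit reveals \((hk, y)\) but, under the security of \(\mathsf {H}\), hides the target itself.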

Source Classes. To state the result, we need a few definitions. Associate to a target generator \(\mathsf {X}\) a source \(\mathcal{S}^{\mathsf {X}}\) defined as follows:

figure h

The number of keys for this source is \(\mathcal{S}^{\mathsf {X}}.\mathsf {nk} = \mathsf {\mathsf {X}.vl}\), the number of points in the target vector. Now let \(\mathbf {X}\) be a class of target generators and let \(\mathbf {S}^{\mathbf {X}}=\{\,\mathcal{S}^{\mathsf {X}} \, :\,\mathsf {X}\in \mathbf {X}\,\}\) be the corresponding class of sources. We will show that the construction above is \(\mathrm {IND}[\mathbf {X}]\)-secure assuming \(\mathsf {H}\) is \(\mathrm {mUCE}[\mathbf {S}^{\mathbf {X}}]\)-secure. To appreciate what this provides we now discuss the assumption further.
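For concreteness, here is a hedged Python sketch of \(\mathcal{S}^{\mathsf {X}}\); the authoritative definition is figure h, and the names HASH and X_Ev, the interface of \(\mathsf {\mathsf {X}.Ev}\), and the placement of the simulated challenge bit d are assumptions on our part:

```python
import secrets

def source_SX(HASH, X_Ev, lam: int):
    """Sketch of the source S^X: one oracle query per key index,
    split leakage ((d, a1), y). Interface details are assumed."""
    d = secrets.randbelow(2)        # simulated IND challenge bit
    k0, k1, a1 = X_Ev(lam)          # target vectors plus auxiliary info
    kd = (k0, k1)[d]
    # One HASH query per key index i; (d, a1) depends only on the queries,
    # y only on the answers -- the "split" property.
    y = [HASH(i, kd[i]) for i in range(len(kd))]
    return (d, a1), y
```

The one-query-per-key and split structure visible here is what places \(\mathcal{S}^{\mathsf {X}}\) in \(\mathbf {S}^{\mathrm {splt}}\cap \mathbf {S}^{n,1}\).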

Assumptions in the UCE framework are very sensitive to the class of sources for which security is assumed. Accordingly one tries to restrict sources in different ways. In this regard \(\mathbf {S}^{\mathbf {X}}=\{\,\mathcal{S}^{\mathsf {X}} \, :\,\mathsf {X}\in \mathbf {X}\,\}\) has some good attributes as we now discuss, referring to definitions of classes of \(\mathrm {mUCE}\) sources recalled in Sect. 2.

The first attribute is that the sources in \(\mathbf {S}^{\mathbf {X}}\) are what BHK [6] call “split,” so that \(\mathbf {S}^{\mathbf {X}}\subseteq \mathbf {S}^{\mathrm {splt}}\). “Split” means that the leakage is a function of the oracle queries and answers separately, but not both together. (Above, \((d, a_1)\) depends only on the oracle queries, and \(\mathbf {y}\) depends only on the answers.) The second attribute is that the sources make only one query per key. (In particular when there is only one target point, the source makes only one query overall.) That is, \(\mathbf {S}^{\mathbf {X}}\subseteq \mathbf {S}^{n,1}\) if \(\mathcal{S}.\mathsf {nk}(\cdot )\le n(\cdot )\) for all \(\mathcal{S}\in \mathbf {S}^{\mathbf {X}}\). The third attribute is that the source class inherits the unpredictability properties of the target generator class. Thus if \(\mathbf {X}\subseteq \mathbf {X}^{\text {cup}}\) then \(\mathbf {S}^{\mathbf {X}}\subseteq \mathbf {S}^{\mathrm {cup}}\) consists of computationally unpredictable sources; if \(\mathbf {X}\subseteq \mathbf {X}^{\text {sup}}\) then \(\mathbf {S}^{ \mathbf {X}} \subseteq \mathbf {S}^{\mathrm {sup}}\) consists of statistically unpredictable sources; and if \(\mathbf {X}\subseteq \mathbf {X}^{\text {seup}}\) then \(\mathbf {S}^{\mathbf {X}}\subseteq \mathbf {S}^{\mathrm {seup}}\) consists of sources that are sub-exponentially unpredictable.

We warn that \(\mathrm {mUCE}[\mathbf {S}^{\mathbf {X}}]\)-security is not achievable for all choices of \(\mathbf {X}\). The value of our result is that it is entirely general, reducing \(\mathrm {IND}\) security for a given \(\mathbf {X}\) to a question of \(\mathrm {mUCE}\) security for a related class of sources, and we can then investigate the latter separately. In this way we get many new constructions.

Result. The following theorem shows that our construction above provides secure point-function obfuscation in a very general and modular way, namely the point-function obfuscator is secure relative to a class of target generators if \(\mathsf {H}\) is \(\mathrm {mUCE}\)-secure relative to the corresponding class of sources. After stating and proving this general result we will look at some special cases of interest.

Theorem 6

Let \(\mathsf {H}\) be an injective family of functions. Let \(\mathbf {X}\) be a class of target generators such that \(\mathsf {\mathsf {X}.tl}= \mathsf {H.il}\) for all \(\mathsf {X}\in \mathbf {X}\). Assume \(\mathsf {H}\) is \(\mathrm {mUCE}[\mathbf {S}^{\mathbf {X}}]\)-secure. Let \(\mathsf {Obf}\) be as defined above. Then \(\mathsf {Obf}\) is an \(\mathrm {IND}[\mathbf {X}]\)-secure point-function obfuscator.

Function family \(\mathsf {H}\) is assumed to be injective in order to meet the perfect correctness condition of a point-function obfuscator; injectivity plays no role in security. In [8] we show that non-injective \(\mathrm {mUCE}\) is sufficient to construct a point-function obfuscator that satisfies a relaxed correctness condition and achieves the same security as above.

Proof

(Theorem 6). Correctness of the obfuscator follows from the assumed injectivity of \(\mathsf {H}\), meaning that the output of \(\mathsf {Obf}(1^\lambda ,\mathbf {I}_{k})\) is always a point circuit with target \(k\). We now prove that \(\mathsf {Obf}\) is \(\mathrm {IND}[\mathbf {X}]\)-secure.

Let \(\mathsf {X}\in \mathbf {X}\) be any target generator with \(\mathsf {\mathsf {X}.tl}= \mathsf {H.il}\). Let \(\mathcal{S}^{\mathsf {X}}\) be the corresponding source as defined above. Let \(\mathcal{A}\) be a PT adversary against the \(\mathrm {IND}\)-security of \(\mathsf {Obf}\) relative to \(\mathsf {X}\). We define a PT distinguisher \(\mathcal{D}\) as follows:

figure i

Let b denote the challenge bit in game \(\mathrm {mUCE}_{\mathsf {H}}^{\mathcal{S}^{\mathsf {X}},\mathcal{D}}(\lambda )\), and let \(b'\) denote the bit returned by \(\mathcal{D}\) in the same game. We claim that

figure j

The first equation holds by construction. The second equation is true because, for \(b=0\), \(\mathcal{D}\) runs \(\mathcal{A}\) on inputs that are independent of the simulated challenge bit d: the entries in \(\mathbf {y}\) are uniform and independent of d, since the source \(\mathcal{S}^{\mathsf {X}}\) makes only one query per key index. We conclude that \(\mathsf {Adv}^{\mathsf {m\text {-}uce}}_{\mathsf {H},\mathcal{S}^{\mathsf {X}},\mathcal{D}}(\lambda ) = \mathsf {Adv}^{\mathsf {ind}}_{\mathsf {Obf},\mathsf {X},\mathcal{A}}(\lambda )/2\). Therefore, for any \(\mathsf {X}\in \mathbf {X}\) the \(\mathrm {IND}\) security of \(\mathsf {Obf}\) relative to \(\mathsf {X}\) follows from the assumed \(\mathrm {mUCE}[\{\mathcal{S}^{\mathsf {X}}\}]\)-security of \(\mathsf {H}\).
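Written out, the claim amounts to the following computation (a sketch: for \(b=1\) the distinguisher \(\mathcal{D}\) simulates game \(\mathrm {IND}\) for \(\mathcal{A}\) perfectly, while for \(b=0\) its inputs are independent of the simulated challenge bit, so it outputs 1 with probability exactly \(\frac{1}{2}\)):

```latex
\begin{align*}
\mathsf{Adv}^{\mathsf{m\text{-}uce}}_{\mathsf{H},\mathcal{S}^{\mathsf{X}},\mathcal{D}}(\lambda)
  &= \Pr[\,b'=1 \mid b=1\,] - \Pr[\,b'=1 \mid b=0\,] \\
  &= \left(\frac{1}{2} + \frac{1}{2}\,
       \mathsf{Adv}^{\mathsf{ind}}_{\mathsf{Obf},\mathsf{X},\mathcal{A}}(\lambda)\right)
     - \frac{1}{2}
   = \frac{1}{2}\,\mathsf{Adv}^{\mathsf{ind}}_{\mathsf{Obf},\mathsf{X},\mathcal{A}}(\lambda).
\end{align*}
```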

Negative Results for Multi-key UCE. Let \(n:{{\mathbb N}}\rightarrow {{\mathbb N}}\) be a polynomial such that \(n(\cdot )\in \varOmega ((\cdot )^{\epsilon })\) for some constant \(\epsilon >0\). Theorem 6 allows us to conclude that \(\mathrm {mUCE}[\mathbf {S}^{\mathrm {cup}}\cap \mathbf {S}^{\mathrm {splt}}\cap \mathbf {S}^{n,1}]\)-secure injective function families do not exist under certain assumptions. This is a simple corollary of prior results showing that MB-AIPO cannot co-exist with iO [20, 25]. We now explain our claim in more detail.

Theorem 6 shows that the existence of \(\mathrm {mUCE}[\mathbf {S}^{\mathrm {cup}}\cap \mathbf {S}^{\mathrm {splt}}\cap \mathbf {S}^{n,1}]\)-secure injective function families implies \(\mathrm {IND}[\mathbf {X}^{\text {cup}}\cap \mathbf {X}^{n}]\)-secure point-function obfuscation. Note that the latter is a composable AIPO as per CD [25]. CD [25] show that composable AIPO can be used to construct MB-AIPO, an obfuscation of multi-bit point functions, which map the target point to a multi-bit output (as opposed to an output in \(\{0, 1\}\)). Finally, BM1 [20] show that MB-AIPO cannot co-exist with iO, assuming one-way functions. These results imply the following:

Corollary 7

Let \(\mathsf {H}\) be an injective function family. Let \(n:{{\mathbb N}}\rightarrow {{\mathbb N}}\) be a polynomial such that \(n(\cdot )\in \varOmega ((\cdot )^{\epsilon })\) for some constant \(\epsilon >0\). Assume the existence of one-way functions and indistinguishability obfuscation. Then \(\mathsf {H}\) is not \(\mathrm {mUCE}[\mathbf {S}^{\mathrm {cup}}\cap \mathbf {S}^{\mathrm {splt}}\cap \mathbf {S}^{n,1}]\)-secure.

In a concurrent and independent work, BM3 [22] discuss a similar impossibility result for \(\mathrm {mUCE}[\mathbf {S}^{\mathrm {s\text{- }cup}}\cap \mathbf {S}^{n,1}]\)-security. Here \(\mathbf {S}^{\mathrm {s\text{- }cup}}\) is a class of UCE sources introduced by BM2 [21], who also show that \(\mathbf {S}^{\mathrm {cup}}\cap \mathbf {S}^{\mathrm {splt}}\subsetneq \mathbf {S}^{\mathrm {s\text{- }cup}}\). We note that impossibility of \(\mathrm {mUCE}[\mathbf {S}^{\mathrm {cup}}\cap \mathbf {S}^{\mathrm {splt}}\cap \mathbf {S}^{n,1}]\)-secure function families is a stronger result because it concerns a smaller class of sources.

No other impossibility results are known for \(\mathrm {mUCE}\) exclusively, but any negative results for (single-key) \(\mathrm {UCE}\) also apply to \(\mathrm {mUCE}\). Specifically, BFM [18] give an iO-based attack on \(\mathrm {UCE}[\mathbf {S}^{\mathrm {cup}}]\). And BST [10] show that \(\mathrm {UCE}[\mathbf {S}^{\mathrm {cup}}\cap \mathbf {S}^{\mathrm {splt}}]\)-secure function families do not exist assuming the existence of OWFs and iO, a strictly stronger impossibility result than the former. The result by BST [10] implies that \(\mathrm {mUCE}[\mathbf {S}^{\mathrm {cup}}\cap \mathbf {S}^{\mathrm {splt}}\cap \mathbf {S}^{1,p}]\)-secure function families do not exist for any polynomial \(p(\cdot )\in \varOmega ((\cdot )^{\epsilon })\) with \(\epsilon >0\), but we currently do not know whether this notion is comparable to \(\mathrm {mUCE}[\mathbf {S}^{\mathrm {cup}}\cap \mathbf {S}^{\mathrm {splt}}\cap \mathbf {S}^{n,1}]\).

Related Work. One special case of Theorem 6 is when \(\mathbf {X}= \mathbf {X}^{\text {cup}}\cap \mathbf {X}^{1}\), so that \(\mathrm {IND}[\mathbf {X}]\) is AIPO. The theorem and the remarks preceding it imply that we get this assuming \(\mathrm {mUCE}[\mathbf {S}^{\mathrm {cup}}\cap \mathbf {S}^{\mathrm {splt}}\cap \mathbf {S}^{1,1}]\)-security. This special case of our result was independently and concurrently obtained in [22]. Note that BM2 [21] showed that \(\mathrm {mUCE}[\mathbf {S}^{\mathrm {cup}}\cap \mathbf {S}^{\mathrm {splt}}\cap \mathbf {S}^{1,1}]\)-security is achievable assuming iO and AIPO. It follows from our result that \(\mathrm {mUCE}[\mathbf {S}^{\mathrm {cup}}\cap \mathbf {S}^{\mathrm {splt}}\cap \mathbf {S}^{1,1}]\) and AIPO are equivalent, assuming iO.

6 Alternative Security Notions for PO

In Sect. 3 we defined \(\mathrm {IND}\) security of point-function obfuscation. It extends security notions that were used for variants of AIPO in BP [14], MH [40], BM1 [20] and BM3 [22]. The main difference is that \(\mathrm {IND}\) is parameterized with a class of target generators, allowing us to unify the treatments of AIPO in the literature.

In this section we provide several alternative security notions for point-function obfuscation, and show relations between them and \(\mathrm {IND}\). Specifically, we extend the security notion introduced by Canetti [23] as well as the notions of average-case [26, 29] and worst-case [2, 25, 35, 39, 47] simulation-based security for point-function obfuscation. Similar to \(\mathrm {IND}\), our extended notions are parameterized with classes of target generators. We also define a novel security notion, called computational semantic security, by adapting the corresponding definition that was used for DPKE in [4] to the setting of point-function obfuscation and parameterizing it in the same way as above. Finally, we discuss the security achieved by our PO constructions from Sect. 5 with respect to the new notions.

Fig. 7.
figure 7

Games defining \(\mathrm {SIND}\) security, \(\mathrm {CSS}\) security and \(\mathrm {SSS}\) security of point-function obfuscator \(\mathsf {Obf}\) relative to target generator \(\mathsf {X}\).

Strong Indistinguishability. Consider game \(\mathrm {SIND}\) of Fig. 7 associated to a point-function obfuscator \(\mathsf {Obf}\), a target generator \(\mathsf {X}\), an adversary \(\mathcal{A}\) and a distinguisher \(\mathcal{D}\), such that \(\mathcal{A}\) returns an output in \(\{0,1\}\) and \(\mathsf {\mathsf {Obf}.tl}= \mathsf {\mathsf {X}.tl}\). For \(\lambda \in {{\mathbb N}}\) let \(\mathsf {Adv}^{\mathsf {sind}}_{\mathsf {Obf},\mathsf {X},\mathcal{A}, \mathcal{D}}(\lambda )=2\Pr [\mathrm {SIND}_{\mathsf {Obf},\mathsf {X}}^{\mathcal{A}, \mathcal{D}}(\lambda )]-1\). Let \(\mathbf {X}\) be a class of target generators. We say that \(\mathsf {Obf}\) is \(\mathrm {SIND}[\mathbf {X}]\)-secure if \(\mathsf {Adv}^{\mathsf {sind}}_{\mathsf {Obf},\mathsf {X},\mathcal{A}, \mathcal{D}}(\cdot )\) is negligible for every \(\mathsf {X}\in \mathbf {X}\), every PT \(\mathcal{A}\) and every PT \(\mathcal{D}\). The difference between our definitions of \(\mathrm {IND}\) and \(\mathrm {SIND}\) is that the latter also runs a distinguisher in the last stage of the game, which makes this definition meaningful even for trivial target generators (as defined in Sect. 3). Our definition of \(\mathrm {SIND}\) extends the security notion used for oracle hashing by Canetti [23], parameterizing it with classes of target generators. Another difference is that \(\mathrm {SIND}\) samples target vectors \(\mathbf {k}_0, \mathbf {k}_1\) from distributions that are potentially different, whereas [23] used the same distribution for both. Note that adversary \(\mathcal{A}\) cannot be allowed to return an output of arbitrary length: it could then simply return \(\varvec{\overline{\mathrm {P}}}\), making security trivially unachievable.

Computational Semantic Security. Consider game \(\mathrm {CSS}\) of Fig. 7 associated to a point-function obfuscator \(\mathsf {Obf}\), a target generator \(\mathsf {X}\) and an adversary \(\mathcal{A}= (\mathcal{A}_1, \mathcal{A}_2)\) such that algorithms \(\mathcal{A}_1, \mathcal{A}_2\) return outputs in \(\{0,1\}\) and \(\mathsf {\mathsf {Obf}.tl}= \mathsf {\mathsf {X}.tl}\). For \(\lambda \in {{\mathbb N}}\) let \(\mathsf {Adv}^{\mathsf {css}}_{\mathsf {Obf},\mathsf {X},\mathcal{A}}(\lambda )=2\Pr [\mathrm {CSS}_{\mathsf {Obf},\mathsf {X}}^{\mathcal{A}}(\lambda )]-1\). Let \(\mathbf {X}\) be a class of target generators. We say that \(\mathsf {Obf}\) is \(\mathrm {CSS}[\mathbf {X}]\)-secure if \(\mathsf {Adv}^{\mathsf {css}}_{\mathsf {Obf},\mathsf {X},\mathcal{A}}(\cdot )\) is negligible for every \(\mathsf {X}\in \mathbf {X}\) and every PT \(\mathcal{A}\). This is an adaptation of the definition of computational semantic security for DPKE from [4], which we further parameterize with classes of target generators. It asks that adversary \(\mathcal{A}\) cannot use an obfuscation \(\varvec{\overline{\mathrm {P}}}\) of \(\mathbf {k}_1\) to compute any partial information about the latter, even in the presence of auxiliary information \(a_1\). This gives a more intuitive view of the desired security of point-function obfuscation than the definition of \(\mathrm {SIND}\).

Simulation-Based Semantic Security. We consider two different definitions of simulation-based semantic security. Informally, both definitions require that for every PT adversary \(\mathcal{A}\) that receives as input an obfuscation of some point function \(\mathbf {I}_{k}\), there exists a PT simulator with only oracle access to \(\mathbf {I}_{k}\), such that the output distribution of the former is indistinguishable from that of the latter. The two definitions differ in how \(\mathbf {I}_{k}\) is chosen. One option is to quantify over all possible point functions that can be produced by a particular target generator. For this purpose, we extend the definitions of worst-case security [2, 25, 35, 39, 47] for point-function obfuscation. We use \(\mathrm {SIM}\) to denote our new security notion. An alternative approach is to use target generator \(\mathsf {X}\) in order to sample point functions. This follows the definitions of average-case security [26, 29] for point-function obfuscation, and we use \(\mathrm {SSS}\) to denote our extended security notion.

Consider game \(\mathrm {SSS}\) of Fig. 7 associated to a point-function obfuscator \(\mathsf {Obf}\), a target generator \(\mathsf {X}\), an adversary \(\mathcal{A}\), a simulator \(\mathcal{S}\) and a predicate algorithm \(\mathcal{P}\), such that algorithms \(\mathcal{A}, \mathcal{S}, \mathcal{P}\) return outputs in \(\{0,1\}\) and \(\mathsf {\mathsf {Obf}.tl}= \mathsf {\mathsf {X}.tl}\). For \(\lambda \in {{\mathbb N}}\) let \(\mathsf {Adv}^{\mathsf {sss}}_{\mathsf {Obf},\mathsf {X},\mathcal{A}, \mathcal{S}, \mathcal{P}}(\lambda )=2\Pr [\mathrm {SSS}_{\mathsf {Obf},\mathsf {X}}^{\mathcal{A}, \mathcal{S}, \mathcal{P}}(\lambda )]-1\). Let \(\mathbf {X}\) be a class of target generators. We say that \(\mathsf {Obf}\) is \(\mathrm {SSS}[\mathbf {X}]\)-secure if for every target generator \(\mathsf {X}\in \mathbf {X}\) and every PT \(\mathcal{A}\) there exists a PT \(\mathcal{S}\) such that \(\mathsf {Adv}^{\mathsf {sss}}_{\mathsf {Obf},\mathsf {X},\mathcal{A}, \mathcal{S}, \mathcal{P}}(\cdot )\) is negligible for every PT \(\mathcal{P}\). Informally, this security notion requires that for every adversary \(\mathcal{A}\) there exists a simulator \(\mathcal{S}\) such that if \(\mathcal{A}\) can use obfuscations of \(\mathbf {I}_{\mathbf {k}}\) to compute any property (function) \(\mathcal{P}\) of \(\mathbf {k}\), then \(\mathcal{S}\) can do the same using only oracle access to \(\mathbf {I}_{\mathbf {k}}\) (meaning that \(\mathcal{S}\) has oracle access to each of \(\mathbf {I}_{\mathbf {k}[1]},\ldots ,\mathbf {I}_{\mathbf {k}[n]}\) for \(n = |\mathbf {k}|\)). This is required to hold even when \(\mathcal{A}, \mathcal{S}, \mathcal{P}\) receive as input some auxiliary information \(a\) about \(\mathbf {k}\).

SIM Security. Next, we define the \(\mathrm {SIM}\)-security of PO. Let \(\mathbf {X}\) be a class of target generators. Let \(\mathsf {Obf}\) be a point-function obfuscator. We say that \(\mathsf {Obf}\) is \(\mathrm {SIM}[\mathbf {X}]\)-secure if for every target generator \(\mathsf {X}\in \mathbf {X}\) and every PT adversary \(\mathcal{A}\) there exists a PT simulator \(\mathcal{S}\) and a negligible function \(\mu :{{\mathbb N}}\rightarrow [0,1]\) such that

figure k

for every \(\lambda \in {{\mathbb N}}\), every \((\mathbf {k}, a)\in [\mathsf {\mathsf {X}.Ev}(1^\lambda )]\) and every PT predicate algorithm \(\mathcal{P}\) that returns an output in \(\{0,1\}\).

In the above definition of \(\mathrm {SIM}\)-security, predicate \(\mathcal{P}\) can be substituted with a constant function, resulting in an equivalent definition (as noted in [2, 35, 47]). In contrast, this is not true for the definition of \(\mathrm {SSS}\)-security. Replacing \(\mathcal{P}\) with a constant function would allow \(\mathcal{S}\) to run \(\mathsf {X}\) in order to generate fresh \((\mathbf {k}, a)\), obfuscate \(\mathbf {I}_{\mathbf {k}}\) to get \(\varvec{\overline{\mathrm {P}}}\), and simulate \(\mathcal{A}\) on \(\varvec{\overline{\mathrm {P}}}, a\). As a result, every obfuscator would be vacuously \(\mathrm {SSS}\)-secure for any class of target generators \(\mathbf {X}\).

Fig. 8.
figure 8

Relations between security notions for point-function obfuscation.

Relations Between Security Notions. Figure 8 shows relations between the security notions for point-function obfuscation that are discussed in this paper. Consider any two security notions \(\mathrm {A}\) and \(\mathrm {B}\). An arrow from \(\mathrm {A}\) to \(\mathrm {B}\) means that any \(\mathrm {A}[\mathbf {X}]\)-secure point-function obfuscator is also \(\mathrm {B}[\mathbf {X}]\)-secure, for every class of target generators \(\mathbf {X}\). A crossed arrow going from \(\mathrm {A}\) to \(\mathrm {B}\) means that there exists an obfuscator \(\mathsf {Obf}\) and a class of target generators \(\mathbf {X}\) such that \(\mathsf {Obf}\) is \(\mathrm {A}[\mathbf {X}]\)-secure but not \(\mathrm {B}[\mathbf {X}]\)-secure.

Implications \(\mathrm {SIM}\rightarrow \mathrm {SSS}\) and \(\mathrm {SIND}\rightarrow \mathrm {IND}\) trivially follow from our definitions of the corresponding security notions. The proofs for all other implications and separations shown in Fig. 8 are provided in [8]. The only relations that are missing in the figure (and cannot be deduced using transitivity) are those between \(\mathrm {SIM}\) and both of \(\mathrm {SSS}, \mathrm {CSS}\). We leave it as an open question to show the remaining relations between these security notions.

Security of Our PO Constructions. Let \(\mathbf {X}\) be a class of target generators. In Sect. 5 we showed how to build a point-function obfuscator that is \(\mathrm {IND}[\mathbf {X}]\)-secure, based on any of the following: a \(\mathrm {OWF}[\mathbf {X}]\)-secure function family and an iO, or a \(\mathrm {{PRIV1}}[\mathbf {X}]\)-secure DPKE, or an \(\mathrm {mUCE}[\mathbf {S}^{\mathbf {X}}]\)-secure function family for \(\mathbf {S}^{\mathbf {X}}\) as defined in Sect. 5.3. We do not know how to adapt our constructions to achieve \(\mathrm {SIM}[\mathbf {X}]\)-security. But each of our constructions achieves \(\mathrm {CSS}[\mathbf {X}]\)-security, requiring only minimal changes to the underlying assumptions.

We now provide some intuition about our claim. Recall that game \(\mathrm {CSS}\) computes \(t\in \{0,1\}\) by running \(\mathcal{A}_1(1^\lambda , \mathbf {k}_1, a_1)\), and subsequently compares it to the output of \(\mathcal{A}_2(1^\lambda , \varvec{\overline{\mathrm {P}}}, a_1)\). This is different from game \(\mathrm {IND}\) where the adversary consists only of an algorithm \(\mathcal{A}(1^\lambda , \varvec{\overline{\mathrm {P}}}, a_1)\). The difficulty of adapting proofs of \(\mathrm {IND}[\mathbf {X}]\)-security to achieve \(\mathrm {CSS}[\mathbf {X}]\)-security is that in the latter \(\mathbf {k}_1\) (required to run \(\mathcal{A}_1\)) and \(\varvec{\overline{\mathrm {P}}}\) (required to run \(\mathcal{A}_2\)) are usually available in different stages of the security proof, meaning that one has to find a way to pass around the value of t (which depends on \(\mathbf {k}_1\)) across the stages. We resolve this by pushing t into the auxiliary information of target generators that parametrize our security notions.

Let \(\mathbf {X}\) be a class of target generators. Let \(\mathbf {P}\) be the set of all PT predicate algorithms \(\mathcal{P}\) such that \(\mathcal{P}(1^\lambda , \cdot , \cdot ):\{0,1\}^* \times \{0,1\}^*\rightarrow \{0,1\}\) for all \(\lambda \in {{\mathbb N}}\). For any \(\lambda \in {{\mathbb N}}\), \(\mathsf {X}\in \mathbf {X}\) and \(\mathcal{P}\in \mathbf {P}\) let \(\mathsf {X}^{\mathcal{P}}\) be defined as follows:

figure l

where \(\mathsf {X}^{\mathcal{P}}.\mathsf {vl} = \mathsf {\mathsf {X}.vl}\) and \(\mathsf {X}^{\mathcal{P}}.\mathsf {tl} = \mathsf {\mathsf {X}.tl}\). We define a new class of target generators \(\mathbf {X}' = \{\,\mathsf {X}^{\mathcal{P}} \, :\,\mathsf {X}\in \mathbf {X}, \mathcal{P}\in \mathbf {P}\,\}\). Then each of our constructions from Sect. 5 achieves \(\mathrm {CSS}[\mathbf {X}]\)-security, based on either of the following: a \(\mathrm {OWF}[\mathbf {X}']\)-secure function family and an iO, or a \(\mathrm {{PRIV1}}[\mathbf {X}']\)-secure DPKE, or an \(\mathrm {mUCE}[\mathbf {S}^{\mathbf {X}'}]\)-secure function family.
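A minimal Python sketch of the wrapper \(\mathsf {X}^{\mathcal{P}}\) follows. The interface is hypothetical: we assume \(\mathsf {\mathsf {X}.Ev}(1^\lambda )\) returns a pair of target vectors with auxiliary information and that the predicate bit is computed on \(\mathbf {k}_1\), matching the role of \(\mathcal{A}_1\) in game \(\mathrm {CSS}\); the authoritative definition is the one displayed above:

```python
def make_XP(X_Ev, P):
    """Wrap a target generator X into X^P by appending the predicate bit t
    to the auxiliary information. Sketch only: the interface of X.Ev
    (returning (k0, k1, a)) and the arguments of P are assumptions."""
    def XP_Ev(lam: int):
        k0, k1, a = X_Ev(lam)   # run the underlying target generator
        t = P(lam, k1, a)       # one extra bit: the predicate on (k1, a)
        return k0, k1, (a, t)   # auxiliary information expanded by one bit
    return XP_Ev
```

Since only one bit is appended, a predictor against \(\mathsf {X}\) can simply guess t, which underlies the factor-\(\frac{1}{2}\) loss in unpredictability noted below.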

Note that for any \(\mathsf {X}\in \mathbf {X}\) and \(\mathcal{P}\in \mathbf {P}\), the construction of \(\mathsf {X}^{\mathcal{P}}\) expands the auxiliary information of \(\mathsf {X}\) only by a single bit. This means that \(\mathsf {X}^{\mathcal{P}}\) inherits the unpredictability properties of \(\mathsf {X}\). Namely, for any \(\mathsf {X}\in \mathbf {X}\), \(\mathcal{P}\in \mathbf {P}\) and any PT adversary \(\mathcal{R}\) we can construct a PT adversary \(\mathcal{Q}\) such that \(\Pr [\mathrm {PRED}_{\mathsf {X}}^{\mathcal{Q}}(\lambda )] \ge \frac{1}{2} \Pr [\mathrm {PRED}_{\mathsf {X}^{\mathcal{P}}}^{\mathcal{R}}(\lambda )]\) for all \(\lambda \in {{\mathbb N}}\). Adversary \(\mathcal{Q}\) guesses the extra bit of information and then simulates \(\mathcal{R}\). The same approach can be used to show that any \(\mathrm {OWF}[\mathbf {X}]\)-secure function family is also \(\mathrm {OWF}[\mathbf {X}']\)-secure, recovering the construction of \(\mathrm {CSS}[\mathbf {X}]\)-secure PO directly from a \(\mathrm {OWF}[\mathbf {X}]\)-secure function family and an iO.

Definitional Choices. All of our security notions for point-function obfuscation require that adversaries return single-bit outputs. This is consistent with the prior work. Specifically, simulation-based definitions in the prior literature always compare the outputs of adversary and simulator to either a predicate [27, 35] or a constant [2, 25, 26, 29, 39, 47]. However, it would be more intuitive to not restrict the size of outputs returned by adversaries in games \(\mathrm {CSS}\), \(\mathrm {SSS}\) and \(\mathrm {SIM}\). The goal of these adversaries can be thought of as computing some “property” of the target vector, and there is no reason to limit it to a single bit.

The initial work on obfuscation [2] discusses various definitional choices and chooses to use the weakest of them to achieve stronger impossibility results. Subsequent work continues to use definitions of the same style even for positive results. We are not aware of any follow-up discussion on alternative definitions.

Some of our implications from Fig. 8 might change if adversaries in games \(\mathrm {CSS}\), \(\mathrm {SSS}\) and \(\mathrm {SIM}\) are allowed to return multiple-bit outputs. In particular, note that our definitions of \(\mathrm {CSS}\) and \(\mathrm {SSS}\) are similar to those that were used for DPKE schemes in BFOR [4], who showed them to be equivalent for multiple-bit outputs in their setting. We leave it as an open problem to extend our definitions to allow outputs of an arbitrary size.