
1 Introduction

Cryptographic theory has become increasingly bold with regard to assumptions and conjectures. This is particularly true in the area of obfuscation, where candidate constructions have been provided whose claim to achieve a certain form of obfuscation is either itself an assumption [31] or is justified under other, new and strong assumptions [12, 34, 41]. This is attractive and exciting because we gain new capabilities and applications. But it behoves us also to be cautious and try to ascertain not just whether the assumptions are true, but whether the goals are even achievable.

But how are we to determine this? The direct route is cryptanalysis, and we have indeed seen some success [27, 28, 33, 39]. But cryptanalysis can be difficult and runs into major open complexity-theoretic questions. There is another rewarding route, that we pursue here. This is to seek and establish relations that we call contentions. These take the form \(\lnot \text {A}\vee \lnot \text {B}\) where \(\text {A},\text {B}\) are different primitives or assumptions. This shows that \(\text {A},\text {B}\) are not both achievable, meaning they cannot co-exist. We may not know which of the two fails, but at least one of the two must, which is valuable and sometimes surprising information. Indeed, many intriguing contentions of this form have been provided in recent work [5, 11, 14, 20–22, 32, 36]. For example, we know that the following cannot co-exist: “Special-purpose obfuscation” and \(\text {diO}\) [32]; Multi-bit point function obfuscation and \(\text {iO}\) [22]; extractable one-way functions and \(\text {iO}\) [14].

In this paper we begin by addressing the question of whether VGBO (Virtual Grey Box Obfuscation) for all circuits is possible, as conjectured by BCKP [13, Section 1.1]. We show that this is in contention with the AI-DHI assumption of [15, 25]. We go on to show that iO is in contention with certain forms of leakage resilient encryption and UCE.

1.1 VGBO and AI-DHI

We show that \(\lnot \text {VGBO}\vee \lnot \text {AI-DHI}\). That is, Virtual Grey Box Obfuscation (VGBO) of all circuits is in contention with Canetti’s 1997 AI-DHI (Auxiliary-Input Diffie-Hellman Inversion) assumption [15, 25]. One of the two (or both) must fail. Let us now back up to provide more information on the objects involved and the proof.

The study of obfuscation began with VBBO (Virtual Black Box Obfuscation) [4, 37], which asks that for any PT adversary \(\mathcal{A}\) given the obfuscated circuit, there is a PT simulator \(\mathcal{S}\) given an oracle for the original circuit, such that the two have about the same probability of returning 1. The impossibility of VBBO [4, 16, 35] has led to efforts to define and achieve weaker forms of obfuscation. VGBO [10] is a natural relaxation of VBBO allowing the simulator \(\mathcal{S}\) to be computationally unbounded but restricted to polynomially-many oracle queries. This bypasses known VBBO impossibility results while still allowing interesting applications. Furthermore, BCKP [12, 13] show that VGBO for \(\text {NC}^1\) is achievable (under a novel assumption). They then say “existing candidate indistinguishability obfuscators for all circuits [3, 19, 31] may also be considered as candidates for VGB obfuscation, for all circuits” [13, Section 1.1]. This would mean, in particular, that VGBO for all circuits is achievable. In this paper we ask if this “VGB conjecture” is true.

The AI-DHI assumption [15, 25] says that there is an ensemble \(\mathcal{G}= \{\mathbb {G}_{\lambda }\,:\,\lambda \in {{\mathbb N}}\}\) of prime-order groups such that, for a random exponent x and for r, s chosen at random from \(\mathbb {G}_{\lambda }\), no polynomial-time adversary can distinguish between \((r,r^x)\) and (r, s), even when given auxiliary information \(a\) about x, as long as this information \(a\) is “x-prediction-precluding,” meaning it does not allow one to compute x in polynomial time. The assumption has been used for oracle hashing [25], AIPO (auxiliary-input point-function obfuscation) [15] and zero-knowledge proofs [15].

Our result is that \(\lnot \text {VGBO}\vee \lnot \text {AI-DHI}\). That is, either VGBO for all circuits is impossible or the AI-DHI assumption is false. To prove this, we take any ensemble \(\mathcal{G}=\{\mathbb {G}_{\lambda }\,:\,\lambda \in {{\mathbb N}}\}\) of prime-order groups. For random x, we define a way of picking the auxiliary information \(a\) such that (1) \(a\) is x-prediction-precluding, but (2) there is a polynomial-time adversary that, given \(a\), can distinguish between \((r,r^x)\) and (r, s) for random r, s. Consider the circuit \(\mathrm {C}_{x}\) that on input (u, v) returns 1 if \(v=u^x\) and 0 otherwise. The auxiliary information \(a\) will be a VGB obfuscation \(\overline{\mathrm {C}}\) of \(\mathrm {C}_x\). Now (2) is easy to see: the adversary, given challenge (u, v), can win by returning \(\overline{\mathrm {C}}(u,v)\). But why is (1) true? We use the assumed VGB security of the obfuscator to reduce (1) to showing that no simulator, even an unbounded one, given an oracle for \(\mathrm {C}_x\), can extract x in a polynomial number of queries. This is shown through an information-theoretic argument that exploits the group structure.

The natural question is: which of \(\text {VGBO}\) and \(\text {AI-DHI}\) is it that fails? This is an intriguing question and we do not know the answer at this point.

1.2 Key-Message Leakage Resilience

DKL [30] and CKVW [26] provide key leakage resilient symmetric encryption (K-LR-SE) schemes. This means they retain security even when the adversary has auxiliary information about the key, as long as this information is key-prediction-precluding, meaning does not allow one to compute the key. We consider a generalization that we call key-message leakage resilient symmetric encryption (KM-LR-SE). Here the auxiliary information is allowed to depend not just on the key but also on the message, the requirement however still being that it is key-prediction-precluding, meaning one cannot compute the key from the auxiliary information. The enhancement would appear to be innocuous, because the strong semantic-security style formalizations of encryption that we employ in any case allow the adversary to have a priori information about the message. However, we show that this goal is impossible to achieve if iO for all circuits is possible. That is, we show in Theorem 3 that \(\lnot \text {iO}\vee \lnot \text {KM-LR-SE}\). Since iO is becoming more broadly accepted, this indicates that \(\text {KM-LR-SE}\) is not likely to exist. We think this may be of direct interest from the perspective of leakage resilience, but its main importance for us is as a tool to establish new negative results for UCE as discussed in Sect. 1.3 below. The proof of Theorem 3 is a minor adaptation of the proof of BM [22] ruling out MB-AIPO under iO.

1.3 UCE for Split Sources

UCE is a class of assumptions for function families introduced in BHK [6] with the goal of instantiating random oracles. For a class \(\mathbf {S}\) of algorithms called sources, BHK define \(\mathrm {UCE}[\mathbf {S}]\) security of a family of functions. The parameterization is necessary because security is not achievable for the class of all sources. BHK and subsequent work [5, 6, 20, 23, 29, 40] have considered several restricted classes of sources and, based on the assumption of UCE security for these, been able to instantiate random oracles to obtain secure, efficient instantiations for primitives including deterministic public-key encryption, message-locked encryption, encryption secure for key-dependent messages, encryption secure under related-key attacks, adaptive garbling, hardcore functions and IND-CCA public-key encryption.

However, UCE here has functioned as an assumption. We know little about its achievability. The basic foundational question in this area is, for which source classes \(\mathbf {S}\) is \(\mathrm {UCE}[\mathbf {S}]\) security achievable? The first step towards answering this was taken by BFM [20], who showed that \(\lnot \text {iO}\vee \lnot \text {UCE}[\mathbf {S}^{\mathrm {cup}}]\). That is, iO for all circuits is in contention with UCE security relative to the class \(\mathbf {S}^{\mathrm {cup}}\) of all computationally unpredictable sources. (These are sources whose leakage is computationally unpredictable when their queries are answered by a random oracle.) This led BHK [6] to propose restricting attention to “split” sources. Such sources can leak information about an oracle query and its answer separately, but not together. This circumvents the BFM attack. Indeed, \(\mathrm {UCE}[\mathbf {S}^{\mathrm {cup}}\cap \mathbf {S}^{\mathrm {splt}}]\) appeared plausible even in the presence of iO. However in this paper we show \(\lnot \text {iO}\vee \lnot \text {UCE}[\mathbf {S}^{\mathrm {cup}}\cap \mathbf {S}^{\mathrm {splt}}]\), meaning \(\text {iO}\) and \(\text {UCE}[\mathbf {S}^{\mathrm {cup}}\cap \mathbf {S}^{\mathrm {splt}}]\) security cannot co-exist. The interpretation is that \(\text {UCE}[\mathbf {S}^{\mathrm {cup}}\cap \mathbf {S}^{\mathrm {splt}}]\)-secure function families are unlikely to exist. We obtain our \(\lnot \text {iO}\vee \lnot \text {UCE}[\mathbf {S}^{\mathrm {cup}}\cap \mathbf {S}^{\mathrm {splt}}]\) result by showing that \(\text {UCE}[\mathbf {S}^{\mathrm {cup}}\cap \mathbf {S}^{\mathrm {splt}}]\Rightarrow \text {KM-LR-SE}\), meaning we can build a key-message leakage resilient symmetric encryption scheme given any \(\mathrm {UCE}[\mathbf {S}^{\mathrm {cup}}\cap \mathbf {S}^{\mathrm {splt}}]\)-secure function family.
But we saw above that \(\lnot \text {iO}\vee \lnot \text {KM-LR-SE}\) and can thus conclude that \(\lnot \text {iO}\vee \lnot \text {UCE}[\mathbf {S}^{\mathrm {cup}}\cap \mathbf {S}^{\mathrm {splt}}]\).

BM2 [23] show that \(\mathrm {UCE}[\mathbf {S}^{\mathrm {cup}}\cap \mathbf {S}^{\mathrm {splt}}\cap \mathbf {S}^{1}]\) security —\(\mathbf {S}^{\mathrm {cup}}\cap \mathbf {S}^{\mathrm {splt}}\cap \mathbf {S}^{q}\) is the class of computationally unpredictable split sources making q oracle queries— is achievable. (They assume iO and AIPO.) Our \(\lnot \text {iO}\vee \lnot \text {UCE}[\mathbf {S}^{\mathrm {cup}}\cap \mathbf {S}^{\mathrm {splt}}]\) result does not contradict this since our source makes a polynomial number of oracle queries. Indeed our result complements the BM2 one to show that a bound on the number of source oracle queries is necessary for a positive result. Together these results give a close to complete picture of the achievability of UCE for split sources, the remaining open question being achievability of \(\mathrm {UCE}[\mathbf {S}^{\mathrm {cup}}\cap \mathbf {S}^{\mathrm {splt}}\cap \mathbf {S}^{q}]\) for constant \(q>1\).

We note that we are not aware of any applications assuming \(\mathrm {UCE}[\mathbf {S}^{\mathrm {cup}}\cap \mathbf {S}^{\mathrm {splt}}]\). Prior applications have used either \(\mathrm {UCE}[\mathbf {S}^{\mathrm {cup}}\cap \mathbf {S}^{\mathrm {splt}}\cap \mathbf {S}^{1}]\) or quite different classes like \(\mathrm {UCE}[\mathbf {S}^{\mathrm {sup}}]\)\(\mathbf {S}^{\mathrm {sup}}\) is the class of statistically unpredictable sources [6, 20]— and neither of these is at risk from our results. However our \(\lnot \text {iO}\vee \lnot \text {UCE}[\mathbf {S}^{\mathrm {cup}}\cap \mathbf {S}^{\mathrm {splt}}]\) result is of interest towards understanding the achievability of UCE assumptions and the effectiveness of different kinds of restrictions (in this case, splitting) on sources. The achievability of \(\text {UCE}[\mathbf {S}^{\mathrm {cup}}\cap \mathbf {S}^{\mathrm {splt}}]\) security was an open problem from prior work.

1.4 Discussion and Related Work

The idea of using an obfuscated circuit as an auxiliary input to obtain contention results has appeared in many prior works [5, 11, 14, 20–22, 32, 36]. Some of the contentions so established are between “Special-purpose obfuscation” and diO [32], between MB-AIPO and iO [22] and between extractable one-way functions and iO [14]. Our work follows in these footsteps.

KM-LR-SE can be viewed as a symmetric encryption re-formulation of MB-AIPO following the connection of the latter to symmetric encryption established by CKVW [26]. The main change is in the correctness condition. We formulate a weak correctness condition, which is important for our application to UCE. In its absence, our negative result for split-source UCE would hold only for injective functions, which is much weaker. With this connection in mind, the proof of Theorem 3, as we have indicated above, is a minor adaptation of the proof of BM [22] ruling out MB-AIPO under iO. Our result about KM-LR-SE is thus of limited technical novelty, but we think this symmetric encryption re-formulation of BM [22] is of interest from the leakage-resilience perspective and as a tool to obtain further negative results, as exemplified by our application to UCE.

In independent and concurrent work, BM3 [24] show \(\lnot \text {iO}\vee \lnot \mathrm {UCE}[\mathbf {S}^{\mathrm {s\text{- }cup}}]\), where \(\mathbf {S}^{\mathrm {s\text{- }cup}}\) is the class of strongly computationally unpredictable sources as defined in [22]. But the latter show that \(\mathbf {S}^{\mathrm {cup}}\cap \mathbf {S}^{\mathrm {splt}}\) is a strict subset of \(\mathbf {S}^{\mathrm {s\text{- }cup}}\). This means that our \(\lnot \text {iO}\vee \lnot \text {UCE}[\mathbf {S}^{\mathrm {cup}}\cap \mathbf {S}^{\mathrm {splt}}]\) result is strictly stronger than the \(\lnot \text {iO}\vee \lnot \mathrm {UCE}[\mathbf {S}^{\mathrm {s\text{- }cup}}]\) result of [24]. (Under iO, our result rules out UCE security for a smaller, more restricted class of sources.)

Our results on UCE, as with the prior ones of BFM [20], are for the basic setting, where there is a single key or single user [6]. BHK [6] also introduce a multi-key (multi-user) setting. Some negative results about this are provided in [8].

2 Preliminaries

Notation. We denote by \(\lambda \in {{\mathbb N}}\) the security parameter and by \(1^\lambda \) its unary representation. If \(x\in \{0,1\}^*\) is a string then |x| denotes its length, x[i] denotes its i-th bit, and \(x[i..j]=x[i]\ldots x[j]\) for \(1\le i\le j\le |x|\). We let \(\varepsilon \) denote the empty string. If s is an integer then \(\mathsf {Pad}_s(\mathrm {C})\) denotes circuit \(\mathrm {C}\) padded to have size s. We say that circuits \(\mathrm {C}_0,\mathrm {C}_1\) are equivalent, written \(\mathrm {C}_0 \equiv \mathrm {C}_1\), if they agree on all inputs. If \(\mathbf {x}\) is a vector then \(|\mathbf {x}|\) denotes the number of its coordinates and \(\mathbf {x}[i]\) denotes its i-th coordinate. If \({X}\) is a finite set, we let \(x \xleftarrow{\$} {X}\) denote picking an element of \({X}\) uniformly at random and assigning it to x. Algorithms may be randomized unless otherwise indicated. Running time is worst case. “PT” stands for “polynomial-time,” whether for randomized algorithms or deterministic ones. If A is an algorithm, we let \(y \leftarrow A(x_1,\ldots ;r)\) denote running A with random coins r on inputs \(x_1,\ldots \) and assigning the output to y. We let \(y \xleftarrow{\$} A(x_1,\ldots )\) be the result of picking r at random and letting \(y \leftarrow A(x_1,\ldots ;r)\). We let \([A(x_1,\ldots )]\) denote the set of all possible outputs of A when invoked with inputs \(x_1,\ldots \). We say that \(f{:\;\;}{{\mathbb N}}\rightarrow {{\mathbb R}}\) is negligible if for every positive polynomial p, there exists \(\lambda _p \in {{\mathbb N}}\) such that \(f(\lambda ) < 1/p(\lambda )\) for all \(\lambda > \lambda _p\). We use the code-based game-playing framework of [7]. (See Fig. 1 for an example.) By \(\mathrm {G}^\mathcal{A}(\lambda )\) we denote the event that the execution of game \(\mathrm {G}\) with adversary \(\mathcal{A}\) and security parameter \(\lambda \) results in the game returning \(\mathsf {true}\).
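As an illustration of the game-playing convention, the following Python sketch runs a toy game and estimates an adversary's advantage; the names (`GuessGame`, `run_game`) are our own and not part of the framework of [7].

```python
import secrets

class GuessGame:
    """Toy code-based game: the adversary must guess a secret bit.

    Following the convention of [7], the game finally returns a boolean;
    G^A(lambda) is the event that this boolean is True.
    """
    def __init__(self, lam):
        self.lam = lam
        self.b = secrets.randbelow(2)   # challenge bit, hidden from the adversary

    def finalize(self, b_guess):
        return b_guess == self.b        # game returns true iff b' = b

def run_game(game_cls, adversary, lam):
    game = game_cls(lam)
    return game.finalize(adversary(lam, game))

# An adversary that guesses at random wins with probability ~1/2,
# so its advantage 2*Pr[G^A(lambda)] - 1 is close to 0.
def random_adversary(lam, game):
    return secrets.randbelow(2)

trials = 2000
wins = sum(run_game(GuessGame, random_adversary, 128) for _ in range(trials))
advantage = 2 * wins / trials - 1
```

This only fixes conventions (games as objects, boolean outcome, advantage as a rescaled winning probability) used throughout the paper.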

Auxiliary Information Generators. Many of the notions we consider involve the computational unpredictability of some quantity even given “auxiliary information” about it. We abstract this out via our definition of an auxiliary information generator \(\mathsf {X}\). The latter specifies a PT algorithm \(\mathsf {\mathsf {X}.Ev}\) that takes \(1^\lambda \) to return a target \(k\in \{0,1\}^{\mathsf {\mathsf {X}.tl}(\lambda )}\), a payload \(m\in \{0,1\}^{\mathsf {\mathsf {X}.pl}(\lambda )}\) and an auxiliary information a, where \(\mathsf {\mathsf {X}.tl},\mathsf {\mathsf {X}.pl}{:\;\;}{{\mathbb N}}\rightarrow {{\mathbb N}}\) are the target and payload length functions associated to \(\mathsf {X}\), respectively. Consider game \(\mathrm {PRED}\) of Fig. 1 associated to \(\mathsf {X}\) and a predictor adversary \(\mathcal{Q}\). For \(\lambda \in {{\mathbb N}}\) let \(\mathsf {Adv}^{\mathsf {pred}}_{\mathsf {X},\mathcal{Q}}(\lambda )=\Pr [\mathrm {PRED}_{\mathsf {X}}^{\mathcal{Q}}(\lambda )]\). We say that \(\mathsf {X}\) is unpredictable if \(\mathsf {Adv}^{\mathsf {pred}}_{\mathsf {X},\mathcal{Q}}(\cdot )\) is negligible for every PT adversary \(\mathcal{Q}\). We say that \(\mathsf {X}\) is uniform if \(\mathsf {\mathsf {X}.Ev}(1^\lambda )\) picks the target \(k\in \{0,1\}^{\mathsf {\mathsf {X}.tl}(\lambda )}\) and the payload \(m\in \{0,1\}^{\mathsf {\mathsf {X}.pl}(\lambda )}\) uniformly and independently. Note that the auxiliary information \(a\) may depend on both the target \(k\) and the payload \(m\), but unpredictability refers to recovery of the target \(k\) alone.
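The PRED game can be sketched as follows; the particular generator `X_Ev` (uniform target, hash-based auxiliary information) is a toy choice of ours, used only to exercise the interface.

```python
import hashlib
import secrets

def X_Ev(lam):
    """Toy auxiliary information generator (illustrative, not from the paper):
    target k and payload m are uniform; the auxiliary information a is a
    hash of k, which is plausibly k-prediction-precluding."""
    k = secrets.token_bytes(lam // 8)   # target, X.tl(lam) = lam bits
    m = secrets.token_bytes(lam // 8)   # payload, X.pl(lam) = lam bits
    a = hashlib.sha256(k).digest()      # auxiliary information about k
    return k, m, a

def pred_game(X, Q, lam):
    """Game PRED: the predictor Q sees only the auxiliary information a
    and wins iff it outputs the target k (the payload m is never the goal)."""
    k, m, a = X(lam)
    return Q(lam, a) == k

# A predictor that guesses uniformly succeeds with probability 2^-lam.
def guessing_predictor(lam, a):
    return secrets.token_bytes(lam // 8)

trials = 200
wins = sum(pred_game(X_Ev, guessing_predictor, 128) for _ in range(trials))
```

Note how the code mirrors the definition: `a` may depend on both `k` and `m`, but winning means recovering `k` alone.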

Fig. 1. Games defining unpredictability of auxiliary information generator \(\mathsf {X}\), PR-security of pseudorandom generator \(\mathsf {R}\) and iO-security of obfuscator \(\mathsf {Obf}\) relative to circuit sampler \(\mathsf {S}\).

PRGs. A pseudorandom generator \(\mathsf {R}\) [17, 44] specifies a deterministic PT algorithm \(\mathsf {\mathsf {R}.Ev}\) where \(\mathsf {\mathsf {R}.sl}{:\;\;}{{\mathbb N}}\rightarrow {{\mathbb N}}\) is the seed length function of \(\mathsf {R}\) such that \(\mathsf {\mathsf {R}.Ev}(1^\lambda ,\cdot ) {:\;\;}\{0,1\}^{\mathsf {\mathsf {R}.sl}(\lambda )}\rightarrow \{0,1\}^{2\cdot \mathsf {\mathsf {R}.sl}(\lambda )}\) for all \(\lambda \in {{\mathbb N}}\). We say that \(\mathsf {R}\) is PR-secure if the function \(\mathsf {Adv}^{\mathsf {pr}}_{\mathsf {R},\mathcal{R}}(\cdot )\) is negligible for every PT adversary \(\mathcal{R}\), where for \(\lambda \in {{\mathbb N}}\) we let \(\mathsf {Adv}^{\mathsf {pr}}_{\mathsf {R},\mathcal{R}}(\lambda )=2\Pr [\mathrm {PRG}_{\mathsf {R}}^{\mathcal{R}}(\lambda )]-1\) and game \(\mathrm {PRG}\) is specified in Fig. 1.
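The PRG game can be sketched as follows; the SHA-256-based length doubler is a stand-in of ours with no proven PR-security, included only to make the game runnable.

```python
import hashlib
import secrets

def R_Ev(lam, seed):
    """Toy length-doubling generator: R.sl(lam) = lam bits in, 2*lam bits out.
    Illustrative only; hashing the seed gives no proven PR-security."""
    out = b""
    ctr = 0
    while len(out) < 2 * (lam // 8):
        out += hashlib.sha256(bytes([ctr]) + seed).digest()
        ctr += 1
    return out[: 2 * (lam // 8)]

def prg_game(adversary, lam):
    """Game PRG: the adversary must tell R.Ev(1^lam, seed) from a uniform
    string of the same length; the game returns true iff it guesses b."""
    b = secrets.randbelow(2)
    seed = secrets.token_bytes(lam // 8)
    y = R_Ev(lam, seed) if b == 1 else secrets.token_bytes(2 * (lam // 8))
    return adversary(lam, y) == b

# The pr-advantage is 2*Pr[PRG] - 1; blind guessing gives advantage ~0.
trials = 2000
wins = sum(prg_game(lambda lam, y: secrets.randbelow(2), 128)
           for _ in range(trials))
adv = 2 * wins / trials - 1
```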

Obfuscators. An obfuscator is a PT algorithm \(\mathsf {Obf}\) that on input \(1^\lambda \) and a circuit \(\mathrm {C}\) returns a circuit \(\overline{\mathrm {C}}\) such that \(\overline{\mathrm {C}}\equiv \mathrm {C}\). (That is, \(\overline{\mathrm {C}}(x) = \mathrm {C}(x)\) for all x.) We refer to the latter as the correctness condition. We will consider various notions of security for obfuscators, including VGBO and iO.

Indistinguishability Obfuscation. We use the BST [9] definitional framework which parameterizes security via classes of circuit samplers. Let \(\mathsf {Obf}\) be an obfuscator. A sampler in this context is a PT algorithm \(\mathsf {S}\) that on input \(1^\lambda \) returns a triple \((\mathrm {C}_0,\mathrm {C}_1, aux )\) where \(\mathrm {C}_0,\mathrm {C}_1\) are circuits of the same size, number of inputs and number of outputs, and \( aux \) is a string. If \(\mathcal{O}\) is an adversary and \(\lambda \in {{\mathbb N}}\) we let \(\mathsf {Adv}^{\mathsf {io}}_{\mathsf {Obf},\mathsf {S},\mathcal{O}}(\lambda )=2\Pr [\mathrm {IO}_{\mathsf {Obf},\mathsf {S}}^{\mathcal{O}}(\lambda )]-1\) where game \(\mathrm {IO}_{\mathsf {Obf},\mathsf {S}}^{\mathcal{O}}(\lambda )\) is defined in Fig. 1. Now let \(\mathbf {S}\) be a class (set) of circuit samplers. We say that \(\mathsf {Obf}\) is \(\mathbf {S}\)-secure if \(\mathsf {Adv}^{\mathsf {io}}_{\mathsf {Obf},\mathsf {S},\mathcal{O}}(\cdot )\) is negligible for every PT adversary \(\mathcal{O}\) and every circuit sampler \(\mathsf {S}\in \mathbf {S}\). We say that circuit sampler \(\mathsf {S}\) produces equivalent circuits if there exists a negligible function \(\nu \) such that \(\Pr [\mathrm {C}_0\not \equiv \mathrm {C}_1]\le \nu (\lambda )\), the probability over \((\mathrm {C}_0,\mathrm {C}_1, aux )\xleftarrow{\$}\mathsf {S}(1^\lambda )\). Let \(\mathbf {S}^{\mathrm {eq}}\) be the class of all circuit samplers that produce equivalent circuits. We say that \(\mathsf {Obf}\) is an indistinguishability obfuscator if it is \(\mathbf {S}^{\mathrm {eq}}\)-secure [4, 31, 42].
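The information flow of the IO game can be sketched as follows. The sampler, the placeholder "obfuscator" (an opaque wrapper, nothing like a real iO candidate), and the probing adversary are all illustrative inventions of ours.

```python
import secrets

def sampler(lam):
    """Circuit sampler producing two equivalent 'circuits' (as callables)
    plus auxiliary information. Illustrative stand-ins for real circuits."""
    C0 = lambda x: (x * x) % 2          # parity of x^2
    C1 = lambda x: x % 2                # parity of x; same function on ints
    return C0, C1, b"aux"

def obf(lam, C):
    """Placeholder obfuscator: wraps C as an opaque evaluator. A real iO
    construction is far beyond this sketch; only the game's flow is shown."""
    return lambda x: C(x)

def io_game(S, Obf, adversary, lam):
    """Game IO: obfuscate C_b for a hidden bit b; the adversary gets the
    obfuscation and aux, and the game returns true iff it guesses b."""
    C0, C1, aux = S(lam)
    b = secrets.randbelow(2)
    Cbar = Obf(lam, (C0, C1)[b])
    return adversary(lam, Cbar, aux) == b

# Since C0 and C1 compute the same function, input/output probing of Cbar
# cannot beat guessing: here Cbar(3) = 1 in both cases.
def probing_adversary(lam, Cbar, aux):
    return 1 if Cbar(3) == 1 else 0

wins = sum(io_game(sampler, obf, probing_adversary, 128) for _ in range(400))
```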

Function Families. A family of functions \(\mathsf {F}\) specifies the following. PT key generation algorithm \(\mathsf {F.Kg}\) takes \(1^\lambda \) to return a key , where \(\mathsf {F.kl}{:\;\;}{{\mathbb N}}\rightarrow {{\mathbb N}}\) is the key length function associated to \(\mathsf {F}\). Deterministic, PT evaluation algorithm \(\mathsf {F.Ev}\) takes \(1^\lambda \), key and an input \(x\in \{0,1\}^{\mathsf {F.il}(\lambda )}\) to return an output , where \(\mathsf {F.il},\mathsf {F.ol}{:\;\;}{{\mathbb N}}\rightarrow {{\mathbb N}}\) are the input and output length functions associated to \(\mathsf {F}\), respectively. We say that \(\mathsf {F}\) is injective if the function is injective for every \(\lambda \in {{\mathbb N}}\) and every .

UCE Security. Let us recall the Universal Computational Extractor (UCE) framework of BHK [6]. Let \(\mathsf {H}\) be a family of functions. Let \(\mathcal{S}\) be an adversary called the source and \(\mathcal{D}\) an adversary called the distinguisher. We associate to them and \(\mathsf {H}\) the game \(\mathrm {UCE}_{\mathsf {H}}^{\mathcal{S},\mathcal{D}}(\lambda )\) in the left panel of Fig. 2. The source has access to an oracle \(\textsc {HASH}\) and we require that any query \(x\) made to this oracle have length \(\mathsf {H.il}(\lambda )\). When the challenge bit b is 1 (the “real” case) the oracle responds via \(\mathsf {H.Ev}\) under a key that is chosen by the game and not given to the source. When \(b=0\) (the “random” case) it responds as a random oracle. The source then leaks a string L to its accomplice distinguisher. The latter does get the key as input and must now return its guess \(b'\in \{0,1\}\) for b. The game returns \(\mathsf {true}\) iff \(b'=b\), and the uce-advantage of \((\mathcal{S},\mathcal{D})\) is defined for \(\lambda \in {{\mathbb N}}\) via \(\mathsf {Adv}^{\mathsf {uce}}_{\mathsf {H},\mathcal{S}, \mathcal{D}}(\lambda ) = 2\Pr [\mathrm {UCE}_{\mathsf {H}}^{\mathcal{S}, \mathcal{D}}(\lambda )]-1\). If \(\mathbf {S}\) is a class (set) of sources, we say that \(\mathsf {H}\) is \(\mathrm {UCE}[\mathbf {S}]\)-secure if \(\mathsf {Adv}^{\mathsf {uce}}_{\mathsf {H},\mathcal{S}, \mathcal{D}}(\cdot )\) is negligible for all sources \(\mathcal{S}\in \mathbf {S}\) and all PT distinguishers \(\mathcal{D}\).
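The UCE game can be sketched as follows; the HMAC-based family `H_Ev` is a stand-in of ours (not a proven UCE family), and the source and distinguisher shown are trivial examples.

```python
import hashlib
import hmac
import secrets

H_IL, H_OL = 16, 16   # input/output lengths in bytes (illustrative)

def H_Kg(lam):
    return secrets.token_bytes(lam // 8)

def H_Ev(lam, hk, x):
    """Toy function family based on HMAC; a stand-in, not a proven UCE family."""
    return hmac.new(hk, x, hashlib.sha256).digest()[:H_OL]

def uce_game(source, distinguisher, lam):
    """Game UCE: the source queries HASH (real family vs. random oracle,
    depending on hidden bit b) and leaks L; the distinguisher gets hk and L."""
    b = secrets.randbelow(2)
    hk = H_Kg(lam)
    ro = {}                              # lazily-sampled random oracle table
    def HASH(x):
        assert len(x) == H_IL            # queries must have length H.il(lam)
        if b == 1:
            return H_Ev(lam, hk, x)      # "real" case
        if x not in ro:                  # "random" case
            ro[x] = secrets.token_bytes(H_OL)
        return ro[x]
    L = source(lam, HASH)                # the source never sees hk
    return distinguisher(lam, hk, L) == b

# A source that leaks nothing useful gives a guessing distinguisher ~zero
# uce-advantage.
def null_source(lam, HASH):
    HASH(secrets.token_bytes(H_IL))
    return b"leak"

trials = 1000
wins = sum(uce_game(null_source, lambda lam, hk, L: secrets.randbelow(2), 128)
           for _ in range(trials))
adv = 2 * wins / trials - 1
```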

Fig. 2. Games defining UCE security of function family \(\mathsf {H}\), unpredictability of source \(\mathcal{S}\), and the split source \(\mathcal{S}= \mathsf {Splt}[\mathcal{S}_0,\mathcal{S}_1]\) associated to \(\mathcal{S}_0,\mathcal{S}_1\).

It is easy to see that \(\mathrm {UCE}[\mathbf {S}]\)-security is not achievable if \(\mathbf {S}\) is the class of all PT sources [6]. To obtain meaningful notions of security, BHK [6] impose restrictions on the source. A central restriction is unpredictability. A source is unpredictable if it is hard to guess the source’s \(\textsc {HASH}\) queries even given the leakage, in the random case of the UCE game. Formally, let \(\mathcal{S}\) be a source and \(\mathcal{P}\) an adversary called a predictor and consider game \(\mathrm {PRED}_{\mathcal{S}}^{\mathcal{P}}(\lambda )\) in the middle panel of Fig. 2. For \(\lambda \in {{\mathbb N}}\) we let \(\mathsf {Adv}^{\mathsf {pred}}_{\mathcal{S},\mathcal{P}}(\lambda ) = \Pr [\mathrm {PRED}_{\mathcal{S}}^{\mathcal{P}}(\lambda )]\). We say that \(\mathcal{S}\) is computationally unpredictable if \(\mathsf {Adv}^{\mathsf {pred}}_{\mathcal{S},\mathcal{P}}(\cdot )\) is negligible for all PT predictors \(\mathcal{P}\), and let \(\mathbf {S}^{\mathrm {cup}}\) be the class of all PT computationally unpredictable sources. We say that \(\mathcal{S}\) is statistically unpredictable if \(\mathsf {Adv}^{\mathsf {pred}}_{\mathcal{S},\mathcal{P}}(\cdot )\) is negligible for all (not necessarily PT) predictors \(\mathcal{P}\), and let \(\mathbf {S}^{\mathrm {sup}}\subseteq \mathbf {S}^{\mathrm {cup}}\) be the class of all PT statistically unpredictable sources.

BFM [20] show that \(\mathrm {UCE}[\mathbf {S}^{\mathrm {cup}}]\)-security is not achievable assuming that indistinguishability obfuscation is possible. This has led applications to be based either on \(\mathrm {UCE}[\mathbf {S}^{\mathrm {sup}}]\) or on subsets of \(\mathrm {UCE}[\mathbf {S}^{\mathrm {cup}}]\), meaning to impose further restrictions on the source. \(\mathrm {UCE}[\mathbf {S}^{\mathrm {sup}}]\), introduced in [6, 20], seems at this point to be a viable assumption. In order to restrict the computational case, one can consider split sources as defined in BHK [6]. Let \(\mathcal{S}_0,\mathcal{S}_1\) be algorithms, neither of which has access to any oracles. The split source \(\mathcal{S}= \mathsf {Splt}[\mathcal{S}_0,\mathcal{S}_1]\) associated to \(\mathcal{S}_0,\mathcal{S}_1\) is defined in the right panel of Fig. 2. Algorithm \(\mathcal{S}_0\) returns a pair \((L_0,\mathbf {x})\). Here \(\mathbf {x}\) is a vector over \(\{0,1\}^{\mathsf {H.il}(\lambda )}\) all of whose entries are required to be distinct. (If the entries are not required to be distinct, collisions can be used to communicate information between the two components of the source, and the BFM [20] attack continues to apply, as pointed out in [23].) Thus \(\mathcal{S}_0\) creates the oracle queries for the source \(\mathcal{S}\); the latter makes these queries and passes the replies to \(\mathcal{S}_1\) to get the rest of the leakage. In this way, neither \(\mathcal{S}_0\) nor \(\mathcal{S}_1\) has an input-output pair from the oracle, limiting their ability to create leakage useful to the distinguisher. A source \(\mathcal{S}\) is said to belong to the class \(\mathbf {S}^{\mathrm {splt}}\) if there exist PT \(\mathcal{S}_0,\mathcal{S}_1\) such that \(\mathcal{S}= \mathsf {Splt}[\mathcal{S}_0,\mathcal{S}_1]\), meaning \(\mathcal{S}\) is defined as above.
The class of interest is now \(\mathrm {UCE}[\mathbf {S}^{\mathrm {cup}}\cap \mathbf {S}^{\mathrm {splt}}]\), meaning UCE-security for computationally unpredictable, split sources.
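The split-source wrapper can be sketched as follows; the two component algorithms shown are illustrative and, because they leak the queries themselves, the resulting source is predictable and hence lies outside \(\mathbf {S}^{\mathrm {cup}}\). The sketch only demonstrates the \(\mathsf {Splt}\) structure.

```python
import secrets

H_IL = 16  # oracle input length in bytes (illustrative)

def make_split_source(S0, S1):
    """Splt[S0, S1]: neither component sees an input-output pair.
    S0 produces the queries (plus leakage L0) without seeing any answers;
    S1 sees only the answers ys, never the queries xs."""
    def S(lam, HASH):
        L0, xs = S0(lam)
        assert len(set(xs)) == len(xs)   # query entries must be distinct
        ys = [HASH(x) for x in xs]       # the source itself makes the queries
        L1 = S1(lam, ys)
        return (L0, L1)
    return S

# Illustrative components: S0 leaks the queries, S1 leaks the answers,
# separately, as the split structure demands. (Leaking the queries makes
# this source predictable, so it is NOT in S^cup; it only shows the wiring.)
def S0(lam):
    xs = [secrets.token_bytes(H_IL) for _ in range(3)]
    return xs, xs                        # L0 = the queries themselves

def S1(lam, ys):
    return ys                            # L1 = the answers

source = make_split_source(S0, S1)
```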

Another way to restrict a UCE source is by limiting the number of queries it can make. Let \(\mathbf {S}^{q}\) be the class of sources making \(q(\cdot )\) oracle queries. This allows us to consider \(\mathbf {S}^{\mathrm {cup}}\cap \mathbf {S}^{\mathrm {splt}}\cap \mathbf {S}^{1}\), a class of computationally unpredictable split sources that make a single query. BM2 [23] show that \(\mathrm {UCE}[\mathbf {S}^{\mathrm {cup}}\cap \mathbf {S}^{\mathrm {splt}}\cap \mathbf {S}^{1}]\)-security is achievable assuming iO and AIPO.

3 VGBO and the AI-DHI Assumption

BCKP [12, 13] conjecture that existing candidate constructions of iO also achieve VGBO and thus in particular that VGB obfuscation for all circuits is possible. Here we explore the plausibility of this “VGB conjecture.” We show that it implies the failure of Canetti’s AI-DHI assumption. Either this assumption is false or VGBO for all circuits is not possible. (In fact, our result refers to an even weaker VGBO assumption.) That is, the long-standing AI-DHI assumption and VGBO are in contention; at most one of these can exist. We start by defining VGBO and recalling the AI-DHI assumption, and then give our result and its proof. We then suggest a weakening of AI-DHI that we call AI-DHI2 that is parameterized by a group generator. We show that our attack on AI-DHI extends to rule out AI-DHI2 for group generators satisfying a property we call verifiability. However, some group generators do not appear to be verifiable, making AI-DHI2 a potential alternative to AI-DHI.

Fig. 3. Games defining VGB security of obfuscator \(\mathsf {Obf}\).

VGBO. Let \(\mathsf {Obf}\) be an obfuscator as defined in Sect. 2. We define what it means for it to be a VGB obfuscator. We will use a weak variant of the notion used in some of the literature [10, 12], which strengthens our results since they are negative relations with starting point VGBO.

A sampler \(\mathsf {Smp}\) in this context is an algorithm that takes \(1^\lambda \) to return a circuit \(\mathrm {C}\). Let q be a polynomial, \(\mathcal{A}\) an adversary and \(\mathcal{S}\) a (not necessarily PT) algorithm called a simulator. For \(\lambda \in {{\mathbb N}}\) let

$$\begin{aligned} \mathsf {Adv}^{\mathsf {vgb}}_{\mathsf {Obf},\mathsf {Smp},q,\mathcal{A},\mathcal{S}}(\lambda ) = \left| {\Pr \left[ \,{\mathrm {VGB}1_{\mathsf {Obf}, \mathsf {Smp}}^{\mathcal{A}}(\lambda )}\,\right] } - {\Pr \left[ \,{\mathrm {VGB}0_{\mathsf {Smp}, q}^{\mathcal{S}}(\lambda )}\,\right] }\right| \end{aligned}$$

where the games are in Fig. 3. Let \(\mathsf {SAMP}\) be a set of samplers. We say that \(\mathsf {Obf}\) is a VGB obfuscator for \(\mathsf {SAMP}\) if for every PT adversary \(\mathcal{A}\) there exists a (not necessarily PT) simulator \(\mathcal{S}\) and a polynomial q such that \(\mathsf {Adv}^{\mathsf {vgb}}_{\mathsf {Obf},\mathsf {Smp},q,\mathcal{A},\mathcal{S}}(\cdot )\) is negligible for all \(\mathsf {Smp}\in \mathsf {SAMP}\).
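The two games can be sketched as follows. The point-function sampler and the identity-wrapper "obfuscator" are illustrative placeholders of ours (no real VGB obfuscator is implemented); the sketch shows the real/ideal split and the simulator's query budget.

```python
import secrets

def vgb1(Smp, Obf, A, lam):
    """Real game VGB1: the adversary gets the obfuscated circuit; the game
    returns true iff the adversary outputs 1."""
    C = Smp(lam)
    Cbar = Obf(lam, C)
    return A(lam, Cbar) == 1

def vgb0(Smp, S, q, lam):
    """Ideal game VGB0: the (possibly unbounded) simulator gets only an
    oracle for C, limited to q(lam) queries."""
    C = Smp(lam)
    budget = [q(lam)]
    def oracle(x):
        assert budget[0] > 0, "query budget exhausted"
        budget[0] -= 1
        return C(x)
    return S(lam, oracle) == 1

# Illustrative sampler: a point function with a random target. Obf is a
# placeholder identity wrapper, standing in for a real obfuscator.
def Smp(lam):
    k = secrets.randbelow(2 ** lam)
    return lambda x: 1 if x == k else 0

Obf = lambda lam, C: (lambda x: C(x))

# An adversary/simulator pair that both always output 1 has vgb-advantage 0.
r1 = vgb1(Smp, Obf, lambda lam, Cbar: 1, 32)
r0 = vgb0(Smp, lambda lam, oracle: 1, lambda lam: 4, 32)
```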

We note that [12] use a VGB variant stronger than the above, where the advantage measures the difference in probabilities of \(\mathcal{A}\) and \(\mathcal{S}\) guessing a predicate \(\pi (\mathrm {C})\), rather than just the probabilities of outputting 1, which is all we need here. Also note that our VGB definition is vacuously achievable whenever \(|\mathsf {SAMP}|=1\), since \(\mathcal{S}\) can simulate game \(\mathrm {VGB}1_{\mathsf {Obf},\mathsf {Smp}}^{\mathcal{A}}(\lambda )\) for any fixed choice of \(\mathcal{A}\) and \(\mathsf {Smp}\). Our applications however use a \(\mathsf {SAMP}\) of size 2.

The AI-DHI Assumption. Let \(\mathcal{G}= \{\mathbb {G}_{\lambda }:\lambda \in {{\mathbb N}}\}\) be an ensemble of groups where for every \(\lambda \in {{\mathbb N}}\) the order \(p(\lambda )\) of group \(\mathbb {G}_{\lambda }\) is a prime in the range \(2^{\lambda -1} < p(\lambda ) < 2^{\lambda }\). We assume that relevant operations are computable in time polynomial in \(\lambda \), including computing \(p(\cdot )\), testing membership in \(\mathbb {G}_{\lambda }\) and performing operations in \(\mathbb {G}_{\lambda }\). By \(\mathbb {G}_{\lambda }^*\) we denote the non-identity members of the group, which is the set of generators since the group has prime order. An auxiliary information generator \(\mathsf {X}\) for \(\mathcal{G}\) is an auxiliary information generator as per Sect. 2 with the additional property that the target \(k\) returned by \(\mathsf {\mathsf {X}.Ev}(1^\lambda )\) is in \({{\mathbb Z}}_{p(\lambda )}\) (i.e. is an exponent) and the payload \(m\) is \(\varepsilon \) (i.e. is effectively absent).

Now consider game \(\mathrm {AIDHI}\) of Fig. 4 associated to \(\mathcal{G}, \mathsf {X}\) and an adversary \(\mathcal{A}\). For \(\lambda \in {{\mathbb N}}\) let \(\mathsf {Adv}^{\mathsf {aidhi}}_{\mathcal{G},\mathsf {X},\mathcal{A}}(\lambda )=2\Pr [\mathrm {AIDHI}_{\mathcal{G},\mathsf {X}}^{\mathcal{A}}(\lambda )]-1\). We say that \(\mathcal{G}\) is AI-DHI-secure if \(\mathsf {Adv}^{\mathsf {aidhi}}_{\mathcal{G},\mathsf {X},\mathcal{A}}(\cdot )\) is negligible for every unpredictable \(\mathsf {X}\) for \(\mathcal{G}\) and every PT adversary \(\mathcal{A}\). The AI-DHI assumption [15, 25] is that there exists a family of groups \(\mathcal{G}\) that is AI-DHI secure.

Fig. 4. Games defining the AI-DHI assumption and the AI-DHI2 assumption.

\(\lnot \) VGB \(\vee \lnot \) AI-DHI. The following says that if VGB obfuscation is possible then the AI-DHI assumption is false: there exists no family of groups \(\mathcal{G}\) that is AI-DHI secure. Our theorem assumes only a very weak form of VGB obfuscation, for a class with two samplers (given in the proof).

Theorem 1

Let \(\mathcal{G}\) be a family of groups. Then there is a pair \(\mathsf {Smp},\mathsf {Smp}_0\) of PT samplers (defined in the proof) such that if there exists a VGB-secure obfuscator for the class \(\mathsf {SAMP}=\{\mathsf {Smp},\mathsf {Smp}_0\}\), then \(\mathcal{G}\) is not AI-DHI-secure.

Proof

(Theorem 1 ). Let \(\mathsf {Obf}\) be the assumed obfuscator. Let \(\mathsf {X}\) be the auxiliary information generator for \(\mathcal{G}\) defined as follows:

figure a

The auxiliary information \(a=\overline{\mathrm {C}}\) produced by \(\mathsf {X}\) is an obfuscation of the circuit \(\mathrm {C}_{1^\lambda ,k}\) shown on the right above. The circuit has \(1^\lambda \) and the target value \(k\) embedded inside. The circuit takes inputs \((g,K)\) and checks that the first is a group element different from the identity (and thus a generator, since the group has prime order) and that the second is a group element. It then returns 1 if \(g^k\) equals \(K\), and 0 otherwise.
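As a concrete illustration, the circuit can be sketched as follows, instantiating \(\mathbb {G}_{\lambda }\) as the order-q subgroup of squares modulo a safe prime \(p=2q+1\). The instantiation and all names below are ours, purely for illustration; they are not part of the construction.

```python
# Toy sketch of the circuit C_{1^lambda, k} from the proof of Theorem 1.
# Illustrative instantiation: the group is the order-q subgroup of squares
# modulo a safe prime p = 2q + 1, so it has prime order q.

def make_circuit(p, q, k):
    """Return a function behaving like C_{1^lambda, k} for this toy group."""
    def in_group(x):
        # x lies in the order-q subgroup iff x^q = 1 (mod p)
        return 0 < x < p and pow(x, q, p) == 1

    def circuit(g, K):
        # g must be a non-identity group element (hence a generator,
        # since the group has prime order); K must be a group element
        if not in_group(g) or g == 1 or not in_group(K):
            return 0
        return 1 if pow(g, k, p) == K else 0

    return circuit
```

For example, with \(p=23\), \(q=11\) and embedded exponent \(k=3\), the circuit accepts exactly the pairs \((g, g^3 \bmod 23)\) for generators g.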

We first construct a PT adversary \(\mathcal{A}^*\) such that \(\mathsf {Adv}^{\mathsf {aidhi}}_{\mathcal{G},\mathsf {X},\mathcal{A}^*}(\cdot )\) is non-negligible. On input \(1^\lambda , g, K_b, \overline{\mathrm {C}}\), it simply returns \(\overline{\mathrm {C}}(g, K_b)\). That is, it runs the obfuscated circuit \(\overline{\mathrm {C}}\) on g and \(K_b\) and returns its outcome. If the challenge bit b in game \(\mathrm {AIDHI}_{\mathcal{G},\mathsf {X}}^{\mathcal{A}^*}(\lambda )\) is 1 then the adversary always outputs \(b' = 1\). Otherwise, the adversary outputs \(b'=1\) with probability \(1/p(\lambda )\). We have \(\mathsf {Adv}^{\mathsf {aidhi}}_{\mathcal{G},\mathsf {X},\mathcal{A}^*}(\lambda ) = 1 - 1/{p(\lambda )} \ge 1 - 2^{1-\lambda }\), which is not negligible.
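Spelling out the advantage computation: conditioning on the challenge bit, \(\mathcal{A}^*\) guesses correctly with probability 1 when \(b=1\) and with probability \(1-1/p(\lambda )\) when \(b=0\), so

```latex
\begin{aligned}
\mathsf{Adv}^{\mathsf{aidhi}}_{\mathcal{G},\mathsf{X},\mathcal{A}^*}(\lambda)
 &= 2\Pr\left[\mathrm{AIDHI}_{\mathcal{G},\mathsf{X}}^{\mathcal{A}^*}(\lambda)\right]-1
  = 2\left(\frac{1}{2}\cdot 1+\frac{1}{2}\left(1-\frac{1}{p(\lambda)}\right)\right)-1 \\
 &= 1-\frac{1}{p(\lambda)} \;\ge\; 1-2^{1-\lambda},
\end{aligned}
```

where the final inequality uses \(p(\lambda ) > 2^{\lambda -1}\).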

We now show that the constructed auxiliary information generator \(\mathsf {X}\) is unpredictable. In particular, for any PT adversary \(\mathcal{Q}\) we construct a PT adversary \(\mathcal{A}\) and samplers \(\mathsf {Smp}\), \(\mathsf {Smp}_0\) such that for all simulators \(\mathcal{S}\) and all polynomials q,

$$\begin{aligned} \mathsf {Adv}^{\mathsf {pred}}_{\mathsf {X},\mathcal{Q}}(\lambda ) \le \mathsf {Adv}^{\mathsf {vgb}}_{\mathsf {Obf},\mathsf {Smp},q,\mathcal{A},\mathcal{S}}(\lambda ) + \mathsf {Adv}^{\mathsf {vgb}}_{\mathsf {Obf},\mathsf {Smp}_0,q,\mathcal{A},\mathcal{S}}(\lambda ) + \frac{q(\lambda )}{2^{\lambda - 1}} \;. \end{aligned}$$
(1)

Concretely, the adversary \(\mathcal{A}\) and the samplers \(\mathsf {Smp},\mathsf {Smp}_0\) operate as follows:

figure b

In \(\mathsf {Smp}_0\), the circuit \(\mathrm {C}_0\) takes as input a pair of group elements \(g, g'\) from \(\mathbb {G}_{\lambda }\) and always returns 0.

To show Eq. (1), we first note that by construction

$$\begin{aligned} \mathsf {Adv}^{\mathsf {pred}}_{\mathsf {X},\mathcal{Q}}(\lambda ) = {\Pr \left[ \,{\mathrm {VGB}1_{\mathsf {Obf}, \mathsf {Smp}}^{\mathcal{A}}(\lambda )}\,\right] } \;, \end{aligned}$$
(2)

because an execution of \(\mathrm {PRED}^{\mathcal{Q}}_{\mathsf {X}}(\lambda )\) results in the same output distribution as in \(\mathrm {VGB}1_{\mathsf {Obf}, \mathsf {Smp}}^{\mathcal{A}}(\lambda )\). The only difference is that in the latter, the check of whether the guess is correct is done via the obfuscated circuit \(\overline{\mathrm {C}}\). Now, for all simulators \(\mathcal{S}\) and polynomials q, we can rewrite Eq. (2) as

$$\begin{aligned} \begin{aligned} \mathsf {Adv}^{\mathsf {pred}}_{\mathsf {X},\mathcal{Q}}(\lambda )&= {\Pr \left[ \,{\mathrm {VGB}1_{\mathsf {Obf}, \mathsf {Smp}}^{\mathcal{A}}(\lambda )}\,\right] } - {\Pr \left[ \,{\mathrm {VGB}0_{ \mathsf {Smp},q}^{\mathcal{S}}(\lambda )}\,\right] } \\&\quad + {\Pr \left[ \,{\mathrm {VGB}0_{ \mathsf {Smp},q}^{\mathcal{S}}(\lambda )}\,\right] } - {\Pr \left[ \,{\mathrm {VGB}0_{ \mathsf {Smp}_0,q}^{\mathcal{S}}(\lambda )}\,\right] } \\&\quad + {\Pr \left[ \,{\mathrm {VGB}0_{ \mathsf {Smp}_0,q}^{\mathcal{S}}(\lambda )}\,\right] } -{\Pr \left[ \,{\mathrm {VGB}1_{\mathsf {Obf}, \mathsf {Smp}_0}^{\mathcal{A}}(\lambda )}\,\right] } \\&\quad + {\Pr \left[ \,{\mathrm {VGB}1_{\mathsf {Obf}, \mathsf {Smp}_0}^{\mathcal{A}}(\lambda )}\,\right] } \;. \end{aligned} \end{aligned}$$

To upper bound \(\mathsf {Adv}^{\mathsf {pred}}_{\mathsf {X},\mathcal{Q}}(\lambda )\), we first note that

$$\begin{aligned} {\Pr \left[ \,{\mathrm {VGB}1_{\mathsf {Obf}, \mathsf {Smp}}^{\mathcal{A}}(\lambda )}\,\right] } - {\Pr \left[ \,{\mathrm {VGB}0_{ \mathsf {Smp},q}^{\mathcal{S}}(\lambda )}\,\right] } \le \mathsf {Adv}^{\mathsf {vgb}}_{\mathsf {Obf},\mathsf {Smp},q,\mathcal{A},\mathcal{S}}(\lambda ) \end{aligned}$$

and

$$\begin{aligned} {\Pr \left[ \,{\mathrm {VGB}0_{ \mathsf {Smp}_0,q}^{\mathcal{S}}(\lambda )}\,\right] } -{\Pr \left[ \,{\mathrm {VGB}1_{\mathsf {Obf}, \mathsf {Smp}_0}^{\mathcal{A}}(\lambda )}\,\right] } \le \mathsf {Adv}^{\mathsf {vgb}}_{\mathsf {Obf},\mathsf {Smp}_0,q,\mathcal{A},\mathcal{S}}(\lambda ) \;. \end{aligned}$$

Moreover, we have \({\Pr \left[ \,{\mathrm {VGB}1_{\mathsf {Obf}, \mathsf {Smp}_0}^{\mathcal{A}}(\lambda )}\,\right] } = 0\) by construction. Namely, adversary \(\mathcal{A}\) never outputs 1 in game \(\mathrm {VGB}1_{\mathsf {Obf}, \mathsf {Smp}_0}^{\mathcal{A}}(\lambda )\), since it is given an obfuscation of the constant zero circuit \(\mathrm {C}_0\).

We are left with upper bounding the difference between \({\Pr \left[ \,{\mathrm {VGB}0_{ \mathsf {Smp},q}^{\mathcal{S}}(\lambda )}\,\right] }\) and \({\Pr \left[ \,{\mathrm {VGB}0_{ \mathsf {Smp}_0,q}^{\mathcal{S}}(\lambda )}\,\right] }\). Note that \(\mathcal{S}\) is allowed to issue at most \(q(\lambda )\) queries to the given circuit, which is either \(\mathrm {C}_{1^\lambda , k}\) for a random \(k\) or \(\mathrm {C}_0\). Denote by \(\mathsf {Hit}\) the event that \(\mathcal{S}\) makes a query \((g,K)\) in \(\mathrm {VGB}0_{ \mathsf {Smp},q}^{\mathcal{S}}(\lambda )\) such that \(g^{k} = K\). Then, by a standard argument,

$$\begin{aligned} {\Pr \left[ \,{\mathrm {VGB}0_{ \mathsf {Smp},q}^{\mathcal{S}}(\lambda )}\,\right] } - {\Pr \left[ \,{\mathrm {VGB}0_{ \mathsf {Smp}_0,q}^{\mathcal{S}}(\lambda )}\,\right] } \le {\Pr \left[ \,{\mathsf {Hit}}\,\right] } \;. \end{aligned}$$

To compute \({\Pr \left[ \,{\mathsf {Hit}}\,\right] }\), we move from \(\mathrm {VGB}0_{ \mathsf {Smp},q}^{\mathcal{S}}(\lambda )\) to the simpler \(\mathrm {VGB}0_{ \mathsf {Smp}_0,q}^{\mathcal{S}}(\lambda )\), where all of \(\mathcal{S}\)’s queries are answered with 0. We extend the latter game to sample a random key \(k\), and we define \(\mathsf {Hit}'\) as the event in this game that for one of \(\mathcal{S}\)’s queries \((g,K)\) we have \(g^{k} = K\). It is not hard to see that \({\Pr \left[ \,{\mathsf {Hit}'}\,\right] }\) and \({\Pr \left[ \,{\mathsf {Hit}}\,\right] }\) are equal, as the two games are identical until such a query occurs. Since there are at most \(q(\lambda )\) queries, and for each query exactly one \(k\) can produce the answer 1, the union bound yields

$$\begin{aligned} {\Pr \left[ \,{\mathsf {Hit}}\,\right] } = {\Pr \left[ \,{\mathsf {Hit}'}\,\right] } \le \frac{q(\lambda )}{p(\lambda )} \le \frac{q(\lambda )}{2^{\lambda - 1}} \;, \end{aligned}$$

which concludes the proof.    \(\square \)

The AI-DHI2 Assumption. We now suggest a relaxation AI-DHI2 of the AI-DHI assumption given above. The idea is that for each value of \(\lambda \) there is not a single group but many possible groups. Formally, a group generator is a PT algorithm \(\mathsf {GG}\) that on input \(1^\lambda \) returns a description \(\langle {\mathbb {G}}\rangle \) of a cyclic group \(\mathbb {G}\) whose order \(|\mathbb {G}|\) is in the range \(2^{\lambda -1} < |\mathbb {G}| < 2^{\lambda }\). We assume that given \(1^\lambda ,\langle {\mathbb {G}}\rangle \), relevant operations are computable in time polynomial in \(\lambda \), including performing group operations in \(\mathbb {G}\) and picking at random from \(\mathbb {G}\) and from the set \(\mathrm {Gen}(\mathbb {G})\) of generators of \(\mathbb {G}\). An auxiliary information generator \(\mathsf {X}\) for \(\mathsf {GG}\) is an auxiliary information generator as per Sect. 2 with the additional property that the target \(k\) returned by \(\mathsf {\mathsf {X}.Ev}(1^\lambda )\) is in \({{\mathbb Z}}_{2^{\lambda -1}}\) (this makes it a valid exponent for any group \(\mathbb {G}\) such that \(\langle {\mathbb {G}}\rangle \in [\mathsf {GG}(1^\lambda )]\)) and the payload \(m\) is \(\varepsilon \) (i.e. is effectively absent).

Now consider game \(\mathrm {AIDHI2}\) of Fig. 4 associated to \(\mathsf {GG}, \mathsf {X}\) and an adversary \(\mathcal{A}\). For \(\lambda \in {{\mathbb N}}\) let \(\mathsf {Adv}^{\mathsf {aidhi2}}_{\mathsf {GG},\mathsf {X},\mathcal{A}}(\lambda )=2\Pr [\mathrm {AIDHI2}_{\mathsf {GG},\mathsf {X}}^{\mathcal{A}}(\lambda )]-1\). We say that \(\mathsf {GG}\) is AI-DHI2-secure if \(\mathsf {Adv}^{\mathsf {aidhi2}}_{\mathsf {GG},\mathsf {X},\mathcal{A}}(\cdot )\) is negligible for every unpredictable \(\mathsf {X}\) for \(\mathsf {GG}\) and every PT adversary \(\mathcal{A}\). The (new) AI-DHI2 assumption is that there exists a group generator \(\mathsf {GG}\) which is AI-DHI2 secure.

BP [15] give a simple construction of AIPO from AI-DHI. It is easy to extend this to use AI-DHI2.

A verifier for group generator \(\mathsf {GG}\) is a deterministic, PT algorithm \(\mathsf {GG}.\mathsf {Vf}\) that can check whether a given string d is a valid description of a group generated by the generator \(\mathsf {GG}\). Formally, \(\mathsf {GG}.\mathsf {Vf}\) on input \(1^\lambda ,d\) returns \(\mathsf {true}\) if \(d\in [\mathsf {GG}(1^\lambda )]\) and \(\mathsf {false}\) otherwise, for all \(d\in \{0,1\}^*\). We say that \(\mathsf {GG}\) is verifiable if it has a verifier and additionally, in time polynomial in \(1^\lambda ,\langle {\mathbb {G}}\rangle \), where \(\langle {\mathbb {G}}\rangle \in [\mathsf {GG}(1^\lambda )]\), one can test membership in \(\mathbb {G}\) and in the set \(\mathrm {Gen}(\mathbb {G})\) of generators of \(\mathbb {G}\). The following extends Theorem 1 to say that if VGBO is possible then no verifiable group generator is AI-DHI2 secure.

Theorem 2

Let \(\mathsf {GG}\) be a verifiable group generator. Then there is a pair \(\mathsf {Smp}\), \(\mathsf {Smp}_0\) of PT samplers such that if there exists a VGB-secure obfuscator for the class \(\mathsf {SAMP}=\{\mathsf {Smp},\mathsf {Smp}_0\}\), then \(\mathsf {GG}\) is not AI-DHI2-secure.

We omit a full proof, as it is very similar to that of Theorem 1. We only note that to adapt the proof, we require \(\mathsf {\mathsf {X}.Ev}(1^\lambda )\) to output a random \(k\) in \({{\mathbb Z}}_{2^{\lambda - 1}}\) together with the obfuscation of the following circuit \(\mathrm {C}_{1^\lambda ,k}\). The circuit \(\mathrm {C}_{1^\lambda ,k}\) takes as input a string d expected to be a group description, together with two strings g and K. It first runs \(\mathsf {GG}.\mathsf {Vf}\) on input \(1^\lambda ,d\) to check whether \(d\in [\mathsf {GG}(1^\lambda )]\), returning 0 if the check fails. If the check succeeds, so that we can write \(d=\langle {\mathbb {G}}\rangle \), it further checks that \(g\in \mathrm {Gen}(\mathbb {G})\) and \(K\in \mathbb {G}\), returning 0 if this fails. Finally the circuit returns 1 if and only if \(g^{k} = K\) in the group \(\mathbb {G}\). The crucial point is that for every valid input \((d,g,K)\), there is at most one \(k\in {{\mathbb Z}}_{2^{\lambda - 1}}\) which satisfies \(g^{k} = K\) in the group described by d. This uses the assumption that \(\mathbb {G}\) is cyclic.

Many group generators produce cyclic groups and are verifiable. For example, consider a generator \(\mathsf {GG}\) that on input \(1^\lambda \) returns a description of \(\mathbb {G}={{\mathbb Z}}_p^*\) for a safe prime \(p=2q+1\). (That is, q is also a prime.) The verifier can extract \(p,q\) from \(\langle {\mathbb {G}}\rangle \) and check their primality in PT. For such generators, we may prefer not to assume AI-DHI2-security, due to Theorem 2. However, there are group generators that do not appear to be verifiable, and to these Theorem 2 thus does not apply. Note that this does not mean VGB obfuscation fails to be in contention with AI-DHI2 security for these group generators; it means only that our current proof method may not work. Still, at this point, the AI-DHI2 assumption, which says only that there is some group generator that is AI-DHI2-secure, seems plausible.
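For this safe-prime example, the verifier is easy to sketch. The following illustration is ours, with a description taken to be simply the prime p itself; it uses deterministic Miller-Rabin, which is exact for moduli below \(3.3\cdot 10^{24}\).

```python
# Sketch (our illustration) of a verifier for the safe-prime group
# generator: a description is accepted iff it encodes a prime p with
# (p - 1)/2 also prime, so that it describes Z_p^* for a safe prime p.

def is_prime(n):
    """Deterministic Miller-Rabin, exact for n < 3.3 * 10**24."""
    if n < 2:
        return False
    for sp in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % sp == 0:
            return n == sp
    d, s = n - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1
    for a in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a witnesses compositeness of n
    return True

def verify_group_description(p):
    # accept iff p is a safe prime: both p and (p - 1)/2 are prime
    return is_prime(p) and is_prime((p - 1) // 2)
```

For instance, 23 is accepted (11 is prime) while 29 is rejected (14 is composite).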

Discussion. As we indicated, one of the main applications of AI-DHI was AIPO [15], and furthermore this connection is very direct. If VGB is in contention with AI-DHI, it is thus natural to ask whether it is also in contention with AIPO. We do not know whether or not this is true. One can also ask whether VGB is in contention with other, particular AIPO constructions, in particular the one of BP [15] based on the construction of Wee [43]. Again, we do not know the answer. We note that alternative constructions of AIPO and other forms of point-function obfuscation are provided in [8].

4 KM-Leakage Resilient Encryption

We refer to a symmetric encryption scheme as K-leakage-resilient if it retains security in the presence of any leakage about the key that leaves the key computationally unpredictable [30]. Such schemes have been designed in [26, 30]. Here, we extend the model by allowing the leakage to depend not just on the key but also on the message, still leaving the key computationally unpredictable. The extension seems innocuous, since the indistinguishability-style formalizations used here already capture the adversary having some information about the message. But Theorem 3 shows that KM-leakage-resilience is in contention with iO. The interpretation is that KM-leakage resilience is not achievable.

Theorem 3 is of direct interest with regard to understanding what is and is not achievable in leakage-resilient cryptography. But for us its main importance will be as a tool to rule out UCE for computationally unpredictable split sources assuming iO in Sect. 5.

We use standard definitions of indistinguishability obfuscation [2, 4, 18, 31, 42] and pseudorandom generators [17, 44], as recalled in Sect. 2. We now start by formalizing KM-leakage resilience.

KM-Leakage Resilient Encryption. Let a symmetric encryption scheme \(\mathsf {SE}\) specify the following. PT encryption algorithm \(\mathsf {\mathsf {SE}.Enc}\) takes \(1^\lambda \), a key \(k\in \{0,1\}^{\mathsf {\mathsf {SE}.kl}(\lambda )}\) and a message \(m\in \{0,1\}^{\mathsf {\mathsf {SE}.ml}(\lambda )}\) to return a ciphertext c, where \(\mathsf {\mathsf {SE}.kl},\mathsf {\mathsf {SE}.ml}{:\;\;}{{\mathbb N}}\rightarrow {{\mathbb N}}\) are the key length and message length functions of \(\mathsf {SE}\), respectively. Deterministic PT decryption algorithm \(\mathsf {\mathsf {SE}.Dec}\) takes \(1^\lambda ,k,c\) to return a plaintext \(m\in \{0,1\}^{\mathsf {\mathsf {SE}.ml}(\lambda )}\). Note that there is a key length but no prescribed key-generation algorithm.

Fig. 5. Games defining \(\mathsf {X}\)-KM-leakage resilience of symmetric encryption scheme \(\mathsf {SE}\) and decryption correctness of symmetric encryption scheme \(\mathsf {SE}\).

For security, let \(\mathsf {X}\) be an auxiliary information generator with \(\mathsf {\mathsf {X}.tl}=\mathsf {\mathsf {SE}.kl}\) and \(\mathsf {\mathsf {X}.pl}=\mathsf {\mathsf {SE}.ml}\). Consider game \(\mathrm {IND}_{\mathsf {SE},\mathsf {X}}^{\mathcal{A}}(\lambda )\) of Fig. 5 associated to \(\mathsf {SE}, \mathsf {X}\) and adversary \(\mathcal{A}\). The message \(m_0\) is picked uniformly at random. The adversary \(\mathcal{A}\) must determine which message has been encrypted, given not just the ciphertext but auxiliary information \(a\) on the key and message \(m_1\). For \(\lambda \in {{\mathbb N}}\) we let \(\mathsf {Adv}^{\mathsf {ind}}_{\mathsf {SE},\mathsf {X},\mathcal{A}}(\lambda )= 2\Pr [\mathrm {IND}_{\mathsf {SE},\mathsf {X}}^{\mathcal{A}}(\lambda )]-1\). We say that \(\mathsf {SE}\) is \(\mathsf {X}\)-KM-leakage resilient if the function \(\mathsf {Adv}^{\mathsf {ind}}_{\mathsf {SE},\mathsf {X},\mathcal{A}}(\cdot )\) is negligible for all PT adversaries \(\mathcal{A}\). This is of course not achievable if \(a\) allows the adversary to compute \(k\), so we restrict attention to unpredictable \(\mathsf {X}\). Furthermore, weakening the definition, we restrict attention to uniform \(\mathsf {X}\), meaning \(k\) and \(m_1\) are uniformly and independently distributed. Thus we say that \(\mathsf {SE}\) is KM-leakage-resilient if it is \(\mathsf {X}\)-KM-leakage resilient for all unpredictable, uniform \(\mathsf {X}\).
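The game just described can be rendered as a short sketch. This is our own illustration of the prose (the formal game is the one in Fig. 5), with stand-ins expected for \(\mathsf {X}\), encryption and the adversary:

```python
import random

# Our sketch of one run of game IND_{SE,X}^A, following the prose
# description above; X, enc and A are caller-supplied stand-ins.

def ind_game(X, enc, A, ml, rng):
    k, m1, a = X()                     # X leaks a, which may depend on (k, m1)
    m0 = "".join(rng.choice("01") for _ in range(ml))  # uniform message m0
    b = rng.choice([0, 1])             # challenge bit
    c = enc(k, m1 if b == 1 else m0)   # encrypt the selected message
    return A(c, a) == b                # adversary wins if it guesses b
```

\(\mathsf {X}\)-KM-leakage resilience asks that no PT adversary win this game with probability non-negligibly better than 1/2.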

The above requirement is strong in that security is required in the presence of (unpredictable) leakage on the key and first message. But in other ways it has been made weak, because this strengthens our negative results. Namely, we require security only on random messages, not chosen ones, with the key uniformly distributed, and the key and the two messages all independently distributed. Furthermore, in contrast to a typical indistinguishability definition, the adversary does not get the messages as input.

The standard correctness condition would ask that \(\mathsf {\mathsf {SE}.Dec}(1^\lambda ,k, \mathsf {\mathsf {SE}.Enc}(1^\lambda ,k,m))=m\) for all \(k\in \{0,1\}^{\mathsf {\mathsf {SE}.kl}(\lambda )}\), all \(m\in \{0,1\}^{\mathsf {\mathsf {SE}.ml}(\lambda )}\) and all \(\lambda \in {{\mathbb N}}\). We call this perfect correctness. We formulate and use a weaker correctness condition, because we can show unachievability even under it, and because the weakening is crucial to our applications, which build KM-leakage-resilient encryption schemes to obtain further impossibility results. Specifically, we require correctness only for random messages and random keys, and only with non-negligible probability. Formally, consider game \(\mathrm {DEC}_{\mathsf {SE}}(\lambda )\) of Fig. 5 associated to \(\mathsf {SE}\), and for \(\lambda \in {{\mathbb N}}\) let \(\mathsf {Adv}^{\mathsf {dec}}_{\mathsf {SE}}(\lambda )=\Pr [\mathrm {DEC}_{\mathsf {SE}}(\lambda )]\) be the decryption correctness function of \(\mathsf {SE}\). We require that \(\mathsf {Adv}^{\mathsf {dec}}_{\mathsf {SE}}(\cdot )\) be non-negligible.
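The weak correctness requirement reads as a simple experiment: sample a fresh key and message, encrypt, decrypt, and check equality. A Monte-Carlo sketch (ours; enc and dec stand in for \(\mathsf {\mathsf {SE}.Enc}\) and \(\mathsf {\mathsf {SE}.Dec}\) on bit strings) approximates \(\mathsf {Adv}^{\mathsf {dec}}_{\mathsf {SE}}\):

```python
import random

# Monte-Carlo estimate of the decryption correctness function Adv^dec_SE:
# the probability, over a uniform key and message, that decryption
# recovers the encrypted message (our sketch, not part of the paper).

def dec_advantage(enc, dec, kl, ml, trials=1000, seed=1):
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        k = "".join(rng.choice("01") for _ in range(kl))
        m = "".join(rng.choice("01") for _ in range(ml))
        if dec(k, enc(k, m)) == m:     # game DEC_SE succeeds on this run
            wins += 1
    return wins / trials
```

Weak correctness asks only that this quantity be non-negligible, in contrast to perfect correctness, which demands it equal 1 for every key and message.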

figure c

The following says that KM-leakage-resilient symmetric encryption is not achievable if iO and PRGs (which can be obtained from one-way functions [38]) exist:

Theorem 3

Let \(\mathsf {SE}\) be a symmetric encryption scheme. Let \(\mathsf {Obf}\) be an indistinguishability obfuscator. Let \(\mathsf {R}\) be a PR-secure PRG with \(\mathsf {\mathsf {R}.sl}=\mathsf {\mathsf {SE}.ml}\). Assume that \(2^{-\mathsf {\mathsf {SE}.kl}(\lambda )}\) and \(2^{-\mathsf {\mathsf {R}.sl}(\lambda )}\) are negligible. Then there exists a uniform auxiliary information generator \(\mathsf {X}\) such that the following holds: (1) \(\mathsf {X}\) is unpredictable, but (2) \(\mathsf {SE}\) is not \(\mathsf {X}\)-KM-leakage resilient.

The proof is a minor adaptation of the proof of BM [22] ruling out MB-AIPO under iO. Following BM [22], the idea is that the auxiliary information generator \(\mathsf {X}\) picks a key \(k\) and message \(m\) uniformly and independently at random and lets \(\mathrm {C}\) be the circuit that embeds k and the result y of the PRG on m. On input a ciphertext c, circuit \(\mathrm {C}\) decrypts it under k and then checks that the PRG applied to the result equals y. The auxiliary information is an obfuscation \(\overline{\mathrm {C}}\) of \(\mathrm {C}\). The attack showing claim (2) of Theorem 3 is straightforward but its analysis is more work and exploits the security of the PRG. Next one shows that iO-security of the obfuscator coupled with security of the PRG implies claim (1), namely the unpredictability of \(\mathsf {X}\). For completeness we provide a self-contained proof in Appendix A. A consequence of Theorem 3 is the following.

Corollary 4

Let \(\mathsf {SE}\) be a symmetric encryption scheme such that \(\mathsf {\mathsf {SE}.ml}(\cdot )\in \varOmega ((\cdot )^{\epsilon })\) for some constant \(\epsilon >0\). Assume the existence of an indistinguishability obfuscator and a one-way function. Then \(\mathsf {SE}\) is not KM-leakage resilient.

Proof

(Corollary 4 ). The assumption on \(\mathsf {\mathsf {SE}.ml}\) implies that there exists a PR-secure PRG \(\mathsf {R}\) with \(\mathsf {\mathsf {R}.sl}=\mathsf {\mathsf {SE}.ml}\) [38]. To conclude we apply Theorem 3.    \(\square \)

Related Work. CKVW [26] show that symmetric encryption with weak keys satisfying a wrong key detection property is equivalent to MB-AIPO. Wrong key detection, a form of robustness [1], asks that decrypting, under one key, a ciphertext created under a different key results in \(\bot \). This is not a requirement for KM-LR-SE. However, implicit in the proof of Theorem 3 is a connection between KM-LR-SE and a form of MB-AIPO with a relaxed correctness condition.

5 UCE for Split Sources

BFM [20] showed that \(\mathrm {UCE}[\mathbf {S}^{\mathrm {cup}}]\)-security is not possible if iO exists. We improve this to show that \(\mathrm {UCE}[\mathbf {S}^{\mathrm {cup}}\cap \mathbf {S}^{\mathrm {splt}}]\)-security is not possible if iO exists. We obtain this by giving a construction of a KM-leakage-resilient symmetric encryption scheme from \(\mathrm {UCE}[\mathbf {S}^{\mathrm {cup}}\cap \mathbf {S}^{\mathrm {splt}}]\) and then invoking our above-mentioned result. Definitions of \(\mathrm {UCE}\)-secure function families [6] are recalled in Sect. 2.

figure d

We give a construction of a KM-leakage resilient symmetric encryption scheme from a \(\text {UCE}[\mathbf {S}^{\mathrm {cup}}\cap \mathbf {S}^{\mathrm {splt}}]\) family \(\mathsf {H}\), which will allow us to rule out such families under iO. Assume for simplicity that \(\mathsf {H.il}\) is odd, and let \(\ell =(\mathsf {H.il}-1)/2\). We call the symmetric encryption scheme \( \mathsf {SE}=\mathbf {H \& C}[\mathsf {H}]\) that we associate to \(\mathsf {H}\) the Hash-and-Check scheme. It is defined as follows. Let \(\mathsf {\mathsf {SE}.kl}(\lambda )=\mathsf {\mathsf {SE}.ml}(\lambda )=\ell (\lambda )\) for all \(\lambda \in {{\mathbb N}}\). Let the encryption and decryption algorithms be as follows:

figure e

Here \(\langle i \rangle _{\ell (\lambda )}=1^i0^{\ell (\lambda )-i}\) denotes a particular, convenient encoding of integer \(i\in \{1,\ldots ,\ell (\lambda )\}\) as a string of \(\ell (\lambda )\) bits, and m[i] denotes the i-th bit of m. The ciphertext consists of a key for \(\mathsf {H}\) chosen randomly and anew at each encryption, together with the vector \(\mathbf {y}\) whose i-th entry is the hash of the i-th message bit along with the key and index i. This scheme will have perfect correctness if \(\mathsf {H}\) is injective, but we do not want to assume this. The following theorem says that the scheme is KM-leakage resilient and also has (somewhat better than) weak correctness under UCE-security of \(\mathsf {H}\).
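To make the scheme concrete, here is a toy Python rendering of Hash-and-Check, with SHA-256 standing in for \(\mathsf {H}\). The stand-in, the toy length and all names are ours; the theorem of course requires a UCE-secure family, not SHA-256.

```python
import hashlib
import secrets

ELL = 16  # toy value of l(lambda); key and message length in bits

def encode_index(i, ell):
    # the encoding <i>_ell = 1^i 0^(ell - i) from the text, for i in 1..ell
    return "1" * i + "0" * (ell - i)

def H(hk, x):
    # stand-in for the hash family; SHA-256 is a placeholder only
    return hashlib.sha256(hk + x.encode()).hexdigest()

def enc(k, m):
    # k, m are bit strings of length ELL; a fresh hash key hk is chosen
    # anew at each encryption, as in the scheme
    hk = secrets.token_bytes(16)
    y = [H(hk, k + m[i] + encode_index(i + 1, ELL)) for i in range(ELL)]
    return (hk, y)

def dec(k, c):
    hk, y = c
    m = []
    for i in range(ELL):
        for b in "01":  # recover bit i by re-hashing both candidates
            if H(hk, k + b + encode_index(i + 1, ELL)) == y[i]:
                m.append(b)
                break
        else:
            return None  # neither bit matches: decryption failure
    return "".join(m)
```

Note that decryption can fail only if the two candidate hashes for some position collide, which is the source of the (better than) weak correctness claimed in Theorem 5.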

Theorem 5

Let \(\mathsf {H}\) be a family of functions that is \(\mathrm {UCE}[\mathbf {S}^{\mathrm {cup}}\cap \mathbf {S}^{\mathrm {splt}}]\)-secure. Assume \(\mathsf {H.il}(\cdot )\in \varOmega ((\cdot )^{\epsilon })\) for some constant \(\epsilon >0\) and \(2^{-\mathsf {H.ol}(\cdot )}\) is negligible. Let \( \mathsf {SE}=\mathbf {H \& C}[\mathsf {H}]\). Then (1) symmetric encryption scheme \(\mathsf {SE}\) is KM-leakage resilient, and (2) \(1 - \mathsf {Adv}^{\mathsf {dec}}_{\mathsf {SE}}(\cdot )\) is negligible.

Proof

(Theorem 5 ). Assuming for simplicity as in the construction that \(\mathsf {H.il}\) is odd, let \(\ell (\cdot )=(\mathsf {H.il}(\cdot )-1)/2\). We now prove part (1). Let \(\mathsf {X}\) be an unpredictable, uniform auxiliary information generator. Let \(\mathcal{A}\) be a PT adversary. We build a PT source \(\mathcal{S}\in \mathbf {S}^{\mathrm {cup}}\cap \mathbf {S}^{\mathrm {splt}}\) and a PT distinguisher \(\mathcal{D}\) such that

$$\begin{aligned} \mathsf {Adv}^{\mathsf {ind}}_{\mathsf {SE},\mathsf {X},\mathcal{A}}(\lambda ) \;\le \;2\cdot \mathsf {Adv}^{\mathsf {uce}}_{\mathsf {H},\mathcal{S},\mathcal{D}}(\lambda ) \end{aligned}$$
(3)

for all \(\lambda \in {{\mathbb N}}\). The assumption that \(\mathsf {H}\) is \(\mathrm {UCE}[\mathbf {S}^{\mathrm {cup}}\cap \mathbf {S}^{\mathrm {splt}}]\)-secure now implies part (1) of the theorem.

We proceed to build \(\mathcal{S},\mathcal{D}\). We let \(\mathcal{S}\) be the split source \(\mathcal{S}=\mathsf {Splt}[\mathcal{S}_0,\mathcal{S}_1]\), where algorithms \(\mathcal{S}_0,\mathcal{S}_1\) are shown below, along with distinguisher \(\mathcal{D}\):

figure f

Here \(\mathcal{S}_0\) calls the auxiliary information generator \(\mathsf {X}\) to produce a key, a plaintext message and the corresponding auxiliary input. It then picks another plaintext message and the challenge bit d at random, and lets \(\mathbf {x}\) consist of the inputs on which the hash function would be applied to create the challenge ciphertext. It leaks the challenge bit and auxiliary information. Algorithm \(\mathcal{S}_1\) takes as input the result \(\mathbf {y}\) of oracle \(\textsc {HASH}\) on \(\mathbf {x}\), and leaks the entire vector \(\mathbf {y}\). The distinguisher gets the leakage from both stages, together with the hash key. Using the latter, it can create the ciphertext c, which it passes to \(\mathcal{A}\) to get back a decision. Its output reflects whether \(\mathcal{A}\) wins its game.

Letting b denote the challenge bit in game \(\mathrm {UCE}_{\mathsf {H}}^{\mathcal{S},\mathcal{D}}(\lambda )\), we claim that

$$\begin{aligned} {\Pr }[\,b'=1\,\left| \right. \,b=1\,] = \frac{1}{2} + \frac{1}{2}\mathsf {Adv}^{\mathsf {ind}}_{\mathsf {SE}, \mathsf {X},\mathcal{A}}(\lambda ) \text{ and } {\Pr }[\,b'=1\,\left| \right. \,b=0\,] = \frac{1}{2} \;, \end{aligned}$$

from which Eq. (3) follows. The first equation above should be clear from the construction. For the second, when \(b=0\), we know that \(\textsc {HASH}\) is a random oracle. But the entries of \(\mathbf {x}\) are all distinct, due to the \(\langle i \rangle _{\ell (\lambda )}\) components. So the entries of \(\mathbf {y}\) are uniform and independent, and in particular independent of the challenge bit d.

This however does not end the proof: We still need to show that \(\mathcal{S}\in \mathbf {S}^{\mathrm {cup}}\cap \mathbf {S}^{\mathrm {splt}}\). We have ensured that \(\mathcal{S}\in \mathbf {S}^{\mathrm {splt}}\) by construction. The crucial remaining step is to show that \(\mathcal{S}\in \mathbf {S}^{\mathrm {cup}}\). This will exploit the assumed unpredictability of \(\mathsf {X}\). Let \(\mathcal{P}\) be a PT predictor. We build a PT adversary \(\mathcal{Q}\) such that

$$\begin{aligned} \mathsf {Adv}^{\mathsf {pred}}_{\mathcal{S},\mathcal{P}}(\lambda ) \;\le \;\mathsf {Adv}^{\mathsf {pred}}_{\mathsf {X},\mathcal{Q}}(\lambda ) \end{aligned}$$
(4)

for all \(\lambda \in {{\mathbb N}}\). The assumption that \(\mathsf {X}\) is unpredictable now implies that \(\mathcal{S}\in \mathbf {S}^{\mathrm {cup}}\). The construction of \(\mathcal{Q}\) is as follows:

figure g

Adversary \(\mathcal{Q}\) computes leakage \(((d,a),\mathbf {y})\) distributed exactly as it would be in game \(\mathrm {PRED}_{\mathcal{S}}^{\mathcal{P}}(\lambda )\), where \(\textsc {HASH}\) is a random oracle. It then runs \(\mathcal{P}\) to get a prediction \(x'\) of some oracle query of \(\mathcal{S}\). If game \(\mathrm {PRED}_{\mathcal{S}}^{\mathcal{P}}(\lambda )\) returns \(\mathsf {true}\), then \(x'\) must have the form \(k\Vert m_d[i]\Vert \langle i \rangle _{\ell (\lambda )}\) for some \(i\in \{1, \ldots ,\ell (\lambda )\}\), where \(k,d\) are the key and challenge bit, respectively, chosen by \(\mathcal{S}\). Adversary \(\mathcal{Q}\) can then win its \(\mathrm {PRED}_{\mathsf {X}}^{\mathcal{Q}}(\lambda )\) game by simply returning \(k\), which establishes Eq. (4).

This completes the proof of part (1) of the theorem. We prove part (2) by building a PT source \(\mathcal{S}\in \mathbf {S}^{\mathrm {sup}}\cap \mathbf {S}^{\mathrm {splt}}\) and a PT distinguisher \(\mathcal{D}\) such that

$$\begin{aligned} 1 - \mathsf {Adv}^{\mathsf {dec}}_{\mathsf {SE}}(\lambda ) \;\le \;\mathsf {Adv}^{\mathsf {uce}}_{\mathsf {H},\mathcal{S},\mathcal{D}}(\lambda ) + \frac{\ell (\lambda )}{2^{\mathsf {H.ol}(\lambda )}} \end{aligned}$$
(5)

for all \(\lambda \in {{\mathbb N}}\). But we have assumed that \(\mathsf {H}\) is \(\mathrm {UCE}[\mathbf {S}^{\mathrm {cup}}\cap \mathbf {S}^{\mathrm {splt}}]\)-secure, so it is also \(\mathrm {UCE}[\mathbf {S}^{\mathrm {sup}}\cap \mathbf {S}^{\mathrm {splt}}]\)-secure. We have also assumed \(2^{-\mathsf {H.ol}(\cdot )}\) is negligible. Part (2) of the theorem follows.

We proceed to build \(\mathcal{S},\mathcal{D}\). We let \(\mathcal{S}\) be the split source \(\mathcal{S}=\mathsf {Splt}[\mathcal{S}_0,\mathcal{S}_1]\), where algorithms \(\mathcal{S}_0,\mathcal{S}_1\) are shown below, along with distinguisher \(\mathcal{D}\):

figure h

Letting b denote the challenge bit in game \(\mathrm {UCE}_{\mathsf {H}}^{\mathcal{S},\mathcal{D}}(\lambda )\), we claim that

$$\begin{aligned} {\Pr }[\,b'=1\,\left| \right. \,b=1\,] \ge 1 - \mathsf {Adv}^{\mathsf {dec}}_{\mathsf {SE}}(\lambda ) \text{ and } {\Pr }[\,b'=1\,\left| \right. \,b=0\,] \le \frac{\ell (\lambda )}{2^{\mathsf {H.ol}(\lambda )}} \;, \end{aligned}$$

from which Eq. (5) follows. The first equation above is true because decryption errors happen only when hash outputs collide for the two values of a message bit. For the second, when \(b=0\), we know that \(\textsc {HASH}\) is a random oracle. But the entries of \(\mathbf {x}\) are all distinct. So the entries of \(\mathbf {y}\) are uniform and independent. The chance that any fixed pair of entries collides is thus \(2^{-\mathsf {H.ol}(\lambda )}\), and the equation then follows from the union bound.

\(\mathcal{S}\) is a split source by construction. To conclude the proof we need to show that \(\mathcal{S}\in \mathbf {S}^{\mathrm {sup}}\). In the case \(\textsc {HASH}\) is a random oracle, the distinctness of the oracle queries of \(\mathcal{S}\) means that the entries of \(\mathbf {y}\) are uniformly and independently distributed. Since there is no leakage beyond \(\mathbf {y}\), the leakage gives the predictor \(\mathcal{P}\) no extra information about the entries of \(\mathbf {x}\). The uniform choice of \(k\) by \(\mathcal{S}\) means that \(\mathsf {Adv}^{\mathsf {pred}}_{\mathcal{S},\mathcal{P}}(\cdot )\le 2^{-\ell (\cdot )}\), even if \(\mathcal{P}\) is not restricted to PT. But our assumption on \(\mathsf {H.il}(\cdot )\) in the theorem statement implies that \(2^{-\ell (\cdot )}\) is negligible.    \(\square \)

figure i

In the BFM [20] iO-based attack on \(\mathrm {UCE}[\mathbf {S}^{\mathrm {cup}}]\), the source builds a circuit which embeds an oracle query x and its answer y, and outputs an obfuscation of this circuit in the leakage. Splitting is a restriction on sources introduced in BHK [6] with the aim of preventing such attacks. A split source cannot build the BFM circuit because the split structure denies it the ability to leak information that depends both on a query and its answer. Thus, the BFM attack does not work for \(\mathrm {UCE}[\mathbf {S}^{\mathrm {cup}}\cap \mathbf {S}^{\mathrm {splt}}]\). However, we show that in fact \(\mathrm {UCE}[\mathbf {S}^{\mathrm {cup}}\cap \mathbf {S}^{\mathrm {splt}}]\)-security is still not achievable assuming iO. This is now a simple corollary of Theorems 3 and 5; indeed, obtaining it was the motivation for the latter:

Theorem 6

Let \(\mathsf {H}\) be a family of functions such that \(\mathsf {H.il}(\cdot )\in \varOmega ((\cdot )^{\epsilon })\) for some constant \(\epsilon >0\) and \(2^{-\mathsf {H.ol}(\cdot )}\) is negligible. Assume the existence of an indistinguishability obfuscator and a one-way function. Then \(\mathsf {H}\) is not \(\mathrm {UCE}[\mathbf {S}^{\mathrm {cup}}\cap \mathbf {S}^{\mathrm {splt}}]\)-secure.

BM2 [23] show that \(\mathrm {UCE}[\mathbf {S}^{\mathrm {cup}}\cap \mathbf {S}^{\mathrm {splt}}\cap \mathbf {S}^{1}]\)-security is achievable assuming iO and AIPO. Our negative result of Theorem 6 does not contradict this, and in fact complements it to give a full picture of the achievability of UCE security for split sources.