1 Introduction

Proofs of knowledge (PoKs) are ubiquitous in cryptographic protocols. When enjoying additional features such as honest-verifier zero knowledge (HVZK), witness indistinguishability (WI) or zero knowledge (ZK), they are used as building blocks in basically any protocol for secure computation. As such, the degree of security and efficiency achieved by the underlying PoKs directly (and dramatically) impacts the security and efficiency of the larger protocol. For instance, very efficient WI PoKs for specific languages, such as Discrete Log and DDH, have been instrumental for constructing efficient maliciously secure two-party computation (see [17] and references within). Furthermore, stronger security notions of PoKs, such as soundness, WI and ZK in the presence of adaptive-input selection, are useful for constructing round-efficient protocols [18, 25].

Proofs of Partial Knowledge. In [10], Cramer et al. showed how to construct efficient PoKs for compound statements starting from \(\varSigma \)-protocols. More precisely, the compound statement consists of n instances, and the goal is to prove knowledge of witnesses for at least k of the n instances. As such, these proofs are named “proofs of partial knowledge” in [10]. The transform of [10] cleverly combines n parallel executions of PoKs that are \(\varSigma \)-protocols into an efficient 3-round public-coin perfect WI (k, n)-proof of partial knowledge. A similar result was given in [26].

Note that, if efficiency is not a concern, proofs of partial knowledge were already possible (with computational WI, though) thanks to the general construction of Lapidot and Shamir (LaSh) [19]. Proving compound statements via LaSh constructions however requires expensive NP reductions. On the other hand, LaSh PoKs provide a stronger security guarantee: honest players use the instances specified in the statements only in the last round, and security holds even if the adversarial verifier (resp., prover) chooses the instances adaptively after having seen the first (resp., second) round. LaSh’s construction is therefore an adaptive-input WI proof of partial knowledge for all NP. As mentioned above, this property can be instrumental to save at least one round of communication, when the proof of partial knowledge is used in a larger protocol. The construction shown in [10], instead, although efficient, does not provide any form of adaptivity, as all the n instances must be fully specified before the protocol starts. As a consequence, the better efficiency of [10] may cost additional rounds compared to [19] when the construction is used in larger applications.

The Proof of Partial Knowledge of [6]. A very recent work by Ciampi et al. [6] makes a first preliminary step towards closing the gap between [10, 19]. [6] proposes a different transform for WI proofs of partial knowledge that gives some adaptiveness at the price of generality. Namely, their technique yields a (1, 2)-proof of partial knowledge where the knowledge of one of the two instances can be postponed to the last round. In more detail, they show a PoK for a statement “\(x_0 \in L_0 \vee x_1 \in L_1\)” such that \(x_0\) and \(x_1\) are not immediately needed (in contrast to [10]). The honest prover needs \(x_0\) to run the 1st round while \(x_1\) is needed only in the 3rd round, along with a witness for either \(x_0\) or \(x_1\). The verifier needs to see \(x_0\) and \(x_1\) only at the end, in order to accept/reject the proof. Ciampi et al. [6] defined the delayed-input property, requiring that the honest prover does not need to know the instance to start the protocol. In other words, the need for the input is delayed to the very last round. For clarity, we stress that a delayed-input protocol is not necessarily secure against inputs that have been adaptively chosen. Indeed, their technique yields a proof of partial knowledge that is delayed-input for one of the instances but is not adaptively secure against malicious provers (although it is adaptive-input WI). The security achieved by their transform is sufficient for their target applications.

The Open Question and its Importance. The above preliminary progress leaves open the following fascinating question: can we design an efficient transform that yields an adaptive-input WI (k, n)-proof of partial knowledge where all n instances are known only in the last round?

Previous efficient transforms require the a-priori knowledge of all instances or of one out of two instances, even if the corresponding languages admit efficient delayed-input \(\varSigma \)-protocols. For the sake of concreteness, assume one wants to prove knowledge of the discrete logarithm of at least one of \(g^{x_0}\) or \(g^{x_1}\). There exists a very efficient \(\varSigma \)-protocol \(\varSigma ^{\mathsf{dl}}\) due to Schnorr [27] for proving knowledge of a discrete log, and it also enjoys the delayed-input property, i.e., the prover can compute the first round without knowing the instance \(g^x\). However, when we apply known transforms to combine \(\varSigma ^{\mathsf{dl}}\), the resulting protocol loses the delayed-input property: it still needs either both instances \(g^{x_0}\) and \(g^{x_1}\) to be specified in advance, if using [10], or at least one of them, \(g^{x_0}\), if using [6].
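For concreteness, Schnorr's \(\varSigma \)-protocol and its delayed-input property can be sketched in Python over a small toy group (the parameters below are illustrative and insecure; a real instantiation uses a cryptographically sized group):

```python
import secrets

# Toy group: order-q subgroup of Z_p^* with p = 2q + 1 (safe prime).
p, q, g = 2039, 1019, 4  # illustrative, insecure parameters

def prover_round1():
    # Delayed-input: the first message depends only on the group
    # parameters, not on the instance X = g^x.
    r = secrets.randbelow(q)
    return pow(g, r, p), r

def prover_round3(r, x, c):
    # The witness x (and hence the instance) is needed only here.
    return (r + c * x) % q

def verify(X, a, c, z):
    return pow(g, z, p) == (a * pow(X, c, p)) % p

a, r = prover_round1()          # computed before the instance exists
x = secrets.randbelow(q)        # witness, fixed only now
X = pow(g, x, p)                # instance
c = secrets.randbelow(q)        # verifier's challenge
z = prover_round3(r, x, c)
assert verify(X, a, c, z)
```

The first round is instance-independent, which is exactly what the transforms of [10] and [6] fail to preserve.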

1.1 Our Results

In this work we study the above open question and give various positive answers.

\(\varSigma \) -Protocols and Adaptive-input Selection. We shed light on the relation between delayed-input \(\varSigma \)-protocols and adaptive-input \(\varSigma \)-protocols. Recall that a \(\varSigma \)-protocol enjoys the special soundness property, which means that given two accepting transcripts for the same statement having the same first round, one can efficiently extract a witness for that statement.

We show that in general \(\varSigma \)-protocols are delayed-input but are not adaptive-input sound; that is, they are not sound if the malicious prover can choose the statements adaptively. Indeed, in Sect. 4.1 we show how a malicious prover, based on the second round played by the verifier, can craft a false statement that will make the verifier accept (and the extractor of special soundness fail even when the statement is true). The attack applies to very popular \(\varSigma \)-protocols like Schnorr's protocol for discrete logarithm (DLog), the protocol for proving equality of DLogs for Diffie-Hellman (DH) tuples and the protocol of [22] for proving knowledge of committed messages. These protocols all fall into a well-known class of protocols studied by Cramer in [9] and Maurer in [21].
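To give the flavor of the attack of Sect. 4.1, consider Schnorr's protocol in a toy group: after seeing the challenge, a malicious prover can pick the response at random and solve for an instance that makes the transcript accept. (For DLog every group element does have a witness; the issue is that the instance is fixed only after the challenge, so two accepting transcripts with the same first message may refer to different instances and special-soundness extraction breaks down, while for languages such as DH the analogous trick yields accepting transcripts for false statements.) A minimal sketch with illustrative parameters:

```python
import secrets

p, q, g = 2039, 1019, 4  # toy group (illustrative, insecure)

# Malicious prover commits to a "first message" with no instance in mind.
alpha = secrets.randbelow(q)
a = pow(g, alpha, p)

# Verifier sends the challenge.
c = 1 + secrets.randbelow(q - 1)   # nonzero, hence invertible mod q

# Only now the prover crafts the instance to fit (a, c) and a random z:
# setting X = (g^z * a^{-1})^{c^{-1} mod q} guarantees g^z = a * X^c.
z = secrets.randbelow(q)
X = pow(pow(g, z, p) * pow(a, -1, p) % p,   # needs Python >= 3.8
        pow(c, -1, q), p)

# The verifier's check passes even though the instance was chosen
# after the challenge, so the special-soundness extractor has no
# guarantee of obtaining a witness for a fixed statement.
assert pow(g, z, p) == (a * pow(X, c, p)) % p
```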

The above issue was already noticed in [1] for the case of non-interactive zero-knowledge arguments obtained from \(\varSigma \)-protocols by applying the Fiat-Shamir transform [14]. Indeed, the literature contains incorrect uses of the Fiat-Shamir transform in which the instance is not given as input to the random oracle. As a consequence, an adversarial prover can first create a transcript and then try to find an instance not in the language for which the transcript is accepting. Of course, in the random-oracle model this issue has a trivial fix: give the instance as input to the random oracle when generating the challenge. This fix is meaningless in the standard model, which is the focus of our work.

We then analyze the transform of [6], which is delayed-input with respect to one instance only. We observe that when [6] combines protocols belonging to the class of [9, 21], it also suffers from the same attack when the malicious prover is allowed to adaptively choose his input. Therefore the transform of [6] is not adaptive-input sound. We stress, however, that in the applications targeted in [6] the input that is specified only in the last round is chosen by the verifier. As such, for their applications they do not need any form of adaptive-input soundness, but only adaptive-input witness indistinguishability (which they achieve). Moreover, the special soundness of their transform preserves security w.r.t. adaptive-input selection. Summing up, [6] correctly defines and achieves delayed-input \(\varSigma \)-protocols and adaptive-input WI, and uses them in the applications. However, adaptive-input special soundness is not defined and not achieved in their work.

Adaptive-input Special-sound \(\varSigma \) -protocols. In light of the above discussion, a natural question is whether we can upgrade the security of the class of \(\varSigma \)-protocols that are delayed-input but not adaptive-input sound. Towards this, we first clarify the conceptual gap between adaptive-input selection and the adaptiveness considered in [6] by formally defining adaptive-input special soundness. Then we show a compiler that takes as input any delayed-input \(\varSigma \)-protocol belonging to the class specified in [9, 21], and outputs a \(\varSigma \)-protocol that is adaptive-input sound, i.e., it is sound even when the malicious prover adaptively chooses his input in the last round. The main idea behind this compiler is to force the prover to correctly send the first round of the \(\varSigma \)-protocol through another parallel run of the \(\varSigma \)-protocol. This allows for the extraction of any witness in the proof of knowledge. The compiler is shown in Sect. 4.2. We also show (in Sect. 5) that, nevertheless, the transform of [6] preserves the adaptivity of the \(\varSigma \)-protocols that are combined. Namely, on input \(\varSigma \)-protocols that are already adaptive-input special sound and WI, the transform of [6] outputs a (1, 2)-proof of partial knowledge that is an adaptive-input proof of knowledge as well.

Adaptive-input (k, n)-proofs of Partial Knowledge. The main contribution of this paper is a new transform that yields the first efficient (k, n)-proofs of partial knowledge where all n instances can be specified in the last round.

Our new transform takes as input a delayed-input \(\varSigma \)-protocol for a relation \(\mathcal {R}\), and outputs a 3-round public-coin WI special-sound (k, n)-proof of partial knowledge for the relation (\(\mathcal {R}\vee \cdots \vee \mathcal {R})\) where no instance is known at the beginning. The security of our transform is based on the DDH assumption. The WI property of the resulting protocol holds also with respect to adaptive-input selection, while the PoK property holds under adaptive-input selection only if the underlying \(\varSigma \)-protocol is adaptive-input special sound.

We also show a transform that admits instances taken from different relations. Interestingly, this construction uses as a subprotocol our first construction, in which instances are taken from the same relation.

1.2 Our Technique

We provide a technique for transforming a delayed-input \(\varSigma \)-protocol for a relation \(\mathcal {R}\) into a delayed-input \(\varSigma \)-protocol for the (k, n)-proof of partial knowledge for relation \((\mathcal {R}\vee \ldots \vee \mathcal {R})\). For a better understanding of our technique, it is instructive to see why the transform of [10] (resp., [6]) requires that all n (resp., 1 out of 2) instances are specified before the protocol starts.

Limitations of Previous Transforms. Let \(\varSigma _{\mathcal {R}}\) be a delayed-input \(\varSigma \)-protocol, and let \((\mathcal {R}\vee \ldots \vee \mathcal {R})\) be the relation for which we would like to have a (k, n)-proof of partial knowledge. The technique of [10] works as follows. The prover P, on input the instances \((x_1\in \mathcal {R}\vee \ldots \vee x_n\in \mathcal {R})\), runs protocols \(\varSigma _{\mathcal {R}}, \ldots , \varSigma _{\mathcal {R}}\) in parallel. P gets only k witnesses for k different instances but needs to somehow generate an accepting transcript for all instances. How can the remaining \(n-k\) instances be proved without having the witnesses? The idea of [10] consists simply in letting the prover generate the \(n-k\) transcripts (corresponding to the instances for which he did not get the witnesses) using the HVZK simulator \(S\) associated to the \(\varSigma \)-protocol. Additionally, [10] introduces a mechanism that allows the prover to control the value of exactly \(n-k\) of the challenges played by V, so that the prover can force the transcripts computed by the simulator in those \(n-k\) positions.

So, why does the transform of [10] need all instances to be known already in the 1st round? The answer is that P needs to run \(S\) already in the 1st round, and \(S\) expects the instance as input. Similar arguments apply for [6] as it requires that 1 instance out of 2 is known already in the 1st round.

The Core Idea of Our Technique. Previous transforms fail because the prover runs the HVZK simulator to compute the 1st round of some of the transcripts of \(\varSigma _{\mathcal {R}}\). Our core idea is to provide mechanisms allowing P to postpone the use of the simulator to the 3rd round. The main challenge is to implement mechanisms that are very efficient and preserve soundness and WI of the composed \(\varSigma \)-protocol. We stress that we want to solve the open problems in full, and thus none of the instances are known at the beginning of the protocol. To be more explicit, in the 1st round, the prover starts with the following statement \((?\in L_{\mathcal {R}}\vee \ldots \vee ?\in L_{\mathcal {R}})\).

Assume we have a (k, n)-equivocal commitment scheme that allows the prover to compute n commitments such that k of them are binding and the remaining \(n-k\) are equivocal, and the verifier cannot distinguish between the two types of commitment, where the k positions that are binding must be chosen already in the commitment phase (a similar tool is constructed in [24]). With this gadget in hand, we can construct a delayed-input (k, n)-proof of partial knowledge \(\varSigma ^{\mathsf {OR}}_{k,n}\) as follows. Let (acz) denote generically the 3 messages exchanged during the execution of a \(\varSigma \)-protocol \(\varSigma _{\mathcal {R}}\).

In the 1st round, P honestly computes \(a_i\) for the i-th execution of \(\varSigma _{\mathcal {R}}\). Here we are using the fact that \(\varSigma _{\mathcal {R}}\) is delayed-input, and thus \(a_i\) can be computed without using the instance. Then he commits to \(a_1, \ldots , a_n\) using the (k, n)-equivocal commitment scheme discussed above, where the k binding positions are randomly chosen. Thus, the 1st round of protocol \(\varSigma ^{\mathsf {OR}}_{k,n}\) consists of n commitments. In the 2nd round V simply sends a single challenge c according to \(\varSigma _{\mathcal {R}}\). In the 3rd round, P obtains the n instances \(x_1,\ldots , x_n\) and k witnesses. At this point, for the instances \(x_i\) for which he did not receive the witness, he will use the HVZK simulator to compute an accepting transcript \((\tilde{a}_i, c, \tilde{z}_i)\) and then equivocate the \(n-k\) equivocal commitments so that they decommit to the newly generated \(\tilde{a}_i\). For the k remaining instances he will honestly compute the 3rd round using the committed input \(a_i\). Intuitively, soundness follows from the fact that k commitments are binding, and from the soundness of \(\varSigma _{\mathcal {R}}\). WI follows from the hiding of the equivocal commitment scheme and the HVZK property of \(\varSigma _{\mathcal {R}}\). Note that in this solution we are crucially using the fact that we are composing the same \(\varSigma \)-protocol, so that P can use any of the \(a_i\) committed in the 1st round to compute an honest transcript. This technique thus falls short as soon as we want to compose arbitrary \(\varSigma \)-protocols together. Nevertheless, this transformation turns out to be useful for the case of different \(\varSigma \)-protocols.
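The prover's algorithm just described can be sketched as follows, using Schnorr's protocol as \(\varSigma _{\mathcal {R}}\). The (k, n)-equivocal commitment is modeled here as an ideal functionality (we merely record which k randomly chosen slots bind), and we assume the k witnesses arrive for exactly the binding positions; both simplifications are ours, not part of the construction:

```python
import secrets

p, q, g = 2039, 1019, 4  # toy Schnorr group (illustrative, insecure)

def schnorr_sim(X, c):
    # HVZK simulator: pick z at random, set a = g^z * X^{-c} mod p.
    z = secrets.randbelow(q)
    return pow(g, z, p) * pow(X, (-c) % q, p) % p, z

n, k = 4, 2

# Round 1: no instance known; honest first messages for all n slots.
rs = [secrets.randbelow(q) for _ in range(n)]
As = [pow(g, r, p) for r in rs]
# Ideal (k, n)-equivocal commitment: record which k random slots bind;
# the other n - k slots can later be reopened to any value.
binding = set(secrets.SystemRandom().sample(range(n), k))

# Round 2: a single challenge.
c = secrets.randbelow(q)

# Round 3: instances (and witnesses for the binding slots) arrive now.
xs = [secrets.randbelow(q) for _ in range(n)]
Xs = [pow(g, x, p) for x in xs]
transcripts = []
for i in range(n):
    if i in binding:      # witness known: answer honestly from a_i
        transcripts.append((As[i], c, (rs[i] + c * xs[i]) % q))
    else:                 # no witness: simulate, then equivocate slot i
        a_sim, z_sim = schnorr_sim(Xs[i], c)
        transcripts.append((a_sim, c, z_sim))

# Every transcript verifies, yet only k witnesses were used.
for (a, ch, z), X in zip(transcripts, Xs):
    assert pow(g, z, p) == a * pow(X, ch, p) % p
```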

(k, n)-equivocal Commitment Scheme. A (k, n)-equivocal commitment scheme allows a sender to compute n commitments \(\mathtt {com}_1, \ldots , \mathtt {com}_n\) such that k of them are binding and \(n-k\) are equivocal. We will use the language DH of DH tuples and we will call non-DH a tuple that is not a DH tuple. We will implement a (k, n)-equivocal commitment scheme very efficiently under the DDH assumption as follows. In the commitment phase, the sender computes n tuples \(T_1=(g_1, A_1,B_1, X_1),\ldots ,T_n=(g_n, A_n,B_n, X_n)\) and proves that k out of n tuples are not in DH (i.e., they are non-DH tuples). We show that this can be done using the classical (k, n)-proof of partial knowledge of [10], starting with a \(\varSigma \)-protocol \(\varSigma ^{\mathsf {ddh}}\) for DH. We then use the well-known fact [4, 5, 12, 17] that \(\varSigma \)-protocols can be used to construct an instance-dependent trapdoor commitment scheme, where the sender can equivocate if he knows the witness for the instance. Thus, each tuple \(T_i\) can be used to compute an instance-dependent trapdoor commitment \(\mathtt {com}_i\) using \(\varSigma ^{\mathsf {ddh}}\): \(\mathtt {com}_i\) will be equivocal if \(T_i\) is a DH tuple, and binding otherwise. Because the sender proves that k tuples are not in DH, there are at least k binding commitments. Hiding follows from the WI property of [10] and the HVZK of \(\varSigma ^{\mathsf {ddh}}\). Commitment and decommitment can be completed in 3 rounds.
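The instance-dependent trapdoor commitment can be sketched concretely using a Chaum-Pedersen-style \(\varSigma \)-protocol for DH tuples: committing means running the HVZK simulator with the message as challenge, and a known witness for a DH tuple acts as a trapdoor for equivocation. A toy sketch with illustrative parameters:

```python
import secrets

p, q, g = 2039, 1019, 4  # toy group: order-q subgroup of Z_p^*

def cp_verify(tup, ann, c, z):
    # Verifier of the Chaum-Pedersen Sigma-protocol for DH tuples.
    g_, h, u, v = tup
    return (pow(g_, z, p) == ann[0] * pow(u, c, p) % p and
            pow(h, z, p) == ann[1] * pow(v, c, p) % p)

h = pow(g, 5, p)
w = secrets.randbelow(q)

# DH tuple (g, h, g^w, h^w): the sender knows the trapdoor w, so the
# announcement (g^r, h^r) can later be opened to ANY message m.
dh = (g, h, pow(g, w, p), pow(h, w, p))
r = secrets.randbelow(q)
ann_eq = (pow(g, r, p), pow(h, r, p))
open17 = (r + 17 * w) % q
open42 = (r + 42 * w) % q
assert cp_verify(dh, ann_eq, 17, open17)
assert cp_verify(dh, ann_eq, 42, open42)   # same commitment, new message

# Non-DH tuple: committing to m means running the HVZK simulator with
# challenge m; without a witness the opening is fixed, i.e. binding.
non_dh = (g, h, pow(g, w, p), pow(h, (w + 1) % q, p))
m, z = 17, secrets.randbelow(q)
ann_bind = (pow(g, z, p) * pow(non_dh[2], (-m) % q, p) % p,
            pow(h, z, p) * pow(non_dh[3], (-m) % q, p) % p)
assert cp_verify(non_dh, ann_bind, m, z)
```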

The Case of Different \(\varSigma \) -protocols. We now consider the case where we want to compose \(\varSigma _1, \ldots , \varSigma _n\) for possibly different relations. Our (k, n)-equivocal commitment does not help here because each \(a_i\) is specific to protocol \(\varSigma _i\), and the \(a_i\) cannot be arbitrarily mixed and matched once the k witnesses are known.

For this case we thus use a different trick. We ask the prover to commit to each \(a_i\) twice, once using a binding commitment and once using an equivocal commitment. This again can be very efficiently implemented from the DDH assumption as follows. For each i, P generates tuples \(T^0_i\) and \(T^1_i\) such that at most one of them can be a DH tuple. It then commits to \(a_i\) twice using the instance-dependent trapdoor commitment associated to tuple \(T^0_i\) and tuple \(T^1_i\). Because at most one of the two tuples is a DH tuple, at most one of the commitments of \(a_i\) can be later equivocated. Thus the 1st round of our transformation consists of 2 commitments of \(a_i\) for \(1\le i\le n\). In the 3rd round, when P receives instances \(x_1,\ldots , x_n\) and k witnesses, he proceeds as follows. For each i, if P knows the witness for \(x_i\), he will open the binding commitment for position i, and compute \(z_i\) using the honest prover procedure of \(\varSigma _i\). Instead, if P does not have a witness for \(x_i\), he will compute a new \(\tilde{a}_i, z_i\) using the simulator on input \(x_i,c\) and open the equivocal commitment in position i. At the end, for each position i, one commitment remains unopened.

This mechanism allows an honest prover to complete the proof with the knowledge of only k witnesses. However, what stops a malicious prover from always opening the equivocal commitments, thus completing the proof without knowing any of the witnesses? We avoid this problem by requiring P to prove that, among the n tuples corresponding to the unopened commitments, at least k out of n are DH tuples. This directly means that k of the opened commitments were constructed over non-DH tuples, and are therefore binding.

Now note that proving this theorem requires a (k, n)-proof of partial knowledge in order to implement \(\varSigma ^{\mathsf {ddh}}\), where the instance to prove, i.e., the tuple that will be unopened, is known only in the 3rd round, when P knows for which instances he is able to open a binding commitment. Here we crucially use the (k, n)-proof of partial knowledge for the same \(\varSigma \)-protocol developed above, making sure to first run our compiler that strengthens \(\varSigma ^{\mathsf {ddh}}\) with respect to statements adaptively selected by a malicious prover.

1.3 Comparison with the State of the Art

In Table 1 we compare our results with the relevant related work. We consider [19], a 3-round public-coin WIPoK that is fully adaptive-input and that works for any NP language. We also consider [10], which proposed efficient 3-round public-coin WI proofs of partial knowledge (though without supporting any adaptivity). Finally, we consider [6] since it was the only work that faced the problem of combining efficiency with some form of delayed-input instances. The last row refers to our main result, which allows postponing knowledge of all the instances to the last round. The 2nd column refers to the computational assumptions needed by [19] (i.e., one-way permutations) and our main result (i.e., the DDH assumption). The 3rd column specifies the type of WI depending on the adaptive selection of the instances by the adversarial verifier. The 4th column specifies the soundness depending on the adaptive selection of the instances by the adversarial prover.

Table 1. Comparison with previous work.

1.4 Online/Offline Computations

Our result has the advantage that the prover can compute the first round without knowing instances and witnesses. The first round is therefore an offline phase. When the prover interacts with the verifier (online phase) he sends the precomputed first round and computes only the third round of the protocol. We stress that [10] requires the instances to be known already in order to compute the first round. Furthermore, the work of [19] allows the prover to compute the first round offline, but in the online phase the prover must perform an NP reduction.

In Table 2 we compare the effort of the prover in the online phase in our work and in [10, 19]. We consider a prover that proves knowledge of discrete logarithms for 1 instance out of 2 instances (1st column) and a prover that proves knowledge of discrete logarithms for k instances out of n instances (2nd column). As noted above, in the online phase of [19] the prover computes an NP reduction (2nd row). For our construction and the one of [10] we count the number of modular exponentiations that are computed in the online phase (3rd and 4th rows). Below we briefly describe how we have computed the above costs. In [10] the number of exponentiations is \(2n-k\). This comes from the fact that the first round of Schnorr's \(\varSigma \)-protocol requires one exponentiation while the simulator requires two exponentiations. In [10] the simulator is executed \(n-k\) times (costing \(2(n-k)\) exponentiations) and moreover k exponentiations are needed to run the prover of Schnorr's protocol.

Table 2. Comparison with previous work proving knowledge of discrete logarithms. The table illustrates the computations of the prover in the online phase.

In the (1, 2)-proof of partial knowledge of [6], the 1st round requires 3 exponentiations. Indeed, in the 1st round of [6] the prover runs Schnorr's simulator and computes the 1st round of Schnorr's \(\varSigma \)-protocol. The 3rd round of [6] has a different cost depending on which witness is used. When the prover of [6] uses the witness for an adaptively chosen instance, then no additional exponentiation is needed. Otherwise, another execution of Schnorr's simulator is required. For this reason, in the worst case the 3rd round of [6] costs two exponentiations. Note that in an execution of the construction of [6] 4 exponentiations are performed in the online phase, since only the 1st round of Schnorr's \(\varSigma \)-protocol can be precomputed.

The final row corresponds to our main result and shows the general case of k instances out of n. Our construction involves \(10n-k\) exponentiations. Indeed, a commitment computed according to the commitment scheme described previously based on DH tuples costs 4 exponentiations. In the 1st round of our construction we sample \(n-k\) DH tuples and k non-DH tuples; sampling a DH/non-DH tuple costs 3 exponentiations, so this operation costs 3n. Also in the 1st round we compute \(n-k\) equivocal commitments and k binding commitments, and this sums up to \(2n+2k\) modular exponentiations. Furthermore the prover computes the 1st round of Schnorr's \(\varSigma \)-protocol n times, and this costs n exponentiations. Moreover, he has to run [10] to prove knowledge of witnesses for k instances out of n instances, and this costs \(2n-k\) exponentiations. The only operations that involve exponentiations in the third round are the \(n-k\) executions of the simulator of Schnorr's \(\varSigma \)-protocol. Therefore the online phase costs \(2(n-k)\) exponentiations.
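The per-round counts stated above can be sanity-checked mechanically; the functions below simply encode those counts and verify that offline and online costs sum to \(10n-k\):

```python
# Exponentiation counts from the analysis above: 3n (tuple sampling),
# 2n + 2k (equivocal + binding commitments), n (Schnorr first rounds),
# 2n - k (the [10] subproof) in the offline phase; 2(n - k) simulator
# executions in the online phase.
def offline_cost(n, k):
    return 3*n + (2*n + 2*k) + n + (2*n - k)

def online_cost(n, k):
    return 2 * (n - k)

for n in range(1, 30):
    for k in range(1, n + 1):
        assert offline_cost(n, k) + online_cost(n, k) == 10*n - k
```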

The adaptive-input special-sound version of our construction costs \(13n - 3k\) exponentiations. Consider that in the adaptive-input special-sound version of Schnorr’s \(\varSigma \)-protocol an execution of the simulator costs 4 exponentiations. Moreover computing the 1st round involves 2 exponentiations. Hence the first round of our adaptive-input special-sound construction involves \(6n+k\) exponentiations and the online phase costs \(4(n-k)\) exponentiations.

The exponentiations in square brackets specify the cost of our main result when Schnorr’s \(\varSigma \)-protocol is transformed into an adaptive-input special-sound \(\varSigma \)-protocol. The analysis for the case of 1 out of 2 is similar with \(k=1\) and \(n=2\) but in this case, in the offline phase, we do not consider the cost of [10] since the correctness of the pair of tuples can be self-verified.

2 Preliminaries

We use \(\lambda \) as security parameter. \(A(x)\) denotes the probability distribution of the output of a probabilistic algorithm A when running with x as input. We write \(A(x; r)\) to make explicit the randomness r used by A. PPT stands for probabilistic polynomial time.

If \(\mathcal {R}\) is a subset of \(\{0,1\}^\star \times \{0,1\}^\star \) for which membership of \((x, w)\) to \(\mathcal {R}\) can be decided in time polynomial in |x| then we say that \(\mathcal {R}\) is a polynomial-time relation and w is a witness for the instance x. Given a polynomial-time relation \(\mathcal {R}\), \(L_\mathcal {R}\) defined as \(L_\mathcal {R}=\{x|\exists w: (x, w)\in \mathcal {R}\}\) is an NP language. For generality, we define \(\hat{L}_\mathcal {R}\) to be the input language that includes both \(L_\mathcal {R}\) and all well-formed instances that do not have a witness, as already done in [15]. It follows that \(L_\mathcal {R}\subseteq \hat{L}_\mathcal {R}\) and membership in \(\hat{L}_\mathcal {R}\) can be tested in polynomial time. In proof systems for relation \(\mathcal {R}\), the verifier runs the protocol only if the common input x belongs to \(\hat{L}_\mathcal {R}\), while it rejects immediately common inputs not in \(\hat{L}_\mathcal {R}\).

Given two interactive machines \(M_0\) and \(M_1\), we denote by \(\langle M_0(x_0), M_1(x_1)\rangle (x_2)\) the output of \(M_1\) when running on input \(x_1\) with \(M_0\) running on input \(x_0\), both running on common input \(x_2\).

Definition 1

A pair \((\mathcal {P},\mathcal {V})\) of PPT interactive machines is a complete protocol for an NP-language L with relation \(\mathcal {R}\) if the following property holds:

  • Completeness. For every common input \(x\in L\) and witness w such that \((x,w)\in \mathcal {R}\), it holds that \(\text{ Prob }\left[ \;\langle \mathcal {P}(w), \mathcal {V}\rangle (x) =1\;\right] =1.\)

Definition 2

A complete protocol \((\mathcal {P},\mathcal {V})\) is a proof system for an NP-language L with relation \(\mathcal {R}\) if the following property holds:

  • Soundness. For every interactive machine \(\mathcal {P}^\star \) there exists a negligible function \(\nu \) such that for every \(x\notin L\): \(\text{ Prob }\left[ \;\langle \mathcal {P}^\star ,\mathcal {V}\rangle (x)=1\;\right] \le \nu (|x|).\)

A proof system \((\mathcal {P},\mathcal {V})\) is public coin if \(\mathcal {V}\) sends only random bits.

Definition 3

([11]). Let \(k: \{0,1\}^{*} \rightarrow [0,1]\) be a function. A protocol \((\mathcal {P}, \mathcal {V})\) is a proof of knowledge for the relation \(\mathcal {R}\) with knowledge error k if the following properties are satisfied:

  • Completeness: if \( \mathcal {P}\) and \(\mathcal {V}\) follow the protocol on input x and private input w to \( \mathcal {P}\) where \((x, w) \in \mathcal {R}\), then \(\mathcal {V}\) always accepts.

  • Knowledge Soundness: there exists a constant \(c > 0\) and a probabilistic oracle machine \(\mathsf {Extract}\), called the extractor, such that for every interactive prover \({ \mathcal {P}^\star }\) and every input x, the machine \(\mathsf {Extract}\) satisfies the following condition. Let \(\epsilon (x)\) be the probability that \(\mathcal {V}\) accepts on input x after interacting with \({ \mathcal {P}^\star }\). If \(\epsilon (x) > k(x)\), then upon input x and oracle access to \({ \mathcal {P}^\star }\), the machine \(\mathsf {Extract}\) outputs a string w such that \((x,w) \in \mathcal {R}\) within an expected number of steps bounded by \(|x|^c/(\epsilon (x) - k(x)).\)

A transcript \(\tau \) of an execution of a public-coin protocol \(\varPi =(\mathcal {P},\mathcal {V})\) for statement x consists of the sequence of messages exchanged between \(\mathcal {P}\) and \(\mathcal {V}\). We say that \(\tau \) is accepting if \(\mathcal {V}\) outputs 1. Two accepting transcripts \((a, c, z)\) and \((a',c',z')\) for a 3-round public-coin proof system with the same common input constitute a collision iff \(a=a'\) and \(c\ne c'\). The single message sent by the verifier \(\mathcal {V}\) in a 3-round public-coin proof system is called the challenge.

\(\varSigma \) -protocols. The most common form of proof system used in practice consists of 3-round protocols referred to as \(\varSigma \)-protocols. For several useful languages there exist efficient \(\varSigma \)-protocols, and they are easy to work with as already shown in many transforms [2, 3, 8, 12, 20, 22, 23, 28, 29].

Definition 4

A 3-round public-coin protocol \(\varPi =(\mathcal {P},\mathcal {V})\) is a \(\varSigma \)-protocol for an \(\mathsf{{NP}}\)-language L with polynomial-time relation \(\mathcal {R}\) iff the following additional properties are satisfied:

  • Completeness. When \(\mathcal {P},\mathcal {V}\) execute the protocol on input x and private input w to \(\mathcal {P}\) where \((x,w)\in \mathcal {R}\), the verifier \(\mathcal {V}\) always accepts.

  • Special Soundness. There exists an efficient algorithm \(\mathsf {Extract}\) that, on input x and a collision for x, outputs a witness w such that \((x,w)\in \mathcal {R}\).

  • Special Honest Verifier Zero Knowledge (special HVZK, SHVZK). There exists a PPT simulator algorithm \(S\) that, on input an instance \(x\in L\) and challenge c, outputs \((a, z)\) such that \((a, c, z)\) is accepting w.r.t. x. Moreover, the distribution of the output of \(S\) on input \((x, c)\) is perfectly indistinguishable from the distribution of the transcript obtained when \(\mathcal {V}\) sends c as challenge and \(\mathcal {P}\) runs on common input x and any private input w such that \((x,w)\in \mathcal {R}\).
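For Schnorr's protocol, the \(\mathsf {Extract}\) algorithm of special soundness is explicit: from a collision \((a,c_1,z_1)\), \((a,c_2,z_2)\) the witness is \(w = (z_1-z_2)(c_1-c_2)^{-1} \bmod q\). A toy sketch (illustrative, insecure parameters):

```python
import secrets

p, q, g = 2039, 1019, 4  # toy group: order-q subgroup of Z_p^*

x = secrets.randbelow(q)            # witness
X = pow(g, x, p)                    # instance

# A collision: the same first message answered on two distinct challenges.
r = secrets.randbelow(q)
a = pow(g, r, p)
c1, c2 = 3, 7
z1 = (r + c1 * x) % q
z2 = (r + c2 * x) % q

# Extract: from g^{z1} = a X^{c1} and g^{z2} = a X^{c2} it follows that
# w = (z1 - z2) * (c1 - c2)^{-1} mod q is a witness for X.
w = (z1 - z2) * pow(c1 - c2, -1, q) % q
assert w == x and pow(g, w, p) == X
```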

A security parameter \(1^\lambda \) for a \(\varSigma \)-protocol represents the challenge length. Therefore a \(\varSigma \)-protocol with a sufficiently large security parameter \(1^\lambda \) is also a proof system.

Theorem 1

([10]). Every \(\varSigma \)-protocol is Perfect WI.

Theorem 2

([11]). Let \(\varPi \) be a \(\varSigma \)-protocol for a relation \(\mathcal {R}\) with security parameter \(\lambda \). Then \(\varPi \) is a proof of knowledge with knowledge error \(2^{-\lambda }\).

From the above theorem we have that every \(\varSigma \)-protocol with a sufficiently long challenge is a proof of knowledge with negligible knowledge error. We observe that in the proof of the above theorem only completeness and special soundness of the \(\varSigma \)-protocol are used. Therefore the theorem holds regardless of HVZK. Furthermore, using the same proof approach we can consider a relaxed notion of special soundness, t-special soundness, requiring \(t\ge 2\) transcripts to extract the witness, with \(t=\mathtt {poly}(\lambda )\). This is still sufficient to obtain a proof of knowledge with negligible knowledge error when the challenge is sufficiently long.

Therefore, in this work, when interested in proving the proof of knowledge property, we will without loss of generality just prove completeness and t-special soundness for a polynomially bounded t.
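For intuition, the standard counting argument behind these statements can be summarized as follows (this is our own back-of-the-envelope sketch, not a formal claim from the text):

```latex
% Sketch: on a fixed first message, a prover convincing the verifier with
% probability p can correctly answer p \cdot 2^{\lambda} of the 2^{\lambda}
% challenges. Whenever p > (t-1) \cdot 2^{-\lambda}, rewinding yields t
% accepting transcripts with distinct challenges, and t-special soundness
% extracts a witness; hence the knowledge error is roughly bounded by
\kappa(\lambda) \le \frac{t-1}{2^{\lambda}},
% which is negligible for t = \mathtt{poly}(\lambda) and matches the
% 2^{-\lambda} bound of Theorem 2 for the case t = 2.
```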

Definition 5

(Delayed-Input \(\varSigma \) -protocol [6]). A \(\varSigma \)-protocol \(\varPi =( \mathcal {P},\mathcal {V})\) for a relation \(\mathcal {R}\) is delayed-input if \(\mathcal {P}\) computes the first round having as input only the security parameter \(1^\lambda \) and \(\ell =|x|\).Footnote 5

2.1 Adaptive-Input Special Soundness and Proof of Knowledge

The special soundness of a \(\varSigma \)-protocol strictly requires the statement \(x \in L\) to be the same in the two accepting transcripts. We introduce a stronger notion referred to as adaptive-input special soundness. Roughly speaking, we require that it is possible to extract witnesses from a collision even if the two accepting 3-round transcripts are for two different instances. It is easy to see that adaptive-input special soundness implies extraction against provers that choose the theorem to be proved after seeing the challenge.

Definition 6

A \(\varSigma \)-protocol \(\varPi \) for relation \(\mathcal {R}\) enjoys adaptive-input special soundness if there exists an efficient algorithm \(\mathsf {AExtract}\) that, on input accepting 3-round transcripts \((a,c_1,z_1)\) for input \(x_1\) and \((a,c_2,z_2)\) for input \(x_2\), outputs witnesses \(w_1\) and \(w_2\) such that \((x_1,w_1)\in \mathcal {R}\) and \((x_2,w_2)\in \mathcal {R}\).

In this work we also define a protocol \(\varPi =(\mathcal {P},\mathcal {V})\) that is an adaptive-input proof of knowledge. The adaptive-input proof of knowledge property is the same as the proof of knowledge property, with the difference that the adversarial prover \(\mathcal {P}^\star \) can choose the statement when the last round is played. We require that the instance x given in output by \(\mathsf {AExtract}\) must be perfectly indistinguishable from an instance \(x'\) given in output by \(\mathcal {P}^\star \) in an execution of \(\varPi \) with \(\mathcal {V}\). The previous discussion about proving the proof of knowledge property from \(\ell \)-special soundness also applies when proving the adaptive-input proof of knowledge property from adaptive-input \(\ell \)-special soundness.

2.2 Adaptive-Input Witness Indistinguishability

The notion of adaptive-input WI formalizes security of the prover with respect to an adversarial verifier \({\mathcal {A}}\) that adaptively chooses the input instance to the protocol; that is, after seeing the first message of the prover. More specifically, for a delayed-input 3-round complete protocol \(\varPi \), we consider game \(\mathsf {ExpAWI}_{\varPi ,{\mathcal {A}}}\) between a challenger \({\mathcal {C}}\) and an adversary \({\mathcal {A}}\) in which the instance x and two witnesses \(w_0\) and \(w_1\) for x are chosen by \({\mathcal {A}}\) after seeing the first message of the protocol played by the challenger. The challenger then continues the game by randomly selecting one of the two witnesses, \(w_b\), and by computing the third message by running the prover’s algorithm on input the instance x, the selected witness \(w_b\) and the challenge received from the adversary. The adversary wins the game if she can guess which of the two witnesses was used by the challenger.

We now define the adaptive-input WI experiment \(\mathsf {ExpAWI}_{\varPi ,{\mathcal {A}}}(\lambda ,\mathsf {aux})\). This experiment is parameterized by a delayed-input 3-round complete protocol \(\varPi =(\mathcal {P},\mathcal {V})\) for a relation \(\mathcal {R}\) and by a PPT adversary \({\mathcal {A}}\). The experiment takes as input the security parameter \(\lambda \) and auxiliary information \(\mathsf {aux}\) for \({\mathcal {A}}\).

\(\mathsf {ExpAWI}_{\varPi ,{\mathcal {A}}}(\lambda ,\mathsf {aux})\):

  1. \({\mathcal {C}}\) randomly selects coin tosses r and runs \(\mathcal {P}\) on input \((1^\lambda ;r)\) to obtain a;

  2. \({\mathcal {A}}\), on input a and \(\mathsf {aux}\), outputs instance x, witnesses \(w_0\) and \(w_1\) such that \((x,w_0),(x,w_1)\in \mathcal {R}\), challenge c and internal state \(\mathtt {state}\);

  3. \({\mathcal {C}}\) randomly selects \(b\leftarrow \{0,1\}\) and runs \(\mathcal {P}\) on input \((x,w_b,c)\) to obtain z;

  4. \(b'\leftarrow {\mathcal {A}}((a,c,z),\mathsf {aux},\mathtt {state})\);

  5. if \(b=b'\) then output 1 else output 0.

We set \(\mathsf {AdvAWI}_{\varPi ,{\mathcal {A}}}(\lambda ,\mathsf {aux})=\left| \text{ Prob }\left[ \;\mathsf {ExpAWI}_{\varPi ,{\mathcal {A}}}(\lambda , \mathsf {aux})=1\;\right] -\frac{1}{2}\right| .\)

Definition 7

(Adaptive-Input Witness Indistinguishability). A delayed-input 3-round complete protocol \(\varPi \) is adaptive-input WI if for any PPT adversary \({\mathcal {A}}\) there exists a negligible function \(\nu \) such that for any \(\mathsf {aux}\in \{0,1\}^*\) it holds that \(\mathsf {AdvAWI}_{\varPi ,{\mathcal {A}}}(\lambda ,\mathsf {aux})\le \nu (\lambda ).\)

About DDH. The DDH assumption posits the hardness of distinguishing a randomly selected DH tuple from a randomly selected non-DH tuple with respect to a group generator algorithm \(\mathsf {IG}\). For the sake of concreteness, we consider a specific group generator that, on input \(1^\lambda \), randomly selects a \(\lambda \)-bit prime p such that \(q=(p-1)/2\) is also prime and outputs the (description of the) order q group \({\mathcal {G}}\) of the quadratic residues modulo p along with a random generator g of \({\mathcal {G}}\).
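The group generator just described can be sketched as follows; this is a toy implementation with function names of our own choosing, and the tiny parameter in the final lines is for illustration only (a real instantiation needs a cryptographic-size \(\lambda \)).

```python
import random

def is_prime(n, rounds=32):
    """Miller-Rabin probabilistic primality test."""
    if n < 2:
        return False
    for sp in (2, 3, 5, 7, 11, 13):
        if n % sp == 0:
            return n == sp
    d, s = n - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def IG(lam):
    """Sample a lam-bit safe prime p = 2q + 1 and a generator g of the
    order-q group of quadratic residues modulo p."""
    while True:
        q = random.randrange(2 ** (lam - 2), 2 ** (lam - 1))
        p = 2 * q + 1
        if is_prime(q) and is_prime(p):
            break
    while True:
        h = random.randrange(2, p - 1)
        g = pow(h, 2, p)        # squares are quadratic residues
        if g != 1:              # any non-identity QR generates the order-q group
            break
    return q, p, g

q, p, g = IG(16)                # toy size; real use needs lam of at least 2048
assert pow(g, q, p) == 1        # g lies in the subgroup of prime order q
```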

2.3 A \(\varSigma \)-Protocol for Partial Knowledge of DH/Non-DH Tuples

Let \(\mathcal{G}\) be a cyclic group of order p. We say that \(T=(g,A,B,X)\in \mathcal{G}^4\) is oneNDH if there exist \(\alpha ,\beta \in Z_p\) such that \(A=g^{\alpha }, B=g^\beta , X= g^{\alpha \beta +1}\). In this section we describe a \(\varSigma \)-protocol for proving that at least k out of n tuples are oneNDH. The \(\varSigma \)-protocol is based on the one of [10] and we stress that, just as in [10], it is perfect WI.

Formally, for \(1\le k\le n-1\), we construct \(\varSigma \)-protocol \(\varPi _{k,n}^{nddh}=(\mathcal {P}_{k,n}, \mathcal {V}_{k,n})\) for the polynomial-time relation

of the sequences of n tuples such that at least k of them are oneNDH. The prover \(\mathcal {P}_{k,n}\) and the verifier \(\mathcal {V}_{k,n}\) of \(\varPi _{k,n}^{nddh}\), on input n tuples \((g_1,A_1,B_1,X_1),\ldots ,(g_n,A_n,B_n,X_n)\), construct tuples \((g_i,A_i,B_i,Y_i)\) by setting \(Y_i=X_i/g_i\), for \(i=1,\ldots ,n\).

Then prover and verifier run the \(\varSigma \)-protocol \(\varSigma ^\mathsf {ddh}\) of [10] for proving that at least k of the n constructed tuples are DH.
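A quick numeric sanity check of the reduction above, with toy parameters of our own choosing: dividing the last component of a oneNDH tuple by the generator yields a DH tuple.

```python
# p = 2q + 1 is a small safe prime; g = 4 generates the order-q subgroup
# of quadratic residues modulo p.
p, q = 23, 11
g = 4

alpha, beta = 3, 7
A, B = pow(g, alpha, p), pow(g, beta, p)

X = pow(g, alpha * beta + 1, p)      # (g, A, B, X) is oneNDH
Y = X * pow(g, -1, p) % p            # the shift Y = X / g

assert Y == pow(g, alpha * beta, p)  # (g, A, B, Y) is a DH tuple
```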

Theorem 3

For every n and \(1\le k\le n-1\), \(\varPi _{k,n}^{nddh}\) is a \(\varSigma \)-protocol for the polynomial-time relation \({\mathsf {NDH}}_{k,n}\) with perfect WI.

Proof

The perfect WI property follows from the perfect WI of [10]. The proof is then completed by the following two simple observations. If at least k of the input tuples are oneNDH then at least k of the constructed tuples \((g_i,A_i,B_i,Y_i)\) are DH and the prover has a witness of this fact. On the other hand, if fewer than k of the input tuples are oneNDH then the transformed tuples contain fewer than k DH tuples.

2.4 Commitments from \(\varSigma \)-protocols

We define the notion of an Instance-Dependent Trapdoor Commitment scheme associated with a polynomial-time relation \(\mathcal {R}\) and show a construction that uses \(\varSigma \)-protocols and fits this definition.

Definition 8

(Instance-Dependent Trapdoor Commitment Scheme). Let \(\mathcal {R}\) be a polynomial-time relation. An Instance-Dependent Trapdoor Commitment (an IDTC, in short) scheme for \(\mathcal {R}\) with message space \(M\) is a quadruple of PPT algorithms \((\mathsf {Com},\mathsf {Dec},(\mathsf {Fake}_1,\mathsf {Fake}_2))\), where \(\mathsf {Com}\) is the randomized commitment algorithm that takes as input an instance \(x \in \hat{L}_\mathcal {R}\) (with \(|x|=\mathtt {poly}(\lambda )\)) and a message \(m\in M\) and outputs a commitment \(\mathtt {com}\) and a decommitment \(\mathtt {dec}\), and \(\mathsf {Dec}\) is the verification algorithm that takes as input \((x,\mathtt {com},\mathtt {dec},m)\) and decides whether m is the decommitment of \(\mathtt {com}\).

\((\mathsf {Fake}_1,\mathsf {Fake}_2)\) are randomized algorithms. \(\mathsf {Fake}_1\) takes as input an instance x (with \(|x|=\mathtt {poly}(\lambda )\)) and a witness w s.t. \((x,w)\in \mathcal {R}\), and outputs a commitment \(\mathtt {com}\) and equivocation information \(\mathtt {rand}\). \(\mathsf {Fake}_2\) takes as input x, w, m, and \(\mathtt {rand}\), and outputs \(\mathtt {dec}\) s.t. \(\mathsf {Dec}\), on input \((x,\mathtt {com},\mathtt {dec},m)\), accepts m as decommitment of \(\mathtt {com}\).

An Instance-Dependent Trapdoor Commitment scheme has the following properties:

  • Correctness: for all \(x\in \hat{L}_\mathcal {R}\), all \(m\in M\), it holds that

    $$\text{ Prob }\left[ \;(\mathtt {com},\mathtt {dec}) \leftarrow \mathsf {Com}(x,m) : \mathsf {Dec}(x,\mathtt {com},\mathtt {dec},m)=1\;\right] =1.$$
  • Binding: if \(x\notin L_\mathcal {R}\) then for every commitment \(\mathtt {com}\) there exists at most one message m s.t. \(\mathsf {Dec}(x,\mathtt {com},\mathtt {dec},m)=1\) for some value \(\mathtt {dec}\).

  • Hiding: for every receiver \({\mathcal {A}}\), for every auxiliary information \(\mathsf {aux}\), for all \(x\in L_\mathcal {R}\) and for all \(m_0,m_1 \in M\), it holds that

    $$\text{ Prob }\left[ \;b\leftarrow \{0,1\}; (\mathtt {com},\mathtt {dec})\leftarrow \mathsf {Com}(1^\lambda ,x,m_b): b={\mathcal {A}}(\mathsf {aux},x,\mathtt {com},m_0,m_1)\;\right] \le \frac{1}{2}.$$
  • Trapdoorness: the following two families of probability distributions are perfectly indistinguishable (namely, the two probability distributions coincide for all \((x,w,m)\) such that \((x,w)\in \mathcal {R}\) and \(m\in M\)):

    $$\{ (\mathtt {com},\mathtt {rand})\leftarrow \mathsf {Fake}_1(x,w); \mathtt {dec}\leftarrow \mathsf {Fake}_2(x,w,m,\mathtt {rand}): (\mathtt {com},\mathtt {dec})\}$$
    $$\{(\mathtt {com},\mathtt {dec})\leftarrow \mathsf {Com}(x,m):(\mathtt {com},\mathtt {dec})\}.$$

IDTC from \(\varSigma \) -protocol. Our construction follows similar constructions of [11, 13, 17]. Let \(\varPi =(\mathcal {P},\mathcal {V})\) be a \(\varSigma \)-protocol for a polynomial-time relation \(\mathcal {R}\) with associated NP-language \(L_\mathcal {R}\) and challenge length \(\lambda \). Let S be the special HVZK simulator for \(\varPi \) and let \((x,w)\) be s.t. \((x,w)\in \mathcal {R}\). Now we show an IDTC \(\mathsf {CS}^\varPi =(\mathsf {Com}^\varPi ,\mathsf {Dec}^\varPi ,(\mathsf {Fake}_1^\varPi , \mathsf {Fake}_2^\varPi ))\).

  • \(\mathsf {Com}^\varPi \) takes as input instance x and message \(m\in \{0,1\}^\lambda \), sets \((\mathtt {com},\mathtt {dec})\leftarrow S(x,m)\) and outputs \((\mathtt {com},\mathtt {dec})\).

  • \(\mathsf {Dec}^\varPi \) takes as input instance x and transcript \((\mathtt {com},m,\mathtt {dec})\), runs \(\mathcal {V}\) on input the instance and the transcript and returns \(\mathcal {V}\)’s output.

  • \(\mathsf {Fake}_1^\varPi \) takes as input instance x and witness w, samples random string \(\rho \) and runs \(\mathcal {P}\) on input \((1^\lambda ,x,w;\rho )\) to get the 1st message \(\mathsf {a}\) of \(\varPi \). \(\mathsf {Fake}_1^\varPi \) sets \(\mathtt {rand}=\rho \), \(\mathtt {com}=\mathsf {a}\) and outputs \((\mathtt {com},\mathtt {rand})\).

  • \(\mathsf {Fake}_2^\varPi \) takes as input \(x, w, m, \mathtt {rand}\) and runs \(\mathcal {P}\) on input \((1^\lambda , x, w, m,\mathtt {rand})\) to get the 3rd message \(\mathsf {z}\) of \(\varPi \). \(\mathsf {Fake}_2^\varPi \) sets \(\mathtt {dec}=\mathsf {z}\) and outputs \(\mathtt {dec}\).

Theorem 4

\(\mathsf {CS}^\varPi \) is an \(IDTC\).

Proof

The security proof relies only on the properties of \(\varPi \). Correctness follows from the completeness of \(\varPi \). Binding follows from the special soundness of \(\varPi \). Hiding and Trapdoorness follow from the SHVZK and the completeness of \(\varPi \).
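To make the construction concrete, here is a minimal sketch of \(\mathsf {CS}^\varPi \) instantiated with the \(\varSigma \)-protocol for DH tuples recalled in Sect. 4.1: commitment runs the SHVZK simulator on the message as challenge, while the fake algorithms run the honest prover. Parameters are toy-sized and all Python names are ours.

```python
import random

p, q = 23, 11      # safe prime p = 2q + 1 (toy size)
g = 4              # generator of the order-q subgroup of quadratic residues

def Com(T, m):
    """Commit to m by running the SHVZK simulator on challenge m."""
    g_, A, B, X = T
    z = random.randrange(q)
    a = pow(g_, z, p) * pow(A, -m, p) % p
    x = pow(B, z, p) * pow(X, -m, p) % p
    return (a, x), z                 # com = simulated 1st message, dec = 3rd message

def Dec(T, com, dec, m):
    """Accept iff (com, m, dec) is an accepting transcript for tuple T."""
    g_, A, B, X = T
    a, x = com
    return (pow(g_, dec, p) == a * pow(A, m, p) % p and
            pow(B, dec, p) == x * pow(X, m, p) % p)

def Fake1(T, alpha):
    """Honest 1st message of the Sigma-protocol; alpha is the trapdoor."""
    g_, A, B, X = T
    r = random.randrange(q)
    return (pow(g_, r, p), pow(B, r, p)), r

def Fake2(T, alpha, m, rand):
    """Equivocate: answer challenge m honestly using the witness alpha."""
    return (rand + m * alpha) % q

# A DH tuple with witness alpha: A = g^alpha and X = B^alpha.
alpha, beta = 3, 7
T = (g, pow(g, alpha, p), pow(g, beta, p), pow(g, alpha * beta, p))

com, dec = Com(T, 5)
assert Dec(T, com, dec, 5)                        # correctness
com2, rand = Fake1(T, alpha)
assert Dec(T, com2, Fake2(T, alpha, 9, rand), 9)  # trapdoor equivocation
```

Note how binding mirrors special soundness: two openings of the same commitment to different messages are a collision of the \(\varSigma \)-protocol, which cannot exist for a non-DH tuple.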

3 Adaptive-Input (kn)-Proof of Partial Knowledge

In this section we describe in detail our new transform for compound statements. For a high-level overview the reader is referred to Sect. 1.2.

Let \(\mathcal {R}\) be a polynomial-time relation admitting a delayed-input \(\varSigma \)-protocol \(\varPi =(\mathcal {P},\mathcal {V})\). Recall that delayed-input means that the prover does not need the instances of the statement to play the 1st round.

We describe a compiler that on input \(\varPi \) for \(\mathcal {R}\) outputs a delayed-input WIPoK \(\varPi ^k=(\mathcal {P}^k,\mathcal {V}^k)\) for the (kn)-threshold relation \(\mathcal {R}_k\) defined as follows

The main tools involved in our construction are the protocol \(\varPi _{k,n}^{nddh}\) described in Sect. 2.3 and the IDTC scheme described in Sect. 2.4. More precisely, the IDTC scheme is constructed using a \(\varSigma \)-protocol for DDH. Therefore, given a tuple \(T=(g,A,B,X)\) (either DH or non-DH), a message m and randomness r, we can compute \((\mathtt {com},\mathtt {dec})\) using the scheme described in Sect. 2.4. If T is a DH tuple, with \(A=g^{\alpha }\), then \(\alpha \) represents the trapdoor for the commitment \(\mathtt {com}\) and \(\mathtt {dec}\) is equal to \(\perp \). In this case, given randomness r, \(\mathtt {com}\), the tuple T and \(\alpha \), for every message m it is possible to compute \(\mathtt {dec}\) such that a receiver accepts \(\mathtt {com}\) as a commitment to the message m.

  • 1st Round. \(\mathcal {P}^k \Rightarrow \mathcal {V}^k\):

    1. Set \(({\mathcal {G}},p,g)\leftarrow \mathsf {IG}(1^\lambda )\).

    2. Randomly choose tuples \(T_1=(g_1,A_1,B_1,X_1),\ldots ,T_n=(g_n,A_n,B_n,X_n)\) of elements of \({\mathcal {G}}\) under the constraint that exactly k are oneNDH and \(n-k\) are DH, along with \(\alpha _1,\ldots ,\alpha _n\) such that \(A_i=g_i^{\alpha _i}\), for \(i=1,\ldots ,n\).

    3. Let \(b_1,\ldots ,b_k\) denote the indices of the k oneNDH tuples and \(\widetilde{b}_1,\ldots ,\widetilde{b}_{n-k}\) denote the indices of the \(n-k\) DH tuples.

    4. Run the prover of \(\varPi _{k,n}^{nddh}\) on input \(T=(T_1,\ldots ,T_n)\), witnesses \((\alpha _{b_1},\ldots ,\alpha _{b_k})\) and randomness \(r_{k,n}\) thus obtaining message \(a_{k,n}\). Send \(a_{k,n}\) to \(\mathcal {V}^k\).

    5. For \(i=1, \dots , n\): compute the first round \(a_i\) of \(\varPi \) by running \(\mathcal {P}\) with randomness \(r_i\); compute the pair \((\mathtt {com}_i,\mathtt {dec}_i)\) of commitment and decommitment of \(a_i\) using \(T_i\); send \((T_i,\mathtt {com}_i)\) to \(\mathcal {V}^{k}\).

  • 2nd Round. \(\mathcal {V}^k \Rightarrow \mathcal {P}^k\): randomly select a challenge c and send it to \(\mathcal {P}^k\).

  • 3rd Round. \(\mathcal {P}^k \Rightarrow \mathcal {V}^k\):

    1. Receive inputs \((x_1,\dots ,x_n)\) and witnesses \((w_{d_1},\dots , w_{d_k})\) for inputs \(x_{d_1},\dots , x_{d_k}\) (we denote by \(\widetilde{d}_1,\ldots ,\widetilde{d}_{n-k}\) the indices of the inputs for which no witness has been provided).

    2. Compute the third round of \(\varPi _{k,n}^{nddh}\) using c as challenge to get \(z_{k,n}\) and send it to \(\mathcal {V}^k\).

    3. Pick a random permutation \(\sigma \) of \(\{1,\ldots ,k\}\) to associate each of the k oneNDH tuples \(T_{b_1},\ldots ,T_{b_k}\) with one of the k inputs \(x_{d_1},\ldots ,x_{d_k}\) for which a witness is available.

    4. For \(i=1,\ldots ,k\):

       • Set \(j=d_{\sigma (i)}\) and \(t_j=b_i\).

       • Compute \(z_j\) by running \(\mathcal {P}\) on input \((x_j,w_j)\), \(a_{t_j}\), randomness \(r_{t_j}\) and challenge c.

       • Set \(M_j=(j,t_j,\mathtt {dec}_{t_j},a_{t_j},z_j)\).

    5. Pick a random permutation \(\tau \) of \(\{1,\ldots ,n-k\}\) to associate each of the \(n-k\) DH tuples \(T_{\widetilde{b}_1},\ldots ,T_{\widetilde{b}_{n-k}}\) with one of the \(n-k\) inputs \(x_{\widetilde{d}_1},\ldots ,x_{\widetilde{d}_{n-k}}\) for which no witness is available.

    6. For \(i=1,\dots ,n-k\):

       • Set \(j=\widetilde{d}_{\tau (i)}\) and \(t_j=\widetilde{b}_i\).

       • Run simulator S on input \(x_j\) and c obtaining \((a_j,z_j)\).

       • Use trapdoor \(\alpha _{t_j}\) to compute a decommitment \(\mathtt {dec}_{t_j}\) of \(\mathtt {com}_{t_j}\) to \(a_j\).

       • Set \(M_j=(j,t_j,\mathtt {dec}_{t_j},a_j,z_j)\).

    7. For \(j=1,\dots ,n\): send \(M_j\) to \(\mathcal {V}^k\).

  • \(\mathcal {V}^k\) accepts if and only if all the following conditions are satisfied:

    1. \((a_{k,n},c,z_{k,n})\) is an accepting transcript for \(\mathcal {V}_{k,n}^{nddh}\) with input T.

    2. All \(t_j\)’s are distinct.

    3. For \(j=1,\ldots ,n\): \(\mathtt {dec}_{t_j}\) is a valid decommitment of \(\mathtt {com}_{t_j}\) to \(a_j\) with respect to \(T_{t_j}\).

    4. For \(j=1,\ldots ,n\): \((a_j,c,z_j)\) is accepting for \(\mathcal {V}\) with input \(x_j\).

We will now show that \(\varPi ^k\) is an (adaptive-input) PoK and is adaptive-input WI for the relation \(\mathcal {R}_k\).

3.1 (Adaptive-Input) Proof of Knowledge

Theorem 5

Protocol \(\varPi ^k\) is a proof of knowledge for \(\mathcal {R}_k\).

Proof

The completeness property follows from the completeness of protocols \(\varPi _{k,n}^{nddh}\) and \(\varPi \), and from the correctness and trapdoorness properties of the Instance-Dependent Trapdoor Commitment scheme used.

Now we proceed by proving that our protocol is \(((n-1)\cdot k+2)\)-special sound; then, using the arguments of Sect. 2 about the proof of knowledge property of protocols that enjoy t-special soundness, we can conclude the proof by claiming that \(\varPi ^k\) is a proof of knowledge. There exists an efficient extractor that, for any sequence \((x_1,\ldots ,x_n)\) of n inputs and for any set of \(N=(n-1)\cdot k+2\) accepting transcripts of \(\varPi ^k\) that share the same first message and have different challenges, outputs witnesses for k of the n inputs. The extractor is based on the following observations.

First of all, observe that, by the special soundness of \(\varPi _{k,n}^{nddh}\), it is possible to extract a witness certifying that k of the tuples \(T_1,\ldots ,T_n\) appearing in the first message are oneNDH. Let us denote by \(b_1,\ldots ,b_k\) the indices of the oneNDH tuples. This implies that the commitments \(\mathtt {com}_{b_1},\ldots ,\mathtt {com}_{b_k}\) that appear in the shared first round of the N transcripts will be opened to the same strings \(a_{b_1},\ldots ,a_{b_k}\). We also observe that if two transcripts use the same input \(x_i\) with the same oneNDH tuple \(T_{b_i}\), then we can extract two transcripts of the \(\varSigma \)-protocol \(\varPi \) that share the same first message and have two different challenges. By the special soundness of \(\varPi \) there exists an extractor that efficiently extracts a witness. In other words, in order to be able to extract a witness for \(x_i\), \(x_i\) has to be associated with the same oneNDH tuple in two distinct transcripts.

The extractor, willing to get k witnesses, considers the N transcripts one at a time and stops as soon as it reaches a special transcript \(C^l\) in which, for \(j=1,\ldots ,k\), tuple \(T_{b_j}\) is associated with input \(x_{d_j}\) in \(C^l\) and in at least one transcript \(C^{l_j}\) with \(l_j<l\). Clearly, once such a transcript is reached, the extractor has obtained k witnesses. Now observe that a pair (oneNDH tuple, input \(x_i\)) can be used to eliminate at most one transcript. Moreover, there are \(n\cdot k\) such pairs and the first transcript exhibits exactly k of these pairs. Therefore the set of N input transcripts contains at least one special transcript.

Theorem 6

If \(\varPi \) is adaptive-input special sound then \(\varPi ^k\) is an adaptive-input proof of knowledge for \(\mathcal {R}_k\).

Proof

We prove the following stronger statement. There exists an efficient algorithm that, on input two accepting transcripts \((a, c_1, z_1)\) and \((a, c_2, z_2)\) for \(\varPi ^k\) that

  • are accepting with respect to two sequences of n theorems \((x^1_1,\dots ,x^1_n)\) and \((x^2_1,\dots ,x^2_n)\) respectively (the two sequences are potentially different),

  • share the same first round, and

  • have different challenges,

outputs, for each of the two sequences, k witnesses (for a total of \(2\cdot k\) witnesses).

The extractor is based on the following observations.

First of all, observe that, by the special soundness of protocol \(\varPi _{k,n}^{nddh}\), it is possible to extract a witness certifying that k of the tuples \(T_1,\ldots ,T_n\) appearing in the first message are oneNDH. Let us denote by \(b_1,\ldots ,b_k\) the indices of the oneNDH tuples. This implies that the commitments \(\mathtt {com}_{b_1},\ldots ,\mathtt {com}_{b_k}\) that appear in the common first round of the two transcripts will be opened to the same strings \(a_{b_1},\ldots ,a_{b_k}\).

To conclude the proof, we observe that if the two transcripts use the same oneNDH tuple \(T_{b_i}\) then we obtain two transcripts of the \(\varSigma \)-protocol \(\varPi \) that share the same first message and have two different challenges. By the adaptive-input special soundness of \(\varPi \) there exists an extractor that outputs the witnesses for both associated instances.

3.2 Adaptive-Input Witness Indistinguishability

Here we prove that \(\varPi ^k\) is WI even when \({\mathcal {A}}\) can select instances and witnesses adaptively after receiving the first round. We have the following theorem.

Theorem 7

Under the DDH assumption, if \(\varPi \) is SHVZK for \(\mathcal {R}\) then \(\varPi ^k\) is adaptive-input WI for relation \(\mathcal {R}_k\).

Proof

Let us fix a PPT adversary \({\mathcal {A}}\) and let us denote by X, \(W^0\) and \(W^1\) the instance and the witnesses of \(\varPi ^k\) output by \({\mathcal {A}}\) at Step 2 of \(\mathsf {ExpAWI}_{\varPi ^{k},{\mathcal {A}}}\). More precisely, we let \(X=(x_1,\ldots ,x_n)\) be the sequence of n instances output by \({\mathcal {A}}\) and \(W^0=((w_1^0,d_1^0),\ldots ,(w_k^0,d_k^0))\) and \(W^1=((w_1^1,d_1^1),\ldots ,(w_k^1,d_k^1))\) the two sequences of witnesses. We remark that \((x_{d_i^b},w_i^b)\in \mathcal {R}\) for \(i=1,\ldots ,k\) and \(b=0,1\), and that \(i\ne j\) implies \(d_i^0\ne d_j^0\) and \(d_i^1\ne d_j^1\).

Let \(m\le k\) be the number of instances of \(\varPi \) in X for which \(W^1\) contains a witness but \(W^0\) does not. Obviously, since \(W^0\) and \(W^1\) contain witnesses for the same number k of instances of \(\varPi \) in X, it must be the case that m is also the number of instances of \(\varPi \) in X for which \(W^0\) contains a witness and \(W^1\) does not. We can rename the instances of X, so that \(W^0\) and \(W^1\) can be written as

$$W^0=\bigl ((w_1^0,m+1),\ldots ,(w_m^0,2m),(w^0_{m+1},2m+1),\ldots ,(w_k^0,m+k)\bigr )\ \text {and}$$
$$W^1=\bigl ((w_1^1,1),\ldots ,(w^1_m,m),(w^1_{m+1},2m+1),\ldots ,(w_k^1,m+k)\bigr ).$$

For our proof we now distinguish the cases \(m=0\) and \(m\ne 0\). When \(m=0\), \(W^1\) and \(W^0\) contain witnesses for the same theorems (for \(\varPi \)). Therefore, by the perfect-WI property of \(\varPi \) Footnote 6, we can claim that if \(m=0\) then \(\mathsf {AdvAWI}_{\varPi ^k,{\mathcal {A}}}(\lambda ,\mathsf {aux})=0\). Now we consider the more interesting case where \(m\ne 0\).

We define the intermediate sequences of witnesses \(W_0,\ldots ,W_k\) in the following way.

  1. For \(i=0,\ldots ,m\): \(W_i\) consists of witnesses

     $$\begin{aligned} \begin{aligned} W_i=\bigl ((w_1^1,1),\ldots ,(w_i^1,i), (w_{i+1}^0,m+i+1),\ldots \\ \ldots ,(w_m^0,2m), (w_{m+1}^0,2m+1),&\ldots ,(w_{k}^0,m+k) \bigr ). \end{aligned} \end{aligned}$$

     Note that \(W_i\) contains witnesses for \((x_1,\ldots ,x_i,x_{m+1+i},\ldots ,x_{2m})\). Moreover, \(W_0\) coincides with \(W^0\) and in \(W_m\) the first m witnesses are from \(W^1\) and the remaining ones are from \(W^0\).

  2. For \(i=m+1,\ldots ,k\): \(W_i\) consists of witnesses

     $$\begin{aligned} \begin{aligned} W_i=\bigl ( (w_1^1,1),\ldots ,(w_m^1,m), (w_{m+1}^1,2m+1),\ldots \\ \ldots ,(w_{i}^1,m+i), (w_{i+1}^0,m+i+1),&\ldots ,(w_{k}^0,m+k) \bigr ). \end{aligned} \end{aligned}$$

     It is easy to see that \(W_k\) coincides with \(W^1\).

For \(i=0,\ldots ,k\), we define hybrid experiment \(\mathcal {H}_i\) as the experiment in which the challenger \({\mathcal {C}}\) uses sequence of witnesses \(W_i\) to complete the third step of the experiment \(\mathsf {ExpAWI}_{\varPi ^{k},{\mathcal {A}}}\). Clearly, \(\mathcal {H}_0\) is the experiment \(\mathsf {ExpAWI}_{\varPi ^{k},{\mathcal {A}}}\) when \({\mathcal {C}}\) picks \(b=0\) and \(\mathcal {H}_k\) is the same experiment when \({\mathcal {C}}\) picks \(b=1\). We conclude the proof by showing that, for \(i=0,\ldots ,k-1\), \(\mathcal {H}_i\) and \(\mathcal {H}_{i+1}\) are indistinguishable.

We start by proving indistinguishability of \(\mathcal {H}_i\) and \(\mathcal {H}_{i+1}\) for \(i=0,\ldots ,m-1\). We remind the reader that, in \(\mathcal {H}_i\) and \(\mathcal {H}_{i+1}\), the challenger \({\mathcal {C}}\) uses witnesses for the following k inputs:

figure a

To prove indistinguishability of \(\mathcal {H}_i\) and \(\mathcal {H}_{i+1}\) we consider six intermediate hybrids: \(\mathcal {H}_i^1,\ldots ,\mathcal {H}_i^6\).

  1. \(\varvec{\mathcal {H}^1_i(\lambda ,\mathsf {aux})}\) differs from \(\mathcal {H}_i(\lambda ,\mathsf {aux})\) in the way the accepting transcript for the theorem \(x_{i+1}\) is computed. More precisely, in \(\mathcal {H}_i(\lambda ,\mathsf {aux})\) the SHVZK simulator of \(\varPi \) was used to compute the transcript for \(x_{i+1}\), while in \(\mathcal {H}^1_i(\lambda ,\mathsf {aux})\) the transcript for \(x_{i+1}\) is computed using the honest-prover procedure, which also takes \(w_{i+1}\) as input. To prove the indistinguishability of the hybrids we can directly invoke the SHVZK property of \(\varPi \). We remark that this is possible only because the commitment of the first round of \(\varPi \) with respect to the theorem \(x_{i+1}\) is hiding.

  2. \(\varvec{\mathcal {H}^2_i(\lambda ,\mathsf {aux})}\) differs from \(\mathcal {H}^1_i(\lambda ,\mathsf {aux})\) in the way the tuples used to compute the commitments are chosen. More precisely, k oneNDH tuples and \(n-k-1\) DH tuples are chosen, and the last tuple is chosen non-DH. The additional non-DH tuple is used to compute the commitment of the first message of \(\varPi \) that will be associated with the theorem \(x_{i+1}\) in the third round. Even in this case it is possible to compute an accepting transcript for \(\varPi ^{k}\) because \(k+1\) witnesses are used instead of k; therefore there is no problem if \(k+1\) commitments are binding. The indistinguishability between the two hybrids is ensured by the DDH assumption.

  3. \(\varvec{\mathcal {H}^3_i(\lambda ,\mathsf {aux})}\): the only difference between this hybrid experiment and \(\mathcal {H}^2_i(\lambda ,\mathsf {aux})\) is that a oneNDH tuple is chosen instead of a non-DH tuple. As in the previous hybrid experiment, the considered tuple is used to compute the commitment of the first message of \(\varPi \) that will be associated with the theorem \(x_{i+1}\) in the third round. The indistinguishability between the two hybrids is ensured by the DDH assumption.

  4. \(\varvec{\mathcal {H}^4_i(\lambda ,\mathsf {aux})}\) differs from \(\mathcal {H}^3_i(\lambda ,\mathsf {aux})\) in that we use k oneNDH tuples, \(n-k-1\) DH tuples and one non-DH tuple. In this case the additional non-DH tuple is used to commit to the first round of \(\varPi \) that will be used as the first round of the accepting transcript with respect to \(x_{m+i+1}\). By the DDH assumption and the perfect WI property of \(\varPi _{k,n}^{nddh}\) we can claim that this hybrid is indistinguishable from the previous one.

  5. \(\varvec{\mathcal {H}^5_i(\lambda ,\mathsf {aux})}\) differs from \(\mathcal {H}^4_i(\lambda ,\mathsf {aux})\) in that we again use k oneNDH and \(n-k\) DH tuples. In this case the additional DH tuple is used to commit to the first round of \(\varPi \) that will be used as the first round of the accepting transcript with respect to \(x_{m+i+1}\). By the DDH assumption we can claim that this hybrid is indistinguishable from the previous one.

  6. \(\varvec{\mathcal {H}^6_i(\lambda ,\mathsf {aux})}\) differs from \(\mathcal {H}^5_i(\lambda ,\mathsf {aux})\) in the way the accepting transcript for the theorem \(x_{m+i+1}\) is computed. More precisely, in \(\mathcal {H}^5_i(\lambda ,\mathsf {aux})\) the honest-prover procedure of \(\varPi \) was used to compute the accepting transcript for \(x_{m+i+1}\), while in \(\mathcal {H}^6_i(\lambda ,\mathsf {aux})\) the transcript for \(x_{m+i+1}\) is computed using the SHVZK simulator of \(\varPi \). To prove the indistinguishability of this hybrid we invoke the SHVZK property of \(\varPi \). We remark that this is possible only because the commitment of the first round of \(\varPi \) with respect to the theorem \(x_{m+i+1}\) is hiding. We observe that this hybrid is equal to \(\mathcal {H}_{i+1}(\lambda ,\mathsf {aux})\).

Now we are able to complete the first part of the proof observing thatFootnote 7.

$$ \mathcal {H}_{i}(\lambda ,\mathsf {aux})\approx \mathcal {H}^1_{i}(\lambda ,\mathsf {aux})\approx \dots \approx \mathcal {H}^6_{i}(\lambda ,\mathsf {aux})=\mathcal {H}_{i+1}(\lambda ,\mathsf {aux}). $$

We have thus proved that \(\mathcal {H}_0\) and \(\mathcal {H}_m\) are indistinguishable. To complete the proof, we need to prove that \(\mathcal {H}_{m+i}\) and \(\mathcal {H}_{m+i+1}\) are indistinguishable for \(i=0,\ldots ,k-m-1\). This follows directly from the observation that \(\mathcal {H}_{m+i}\) and \(\mathcal {H}_{m+i+1}\) only differ in the witness used for \(x_{2m+i+1}\): in \(\mathcal {H}_{m+i}\) the witness from \(W^0\) is used by \({\mathcal {C}}\), whereas in \(\mathcal {H}_{m+i+1}\) \({\mathcal {C}}\) uses the witness from \(W^1\). Indistinguishability follows directly from the perfect WI of \(\varPi \).

4 On Adaptive-Input Special-Soundness of \(\varSigma \)-Protocols

In this section we show that \(\varSigma \)-protocols are not necessarily secure when the adversarial prover can choose the statement adaptively, when playing the 3rd round. These issues for the case of the Fiat-Shamir transform were noted in [1].

We then show an efficient compiler that on input a \(\varSigma \)-protocol belonging to the general class considered in [9, 21], outputs a \(\varSigma \)-protocol that is secure against adaptively chosen statements.

4.1 Soundness Issues in Delayed-Input \(\varSigma \)-Protocols

We start by showing that the notion of adaptive-input special soundness is non-trivial in the sense that there are \(\varSigma \)-protocols that are not special sound when the statement is chosen adaptively at the 3rd round.

Issues with Soundness. Let us consider the following well-known \(\varSigma \)-protocol \(\varPi _{\mathsf {DH}}\) for relation \({\mathsf {DH}}\). On common input \(T=(g,A,B,X)\) and private input \(\alpha \) such that \(A=g^\alpha \) and \(X=B^\alpha \) for the prover, the following steps are executed. We denote by q the size of the group \({\mathcal {G}}\).

  1. \(\mathcal {P}\) picks \(r\in {\mathbb {Z}}_q\) at random and computes and sends \(a=g^r\), \(x=B^r\) to \(\mathcal {V}\);

  2. \(\mathcal {V}\) chooses a random challenge \(c\in {\mathbb {Z}}_q\) and sends it to \(\mathcal {P}\);

  3. \(\mathcal {P}\) computes and sends \(z=r+c\alpha \) to \(\mathcal {V}\);

  4. \(\mathcal {V}\) accepts if and only if \(g^z=a\cdot A^c\) and \(B^z=x\cdot X^c\).

We now show that the above \(\varSigma \)-protocol is not special sound when an adversarial prover selects X adaptively.

Consider the following two conversations \(((a=g^r,x=B^s),c_1,z_1=r+\alpha \cdot c_1)\) and \(((a=g^r,x=B^s),c_2,z_2=r+\alpha \cdot c_2)\), respectively for tuples \((g,A,B,X_1)\) and \((g,A,B,X_2)\), where \(A=g^\alpha \), \(X_1=B^{\gamma _1}\), \(X_2=B^{\gamma _2}\) and \(\gamma _i=\frac{z_i-s}{c_i}=\alpha +\frac{r-s}{c_i}\), for \(i=1,2\). It is easy to see that both conversations are accepting (for their respective inputs) and that, if \(r\ne s\), neither tuple is a DH tuple and therefore no witness can be extracted. Notice that this is a very strong soundness attack, since the adversarial prover succeeds in convincing the verifier even though the statement is false. A similar argument can be used to prove that the \(\varSigma \)-protocol of [22] for the relation \(\mathsf {Com}=\{((g,h,G,H,m),r): G=g^r\ \text {and}\ H=h^{r+m}\}\) does not enjoy adaptive-input special soundness.
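This attack can be checked mechanically. In the sketch below (toy, insecure parameters chosen only for illustration), the cheating prover uses \(s\ne r\) in the first round and, after seeing the challenge, picks X so that both verification equations hold even though the resulting tuple is not DH:

```python
# Toy subgroup of Z_p^* with prime order q (illustrative parameters, not secure)
p, q, g = 23, 11, 2              # g has order q modulo p

alpha, beta = 3, 5
A, B = pow(g, alpha, p), pow(g, beta, p)

r, s = 4, 7                      # r != s: the cheating prover misbehaves in round 1
a = pow(g, r, p)
x = pow(B, s, p)                 # an honest prover would send B^r here

def adaptive_X(c, z):
    # after seeing c, choose X so that B^z = x * X^c holds
    return pow(B, (z - s) * pow(c, -1, q) % q, p)

for c in (2, 9):                 # two different challenges, same first message
    z = (r + alpha * c) % q
    X = adaptive_X(c, z)
    assert pow(g, z, p) == a * pow(A, c, p) % p    # first check passes
    assert pow(B, z, p) == x * pow(X, c, p) % p    # second check passes
    assert X != pow(B, alpha, p)                   # yet (g, A, B, X) is not DH
```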

Issues with Special Soundness. Let us now consider the case of Schnorr's \(\varSigma \)-protocol [27] for the relation \(\mathsf {DLog}=\{(({\mathcal {G}},g,Y),y): g^y=Y\}\). Clearly, this is a different case, since there is no false theorem to prove; the attack can only consist in producing accepting transcripts that violate special soundness (i.e., even though there are two accepting transcripts with the same first message, no witness can be extracted).

In Schnorr’s protocol, the prover on input \((Y,y)\in \mathsf {DLog}\) starts by sending \(a=g^r\), for a randomly chosen \(r \in Z_q\). Upon receiving challenge c, \(\mathcal {P}\) replies by computing \(z=r+yc\). \(\mathcal {V}\) accepts (acz) if \(g^z=a\cdot Y^c\).

Consider now accepting transcripts \((a,c_i,z_i)\) with respect to inputs \(Y_i\), \(i=1,2\). In this case, to extract witnesses \(y_i\) s.t. \((({\mathcal {G}},g,Y_i),y_i)\in \mathsf {DLog}\) one has to solve the following system with unknowns \(r,y_1,\) and \(y_2\).

$$\left\{ \begin{array}{lr} z_1=r+c_1\cdot y_1\\ z_2=r+c_2\cdot y_2 \end{array} \right. $$

Clearly the system above has q solutions, and thus it gives no information about either of the two witnesses.
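The underdetermination can be checked directly: for every candidate value of r there exist \(y_1,y_2\) satisfying both equations, so the collision pins down none of the unknowns. A toy check over \({\mathbb {Z}}_q\) with \(q=11\) and arbitrary transcript values:

```python
q = 11                            # toy prime challenge space
c1, c2, z1, z2 = 3, 7, 5, 9       # a "collision" with adaptively chosen Y_1, Y_2

# For every candidate r there exist y1, y2 satisfying both equations,
# so the system has q solutions and reveals nothing about the witnesses.
solutions = [
    (r, (z1 - r) * pow(c1, -1, q) % q, (z2 - r) * pow(c2, -1, q) % q)
    for r in range(q)
]
assert len(solutions) == q
assert all((r + c1 * y1) % q == z1 and (r + c2 * y2) % q == z2
           for r, y1, y2 in solutions)
```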

4.2 A Compiler for Adaptive-Input Special Soundness

In this section we show how to upgrade special soundness to adaptive-input special-soundness in all \(\varSigma \)-protocols belonging to the interesting class of \(\varSigma \)-protocols proposed in [9, 21].

We show a compiler that outputs a \(\varSigma \)-protocol \(\varPi ^\text {a}_f\) for proving knowledge of the pre-image of a homomorphic function. Our compiler takes as input a \(\varSigma \)-protocol \(\varPi _f=(\mathcal {P}_f,\mathcal {V}_f)\) for a generic relation defined in terms of a group homomorphism; this generic protocol includes Schnorr's [27], Guillou-Quisquater's [16] and the \(\varSigma \)-protocol for DH tuples as special cases [9, 21].

Let \(({\mathcal {G}},\star )\) and \((\mathcal {H},\otimes )\) be two groups with efficient operations and let \(f:{\mathcal {G}}\rightarrow \mathcal {H}\) be a one-way homomorphism from \({\mathcal {G}}\) to \(\mathcal {H}\). That is, for all \(x,y\in {\mathcal {G}}\), we have that \(f(x \star y)=f(x)\otimes f(y)\) and it is infeasible to compute w from f(w) for a randomly chosen w. In protocol \(\varPi _f\) for relation \(\mathcal {R}_{f}=\{(x,w): x=f(w)\}\), prover and verifier receive as input a description of the groups \({\mathcal {G}}\) and \(\mathcal {H}\) and \(x\in \mathcal {H}\). The prover receives w such that \(x=f(w)\) as a private input. The prover and verifier execute the following steps:

  1. \(\mathcal {P}_f\) picks \(r\leftarrow {\mathcal {G}}\), sets \(a\leftarrow f(r)\) and sends a to \(\mathcal {V}_f\);

  2. \(\mathcal {V}_f\) randomly selects a challenge c and sends it to \(\mathcal {P}_f\);

  3. \(\mathcal {P}_f\), on input \(r,x,w\) and c, computes \(z=r\star w^c\) and sends it to \(\mathcal {V}_f\);

  4. \(\mathcal {V}_f\) accepts if and only if \(f(z)=a\otimes x^c\).
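As a sanity check, the four steps above can be executed for the instantiation \(f(w)=g^w\), which recovers Schnorr's protocol (a toy sketch with illustrative, insecure parameters):

```python
import random

# The generic protocol with f(w) = g^w: (G, *) = (Z_q, +) and (H, ⊗) is the
# subgroup <g> of Z_p^*. Toy, insecure parameters for illustration only.
p, q, g = 23, 11, 2

def f(w):                         # group homomorphism: f(u + v) = f(u) * f(v)
    return pow(g, w, p)

w = 6                             # private input (witness)
x = f(w)                          # common input

r = random.randrange(q)           # 1. P picks r <- G and sends a = f(r)
a = f(r)
c = random.randrange(q)           # 2. V sends a random challenge
z = (r + w * c) % q               # 3. P sends z = r * w^c (additively: r + w*c)

assert f(z) == a * pow(x, c, p) % p   # 4. V checks f(z) = a ⊗ x^c
```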

It is easy to see that this protocol can be instantiated to give Schnorr's [27] and Guillou-Quisquater's [16] \(\varSigma \)-protocols as special cases. Theorem 3 of [21] gives sufficient conditions for \(\varPi _{f}\) to be special sound. Specifically, given a collision \((a,c_1,z_1)\) and \((a,c_2,z_2)\) for common input x, it is possible to extract w such that \(x=f(w)\) if an integer y and an element \(u\in {\mathcal {G}}\) are known and it holds that

  1. \(\gcd (c_1-c_2,y)=1\);

  2. \(f(u)=x^y\).

It is not difficult to see that this is the case when the protocol is instantiated for all the relations described above. We also observe that, since Schnorr’s protocol is a special case of this protocol, protocol \(\varPi _f\) does not enjoy adaptive-input special soundness.

From \(\varPi _f\) to \(\varPi ^\text {a}_f\). We next show how to efficiently transform this \(\varSigma \)-protocol into one that enjoys adaptive-input special soundness. The underlying idea is that an adaptive attack against such a protocol consists in misbehaving when playing the first round. For instance, in the case of the \(\varSigma \)-protocol for DH, \(\mathcal {P}^*\) has to send a non-DH tuple in the 1st round, while the protocol prescribes a DH tuple. We therefore convert \(\varPi _f\) into \(\varPi ^\text {a}_f\) by asking the prover to also give an auxiliary proof of knowledge of the randomness used to correctly compute the first round of \(\varPi _f\). Notice that on this auxiliary proof an adaptive-input selection attack cannot take place, since the adversarial prover is stuck with the content of the 1st round of \(\varPi _f\), which therefore already specifies the statement to prove. We show below that special soundness allows us to recover the randomness used to compute the first round of \(\varPi _f\); then the same argument as in Theorem 3 of [21] allows us to extract the witness from a single transcript.

We now describe the compiler more formally and explain why it works.

Let us start with the following observation. Consider an accepting transcript \((a,c,z)\) of \(\varPi _f\) for input x. If r such that \(f(r)=a\) is available, then it is possible to compute a witness w for x. Indeed, from Theorem 3 of [21] it follows that we can compute w as \(w=u^{\alpha } \star (z \star r^{-1})^{\beta }\), where \(\alpha \) and \(\beta \) are such that \(y\cdot \alpha +c\cdot \beta =1\) (see footnote 8). We use an argument already given in Theorem 3 of [21] to prove that \(f(w)=x\), where \(w= u^\alpha \star (z \star r^{-1})^\beta \). First we observe that \(f(z)=a\otimes x^c\), which implies that \(f(z \star r^{-1})=x^c\). Then we observe (as in [21]) that \(f(w)=f(u^\alpha \star (z \star r^{-1})^\beta )=f(u)^\alpha \otimes f(z \star r^{-1})^\beta =x^{y\alpha }\otimes x^{c\beta }=x^{y\alpha +c\beta }=x\), which proves that \(f(w)=x\).
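For the Schnorr instantiation, the observation can be traced through concretely: there \(y=q\) and \(u=0\) (so \(f(u)=g^0=1=x^q\)), hence \(\beta =c^{-1} \bmod q\) and the formula collapses to the textbook computation. A toy sketch (illustrative, insecure parameters):

```python
# Single-transcript extraction once r with f(r) = a is known (Schnorr case:
# y = q, u = 0, so alpha*q + beta*c = 1 gives beta = c^{-1} mod q).
p, q, g = 23, 11, 2
w = 6
x = pow(g, w, p)

r = 4
a = pow(g, r, p)
c = 5
z = (r + w * c) % q                    # one accepting transcript (a, c, z)

beta = pow(c, -1, q)
w_extracted = (z - r) * beta % q       # w = u^alpha * (z * r^{-1})^beta
assert pow(g, z, p) == a * pow(x, c, p) % p
assert w_extracted == w
```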

Consider the protocol \(\varPi _f^\text {a}\) consisting of the parallel execution of two instances of \(\varPi _f\). For common input x, the first instance of \(\varPi _f\) is executed on common input x, whereas in the second instance the common input is the first message a of the first instance. The verifier of \(\varPi _f^\text {a}\) sends the same challenge to both instances and accepts if and only if it accepts in both instances. Since in a collision the first message is fixed, both transcripts have the same first message a, and therefore we can invoke special soundness to extract r such that \(f(r)=a\). Once r is available, we apply the observation above and extract witnesses for \(x_1\) and \(x_2\) (the two inputs of the two 3-round transcripts constituting the collision). We thus have the following theorem.
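Instantiated with Schnorr, the compiled protocol and its extractor can be sketched as follows (toy, insecure parameters; an illustration of the idea, not the construction at cryptographic strength):

```python
# Compiled protocol for the Schnorr instantiation: the second parallel
# instance proves knowledge of the randomness r behind the first instance's
# message a, with a itself as the statement (fixed already in round 1).
p, q, g = 23, 11, 2
w = 6; x = pow(g, w, p)

r  = 4;  a  = pow(g, r, p)       # instance 1: statement x, witness w
r2 = 9;  a2 = pow(g, r2, p)      # instance 2: statement a, witness r

def respond(c):                  # third round of both instances, same challenge
    return (r + w * c) % q, (r2 + r * c) % q

# A collision: same first message (a, a2), two different challenges.
c1, c2 = 5, 8
(z1, y1), (z2, y2) = respond(c1), respond(c2)

# Both transcripts accept:
for c, z, y in ((c1, z1, y1), (c2, z2, y2)):
    assert pow(g, z, p) == a  * pow(x, c, p) % p
    assert pow(g, y, p) == a2 * pow(a, c, p) % p

# Special soundness of the second instance recovers r ...
r_rec = (y1 - y2) * pow(c1 - c2, -1, q) % q
assert r_rec == r
# ... and with r in hand, each single transcript of instance 1 yields w:
assert (z1 - r_rec) * pow(c1, -1, q) % q == w
assert (z2 - r_rec) * pow(c2, -1, q) % q == w
```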

Theorem 8

If there exists a \(\varSigma \)-protocol \(\varPi _f\) for \(\mathcal {R}_f\), then there exists a \(\varSigma \)-protocol \(\varPi ^\text {a}_f\) for \(\mathcal {R}_f\) that enjoys adaptive-input special soundness.

5 On the Adaptive-Input Soundness of [6]’s Transform

Ciampi et al. in [6] show a compiler that takes as input two \(\varSigma \)-protocols, \(\varPi _0\) and \(\varPi _1\) for languages \(L_0\) and \(L_1\), and outputs a new \(\varSigma \)-protocol \({\varPi ^{\mathsf {OR}}}\) for \(L_0 \vee L_1\) in which the instance for the language \(L_1\) is required by the prover only in the 3rd round. The compiler requires that \(\varPi _1\) be delayed-input, and the authors show that the output of the compiler is a \(\varSigma \)-protocol, so that it enjoys special soundness. In this section we assume that \(\varPi _1\) is adaptive-input special sound, and we show that \({\varPi ^{\mathsf {OR}}}\) then enjoys adaptive-input special soundness as well.

5.1 Overview of the Construction of [6]

We start with a succinct description of the main building block used in [6].

t-Instance-Dependent Trapdoor Commitment. Ciampi et al. in [6] define the notion of a \(t\)-Instance-Dependent Trapdoor Commitment (\(t\)-\(\mathsf {IDTC}\)) scheme. Such a scheme works with respect to a polynomial-time relation \(\mathcal {R}\). More formally, given a pair (x, w) s.t. \((x,w)\in \mathcal {R}\), it is possible to compute a commitment to a message (with respect to some message space \(M\)) using only the instance x and the message. Afterwards, it is possible to open the commitment using the randomness of the commitment phase, or to equivocate the commitment using the witness w.

The \(t\)-\(\mathsf {IDTC}\) scheme is defined by a triple of PPT algorithms (\(\mathsf {TCom}\), \(\mathsf {TDec}\), \(\mathsf {TFake}\)), where \(\mathsf {TCom}\) and \(\mathsf {TDec}\) are the honest commitment and decommitment procedures, and \(\mathsf {TFake}\) is the equivocation procedure that, given a witness for an instance x, equivocates any commitment computed using x as input of \(\mathsf {TCom}\). The properties of a \(t\)-\(\mathsf {IDTC}\) scheme are: correctness, hiding, trapdoor and t-Special Extractability. The property of t-Special Extractability informally says that if the sender opens the same commitment in t different ways, then it is possible to efficiently extract the witness w. For more details see [6].

The authors of [6] show how to construct, from \(\varSigma \)-protocols, \(2\)-\(\mathsf {IDTC}\) schemes that are perfectly hiding, perfectly trapdoor and 2-Special Extractable.

In the rest of this section, when a player runs the algorithm \(\mathsf {TCom}\) on input (x, m), it obtains a pair (\(\mathtt {com}\), \(\mathtt {dec}\)), where \(\mathtt {com}\) is a commitment to the message m and \(\mathtt {dec}\) is the decommitment value. To check whether \(\mathtt {dec}\) is a valid decommitment of \(\mathtt {com}\) with respect to the message m, we use the algorithm \(\mathsf {TDec}\). To compute a fake opening of the commitment \(\mathtt {com}\) with respect to a message \(m'\ne m\), a player can use the algorithm \(\mathsf {TFake}\) with input (\(\mathtt {com}\), \(\mathtt {dec}\)).
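A minimal toy sketch of this interface for the DLog relation may help fix ideas (a Pedersen-style commitment standing in for the DH-tuple construction of [6]; names and parameters are illustrative, not the scheme of [6] itself):

```python
# Toy 2-IDTC for the DLog relation (Pedersen-style; an illustrative stand-in
# for the DH-tuple construction of [6], with insecure toy parameters).
p, q, g = 23, 11, 2
alpha = 3                        # witness for the instance A = g^alpha
A = pow(g, alpha, p)

def TCom(m, r):                  # commit to m in Z_q using only the instance A
    return pow(g, r, p) * pow(A, m, p) % p

def TFake(m, r, m2):             # with trapdoor alpha, open the commitment as m2
    return (r + alpha * (m - m2)) % q

m, r = 5, 7
com = TCom(m, r)                 # honest commitment; TDec checks TCom(m, r) == com
m2 = 9
r2 = TFake(m, r, m2)
assert TCom(m2, r2) == com       # trapdoor property: a valid opening to m2 != m

# 2-Special Extractability: two different openings reveal the witness alpha
extracted = (r - r2) * pow(m2 - m, -1, q) % q
assert extracted == alpha
```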

The Construction of [6]. Let \(\mathcal {R}_0\) be a relation admitting a \(t\)-\(\mathsf {IDTC}\) scheme, with \(t=2\) or \(t=3\). Let \(\mathcal {R}_1\) be a relation admitting a delayed-input \(\varSigma \)-protocol \(\varPi _1\) with associated simulator \(S^1\).

We show a \(\varSigma \)-protocol \({\varPi ^{\mathsf {OR}}}=({\mathcal {P}^{\mathsf {OR}}}, {\mathcal {V}^{\mathsf {OR}}})\) for the OR relation:

$$\mathcal {R}^\mathsf {OR}=\left\{ ((x_0,x_1),w):((x_0,w)\in \mathcal {R}_0\wedge x_1\in \hat{L}_{\mathcal {R}_1}) \ \mathsf {OR}\ ((x_1,w)\in \mathcal {R}_1\wedge x_0\in \hat{L}_{\mathcal {R}_0})\right\} \!.$$

The initial common input is \(x_0\) and the other input \(x_1\) and the witness w for \((x_0,x_1)\) are available to the prover only at the 3rd round. We let \(b \in \{0,1\}\) be such that \((x_b,w)\in \mathcal {R}_b\). The construction of [6] is described below.

Common input: \((x_0,1^\lambda )\), where \(\lambda \) is the length of the instance of \(\hat{L}_{\mathcal {R}_1}\).

  1. \({\mathcal {P}^{\mathsf {OR}}}\) executes the following steps:

     1.1. pick random \(r_1\) and compute the 1st round \(a_1\) of the delayed-input \(\varSigma \)-protocol \(\varPi _1\);

     1.2. compute a pair \((\mathtt {com},\mathtt {dec}_1)\) of commitment and decommitment of \(a_1\);

     1.3. send \(\mathtt {com}\) to \({\mathcal {V}^{\mathsf {OR}}}\).

  2. \({\mathcal {V}^{\mathsf {OR}}}\) sends a random challenge c.

  3. \({\mathcal {P}^{\mathsf {OR}}}\), on input \(((x_0,x_1),c,(w,b))\) s.t. \((x_b,w)\in \mathcal {R}_b\), executes the following steps:

     3.1. if \(b=1\), compute the 3rd round \(z_1\) of \(\varPi _1\) using as input \((x_1,w,c)\);

     3.2. if \(b=1\), send \((\mathtt {dec}_1,a_1,z_1)\) to \({\mathcal {V}^{\mathsf {OR}}}\);

     3.3. if \(b=0\), run simulator \(S^1\) on input \(x_1\) and c, obtaining \((a_2,z_2)\); use the trapdoor to compute a decommitment \(\mathtt {dec}_{2}\) of \(\mathtt {com}\) as \(a_2\);

     3.4. if \(b=0\), send \((\mathtt {dec}_2,a_2,z_2)\) to \({\mathcal {V}^{\mathsf {OR}}}\).

  4. \({\mathcal {V}^{\mathsf {OR}}}\), upon receiving \((\mathtt {dec},a,z)\), accepts if and only if the following conditions are satisfied:

     4.1. \((a,c,z)\) is an accepting conversation for \(x_1\);

     4.2. \(\mathtt {dec}\) is a valid decommitment of \(\mathtt {com}\) for the message a.
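The \(b=0\) branch can be sketched concretely (toy, insecure parameters; the Pedersen-style commitment stands in for a 2-IDTC, Schnorr stands in for \(\varPi _1\), and encoding the group element \(a_2\) into the message space by reduction mod q is a simplification of ours, not part of [6]):

```python
# b = 0 branch: we hold the trapdoor alpha for x_0, so we simulate Pi_1
# (Schnorr for x_1) and equivocate the commitment. Toy parameters only.
p, q, g = 23, 11, 2
alpha = 3
A = pow(g, alpha, p)                    # x_0: instance whose witness we know

def TCom(m, r):  return pow(g, r, p) * pow(A, m, p) % p
def TFake(m, r, m2):  return (r + alpha * (m - m2)) % q

# Round 1: commit before x_1 is known (delayed input) - commit to a dummy.
r0 = 7
com = TCom(0, r0)

# Round 2: the challenge arrives.
c = 5

# Round 3: x_1 arrives; run the HVZK simulator of Schnorr on (x_1, c):
x1 = pow(g, 8, p)                       # an instance whose witness we lack
z2 = 4                                  # simulator: pick z, derive a_2 ...
a2 = pow(g, z2, p) * pow(x1, -c, p) % p # ... so that g^z2 = a2 * x1^c

m2 = a2 % q                             # toy encoding of a2 into Z_q
dec2 = TFake(0, r0, m2)                 # equivocate com to open as a2

# Verifier's checks:
assert pow(g, z2, p) == a2 * pow(x1, c, p) % p   # accepting for x_1
assert TCom(m2, dec2) == com                     # valid opening of com
```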

5.2 Adaptive-Input Security of \(\varPi ^{\mathsf {OR}}\)

We now show that \(\varPi ^{\mathsf {OR}}\) preserves the adaptive-input special soundness of the underlying \(\varSigma \)-protocol.

Theorem 9

If \(\mathcal {R}_0\) admits a \(2\)-\(\mathsf {IDTC}\) and \(\mathcal {R}_1\) admits a delayed-input adaptive-input special-sound \(\varSigma \)-protocol, then \(\varPi ^{\mathsf {OR}}\) is an adaptive-input special-sound \(\varSigma \)-protocol.

Proof

The claim follows from the adaptive-input special soundness of the underlying \(\varSigma \)-protocol \(\varPi _1\) and from the 2-Special Extractability property of the \(2\)-\(\mathsf {IDTC}\) scheme. More formally, consider an accepting transcript \((\mathtt {com},c,(z,a,\mathtt {dec}))\) for input \((x_0,x_1)\) and an accepting transcript \((\mathtt {com},c',(z',a',\mathtt {dec}'))\) for input \((x_0,x_1')\), where \(c'\ne c\) and \(x_1\) is potentially different from \(x_1'\). We observe that:

  • if \(a=a'\) then by the property of adaptive-input special soundness of \(\varPi _1\) there exists an efficient extractor \(\mathsf {AExtract}\) that, given as input \(((a,c,z), x_1)\) and \(((a',c',z'), x_1')\), outputs \(w_1\) and \(w_1'\) s.t. \((x_1, w_1)\in \mathcal {R}_1\) and \((x_1', w_1')\in \mathcal {R}_1\);

  • if \(a\ne a'\), then \(\mathtt {dec}\) and \(\mathtt {dec}'\) are two openings of \(\mathtt {com}\) with respect to \(x_0\) for messages \(a\ne a'\); then we can obtain a witness \(w_0\) by the 2-Special Extractability of the \(2\)-\(\mathsf {IDTC}\) scheme.

A similar argument can be used to show that if \(\mathcal {R}_0\) admits a \(3\)-\(\mathsf {IDTC}\) and \(\mathcal {R}_1\) admits a delayed-input \(\varSigma \)-protocol with adaptive-input special soundness, then \(\varPi ^{\mathsf {OR}}\) enjoys the adaptive-input proof of knowledge property.

6 Extension to Multiple Relations

In this section, we generalize the result of Sect. 3 to the case of different relations. More specifically, given delayed-input \(\varSigma \)-protocols \(\varPi _1,\ldots ,\varPi _n\) for polynomial-time relations \(\mathcal {R}_1,\ldots ,\mathcal {R}_n\), we construct, for a positive constant k, an adaptive-input proof of partial knowledge \(\varGamma \) for the threshold polynomial-time relation

$$\begin{aligned} \mathcal {R}^{{\mathsf {thres}}}&=\biggl \{ \Bigl (\bigl (x_1,\ldots ,x_n,k\bigr ),\bigl ((w_1,d_1),\ldots ,(w_k,d_k)\bigr )\Bigr ): 1 \le d_1<\cdots <d_k\le n \\&\text {and}\ (x_{d_i}, w_i)\in \mathcal {R}_{d_i}\ \text {for}\ i=1,\dots ,k\ \text {and}\ x_1\in \hat{L}_1,\dots ,x_n\in \hat{L}_n \biggr \}. \end{aligned}$$

We remind the reader that \(\hat{L}_1,\ldots ,\hat{L}_n\) are the input languages associated with the polynomial-time relations \(\mathcal {R}_1,\ldots ,\mathcal {R}_n\).

Protocol \(\varGamma \) uses delayed-input protocol \(\varPi ^{k}\), in the adaptive-input special-soundness version, presented in Sect. 3 for relation \({\mathsf {NDH}}_{k,n}\). We remark that protocol \(\varPi _{k,n}^{\text {ddh}}\) of Sect. 2.3 would not work here since the prover of \(\varGamma \) learns the actual statement to be proved just before the third round.

  • 1st Round. \(\varGamma .{\mathsf {Prover}}\Rightarrow \varGamma .{\mathsf {Verifier}}\):

    \(\varGamma .{\mathsf {Prover}}\) receives as unary inputs the security parameter \(\lambda \), the number n of theorems that will be given as input at the beginning of the third round, and the number k of witnesses that will be provided.

    1. Set \(({\mathcal {G}},p,g)\leftarrow \mathsf {IG}(1^\lambda )\).

    2. For \(j=1,\dots ,n\):

       2.1. Randomly sample a non-DH tuple \(T^0_j=(g_j,A_j,B_j,X_j)\) over \({\mathcal {G}}\), along with \(\alpha _j\) such that \(A_j=g_j^{\alpha _j}\).

       2.2. Set \(Y_j=B_j^{\alpha _j}\) and \(T^1_j=(g_j,A_j,B_j,Y_j)\) (note that the quadruple \(T^1_j\) is by construction a DH tuple).

    3. Select a random string \(R_{k,n}\) and use it to compute the first-round message \(a_{k,n}\) of \(\varPi ^{k}\) by running prover \(\mathcal {P}^{k}\). Send \(a_{k,n}\) to \(\varGamma .{\mathsf {Verifier}}\).

    4. For \(j=1,\dots ,n\):

       4.1. Select random strings \(R^0_j\) and \(R^1_j\) and use them to compute the first rounds \(a^0_j\) and \(a^1_j\) of \(\varPi _j\) by running prover \(\mathcal {P}_j\).

       4.2. Compute the pair \((\mathtt {com}^0_j,\mathtt {dec}^0_j)\) of commitment and decommitment of the message \(a^0_j\) using the non-DH tuple \(T^0_j\).

       4.3. Compute the commitment \(\mathtt {com}^1_j\) of the message \(a^1_j\) using the DH tuple \(T^1_j\).

       4.4. Send the pairs \((T^0_j,\mathtt {com}^0_j)\) and \((T^1_j,\mathtt {com}^1_j)\) in random order to \(\varGamma .{\mathsf {Verifier}}\).

  • 2nd Round. \(\varGamma .{\mathsf {Verifier}}\Rightarrow \varGamma .{\mathsf {Prover}}\): \(\varGamma .{\mathsf {Verifier}}\) randomly selects a challenge c and sends it to \(\varGamma .{\mathsf {Prover}}\).

  • 3rd Round. \(\varGamma .{\mathsf {Prover}}\Rightarrow \varGamma .{\mathsf {Verifier}}\):

    \(\varGamma .{\mathsf {Prover}}\) receives theorems \(x_1,\ldots ,x_n\) and, for \(d_1<\ldots <d_k\), witnesses \(w_1,\dots ,w_k\) for theorems \(x_{d_1},\dots , x_{d_k}\), respectively. We let \(\widetilde{d}_1<\ldots <\widetilde{d}_{n-k}\) denote the indices of the theorems for which no witness has been provided.

    1. For \(l=1,\dots , k\):

       1.1. Use j as a shorthand for \(d_l\).

       1.2. Set \(U_j=T^1_j\) and \(\hat{U}_j=T^0_j\).

       1.3. Compute the third round \(z_j\) of \(\varPi _j\) by running prover \(\mathcal {P}_j\) on input \((x_j,w_l)\), the randomness \(R_j^0\) used to compute the first round \(a_j^0\), and the challenge c.

       1.4. Set \(M_j=(a_j^0,z_j,\mathtt {dec}^0_j,\hat{U}_j)\).

    2. For \(l=1, \dots , n-k\):

       2.1. Set \(j=\widetilde{d}_l\).

       2.2. Set \(U_j=T^0_j\) and \(\hat{U}_j=T^1_j\).

       2.3. Run the simulator \(S_j\) of \(\varPi _j\) on input \(x_j\) and c, thus obtaining \((\widetilde{a}_j^1,z_j)\).

       2.4. Use the trapdoor \(\alpha _j\) to compute the decommitment \(\mathtt {dec}^1_j\) of \(\mathtt {com}^1_j\) as \(\widetilde{a}_j^1\).

       2.5. Set \(M_j=(\widetilde{a}_j^1,z_j,\mathtt {dec}^1_j,\hat{U}_j)\).

    3. For \(l=1,\dots , n\), send \(M_l\) to \(\varGamma .{\mathsf {Verifier}}\).

    4. Compute the third round \(z_{k,n}\) of \(\varPi ^{k}\) by running prover \(\mathcal {P}^{k}\) of \(\varPi ^{k}\) on input the tuples \((U_1,\dots ,U_n)\), the witnesses \(\alpha _{d_1},\ldots ,\alpha _{d_k}\), and the randomness \(R_{k,n}\) used to compute the first round \(a_{k,n}\); send \(z_{k,n}\) to \(\varGamma .{\mathsf {Verifier}}\).

  • \(\varGamma .{\mathsf {Verifier}}\) accepts if and only if the following conditions are satisfied.

    1. Check that \((a_{k,n},c,z_{k,n})\) is an accepting conversation for \(\mathcal {V}^{k}\) on input \(U_1,\ldots ,U_n\).

    2. For \(i=1,\dots ,n\):

       Check that the tuples \(T^0_i\) and \(T^1_i\) differ only in the last component.

       Check that \(\{U_i,\hat{U}_i\}=\{T_i^0,T_i^1\}\).

       Write \(M_i\) as \(M_i=(a_i,z_i,\mathtt {dec}_i,\hat{U}_i)\).

       Check that \(\mathtt {dec}_i\) is a decommitment of one of \(\mathtt {com}^0_i\) and \(\mathtt {com}^1_i\) as \(a_i\) with respect to the tuple \(\hat{U}_i\).

       Check that \((a_i,c,z_i)\) is an accepting conversation for \(\varPi _i\) on input \(x_i\).

Theorem 10

\(\varGamma \) is a proof of knowledge.

Proof

The completeness property follows from the completeness of protocols \(\varPi ^{k}\) and \(\varPi _i\), for \(i\in \{1,\dots ,n\}\), and from the correctness and trapdoorness property of the Instance-Dependent Trapdoor Commitment scheme used.

Now we proceed by proving that our protocol is \((2n+k)\)-special sound; then, using the arguments of Sect. 2 about the proof of knowledge property of protocols that enjoy t-special soundness, we conclude that \(\varGamma \) is a proof of knowledge. In more detail, we prove that there exists an efficient extractor which, for any sequence \((x_1,\ldots ,x_n)\) of n inputs and for any set of \(2n+k\) accepting conversations of \(\varGamma \) that share the same first message and have pairwise distinct challenges, outputs a witness \(w_i\) s.t. \((x_i, w_i)\in \mathcal {R}_i\) for some \(i\in \{1,\dots ,n\}\). The extractor processes a set of \(2n+k\) such accepting conversations \((a, c^j, z^j)\) (with \(j=1,\dots ,2n+k\)). For each conversation \((a, c^j, z^j)\) processed by the extractor, one of the following two cases applies.

  1. There are two conversations of the \(\varSigma \)-protocol \(\varPi _i\) for theorem \(x_i\) that share the same first message \(a_i\) and have two different challenges. Then, by the special soundness property of \(\varPi _i\), one can efficiently get a witness \(w_i\) for theorem \(x_i\).

  2. The new accepting transcript \((a, c^j, z^j)\) does not allow the extractor to obtain a witness; then a new non-DH tuple is used for the first time in the accepting conversation \((a, c^j, z^j)\).

The proof ends with the observation that the algorithm stops after the first case has occurred k times, while the second case can occur at most 2n times.

Theorem 11

Under the DDH assumption, if \(\varPi _i\) is SHVZK for \(\mathcal {R}_i\), for \(i\in \{1,\dots ,n\}\), then \(\varGamma \) is adaptive-input WI for \(\mathcal {R}^{{\mathsf {thres}}}\), for a constant k.

Proof Sketch. The definition of adaptive-input WI gives the adversary \({\mathcal {A}}\) the power to choose both theorems and witnesses upon receiving the first message from the challenger. This implies that in \(\varGamma \) the first round must be computed without knowing which witnesses will be chosen by \({\mathcal {A}}\), and without knowing for which instances the witnesses will be available in the third round. It is easy to see that the first round of \(\varGamma \) is independent of the theorems and witnesses that \({\mathcal {A}}\) could choose. Unfortunately, if we follow the same proof as Theorem 7, with a similar sequence of hybrid experiments, we have to define hybrid experiments in which the first round depends on which witnesses will be received from \({\mathcal {A}}\) in the second round. This implies that the only way for the challenger to complete these hybrid experiments consists in guessing the instances that correspond to the witnesses that will be received; such a guess is correct with probability \(1/\binom{n}{k}\), which is noticeable only when k is constant. This explains why k must be a constant.

We now explain in more detail the differences between the security proof of Theorem 7 and the one needed for protocol \(\varGamma \). The security proof of Theorem 7 works for every k because the n instances \(x_1,\dots ,x_n\) that will be sent by \({\mathcal {A}}\) in the protocol \(\varPi ^{k}\) belong to the same NP-language L. Furthermore, the \(\varSigma \)-protocol \(\varPi \) used in \(\varPi ^{k}\) is delayed-input. Hence, for a first round \(a_i\) of \(\varPi \), it is possible to create an accepting transcript \((a_i,c,z)\) for a theorem \(x_j\), for any \(i,j\in \{1,\dots ,n\}\) (provided, clearly, that one has the witness \(w_j\)). Therefore the assignment of the values \(a_1,\dots ,a_n\) committed to in the first round to the theorems \(x_1,\dots ,x_n\) is made only in the third round. This property holds in all hybrid experiments. Now consider protocol \(\varGamma \): the arguments described above are clearly not applicable to prove that \(\varGamma \) is adaptive-input WI.

In more detail, during the first round of \(\varGamma \), for each language \(L_i\), \(i\in \{1,\dots ,n\}\), we compute the first message of protocol \(\varPi _i\) and commit to it twice, using the instance-dependent trapdoor commitment associated with a DH tuple and with a non-DH tuple. Hence for each \(a_i\) we compute an equivocal commitment and a binding commitment. First note that these two commitments are linked to a fixed language \(L_i\) (in contrast to the first round of \(\varPi ^{k}\)). When in the security proof we need to consider the hybrid experiment in which \(n+1\) non-DH tuples are used (one non-DH tuple per pair, except for one pair where both tuples are non-DH), as in the proof of Theorem 7, we have to commit to the first round \(a_i\), for some \(i\in \{1,\dots ,n\}\), using two commitments that are both perfectly binding. Therefore the only way to compute an accepting transcript with respect to the language \(L_i\) consists in using the witness for the instance \(x_i\) that will be sent by \({\mathcal {A}}\). Unfortunately, we have no guarantee that \({\mathcal {A}}\) will send \(w_i\), and thus the experiment will have to try again. For lack of space, further details can be found in the full version of this work.