1 Introduction

Byzantine agreement (BA), introduced by Lamport, Shostak, and Pease [31], is a fundamental primitive in distributed computing and is at the core of many secure multi-party computation (MPC) protocols. The problem comes in two main flavors, Consensus and Broadcast, although a number of relaxations have also been proposed. Consensus considers a set of n parties \(\mathcal {P} =\{P _1,\ldots ,P _n\}\), each of whom has an input \(x_i\), who wish to agree on an output y (Consistency) such that if \(x_i=x\) for all honest parties then \(y=x\) (Validity), despite the potentially malicious behavior of up to t of them. In the Broadcast version, on the other hand, only a single party, often called the sender, has an input \(x_s\), and the goal is to agree on an output y (Consistency) which, when the sender is honest, equals \(x_s\) (Validity).
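To fix intuition, the two property pairs can be phrased as simple predicates on honest parties' inputs and outputs. The following minimal Python sketch is purely illustrative; the function names are ours, not part of any protocol:

```python
def consistency(honest_outputs):
    # Consistency: all honest parties output the same value y.
    return len(set(honest_outputs)) == 1

def broadcast_validity(sender_input, honest_outputs, sender_honest):
    # Broadcast Validity: if the sender is honest, y equals its input x_s.
    return (not sender_honest) or all(y == sender_input for y in honest_outputs)

def consensus_validity(honest_inputs, honest_outputs):
    # Consensus Validity: if all honest inputs equal some x, then y = x.
    if len(set(honest_inputs)) != 1:
        return True  # precondition not met; Validity imposes no constraint
    return all(y == honest_inputs[0] for y in honest_outputs)
```

Note that Consistency alone is trivial to achieve (always output 0); the difficulty lies in achieving it together with Validity against t corruptions.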

The traditional setting in which the problem was introduced and investigated considers synchronous communication and protocol execution. In a nutshell, this means that the protocol advances in rounds such that: (1) parties have a consistent view of the current round—i.e., no party advances to round \(\rho +1\) before all other parties are finished with their round \(\rho \) instructions; and (2) all messages sent in round \(\rho \) are delivered to their respective recipients by the beginning of round \(\rho +1\). Furthermore, the underlying communication network is a complete point-to-point authenticated channels network, where every pair \((P _i,P _j)\) of parties is connected by a channel, such that when \(P _j\) receives a message on this channel it knows it was indeed sent by \(P _i\) (or the adversary, in case \(P _i\) is corrupted). We refer to the above setting as the (standard) LSP setting.

In this model, Lamport et al. [21, 31] proved that there exists no Consensus or Broadcast protocol which can tolerate \(t\ge n/3\) Byzantine parties, i.e., parties controlled by a (central) active and malicious adversary. The original formulation considered perfect security (i.e., information-theoretic security with zero error probability) and no correlated randomness shared among the parties.Footnote 1 This impossibility result was later extended by Borcherding [7] to computational security—i.e., it was proved to hold even under strong computational assumptions, such as one-way permutations.Footnote 2 Furthermore, it applies even when the point-to-point channels used by the parties are secure, i.e., both authenticated and private, and even if we assume an arbitrary public correlated randomness setup and/or a random oracle (RO).Footnote 3 (A public correlated randomness setup can be viewed as a functionality which samples a string and distributes it to all parties, e.g., a common reference string (CRS). This is in contrast to a private correlated randomness setup, which might keep part of the sampled string private and distribute different parts of it to different parties, e.g., a PKI.) For ease of reference we state the above as a corollary:

Corollary 1

(Strong \(t\ge n/3\) impossibility [7]). In the synchronous point-to-point channels setting, there exists no Broadcast protocol tolerating \(t\ge n/3\) corrupted parties. The statement holds both in the authenticated and in the secure channels settings, both for unconditional adversaries and assuming (even enhanced) trapdoor permutations, and even assuming an arbitrary public correlated randomness setup and/or a random oracle.

Finally, Cohen et al. [16] show that this line of impossibility results extends to the case of symmetric functionalities, i.e., functionalities where all parties receive the same output.

The effect of BA lower bounds on MPC. MPC allows a set of parties to compute an arbitrary function of their (potentially private) inputs in a secure way even in the presence of an adversary. Ben-Or, Goldwasser and Wigderson [5] presented a protocol which computes any function with perfect security in the synchronous setting while tolerating \(t<n/3\) malicious parties, assuming the parties have access to a complete network of instant-delivery point-to-point secure—i.e., authenticated and private—channels (we shall refer to this model as the BGW communication model). This \(t<n/3\) bound is tight for perfect security: the corresponding lower bound holds even if a Broadcast channel—i.e., an ideal primitive guaranteeing the input/output properties of Broadcast—is available to the parties. Rabin and Ben-Or [34] proved that if we allow for a negligible error probability and assume broadcast, then there exists a general MPC protocol tolerating up to \(t<n/2\) of the parties being corrupted, even if the adversary is computationally unbounded.

Observe, however, that just allowing negligible error probability is not sufficient for circumventing the \(t<n/3\) barrier. Indeed, it is straightforward to verify that fully secure MPC as considered in [26, 34]—with fairness and guaranteed output delivery—against malicious/Byzantine adversaries implies Broadcast: Just consider the function which takes input only from a designated party, the sender, and outputs it to everyone.Footnote 4 In fact, using the above observation and Corollary 1 directly implies that \(t<n/3\) is tight even assuming a computational adversary, secure point-to-point channels, an arbitrary public correlated randomness setup, e.g., a CRS, and/or a random oracle.

The public-key infrastructure (PKI) model. With the exception of perfect securityFootnote 5, the above landscape changes if we assume a private correlated randomness setup, such as a PKI. Indeed, in this case Dolev and Strong [19] proved that, assuming a PKI and intractability assumptions implying existentially unforgeable digital signatures (e.g., one-way functions), Broadcast tolerating arbitrarily many (i.e., \(t<n\)) malicious corruptions is possible. We refer to this protocol as Dolev-Strong Broadcast. In fact, as shown later by Pfitzmann and Waidner [33], by assuming more complicated correlations—often referred to as a setup for information-theoretic (pseudo-)signatures—it is also possible to obtain an unconditionally (i.e., information-theoretically) secure protocol for Broadcast tolerating \(t<n\) corruptions. Clearly, by plugging the above constructions into [34], we obtain a computationally or even i.t. secure MPC protocol tolerating any dishonest minority in the private correlated randomness setting. Recall that this task is impossible even for honest majorities in the public correlated randomness setting.
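To illustrate the structure of the Dolev-Strong protocol, the following hedged Python sketch runs the classical (t+1)-round signature-chain idea in a lock-step simulation. A tuple of signer identities stands in for a chain of signatures under the assumed PKI, only crash-style corruptions are modeled, and all names are ours rather than from [19]:

```python
def dolev_strong(n, t, sender, x, corrupted=frozenset()):
    """Sketch: t+1 rounds; a chain of distinct signer ids stands in
    for a chain of signatures under the assumed PKI."""
    extracted = [set() for _ in range(n)]         # values each party has accepted
    inbox = [[(x, (sender,))] for _ in range(n)]  # round-0 signed send by sender
    for r in range(1, t + 2):                     # rounds 1 .. t+1
        outbox = [[] for _ in range(n)]
        for i in range(n):
            if i in corrupted:
                continue                          # corrupted party stays silent here
            for v, chain in inbox[i]:
                # valid at round r: r distinct signatures, first one the sender's
                valid = (len(chain) == r and chain[0] == sender
                         and len(set(chain)) == len(chain))
                if valid and v not in extracted[i]:
                    extracted[i].add(v)
                    if r <= t:                    # append own "signature" and relay
                        for j in range(n):
                            outbox[j].append((v, chain + (i,)))
        inbox = outbox
    # honest output: the unique accepted value, else a default
    return [next(iter(extracted[i])) if len(extracted[i]) == 1 else "DEFAULT"
            for i in range(n) if i not in corrupted]
```

The key point, which the sketch preserves, is that a value accepted by some honest party in round t+1 carries t+1 distinct signatures, at least one from an honest party who therefore relayed it to everyone.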

The blockchain revolution. The introduction and systematic study of blockchains in the permissionless setting, such as the Bitcoin blockchain, demonstrated how Consensus and Broadcast can be reached even in settings where a majority of the participants might be adversarial (as long as the majority of the computing power remains honest) and even without a private correlated randomness setup. Although such constructions were proven to work under the different assumption of an honest majority of computing power, some confusion remained. It was driven mainly by the fact that the investigation of the type of consensus achieved by Bitcoin (“Nakamoto consensus”) considered more involved models that more closely capture its execution parameters (e.g., “partial synchrony” [20]), and that the Bitcoin backbone protocol [23, 32] was shown to achieve eventual consensus, a property closer to the traditional state-machine replication problem from distributed computing [35]Footnote 6. In fact, similar approaches were also used for alternative blockchains that rely on assumptions restricting other resources, such as a majority of honest stake (“proof of stake”—PoS) [6, 25, 30], a majority of honest space [3, 6, 15, 18, 30], etc., which were, however, also analyzed in more complex network settings; see also Remark 1.

The resource-restricting paradigm. We will use this general term to refer to all the above approaches. Thus, an intriguing question remained:

Does Corollary 1 still apply to the standard LSP model (of instant delivery authenticated channels and full synchrony) under the resource-restricting paradigm?

In this work we first answer this question in the negative by abstracting the essence of the above resource-restricting paradigm as an access restriction on the underlying communication network. Intuitively, the assumption of restricting (the adversary's access to) the relevant resource can be captured by disallowing any party—and in particular any adversarial party—to send unboundedly many more new messages than any other party. To avoid ambiguity and to allow using the related assumption in higher-level constructions, we choose to work in Canetti's Universal Composability framework [10]. In particular, we describe the assumption induced by restricting the resources available to the adversary by means of a functionality wrapper, which wraps a communication network and restricts the ability of parties (or the adversary) to send new messages through this network.

We then demonstrate how our wrapper, when applied to the standard instant-delivery synchronous network, makes it impossible for the adversary to launch the attack from [7]. In particular, the classical impossibility proofs (and even their extension stated in Corollary 1)—in the same model in which they were proven, and with the same required properties of the target primitive—do not apply to protocols in this new, restricted network. We note in passing that the idea of restricting the resources available to the adversary, relative to those available to the parties, in order to limit the adversary's attacking power was also previously explored in [8, 24].

In order to prove that our network restriction is an appropriate abstraction of the mechanisms implied by the resource-restricting paradigm, we focus on the case of proofs of work (PoW) and show how to implement the wrapped LSP-style network from a public correlated randomness setup (in particular, any high min-entropy CRS) and an access-restricted random oracle. Concretely, along the lines of the composable analyses of Bitcoin [4], we capture the assumption of an honest majority of hashing power by means of a wrapped RO, which allows each party (honest or corrupted) at most q queries per communication round (cf. [23]) for any given q (polynomial in the security parameter).Footnote 7 An important consideration of our transformation is the need for a freshness property on the assumed CRS. Specifically, our protocol for realizing the wrapped network assumes that the adversary gets access to the CRS at the same time as honest parties do (and crucially relies on this fact). Intuitively, the reason is that our protocol relies on PoW-style hash puzzles in order to restrict the ability of the adversary to create many new valid messages. Clearly, if the adversary has access to the initial CRS—which will play the role of the genesis block—way before the honest parties do, then it can potentially start precomputing valid messages, thus making the implementation of the communication restriction infeasible.

We note that such freshness of the CRS might be considered a non-standard assumption and seems relevant only in combination with the resource-restricting paradigm. Nonetheless, in Sect. 6, we discuss how this freshness can be replaced using PoWs on challenges exchanged between parties, along the lines of [1]. The absence of freshness yields a somewhat relaxed wrapper which offers restrictions analogous to those of our original wrapper, but guarantees only limited transferability of the messages sent and is not as strict towards the adversary as the original one (i.e., adversarial messages can be transferred more times than honest ones). Still, as we argue, this relaxed wrapper is sufficient for obtaining all the positive results in this work.

The above sheds light on the seemingly confusing landscape, but leaves open the question of how powerful the new assumption of the resource-restricting wrapper (and hence the resource-restricting paradigm in general) is. In particular, although the above demonstrates that the resource-restricting paradigm allows to circumvent the limitation of Corollary 1, it still leaves open the question:

Does the resource-restricting methodology allow for fully secure MPC in the public correlated randomness model, and if so, under what assumptions on the number of corrupted parties?

We investigate the question of whether we can obtain honest-majority MPC in this setting, and answer it in the affirmative. (Recall that without the resource-restricting methodology and the associated assumptions this is impossible, since MPC implies Broadcast.) Note that an impossibility result due to Fitzi [22] proves that the \(t<n/2\) bound is actually necessary for Consensus in the standard LSP communication model, and this lower bound holds even if we assume a broadcast primitive. In fact, by a simple inspection of the result one can observe that the underlying proof uses only honest strategies (for different selections of corruption sets) and therefore applies even under the resource-restricting paradigm—where, as above, this paradigm is captured by wrapping the network with our communication-restricting wrapper.

Towards the feasibility goal, we provide a protocol which allows us to establish a PKI assuming only our resource-restricted (wrapped) LSP network and one-way functions (or any other assumption which allows for existentially unforgeable signatures). More specifically, we show that our PKI establishment mechanism implements the key registration functionality \(\mathcal {F}_{\textsc {reg}} \) from [11]. Our protocol is inspired by the protocol of Andrychowicz and Dziembowski [1]. Their protocol, however, achieved a non-standard notion of MPC in which inputs are associated to public-keys/pseudonyms. In particular, in the standard MPC setting, computing a function \(f(x_1,\ldots , x_n)\) among parties \(P _1,\ldots ,P _n\) means having each \(P _i\) contribute input \(x_i\) and output \(f(x_1,\ldots , x_n)\)—this is reflected both in the original definitions of MPC [26, 36] and in the UC SFE functionality \(\mathcal {F}_{\textsc {sfe}} \) [10] and the corresponding standalone evaluation experiment from [9]. Instead, in the MPC evaluation from [1], every party \(P _i\) is represented by a pseudonym \({j_i}\), which is not necessarily equal to i and where the mapping between i and \(j_i\) is unknown to the honest participants.Footnote 8 Then the party contributing the \(\ell \)th input to the computation of f is \(P _i\) such that \(j_i=\ell \). This evaluation paradigm was termed pseudonymous MPC in [29].

It is not hard to see, however, that the above evaluation paradigm makes the corresponding solution inapplicable to classical scenarios where MPC would be applied and where parties have distinguished roles. Examples include decentralized auctions—where the auctioneer should not bid—and asymmetric functionalities such as oblivious transfer. We note in passing that the above relaxation of traditional MPC guarantees seems inherent in the permissionless peer-to-peer setting of [1, 29]. Instead, our protocol adapts the techniques from [1] in a white-box manner to leverage the authenticity of our underlying communication network—recall that our protocol is in the (wrapped) BGW communication setting—in order to ensure that the registered public keys are publicly linked to their respective owners. This allows us to evaluate the standard MPC functionality.

Getting from an implementation of \(\mathcal {F}_{\textsc {reg}} \) where the keys are linked to their owners to standard MPC is then fairly straightforward by using the modularity of the UC framework. As proved in [11], \(\mathcal {F}_{\textsc {reg}} \) can be used to realize the certified signature functionality (aka the certification functionality) \(\mathcal {F}_{\textsc {cert}} \), which, in turn, can be used to realize a Broadcast functionality even against adaptive adversaries [27]. By plugging this functionality into the honest-majority protocol (compiler) by Cramer et al. [17]—an adaptation of the protocol from [34] to tolerate adaptive corruptions—we obtain an MPC protocol which is adaptively secure.

Organization of the paper. In Sect. 2 we discuss our model. In Sect. 3 we introduce our wrapper-based abstraction of the resource-restricting paradigm and demonstrate how the impossibility from Corollary 1 fails when parties can use it. Section 4 presents our implementation of this wrapper from PoWs and a fresh CRS, and Sect. 5 discusses how to use it to obtain certified digital signatures and MPC. Finally in Sect. 6 we discuss how to remove the freshness assumption by leveraging PoWs.

2 Model

To allow for a modular treatment and ensure universal composition of our results, we work in Canetti's UC model [9]. We assume some familiarity with UC on the part of the reader, but we restrict the properties we use to those that are satisfied by any composable security framework. In fact, technically speaking, our underlying framework is UC with global setups (GUC) [12], as we aim to accurately capture a global notion of time (see below). Nonetheless, the low-level technicalities of the GUC framework do not affect our arguments and the reader can treat our proofs as standard UC proofs.

Parties, functionalities, and the adversary and environment are (instances of) interactive Turing machines (ITMs) running in probabilistic polynomial time (PPT). We prove our statements for a static active adversary; however, the static restriction is only for simplicity as our proofs can be directly extended to handle adaptive corruptions. In (G)UC, security is defined via the standard simulation paradigm: In a nutshell, a protocol \(\pi \) realizes a functionality \(\mathcal {F}_{\textsc {}}\) (in UC, this is described as emulation of the dummy/ideal \(\mathcal {F}_{\textsc {}}\)-hybrid protocol \(\phi \)) if for any adversary attacking \(\pi \) there exists a simulator attacking \(\phi \) making the executions of the two protocols indistinguishable in the eyes of any external environment. Note that \(\pi \) might (and in our cases will, as discussed below) have access to its own hybrid functionalities.

Synchrony. We adopt the global clock version of the synchronous UC model by Katz et al. [28] as described in [4]. Concretely, we assume that parties have access to a global clock functionality which allows them to advance rounds at the same pace. For generality, we will allow the clock to have a dynamic party set, as in [4].

[Functionality box omitted: the global clock functionality \(\mathcal {G}_{\textsc {clock}}\).]

Communication network. We capture point-to-point authenticated communication, modeling the LSP channels in UC, by means of a multi-party, multi-use version of the authenticated channel functionality with instant delivery, along the lines of [4]. (The original network from [4] had bounded delay; hence here we need to set this bound to 1.) Note that in this network, once an honest party \(P _i\) inserts a message to be sent to \(P _j\), the message is buffered and is delivered after at most \(\varDelta \) attempts by the receiver (here \(\varDelta =1\)). Syntactically, we allow the simulator to query the network and learn whether a buffered message was received by the respective receiver. This step—despite being redundant in most cases, as the simulator should be able to infer this fact by observing the activations forwarded to it—is not only an intuitive addition, capturing that the adversary is aware of the delivery of a message, but will also simplify the protocol description and simulation. For completeness, we include the authenticated network functionality below.

Note that the BGW-style secure point-to-point network functionality can be trivially derived from the authenticated one by replacing, in the message \((\textsc {sent},\textsf {sid}, m, P_i, P_j, mid )\) which the adversary receives upon some m being inserted to the network, the value of m by \(\perp \) (or by |m| if this is implemented by standard encryption).

[Functionality box omitted: the authenticated network functionality \(\mathcal {F}_{\textsc {auth}} \).]

The random oracle functionality. As is typical in the proof-of-work literature, we will abstract puzzle-friendly hash functions by means of a random oracle functionality.

[Functionality box omitted: the random oracle functionality.]

Furthermore, following [4], we will use a wrapper to capture the assumption that no party gets more than q queries to the RO per round. This wrapper, in combination with the honest majority of parties, captures the assumption that the adversary does not control a majority of the system's hashing power.

[Functionality box omitted: the q-query-per-round wrapper for the random oracle.]
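As a rough sketch of the quota mechanism, consider the following Python class. It makes the simplifying assumption (ours, for illustration) that every query—including a repeated one—counts against the per-round quota, and all names are ours:

```python
import hashlib

class QBoundedRO:
    """Sketch of a per-round q-bounded random-oracle wrapper.
    Simplifying assumption (ours): repeated queries also consume quota."""

    def __init__(self, q):
        self.q = q
        self.count = {}          # queries used per party in the current round

    def new_round(self):
        self.count.clear()       # the clock advancing resets all quotas

    def query(self, party, data):
        used = self.count.get(party, 0)
        if used >= self.q:
            return None          # quota exhausted for this round
        self.count[party] = used + 1
        return hashlib.sha256(data).digest()  # SHA-256 stands in for the RO
```

Honest-majority hashing power then corresponds to the adversary controlling fewer than half of the parties, each holding the same quota q.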

Correlated randomness setup. Finally, we make use of the CRS functionality [13], which models a public correlated randomness setup.

[Functionality box omitted: the CRS functionality \(\mathcal {F}_{\textsc {crs}} \).]

3 Inapplicability of Strong BA Impossibility

In this section we present our abstraction of the resource-restricting paradigm as a communication-restricting wrapper for the underlying communication network, and show that the strong BA impossibility (Corollary 1) does not apply to this wrapped network. In particular, as we discussed, in [7] it was argued that assuming \(3t\ge n\), no private correlated randomness setup, the existence of signatures, and authenticated point-to-point channels, no protocol solves the broadcast problem. In this section, we show that if parties have access to a simple channel that is restricted in such a way that spam or sybil attacks are infeasible, the impossibility proof of [7] does not go through.

3.1 Modeling a Communication-Restricted Network

Our filtering wrapper restricts the per-round accesses of each party to the functionality, in a probabilistic manner. In more detail, for parameters p and q, each party has a quota of q \(\textsc {send} \) requests per round, each of them succeeding with probability p. Note that after a message has been sent through the filter, the sender, as well as the receiver, can re-send the same message for free. This feature captures the fact that once a message has passed the filtering mechanism, it should be freely allowed to circulate in the network. We explicitly differentiate this action in our interface by introducing the \(\textsc {resend} \) request; parties have to use \(\textsc {resend} \) to forward, for free, messages they have already received.

[Functionality box omitted: the filtering wrapper \(\mathcal {W}_{\textsc {flt}} ^{p,q}\).]
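A minimal sketch of the filter's bookkeeping, under the simplifying assumption (ours) that the set of messages that have passed the filter is tracked globally rather than per sender/receiver pair; the class and method names are ours:

```python
import random

class FilterWrapper:
    """Sketch of the W_flt filter: q send attempts per party per round,
    each succeeding with probability p; resends of filtered messages are free."""

    def __init__(self, p, q):
        self.p, self.q = p, q
        self.used = {}        # send attempts per party in the current round
        self.passed = set()   # messages that have cleared the filter

    def new_round(self):
        self.used.clear()

    def send(self, party, msg):
        if self.used.get(party, 0) >= self.q:
            return False                    # quota of q sends exhausted
        self.used[party] = self.used.get(party, 0) + 1
        if random.random() < self.p:        # attempt succeeds w.p. p
            self.passed.add(msg)
            return True
        return False

    def resend(self, party, msg):
        # re-sending an already-filtered message is free and always allowed
        return msg in self.passed
```

With p close to 1 and q = 1, each party gets essentially one new message per round, which is exactly the regime used in the proof of Lemma 1 below.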

3.2 The Impossibility Theorem, Revisited

Next, we show that if parties have access to \(\mathcal {W}_{\textsc {flt}} ^{p,q}(\mathcal {F}_{\textsc {auth}})\), for some noticeable p and \(q\ge 1\), the BA attack from the impossibility proof of [7] does not go through. That proof relies on the fact that the adversary can simulate the behavior of multiple honest parties. In a nutshell, we describe a protocol where parties send messages through \(\mathcal {W}_{\textsc {flt}} ^{p,q}(\mathcal {F}_{\textsc {auth}})\), and, due to the restricted number of \(\textsc {send} \) attempts the adversary has at its disposal, it is impossible for the adversary to simulate multiple parties running this protocol.

Lemma 1

Let \(n=3,t=1\), p be a noticeable function, and \(q\ge 1\). There exists a polynomial time protocol in the \((\mathcal {G}_{\textsc {clock}}, \mathcal {F}_{\textsc {auth}}, \mathcal {W}_{\textsc {flt}} ^{p,q}(\mathcal {F}_{\textsc {auth}}),\mathcal {F}_{\textsc {sig}})\)-hybrid model that invalidates the \(t\ge n/3\) BA attack from the impossibility theorem of [7].

Proof

The impossibility proof considers the class of full information protocols, where if some party receives a message at some round r, it signs the message with its own signing key, and sends it to all other parties. We are going to show a subclass of protocols that use \(\mathcal {W}_{\textsc {flt}} ^{p,q}(\mathcal {F}_{\textsc {auth}})\) and are not captured by the proof.

We first briefly recall the proof in [7] for the case \(n=3\) and \(t=1\). The proof is based on constructing three scenarios \(\sigma _1,\sigma _2,\sigma _3\), where broadcast cannot possibly be achieved. Let the sender be \(P_1\). We proceed to describe \(\sigma _1,\sigma _2,\sigma _3\). In \(\sigma _1\), \(P_1\) has input 0 and \(P_2\) is corrupted. In \(\sigma _2\), \(P_1\) has input 1 and \(P_3\) is corrupted. In \(\sigma _3\), \(P_1\) is corrupted.

By Validity, it follows that in \(\sigma _1\) \(P_3\) should output 0, and in \(\sigma _2\) \(P_2\) should output 1, no matter the behavior of the adversary. Moreover, due to the Agreement (Consistency) property, the output of \(P_2\) and \(P_3\) in \(\sigma _3\) must be the same. The proof then proceeds to describe a way of making the view of \(P_3\) (resp. \(P_2\)) indistinguishable in scenarios \(\sigma _1\) (resp. \(\sigma _2\)) and \(\sigma _3\), and thus reaching a contradiction since they are going to decide on different values in \(\sigma _3\).

The main idea is for \(P_2\) in \(\sigma _1\) to behave as if \(P_1\) had input 1, by creating a set of fake keys and changing the signatures of \(P_1\) to the ones with the fake keys and different input where possible. Since there is no PKI, \(P_3\) cannot tell whether: (i) \(P_1\) is corrupted and sends messages signed with different keys to \(P_2\), or (ii) \(P_2\) is corrupted. Symmetrically, \(P_3\) in \(\sigma _2\) simulates \(P_1\) with input 0. Finally, \(P_1\) in \(\sigma _3\) simulates both behaviors, i.e., \(P_1\) running the protocol honestly with input 1 in its communication with \(P_2\), and \(P_1\) with input 0 in its communication with \(P_3\). This is exactly where the impossibility proof does not go through anymore.

For the moment, assume that we are in the setting where \(p=1-\mathsf {negl}(\lambda )\) and \(q=1\). Let \(\mathrm \Pi \) be a full information protocol, where in the first round the sender \(P_1\) uses \(\mathcal {W}_{\textsc {flt}} ^{1-\mathsf {negl}(\lambda ),1}(\mathcal {F}_{\textsc {auth}})\) to transmit its message to the other two parties. Further, assume that this message is different for the cases where the sender's input is 0 and 1, with probability \(\alpha \). It follows that \(P_1\) has to send two different messages to parties \(P_2\) and \(P_3\) in the first round of \(\sigma _3\), with probability \(\alpha \). However, this is no longer possible, as the network functionality only allows one new message to be sent by \(P_1\) in each round, with overwhelming probability. Hence, with probability \(\alpha \) the impossibility proof cannot go through anymore.

For the case where p is noticeable and \(q\ge 1\), we can design a similar protocol that is not captured by the proof. The protocol begins with a first “super round” consisting of \(\frac{\lambda }{pq}\) regular rounds, in which each party must successfully send its first message m at least \(\frac{3\lambda }{4}\) times using \(\mathcal {W}_{\textsc {flt}} ^{p,q}(\mathcal {F}_{\textsc {auth}})\) for it to be considered valid. Since the functionality allows re-sending the same message for free, the \(\frac{3\lambda }{4}\) copies are made distinct by encoding them as \((m,1),\ldots ,(m,\frac{3\lambda }{4})\).

Next, we analyze the probability that \(\mathcal {A} \) can follow the strategy described in the impossibility proof of [7]. Note that each party can query \(\mathcal {W}_{\textsc {flt}} ^{p,q}(\mathcal {F}_{\textsc {auth}})\) up to \(\lambda /p\) times during the super round. We will show that: (i) honest parties will be able to send \(\frac{3\lambda }{4}\) messages with overwhelming probability; and (ii) the adversary in \(\sigma _3\) will not be able to send the \(2 \cdot \frac{3\lambda }{4}\) messages it has to. Let the random variable \(X_{i}\) be 1 if the i-th query to \(\mathcal {W}_{\textsc {flt}} ^{p,q}(\mathcal {F}_{\textsc {auth}})\) of some party P succeeds, and 0 otherwise. Also, let \(X = \sum _{i=1}^{\lambda /p}X_i\). It holds that \(\mathbb {E}[X] = p \cdot \lambda /p = \lambda \). By an application of the Chernoff bound, for \(\delta = \frac{1}{4}\), it holds that

$$ \Pr [ X \le (1-\delta ) \mathbb {E}[X] ] = \Pr [ X \le \frac{3\lambda }{4} ] \le e^{-\varOmega (\lambda )} .$$

Hence, with overwhelming probability each party will be able to send at least \(\frac{3\lambda }{4}\) messages in the first \(\frac{\lambda }{pq}\) rounds. On the other hand, we have that

$$ \Pr [ X \ge (1+\delta ) \mathbb {E}[X] ] = \Pr [ X \ge \frac{5\lambda }{4} ] \le e^{-\varOmega (\lambda )}. $$

Hence, no party will be able to send more than \(\frac{5\lambda }{4}\) messages in the first super round. This concludes the proof, since the adversary, in order to correctly follow the strategy described before, must send in total \(\frac{6\lambda }{4}(>\frac{5\lambda }{4})\) messages in the first super round. Thus, with overwhelming probability it is going to fail to do so. Finally, note that the length of the super round is polynomial, since 1/p is bounded by some polynomial. Thus, the lemma follows.   \(\square \)
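The two tail estimates can also be checked numerically. The following sketch computes the exact binomial tails for an illustrative parameter choice \(\lambda =100\), \(p=1/2\) (so each party makes \(\lambda /p = 200\) filter queries, each succeeding with probability p); the concrete parameters are ours:

```python
import math

def binom_cdf(n, p, k):
    """Exact P[X <= k] for X ~ Bin(n, p)."""
    return sum(math.comb(n, i) * (p ** i) * ((1 - p) ** (n - i))
               for i in range(k + 1))

# Illustrative parameters (ours): lam = 100, p = 1/2, so a party gets
# lam / p = 200 filter attempts and E[X] = lam = 100 of them succeed.
lam, p = 100, 0.5
n = int(lam / p)
low = binom_cdf(n, p, 3 * lam // 4)            # P[X <= 3*lam/4]
high = 1 - binom_cdf(n, p, 5 * lam // 4 - 1)   # P[X >= 5*lam/4]
```

Both tails are already below \(10^{-3}\) at this modest \(\lambda \), and they decay exponentially in \(\lambda \), matching the \(e^{-\varOmega (\lambda )}\) bounds above.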

The proof of Corollary 1 works along the same lines as the proof of [7]; since only public correlated randomness is assumed, nothing prevents the adversary from simulating an honest party. Finally, we note that the same techniques used above can also be used to refute an appropriate adaptation of Corollary 1, where parties have access to \(\mathcal {W}_{\textsc {flt}} ^{p,q}(\mathcal {F}_{\textsc {auth}})\).

4 Implementing a Communication-Restricted Network

In this section we describe our implementation of \(\mathcal {W}_{\textsc {flt}} ^{p,q}(\mathcal {F}_{\textsc {auth}})\), which is based on the resource-restricted RO functionality and a standard authenticated network. As discussed in the introduction, we also make use of an enhanced version of the \(\mathcal {F}_{\textsc {crs}} \) functionality, where it is guaranteed that the adversary learns the shared string after the honest parties do. We capture this restriction in a straightforward way: a wrapper \(\mathcal {W}_{\textsc {fresh}} (\mathcal {F}_{\textsc {crs}} ^\mathcal{D})\) which does not allow the adversary to learn the CRS before the round in which honest parties are spawned. W.l.o.g., in the rest of the paper we assume that all parties are spawned at round 1.

Our protocol makes use of the proof-of-work construction of [2]. Every time a party wants to send a new message, it tries to find a hash of the message and some nonce that is smaller than some target value D, and if successful it forwards this message through \(\mathcal {F}_{\textsc {auth}} \) to the designated recipient. Moreover, if it has received such a message and nonce, it can perform a \(\textsc {resend} \) by forwarding this message through \(\mathcal {F}_{\textsc {auth}} \). To ensure that the adversary does not precompute small hashes before the start of the protocol, and thus violate the \(\textsc {send} \) quota described in the wrapper, parties make use of the string provided by \(\mathcal {W}_{\textsc {fresh}} (\mathcal {F}_{\textsc {crs}} ^\mathcal{D})\), where \(\mathcal D\) will be a distribution with sufficiently high min-entropy. They use this string as a prefix to any hash they compute, thus effectively preventing the adversary from using any of the small hashes it may have precomputed.

[Protocol box omitted: \(\texttt {Wrapped}\hbox {-}{} \texttt {Channel}^{D,q}\).]
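A hedged sketch of the sending procedure just described, with SHA-256 standing in for the random oracle and with illustrative names (`pow_send`, `pow_verify`) that are ours, not from [2]:

```python
import hashlib
import os

def pow_send(crs, msg, target, q):
    """Try up to q nonces to find H(crs || msg || nonce) < target.
    SHA-256 stands in for the random oracle; prefixing the fresh CRS
    invalidates any hashes precomputed before the CRS was published."""
    for _ in range(q):
        nonce = os.urandom(16)
        digest = hashlib.sha256(crs + msg + nonce).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce      # attach (msg, nonce) and send via F_auth
    return None               # all q attempts this round failed

def pow_verify(crs, msg, nonce, target):
    # a receiver checks the puzzle before accepting or re-sending
    digest = hashlib.sha256(crs + msg + nonce).digest()
    return int.from_bytes(digest, "big") < target
```

Each hash attempt succeeds with probability roughly target/2^256, which plays the role of the filter parameter p, while the per-round RO quota q bounds the number of attempts.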

Next, we prove that \(\texttt {Wrapped}\hbox {-}{} \texttt {Channel}^{D,q}\) UC-realizes the functionality \(\mathcal {W}_{\textsc {flt}} ^{p,q}(\mathcal {F}_{\textsc {auth}})\), for appropriate values of p. The main idea of the proof is that the simulator simulates new messages sent through the ideal functionality in the eyes of \(\mathcal {A} \) by appropriately programming the random oracle. All other actions can be easily simulated.

Lemma 2

Let \(p:=\frac{D}{2^\lambda }\), and \(\mathcal D\) be a distribution with min-entropy at least \(\omega (\log (\lambda ))\). Protocol \(\texttt {Wrapped}\hbox {-}{} \texttt {Channel}^{D,q}\) UC-realizes functionality \(\mathcal {W}_{\textsc {flt}} ^{p,q}(\mathcal {F}_{\textsc {auth}})\) in the hybrid model with \(\mathcal {G}_{\textsc {clock}}\), \(\mathcal {F}_{\textsc {auth}} \), the q-bounded random-oracle wrapper, and \(\mathcal {W}_{\textsc {fresh}} (\mathcal {F}_{\textsc {crs}} ^\mathcal{D})\).

Proof

We consider the following simulator that is parameterized by some real-world adversary \(\mathcal {A} \):

[Simulator box omitted: the simulator \(\mathcal {S}_{\textsc {1}} \).]

We will argue that for every PPT adversary \(\mathcal {A} \) in the real world, no PPT environment \(\mathcal {Z} \) can distinguish between the real execution against \(\mathcal {A} \) and the ideal execution against \(\mathcal {S}_{\textsc {1}} \).

First, let \(E_1\) denote the event where honest parties in the real world, on input \(\textsc {send} \), repeat a query to the random oracle. Each time an honest party issues a new RO query, a random string of \(\lambda \) bits is sampled. The probability that the same string is sampled twice in a polynomially bounded execution is negligible in \(\lambda \), and \(E_1\) implies this event. Hence, the probability of \(E_1\) happening in a polynomially bounded execution is at most \(\mathsf {negl}(\lambda )\). Note that if \(E_1\) does not occur, the distribution of \(\textsc {send} \) commands invoked by honest parties that succeed is identical in the real and the ideal world.

Next, we turn our attention to adversarial attempts to send a new message. Let \(E_2\) be the event where \(\mathcal {A} \) sends a message of the form \((m,r,v)\) to \(\mathcal {F}_{\textsc {auth}} \), such that it hasn’t queried \((m,r)\) on the random oracle and \(H[(m,r)] =v\). The probability of this event happening amounts to guessing a random value sampled uniformly over an exponential-size domain, and is \(\mathsf {negl}(\lambda )\). Moreover, if \(E_2\) does not occur, the adversary can only compute new “valid” messages by querying the RO. Define now \(E_3\) to be the event where the adversary makes a query to the RO containing the CRS value before round 1. Since the CRS value is sampled from a distribution with high min-entropy and \(\mathcal {A} \) is PPT, it follows that \(\Pr [ E_3 ] \le \mathsf {negl}(\lambda )\). Hence, if \(E_2\) and \(E_3\) do not occur, the distribution of adversarially created messages is identical in both worlds.

Now, if \(E_1, E_2, E_3\) do not occur, the view of the adversary and the environment in both worlds is identical, as all requests are perfectly simulated. By an application of the union bound, it is easy to see that \(E_1 \vee E_2 \vee E_3\) occurs with only negligible probability. Hence, the real and the ideal execution are statistically indistinguishable in the eyes of \(\mathcal {Z} \), and the lemma follows.    \(\square \)

Regarding the round and communication complexity of our protocol, we note the following: it takes in expectation 1/p \(\textsc {send} \) requests to send a message, i.e., 1/(pq) rounds, and the communication cost is only one message. Regarding implementing \(\mathcal {W}_{\textsc {flt}} ^{p,q}(\mathcal {F}_{\textsc {auth}})\) using virtual resources, we point to Remark 1.
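The stated expected complexity can be checked with a small simulation (hypothetical helper names; each round allows q send attempts, each succeeding independently with probability p):

```python
import random

def sends_until_success(p: float, rng: random.Random) -> int:
    """Count SEND attempts until the first success; geometric with mean 1/p."""
    attempts = 1
    while rng.random() >= p:
        attempts += 1
    return attempts

def average_rounds(p: float, q: int, trials: int = 20000, seed: int = 0) -> float:
    """With q send attempts per round, a message takes about 1/(p*q) rounds."""
    rng = random.Random(seed)
    total = sum((sends_until_success(p, rng) + q - 1) // q  # ceil(attempts / q)
                for _ in range(trials))
    return total / trials
```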

Corollary 2

Let \(n=3\), \(t=1\), p be a noticeable function, \(q\ge 1\), and \(\mathcal D\) be any distribution with min-entropy at least \(\omega (\log (\lambda ))\). Then, there exists a polynomial-time protocol in the -hybrid model that invalidates the proof of the impossibility theorem of [7].

Remark 1

The resource-restricted crypto paradigm can also be applied to virtual resources. For PoS, the implicit PKI associated with PoS blockchains seems sufficient for a simple implementation of our resource-restricted wrapper using a verifiable random function (VRF). However, this PoS-implicit PKI typically assigns keys to coins instead of parties. Thus, a transformation, e.g., through our wrapper (see Sect. 5), would be needed to shift from an honest-majority assumption on coins to one on parties. This validates the generality of our abstraction; however, with PoS in the permissioned setting, there might be more direct ways of obtaining standard MPC by leveraging the implicit coin-PKI.

5 Implementing a Registration Functionality

In this section, we show how to implement a key registration functionality (cf. [11]) in the resource-restricted setting, and in the presence of an honest majority of parties.

5.1 The Registration Functionality

The registration functionality allows any party to submit a key, which all other parties can later retrieve. Our specific formulation, \(\mathcal {F}_{\textsc {reg}} ^r\), is parameterized by an integer r that specifies the round after which key retrieval becomes available.Footnote 9 Note that \(\mathcal {F}_{\textsc {reg}} \) does not guarantee that the submitted keys belong to the corresponding parties, i.e., a corrupted party can submit a key it saw another party submit.

Following the paradigm of [4] to deal with synchrony, \(\mathcal {F}_{\textsc {reg}} \) also has a \(\textsc {Maintain} \) command, which is parameterized by an implementation-dependent function \(\textsf {predict}\hbox {-}\textsf {time} \). We use this mechanism to capture the behavior of the real-world protocol with respect to \(\mathcal {G}_{\textsc {clock}} \), and to appropriately delay \(\mathcal {F}_{\textsc {reg}} \) from sending its clock update until all honest parties get enough activations. In more detail, \(\textsf {predict}\hbox {-}\textsf {time} \) takes as input a timed honest input sequence of tuples \(\varvec{I}^T_H = (\ldots , (x_i,id_i,\tau _i),\ldots )\), where \(x_i\) is the i-th input provided to \(\mathcal {F}_{\textsc {reg}} \) by honest party \(id_i\) at round \(\tau _i\). We say that a protocol \(\mathrm \Pi \) has a predictable synchronization pattern if there exists a function \(\textsf {predict}\hbox {-}\textsf {time} \) such that, for any possible execution of \(\mathrm \Pi \) with timed honest input sequence \(\varvec{I}^T_H\), \(\textsf {predict}\hbox {-}\textsf {time} (\varvec{I}^T_H) = \tau +1\) if all honest parties have received enough activations to proceed to round \(\tau +1\).
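As an illustration, a predict-time function for a protocol in which each honest party needs one \(\textsc {Maintain} \) input per round might look as follows (a hedged sketch; the actual threshold and input encoding are implementation dependent):

```python
from collections import defaultdict

def predict_time(timed_inputs, n_honest, maintains_per_round=1):
    """Hypothetical predict-time: scan the timed honest input sequence
    (x_i, id_i, tau_i) and report tau+1 once every honest party has issued
    enough MAINTAIN inputs in round tau (the threshold is protocol dependent)."""
    per_party = defaultdict(int)   # (party id, round) -> MAINTAIN count
    done_in_round = defaultdict(set)  # round -> parties done for that round
    for x, pid, tau in timed_inputs:
        if x == "MAINTAIN":
            per_party[(pid, tau)] += 1
            if per_party[(pid, tau)] >= maintains_per_round:
                done_in_round[tau].add(pid)
    complete = [tau for tau, ids in done_in_round.items()
                if len(ids) == n_honest]
    return max(complete) + 1 if complete else 0
```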

figure i

5.2 The Identity-Assignment Protocol

To implement the above functionality we follow an adaptation of the protocol from [1], with the difference that instead of relating keys to pseudonyms, parties are able to create a PKI relating keys to identities. First, we deal with a technical issue.

Our protocol contains commands that perform a sequence of operations. It is possible that, during the execution of such a command, the party loses its activation. Following the formulation of [4], we execute some of the commands in an interruptible manner. That is, a command I is executed I-interruptibly if, in case activation is lost, an anchor is stored so that the next invocation of this command continues from the point where the previous activation stopped. For more details on how to implement this mechanism, we refer to [4].
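One natural way to realize the anchor mechanism, shown here purely as an illustration (the names and the generator-based encoding are our own assumptions, not the formulation of [4]), is to model a command as a generator that yields after every atomic step; the suspended generator itself serves as the anchor:

```python
log = []

def command():
    """A command as a generator: each yield marks a point where activation
    may be lost; the live generator is the stored anchor."""
    log.append("step1"); yield
    log.append("step2"); yield
    log.append("step3")

def run_one_activation(anchor):
    """Advance the command by one step; return True when it has finished."""
    try:
        next(anchor)
        return False
    except StopIteration:
        return True
```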

Next, we give an informal description of the protocol, which makes use of \(\mathcal {W}_{\textsc {flt}} (\mathcal {F}_{\textsc {auth}})\), \(\mathcal {F}_{\textsc {auth}} \), \(\mathcal {G}_{\textsc {clock}} \), and the signature functionality \(\mathcal {F}_{\textsc {sig}} \) of [11], adapted for many signers and made responsive, i.e., the party issuing a command does not lose its activation, as is done, for example, in the context of the key evolving signature functionality \(\mathcal {F}_{\textsc {kes}} \) of [3].

The protocol is structured in two phases. In the first phase, lasting up to round \(r+n+1\), parties use \(\mathcal {W}_{\textsc {flt}} (\mathcal {F}_{\textsc {auth}})\) to partially agree on a “graded” PKI. In more detail, for the first r rounds (procedure PoWGeneration), they attempt to send through \(\mathcal {W}_{\textsc {flt}} (\mathcal {F}_{\textsc {auth}})\) messages containing a verification key pk and an increasing counter c. A key is taken into account only if a sufficient number of messages related to it, with different counter values, are sent. This way, keys are linked to resource accesses; and since resource accesses are restricted, so is the number of generated keys. Unlike [1], to establish that keys are linked to identities, at round r parties sign the submitted key \(\hat{pk}\) and their identity with their verification key pk, and multicast it to all other parties.

For the remaining \(n+1\) rounds (procedure KeyAgreement), parties assign each key a grade, from 0 for the earliest to n for the latest, depending on when they received the messages related to it. To ensure that these grades differ by at most one for the same key, parties immediately send the relevant messages they received to all other parties. This allows them to establish a form of graded PKI, denoted by \(\mathcal K\) in the protocol, in which parties are proportionally represented, and which is later used for broadcast. Finally, key/identity pairs that have been signed with a key in \(\mathcal K\) of grade 0 are added to a separate set \(\mathcal M\). This set is used in the second phase, described next, to correctly relate keys to identities.

Starting at round \(r+n+2\), parties use an adaptation of the “Dolev-Strong” protocol to reliably broadcast \(\mathcal M\) (procedure Broadcast). The protocol works by accepting messages as correctly broadcast only if a progressively larger number of keys of sufficient grade in \(\mathcal K\) have signed them. At the last round of the protocol, round \(r+2n+2\), it is ensured that if an honest party accepts a message, then so do all other honest parties. Finally, by using a simple majority rule on the key/identity pairs contained in the broadcast sets \(\mathcal M\), parties are able to agree on a key/identity set, denoted by \(\mathcal N\) in the protocol, in which each party is related to exactly one key and honest parties are correctly represented. \(\mathcal N\) is output whenever a \(\textsc {Retrieve} \) command is issued. Next, we give a formal description of protocol \(\texttt {Graded}\hbox {-}{} \texttt {Agreement}\).

figure j
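The acceptance rule of the Broadcast phase can be sketched as follows (hypothetical names; `graded_pki` maps keys to their grades in \(\mathcal K\), and `sig_ok` abstracts away signature verification):

```python
def ds_accept(round_index, signatures, graded_pki, max_grade):
    """Dolev-Strong style filter (sketch): in round k of the broadcast
    phase, accept a relayed value only if it carries at least k valid
    signatures under distinct keys of sufficient grade in the graded PKI."""
    valid_signers = {pk for (pk, sig_ok) in signatures
                     if sig_ok and graded_pki.get(pk, max_grade + 1) <= max_grade}
    return len(valid_signers) >= round_index
```

The round index acting as the signature threshold is what forces the adversary to "spend" one more key per round of delay, the classic Dolev-Strong idea.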

We are going to show that protocol \(\texttt {Graded}\hbox {-}{} \texttt {Agreement}\) implements functionality \(\mathcal {F}_{\textsc {reg}} \). First, note that there exists a function \(\textsf {predict}\hbox {-}\textsf {time} \) for our protocol that successfully predicts when honest parties are done for the round; honest parties lose their activation in a predictable manner when they get \(\textsc {Maintain} \) as input. Moreover, a simulator can easily simulate the real world execution in the eyes of \(\mathcal {Z} \), since it has all the information it needs to simulate honest parties’ behavior and functionalities \(\mathcal {W}_{\textsc {flt}} (\mathcal {F}_{\textsc {auth}})\), \(\mathcal {F}_{\textsc {auth}} \), and \(\mathcal {F}_{\textsc {sig}} \). Finally, due to the properties of the protocol, also proved in [1], all parties are going to agree on the same key/identity set \(\mathcal N\), and thus provide the same responses on a \(\textsc {Retrieve} \) command from \(\mathcal {Z} \). We proceed to state our theorem.

Theorem 1

Let \(n>2t\), p be a noticeable function, \(q\in \mathbb {N}^+\). The protocol \(\texttt {Graded}\hbox {-}{} \texttt {Agreement}\) UC-realizes functionality \(\mathcal {F}_{\textsc {reg}} ^{ \frac{4 n^2 \lambda }{ \min (1,pq)} + 2n +3 }\) in the \((\mathcal {G}_{\textsc {clock}},\) \( \mathcal {F}_{\textsc {auth}},\) \(\mathcal {W}_{\textsc {flt}} ^{p,q}(\mathcal {F}_{\textsc {auth}}),\) \(\mathcal {F}_{\textsc {sig}})\)-hybrid model.

Proof

Let \(r=\frac{4 n^2 \lambda }{ \min (1,pq)}\), and assume w.l.o.g. that \(p\cdot q \le 1\). We start by making some observations about the protocol.

Claim

The set \(\mathcal K\) of each honest party, at the end of round \(r+1\), will contain the keys of all other honest parties, with overwhelming probability in \(\lambda \).

Proof

We first show that the claim holds for a single honest party. Let random variable \(X_i\) be equal to 1 if the i-th invocation of \(\textsc {send} \) to \(\mathcal {W}_{\textsc {flt}} ^{p,q}(\mathcal {F}_{\textsc {auth}})\) by some honest party P is successful, and 0 otherwise. It holds that \(\Pr [X_i=1] = p\), and that \(X_1,\ldots ,X_{r\cdot q}\) are independent random variables; each party invokes \(\textsc {send} \) exactly \(r \cdot q\) times up to round r. Let \(X= \sum _{i=1}^{r q} X_i\). By an application of the Chernoff bound, it holds that:

$$\Pr [ X \le (1-\frac{1}{4t})pqr ] = \Pr [ X \le (1-\frac{1}{4t})\mathbb {E}[X]] \le e^{-\varOmega (\lambda )} $$

Since X is an integer, with overwhelming probability each honest party will send at least \(\lceil (1-\frac{1}{4t})pqr \rceil \) messages to each other party. Hence, its key will be included in \(\mathcal K\). By an application of the union bound the claim follows.    \(\dashv \)
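The concentration argument can be sanity-checked with a quick Monte Carlo estimate (illustrative only; the proof itself relies on the Chernoff bound, not on simulation, and the parameter values below are arbitrary):

```python
import random

def fraction_below_threshold(p, q, r, t, trials=2000, seed=1):
    """Estimate how often an honest party's number of successful sends X
    (out of r*q attempts, each succeeding with probability p) falls below
    the (1 - 1/(4t)) * p*q*r threshold used in the claim."""
    rng = random.Random(seed)
    threshold = (1 - 1 / (4 * t)) * p * q * r
    low = sum(
        sum(rng.random() < p for _ in range(r * q)) <= threshold
        for _ in range(trials))
    return low / trials
```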

In addition to the previous claim, we also note two things: (i) the grade of each such key will be 0, and (ii) due to the correctness of the signature scheme, all honest parties will add the associated key \(\hat{pk}\) and the correct owner of key pk to \(\mathcal M\). These two facts will be useful later, when we argue that all honest keys make it to the final list of keys \(\mathcal N\), along with their correct owners.

Next, we show that the total number of keys generated will be at most n.

Claim

The set \(\mathcal K\) of each honest party contains at most n elements, with overwhelming probability.

Proof

As before, let \(Z = \sum _{i=1}^{qt(r+n)} Z_i\) denote the number of successful attempts of the adversary to send a message through \(\mathcal {W}_{\textsc {flt}} (\mathcal {F}_{\textsc {auth}})\). Note that, starting from round 1, she has \(r+n\) rounds at her disposal to send messages. After some computations, we can show that:

$$ (1+\frac{1}{4t})\mathbb {E}[Z] = (1+\frac{1}{4t})pqt(r+n) \le (1-\frac{1}{4t})pqr(t+1) $$
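The omitted computation can be reconstructed as follows, a sketch under the stated parameters (we divide both sides by pq and use \(t\ge 1\), \(t<n\), and \(r = 4n^2\lambda /(pq)\)):

```latex
\begin{align*}
(1+\tfrac{1}{4t})\,t(r+n) &= tr + tn + \tfrac{r}{4} + \tfrac{n}{4},\\
(1-\tfrac{1}{4t})\,r(t+1) &= tr + r - \tfrac{r}{4} - \tfrac{r}{4t}.
\end{align*}
% Cancelling the common term tr, the inequality is equivalent to
%   n(t + 1/4) <= r(1/2 - 1/(4t)).
% For t >= 1 we have 1/2 - 1/(4t) >= 1/4, so it suffices that
%   r >= 4n(t + 1/4) = 4nt + n,
% which holds since r = 4 n^2 \lambda / (pq) >= 4 n^2 \lambda > 4nt + n.
```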

By the Chernoff bound, it holds that:

$$ \Pr [ Z \ge (1-\frac{1}{4t})pqr(t+1) ] \le \Pr [ Z \ge (1+\frac{1}{4t})\mathbb {E}[Z]] \le e^{-\varOmega (\lambda )} $$

Note now that \(\lceil (1-\frac{1}{4t})pqr \rceil \) different messages are required for a new key to be added to \(\mathcal K\). It follows that the adversary will add at most t keys of its choice to \(\mathcal K\). Moreover, by the design of the protocol, honest parties will add at most \(n-t\) keys to \(\mathcal K\). Thus, the set \(\mathcal K\) of any honest party will contain at most n keys, with overwhelming probability.    \(\dashv \)

Next, note that if an honest party adds a key to \(\mathcal K\) with grade \(g<n\), then, since the relevant messages for this key are multicast to all other parties in the network together with an additional valid signature, all honest parties will add the same key to \(\mathcal K\) with grade at most \(g+1\).

Using all facts proved above, we can now proceed and show that during the Broadcast phase of the protocol, all honest parties will reliably broadcast set \(\mathcal M\). Moreover, the adversary will not be able to confuse them about her broadcast input, if any. We start by arguing about the values broadcast by honest parties.

Claim

At the end of round \(r+2n+2\), the set \(\mathcal N\) of each honest party will contain the keys of all honest parties, along with their correct identity, with overwhelming probability.

Proof

Let P be some honest party, \((pk,\hat{pk})\) be her public keys, \(\mathcal K',\mathcal M'\) be her key sets, and \(m=(\mathcal{M'},pk)\). By our previous claim, all honest parties will have added (pk, 0) to their key set \(\mathcal K\). Moreover, they will all receive the message \((\hat{pk},P)\) signed w.r.t. pk at round \(r+1\) by party P, and thus include \((\hat{pk},P)\) in \(\mathcal M\). Note that no honest party will include another entry related to P, as P will not send any other such message. Moreover, all parties will receive \((m,(pk,\sigma ))\), where \(\sigma \) is a valid signature for m. Hence, they will all add m to \(\mathcal T\). Again, due to unforgeability, they will not add any other entry related to pk to \(\mathcal T\). Hence, since \(\mathcal T\) has at most n elements (one for each key) and \(n > 2t\), \((\hat{pk},P)\) will be the only entry that appears exactly once with respect to P in at least n/2 of the sets in \(\mathcal T\). Thus, all honest parties will add \((\hat{pk},P)\) to \(\mathcal N\), and the claim follows.    \(\dashv \)
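The majority rule used in this claim can be sketched as follows (hypothetical helper; `broadcast_sets` stands for the sets \(\mathcal M\) recovered from \(\mathcal T\), each a set of (key, identity) pairs, and the "> n/2" threshold reflects the honest majority \(n > 2t\)):

```python
from collections import Counter

def majority_assign(broadcast_sets, n):
    """Sketch of the majority rule: keep a key/identity pair iff it appears,
    as the unique entry for that identity, in more than n/2 of the sets."""
    votes = Counter()
    for m_set in broadcast_sets:
        per_party = Counter(pid for (_, pid) in m_set)
        for key, pid in m_set:
            if per_party[pid] == 1:  # only unambiguous entries count
                votes[(key, pid)] += 1
    return {entry for entry, count in votes.items() if count > n / 2}
```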

Next, we argue that the key sets \(\mathcal N\) of all honest parties will be the same.

Claim

At the end of round \(r+2n+2\), all honest parties will have the same set \(\mathcal N\), with at most one entry per party, with overwhelming probability.

Proof

First, we argue that all honest parties have the same set \(\mathcal T\) at the end of round \(r+2n+2\). For the sake of contradiction, assume the opposite. This would imply that some honest party P has added \((m,pk) \in \mathcal{T}\) at some round \(r'\), while some other party \(P'\) has not. We consider two cases. If \(r'<r + 2n +2\), then P will forward the message relevant to entry \((m,pk)\) together with its own signature to all other parties. Since its key has grade 0, all other honest parties will add \((m,pk)\) to \(\mathcal T\) in the next round. On the other hand, if \(r'=r+2n+2\), it holds that \((m,pk)\) is signed by n keys in the set \(\mathcal K\) of P, and by our previous claims at least one of these keys belongs to an honest party. Thus, this party must have accepted this message earlier, and by our previous argument all other honest parties will also receive and add this message to \(\mathcal T\). This is a contradiction. Hence, honest parties agree on their entries in \(\mathcal T\).

Now, since all parties agree on \(\mathcal T\), and \(\mathcal N\) is a deterministic function of \(\mathcal T\), it is implied that they will also agree on \(\mathcal N\). Moreover, by construction each party P is associated with at most one key in \(\mathcal N\). The claim follows.    \(\dashv \)

Our last two claims imply that all parties agree on \(\mathcal N\), all honest parties will be represented, and at most one key will be assigned to each identity.

Having established these properties of the protocol, we next give a sketch of the simulator, which we denote by \(\mathcal {S}_{\textsc {2}} \). The first thing the simulator must deal with is clock updates. In the ideal world, clock updates sent by \(\mathcal {Z} \) to honest parties are directly forwarded to \(\mathcal {G}_{\textsc {clock}} \), which in turn notifies \(\mathcal {S}_{\textsc {2}} \). This is not the case in the real world, where parties send updates to \(\mathcal {G}_{\textsc {clock}} \) only after a sufficient number of \(\textsc {Maintain} \) and \(\textsc {clock} \hbox {-}\textsc {update} \) inputs have been provided by \(\mathcal {Z} \). We simulate this behavior by having \(\mathcal {S}_{\textsc {2}} \) deduce exactly when honest parties will send their updates in the real world, by keeping track of when \(\mathcal {F}_{\textsc {reg}} \) will send its clock update in the ideal world, as well as of the activations it gets after a \(\textsc {Maintain} \) command has been issued to \(\mathcal {F}_{\textsc {reg}} \) or a \(\textsc {clock} \hbox {-}\textsc {update} \) command has been issued to \(\mathcal {G}_{\textsc {clock}} \). Note that a new round starts only after either of the two commands has been issued, and thus \(\mathcal {S}_{\textsc {2}} \) has been activated.

Since \(\mathcal {S}_{\textsc {2}} \) can tell when parties are done for each round, it can also simulate the interaction of \(\mathcal {A} \) with \(\mathcal {W}_{\textsc {flt}} (\mathcal {F}_{\textsc {auth}})\), \(\mathcal {F}_{\textsc {auth}} \), and \(\mathcal {F}_{\textsc {sig}} \). It does so by simulating the behavior of honest parties. All the information needed to do this is public or, in the case of the honest parties’ signatures, can be faked by the simulator itself. Note that care has been taken so that \(\mathcal {S}_{\textsc {2}} \) never has to sign anything with the keys submitted to \(\mathcal {F}_{\textsc {reg}} \) for honest parties at any point in the protocol; it only signs with the keys generated by the parties themselves. This is the reason each party uses two different keys, pk and \(\hat{pk}\).

Finally, at round \(r+2n+2\) the simulator submits to \(\mathcal {F}_{\textsc {reg}} \) the keys that corrupted parties choose based on key set \(\mathcal N\); with overwhelming probability this set is the same for all honest parties. Thus, the response of \(\mathcal {F}_{\textsc {reg}} \) to any \(\textsc {Retrieve} \) query after this round is \(\mathcal N\). It follows that the view of \(\mathcal {Z} \) in the two executions is going to be indistinguishable, and the theorem follows.    \(\square \)

As discussed in the introduction, getting from an implementation of \(\mathcal {F}_{\textsc {reg}} \) where the keys are linked to their owners to standard MPC is fairly straightforward, using the modularity of the UC framework. As proved in [11], \(\mathcal {F}_{\textsc {reg}} \) can be used to realize the certified signature functionality (aka the certification functionality) \(\mathcal {F}_{\textsc {cert}} \), which, in turn, can be used to realize a Broadcast functionality even against adaptive adversaries [27], if we additionally assume the existence of secure channels; for details on implementing the secure channel functionality \(\mathcal {F}_{\textsc {sc}} \) from \(\mathcal {F}_{\textsc {auth}} \), we point to [14]. By plugging the Broadcast functionality into the honest-majority protocol (compiler) by Cramer et al. [17]—an adaptation of the protocol from [34] that tolerates adaptive corruptions—we obtain an MPC protocol which is adaptively secure.

Corollary 3

Let \(n>2t\), p be a noticeable function, and \(q\in \mathbb {N}^+\). Then, there exists a protocol that UC-realizes functionality \(\mathcal {F}_{\textsc {mpc}} \) in the \((\mathcal {G}_{\textsc {clock}},\) \( \mathcal {F}_{\textsc {sc}},\) \(\mathcal {W}_{\textsc {flt}} ^{p,q}(\mathcal {F}_{\textsc {auth}}),\) \(\mathcal {F}_{\textsc {sig}})\)-hybrid model.

6 Removing the Freshness Assumption

So far, we have assumed that all parties, including the adversary, get access to the CRS at the same time, i.e., when the protocol starts. In this section, we give a high level overview of how our analysis can be adapted to the case where we remove the fresh CRS and instead assume the existence of a random oracle. The protocol we devise is based on techniques developed initially in [1].

The main function of the CRS in the implementation of \(\mathcal {W}_{\textsc {flt}} (\mathcal {F}_{\textsc {auth}})\) is to ensure that all parties agree on which hash evaluations are “fresh”, i.e., performed after the CRS became known. Consequently, sent messages are fully transferable, in the sense that they can be forwarded an arbitrary number of times and still be valid. Without a CRS, we have to sacrifice full transferability and instead settle for a limited version of the property (cf. [33]).

Next, we describe the filtering functionality we implement in this setting, denoted . The functionality has the same syntax as \(\mathcal {W}_{\textsc {flt}} (\mathcal {F}_{\textsc {auth}})\), with one difference: each message sent is accompanied by a grade g, which signifies the number of times this message can be forwarded by different parties, and is also related to when the message was initially sent. For example, if party \(P_1\) receives a message with grade 2, the message can be forwarded to party \(P_2\) with grade 1, and party \(P_2\) can forward it to party \(P_3\) with grade 0. Party \(P_3\) cannot forward the message any further, while party \(P_2\) can still forward the message to any other party it wants to. Moreover, the initial grade assigned to a message sent using the \(\textsc {send} \) command is equal to the round in which this command was issued minus 1, i.e., messages with higher grades can only be computed at later rounds, for honest parties. The adversary has a small advantage: the initial grade of the messages she sends is equal to the current round. Finally, we enforce the participation of honest parties in the same way we do for the \(\mathcal {F}_{\textsc {reg}} \) functionality in Sect. 5. Next, we formally describe .

figure k
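The grade-limited transferability described above can be sketched with a small illustrative class (names are our own; the real functionality also ties grades to rounds and senders):

```python
class GradedMessage:
    """Sketch of grade-limited transferability: each forward to a new party
    decrements the grade; a message with grade 0 cannot be forwarded."""

    def __init__(self, payload, grade):
        self.payload = payload
        self.grade = grade

    def forward(self):
        """Produce the copy a recipient may pass on, with grade reduced by 1."""
        if self.grade == 0:
            raise ValueError("grade exhausted: message cannot be forwarded")
        return GradedMessage(self.payload, self.grade - 1)
```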

The way we implement this functionality is by introducing a repeated challenge-exchange procedure to protocol Wrapped-Channel: at each round, parties sample a random string, which they hash together with the challenges sent by other parties in the previous round to compute a new challenge, which they multicast to the network. The challenge computed at each round is used as a prefix to the queries they make to the restricted RO functionality. If a query is successful, they send the query value along with a pre-image of the challenge, so that other parties can be sure that the challenge they multicast earlier was used in the computation, thus ensuring freshness. The receiving party can forward the message by also including a pre-image of its own challenge, thus ensuring that all honest parties will accept it as valid. Obviously, in the first round of the protocol parties cannot send any message, as they have not yet exchanged any random challenges; in the second round messages cannot be transferred, in the third they can be transferred once, and so on. We formally describe the new protocol and state our lemma next. The security proof proceeds as that of Lemma 2, except that we additionally have to show that the adversary cannot precompute hashes related to some challenge at a round earlier than the one in which this challenge was generated. Due to lack of space, we omit it.

figure l
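The per-round challenge chaining can be sketched as follows, with SHA-256 standing in for the random oracle (function names and the sorting of received challenges are illustrative assumptions):

```python
import hashlib

def next_challenge(own_seed: bytes, received_challenges) -> bytes:
    """Per-round challenge update (sketch): hash a fresh local seed together
    with the challenges multicast by the other parties in the previous round."""
    h = hashlib.sha256(own_seed)
    for ch in sorted(received_challenges):  # canonical order, for determinism
        h.update(ch)
    return h.digest()

def query_with_challenge(challenge: bytes, message: bytes, nonce: bytes) -> int:
    """RO query prefixed with the current challenge: the adversary cannot
    have precomputed it before the challenge existed, ensuring freshness."""
    return int.from_bytes(
        hashlib.sha256(challenge + message + nonce).digest(), "big")
```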

Lemma 3

Let \(p:=\frac{D}{2^\lambda }\). The protocol \(\texttt {Wrapped-Channel-Lim}^{D,q}\) UC-realizes functionality in the -hybrid model.

Next, we observe that is sufficient to implement \(\mathcal {F}_{\textsc {reg}} \). The protocol is similar to protocol Graded-Agreement, with two differences: (i) parties start sending messages through after \(n+2\) rounds have passed, and (ii) during the KeyAgreement phase of the protocol, parties take into account messages with grade greater than n in the first round, greater than \(n-1\) in the second, ..., greater than 0 in the last one. The rest of the protocol is exactly the same. Note that parties can always forward the messages received during the KeyAgreement phase, since the grade of the relevant messages is greater than 0. The analysis of [1] is built on the same idea.

As a result, we are able to implement \(\mathcal {F}_{\textsc {reg}} \), and subsequently \(\mathcal {F}_{\textsc {mpc}} \), without having to assume a “fresh” CRS. With the techniques described above, the following theorem can be proven.

Theorem 2

Let \(n>2t\) and \(q\in \mathbb {N}^+\). Then, there exists a protocol that UC-realizes functionality \(\mathcal {F}_{\textsc {mpc}} \) in the \((\mathcal {G}_{\textsc {clock}},\) \( \mathcal {F}_{\textsc {sc}},\) , -hybrid model.