1 Introduction

Motivating Scenario. Consider the following setting: a set of n hospitals publish encryptions of their sensitive patient information \(x_1,\ldots ,x_n\). At a later stage, for the purposes of medical research, they wish to securely evaluate a circuit \(\mathsf {C} _1\) on their joint data by publishing just one additional message; that is, they wish to jointly compute \(\mathsf {C} _1(x_1,\ldots ,x_n)\) by each broadcasting a single message, without revealing anything beyond the output of the computation. Can they do so? Furthermore, what if they later want to additionally compute circuits \(\mathsf {C} _2, \mathsf {C} _3,\ldots \) on the same set of inputs?

Seminal results on secure multi-party computation (MPC) left quite a bit to be desired when considering the above potential application. In particular, the initial construction of secure multi-party computation by Goldreich, Micali and Wigderson [GMW87] required parties to interact over a large number of rounds. Even though the round complexity was soon reduced to a constant by Beaver, Micali and Rogaway [BMR90], these protocols fall short of achieving the above vision, where interaction is reduced to the absolute minimum.

Making progress towards this goal, Garg et al. [GGHR14] gave the first constructions of two-round MPC protocols, assuming indistinguishability obfuscation [GGH+13] (or witness encryption [GLS15, GGSW13]) and one-way functions. A very nice feature of the Garg et al. construction is that the first round message is indeed reusable across multiple executions, thereby achieving the above vision. Follow-up works realized two-round MPC protocols based on significantly weaker computational assumptions. In particular, two-round MPC protocols based on LWE were obtained [MW16, BP16, PS16], followed by a protocol based on bilinear maps [GS17, BF01, Jou04]. Finally, this line of work culminated with the recent works of Benhamouda and Lin [BL18] and Garg and Srinivasan [GS18], who gave constructions based on the minimal assumption that two-round oblivious transfer (OT) exists.

However, in these efforts targeting two-round MPC protocols with security based on weaker computational assumptions, compromises were made in terms of reusability. In particular, among the above-mentioned results, only the obfuscation-based protocol of Garg et al. [GGHR14] and the lattice-based protocols [MW16, BP16, PS16] offer reusability of the first message across multiple executions. Reusability of the first round message is quite desirable. In fact, even in the two-party setting, this problem has received significant attention and has been studied under the notion of non-interactive secure computation [IKO+11, AMPR14, MR17, BJOV18, CDI+19]. In this setting, a receiver first publishes an encryption of its input and later, any sender may send a single message (based on an arbitrary circuit) allowing the receiver to learn the output of the circuit on its input. The multiparty case, which we study in this work, can be seen as a natural generalization of the problem of non-interactive secure computation. In this work we ask:

Can we obtain reusable two-round MPC protocols from assumptions not based on lattices?

1.1 Our Result

In this work, we answer the above question by presenting a general compiler that obtains reusable two-round MPC, starting from any two-round MPC and using homomorphic secret sharing (HSS) [BGI16] and pseudorandom functions in \(NC^1\). In a bit more detail, our main theorem is:

Theorem 1 (Main Theorem)

Let \(\mathcal {X} \in \{\)semi-honest in plain model, malicious in common random/reference string model\(\}\). Assuming the existence of a two-round \(\mathcal {X}\)-MPC protocol, an HSS scheme, and pseudorandom functions in \(NC^1\), there exists a reusable two-round \(\mathcal {X}\)-MPC protocol.

We consider the setting where an adversary can corrupt an arbitrary number of parties. We assume that parties have access to a broadcast channel.

Benhamouda and Lin [BL18] and Garg and Srinivasan [GS18] showed how to build a two-round MPC protocol from the DDH assumption. The works of Boyle et al. [BGI16, BGI17] constructed an HSS scheme assuming DDH. Instantiating the primitives in the above theorem, we get the following corollary:

Corollary 2

Let \(\mathcal {X} \in \{\)semi-honest in plain model, malicious in common random/reference string model\(\}\). Assuming DDH, there exists a reusable two-round \(\mathcal {X}\)-MPC protocol.

Previously, constructions of reusable two-round MPC were only known assuming indistinguishability obfuscation [GGHR14, GS17] (or, witness encryption [GLS15, GGSW13]) or were based on multi-key fully-homomorphic encryption (FHE) [MW16, PS16, BP16]. Furthermore, one limitation of the FHE-based protocols is that they are in the CRS model even for the setting of semi-honest adversaries.

We note that the two-round MPC protocols cited above additionally achieve overall communication independent of the computed circuit. This is not the focus of this work. Instead, the aim of this work is to realize two-round MPC with reusability, without relying on lattices. As per our current understanding, MPC protocols with communication independent of the computed circuit are only known using lattice techniques (i.e., FHE [Gen09]). Interestingly, we use HSS, which was originally developed to improve communication efficiency in two-party secure computation protocols, to obtain reusability.

First Message Succinct Two-Round MPC. At the heart of this work is a construction of a first message succinct (FMS) two-round MPC protocol, that is, a two-round MPC protocol where the first message of the protocol is computed independently of the circuit being evaluated. In particular, the parties do not need to know the description of the circuit that will eventually be computed over their inputs in the second round. Furthermore, parties do not even need to know the size of the circuit to be computed in the second round. This allows parties to publish their first round messages and later compute any arbitrary circuit on their inputs. Formally, we show the following:

Theorem 3

Let \(\mathcal {X} \in \{\)semi-honest in plain model, malicious in common random/reference string model\(\}\). Assuming DDH, there exists a first message succinct two-round \(\mathcal {X}\)-MPC protocol.

Such protocols were previously only known based on \(iO\) [GGHR14, DHRW16] or assumptions currently needed to realize FHE [MW16, BP16, PS16, ABJ+19]. Note that for the learning-with-errors (LWE) based versions of these protocols, the first message can only be computed knowing the depth (or, an upper bound on the maximum depth) of the circuit to be computed. We find the notion of first message succinct two-round MPC quite natural and expect it to be relevant for several other applications. In addition to using HSS in a novel manner, our construction benefits from the powerful garbling techniques realized in recent works [LO13, GHL+14, GLOS15, GLO15, CDG+17, DG17b].

From First Message Succinctness to Reusability. At first glance, the notion of first message succinctness might seem like a minor enhancement. However, we show that this “minor looking” enhancement suffices to generically enable reusable two-round MPC (supporting an arbitrary number of computations). More formally:

Theorem 4

Let \(\mathcal {X} \in \{\)semi-honest in plain model, malicious in common random/reference string model\(\}\). Assuming a first message succinct two-round MPC protocol, there exists a reusable two-round \(\mathcal {X}\)-MPC protocol.

Two recent independent works have also explicitly studied reusable two-round MPC, obtaining a variety of results. First, Benhamouda and Lin [BL20] construct reusable two-round MPC from assumptions on bilinear maps. Their techniques are quite different from those used in this paper and, while they need stronger assumptions than we do, their protocol does have the advantage that the number of parties participating in the second round need not be known when generating first round messages. In our protocol, the number of parties in the system is a parameter used to generate the first round messages. Second, Ananth et al. [AJJ20] construct semi-honest reusable two-round MPC from lattices in the plain model. Prior work from lattices [MW16] required a CRS even in the semi-honest setting. The work of Ananth et al. [AJJ20] includes essentially the same transformation from “first message succinct” MPC to reusable MPC that constitutes the third step of our construction (see Sect. 2.3).

2 Technical Overview

In this section, we highlight our main ideas for obtaining reusability in two-round MPC. Our construction is achieved in three steps. Our starting point is the recently developed primitive of Homomorphic Secret Sharing (HSS), which realizes the following scenario. A secret s is shared among two parties, who can then non-interactively evaluate a function f over their respective shares, and finally combine the results to learn f(s), but nothing more.
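To make the HSS interface concrete, the following minimal Python sketch instantiates the two-party syntax (the names share, eval_share, and reconstruct are ours) for the special case of linear functions \(f(s) = a\cdot s \bmod p\), where local evaluation is possible and reconstruction is simply addition. General HSS supports much richer function classes; this toy does not.

```python
import secrets

p = 2**61 - 1  # an illustrative prime modulus

def share(s):
    """Additively share a secret s between two parties."""
    s0 = secrets.randbelow(p)
    return s0, (s - s0) % p

def eval_share(a, sh):
    """Each party non-interactively evaluates f(s) = a*s on its own share."""
    return (a * sh) % p

def reconstruct(y0, y1):
    """Additive reconstruction: the sum of the evaluated shares is f(s)."""
    return (y0 + y1) % p
```

For instance, reconstructing eval_share(7, sh0) and eval_share(7, sh1) for shares of 123 yields \(7\cdot 123 \bmod p\), while either share alone is uniformly random and reveals nothing about s.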

2.1 Step 1: Overview of the \(\mathsf {scHSS} \) Construction

First, we show how to use a “standard” HSS (for only two parties, and where the reconstruction algorithm is simply addition) to obtain a new kind of HSS, which we call sharing compact HSS (\(\mathsf {scHSS} \)). The main property we achieve with \(\mathsf {scHSS} \) is the ability to share a secret among n parties, for any n, while maintaining compactness of the share size. In particular, as in standard HSS, the sharing algorithm will be independent of the circuit that will be computed on the shares. We actually obtain a few other advantages over constructions of standard HSS [BGI16, BGI17], namely, we get negligible rather than inverse polynomial evaluation error, and we can support computations of any polynomial-size circuit. To achieve this, we sacrifice compactness of the evaluated shares, simplicity of the reconstruction algorithm, and security for multiple evaluations. However, it will only be crucial for us that multiple parties can participate, and that the sharing algorithm is compact.

The approach: A sharing-compact HSS scheme consists of three algorithms, \(\mathsf {Share},\mathsf {Eval},\) and \(\mathsf {Dec} \). Our construction follows the compiler of [GS18] that takes an arbitrary MPC protocol and squishes it to two rounds. At a high level, to share a secret x among n parties, we have the \(\mathsf {Share} \) algorithm first compute an n-party additive secret sharing \(x_1,\dots ,x_n\) of x. Then, it runs the first round of the squished n-party protocol on behalf of each party j with input \(x_j\). Finally, it sets the j’th share to be all of the first round messages, plus the secret state of the j’th party. The \(\mathsf {Eval} \) algorithm run by party j will simply run the second round of the MPC, and output the resulting message. The \(\mathsf {Dec} \) algorithm takes all second round messages and reconstructs the output.
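As a small illustration of the first step of \(\mathsf {Share} \), the Python sketch below computes an n-party additive (XOR) secret sharing of the secret; the bundling of first round messages and per-party secret state is omitted.

```python
import secrets

def additive_share(x: bytes, n: int):
    """Split x into n shares x_1,...,x_n whose XOR equals x."""
    shares = [secrets.token_bytes(len(x)) for _ in range(n - 1)]
    last = x
    for sh in shares:  # fold the random shares into the last one
        last = bytes(a ^ b for a, b in zip(last, sh))
    return shares + [last]

def reconstruct(shares):
    """XOR all n shares together to recover x."""
    out = bytes(len(shares[0]))
    for sh in shares:
        out = bytes(a ^ b for a, b in zip(out, sh))
    return out
```

Any n-1 of the shares are jointly uniform, so the secret remains hidden unless all n shares are combined.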

Recall that we aim for a sharing-compact HSS, which in particular means that the \(\mathsf {Share} \) algorithm must be independent of the computation supported during the \(\mathsf {Eval} \) phase. Thus, the first observation that makes the above approach viable is that the first round of the two-round protocol that results from the [GS18] compiler is independent of the particular circuit being computed. Unfortunately, it is not generated independently of the size of the circuit to be computed, so we must introduce new ideas to remove this size dependence.

The [GS18] compiler: Before further discussing the size dependence issue, we recall the [GS18] compiler. The compiler is applied to any conforming MPC protocol, a notion defined in [GS18]. Roughly, a conforming protocol operates via a sequence of actions \(\phi _1,\dots ,\phi _T\). At the beginning of the protocol, each party j broadcasts a one-time pad of their input, and additionally generates some secret state \(v_j\). The encrypted inputs are arranged into a global public state \(\mathsf {st} \), which will be updated throughout the protocol. At each step t, the action \(\phi _t = (j,f,g,h)\) is carried out by having party j broadcast the bit \(\gamma _t :=\mathsf {NAND} (\mathsf {st} _f \oplus v_{j,f}, \mathsf {st} _g \oplus v_{j,g}) \oplus v_{j,h}\). Everybody then updates the global state by setting \(\mathsf {st} _h :=\gamma _t\). We require that the transcript of the protocol is publicly decodable, so that after the T actions are performed, anybody can learn the (shared) output by inspecting \(\mathsf {st} \).
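A single action step can be written out directly. In the sketch below (names ours), the public state \(\mathsf {st} \) is a list of bits and v maps each party to its private mask bits:

```python
def apply_action(st, v, action):
    """One action phi_t = (j, f, g, h) of a conforming protocol:
    party j computes gamma = NAND(st[f] ^ v[j][f], st[g] ^ v[j][g]) ^ v[j][h],
    broadcasts gamma, and everyone writes it into the public state at h."""
    j, f, g, h = action
    a = st[f] ^ v[j][f]          # unmask the two input bits...
    b = st[g] ^ v[j][g]
    gamma = (1 - (a & b)) ^ v[j][h]  # ...NAND them, re-mask with v[j][h]
    st[h] = gamma                # public state update
    return gamma
```

Note that the broadcast bit gamma is masked by v[j][h], so observers learn nothing about the underlying NAND output, yet st[h] ^ v[j][h] equals the true wire value for party j.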

Now, the [GS18] compiler works as follows. In the first round of the compiled protocol, each party runs the first round of the conforming protocol and broadcasts a one-time pad of their input. In the second round, each party generates a set of garbled circuits that non-interactively implement the computation phase of the conforming protocol. In particular, this means that an evaluator can use the garbled circuits output by each party to carry out each action \(\phi _1,\dots ,\phi _T\), learn the resulting final \(\mathsf {st} \), and recover the output. The garbled circuits operate as follows. Each garbled circuit for party j takes as input the public state \(\mathsf {st} \), and outputs information that allows recovery of input labels for party j’s next garbled circuit, corresponding to an updated version of the public state. To facilitate this, the initial private state of each party must be hard-coded into each of their garbled circuits.

In more detail, consider a particular round t and action \(\phi _t = (j^*,f,g,h)\). Each party will output a garbled circuit for this round. We refer to party \(j^*\) as the “speaking” party for this round. Party \(j^*\)’s garbled circuit will simply use its private state to compute the appropriate NAND gate and update the public state accordingly, outputting the correct labels for party \(j^*\)’s next garbled circuit, and the bit \(\gamma _t\) to be broadcast. It remains to show how the garbled circuit of each party \(j \ne j^*\) can incorporate this bit \(\gamma _t\), revealing the correct input label for their next garbled circuit. We refer to party j as the “listening” party. In [GS18], this was facilitated by the use of a two-round oblivious transfer (OT). In the first round, each pair of parties \((j,j^*)\) engages in the first round of multiple OT protocols with j acting as the sender and \(j^*\) acting as the receiver. Specifically, \(j^*\) sends a set of receiver messages to party j. Then during action t, party j’s garbled circuit responds with j’s sender message, where the sender’s two strings are garbled input labels \(\mathsf {lab} _0,\mathsf {lab} _1\) of party j’s next garbled circuit. Party \(j^*\)’s garbled circuit reveals the randomness used to produce the receiver’s message with the appropriate receiver bit \(\gamma _t\). This allows for public recovery of the label \(\mathsf {lab} _{\gamma _t}\).

However, note that each of the T actions requires its own set of OTs to be generated in the first round. Each is then “used up” in the second round, as the receiver’s randomness is revealed in the clear. This is precisely what makes the first round of the resulting MPC protocol depend on the size of the circuit to be computed: the parties must engage in the first round of \(\Omega (T)\) oblivious transfers during the first round of the MPC protocol.

Pair-Wise Correlations: As also observed in [GIS18], the point of the first round OT messages was to set up pair-wise correlations between parties that were then exploited in the second round to facilitate the transfer of a bit from party \(j^*\)’s garbled circuit to party j’s garbled circuit. For simplicity, assume for now that when generating the first round, the parties j and \(j^*\) already know the bit \(\gamma _t\) that is to be communicated during action t. This is clearly not the case, but this issue is addressed in [GS18, GIS18] (and here) by generating four sets of correlations, corresponding to each of the four possible settings of the two bits of the public state (\(\alpha ,\beta \)) at the indices \((f,g)\) corresponding to action \(\phi _t = (j^*,f,g,h)\).

Now observe that the following correlated randomness suffices for this task. Party j receives uniformly random strings \(z^{(0)}, z^{(1)} \in \{0,1\}^\lambda \), and party \(j^*\) receives the string \(z^{(*)} := z^{(\gamma _t)}\). Recall that party j has in mind garbled input labels \(\mathsf {lab} _0,\mathsf {lab} _1\) for its next garbled circuit, and wants to reveal \(\mathsf {lab} _{\gamma _t}\) in the clear, while keeping \(\mathsf {lab} _{1-\gamma _t}\) hidden. Thus, party j’s garbled circuit will simply output \((\mathsf {lab} _0 \oplus z^{(0)}, \mathsf {lab} _1 \oplus z^{(1)})\), and party \(j^*\)’s garbled circuit outputs \(z^{(*)}\). Now, instead of generating first round OT messages, the \(\mathsf {Share} \) algorithm could simply generate all of the pair-wise correlations and include them as part of the shares. Of course, the number of correlations necessary still depends on T, so we will need the \(\mathsf {Share} \) algorithm to produce compact representations of these correlations.
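A short Python sketch of this correlated-randomness trick (names ours; in the actual scheme the dealer role is played by \(\mathsf {Share} \), and the two outputs come from the parties’ garbled circuits):

```python
import secrets

LAM = 16  # mask/label length in bytes (stand-in for the security parameter)

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def deal(gamma):
    """Dealer: listening party j gets (z0, z1); speaking party j* gets z^(gamma)."""
    z0, z1 = secrets.token_bytes(LAM), secrets.token_bytes(LAM)
    return (z0, z1), (z0, z1)[gamma]

def listening_output(labels, z_pair):
    """Party j's garbled circuit outputs both labels, each under its own mask."""
    return xor(labels[0], z_pair[0]), xor(labels[1], z_pair[1])
```

Since \(\gamma _t\) is broadcast, anyone can unmask cts[gamma] with \(j^*\)'s string \(z^{(*)}\) to recover exactly \(\mathsf {lab} _{\gamma _t}\); the other label stays hidden under the unknown mask \(z^{(1-\gamma _t)}\).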

Compressing Using Constrained PRFs: Consider a pair of parties \((j,j^*)\), and let \(T_{j^*}\) be the set of actions where \(j^*\) is the speaking party. We need the output of \(\mathsf {Share} \) to (implicitly) include random strings \(\{z^{(0)}_{t},z^{(1)}_{t}\}_{t \in T_{j^*}}\) in j’s share and \(\{z^{(\gamma _t)}_{t}\}_{t \in T_{j^*}}\) in \(j^*\)’s share. The first set of strings would be easy to represent compactly with a PRF key \(k_j\), letting \(z^{(b)}_{t} := \mathsf {PRF} (k_j,(t,b))\). However, giving the key \(k_j\) to party \(j^*\) would reveal too much, as it is imperative that we keep \(\{z^{(1-\gamma _t)}_{t}\}_{t \in T_{j^*}}\) hidden from party \(j^*\)’s view. We could instead give party \(j^*\) a constrained version of the key \(k_j\) that only allows \(j^*\) to evaluate \(\mathsf {PRF} (k_j,\cdot )\) on points \((t,\gamma _t)\). We expect that this idea can be made to work, and one could hope to present a construction based on the security of (single-key) constrained PRFs for constraints in \(NC^1\) (plus a standard PRF computable in \(NC^1\)). Such a primitive was achieved in [AMN+18] based on assumptions in a traditional group, however, we aim for a construction from weaker assumptions.
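The compact representation for party j’s side is immediate: the whole family of masks is derived from one short key. The sketch below uses HMAC-SHA256 as an illustrative stand-in for the PRF (not the paper’s instantiation, which needs a PRF in \(NC^1\)):

```python
import hmac, hashlib

def z(k_j: bytes, t: int, b: int) -> bytes:
    """z_t^(b) := PRF(k_j, (t, b)), with HMAC-SHA256 standing in for the PRF."""
    return hmac.new(k_j, f"{t},{b}".encode(), hashlib.sha256).digest()[:16]

# Party j can regenerate every mask on demand from k_j alone:
k_j = b"\x00" * 16
masks = {(t, b): z(k_j, t, b) for t in range(4) for b in (0, 1)}
```

The difficulty discussed above is entirely on party \(j^*\)'s side: handing it k_j would let it compute both z(k_j, t, 0) and z(k_j, t, 1), which is exactly what must be prevented.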

Utilizing HSS: Inspired by [BCGI18, BCG+19], we take a different approach based on HSS. Consider sharing the PRF key \(k_j\) between parties j and \(j^*\), producing shares \(\mathsf {sh} _j\) and \(\mathsf {sh} _{j^*}\), and additionally giving party j the key \(k_j\) in the clear. During action t, we have parties j and \(j^*\) (rather, their garbled circuits) evaluate the following function on their respective shares: if \(\gamma _t = 0\), output \(0^\lambda \) and otherwise, output \(\mathsf {PRF} (k_j,t)\). Assuming that the \(\mathsf {HSS} \) evaluation is correct, and using the fact that \(\mathsf {HSS} \) reconstruction is additive (over \(\mathbb {Z} _2\)), this produces a pair of outputs \((y_j,y_{j^*})\) such that if \(\gamma _t = 0\), \(y_j \oplus y_{j^*} = 0^\lambda \), and if \(\gamma _t = 1\), \(y_j \oplus y_{j^*} = \mathsf {PRF} (k_j,t)\). Now party j sets \(z^{(0)}_{t} := y_j\) and \(z^{(1)}_{t} := y_j \oplus \mathsf {PRF} (k_j,t)\), and party \(j^*\) sets \(z^{(*)}_{t} := y_{j^*}\). This guarantees that \(z^{(*)}_{t} = z^{(\gamma _t)}_{t}\) and that \(z^{(1-\gamma _t)}_{t} = z^{(*)}_{t} \oplus \mathsf {PRF} (k_j,t)\), which should be indistinguishable from random to party \(j^*\), who doesn’t have \(k_j\) in the clear.
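The XOR relations above can be checked mechanically. In the sketch below the HSS evaluation itself is stubbed out by a function that directly produces two output shares with the promised XOR (in the real scheme these come from HSS evaluation on shares of \(k_j\)); HMAC-SHA256 again stands in for the PRF:

```python
import hmac, hashlib, secrets

def prf(k, t):
    return hmac.new(k, str(t).encode(), hashlib.sha256).digest()[:16]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def hss_eval_stub(k, t, gamma):
    """Stand-in for the two HSS output shares: their XOR is 0^lambda if
    gamma = 0 and PRF(k, t) if gamma = 1."""
    y_j = secrets.token_bytes(16)
    target = prf(k, t) if gamma else bytes(16)
    return y_j, xor(y_j, target)

k = secrets.token_bytes(16)
for gamma in (0, 1):
    y_j, y_jstar = hss_eval_stub(k, 7, gamma)
    z0, z1 = y_j, xor(y_j, prf(k, 7))  # party j's derived pair (z^(0), z^(1))
    z_star = y_jstar                    # party j*'s derived share z^(*)
    assert z_star == (z0, z1)[gamma]    # z^(*) = z^(gamma) in both cases
```

Party \(j^*\)'s missing mask is z_star XOR prf(k, 7), which looks random without k.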

Tying Loose Ends: This approach works, except that, as alluded to before, party j’s garbled circuit will not necessarily know the bit \(\gamma _t\) when evaluating its \(\mathsf {HSS} \) share. This is handled by deriving \(\gamma _t\) based on public information (some bits \(\alpha ,\beta \) of the public shared state), and the private state of party \(j^*\). Since party \(j^*\)’s private state cannot be public information, this derivation must happen within the \(\mathsf {HSS} \) evaluation, and in particular, the secret randomness that generates \(j^*\)’s private state must be part of the secret shared via \(\mathsf {HSS} \). In our construction, we compile a conforming protocol where each party \(j^*\)’s randomness can be generated by a PRF with key \(s_{j^*}\). Thus, we can share the keys \((k_j,s_{j^*})\) between parties j and \(j^*\), allowing them to compute output shares with respect to the correct \(\gamma _t\). Finally, note that the computation performed by \(\mathsf {HSS} \) essentially only consists of PRF evaluations. Thus, assuming a PRF in NC1 (which follows from DDH [NR97]), we only need to make use of HSS that supports evaluating circuits in NC1, which also follows from DDH [BGI16, BGI17].

Dealing with the \(1-1/\mathsf {poly} \) Correctness of HSS: We are not quite done, since the [BGI16, BGI17] constructions of \(\mathsf {HSS} \) only achieve correctness with \(1-1/\mathsf {poly} \) probability. At first glance, this appears to be straightforward to fix. To complete action \(\phi _t = (j^*,f,g,h)\), simply repeat the above \(\lambda \) times, now generating sets \(\{z^{(0)}_{t,p},z^{(1)}_{t,p}\}_{p \in [\lambda ]}\) and \(\{z^{(*)}_{t,p}\}_{p \in [\lambda ]}\), using the values \(\{\mathsf {PRF} (k_j,(t,p))\}_{p \in [\lambda ]}\). Party j now masks the same labels \(\mathsf {lab} _0,\mathsf {lab} _1\) with \(\lambda \) different masks, and to recover \(\mathsf {lab} _{\gamma _t}\), one can unmask each value and take the most frequently occurring string to be the correct label. This does ensure that our \(\mathsf {scHSS} \) scheme is correct except with negligible probability.
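Majority decoding over the \(\lambda \) repetitions is straightforward; a sketch with illustrative parameters (here 11 repetitions, 3 of which fail):

```python
from collections import Counter

def majority_decode(candidates):
    """Unmask every repetition and keep the most frequently occurring string."""
    return Counter(candidates).most_common(1)[0][0]

correct_label = b"L" * 16
# 8 correct unmaskings plus 3 garbage ones (failed HSS evaluations):
garbage = [bytes([i + 1]) * 16 for i in range(3)]
candidates = [correct_label] * 8 + garbage
recovered = majority_decode(candidates)
```

Since each failed repetition yields an essentially random string, the correct label wins the vote except with negligible probability once a majority of repetitions succeed.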

Unfortunately, the \(1/\mathsf {poly} \) correctness actually translates to a security issue with the resulting \(\mathsf {scHSS} \) scheme. In particular, it implies that an honest party’s evaluated share is indistinguishable from a simulated evaluated share with probability only \(1-1/\mathsf {poly} \). To remedy this, we actually use an \(n\lambda \)-party MPC protocol, and refer to each of the \(n\lambda \) parties as a “virtual” party. The \(\mathsf {Share} \) algorithm now additively secret shares the secret x into \(n\lambda \) parts, and each of the n real parties participating in the \(\mathsf {scHSS} \) receives the share of \(\lambda \) virtual parties. We are then able to show that for any set of honest parties, with overwhelming probability, there will exist at least one corresponding virtual party that is “simulatable”. The existence of a single simulatable virtual party is enough to prove the security of our construction.

At this point it is important to point out that, while the above strategy suffices to prove our construction secure for a single evaluation (where the circuit evaluated can be of any arbitrary polynomial size), it does not imply that our construction achieves reusability, in the sense that the shares output by \(\mathsf {Share} \) may be used to evaluate any unbounded polynomial number of circuits. Despite the fact that the PRF keys shared via \(\mathsf {HSS} \) should enable the parties to generate an unbounded polynomial number of pair-wise correlations, the \(1/\mathsf {poly} \) evaluation error of the \(\mathsf {HSS} \) will eventually break simulation security. Fortunately, as alluded to before, the property of sharing-compactness actually turns out to be enough to bootstrap our scheme into a truly reusable MPC protocol. The key ideas that allow for this will be discussed in Sect. 2.3.

2.2 Step 2: From \(\mathsf {scHSS} \) to \(\text {FMS}\) MPC

In the second step, we use a \(\mathsf {scHSS} \) scheme to construct a first message succinct two-round MPC protocol (in the rest of this overview we will call it \(\text {FMS}\) MPC). The main feature of a \(\mathsf {scHSS} \) scheme is that its \(\mathsf {Share} \) algorithm is independent of the computation that will be performed on the shares. Intuitively, this is very similar to the main feature offered by a \(\text {FMS}\) MPC protocol, in that the first round is independent of the circuit to be computed. Now, suppose that we have an imaginary trusted entity that learns everyone’s input \((x_1,\ldots ,x_n)\) and then gives each party i a share \(\mathsf {sh} _i\) computed as \((\mathsf {sh} _1,\ldots ,\mathsf {sh} _n)\leftarrow \mathsf {Share} (x_1\Vert \ldots \Vert x_n)\). Note that, due to sharing-compactness, this step is independent of the circuit \(\mathsf {C} \) to be computed by the \(\text {FMS}\) MPC protocol. After receiving their shares, each party i runs the \(\mathsf {scHSS} \) evaluation algorithm \(\mathsf {Eval} (i,\mathsf {C},\mathsf {sh} _i)\) to obtain their own output share \(y_i\), and then broadcasts \(y_i\). Finally, on receiving all the output shares \((y_1,\ldots ,y_n)\), everyone computes \(y:=\mathsf {C} (x_1,\ldots ,x_n)\) by running the decoding procedure of \(\mathsf {scHSS} \): \(y:=\mathsf {Dec} (y_1,\ldots ,y_n)\).

A Straightforward Three-Round Protocol. Unfortunately, we do not have such a trusted entity available in the setting of \(\text {FMS}\) MPC. A natural approach to resolve this would be to use any standard two-round MPC protocol (from now on we refer to such a protocol as vanilla MPC) to realize the \(\mathsf {Share} \) functionality in a distributed manner. However, since the vanilla MPC protocol would require at least two rounds to complete, this straightforward approach would incur one additional round. This is inevitable, because the parties receive their shares only at the end of the second round. Therefore, an additional round of communication (for broadcasting the output shares \(y_i\)) would be required to complete the final protocol.

Garbled Circuits to the Rescue. Using garbled circuits, we are able to squish the above protocol to operate in only two rounds. The main idea is to have each party i additionally send a garbled circuit \(\widetilde{\mathsf {C}}_i\) in the second round. Each \(\widetilde{\mathsf {C}}_i\) garbles a circuit that implements \(\mathsf {Eval} (i,\mathsf {C},\cdot )\). Given the labels for \(\mathsf {sh} _i\), \(\widetilde{\mathsf {C}}_i\) can be evaluated to output \(y_i\leftarrow \mathsf {Eval} (i,\mathsf {C},\mathsf {sh} _i)\). Note that, if it is ensured that every party receives all the garbled circuits and all the correct labels after the second round, they can obtain all \((y_1,\ldots ,y_n)\), and compute the final output y without further communication. The only question left now is how the correct labels are communicated within two rounds.

Tweaking Vanilla MPC to Output Labels. For communicating the correct labels, we slightly tweak the functionality computed by the vanilla MPC protocol. In particular, instead of using it just to compute the shares \((\mathsf {sh} _1,\ldots ,\mathsf {sh} _n)\), we have the vanilla MPC protocol compute a slightly different functionality that first computes the shares and, rather than outputting them directly, outputs the corresponding correct labels for everyone’s shares. This is enabled by having each party provide a random value \(r_i\), which is used to generate the labels, as an additional input to this tweaked functionality. Therefore, everyone’s correct labels are now available after the completion of the second round of the vanilla MPC protocol. Recall that parties also broadcast their garbled circuits along with the second round of the vanilla MPC. Each party i, on receiving all \(\widetilde{\mathsf {C}}_1,\ldots ,\widetilde{\mathsf {C}}_n\) and all correct labels, evaluates to obtain \((y_1,\ldots ,y_n)\) and then computes the final output y.
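A sketch of the tweaked output. The label derivation from \(r_i\) below is hypothetical (any PRF-based label derivation of a garbling scheme would do); the point is only that the functionality releases, for each bit of \(\mathsf {sh} _i\), exactly one of the two labels:

```python
import hmac, hashlib

def wire_labels(r_i: bytes, w: int):
    """Party i's label pair for input wire w, derived from its randomness r_i
    (hypothetical derivation via HMAC-SHA256)."""
    lab = lambda b: hmac.new(r_i, f"{w},{b}".encode(), hashlib.sha256).digest()[:16]
    return lab(0), lab(1)

def tweaked_output(r_i: bytes, share_bits):
    """What the tweaked functionality returns instead of the share itself:
    for each bit b of sh_i, only the matching label lab_b."""
    return [wire_labels(r_i, w)[b] for w, b in enumerate(share_bits)]
```

An observer holding only the released labels can evaluate \(\widetilde{\mathsf {C}}_i\), yet learns nothing about the share bits beyond what the garbled circuit's output reveals.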

2.3 Step 3: From \(\text {FMS}\) MPC to Reusable MPC

Finally, in this third step, we show how \(\text {FMS}\) MPC can be used to construct reusable two-round MPC, where the first message of the protocol can be reused across multiple computations.

We start with the observation that a two-round \(\text {FMS}\) MPC protocol allows us to compute arbitrary sized circuits after completion of the first round. This offers a limited form of (bounded) reusability, in that all the circuits to be computed could be computed together as a single circuit. However, once the second round is completed, no further computation is possible. Thus, the main challenge is how to leverage the ability to compute a single circuit of unbounded size to achieve unbounded reusability. Inspired by ideas from [DG17b], we address this challenge by using the ideas explained in Step 2 (above) repeatedly. For the purposes of this overview, we first explain a simpler version of our final protocol, in which the second round is expanded into multiple rounds. A key property of this protocol is that, using garbled circuits, those expanded rounds can be squished back into just one round (just like we did in Step 2) while preserving reusability.

Towards Reusability: A Multi-round Protocol. The fact that \(\text {FMS}\) MPC does not already achieve reusability can be re-stated as follows: the first round of \(\text {FMS}\) MPC (computed using an algorithm \(\mathsf {MPC} _1\)) can only be used for a single second round execution (using an algorithm \(\mathsf {MPC} _2\)). To resolve this issue, we build a GGM-like [GGM84] tree-based mechanism that generates a fresh FMS first round message for each circuit to be computed, while ensuring that no FMS first round message is reused.

The first round of our final two-round reusable protocol, as well as the multi-round simplified version, simply consists of the first round message corresponding to the root level (of the GGM tree) instance of the FMS protocol. We now describe the subsequent rounds (to be squished to a single second round later) of our multi-round protocol.

Intuitively, parties iteratively use an FMS instance at a particular level of the binary tree (starting from the root) to generate two new first-round FMS messages corresponding to the next level of the tree. The leaf FMS protocol instances will be used to compute the actual circuits. The root to leaf path traversed to compute a circuit \(\mathsf {C} \) is decided based on the description of the circuit \(\mathsf {C} \) itself.

In more detail, parties first send the second round message of the root (0th) level FMS protocol instance for a fixed circuit \(\mathsf{N}\) (independent of the circuit \(\mathsf {C} \) to be computed) that samples and outputs “left” and “right” \(\mathsf {MPC} _1\) messages using the same inputs that were used in the root level FMS. Depending on the first bit of the circuit description, parties choose either the left (if the first bit is 0) or the right (if the first bit is 1) \(\mathsf {MPC} _1\) messages for the next (1st) level. Using the chosen FMS messages, parties generate the \(\mathsf {MPC} _2\) message for the same circuit \(\mathsf{N}\) as above. This results in two more fresh instances of the \(\mathsf {MPC} _1\) messages for the next (2nd) level. As mentioned before, this procedure continues until a leaf node is reached, at which point the \(\mathsf {MPC} _2\) messages are generated for the circuit \(\mathsf {C} \) that the parties are interested in computing.
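The root-to-leaf traversal follows the GGM pattern. In the sketch below each tree node is represented only by a key, with HMAC-SHA256 standing in for the two halves of a length-doubling PRG; in the protocol each node instead carries a fresh FMS instance (whose \(\mathsf {MPC} _1\) messages play the role of the child keys), which this sketch omits.

```python
import hmac, hashlib

def prg_child(key: bytes, bit: int) -> bytes:
    """Derive the left (bit = 0) or right (bit = 1) child key, GGM-style."""
    return hmac.new(key, bytes([bit]), hashlib.sha256).digest()

def leaf_key(root: bytes, circuit_desc: bytes) -> bytes:
    """Walk from the root to the leaf selected by the bits of the circuit's
    description, deriving one fresh child per level."""
    k = root
    for byte in circuit_desc:
        for i in range(8):  # most significant bit first
            k = prg_child(k, (byte >> (7 - i)) & 1)
    return k
```

Distinct circuit descriptions select distinct leaves, so each circuit is computed under a fresh leaf instance, while re-traversing a shared prefix merely re-derives the same intermediate values.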

Note that, during the evaluation of two different circuits (each associated with a different leaf node), a certain number of FMS protocol instances might get re-executed. However, our construction ensures that this is merely a re-execution of a fixed circuit with the exact same input/output behavior each time. This guarantees that no FMS message is reused (even though it might be re-executed). Finally, observe that this process of iteratively computing more and more \(\mathsf {MPC} _1\) messages for the FMS protocol is only possible because the generation of the first message of an FMS protocol can be performed independently of the circuit that gets computed in the second round. In particular, the circuit \(\mathsf{N}\) computes two more \(\mathsf {MPC} _1\) messages on behalf of each party.

Squishing the Multiple Rounds: Using Ideas in Step 2 Iteratively. We take an approach similar to Step 2, but now starting with a two-round \(\text {FMS}\) MPC (instead of a vanilla MPC). In the second round, each party will send a sequence of garbled circuits where each garbled circuit will complete one instance of an FMS MPC which generates labels for the next garbled circuit. This effectively emulates the execution of the same FMS MPC instance in the multi-round protocol, but without requiring any additional round. Now, the only thing left to address is how to communicate the correct labels.

Communicating the Labels for Each Party’s Garbled Circuit. The trick here is (again very similar to step 2) to tweak the circuit \(\mathsf{N}\), in that instead of outputting the two \(\mathsf {MPC} _1\) messages for the next level, \(\mathsf{N}\) (with an additional random input \(r_i\) from each party i) now outputs labels corresponding to the messages.Footnote 8

For security reasons, it is not possible to include the same randomness \(r_i\) in the input to each subsequent FMS instance. Thus, we use a carefully constructed tree-based \(\mathsf {PRF} \), following the GGM [GGM84] construction and pass along not the key of the \(\mathsf {PRF} \) but a careful derivative that is sufficient for functionality and does not interfere with security.
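The GGM construction referenced above builds a PRF from a length-doubling PRG by walking a binary tree. The sketch below, with SHA-256 standing in for the PRG, also demonstrates the property that makes "derivatives" of the key possible: the value at an internal node is a seed for the entire subtree below it, so handing out that value permits evaluation on all inputs extending the prefix without revealing the root key. (The actual derivative used in our construction is more carefully crafted; this is only the underlying tree mechanism.)

```python
import hashlib

def prg(seed: bytes):
    """Length-doubling PRG, modeled here with SHA-256 for illustration."""
    return (hashlib.sha256(seed + b"0").digest(),
            hashlib.sha256(seed + b"1").digest())

def ggm_eval(seed: bytes, bits: str) -> bytes:
    """GGM tree walk: from a node's seed, descend left (0) or right (1)."""
    node = seed
    for b in bits:
        node = prg(node)[int(b)]
    return node

key = b"\x00" * 32
# The subtree seed at prefix '01' suffices to evaluate on '01...'-inputs:
sub = ggm_eval(key, "01")
assert ggm_eval(sub, "10") == ggm_eval(key, "0110")
```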

Adaptivity in the Choice of Circuit. Our reusable two-round MPC protocol satisfies a strong adaptive security guarantee. In particular, the adversary may choose any circuit to compute after seeing the first round messages (and even after seeing the second round messages for other circuits computed on the same inputs). This stronger security is achieved based on the structure of our construction, since the first round messages of the FMS MPC used to compute the actual circuit are only revealed when the actual execution happens in the second round of the reusable protocol. In particular, we do not even have to rely on “adaptive” security of the underlying FMS protocol to achieve this property.Footnote 9

3 Preliminaries

For standard cryptographic preliminaries, see the full version [BGMM20].

3.1 Two-Round MPC

Throughout this work, we will focus on two-round MPC protocols. We now define the syntax we follow for a two-round MPC protocol.

Definition 5

(Two-Round MPC Protocol). An n-party two-round MPC protocol is described by a triplet of PPT algorithms \((\mathsf {MPC} _1,\mathsf {MPC} _2,\mathsf {MPC} _3)\) with the following syntax.

  • \(\mathsf {MPC} _1(1^\lambda , \mathsf {CRS}, \mathsf {C}, i, x_i; r_i)=:(\mathsf {st} ^{(1)}_{i},\mathsf {msg}^{(1)}_{i})\): Takes as input \(1^\lambda \), a common random/reference string \(\mathsf {CRS} \), (the description of) a circuit \(\mathsf {C} \) to be computed, identity of a party \(i\in [n]\), input \(x_i \in \{0,1\}^*\) and randomness \(r_i \in \{0,1\}^\lambda \) (we drop mentioning the randomness explicitly when it is not needed). It outputs party i’s first message \(\mathsf {msg}^{(1)}_{i}\) and its private state \(\mathsf {st} ^{(1)}_{i}\).

  • \(\mathsf {MPC} _2(\mathsf {C}, \mathsf {st} ^{(1)}_i, \{\mathsf {msg}^{(1)}_{j}\}_{j\in [n]})\rightarrow (\mathsf {st} ^{(2)}_i,\mathsf {msg}^{(2)}_{i})\): Takes as input (the description of) a circuitFootnote 10 \(\mathsf {C} \) to be computed, the stateFootnote 11 of a party \(\mathsf {st} ^{(1)}_{i}\), and the first round messages of all the parties \(\{\mathsf {msg}^{(1)}_{j}\}_{j\in [n]}\). It outputs party i’s second round message \(\mathsf {msg}^{(2)}_{i}\) and its private state \(\mathsf {st} ^{(2)}_{i}\).

  • \(\mathsf {MPC} _3(\mathsf {st} ^{(2)}_i, \{\mathsf {msg}^{(2)}_{j}\}_{j\in [n]})=:y_i \): Takes as input the state of a party \(\mathsf {st} ^{(2)}_{i}\), and the second round messages of all the parties \(\{\mathsf {msg}^{(2)}_{j}\}_{j\in [n]}\). It outputs the ith party’s output \(y_i\).

Each party runs the first algorithm \(\mathsf {MPC} _1\) to generate the first round message of the protocol, the second algorithm \(\mathsf {MPC} _2\) to generate the second round message of the protocol and finally, the third algorithm \(\mathsf {MPC} _3\) to compute the output. The messages are broadcast after executing the first two algorithms, whereas the state is kept private.
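The flow above can be exercised with a small driver matching the syntax of Definition 5. The `mpc1`/`mpc2`/`mpc3` arguments stand for the algorithms of a hypothetical concrete protocol; the placeholder instantiation below is deliberately insecure (it broadcasts inputs in the clear) and serves only to check the plumbing. The security parameter, CRS contents, and explicit randomness are elided.

```python
def run_two_round_mpc(mpc1, mpc2, mpc3, crs, circuit, inputs):
    """Drive the (MPC1, MPC2, MPC3) syntax: two broadcast rounds,
    then local output computation by every party."""
    n = len(inputs)
    # Round 1: each party broadcasts msg1_i and keeps st1_i private.
    st1, msg1 = zip(*[mpc1(crs, circuit, i, inputs[i]) for i in range(n)])
    # Round 2: each party broadcasts msg2_i, again keeping its state private.
    st2, msg2 = zip(*[mpc2(circuit, st1[i], list(msg1)) for i in range(n)])
    # Output: each party locally recovers y_i from its state and all msg2.
    return [mpc3(st2[i], list(msg2)) for i in range(n)]

# Insecure placeholder algorithms, used only to exercise the interface:
mpc1 = lambda crs, C, i, x: (x, x)          # "state" and "message" = input
mpc2 = lambda C, st, msgs1: (msgs1, None)
mpc3 = lambda st, msgs2: sum(st)            # evaluate C = sum in the clear
assert run_two_round_mpc(mpc1, mpc2, mpc3, None, sum, [1, 2, 3]) == [6, 6, 6]
```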

The formal security definition is provided in the full version [BGMM20].

First Message Succinct Two-Round MPC. We next define the notion of a \(\text {first message succinct}\) (FMS) two-round MPC protocol. This notion is a strengthening (in terms of efficiency) of the above described notion of (vanilla) two-round MPC. Informally, a two-round MPC protocol is first message succinct if the first round messages of all the parties can be computed without knowledge of the circuit being evaluated on the inputs. This allows parties to compute their first message independent of the circuit (in particular, independent also of its size) that will be computed in the second round.

Definition 6

(First Message Succinct Two-Round MPC). Let \(\pi = (\mathsf {MPC} _1,\) \(\mathsf {MPC} _2,\mathsf {MPC} _3)\) be a two-round MPC protocol. Protocol \(\pi \) is said to be first message succinct if algorithm \(\mathsf {MPC} _1\) does not take as input the circuit \(\mathsf {C} \) being computed. More specifically, it takes an input of the form \((1^\lambda , \mathsf {CRS}, i, x_i;r_i)\).

Note that a first message succinct two-round MPC satisfies the same correctness and security properties as the (vanilla) two-round MPC.Footnote 12

Reusable Two-Round MPC. We next define the notion of a reusable two-round MPC protocol, which can be seen as a strengthening of the security of a first message succinct two-round MPC protocol. Informally, reusability requires that the parties should be able to reuse the same first round message to securely evaluate an unbounded polynomial number of circuits \(\mathsf {C} _1,\ldots ,\mathsf {C} _\ell \), where \(\ell \) can be any polynomial (in \(\lambda \)) that is independent of any other parameter in the protocol. That is, for each circuit \(\mathsf {C} _i\), the parties can simply run the second round of the protocol (using exactly the same first round messages), allowing them to evaluate each circuit on the same inputs. Note that each of these circuits can be of size an arbitrary polynomial in \(\lambda \).
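The reuse pattern can be sketched as follows: \(\mathsf {MPC} _1\) runs once (and, being first message succinct, takes no circuit), while the second round is re-executed once per circuit. The placeholder algorithms are again insecure stand-ins used only to exercise the interface.

```python
def reusable_eval(mpc1, mpc2, mpc3, crs, inputs, circuits):
    """First round runs once; only the second round repeats per circuit."""
    n = len(inputs)
    # Round 1: MPC1 takes no circuit (first message succinctness).
    st1, msg1 = zip(*[mpc1(crs, i, inputs[i]) for i in range(n)])
    outputs = []
    for C in circuits:
        st2, msg2 = zip(*[mpc2(C, st1[i], list(msg1)) for i in range(n)])
        outputs.append(mpc3(st2[0], list(msg2)))   # party 1's output, say
    return outputs

# Insecure placeholders, to exercise the reuse pattern only:
mpc1 = lambda crs, i, x: (x, x)
mpc2 = lambda C, st, msgs1: ((C, msgs1), None)
mpc3 = lambda st, msgs2: st[0](st[1])
assert reusable_eval(mpc1, mpc2, mpc3, None, [1, 2, 3], [sum, max]) == [6, 3]
```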

Very roughly, security requires that the transcript of all these executions along with the set of outputs should not reveal anything more than the inputs of the corrupted parties and the computed outputs.

We again formalize security (and correctness) via the real/ideal world paradigm. Consider n parties \(P_1,\ldots ,P_n\) with inputs \(x_1,\ldots ,x_n\) respectively. Also, consider an adversary \(\mathcal {A} \) corrupting a set \(I \subset [n]\) of parties.

The Real Execution. In the real execution, the n-party first message succinct two-round MPC protocol \(\pi = (\mathsf {MPC} _1,\mathsf {MPC} _2,\mathsf {MPC} _3)\) is executed in the presence of an adversary \(\mathcal {A} \). The adversary \(\mathcal {A} \) takes as input the security parameter \(\lambda \) and an auxiliary input z. The execution proceeds in two phases:

  • Phase I: All the honest parties \(i \notin I\) execute the first round of the protocol by running the algorithm \(\mathsf {MPC} _1\) using their respective input \(x_i\). They broadcast their first round message \(\mathsf {msg}^{(1)}_{i}\) and preserve their secret state \(\mathsf {st} ^{(1)}_i\). Then the adversary \(\mathcal {A} \) sends the first round messages on behalf of the corrupted parties following any arbitrary (polynomial-time computable) strategy (a semi-honest adversary follows the protocol behavior honestly and runs the algorithm \(\mathsf {MPC} _1(\cdot )\)).

  • Phase II (Reusable): The adversary outputs a circuit \(\mathsf {C} \), which is provided to all parties.

    Next, each honest party computes the algorithm \(\mathsf {MPC} _2\) using this circuit \(\mathsf {C} \) (and its secret state \(\mathsf {st} ^{(1)}_i\) generated as the output of \(\mathsf {MPC} _1\) in Phase I). Again, adversary \(\mathcal {A} \) sends arbitrarily computed (in PPT) second round messages on behalf of the corrupt parties. The honest parties return the output of \(\mathsf {MPC} _3\) executed on their secret state and the received second round messages.

    The adversary \(\mathcal {A} \) then decides whether to continue with another computation. If so, the execution returns to the beginning of Phase II; otherwise, Phase II ends.

The interaction of \(\mathcal {A} \) in the above protocol \(\pi \) defines a random variable \(\mathsf {REAL}_{\pi ,\mathcal {A}}(\) \(\lambda , \mathbf {x},z,I)\) whose distribution is determined by the coin tosses of the adversary and the honest parties. This random variable contains the output of the adversary (which may be an arbitrary function of its view) as well as the output recovered by each honest party.

The Ideal Execution. In the ideal execution, an ideal world adversary \(\mathsf {Sim} \) interacts with a trusted party. The ideal execution proceeds as follows:

  1. Send inputs to the trusted party: Each honest party sends its input to the trusted party. Each corrupt party \(P_i\) (controlled by \(\mathsf {Sim} \)) may either send its input \(x_i\) or send some other input of the same length to the trusted party. Let \(x'_i\) denote the value sent by party \(P_i\). Note that for a semi-honest adversary, \(x'_i = x_i\) always.

  2. Adversary picks circuit: \(\mathsf {Sim} \) sends a circuit \(\mathsf {C} \) to the ideal functionality, which then forwards it to the honest parties.

  3. Trusted party sends output to the adversary: The trusted party computes \(\mathsf {C} (x'_1,\ldots ,x'_n) = (y_1,\ldots ,y_n)\) and sends \(\{y_i\}_{i\in I}\) to the adversary.

  4. Adversary instructs trusted party to abort or continue: This is formalized by having the adversary \(\mathsf {Sim} \) send either a continue or abort message to the trusted party. (A semi-honest adversary never aborts.) In the former case, the trusted party sends to each uncorrupted party \(P_i\) its output value \(y_i\). In the latter case, the trusted party sends the special symbol \(\bot \) to each uncorrupted party.

  5. Reuse: The adversary decides whether to continue with another computation. If so, the ideal world returns to the start of Step 2.

  6. Outputs: \(\mathsf {Sim} \) outputs an arbitrary function of its view, and the honest parties output the values obtained from the trusted party.

\(\mathsf {Sim} \)’s interaction with the trusted party defines a random variable \(\mathsf {IDEAL}_{\mathsf {Sim}}(\lambda ,\) \(\mathbf {x},z, I)\). Having defined the real and the ideal worlds, we now proceed to define our notion of security.

Definition 7

Let \(\lambda \) be the security parameter. Let \(\pi \) be an n-party two-round protocol, for \(n\in \mathbb {N}\). We say that \(\pi \) is a reusable two-round MPC protocol in the presence of malicious (resp., semi-honest) adversaries if for every PPT real world adversary (resp., semi-honest adversary) \(\mathcal {A} \) there exists a PPT ideal world adversary (resp., semi-honest adversary) \(\mathsf {Sim} \) such that for any \(\mathbf {x}=\{ x_i\}_{i\in [n]} \in (\{0,1\}^{*})^n\), any \(z\in \{0,1\}^*\), any \(I \subset [n]\) and any PPT distinguisher \(\mathcal {D} \), we have that

$$|Pr[\mathcal {D} (\mathsf {REAL}_{\pi ,\mathcal {A}}(\lambda , \mathbf {x}, z, I)) = 1] - Pr[\mathcal {D} (\mathsf {IDEAL}_{\mathsf {Sim}}(\lambda , \mathbf {x}, z, I)) = 1]|$$

is negligible in \(\lambda \).

4 Step 1: Constructing Sharing-Compact HSS from HSS

In this section, we start by recalling the notion of homomorphic secret sharing (HSS) and defining our notion of sharing-compact HSS. We use the standard notion of HSS, which supports two parties and features additive reconstruction. In contrast, our notion of sharing compactness is for the multi-party case, but does not come with the typical bells and whistles of a standard HSS scheme—specifically, it features compactness only of the sharing algorithm and without additive reconstruction. For brevity, we refer to this notion of HSS as sharing-compact HSS (\(\mathsf {scHSS} \)). In what follows, we give a construction of sharing-compact HSS and prove its security.

4.1 Sharing-Compact Homomorphic Secret Sharing

We continue with our definition of sharing-compact HSS, which differs from HSS in various ways:

  • we support sharing among an arbitrary number of parties (in particular, more than 2);

  • we have a simulation-based security definition;

  • we support a notion of robustness;

  • we have negligible correctness error;

  • our reconstruction procedure is not necessarily additive;

  • we require security for only one evaluation.

We do preserve the property that the sharing algorithm, and in particular, the size of the shares, is independent of the size of the program to be computed.

Definition 8

(Sharing-compact Homomorphic Secret Sharing (\(\mathsf {scHSS} \))). A \(\mathsf {scHSS} \) scheme for a class of programs \(\mathcal {P} \) is a triple of \(\text {PPT}\) algorithms (\(\mathsf {Share}\),\(\mathsf {Eval}\),\(\mathsf {Dec}\)) with the following syntax:

  • \(\mathsf {Share} (1^\lambda ,n,x)\): Takes as input a security parameter \(1^\lambda \), a number of parties n, and a secret \(x\in \{0,1\} ^*\), and outputs shares \((x_1,\ldots ,x_n)\).

  • \(\mathsf {Eval} (j,P,x_j)\): Takes as input a party index \(j\in [n]\), a program P, and share \(x_j\), and outputs a string \(y_j\in \{0,1\} ^*\).

  • \(\mathsf {Dec} (y_1,\ldots ,y_n)\): Takes as input all evaluated shares \((y_1,\ldots ,y_n)\) and outputs \(y\in \{0,1\} ^*\).

The algorithms satisfy the following properties.

  • Correctness: For any program \(P \in \mathcal {P} \) and secret x,

    $$\begin{aligned} \Pr \left[ \mathsf {Dec} (y_1,\dots ,y_n) = P(x):\begin{array}{r}(x_1,\dots ,x_n) \leftarrow \mathsf {Share} (1^\lambda ,n,x) \\ \forall j, y_j \leftarrow \mathsf {Eval} (j,P,x_j)\end{array}\right] = 1-\mathsf {negl} (\lambda ). \end{aligned}$$
  • Robustness: For any non-empty set of honest parties \(H \subseteq [n]\), program \(P \in \mathcal {P} \), secret x, and PPT adversary \(\mathcal {A} \),

    $$\Pr \left[ \begin{array}{l} \mathsf {Dec} (y_1,\ldots , y_n)\in \{P(x), \bot \} \end{array} : \begin{array}{r} (x_1,\ldots ,x_n)\leftarrow \mathsf {Share} (1^\lambda ,n,x) \\ \forall j\in H, y_j\leftarrow \mathsf {Eval} (j, P, x_j) \\ \{y_j\}_{j \in [n] \setminus H} \leftarrow \mathcal {A} (\{x_j\}_{j \in [n] \setminus H},\{y_j\}_{j \in H}) \end{array} \right] =1 - \mathsf {negl} (\lambda ).$$
  • Security: There exists a PPT simulator \(\mathcal {S} \) such that for any program \(P \in \mathcal {P} \), any secret x, and any set of honest parties \(H\subseteq [n]\) we have that:

    $$\left\{ \{x_i\}_{i \in [n] \setminus H},\{y_i\}_{i \in H} : \begin{array}{r}(x_1,\dots ,x_n) \leftarrow \mathsf {Share} (1^\lambda ,n,x), \\ \forall i \in H, y_i \leftarrow \mathsf {Eval} (i,P,x_i)\end{array}\right\} {\mathop {\approx }\limits ^{c}} \left\{ \mathcal {S} (1^\lambda ,P,n,H,P(x))\right\} .$$
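To make the syntax of Definition 8 concrete, here is a toy scheme for the special case of XOR-linear programs: n-out-of-n XOR sharing with local evaluation on each share. It illustrates sharing compactness (the share size depends only on |x|, not on any program), but none of the robustness or security of the actual construction, which of course supports arbitrary polynomial-size circuits.

```python
import secrets
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(u ^ v for u, v in zip(a, b))

def share(n: int, x: bytes):
    """Toy Share: n-out-of-n XOR sharing; share size is |x|."""
    shares = [secrets.token_bytes(len(x)) for _ in range(n - 1)]
    shares.append(reduce(xor, shares, x))   # XOR of all shares equals x
    return shares

def eval_share(j: int, P, xj: bytes) -> bytes:
    return P(xj)        # local evaluation on a single share

def dec(ys) -> bytes:
    return reduce(xor, ys)

# A program that is XOR-linear, so Eval commutes with the sharing:
P = lambda s: s[len(s) // 2:] + s[:len(s) // 2]   # swap the two halves
shares = share(3, b"abcdef")
assert dec([eval_share(j, P, s) for j, s in enumerate(shares)]) == b"defabc"
```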

4.2 Conforming Protocol

In our construction, we need a modification of the notion of conforming MPC protocol from [GS18]. Consider an MPC protocol \(\varPhi \) between parties \(P_1, \ldots , P_n\). For each \(i \in [n]\), we let \(x_i \in \{0,1\}^m\) denote the input of party \(P_i\). We consider any random coins used by a party to be part of its input (we can assume each party uses at most \(\lambda \) bits of randomness, and expands as necessary with a PRF). A conforming protocol \(\varPhi \) is defined by functions \(\mathsf {inpgen} \), \(\mathsf {gen} \), \(\mathsf {post} \), and computation steps, or actions, \(\phi _1,\ldots ,\phi _T\). The protocol \(\varPhi \) proceeds in three stages: the input sharing stage, the computation stage, and the output stage. For those familiar with the notion of conforming protocol from [GS18, GIS18], we outline the differences here.

  • We split their function \(\mathsf {pre}\) into \((\mathsf {inpgen},\mathsf {gen})\), where \(\mathsf {inpgen} \) is universal, in the sense that it only depends on the input length m (and in particular, not the function to be computed).

  • We explicitly maintain a single public global state \(\mathsf {st} \) that is updated one bit at a time. Each party’s private state is maintained implicitly via their random coins \(s_i\) chosen during the input sharing phase.

  • We require the transcript (which is fixed by the value of \(\mathsf {st} \) at the end of the protocol) to be publicly decodable.

Next, we give our description of a conforming protocol.

  • Input sharing phase: Each party i chooses random coins \(s_i \leftarrow \{0,1\}^\lambda \), computes \((w_i,r_i) := \mathsf {inpgen} (x_i,s_i)\) where \(w_i = x_i \oplus r_i\), and broadcasts \(w_i\). Looking ahead to the proof of Lemma 9, we will take \(s_i\) to be the seed of a \(\mathsf {PRF} (s_i,\cdot ): \{0,1\}^* \rightarrow \{0,1\}\).

  • Computation phase: Let T be a parameter that depends on the circuit C to be computed. Each party sets the global public state

    $$\begin{aligned} \mathsf {st}:= (w_1 \Vert 0^{T/n} \Vert w_2 \Vert 0^{T/n} \Vert \cdots \Vert w_n \Vert 0^{T/n}), \end{aligned}$$

    and generates their secret state \(v_i := \mathsf {gen} (i,s_i)\).Footnote 13 Let \(\ell \) be the length of \(\mathsf {st} \) or \(v_i\) (\(\mathsf {st} \) and \(v_i\) will be of the same length). We will also use the notation that for index \(f \in [\ell ]\), \(v_{i,f} := \mathsf {gen} _f(i,s_i)\). For each \(t \in [T]\), parties proceed as follows:

    1. Parse action \(\phi _t\) as \((i,f,g,h)\) where \(i \in [n]\) and \(f,g,h \in [\ell ]\).

    2. Party \(P_{i}\) computes one \(\mathsf {NAND}\) gate as

      $$\gamma _t = \mathsf {NAND} (\mathsf {st} _f \oplus v_{i,f} , \mathsf {st} _g \oplus v_{i,g}) \oplus v_{i,h}$$

      and broadcasts \(\gamma _t\) to every other party.

    3. Every party updates \(\mathsf {st} _h\) to the bit value \(\gamma _t\) received from \(P_i\).

    We require that for all \(t, t' \in [T]\) such that \(t \ne t'\), we have that if \(\phi _t = (\cdot ,\cdot ,\cdot ,h)\) and \(\phi _{t'} = (\cdot ,\cdot ,\cdot ,h')\) then \(h \ne h'\) (this ensures that no state bit is ever overwritten).

  • Output phase: Denote by \(\Gamma = (\gamma _1,\dots ,\gamma _T)\) the transcript of the protocol, and output \(\mathsf {post} (\Gamma )\).
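The computation phase above can be emulated directly. The sketch below uses 0-based indices (unlike the 1-based indices in the text), takes the mask vectors \(v_i\) as given, and elides the input sharing and output phases.

```python
def nand(a: int, b: int) -> int:
    return 1 - (a & b)

def run_computation_phase(st, v, actions):
    """st: public global state (list of bits); v[i]: party i's private
    mask vector; each action (i, f, g, h) has party i publish one masked
    NAND output, which every party writes into position h of st."""
    transcript = []
    for (i, f, g, h) in actions:
        # Party i unmasks its two input wires, computes NAND, re-masks.
        gamma = nand(st[f] ^ v[i][f], st[g] ^ v[i][g]) ^ v[i][h]
        st[h] = gamma                  # every party updates position h
        transcript.append(gamma)
    return transcript

st = [1, 0, 0, 0]                 # masked inputs followed by zero slots
v = {0: [0, 0, 0, 0]}             # all-zero masks for a transparent demo
assert run_computation_phase(st, v, [(0, 0, 1, 2)]) == [1]  # NAND(1,0)=1
assert st == [1, 0, 1, 0]
```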

Lemma 9

For any input length m, there exists a function \(\mathsf {inpgen} \) such that any n-party MPC protocol \(\Pi \) (where each party has an input of length at most m) can be written as a conforming protocol \(\varPhi = (\mathsf {inpgen},\mathsf {gen},\mathsf {post}, \{\phi _t\}_{t \in [T]})\) while inheriting the correctness and the security of the original protocol.

The proof of this lemma is very similar to the proof provided in [GS18], and is deferred to the full version [BGMM20].

4.3 Our Construction

We describe a sharing-compact HSS scheme for sharing an input \(x \in \{0,1\}^m\) among n parties.

Ingredients: We use the following ingredients in our construction.

  • An \(n\lambda \)-party conforming MPC protocol \(\varPhi \) (for computing an arbitrary functionality) with functions \(\mathsf {inpgen},\mathsf {gen} \), and \(\mathsf {post} \).

  • A homomorphic secret sharing scheme \((\mathsf {HSS}.\mathsf {Share},\mathsf {HSS}.\mathsf {Eval})\) supporting evaluations of circuits in \(NC^1\). To ease notation in the description of our protocol, we will generally leave the party index, identifier, and error parameter \(\delta \) implicit. The party index will be clear from context, the identifier can be the description of the function to be evaluated, and the error parameter will be fixed once and for all by the parties.

  • A garbling scheme for circuits \((\mathsf {Garble},\mathsf {GEval})\).

  • A robust private-key encryption scheme \((\mathsf {rob.enc},\mathsf {rob.dec})\).

  • A \(\mathsf {PRF} \) that can be computed in \(NC^1\).

Theorem 10

Assuming a semi-honest MPC protocol (with any number of rounds) that can compute any polynomial-size functionality, a homomorphic secret sharing scheme supporting evaluations of circuits in \(NC^1\), and a \(\mathsf {PRF} \) that can be computed in \(NC^1\), there exists a sharing-compact homomorphic secret sharing scheme supporting the evaluation of any polynomial-size circuit.

Notation: As explained in Sect. 2, our construction at a high level follows the template of [GS18] (which we refer to as the GS protocol). In the evaluation step of our construction, each party generates a sequence of garbled circuits, one for each action step of the conforming protocol. For each of these action steps, the garbled circuit of one party speaks and the garbled circuits of the rest listen. We start by describing three circuits that aid this process: (i) circuit \(\mathsf {F} \) (described in Fig. 1), which includes the \(\mathsf {HSS} \) evaluations enabling the speaking/listening mechanism, (ii) circuit \(\mathsf {P} ^*\) (described in Fig. 2) garbled by the speaking party, and (iii) circuit \(\mathsf {P} \) (described in Fig. 3) garbled by the listening party.

(i) Circuit \(\mathsf {F} \). The speaking garbled circuit and the listening garbled circuit need shared secrets for communication. Using HSS, \(\mathsf {F} \) provides an interface for setting up these shared secrets. More specifically, consider a speaking party \(j^*\) and a listening party \(j \ne j^*\) during action t. In our construction, the parties \(j,j^*\) will be provided with HSS shares of their secrets \(\{s_j,s_{j^*}\},\{k_j,k_{j^*}\}\). Note that the order of \(s_j\) and \(s_{j^*}\) in \(\{s_j,s_{j^*}\}\) and the order of \(k_j\) and \(k_{j^*}\) in \(\{k_j,k_{j^*}\}\) is irrelevant. All of the secret information used by party \(j^*\) in computation of its conforming protocol messages is based on \(s_{j^*}\). Also, during action t, party j’s garbled circuit will need to output encrypted labels for its next garbled circuit. Secret \(k_j\) is used to generate any keys needed for encrypting garbled circuit labels. Concretely, in the circuit \(\mathsf {G} \) (used inside \(\mathsf {F} \)), observe that \(s_{j^*}\) is used to perform the computation of \(\gamma \), and \(k_j\) is used to compute the “difference value”, explained below.

Fig. 1. The Circuit \(\mathsf {F} \).

Both party j and party \(j^*\) can compute \(\mathsf {F} \) on their individual share of \(\{s_j,s_{j^*}\},\{k_j,k_{j^*}\}\). They either obtain the same output value (in the case that party \(j^*\)’s message bit for the \(t^{th}\) action is 0) or they obtain outputs that differ by a pseudorandom difference value known only to party j (in the case that party \(j^*\)’s message bit for the \(t^{th}\) action is 1). This difference value is equal to \(\mathsf {PRF} (k_j, (t,\alpha , \beta , p))\), where t, \(\alpha \), \(\beta \) and p denote various parameters of the protocol.

Next, we describe how the circuit \(\mathsf {F} \) enables communication between garbled circuits. In our construction the speaking party will just output the evaluation of \(\mathsf {F} \) on its share (for appropriate choices of t, \(\alpha \), \(\beta \) and p). On the other hand, party j will encrypt the zero-label for its next garbled circuit using the output of the evaluation of \(\mathsf {F} \) on its share (for appropriate choices of t, \(\alpha \), \(\beta \) and p) and will encrypt the one-label for its next garbled circuit using the exclusive or of this value and the difference value. Observe that the output of the speaking circuit will be exactly the key used to encrypt the label corresponding to the bit sent by \(j^*\) in the \(t^{th}\) action.

Finally, we need to ensure that each circuit \(\mathsf {G} \) evaluated under the HSS can be computed in \(NC^1\). Observe that \(\mathsf {G} \) essentially only computes \(\mathsf {gen} _f(j^*,s_{j^*})\) evaluations and \(\mathsf {PRF} (k_j,\cdot )\) evaluations. The proof of Lemma 9 shows that \(\mathsf {gen} _f(j^*,s_{j^*})\) may be computed with a single PRF evaluation using key \(s_{j^*}\). Thus, if we take each \(s_j,k_j\) to be keys for a PRF computable in \(NC^1\), it follows that \(\mathsf {G} \) will be in \(NC^1\).
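The following sketch mocks the key-agreement mechanism just described. The listener's HSS evaluation result \(z_0\) is modeled as random placeholder bytes, the difference value is derived via HMAC as a stand-in PRF, and a one-time pad stands in for the robust encryption \(\mathsf {rob.enc} \). The speaker's single evaluation result then decrypts exactly the label for its bit \(\gamma \).

```python
import hashlib, hmac, secrets

def prf(key: bytes, msg: bytes) -> bytes:
    return hmac.new(key, msg, hashlib.sha256).digest()

def otp(key: bytes, m: bytes) -> bytes:
    # One-time pad standing in for the robust encryption rob.enc.
    return bytes(a ^ b for a, b in zip(key, m))

# Listener j's side: z0 is its HSS evaluation of F (mocked as random),
# and the difference value is PRF(k_j, (t, alpha, beta, p)).
k_j = secrets.token_bytes(32)
z0 = secrets.token_bytes(32)
z1 = otp(z0, prf(k_j, b"(t,alpha,beta,p)"))
lab0, lab1 = secrets.token_bytes(32), secrets.token_bytes(32)
ct0, ct1 = otp(z0, lab0), otp(z1, lab1)     # published by the listener

# Speaker j*'s side: its own HSS evaluation of F equals z0 when its bit
# gamma is 0 and z1 when gamma is 1, so exactly one key is revealed.
for gamma in (0, 1):
    speaker_key = z1 if gamma else z0
    assert otp(speaker_key, ct1 if gamma else ct0) == (lab1 if gamma else lab0)
```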

(ii) The Speaking Circuit \(\mathsf {P} ^*\). The construction of the speaking circuit is quite simple. The speaking circuit for the party \(j^*\) corresponding to action t computes the updated global state and the bit \(\gamma \) sent out in action t. However, it must somehow communicate \(\gamma \) to the garbled circuit of each \(j \ne j^*\). This effect is achieved by having \(\mathsf {P} ^*\) return the output of \(\mathsf {F} \) (on relevant inputs as explained above). However, technical requirements in the security proof preclude party \(j^*\) from hard-coding its HSS share \(\mathsf {sh} \) into \(\mathsf {P} ^*\), and having \(\mathsf {P} ^*\) compute on this share. Thus, we instead hard-code the outputs of \(\mathsf {F} \) on all relevant inputs. More specifically, we hard-code \(\left\{ z^{(\alpha ,\beta )}_{j,p}\right\} _{\begin{array}{c} \alpha ,\beta \in \{0,1\}, \\ j \in [n'] \setminus \{j^*\}, \\ p \in [\lambda ] \end{array}}\), where \(z^{(\alpha ,\beta )}_{j,p}\) is obtained as the output \(\mathsf {F} [t,\alpha ,\beta ,p](\mathsf {sh})\).

Fig. 2. The Speaking Circuit \(\mathsf {P} ^*\).

(iii) The Listening Circuit \(\mathsf {P} \). The construction of the listening circuit mirrors that of the speaking circuit. The listening circuit outputs the labels for all wires except the \(h^{th}\) wire that it is listening on. For the \(h^{th}\) wire, the listening circuit outputs encryptions of the two labels under two distinct keys, where one of them will be output by the speaking circuit during this action. As in the case of speaking circuits, for technical reasons in the proof, we cannot have the listening circuit compute these values but must instead hard-code them. More specifically, we hard-code \(\left\{ z_{p,0}^{(\alpha ,\beta )},z_{p,1}^{(\alpha ,\beta )}\right\} _{\begin{array}{c} \alpha ,\beta \in \{0,1\}, \\ p \in [\lambda ] \end{array}}\) where \(z_{p,0}^{(\alpha ,\beta )}\) is obtained as \(\mathsf {F} [t,\alpha ,\beta ,p](\mathsf {sh})\) and \(z_{p,1}^{(\alpha ,\beta )}\) is obtained as \(z_{p,0}^{(\alpha ,\beta )} \oplus \mathsf {PRF} \left( k_j,(t,\alpha ,\beta ,p)\right) \).

Fig. 3. The Listening Circuit \(\mathsf {P} \).

The Construction Itself: The foundation of a sharing-compact HSS for evaluating circuit \(\mathsf {C} \) is a conforming protocol \(\varPhi \) (as described earlier in Sect. 4.2) computing the circuit \(\mathsf {C} \). Very roughly (and the details will become clear as we go along), in our construction, the \(\mathsf {Share} \) algorithm will generate secret shares of the input x for the n parties. Additionally, the share algorithm generates the first round GS MPC messages on behalf of each party. The \(\mathsf {Eval} \) algorithm will roughly correspond to the generation of the second round messages of the GS MPC protocol. Finally, the \(\mathsf {Dec} \) algorithm will perform the reconstruction, which corresponds to the output computation step in GS after all the second round messages have been sent out.

The Sharing Algorithm: Because of the inverse polynomial error probability in HSS (hinted at in Sect. 2 and explained in the proof), we need to use an \(n' = n \lambda \) (virtual) party protocol rather than just an n party protocol. Each of the n parties sends messages on behalf of \(\lambda \) virtual parties. Barring this technicality, and given our understanding of what needs to be shared to enable the communication between garbled circuits, the sharing is quite natural.

On input x, the share algorithm generates a secret sharing of x (along with the randomness needed for the execution of \(\varPhi \)) to obtain a share \(x_j\) for each virtual party \(j \in [n']\). In addition, two \(\mathsf {PRF} \) keys \(s_j, k_j\) for each virtual party \(j \in [n']\) are sampled. Now, the heart of the sharing algorithm is the generation of HSS shares of \(\{s_j,s_{j'}\},\{k_j,k_{j'}\}\) for every pair of \(j \ne j' \in [n']\), which are then provided to parties j and \(j'\). Specifically, the algorithm computes shares \(\mathsf {sh} ^{\{j,j'\}}_j\) and \(\mathsf {sh} ^{\{j,j'\}}_{j'}\) as the output of \(\mathsf {HSS}.\mathsf {Share} \left( 1^\lambda ,\left( \{s_j,s_{j'}\},\{k_j,k_{j'}\}\right) \right) \). Note that we generate only one set of shares for each \(j,j'\) and the ordering of j and \(j'\) is irrelevant (we use the set notation to signify this).

\(\mathsf {Share} (1^\lambda , n, x):\)

  1. Let \(n' = n\lambda \), \(m' = m + \lambda \), and \(x_1 := (z_1 \Vert \rho _1 ) \in \{0,1\}^{m'},\dots ,x_{n'} := (z_{n'} \Vert \rho _{n'}) \in \{0,1\}^{m'}\), where \(z_1,\dots ,z_{n'}\) is an additive secret sharing of x, and each \(\rho _i \in \{0,1\}^\lambda \) is uniformly random. The \(\rho _i\) are the random coins used by each party in the MPC protocol \(\Pi \) underlying the conforming protocol \(\varPhi \).

  2. For each \(j \in [n']\):

    (a) Draw PRF keys \(s_j,k_j \leftarrow \{0,1\}^\lambda \), so that \(\mathsf {PRF} (s_j,\cdot ): \{0,1\}^* \rightarrow \{0,1\}\) and \(\mathsf {PRF} (k_j,\cdot ): \{0,1\}^* \rightarrow \{0,1\}^\lambda \), where both of these pseudorandom functions can be computed by \(NC^1\) circuits.

    (b) Compute \((w_j,r_j) := \mathsf {inpgen} (x_j,s_j)\).

  3. For each \(j \ne j' \in [n']\), compute \(\left( \mathsf {sh} ^{\{j,j'\}}_j,\mathsf {sh} ^{\{j,j'\}}_{j'}\right) \leftarrow \mathsf {HSS}.\mathsf {Share} \Big (1^\lambda ,\Big (\{s_j,s_{j'}\},\{k_j,k_{j'}\}\Big )\Big )\).

  4. Let \(\overline{\mathsf {sh}}_j = \left( x_j,s_j,k_j,\left\{ \mathsf {sh} ^{\{j,j'\}}_j\right\} _{j' \in [n'] \setminus \{j\}}\right) \).

  5. For each \(i \in [n]\), output party i’s share \(\mathsf {sh} _i := \Big (\{w_j\}_{j \in [n']},\left\{ \overline{\mathsf {sh}}_j\right\} _{j \in [(i-1)\lambda + 1,\cdots ,i\lambda ]}\Big )\).

The Evaluation Algorithm: Observe that the sharing algorithm is independent of the conforming protocol \(\varPhi \) (and the circuit \(\mathsf {C} \) to be computed), thus achieving sharing compactness. This is due to the fact that the function \(\mathsf {inpgen} \) is universal for conforming protocols \(\varPhi \) (as explained in Sect. 4.2).

In contrast, the evaluation algorithm will emulate the entire protocol \(\varPhi \). First, it will set the error parameter \(\delta \) for HSS, depending on the protocol \(\varPhi \). Then, each virtual party j (where each party controls \(\lambda \) virtual parties) generates a garbled circuit for each action of the conforming protocol. For each action, the speaking party uses the speaking circuit \(\mathsf {P} ^*\) and the rest of the parties use the listening circuit \(\mathsf {P} \).
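The backwards garbling order used below (from t = T down to 1, so that each garbled circuit can hard-code the input labels of its successor) can be sketched as follows; `garble` is an opaque placeholder for the garbling scheme, and the per-wire label count is arbitrary.

```python
import secrets

def garble(step_circuit, next_labels):
    """Placeholder for Garble: returns an opaque garbled circuit together
    with fresh input labels (a (zero-label, one-label) pair per wire)."""
    labels = [(secrets.token_bytes(16), secrets.token_bytes(16))
              for _ in range(len(next_labels))]
    return ("GC", step_circuit, next_labels), labels

def garble_chain(T, step_circuits):
    """Garble from t = T down to 1, so each garbled circuit can
    hard-code the input labels of its successor."""
    next_labels = [(b"\x00" * 16, b"\x00" * 16)] * 4   # lab^{T+1} := 0
    gcs = {}
    for t in range(T, 0, -1):
        gcs[t], next_labels = garble(step_circuits[t], next_labels)
    return gcs, next_labels         # next_labels are the level-1 labels

gcs, first_labels = garble_chain(3, {1: "phi1", 2: "phi2", 3: "phi3"})
assert set(gcs) == {1, 2, 3} and len(first_labels) == 4
```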

\(\mathsf {Eval} (i,\mathsf {C},\mathsf {sh} _i)\):

  1. Parse \(\mathsf {sh} _i\) as \(\left( \{w_j\}_{j \in [n']},\left\{ \overline{\mathsf {sh}}_j\right\} _{j \in [(i-1)\lambda + 1,\cdots ,i\lambda ]}\right) \), let T be a parameterFootnote 14 of the conforming protocol \(\varPhi \) computing \(\mathsf {C} \), and set the HSS error parameter \(\delta = 1/8\lambda ^2T\).

  2. Set \(\mathsf {st}:= (w_1 \Vert 0^{T/n'} \Vert w_2 \Vert 0^{T/n'} \Vert \cdots \Vert w_{n'} \Vert 0^{T/n'})\).

  3. For each \(j \in [(i-1)\lambda + 1,\cdots ,i\lambda ]\), run the following procedure \(\mathsf {VirtualEval}(j,\mathsf {C},\overline{\mathsf {sh}}_j)\):

    (a) Parse \(\overline{\mathsf {sh}}_j\) as \(\left( x_j,s_j,k_j,\left\{ \mathsf {sh} ^{\{j,j'\}}_j\right\} _{j' \in [n'] \setminus \{j\}}\right) \).

    2. (b)

      Compute \(v_j := \mathsf {gen} (j,s_j)\).

    3. (c)

      Set \(\overline{\mathsf {lab}}^{j,T+1} := \left\{ \mathsf {lab} _{k,0}^{j,T+1}, \mathsf {lab} _{k,1}^{j,T+1}\right\} _{k\in [\ell ]}\) where for each \(k \in [\ell ]\) and \(b \in \{0,1\}\), \(\mathsf {lab} _{k,b}^{j,T+1} := 0^\lambda \).

    (d) For each t from T down to 1:

      i. Parse \(\phi _t\) as \((j^*,f,g,h)\).

      ii. If \(j = j^*\), compute (where \(\mathsf {P} ^*\) is described in Fig. 2 and \(\mathsf {F} \) is described in Fig. 1)

        $$\begin{aligned}&\mathsf {arg}_1 :=\left\{ \mathsf {F} [t,\alpha ,\beta ,p]\left( \mathsf {sh} _{j^*}^{\{j^*,j\}}\right) \right\} _{\begin{array}{c} j \in [n'] \setminus \{j^*\}, \\ \alpha ,\beta \in \{0,1\}, \\ p \in [\lambda ] \end{array}} \\&\mathsf {arg}_2 :=(v_{j^*,f},v_{j^*,g},v_{j^*,h}) \\&\left( \widetilde{\mathsf {P}}^{j^*,t}, {\overline{\mathsf {lab}}}^{j^*,t} \right) \leftarrow \mathsf {Garble} \left( 1^{\lambda },\mathsf {P} ^*\left[ j^*,\mathsf {arg}_1,\mathsf {arg}_2,\overline{\mathsf {lab}}^{j^*,t+1}\right] \right) . \end{aligned}$$
      iii. If \(j \ne j^*\), compute (where \(\mathsf {P} \) is described in Fig. 3 and \(\mathsf {F} \) is described in Fig. 1)

      $$\begin{aligned}&\mathsf {arg}_1 :=\left\{ \begin{array}{c}\mathsf {F} [t,\alpha ,\beta ,p]\left( \mathsf {sh} _j^{\{j^*,j\}}\right) , \\ \mathsf {F} [t,\alpha ,\beta ,p]\left( \mathsf {sh} _j^{\{j^*,j\}}\right) \oplus \mathsf {PRF} \left( k_j,\left( t,\alpha ,\beta ,p\right) \right) \end{array}\right\} _{\begin{array}{c} \alpha ,\beta \in \{0,1\}, \\ p \in [\lambda ] \end{array}} \\&\mathsf {arg}_2 :=(f,g,h) \\&\left( \widetilde{\mathsf {P}}^{j,t}, {\overline{\mathsf {lab}}}^{j,t} \right) \leftarrow \mathsf {Garble} \left( 1^{\lambda },\mathsf {P} \left[ j,\mathsf {arg}_1,\mathsf {arg}_2,\overline{\mathsf {lab}}^{j,t+1}\right] \right) . \end{aligned}$$
    (e) Set \(\overline{y}_j := \left( \left\{ \widetilde{\mathsf {P}}^{j,t}\right\} _{t \in [T]},\left\{ \mathsf {lab} ^{j,1}_{k,\mathsf {st} _k}\right\} _{k \in [\ell ]} \right) \). Recall that \(\mathsf {st} \) was defined in step 2.

  4. Output \(y_i := \left\{ \overline{y}_j\right\} _{j \in [(i-1)\lambda + 1,\cdots ,i\lambda ]}\).
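To make the backward garbling pattern of \(\mathsf {VirtualEval}\) concrete, here is a heavily simplified Python toy (our own illustration, not the construction itself): the "garbling" is just a lookup table keyed by \(\lambda \)-byte input labels, `flip_first` is a stand-in for a single conforming-protocol action, and all of the speaking/listening machinery is omitted. As in step (c), the labels for step \(T+1\) are all-zero, circuits are garbled from \(t = T\) down to 1, and evaluation later runs forward.

```python
import secrets

LAM = 16   # toy security parameter (bytes)
ell = 4    # state length (bits)
T = 3      # number of actions

def garble(step_fn, next_lab):
    # Toy "garbling": a table keyed by the tuple of active input labels,
    # mapping to the next circuit's labels for the updated state.
    lab = {(k, b): secrets.token_bytes(LAM) for k in range(ell) for b in (0, 1)}
    table = {}
    for x in range(2 ** ell):
        bits = [(x >> k) & 1 for k in range(ell)]
        new_bits = step_fn(bits)
        key = tuple(lab[(k, bits[k])] for k in range(ell))
        table[key] = tuple(next_lab[(k, new_bits[k])] for k in range(ell))
    return table, lab

def flip_first(bits):  # stand-in for one conforming-protocol action
    return [bits[0] ^ 1] + bits[1:]

# Labels for step T+1 are all-zero; garble from t = T down to 1.
next_lab = {(k, b): bytes(LAM) for k in range(ell) for b in (0, 1)}
circuits = [None] * (T + 1)
labels = [None] * (T + 1)
for t in range(T, 0, -1):
    circuits[t], labels[t] = garble(flip_first, next_lab)
    next_lab = labels[t]

# Evaluation runs forward, step 1 to T, handing active labels along the chain.
st = [1, 0, 1, 0]
active = tuple(labels[1][(k, st[k])] for k in range(ell))
for t in range(1, T + 1):
    active = circuits[t][active]
```

After the last step the active labels are exactly the all-zero labels hardwired for step \(T+1\), mirroring how the real chain of garbled circuits terminates.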

The Decoding Algorithm: The decoding algorithm is quite natural given what we have seen so far. Garbled circuits from each virtual party are executed sequentially, communicating among themselves. This results in an evaluation of the conforming protocol \(\varPhi \) and the final output can be computed using the \(\mathsf {post} \) algorithm.

\(\mathsf {Dec} (y_1,\dots ,y_n)\):

  1. For each \(i \in [n]\), parse \(y_i\) as \(\left\{ \left( \left\{ \widetilde{\mathsf {P}}^{j,t}\right\} _{t \in [T]},\left\{ \mathsf {lab} ^{j,1}_{k}\right\} _{k \in [\ell ]}\right) \right\} _{j \in [(i-1)\lambda + 1,\cdots ,i\lambda ]}\).

  2. For each \(j \in [n']\), let \(\widetilde{\mathsf {lab}}^{j,1} := \left\{ \mathsf {lab} ^{j,1}_{k}\right\} _{k \in [\ell ]}\).

  3. For each t from 1 to T:

    (a) Parse \(\phi _t\) as \((j^*,f,g,h)\).

    (b) Compute \(\left( \gamma _t,\left\{ z^*_{j,p}\right\} _{\begin{array}{c} j \in [n'] \setminus \{j^*\}, \\ p \in [\lambda ] \end{array}}, \widetilde{\mathsf {lab}}^{j^*,t+1}\right) := \mathsf {GEval} \left( \widetilde{\mathsf {P}}^{j^*,t},\widetilde{\mathsf {lab}}^{j^*,t}\right) \).

    (c) For each \(j \ne j^*\):

      i. Compute \(\left( \left\{ \mathsf {elab} _{p,0},\mathsf {elab} _{p,1}\right\} _{p \in [\lambda ]},\left\{ \mathsf {lab} _k^{j,t+1}\right\} _{k \in [\ell ] \setminus \{h\}}\right) := \mathsf {GEval} \Big (\widetilde{\mathsf {P}}^{j,t},\widetilde{\mathsf {lab}}^{j,t}\Big ).\)

      ii. If there exists \(p \in [\lambda ]\) such that \(\mathsf {rob.dec} \left( z^*_{j,p},\mathsf {elab} _{p,\gamma _t}\right) \ne \bot \), set \(\mathsf {lab} ^{j,t+1}_h\) to the result of that decryption. If all \(\lambda \) decryptions give \(\bot \), output \(\bot \) and abort.

      iii. Set \(\widetilde{\mathsf {lab}}^{j,t+1} := \left\{ \mathsf {lab} _k^{j,t+1}\right\} _{k \in [\ell ]}\).

  4. Set \(\Gamma = (\gamma _1,\dots ,\gamma _T)\) and output \(\mathsf {post} (\Gamma )\).

Fig. 4. A \(\text {first message succinct}\) MPC protocol \((\mathsf {FMS}.\mathsf {MPC} _1,\mathsf {FMS}.\mathsf {MPC} _2,\mathsf {FMS}.\mathsf {MPC} _3)\)

Fig. 5. The (randomized) circuit \(\mathsf {D}\)

The proof of correctness, security and robustness can be found in the full version [BGMM20].

5 Step 2: FMS MPC from Sharing-Compact HSS

In this section, we use a sharing-compact HSS scheme to construct a first message succinct two-round MPC protocol that securely computes any polynomial-size circuit. We refer to Sect. 2.2 for a high-level overview of the construction. For modularity of presentation, we begin by defining a label encryption scheme.

Label Encryption. This is an encryption scheme designed specifically for encrypting a grid of \(2 \times \ell \) garbled input labels corresponding to a garbled circuit with input length \(\ell \). The encryption algorithm takes as input a \(2 \times \ell \) grid of strings (labels) along with a \(2 \times \ell \) grid of keys. It encrypts each label using each corresponding key, making use of a robust private-key encryption scheme \((\mathsf {rob.enc},\mathsf {rob.dec})\). It then randomly permutes each pair (column) of ciphertexts, and outputs the resulting \(2 \times \ell \) grid. On the other hand, decryption only takes as input a set of \(\ell \) keys, that presumably correspond to exactly one ciphertext per column, or, exactly one input to the garbled circuit. The decryption algorithm uses the keys to decrypt exactly one label per column, with the robustness of \((\mathsf {rob.enc},\mathsf {rob.dec})\) ensuring that indeed only one ciphertext per column is able to be decrypted. The random permutations that occur during encryption ensure that a decryptor will recover a valid set of input labels without knowing which input they actually correspond to. This will be crucial in our construction.

\(\mathsf {LabEnc}(\overline{K},\overline{\mathsf {lab}})\):

On input keys \(\overline{K} = \{K_{i,b}\}_{i \in [\ell ], b \in \{0,1\}}\) and labels \(\overline{\mathsf {lab}} = \{\mathsf {lab} _{i,b}\}_{i \in [\ell ], b \in \{0,1\}}\) (where \(K_{i,b}, \mathsf {lab} _{i,b}\in \{0,1\}^\lambda \)), \(\mathsf {LabEnc}\) draws \(\ell \) random bits \(b'_1,\ldots ,b'_\ell \leftarrow \{0,1\}\) and outputs \(\overline{\mathsf {elab}}=\left\{ \mathsf {elab}_{i,b}\right\} _{i\in [\ell ], b\in \{0,1\}}\), where \(\mathsf {elab}_{i,b} := \mathsf {rob.enc} (K_{i,b\oplus b'_i},\mathsf {lab} _{i,b\oplus b'_i})\).

\(\mathsf {LabDec}(\widehat{K},\overline{\mathsf {elab}})\):

On input a key \(\widehat{K} = \{K_i\}_{i\in [\ell ]}\) and \(\overline{\mathsf {elab}}=\left\{ \mathsf {elab}_{i,b}\right\} _{i\in [\ell ], b\in \{0,1\}}\), for each \(i \in [\ell ]\) output \(\mathsf {rob.dec} (K_i,\mathsf {elab}_{i,0})\) if it is not \(\bot \) and \(\mathsf {rob.dec} (K_i,\mathsf {elab}_{i,1})\) otherwise.
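As a concrete aid, here is a minimal Python sketch of \(\mathsf {LabEnc}/\mathsf {LabDec}\). The robust scheme `rob_enc`/`rob_dec` is our own illustrative stand-in (an XOR pad authenticated with HMAC, so decryption under any other key fails with overwhelming probability); it is not the construction's actual primitive.

```python
import hashlib
import hmac
import secrets

LAM = 16  # toy security parameter (bytes)

def rob_enc(key: bytes, msg: bytes) -> bytes:
    # Toy "robust" private-key encryption: XOR pad plus an HMAC tag.
    pad = hashlib.sha256(key + b"pad").digest()[:len(msg)]
    body = bytes(m ^ p for m, p in zip(msg, pad))
    tag = hmac.new(key, body, hashlib.sha256).digest()
    return tag + body

def rob_dec(key: bytes, ct: bytes):
    # Returns None (standing in for the symbol bot) if the tag check fails.
    tag, body = ct[:32], ct[32:]
    if not hmac.compare_digest(tag, hmac.new(key, body, hashlib.sha256).digest()):
        return None
    pad = hashlib.sha256(key + b"pad").digest()[:len(body)]
    return bytes(b ^ p for b, p in zip(body, pad))

def lab_enc(K, lab):
    # K, lab: dicts mapping (i, b) -> LAM-byte string, i in [ell], b in {0,1}.
    ell = len(K) // 2
    elab = {}
    for i in range(ell):
        bp = secrets.randbits(1)  # the random column-permutation bit b'_i
        for b in (0, 1):
            elab[(i, b)] = rob_enc(K[(i, b ^ bp)], lab[(i, b ^ bp)])
    return elab

def lab_dec(Khat, elab):
    # Khat: one key per column; robustness singles out the decryptable label.
    out = []
    for i in range(len(Khat)):
        m = rob_dec(Khat[i], elab[(i, 0)])
        out.append(m if m is not None else rob_dec(Khat[i], elab[(i, 1)]))
    return out
```

Given the \(\ell \) keys corresponding to an input x, `lab_dec` recovers exactly the labels for x, while the random permutation bits hide which of the two ciphertexts in each column was opened.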

We present the formal construction in Fig. 4. It is given for functionalities \(\mathsf {C} \) where every party receives the same output, which is without loss of generality. Throughout, we will denote by \(\ell \) the length of each party’s \(\mathsf {scHSS} \) share. Note that the circuit \(\mathsf {D}\) used by the construction is defined immediately after in Fig. 5. Finally, p[t] denotes the t’th bit of a string \(p \in \{0,1\}^*\).

Theorem 11

Let \(\mathcal {X}\in \{\)semi-honest in the plain model, semi-honest in the common random/reference string model, malicious in the common random/reference string model\(\}\). Assuming a (vanilla) \(\mathcal {X}\) two-round MPC protocol and a \(\mathsf {scHSS} \) scheme for polynomial-size circuits, there exists an \(\mathcal {X}\) first message succinct two-round MPC protocol.

The proof of Theorem 11 can be found in the full version [BGMM20].

6 Step 3: Two-Round Reusable MPC from FMS MPC

We start by giving a high-level overview of the reusable MPC, which we call \(\mathsf {r}.\mathsf {MPC} \). Recall from Sect. 2.3 that round one of \(\mathsf {r}.\mathsf {MPC} \) essentially just consists of round one of an \(\mathsf {FMS}.\mathsf {MPC} \) instance computing the circuit \(\mathsf{N}\). We refer to this as the 0’th (instance of) MPC. Now fix a circuit \(\mathsf {C} \) to be computed in round two, and its representative string \(p:=\langle \mathsf {C} \rangle \), which we take to be of length m. This string p fixes a root-to-leaf path in a binary tree of MPCs that the parties will compute. In round two, the parties compute round two of the 0’th MPC, plus m (garbled circuit, encrypted labels) pairs. Each of these is used to compute an MPC in the output phase of \(\mathsf {r}.\mathsf {MPC} \). The first \(m-1\) of these MPCs compute \(\mathsf{N}\), and the m’th MPC computes \(\mathsf {C} \).

In the first round of \(\mathsf {r}.\mathsf {MPC} \), each party i also chooses randomness \(r_i\), which will serve as the root for a binary tree of random values generated as in [GGM84] by a PRG \((\mathsf {G} _0,\mathsf {G} _1)\). Below, we set \(r_{i,0} :=r_i\), where the 0 refers to the fact that the 0’th MPC will be computing the circuit \(\mathsf{N}\) on input that includes \(\{r_{i,0}\}_{i \in [n]}\). The string p then generates a sequence of values \(r_{i,1},\dots ,r_{i,m}\) by \(r_{i,d}:=\mathsf {G} _{p[d]}(r_{i,d-1})\). The d’th MPC will be computing the circuit \(\mathsf{N}\) on input that includes \(\{r_{i,d}\}_{i \in [n]}\).

Now, it remains to show how the m (garbled circuit, encrypted labels) pairs output by each party in round two can be used to reconstruct each of the m MPC outputs, culminating in \(\mathsf {C} \). We use a repeated application of the mechanism developed in the last section. In particular, the d’th garbled circuit output by party i computes their second round message of the d’th MPC. The input labels are encrypted using randomness derived from party i’s root randomness \(r_i\). Specifically, as in last section, we use a PRF to compute a \(2 \times \ell \) grid of keys, which will be used to \(\mathsf {LabEnc}\) the \(2 \times \ell \) grid of input labels. The key to this PRF will be generated by a PRG \((\mathsf {H} _0,\mathsf {H} _1)\) applied to \(r_{i,d-1}\). Since we are branching based on the bit p[d], the key will be set to \(\mathsf {H} _{p[d]}(r_{i,d-1})\).
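The derivation of the path values \(r_{i,d}\) and the per-level label-encryption keys can be sketched in a few lines of Python. Here \(\mathsf {G} _b\) and \(\mathsf {H} _b\) are instantiated, purely for illustration, from SHA-256 with domain-separation tags; the real construction only assumes they are a length-doubling PRG split into halves.

```python
import hashlib

def G(b: int, r: bytes) -> bytes:
    # Toy PRG branch G_b used to derive child randomness (illustrative).
    return hashlib.sha256(b"G" + bytes([b]) + r).digest()

def H(b: int, r: bytes) -> bytes:
    # Toy PRG branch H_b used to derive the label-encryption PRF key.
    return hashlib.sha256(b"H" + bytes([b]) + r).digest()

def path_values(r0: bytes, p: str):
    # r_{i,0} := r_i; then r_{i,d} := G_{p[d]}(r_{i,d-1}),
    # and the d'th label-encryption key is H_{p[d]}(r_{i,d-1}).
    rs, keys = [r0], []
    for d in range(len(p)):
        bit = int(p[d])
        keys.append(H(bit, rs[-1]))
        rs.append(G(bit, rs[-1]))
    return rs, keys
```

Note that whoever holds \(r_{i,d-1}\) can compute both \(\mathsf {H} _0(r_{i,d-1})\) and \(\mathsf {H} _1(r_{i,d-1})\), while a decoder following the path p recomputes only the values along p.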

Likewise, the d’th MPC (for \(d<m\)), using inputs \(\{r_{i,d}\}_{i \in [n]}\), computes two instances of the first round of the \(d+1\)’st MPC, the “left child” using inputs \(\{\mathsf {G} _0(r_{i,d})\}_{i \in [n]}\) and the “right child” using inputs \(\{\mathsf {G} _1(r_{i,d})\}_{i \in [n]}\). It then uses the PRF key \(\mathsf {H} _0(r_{i,d})\) to output the \(\ell \) keys corresponding to party i’s left child first round message, and the key \(\mathsf {H} _1(r_{i,d})\) to output the \(\ell \) keys corresponding to party i’s right child first round message.

Finally, in the output phase of \(\mathsf {r}.\mathsf {MPC} \), all parties can recover party i’s second round message of the d’th MPC: they first use the output of the \(d-1\)’st MPC to decrypt party i’s input labels corresponding to its first round message of the d’th MPC, and then use those labels to evaluate its d’th garbled circuit, recovering the second round message. Once all of the d’th second round messages have been recovered, the output may be reconstructed. Note that this output is exactly the set of keys necessary to repeat the process for the \(d+1\)’st MPC. Eventually, the parties arrive at the m’th MPC, which allows them to recover the final output \(\mathsf {C} (x_1,\dots ,x_n)\). One final technicality is that each party’s second round message for each MPC may be generated along with a secret state. We cannot leak this state to the other parties in the output phase, so in the second round of \(\mathsf {r}.\mathsf {MPC} \), parties actually garble circuits that compute their second round (state, message) pair, encrypt the state with their own secret key, and output the encrypted state plus the message in the clear. In the output phase, each party i can decrypt their own state (but not anyone else’s) and use it to reconstruct the output of each MPC.
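The output-phase chaining can be miniaturized as follows: a toy in which each "MPC output" is simply the key unlocking the next level, and `enc`/`dec` are an illustrative XOR-pad stand-in for the label-encryption layer above. The point is only the control flow: knowing the root key, a decoder peels off one level at a time until it reaches the m’th output.

```python
import hashlib
import secrets

def enc(key: bytes, msg: bytes) -> bytes:
    # Toy one-time pad derived from a hashed key (illustrative only).
    pad = hashlib.sha256(key + b"pad").digest()[:len(msg)]
    return bytes(m ^ p for m, p in zip(msg, pad))

dec = enc  # the XOR pad is its own inverse

# Setup: m levels; level d's payload is the key for level d+1,
# except the last level, which carries the final output.
m = 4
keys = [secrets.token_bytes(32) for _ in range(m)]
final_output = b"C(x_1,...,x_n)"
levels = []
for d in range(m):
    payload = keys[d + 1] if d < m - 1 else final_output
    levels.append(enc(keys[d], payload))

# Output phase: walk the chain holding only the root key keys[0].
k = keys[0]
for d in range(m):
    result = dec(k, levels[d])
    if d < m - 1:
        k = result
```

In the real protocol each "level" is an entire MPC output (a set of \(\ell \) label-encryption keys per party) rather than a single key, but the sequential recovery pattern is the same.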

The formal construction and the proof of the following theorem are deferred to the full version [BGMM20].

Theorem 12

Let \(\mathcal {X}\in \{\)semi-honest in the plain or \(\text {CRS}\) model, malicious in the \(\text {CRS}\) model\(\}\). Assuming a first message succinct \(\mathcal {X}\) two-round MPC protocol, there exists an \(\mathcal {X}\) reusable two-round MPC protocol.