1 Introduction

The round complexity of secure multi-party computation (MPC) [19, 39, 40] has been a problem of fundamental interest in cryptography. The last few years have seen major advances in improving the round complexity of secure computation with dishonest majority [1, 6, 7, 9, 10, 16, 20, 22, 24, 27, 32, 34, 38], culminating eventually in four round protocols for secure multi-party computation from general assumptions such as DDH and LWE [1, 7, 16].

Intriguingly, however, when we only require security against (semi-malicious) adversaries that follow protocol specifications, recent research has also constructed MPC protocols that require even less than four rounds of simultaneous message exchange in the plain model. For instance, [11] give a two-round protocol based on indistinguishability obfuscation, while [7] very recently gave a three round protocol from the hardness of the learning with errors assumption.

However, these protocols do not offer any privacy guarantees at all against Byzantine adversaries that may deviate from protocol specifications. Can we achieve meaningful security against Byzantine adversaries in two or three rounds? This question is even more interesting in the setting where parties participate in multiple executions of the MPC protocol concurrently. Indeed, as our world becomes increasingly interconnected, it is hard to imagine that future cryptographic protocols will be carried out in a standalone setting, where participants interact in only a single instance of the protocol. Thus, we ask:

“Can we achieve concurrently secure MPC in two or three rounds?”

Super-polynomial security. Indeed, even defining security against concurrent adversaries in the plain model requires care. Barak, Prabhakaran and Sahai [4] give an explicit “chosen protocol attack” that rules out concurrently secure MPC with polynomial simulation in any number of rounds in the plain model. In fact, even in the stand-alone setting, three round secure computation with polynomial simulation and black-box reductions turns out to be impossible to achieve [16].

However, it has been known for a long time that for MPC, a powerful security notion in the plain model is security with super-polynomial time simulation (SPS) [3, 5, 8, 13, 15, 25, 30, 33, 36]. SPS security circumvents the impossibility results above including the chosen protocol attack in the concurrent setting, and is the most widely studied security model for concurrent MPC in the plain model.

To understand the intuition behind SPS security, it is instructive to view SPS security through the lens of the security loss inherent in all security reductions. In ordinary polynomial-time simulation, the security reduction has a polynomial security loss with respect to the ideal world. That is, an adversary in the real world has as much power as another adversary that runs in polynomially more time in the ideal world. In SPS security, the security reduction has a fixed super-polynomial security loss, for example \(2^{n^\epsilon }\), where n is the security parameter, with respect to the ideal world. Just as in other applications in cryptography using super-polynomial assumptions, this situation still guarantees security as long as the ideal model is itself super-polynomially secure. For instance, if the ideal model hides honest party inputs information-theoretically, then security is maintained even with SPS. For example, this is true for applications like online auctions, where no information is leaked in the ideal world about honest party inputs beyond what can be easily computed from the output. But SPS also guarantees security for ideal worlds with cryptographic outputs, like blind signatures, as long as the security of the cryptographic output is guaranteed against super-polynomial adversaries. Indeed, SPS security was explicitly considered for blind signatures in [14, 17] with practically relevant security parameters computed in [14]. Additional discussion on the meaningfulness of SPS security can be found in the original works of [33, 36] that introduced SPS security in the protocol context.

Prior to our work, the best round complexity even for concurrent two-party computation with SPS security was 5 rounds [15] from standard sub-exponential assumptions. For concurrent MPC with SPS security from standard sub-exponential assumptions, the previous best round complexity was perhaps approximately 20 rounds in the simultaneous message exchange model [13, 26], although to the best of our knowledge, no previous work even gave an approximation of the constant round complexity that is sufficient for the multi-party setting.

1.1 Our Results

We obtain several results on concurrently secure MPC in 2 or 3 rounds:

  1. We obtain the following results for multi-party secure computation with SPS in three rounds in the simultaneous message model, against rushing adversaries.

    • A compiler that converts a large class of three round protocols secure against semi-malicious adversaries, into protocols secure against malicious adversaries, additionally assuming the sub-exponential hardness of DDH or QR or \(N^{th}\) residuosity.

    • A compiler that converts a large class of three round protocols secure against semi-malicious adversaries, into protocols secure against malicious concurrent adversaries, additionally assuming the sub-exponential hardness of DDH or QR or \(N^{th}\) residuosity.

    On instantiating these compilers with the three-round semi-malicious protocol in the recent work of Brakerski et al. [7], we obtain the following main result.

Informal Theorem 1

Assuming sub-exponentially secure LWE and DDH, there exists a three-round protocol in the simultaneous message exchange model with rushing adversaries, that achieves sub-exponential concurrent SPS security for secure multi-party computation for any efficiently computable function, in which all parties can receive output.

The same result holds if the sub-exponential DDH assumption above is replaced with the sub-exponential QR or \(N^{th}\) residuosity assumptions.

  2. We also obtain the following results for multi-party secure computation with SPS in two rounds in the simultaneous message model, against rushing adversaries.

    • A compiler that converts a large class of two round protocols secure against semi-malicious adversaries, into protocols secure against malicious adversaries computing input-less randomized functionalities, assuming sub-exponential hardness of DDH and indistinguishability obfuscation.

    • A compiler that converts a large class of two round protocols secure against semi-malicious adversaries, into protocols secure against concurrent malicious adversaries computing input-less randomized functionalities, assuming sub-exponential hardness of DDH and indistinguishability obfuscation.

    On instantiating these compilers with the two-round semi-malicious protocol in  [11], we obtain the following main result.

Informal Theorem 2

Assuming sub-exponentially secure indistinguishability obfuscation and DDH, there exists a two-round protocol in the simultaneous message exchange model with rushing adversaries, that achieves sub-exponential concurrent SPS security for secure multi-party computation for any efficiently computable randomized input-less function, in which all parties can receive output.

In particular, our protocols can be used to generate samples from any efficiently sampleable distribution. For example, they can be used to concurrently securely sample common reference strings from arbitrary distributions for cryptographic applications, such that the randomness used for sampling remains hidden as long as at least one of the participants is honest. Applications include generating a common reference string sufficient for building universal samplers [23]. Before our work, only the special case of multi-party coin-flipping with SPS was known to be achievable in two rounds [25].

2 Technical Overview

We will now give an overview of the techniques used in our work.

2.1 Three Round MPC Without Setup

A well established approach to constructing secure computation protocols against malicious adversaries in the standalone setting is to use the GMW compiler [19]: “compile” a semi-honest protocol with zero-knowledge arguments to enforce correct behavior. Normally, such compilers involve an initial ‘coin-tossing’ phase, which determines the randomness that will be used by all parties in the rest of the protocol. Unfortunately, in two or three rounds, there is no scope at all to carry out an initial coin-tossing.

However, as observed by [2, 7, 31], certain two and three round protocols satisfy semi-malicious security: that is, the protocol remains secure even when the adversary is allowed to choose malicious randomness, as long as the adversary behaves according to protocol specifications. When compiling semi-malicious protocols, the coin-tossing phase is no longer necessary: at a very high level, it seems like it should suffice to have all parties give proofs of correct behavior. Several difficulties arise when trying to implement such compilers in extremely few rounds. Specifically, in many parts of our protocols, we will have only two rounds to complete the proof of correct behavior. However, attempts to use two-round zero-knowledge with super-polynomial simulation [33] run into a few key difficulties, that we now discuss.

A key concern in MPC is that malicious parties may be arbitrarily mauling the messages sent by other parties. In order to prevent this, we will use two-round non-malleable commitments, that were recently constructed in [21, 25, 28]. In particular, we will rely on a construction of two-round concurrent non-malleable commitments with simultaneous messages, that were constructed by [25] assuming sub-exponential DDH.

The very first difficulty arises as soon as we try to compose non-malleable commitments with SPS-ZK.

Difficulty of using two-round SPS-ZK in few rounds with Simultaneous Messages. Standard constructions of two-round SPS zero-knowledge can be described as follows: the verifier generates a challenge that is hard to invert by adversaries running in time T, then the prover proves (via WI) that either the statement being proven is in the language, or that he knows the inverse of the challenge used by the verifier. This WI argument is such that the witness used by the prover can be extracted (via brute-force) in time \(T' \ll T\). Naturally, this restricts the argument to be zero-knowledge against verifiers that run in time \(T_\mathsf {zk} \ll T' \ll T\).

Thus, if a prover generates an accepting proof for a false statement, the WI argument can be broken in time \(T'\) to invert the challenge, leading to a contradiction. On the other hand, there exists a simulator that runs in time \(T_\mathsf {Sim} \gg T\) to invert the receiver’s challenge and simulate the proof (alternatively, such a simulator can non-uniformly obtain the inverse of the receiver’s challenge). Thus, we have \(T_\mathsf {Sim} \gg T_\mathsf {zk} \).
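As a purely illustrative parameterization (an example only, not a requirement of any particular construction), one can keep in mind the hierarchy

$$T_\mathsf {zk} = 2^{n^{\epsilon }}, \qquad T' = 2^{n^{2\epsilon }}, \qquad T = 2^{n^{3\epsilon }}, \qquad T_\mathsf {Sim} = 2^{n^{4\epsilon }}$$

for some constant \(\epsilon > 0\): the simulator runs in time \(T_\mathsf {Sim} \gg T\), while zero knowledge is only guaranteed against verifiers running in time \(T_\mathsf {zk} \ll T'\).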

Let us now consider an SPS-ZK protocol, run simultaneously with a non-malleable commitment, as illustrated in Fig. 1. The two-round concurrent non-malleable commitment scheme from [25] requires the committer and receiver to send simultaneous messages in the first round of the execution, followed by a single message from the committer in the second round.

Fig. 1. Composing SPS-ZK with Non-malleable commitments

Let us also imagine that multiple parties running such a protocol are sending non-malleable commitments to their inputs, together with messages of the underlying semi-malicious protocol, and SPS-ZK proofs of correct behavior.

In order to begin a reduction between the real and ideal worlds, we would have to begin by simulating the proofs sent by honest parties, and then argue that adversarial parties cannot maul honest parties’ inputs. However, while arguing non-malleability, we cannot simulate proofs non-uniformly, since that would end up also non-uniformly fixing the messages of the non-malleable commitments. Thus, we would want non-malleability of \(\mathsf {NMCom}\) to hold even while we are sending simulated proofs in time \(T_\mathsf {Sim} \).

On the other hand, when we switch a real SPS ZK proof to being simulated, we must argue that the values within the non-malleable commitments provided by the adversary did not suddenly change. To achieve this, it must be true that the quality of the SPS ZK simulation is sufficiently high to guarantee that the messages inside the non-malleable commitments did not change. Specifically, we must be able to break the non-malleable commitments and extract from them in time that is less than \(T_\mathsf {zk} \). Putting together all these constraints, we have that non-malleable commitments should be breakable in time that is less than the time against which they remain non-malleable: this is a direct contradiction.

In order to solve this problem, we must rely on ZK argument systems where the quality of the SPS ZK simulation exceeds the running time of the SPS simulator, namely where \(T_\mathsf {Sim} \ll T_\mathsf {zk} \). Zero-knowledge with strong simulation [33] is, roughly, a primitive that satisfies exactly this constraint. We call such a ZK protocol an SPSS-ZK argument. Such a primitive was recently realized by [25], by constructing a new form of two-round extractable commitments. Note that if one uses SPSS-ZK instead of SPS-ZK, the contradiction described above no longer holds. This is a key insight that allows us to have significantly simpler arguments of SPS security, especially in the concurrent security setting.

However, as we already mentioned, in arguing security against malicious adversaries, we must be particularly wary of malleability attacks. In particular, we would like to ensure that while the simulator provides simulated proofs, the adversary continues to behave honestly – thereby allowing such a simulator to correctly extract the adversary’s input and force the right output. This is the notion of simulation soundness [37]. However, it is unknown how to build a two-round concurrently simulation-sound SPSS ZK argument. We address this by providing a mechanism that emulates two-round and three-round simulation-soundness via strong simulation, in a simultaneous message setting. This mechanism allows us to compile a semi-malicious protocol with a type of non-malleable proofs of honest behavior.

Roughly speaking, the idea behind our strategy for enforcing simulation soundness is to have each party commit not only to its input, but also to all the randomness that it will use in the underlying semi-malicious secure protocol. Then, the high quality of the SPSS ZK simulation will ensure that even the joint distribution of the input, the randomness, and the protocol transcript cannot change when we move to SPS simulation. Since honest behavior can be checked by recomputing the correct messages from the input and randomness, the quality of the SPSS ZK simulation guarantees that adversarial behavior must remain correct. Counter-intuitively, we enforce a situation where we cannot rule out that the adversary is “cheating” in its ZK arguments, but nevertheless the adversary’s behavior in the underlying semi-malicious MPC protocol cannot have deviated from honest behavior.
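To make this recompute-and-check idea concrete, here is a minimal sketch (Python-style, with hypothetical helpers msg1 and msg2 standing in for the message functions of the underlying semi-malicious protocol) of how the extracted input and randomness would be validated against a party's observed transcript:

```python
def behaved_honestly(extracted, observed, msg1, msg2):
    """Validate a party's observed messages against the (input, randomness)
    pair extracted by brute-forcing its non-malleable commitment.

    `observed.round1` / `observed.round2` are the messages the party actually
    broadcast; `observed.transcript1` is everything broadcast in round 1.
    `msg1` and `msg2` are hypothetical stand-ins for the honest message
    functions MSG_1 and MSG_2 of the semi-malicious protocol."""
    inp, rand = extracted
    # An honest party's messages are a deterministic function of its input,
    # its randomness, and the public transcript, so we can simply recompute.
    if observed.round1 != msg1(inp, rand):
        return False
    if observed.round2 != msg2(inp, rand, observed.transcript1):
        return False
    return True
```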

We note that our simulation strategy is uniform and straight-line. The only non-trivial use of rewinding in our protocol is in arguing non-malleability, and this is abstracted away into the underlying non-malleable commitment scheme that we invoke. This leads to a significantly simpler proof of concurrent security.

Several additional subtleties arise in the proofs of security. Please refer to Sect. 4 for additional details on our protocol and complete proofs.

Barriers to Two Round Secure Computation of General Functionalities. We also note that barriers exist to constructing two-round two-party SPS-secure computation of general functionalities with super-polynomial simulation, where both parties receive the output. Let us focus on protocols for the secure computation of a specific functionality \({\mathcal F} (x, y) = (x + y)\), which computes the sum of the inputs of both parties, when interpreted as natural numbers. However, our arguments also extend to all functionalities that are sensitive to the private inputs of individual parties. We will also restrict ourselves to two-round protocols where both parties send an encoding of their input in the first round, while the next round is used to compute the output. It is not difficult to see that any protocol for two-round two-party secure computation of general functionalities must satisfy this property, as long as security must hold against non-uniform adversaries. If the first message were not committing, then a non-uniform adversary could obtain a first message that is consistent with two inputs, and then, by aborting in the second round, it could obtain two outputs of the function on two different inputs, violating security.

Let \(\varPi \) denote a two-round secure computation protocol between two parties A and B, where both parties receive the output. We will also consider a “mauling” rushing adversary that corrupts B, let us denote this corrupted party by \(\widetilde{B}\). At the beginning of the protocol A sends an honest encoding of its input X. After obtaining the first round message from party A, suppose that \(\widetilde{B}\) “mauls” the encoding sent by A and generates another encoding of the same input X. Because the encodings must necessarily hide the inputs of parties, the honest PPT party A cannot detect if such a mauling occurred, and sends the second message of the protocol. At this point, \(\widetilde{B}\) generates its second round message on its own, but does not send this message. Instead, \(\widetilde{B}\) computes the output of the protocol (which is guaranteed by correctness). The adversary \(\widetilde{B}\) learns 2X, and blatantly breaks security of the SPS-secure protocol. Similarly, a rushing adversary could choose to corrupt party A and launch the same attack. Getting over this barrier would clearly require constructing non-interactive non-malleable commitments.
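To make the attack flow explicit, here is a minimal sketch (all helpers are hypothetical abstractions of a candidate two-round protocol; maul stands for the re-encoding the adversary is assumed to be able to perform, which is exactly what non-malleability would prevent):

```python
def mauling_attack(honest_A, maul, evaluate):
    """Sketch of the rushing attack on a candidate two-round protocol for
    F(x, y) = x + y where both parties receive the output.

    `honest_A` models the honest party A holding secret input X, `maul`
    turns A's round-1 encoding into a fresh encoding of the same X (without
    learning X), and `evaluate` is the public output computation."""
    enc_A = honest_A.round1()          # A's round-1 encoding of its input X
    enc_B = maul(enc_A)                # adversary re-encodes the same X
    msg2_A = honest_A.round2(enc_B)    # A cannot detect the maul, so it replies
    # The adversary withholds its own round-2 message and evaluates locally,
    # learning F(X, X) = 2X and hence X, while A gets no output.
    return evaluate(enc_A, enc_B, msg2_A)
```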

2.2 Two Round MPC Without Setup for Input-Less Randomized Functionalities

We begin by noting that the discussion above on the hardness of two-round MPC with super-polynomial simulation does not rule out functionalities that are not sensitive to the private inputs of parties. In particular, let us consider input-less randomized functionalities. Even though the functionality is input-less, each party must still contribute to selecting the secret randomness on which the function is to be evaluated. At first glance, it may appear that we still have the same problem: in only two rounds, perhaps this “implied input” can be compromised. However, note that for input-less functionalities, if the adversary aborts, then even if the adversary learns the “implied inputs” of the honest parties, this does not violate security because the honest parties will not accept the output of the protocol. Thus, the honest parties’ contributions to the randomness are discarded since the protocol execution is aborted. As such, we only need to guarantee security of the honest party inputs if the protocol terminates correctly – that is, if the adversary is able to send second-round messages that do not cause the protocol to abort.

More technically, the only actual requirement is that a super-polynomial simulator must be able to correctly and indistinguishably force the output of the computation to an externally generated value. The security of each honest party’s contribution to the randomness is implied by this forcing.

We show that this is indeed possible using only two rounds of interaction in the simultaneous message model, under suitable cryptographic assumptions. We describe a compiler that compiles a large class of two-round secure computation protocols for input-less randomized functionalities from semi-malicious to full malicious (and even concurrent) security. We consider functionalities where each party contributes some randomness, and the joint randomness of all parties is used to sample an output from some efficiently sampleable distribution.

Our protocol follows a similar template to the protocol described for the 3-round case: parties first commit to all the input and randomness that they will use throughout the execution via a non-malleable commitment. Simultaneously, parties run an underlying two-round semi-malicious protocol and, by the end of the second round, provide SPSS-ZK proofs that they correctly computed all messages. We stress again that, in order to argue overall security, we only need to hide the shares of randomness contributed by honest parties if the adversary successfully completes both rounds of the protocol without causing an abort.

At the same time, in order to enforce correctness, the simulator would still need to extract the randomness used by the adversary at the end of the first round of the computation. Unlike in our three round protocol, here the simulator will try to extract randomness at the end of the first round anyway. This is because the simulator can afford to be optimistic: either its extraction is correct, in which case it can make use of it when forcing the output; or its extraction is incorrect, in which case we will guarantee that the adversary causes the protocol to abort in the second round, because of the SPSS ZK argument that the adversary must give proving that it behaved honestly in the first round.
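A minimal sketch of this optimistic strategy (hypothetical helper names; extraction, proof checking and output forcing are abstracted away) looks as follows:

```python
def simulate_two_round(adversary, extract, proofs_verify, force_output):
    """Sketch of the optimistic simulator for input-less functionalities.

    `extract` brute-forces the adversary's non-malleable commitments after
    round 1 (its answer may be wrong if the adversary misbehaved), and
    `force_output` uses the extracted randomness to bias the result to the
    externally generated value.  All four arguments are hypothetical."""
    round1 = adversary.send_round1()
    guess = extract(round1)                  # optimistic: may be incorrect
    round2 = adversary.send_round2()
    if not proofs_verify(round1, round2):
        return "abort"                       # a wrong guess is harmless: no output
    # If the SPSS-ZK proofs verify, the adversary behaved consistently with
    # its commitments, so the optimistic extraction was correct.
    return force_output(guess, round1, round2)
```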

We need to take additional care when defining the simulation strategy when the simulator extracts incorrect randomness: this causes other subtleties in our proof of security. The complete constructions and proofs of standalone as well as concurrent security can be found in Sect. 5.

3 Preliminaries

Here, we recall some preliminaries that will be useful in the rest of the paper. We will typically use n to denote the security parameter. We say that \(T_1(n) \gg T_2(n)\) if \(T_1(n) > T_2(n) \cdot n^c\) for every constant c and all sufficiently large n.

We define a T-time machine as a non-uniform Turing Machine that runs in time at most T. All honest parties in definitions below are by default uniform interactive Turing Machines, unless otherwise specified.

3.1 ZK with Superpolynomial Simulation

We will use two message ZK arguments with super-polynomial simulation (SPS) and with super-polynomial strong simulation (SPSS) [34].

Definition 1

(Two Message \((T_\mathsf {Sim} , T_\mathsf {zk}, \delta _\mathsf {zk})\) -ZK Arguments With Superpolynomial Simulation). [34] We say that an interactive proof (or argument) \(\langle P, V \rangle \) for the language \(L \in \mathsf {NP}\), with the witness relation \(R_L\), is \((T_\mathsf {Sim} , T_\mathsf {zk}, \delta _\mathsf {zk})\)-simulatable if for every \(T_\mathsf {zk} \)-time machine \(V^*\) there exists a probabilistic simulator \({\mathcal S} \) with running time bounded by \(T_\mathsf {Sim} \) such that the following two ensembles are \((T_\mathsf {zk}, \delta _\mathsf {zk})\)-computationally indistinguishable (when the distinguishing gap is a function in \(n = |x|\)):

  • \(\{( \langle P(y), V^*(z)\rangle (x) ) \}_{z \in {\{0,1\}} ^*,\,x \in L}\) for arbitrary \(y \in R_L(x)\)

  • \(\{ {\mathcal S} (x, z) \}_{z \in {\{0,1\}} ^*,\,x \in L}\)

That is, for every probabilistic algorithm D running in time polynomial in the length of its first input, all sufficiently long \(x \in L\), all \(y \in R_L(x)\) and all auxiliary inputs \(z \in {\{0,1\}} ^*\) it holds that

$$\Pr [D(x, z, \langle P(y), V^*(z)\rangle (x) ) = 1] - \Pr [D(x, z, {\mathcal S} (x, z)) = 1] < \delta _\mathsf {zk} ({\lambda })$$

Definition 2

We say that a two-message \((T_\mathsf {Sim} , T_\mathsf {zk}, \delta _\mathsf {zk})\)-SPS ZK argument satisfies non-uniform simulation (for delayed statements) if we can write the simulator \({\mathcal S} = ({\mathcal S} _1, {\mathcal S} _2)\) where \({\mathcal S} _1(V^*(z))\), which outputs \(\sigma \), runs in \(T_\mathsf {Sim} \)-time, but where \({\mathcal S} _2(x,z,\sigma )\), which outputs the simulated view of the verifier \(V^*\), runs in only polynomial time.

3.2 ZK with Super-Polynomial Strong Simulation

We now define zero-knowledge with strong simulation. We use the definition in [25].

Definition 3

( \((T_\varPi , T_\mathsf {Sim} , T_\mathsf {zk}, T_L,\delta _\mathsf {zk})\) -SPSS Zero Knowledge Arguments). We call an interactive protocol between a PPT prover P with input \((x, w) \in R_L\) for some language L, and PPT verifier V with input x, denoted by \(\langle P, V \rangle (x,w)\), a super-polynomial strong simulation (SPSS) zero-knowledge argument if it satisfies the following properties and \(T_\varPi \ll T_\mathsf {Sim} \ll T_{\mathsf {zk}} \ll T_{L}\):

  • Completeness. For every \((x, w) \in R_L\), \(\Pr [V \, outputs \,1|\langle P, V \rangle (x,w)] \ge 1 - \mathsf {negl} ({\lambda })\), where the probability is over the random coins of P and V.

  • \(T_\varPi \) -Adaptive-Soundness. For any language L that can be decided in time at most \(T_L\), every x, every \(z \in {\{0,1\}} ^*\), and every poly-non-uniform prover \(P^*\) running in time at most \(T_\varPi \) that chooses x adaptively after observing the verifier’s message, \(\Pr [\langle P^*(z), V \rangle (x) = 1 ~\wedge ~ x \not \in L] \le \mathsf {negl} ({\lambda })\), where the probability is over the random coins of V.

  • \(T_\mathsf {Sim} , T_\mathsf {zk}, \delta _\mathsf {zk} \) -Zero Knowledge. There exists a (uniform) simulator \({\mathcal S} \) that runs in time \(T_\mathsf {Sim} \), such that for every x, every non-uniform \(T_\mathsf {zk} \)-verifier \(V^*\) with advice z, and every \(T_\mathsf {zk} \)-distinguisher \({\mathcal D} \): \(\left| \Pr [{\mathcal D} (x, z, \mathsf {view} _{V^*}[\langle P, V^*(z) \rangle (x,w)]) = 1]\right. \) \(\left. - \Pr [{\mathcal D} (x, z, {\mathcal S} ^{V^*}(x, z)) = 1] \right| \le \delta _\mathsf {zk} ({\lambda })\)

3.3 Non-Malleability w.r.t. Commitment

Throughout this paper, we will use \({\lambda } \) to denote the security parameter, and \(\mathsf {negl} ({\lambda })\) to denote any function that is asymptotically smaller than \(\frac{1}{\mathsf {poly} ({\lambda })}\) for any polynomial \(\mathsf {poly} (\cdot )\). We will use PPT to describe a probabilistic polynomial time machine. We will also use the words “rounds” and “messages” interchangeably.

We follow the definition of non-malleable commitments introduced by Pass and Rosen [35] and further refined by Lin et al. [29] and Goyal [20] (which in turn build on the original definition of [12]). In the real interaction, there is a man-in-the-middle adversary \(\mathsf {MIM}\) interacting with a committer \({\mathcal C} \) (where \({\mathcal C} \) commits to value v) in the left session, and interacting with receiver \({\mathcal R} \) in the right session. Prior to the interaction, the value v is given to C as local input. \(\mathsf {MIM} \) receives an auxiliary input z, which might contain a-priori information about v. Then the commit phase is executed. Let \(\mathsf {MIM} _{\langle C, R\rangle }(\mathsf {val},z) \) denote a random variable that describes the value \(\widetilde{\mathsf {val}}\) committed by the \(\mathsf {MIM}\) in the right session, jointly with the view of the \(\mathsf {MIM}\) in the full experiment. In the simulated experiment, a PPT simulator \({\mathcal S} \) directly interacts with the \(\mathsf {MIM} \). Let \(\mathsf {Sim} _{\langle C, R\rangle }(1^{\lambda },z) \) denote the random variable describing the value \(\widetilde{\mathsf {val}}\) committed to by \({\mathcal S} \) and the output view of \({\mathcal S} \). If the tags in the left and right interaction are equal, the value \(\widetilde{\mathsf {val}}\) committed in the right interaction, is defined to be \(\bot \) in both experiments.

Concurrent non-malleable commitment schemes consider a setting where the \(\mathsf {MIM}\) interacts with committers in polynomially many (a-priori unbounded) left sessions, and interacts with receiver(s) in up to \(\ell (n)\) right sessions. If any of the tags (in any right session) are equal to any of the tags in any left session, we set the value committed by the \(\mathsf {MIM}\) to \(\bot \) for that session. Then we let \(\mathsf {MIM} _{\langle C, R\rangle }(\mathsf {val},z) ^{\mathsf {many}}\) denote the joint distribution of all the values committed by the \(\mathsf {MIM}\) in all right sessions, together with the view of the \(\mathsf {MIM}\) in the full experiment, and \(\mathsf {Sim} _{\langle C, R\rangle }(1^{\lambda },z) ^{\mathsf {many}}\) denote the joint distribution of all the values committed by the simulator \({\mathcal S} \) (with access to the \(\mathsf {MIM}\)) in all right sessions together with the view.

Definition 4

(Non-malleable Commitments w.r.t. Commitment). A commitment scheme \(\langle C, R \rangle \) is said to be non-malleable if for every PPT \(\mathsf {MIM}\), there exists a PPT simulator \({\mathcal S} \) such that the following ensembles are computationally indistinguishable:

$$\{\mathsf {MIM} _{\langle C, R\rangle }(\mathsf {val},z) \}_{n \in {\mathbb N}, v \in {\{0,1\}} ^{\lambda }, z \in {\{0,1\}} ^*} \, and \, \{\mathsf {Sim} _{\langle C, R\rangle }(1^{\lambda },z) \}_{n \in {\mathbb N}, v \in {\{0,1\}} ^{\lambda }, z \in {\{0,1\}} ^*}$$

Definition 5

( \(\ell (n)\) -Concurrent Non-malleable Commitments w.r.t. Commitment). A commitment scheme \(\langle C, R \rangle \) is said to be \(\ell (n)\)-concurrent non-malleable if for every PPT \(\mathsf {MIM}\), there exists a PPT simulator \({\mathcal S} \) such that the following ensembles are computationally indistinguishable:

$$ \{\mathsf {MIM} _{\langle C, R\rangle }(\mathsf {val},z) ^\mathsf {many}\}_{n \in {\mathbb N}, v \in {\{0,1\}} ^{\lambda }, z \in {\{0,1\}} ^*} \, and \, \{\mathsf {Sim} _{\langle C, R\rangle }(1^{\lambda },z) ^{\mathsf {many}}\}_{n \in {\mathbb N}, v \in {\{0,1\}} ^{\lambda }, z \in {\{0,1\}} ^*} $$

We say that a commitment scheme is fully concurrent, with respect to commitment, if it is concurrent for any a-priori unbounded polynomial \(\ell (n)\).

3.4 Secure Multiparty Computation

As in [18], we follow the real-ideal paradigm for defining secure multi-party computation. The only difference is that our simulator can run in super-polynomial time. A formal definition can be found in the full version.

Semi-malicious adversary: An adversary is said to be semi-malicious if it follows the protocol correctly, but with potentially maliciously chosen randomness. We refer the reader to the full version for more details.

Concurrent security: The definition of concurrent secure multi-party computation considers an extension of the real-ideal model where the adversary participates simultaneously in many executions, corrupting subsets of parties in each execution. We refer the reader to [8, 13] for a detailed definition of concurrent security.

4 Three Round Malicious Secure MPC

Let f be any functionality. Consider n parties \(\mathsf {P}_1,\ldots ,\mathsf {P}_n\) with inputs \({\mathsf x} _1,\ldots ,{\mathsf x} _n\) respectively who wish to compute f on their joint inputs by running a secure multiparty computation (MPC) protocol. Let \(\pi ^{SM}\) be any 3 round protocol that runs without any setup for the above task and is secure against adversaries that can be completely malicious in the first round, semi-malicious in the next two rounds and can corrupt up to \((n-1)\) parties. In this section, we show how to generically transform \(\pi ^{SM}\) into a 3 round protocol \(\pi \) without setup with super-polynomial simulation and secure against malicious adversaries that can corrupt up to \((n-1)\) parties. Formally, we prove the following theorem:

Theorem 1

Assuming sub-exponentially secure:

  • A, where A \( \in \{DDH, \ Quadratic \ Residuosity, \ N^{th} \ Residuosity\}\) AND

  • 3 round MPC protocol for any functionality f that is secure against malicious adversaries in the first round and semi-malicious adversaries in the next two rounds,

the protocol presented in Fig. 2 is a 3 round MPC protocol for any functionality f, in the plain model with super-polynomial simulation.

We can instantiate the underlying MPC protocol with the construction of Brakerski et al. [7], which satisfies our requirements. That is:

Imported Lemma 1

([7]): There exists a 3 round MPC protocol for any functionality f based on the LWE assumption that is secure against malicious adversaries in the first round and semi-malicious adversaries in the next 2 rounds.

Additionally, Dodis et al. [11] give a 2 round construction based on indistinguishability obfuscation that is secure against semi-malicious adversaries. Of course, this can be interpreted as a 3 round construction where the first round has no message and is trivially secure against malicious adversaries in the first round.

Formally, we obtain the following corollary on instantiating the MPC protocol with the sub-exponentially secure variants of the above:

Corollary 1

Assuming sub-exponentially secure:

  • A, where A \( \in \{DDH, \ Quadratic \ Residuosity, \ N^{th} \ Residuosity\}\) AND

  • B, where B \( \in \{LWE, \ Indistinguishability \ Obfuscation\}\)

the protocol presented in Fig. 2 is a 3 round MPC protocol for any functionality f, in the plain model with super-polynomial simulation.

Note that though the two underlying MPC protocols can be based on the security of polynomially hard LWE and polynomially hard iO respectively, we require sub-exponentially secure variants of the MPC protocol and hence we use sub-exponentially secure LWE and iO in our constructions.

Remark 1

(On the Semi-Malicious security of [11]). We note that the protocol in [11] works in two rounds: In the first round, each party provides a suitably “spooky” homomorphic encryption of its input, under public keys chosen by each party independently. After the first round, each party carries out a deterministic homomorphic evaluation procedure that results in an encryption of \(f(\mathbf {x})\), where \(\mathbf {x} \) is a vector that combines inputs of all parties. In the second round, each party computes a partial decryption of this ciphertext. The result is guaranteed to be the sum of these partial decryptions in a suitable cyclic group.

Furthermore, their protocol satisfies the invariant that given the (possibly maliciously chosen) randomness of the corrupted parties for the first round, and given the vector of ciphertexts that are fixed after the first round, it is possible to efficiently compute, at the end of the first round, the decryption shares for all corrupted parties. Thus, if there is one honest party and the other parties are corrupted, given the final output value \(f(\mathbf {x})\), the first round ciphertexts and the randomness of the corrupted semi-malicious parties, it is possible to compute the unique decryption share of the honest party that would force the desired output value. This property shows that their protocol satisfies semi-malicious security, since the first round message of the simulated honest party can simply be the honest first round message corresponding to the input 0, and the second round message can be computed from \(f(\mathbf {x})\), the first round ciphertexts and the randomness of the corrupted semi-malicious parties. The work of [31] further showed how to transform such a 2-round semi-malicious MPC protocol that handles exactly all-but-one corruptions into a 2-round semi-malicious MPC protocol that handles any number of corruptions.
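As an illustration of this forcing property, here is a simplified sketch (assuming, purely for illustration, that reconstruction is addition modulo a public prime q; names and structure are not the actual scheme of [11]):

```python
# Simplified sketch, assuming reconstruction is addition modulo a public
# prime q; this is illustrative only, not the actual scheme of [11].
Q = 2**61 - 1  # an example public modulus

def reconstruct(shares):
    """Output reconstruction as the sum of all partial decryption shares."""
    return sum(shares) % Q

def force_honest_share(target_output, corrupted_shares):
    """The unique share the single honest party must contribute so that
    reconstruction yields the externally fixed value f(x).  The corrupted
    parties' shares are computable from their (semi-malicious) round-1
    randomness, so the simulator can solve for this share."""
    return (target_output - sum(corrupted_shares)) % Q
```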

4.1 High-Level Overview

Before describing our protocol formally, to help the exposition, we first give a brief overview of the construction in this subsection.

Consider n parties \(\mathsf {P}_1,\ldots ,\mathsf {P}_n\) with inputs \({\mathsf x} _1,\ldots ,{\mathsf x} _n\) respectively who wish to run a secure MPC to compute a function f on their joint inputs. Initially, each party \(\mathsf {P}_i\) picks some randomness \(\mathsf {r}_i\) that it will use to run the semi-malicious protocol \(\mathsf {\pi ^{SM}}\).

In the first round, each party \(\mathsf {P}_i\) sends the first round message of the protocol \(\mathsf {\pi ^{SM}}\). Then, with every other party \(\mathsf {P}_j\), \(\mathsf {P}_i\) initiates two executions of the \(\mathsf {SPSS.ZK}\) argument system playing the verifier’s role. Additionally, \(\mathsf {P}_i\) and \(\mathsf {P}_j\) also initiate two executions of a non-malleable commitment scheme - each acting as the committer in one of them. \(\mathsf {P}_i\) commits to the pair \(({\mathsf x} _i,\mathsf {r}_i)\) - that is, the input and randomness used in the protocol \(\mathsf {\pi ^{SM}}\). Recall that the first round messages of \(\mathsf {\pi ^{SM}}\) are already secure against malicious adversaries, so intuitively, the protocol doesn’t require any proofs in the first round.

In the second round, each party \(\mathsf {P}_i\) sends the second round message of the protocol \(\mathsf {\pi ^{SM}}\) using input \({\mathsf x} _i\) and randomness \(\mathsf {r}_i\). Then, \(\mathsf {P}_i\) finishes executing the non-malleable commitments (playing the committer’s role) with every other party \(\mathsf {P}_j\), committing to \(({\mathsf x} _i,\mathsf {r}_i)\). Finally, with every other party \(\mathsf {P}_j\), \(\mathsf {P}_i\) completes the execution of the \(\mathsf {SPSS.ZK}\) argument by sending its second message - \(\mathsf {P}_i\) proves that the two messages sent so far using the protocol \(\mathsf {\pi ^{SM}}\) were correctly generated using the pair \(({\mathsf x} _i,\mathsf {r}_i)\) committed to using the non-malleable commitment.

In the third round, each party \(\mathsf {P}_i\) first verifies all the proofs it received in the last round and sends a global abort (asking all the parties to abort) if any proof does not verify. Then, \(\mathsf {P}_i\) sends the third round message of the protocol \(\mathsf {\pi ^{SM}}\) using input \({\mathsf x} _i\) and randomness \(\mathsf {r}_i\). Finally, as before, with every other party \(\mathsf {P}_j\), \(\mathsf {P}_i\) completes the execution of the \(\mathsf {SPSS.ZK}\) argument by sending its second message - \(\mathsf {P}_i\) proves that its third round message of the protocol \(\mathsf {\pi ^{SM}}\) was correctly generated using the pair \(({\mathsf x} _i,\mathsf {r}_i)\) committed to using the non-malleable commitment.

Each party \(\mathsf {P}_i\) now computes its final output as follows. \(\mathsf {P}_i\) first verifies all the proofs it received in the previous round and sends a global abort (asking all the parties to abort) if any proof does not verify. Then, \(\mathsf {P}_i\) computes the output using the output computation algorithm of the semi-malicious protocol \(\mathsf {\pi ^{SM}}\). This completes the protocol description.
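The following is a highly simplified sketch of one party's view of the compiled protocol (all cryptographic operations are hypothetical stand-ins: nmcom_* for the commitment messages, zk_* for the SPSS.ZK messages, and msg1/msg2/msg3/out for the semi-malicious protocol; per-pair flows are collapsed into single calls):

```python
def run_party(x_i, r_i, net, crypto):
    """Highly simplified sketch of party P_i in the compiled 3-round protocol.
    `net` broadcasts and collects messages; `crypto` bundles hypothetical
    stand-ins for NMCom, SPSS.ZK and the underlying semi-malicious protocol."""
    # Round 1: first semi-malicious message, first committer message of the
    # non-malleable commitment to (x_i, r_i), and the verifier messages of
    # the SPSS.ZK executions in which P_i acts as verifier.
    net.broadcast(crypto.msg1(x_i, r_i),
                  crypto.nmcom_first(x_i, r_i),
                  crypto.zk_verifier_msgs())

    # Round 2: second semi-malicious message, second committer message, and
    # an SPSS.ZK proof that rounds 1-2 are consistent with the committed pair.
    net.broadcast(crypto.msg2(x_i, r_i, net.transcript()),
                  crypto.nmcom_second(x_i, r_i),
                  crypto.zk_prove(x_i, r_i, net.transcript()))

    # Round 3: abort globally if any received proof fails; otherwise send the
    # third semi-malicious message and a proof of its consistency.
    if not all(crypto.zk_verify(p) for p in net.received_proofs()):
        net.broadcast("abort")
        return None
    net.broadcast(crypto.msg3(x_i, r_i, net.transcript()),
                  crypto.zk_prove(x_i, r_i, net.transcript()))

    # Output: verify the round-3 proofs, then run the output algorithm of the
    # semi-malicious protocol on the full transcript.
    if not all(crypto.zk_verify(p) for p in net.received_proofs()):
        net.broadcast("abort")
        return None
    return crypto.out(x_i, r_i, net.transcript())
```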

Security Proof: We now briefly describe how the security proof works. Let’s consider an adversary \({\mathcal A} \) who corrupts a set of parties. Recall that the goal is to move from the real world to the ideal world such that the outputs of the honest parties along with the view of the adversary are indistinguishable. We do this via a sequence of computationally indistinguishable hybrids.

The first hybrid, \(\mathsf {Hyb}_1\), refers to the real world. In \(\mathsf {Hyb}_2\), the simulator extracts the adversary’s input and randomness (used in protocol \(\mathsf {\pi ^{SM}}\)) by a brute force break of the non-malleable commitment. The simulator aborts if the extracted values don’t correctly reconstruct the protocol messages of the underlying semi-malicious protocol. These two hybrids are indistinguishable because, by the soundness of the proof system, except with negligible probability, the values extracted by the simulator correctly reconstruct the protocol messages.

Then, in \(\mathsf {Hyb}_3\), we switch the \(\mathsf {SPSS.ZK}\) arguments used by all honest parties in rounds 2 and 3 to simulated ones. This hybrid is computationally indistinguishable from the previous hybrid by the security of the \(\mathsf {SPSS.ZK}\) system. Notice that when we switch from real to simulated arguments, we can no longer rely on the adversary’s zero knowledge arguments to argue the correctness of the values extracted by breaking the non-malleable commitment. That is, the adversary’s arguments may not be simulation sound. However, recall that to check the validity of the extracted values, we only rely on the correct reconstruction of the semi-malicious protocol messages, and hence this is not a problem. Also, the running time of the simulator in these two hybrids is the time taken to break the non-malleable commitment, \(\mathsf {T^{Brk}_{Com}}\), which must be less than the time \(\mathsf {T_{ZK}}\) against which the zero knowledge property holds.

In \(\mathsf {Hyb}_4\), we switch all the non-malleable commitments sent by honest parties to be commitments of 0 instead of the actual input and randomness. Recall that since the arguments of the honest parties are simulated, this doesn’t violate correctness. Also, this hybrid is computationally indistinguishable from the previous hybrid by the security of the non-malleable commitment scheme. One issue that arises here is whether the simulator continues to extract the adversary’s inputs correctly. Recall that to extract, the simulator has to break the non-malleable commitment, for which it has to run in time \(\mathsf {T^{Brk}_{Com}}\). However, the reduction to the security of the non-malleable commitment only makes sense if the simulator runs in time less than that needed to break the non-malleable commitment. We overcome this issue by a sequence of sub-hybrids where we first switch the simulator to not extract the adversary’s inputs, then switch the non-malleable commitments, and finally go back to the simulator extracting the adversary’s inputs. We elaborate on this in the formal proof.

Then, in \(\mathsf {Hyb}_5\), we run the simulator of \(\mathsf {\pi ^{SM}}\) using the extracted values to generate the protocol messages. This hybrid is indistinguishable from the previous one by the security of \(\mathsf {\pi ^{SM}}\). Once again, in order to ensure correctness of the extracted values, we require the running time of the simulator, which is \(\mathsf {T^{Brk}_{Com}}\), to be less than the time against which the semi-malicious protocol \(\mathsf {\pi ^{SM}}\) is secure. This is because the simulator can then continue to extract the adversary’s input and randomness used for the protocol \(\mathsf {\pi ^{SM}}\) by breaking the non-malleable commitment, without violating the security of \(\mathsf {\pi ^{SM}}\). This hybrid (\(\mathsf {Hyb}_5\)) now corresponds to the ideal world. Notice that our simulation is in fact straight-line. There are other minor technicalities that arise and we elaborate on these in the formal proof.

4.2 Construction

We first list some notation and the primitives used before describing the construction.

Notation:

  • \({\lambda } \) denotes the security parameter.

  • \(\mathsf {SPSS.ZK}= (\mathsf {ZK}_{1},\mathsf {ZK}_{2},\mathsf {ZK}_{3})\) is a two message zero knowledge argument with super polynomial strong simulation (SPSS-ZK). The zero knowledge property holds against all adversaries running in time \(\mathsf {T_{ZK}}\). Let \(\mathsf {Sim^{ZK}}\) denote the simulator that produces simulated ZK proofs and let \(\mathsf {T^{Sim}_{ZK}}\) denote its running time. [25] give a construction of an \(\mathsf {SPSS.ZK}\) scheme satisfying these properties that can be based on one of the following sub-exponential assumptions: (1) DDH; (2) Quadratic Residuosity; (3) \(N^{th}\) Residuosity.

  • \(\mathsf {NMCom}= (\mathsf {NMCom}_{1}^R,\mathsf {{NMCom}^S_{1}}, \mathsf {NMCom}_{2}^S)\) is a two message concurrent non-malleable commitment scheme with respect to commitment in the simultaneous message model. Here, \(\mathsf {NMCom}_{1}^R,\mathsf {{NMCom}^S_{1}}\) denote the first message of the receiver and sender respectively while \(\mathsf {NMCom}_{2}^S\) denotes the second message of the sender. It is secure against all adversaries running in time \(\mathsf {T^{Sec}_{Com}}\), but can be broken by adversaries running in time \(\mathsf {T^{Brk}_{Com}}\). Let \(\mathsf {Ext.Com}\) denote a brute force algorithm running in time \(\mathsf {T^{Brk}_{Com}}\) that can break the commitment scheme. [25] give a construction of an \(\mathsf {NMCom}\) scheme satisfying these properties that can be based on one of the following sub-exponential assumptions: (1) DDH; (2) Quadratic Residuosity; (3) \(N^{th}\) Residuosity.

    The \(\mathsf {NMCom}\) we use is tagged. In the authenticated channels setting, the tag of each user performing a non-malleable commitment can just be its identity. In the general setting, in the first round, each party can choose a strong digital signature verification key \(\mathsf {VK}\) and signing key, and then sign all its messages using this signature scheme for every message sent in the protocol. This \(\mathsf {VK}\) is then used as the tag for all non-malleable commitments. This ensures that every adversarial party must choose a tag that is different from the tags chosen by honest parties; otherwise the adversary will not be able to sign any of its messages, by the existential unforgeability property of the signature scheme. This is precisely the property that is assumed when applying \(\mathsf {NMCom}\). For ease of notation, we suppress writing the tags explicitly in our protocols below.

  • \(\mathsf {\pi ^{SM}}\) is a sub-exponentially secure 3 round MPC protocol that is secure against malicious adversaries in the first round and semi-malicious adversaries in the next two rounds. This protocol is secure against all adversaries running in time \(\mathsf {T_{SM}}\). Let \((\mathsf {MSG}_{1},\mathsf {MSG}_{2},\mathsf {MSG}_{3})\) denote the algorithms used by any party to compute the messages in each of the three rounds and \(\mathsf {OUT}\) denotes the algorithm to compute the final output. Further, let’s assume that this protocol \(\mathsf {\pi ^{SM}}\) runs over a broadcast channel. Let \({\mathcal S} = ({\mathcal S} _1,{\mathcal S} _2,{\mathcal S} _3)\) denote the straight line simulator for this protocol - that is, \({\mathcal S} _i\) is the simulator’s algorithm to compute the \(i^{th}\) round messages. Also, we make the following assumptions about the protocol structure, which are satisfied by the instantiations we consider:

    1. \({\mathcal S} _1\) and \({\mathcal S} _2\) run without any input other than the protocol transcript so far - in particular, they don’t need the input, randomness and output of the malicious parties. For \({\mathcal S} _1\), this must necessarily be true since the first round of \(\mathsf {\pi ^{SM}}\) is secure against malicious adversaries. We make the assumption only on \({\mathcal S} _2\).

    2. The algorithm \(\mathsf {MSG}_{3}\) doesn’t require any new input or randomness that was not already used in the algorithms \(\mathsf {MSG}_{1},\mathsf {MSG}_{2}\). Looking ahead, this is used in our security proof: when we want to invoke the simulator of the protocol \(\mathsf {\pi ^{SM}}\), we need to be sure that we have fed it the correct input and randomness. This is true for all instantiations we consider, where the semi-malicious simulator requires only the secret keys of corrupted parties (which are fixed in the second round) apart from the protocol transcript.

In order to realize our protocol, we require that \(\mathsf {poly} (\lambda )< \mathsf {T^{Sim}_{ZK}}< \mathsf {T^{Sec}_{Com}}< \mathsf {T^{Brk}_{Com}}< \mathsf {T_{ZK}}, \mathsf {T_{SM}}\).
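For concreteness, one illustrative way to satisfy this chain (an example only; any hierarchy meeting the stated inequalities works) is to set

$$\mathsf {T^{Sim}_{ZK}}= 2^{{\lambda } ^{\epsilon }}, \quad \mathsf {T^{Sec}_{Com}}= 2^{{\lambda } ^{2\epsilon }}, \quad \mathsf {T^{Brk}_{Com}}= 2^{{\lambda } ^{3\epsilon }}, \quad \mathsf {T_{ZK}}= \mathsf {T_{SM}}= 2^{{\lambda } ^{4\epsilon }}$$

for a suitable constant \(\epsilon > 0\), instantiating each primitive at the corresponding level of sub-exponential security.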

The construction of the protocol is described in Fig. 2. We assume broadcast channels. In our construction, we use proofs for some NP languages that we elaborate on below.

NP language L is characterized by the following relation R.

Statement : \(\mathsf {st}= (\mathsf {c}_1, \hat{\mathsf {c}}_1, \mathsf {c}_2, \mathsf {msg}_1, \mathsf {msg}_2, \tau )\)

Witness : \(\mathsf {w}=(\mathsf {inp}, \mathsf {r}, \mathsf {r}_\mathsf {c})\)

\(R(\mathsf {st},\mathsf {w})=1\) if and only if :

  • \(\hat{\mathsf {c}}_1 = \mathsf {{NMCom}^S_{1}}(\mathsf {inp},\mathsf {r};\mathsf {r}_\mathsf {c})\) AND

  • \(\mathsf {c}_2 = \mathsf {NMCom}_{2}^S(\mathsf {inp},\mathsf {r}, \mathsf {c}_1 ;\mathsf {r}_\mathsf {c})\) AND

  • \(\mathsf {msg}_1 = \mathsf {MSG}_{1}(\mathsf {inp}; \mathsf {r})\) AND

  • \(\mathsf {msg}_2 = \mathsf {MSG}_{2}(\mathsf {inp}, \tau ; \mathsf {r})\)

That is, the messages \((\mathsf {c}_1,\hat{\mathsf {c}}_1, \mathsf {c}_2)\) form a non-malleable commitment to \((\mathsf {inp},\mathsf {r})\), and \(\mathsf {msg}_1\) and \(\mathsf {msg}_2\) are the first and second round messages of the protocol \(\mathsf {\pi ^{SM}}\) generated using input \(\mathsf {inp}\) and randomness \(\mathsf {r}\), where \(\tau \) is the protocol transcript so far.

NP language \(L_1\) is characterized by the following relation \(R_1\).

Statement : \(\mathsf {st}= (\mathsf {c}_1, \hat{\mathsf {c}}_1, \mathsf {c}_2, \mathsf {msg}_3, \tau )\)

Witness : \(\mathsf {w}=(\mathsf {inp}, \mathsf {r}, \mathsf {r}_\mathsf {c})\)

\(R_1(\mathsf {st},\mathsf {w})=1\) if and only if :

  • \(\hat{\mathsf {c}}_1 = \mathsf {{NMCom}^S_{1}}(\mathsf {inp},\mathsf {r};\mathsf {r}_\mathsf {c})\) AND

  • \(\mathsf {c}_2 = \mathsf {NMCom}_{2}^S(\mathsf {inp},\mathsf {r}, \mathsf {c}_1 ;\mathsf {r}_\mathsf {c})\) AND

  • \(\mathsf {msg}_3 = \mathsf {MSG}_{3}(\mathsf {inp}, \tau ;\mathsf {r})\)

That is, the messages \((\mathsf {c}_1,\hat{\mathsf {c}}_1, \mathsf {c}_2)\) form a non-malleable commitment to \((\mathsf {inp},\mathsf {r})\), and \(\mathsf {msg}_3\) is the third round message of the protocol \(\mathsf {\pi ^{SM}}\) generated using input \(\mathsf {inp}\) and randomness \(\mathsf {r}\), where \(\tau \) is the protocol transcript so far.

In the protocol, let’s assume that every party has an associated identity \(\mathsf {id}\). For any session \(\mathsf {sid}\), each party generates its non-malleable commitment using the tag \((\mathsf {id}|| \mathsf {sid})\).

Fig. 2. 3 round MPC Protocol \(\pi \) for functionality f.

The correctness of the protocol follows from the correctness of the protocol \(\mathsf {\pi ^{SM}}\), the non-malleable commitment scheme \(\mathsf {NMCom}\) and the zero knowledge proof system \(\mathsf {SPSS.ZK}\).

4.3 Security Proof

In this section, we formally prove Theorem 1.

Consider an adversary \({\mathcal A} \) who corrupts t parties where \(t<n\). For each party \(\mathsf {P}_i\), let’s say that the size of input and randomness used in the protocol \(\mathsf {\pi ^{SM}}\) is \(p(\lambda )\) for some polynomial p. That is, \(|({\mathsf x} _i,\mathsf {r}_i)| = p(\lambda )\). The strategy of the simulator \(\mathsf {Sim}\) against a malicious adversary \({\mathcal A} \) is described in Fig. 3.

Fig. 3. Simulation strategy in the 3 round protocol

Here, in the simulation, we crucially use the two assumptions about the protocol structure. The first is easy to notice, since the simulator \(\mathsf {Sim}\) has to run the simulator of the semi-malicious protocol to produce the first and second round messages before it has extracted the adversary’s input and randomness. For the second assumption, observe that in order to run the simulator algorithm \({\mathcal S} _3\), \(\mathsf {Sim}\) has to feed it the entire input and randomness of the adversary, and so these must be fixed by the end of the second round.

We now show that the simulation strategy described in Fig. 3 is successful against all malicious PPT adversaries. That is, the view of the adversary along with the output of the honest parties is computationally indistinguishable in the real and ideal worlds. We will show this via a series of computationally indistinguishable hybrids, where the first hybrid \(\mathsf {Hyb}_1\) corresponds to the real world and the last hybrid \(\mathsf {Hyb}_5\) corresponds to the ideal world.

  1. \(\mathsf {Hyb}_1\): In this hybrid, consider a simulator \(\mathsf {Sim_{Hyb}}\) that plays the role of the honest parties. \(\mathsf {Sim_{Hyb}}\) runs in polynomial time.

  2. \(\mathsf {Hyb}_2\): In this hybrid, the simulator \(\mathsf {Sim_{Hyb}}\) also runs the “Input Extraction” phase and the “Special Abort” phase in Steps 3 and 5 in Fig. 3. \(\mathsf {Sim_{Hyb}}\) runs in time \(\mathsf {T^{Brk}_{Com}}\).

  3. \(\mathsf {Hyb}_3\): This hybrid is identical to the previous hybrid except that in Rounds 2 and 3, \(\mathsf {Sim_{Hyb}}\) now computes simulated \(\mathsf {SPSS.ZK}\) proofs as done in Round 2 in Fig. 3. Once again, \(\mathsf {Sim_{Hyb}}\) runs in time \(\mathsf {T^{Brk}_{Com}}\).

  4. \(\mathsf {Hyb}_4\): This hybrid is identical to the previous hybrid except that \(\mathsf {Sim_{Hyb}}\) now computes all the \((\hat{\mathsf {c}}^j_{1,i},\mathsf {c}^j_{2,i})\) as non-malleable commitments of \(0^{p(\lambda )}\) as done in Round 2 in Fig. 3. Once again, \(\mathsf {Sim_{Hyb}}\) runs in time \(\mathsf {T^{Brk}_{Com}}\).

  5. \(\mathsf {Hyb}_5\): This hybrid is identical to the previous hybrid except that in Round 3, \(\mathsf {Sim_{Hyb}}\) now computes the messages of the protocol \(\mathsf {\pi ^{SM}}\) using the simulator algorithms \({\mathcal S} = ({\mathcal S} _1,{\mathcal S} _2,{\mathcal S} _3)\) as done by \(\mathsf {Sim}\) in the ideal world. \(\mathsf {Sim_{Hyb}}\) also instructs the ideal functionality to deliver outputs to the honest parties as done by \(\mathsf {Sim}\). This hybrid is now the same as the ideal world. Once again, \(\mathsf {Sim_{Hyb}}\) runs in time \(\mathsf {T^{Brk}_{Com}}\).

We now show that every pair of successive hybrids is computationally indistinguishable.

Lemma 1

Assuming soundness of the \(\mathsf {SPSS.ZK}\) argument system, binding of the non-malleable commitment scheme and correctness of the protocol \(\mathsf {\pi ^{SM}}\), \(\mathsf {Hyb}_1\) is computationally indistinguishable from \(\mathsf {Hyb}_2\).

Proof

The only difference between the two hybrids is that in \(\mathsf {Hyb}_2\), \(\mathsf {Sim_{Hyb}}\) may output “Special Abort” which doesn’t happen in \(\mathsf {Hyb}_1\). More specifically, in \(\mathsf {Hyb}_2\), “Special Abort” occurs if event \(\mathsf {E}\) described below is true.

Event \(\mathsf {E}\) is true if, for some malicious party \(\mathsf {P}_j\):

  • All the \(\mathsf {SPSS.ZK}\) proofs sent by \(\mathsf {P}_j\) in rounds 2 and 3 verify correctly. (AND)

  • Either of the following occur:

    • The set of values \(\{({\mathsf x} ^i_j,\mathsf {r}^i_j)\}\) that are committed to using the non-malleable commitment is not the same for every i such that \(\mathsf {P}_i\) is honest. (OR)

    • \(\mathsf {msg}_{1,j} \ne \mathsf {MSG}_{1}({\mathsf x} _j, \mathsf {r}_j)\) (OR)

    • \(\mathsf {msg}_{2,j} \ne \mathsf {MSG}_{2}({\mathsf x} _j, \mathsf {r}_j, \tau _1)\) where \(\tau _1\) is the protocol transcript after round 1. (OR)

    • \(\mathsf {msg}_{3,j} \ne \mathsf {MSG}_{3}({\mathsf x} _j,\mathsf {r}_j,\tau _2)\) where \(\tau _2\) is the protocol transcript after round 2.

That is, in simpler terms, event \(\mathsf {E}\) occurs if some malicious party gives valid ZK proofs in rounds 2 and 3 but its protocol transcript is not consistent with the values it committed to.

Therefore, in order to prove the indistinguishability of the two hybrids, it is enough to prove the lemma below.

Sub-Lemma 1

\(\mathsf {Pr} [ \mathsf {Event} \ \mathsf {E}\ \mathsf {is \ true \ in \ } \mathsf {Hyb}_2] = \mathsf {negl} ({\lambda })\).

Proof

We now prove the sub-lemma. Suppose the event \(\mathsf {E}\) does occur. From the binding property of the commitment scheme and the correctness of the protocol \(\mathsf {\pi ^{SM}}\), observe that if any of the above conditions is true, there exist indices i, j such that the statement \(\mathsf {st}^i_{2,j} = (\mathsf {c}^j_{1,i}, \mathsf {c}^i_{2,j}, \mathsf {msg}_{1,j}, \mathsf {msg}_{2,j}, \tau _1) \notin L\), where \(\mathsf {P}_i\) is honest and \(\mathsf {P}_j\) is malicious. However, the proof for this statement verified correctly, which means that the adversary has produced a valid proof for a false statement. This violates the soundness property of the \(\mathsf {SPSS.ZK}\) argument system, which is a contradiction.

Lemma 2

Assuming the zero knowledge property of the \(\mathsf {SPSS.ZK}\) argument system, \(\mathsf {Hyb}_2\) is computationally indistinguishable from \(\mathsf {Hyb}_3\).

Proof

The only difference between the two hybrids is that in \(\mathsf {Hyb}_2\), \(\mathsf {Sim_{Hyb}}\) computes the proofs in Rounds 2 and 3 honestly, by running the algorithm \(\mathsf {ZK}_{2}\) of the \(\mathsf {SPSS.ZK}\) argument system, whereas in \(\mathsf {Hyb}_3\), a simulated proof is used. If the adversary \({\mathcal A} \) can distinguish between the two hybrids, we can use \({\mathcal A} \) to design an algorithm \({\mathcal A} _\mathsf {ZK}\) that breaks the zero knowledge property of the argument system.

Suppose the adversary can distinguish between the two hybrids with non-negligible probability p. Then, by a simple hybrid argument, there exist hybrids \(\mathsf {Hyb}_{2,k}\) and \(\mathsf {Hyb}_{2,k+1}\) that the adversary can distinguish with non-negligible probability \(p'<p\) such that: the only difference between the two hybrids is in the proof sent by an honest party \(\mathsf {P}_i\) to a (malicious) party \(\mathsf {P}_j\) in one of the rounds. Let’s say it is the proof in round 2.

\({\mathcal A} _\mathsf {ZK}\) performs the role of \(\mathsf {Sim_{Hyb}}\) in its interaction with \({\mathcal A} \) and performs all the steps exactly as in \(\mathsf {Hyb}_{2,k}\) except for the proof in Round 2 sent by \(\mathsf {P}_i\) to \(\mathsf {P}_j\). It interacts with a challenger \({\mathcal C} \) of the \(\mathsf {SPSS.ZK}\) argument system and sends the first round message \(\mathsf {ver}^i_{1,j}\) it received from the adversary. \({\mathcal A} _\mathsf {ZK}\) receives from \({\mathcal C} \) a proof that is either honestly computed or simulated. \({\mathcal A} _\mathsf {ZK}\) sets this received proof as its message \(\mathsf {prove}^j_{i,2}\) in Round 2 of its interaction with \({\mathcal A} \). In the former case, this exactly corresponds to \(\mathsf {Hyb}_{2,k}\), while in the latter it exactly corresponds to \(\mathsf {Hyb}_{2,k+1}\). Therefore, if \({\mathcal A} \) can distinguish between the two hybrids, \({\mathcal A} _\mathsf {ZK}\) can use the same distinguishing guess to distinguish the proofs: i.e., decide whether the proofs received from \({\mathcal C} \) were honest or simulated. Now, notice that \({\mathcal A} _\mathsf {ZK}\) runs only in time \(\mathsf {T^{Brk}_{Com}}\) (during the input extraction phase), while the \(\mathsf {SPSS.ZK}\) system is secure against adversaries running in time \(\mathsf {T_{ZK}}\). Since \(\mathsf {T^{Brk}_{Com}}<\mathsf {T_{ZK}}\), this is a contradiction and hence proves the lemma.
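A minimal sketch of this reduction is given below; the interfaces of the adversary \({\mathcal A} \), the challenger \({\mathcal C} \) and the hybrid experiment are hypothetical, and the code only illustrates the structure of the argument, not its formal details.

```python
def zk_distinguisher(A, C, i_star, j_star, run_hybrid_2k):
    """Sketch of A_ZK for Lemma 2 (hypothetical interfaces).

    A               -- the MPC adversary, driven as a black box
    C               -- SPSS.ZK challenger returning either an honest or a simulated proof
    i_star, j_star  -- the honest prover / malicious verifier pair where the hybrids differ
    run_hybrid_2k   -- runs the experiment exactly as Sim_Hyb in Hyb_{2,k}, except that the
                       round-2 proof from P_{i_star} to P_{j_star} is supplied externally
    """
    # Forward the verifier's first-round message ver^{i_star}_{1, j_star} to the challenger ...
    ver_msg = A.first_round_verifier_message(i_star, j_star)
    challenge_proof = C.prove(ver_msg)        # honest or simulated; A_ZK does not know which

    # ... and embed the challenger's proof in place of P_{i_star}'s round-2 proof.
    view = run_hybrid_2k(A, injected_proof=(i_star, j_star, challenge_proof))

    # If A distinguishes Hyb_{2,k} from Hyb_{2,k+1}, its guess tells honest from simulated.
    return A.distinguishing_guess(view)
```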

In particular, this also means the following: \(\mathsf {Pr} [ \mathsf {Event} \ \mathsf {E}\ \mathsf {is \ true \ in \ } \mathsf {Hyb}_3] = \mathsf {negl} ({\lambda })\).

Lemma 3

Assuming the non-malleability property of the non-malleable commitment scheme \(\mathsf {NMCom}\), \(\mathsf {Hyb}_3\) is computationally indistinguishable from \(\mathsf {Hyb}_4\).

Proof

We will prove this using a series of computationally indistinguishable intermediate hybrids as follows.

  • \(\mathsf {Hyb}_{3,1}\) : This is the same as \(\mathsf {Hyb}_3\) except that the simulator \(\mathsf {Sim_{Hyb}}\) does not run the input extraction phase, apart from verifying the \(\mathsf {SPSS.ZK}\) proofs. Also, \(\mathsf {Sim_{Hyb}}\) does not run the special abort phase. In particular, the \(\mathsf {Ext.Com}\) algorithm is not run and there is no “Special Abort”. In this hybrid, \(\mathsf {Sim_{Hyb}}\) runs in time \(\mathsf {T^{Sim}_{ZK}}\), which is less than \(\mathsf {T^{Brk}_{Com}}\).

  • \(\mathsf {Hyb}_{3,2}\) : This hybrid is identical to the previous hybrid except that in Round 2, \(\mathsf {Sim_{Hyb}}\) now computes all the messages \((\hat{\mathsf {c}}^j_{1,i},\mathsf {c}^j_{2,i})\) as non-malleable commitments of \(0^{p(\lambda )}\) as done by \(\mathsf {Sim}\) in the ideal world. In this hybrid too, \(\mathsf {Sim_{Hyb}}\) runs in time \(\mathsf {T^{Sim}_{ZK}}\).

  • \(\mathsf {Hyb}_{3,3}\) : This is the same as \(\mathsf {Hyb}_{3,2}\) except that the simulator now also runs the input extraction phase and the special abort phase. It is easy to see that \(\mathsf {Hyb}_{3,3}\) is the same as \(\mathsf {Hyb}_4\). In this hybrid, \(\mathsf {Sim_{Hyb}}\) runs in time \(\mathsf {T^{Brk}_{Com}}\), which is greater than \(\mathsf {T^{Sim}_{ZK}}\).

We now prove the indistinguishability of these intermediate hybrids, which completes the proof of the lemma.

Sub-Lemma 2

\(\mathsf {Hyb}_3\) is statistically indistinguishable from \(\mathsf {Hyb}_{3,1}\).

Proof

The only difference between the two hybrids is that in \(\mathsf {Hyb}_3\), the simulator might output “Special Abort”, which doesn’t happen in \(\mathsf {Hyb}_{3,1}\). As noted after the proof of Lemma 2, the probability that Event E occurs in \(\mathsf {Hyb}_3\) is negligible. This means that the probability that the simulator outputs “Special Abort” in \(\mathsf {Hyb}_{3}\) is negligible, and this completes the proof.

Sub-Lemma 3

Assuming the non-malleability property of the non-malleable commitment scheme \(\mathsf {NMCom}\), \(\mathsf {Hyb}_{3,1}\) is computationally indistinguishable from \(\mathsf {Hyb}_{3,2}\).

Proof

The only difference between the two hybrids is that in \(\mathsf {Hyb}_{3,1}\), for every honest party \(\mathsf {P}_i\), \(\mathsf {Sim_{Hyb}}\) computes the commitment messages \((\hat{\mathsf {c}}^j_{1,i},\mathsf {c}^j_{2,i})\) as a commitment of \(({\mathsf x} _i,\mathsf {r}_i)\), whereas in \(\mathsf {Hyb}_{3,2}\), they are computed as a commitment of \((0^{p(\lambda )})\). If the adversary \({\mathcal A} \) can distinguish between the two hybrids, we can use \({\mathcal A} \) to design an algorithm \({\mathcal A} _\mathsf {NMC}\) that breaks the security of the non-malleable commitment scheme \(\mathsf {NMCom}\). We defer the details about the reduction to the full version.

Sub-Lemma 4

\(\mathsf {Hyb}_{3,2}\) is statistically indistinguishable from \(\mathsf {Hyb}_{3,3}\).

Proof

The only difference between the two hybrids is that in \(\mathsf {Hyb}_{3,3}\), the simulator might output “Special Abort” which doesn’t happen in \(\mathsf {Hyb}_{3,2}\). As shown in the proof of Sub-Lemma 3, the probability that Event E occurs in \(\mathsf {Hyb}_{3,2}\) is negligible. This means that the probability that the simulator outputs “Special Abort” in \(\mathsf {Hyb}_{3,3}\) is negligible and this completes the proof.

Lemma 4

Assuming the security of the protocol \(\mathsf {\pi ^{SM}}\), \(\mathsf {Hyb}_4\) is computationally indistinguishable from \(\mathsf {Hyb}_5\).

Proof

The only difference between the two hybrids is that in \(\mathsf {Hyb}_4\), \(\mathsf {Sim_{Hyb}}\) computes the messages of protocol \(\mathsf {\pi ^{SM}}\) correctly using the honest parties’ inputs, whereas in \(\mathsf {Hyb}_5\), they are computed by running the simulator \({\mathcal S} \) for protocol \(\mathsf {\pi ^{SM}}\). If the adversary \({\mathcal A} \) can distinguish between the two hybrids, we can use \({\mathcal A} \) to design an algorithm \({\mathcal A} _\mathsf {SM}\) that breaks the security of protocol \(\mathsf {\pi ^{SM}}\). We defer the details about the reduction to the full version.

5 Two Round Malicious Secure MPC for Input-Less Functionalities

Let f be any input-less randomized functionality. Consider n parties \(\mathsf {P}_1,\ldots ,\mathsf {P}_n\) who wish to compute f by running a secure multiparty computation (MPC) protocol. Let \(\pi ^{SM}\) be any 2 round MPC protocol for f in the plain model that is secure against semi-malicious adversaries corrupting up to \((n-1)\) parties (such a protocol for general functionalities was described in [7]). In this section, we show how to generically transform \(\pi ^{SM}\) into a 2 round protocol \(\pi _1\) without setup, with super-polynomial simulation, secure against malicious adversaries that can corrupt up to \((n-1)\) parties. Formally, we prove the following theorem:

Theorem 2

Assuming sub-exponentially secure:

  • A, where A \( \in \{DDH, \ Quadratic \ Residuosity, \ N^{th} \ Residuosity\}\) AND

  • 2 round MPC protocol for any functionality f that is secure against semi-malicious adversaries,

the protocol presented in Fig. 4 is a 2 round MPC protocol for any input-less randomized functionality f, in the plain model with super-polynomial simulation.

We can instantiate the underlying MPC protocol with the 2 round construction of [11] to get the following corollary:

Corollary 2

Assuming sub-exponentially secure:

  • A, where A \( \in \{DDH, \ Quadratic \ Residuosity, \ N^{th} \ Residuosity\}\) AND

  • Indistinguishability Obfuscation,

the protocol presented in Fig. 4 is a 2 round MPC protocol for any input-less randomized functionality f in the plain model with super-polynomial simulation.

5.1 High-Level Overview

To help the exposition, we first give a brief overview of the construction in this subsection before describing the protocol formally.

Consider n parties \(\mathsf {P}_1,\ldots ,\mathsf {P}_n\) with no inputs who wish to run a secure MPC to compute an input-less randomized function f. Initially, each party \(\mathsf {P}_i\) picks some randomness \(\mathsf {r}_i\) that it will use to run the semi-malicious protocol \(\mathsf {\pi ^{SM}}\) for the same functionality f.

In the first round, each party \(\mathsf {P}_i\) sends the first round message of the protocol \(\mathsf {\pi ^{SM}}\). Then, with every other party \(\mathsf {P}_j\), \(\mathsf {P}_i\) initiates an execution of the \(\mathsf {SPSS.ZK}\) argument system playing the verifier’s role. Additionally, \(\mathsf {P}_i\) and \(\mathsf {P}_j\) also initiate two executions of a non-malleable commitment scheme - each acting as the committer in one of them. \(\mathsf {P}_i\) commits to the randomness \(\mathsf {r}_i\) used in the protocol \(\mathsf {\pi ^{SM}}\).

In the second round, each party \(\mathsf {P}_i\) sends the second round message of the protocol \(\mathsf {\pi ^{SM}}\) using randomness \(\mathsf {r}_i\). Then, \(\mathsf {P}_i\) finishes executing the non-malleable commitments (playing the committer’s role) with every other party \(\mathsf {P}_j\), committing to \(\mathsf {r}_i\). Finally, with every other party \(\mathsf {P}_j\), \(\mathsf {P}_i\) completes the execution of the \(\mathsf {SPSS.ZK}\) argument by sending its second message - \(\mathsf {P}_i\) proves that the two messages sent so far using the protocol \(\mathsf {\pi ^{SM}}\) were correctly generated using the randomness \(\mathsf {r}_i\) committed to using the non-malleable commitment.

Each party \(\mathsf {P}_i\) now computes its final output as follows. \(\mathsf {P}_i\) first verifies all the proofs it received in the last round and sends a global abort (asking all the parties to abort) if any proof does not verify. Then, \(\mathsf {P}_i\) computes the output using the output computation algorithm of the semi-malicious protocol \(\mathsf {\pi ^{SM}}\). This completes the protocol description.
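The following sketch summarizes \(\mathsf {P}_i\)’s steps in the two rounds. The interfaces (MSG1, MSG2, OUT, ZK, NMCom, broadcast, receive) are hypothetical stand-ins for the primitives listed in Sect. 5.2, and the message layout is purely illustrative.

```python
import os

def run_party(i, parties, MSG1, MSG2, OUT, ZK, NMCom, broadcast, receive):
    """Sketch of P_i's steps in the 2-round protocol (hypothetical interfaces).

    MSG1, MSG2, OUT     -- next-message / output algorithms of the semi-malicious pi^SM
    ZK                  -- SPSS.ZK with ver_msg / prove / verify
    NMCom               -- non-malleable commitment with recv_msg / commit1 / commit2
    broadcast / receive -- stand-ins for the broadcast channel
    """
    others = [j for j in parties if j != i]
    r_i = os.urandom(32)                               # randomness for pi^SM

    # Round 1: first pi^SM message; initiate SPSS.ZK (as verifier) and NMCom with each P_j.
    msg1 = MSG1(None, r_i)                             # input-less functionality: input is bottom
    broadcast({"msg1": msg1,
               "zk_ver": {j: ZK.ver_msg(i, j) for j in others},
               "nmcom_recv": {j: NMCom.recv_msg(i, j) for j in others},
               "nmcom_1": {j: NMCom.commit1(r_i, tag=(i, j)) for j in others}})
    tau1 = receive(round=1)                            # transcript after round 1

    # Round 2: second pi^SM message; finish the commitments to r_i; prove consistency
    # (the proof uses P_j's round-1 verifier message, contained in tau1).
    msg2 = MSG2(None, tau1, r_i)
    broadcast({"msg2": msg2,
               "nmcom_2": {j: NMCom.commit2(r_i, tau1, tag=(i, j)) for j in others},
               "proof": {j: ZK.prove((msg1, msg2, tau1), witness=r_i, to=j) for j in others}})
    tau2 = receive(round=2)                            # transcript after round 2

    # Output: global abort if any proof fails; otherwise run pi^SM's output algorithm.
    if not all(ZK.verify(proof_from=j, transcript=tau2) for j in others):
        return "abort"
    return OUT(None, r_i, tau2)
```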

Security Proof: We now briefly describe how the security proof works. Let’s consider an adversary \({\mathcal A} \) who corrupts a set of parties. Recall that the goal is to move from the real world to the ideal world such that the joint distribution of the honest parties’ outputs and the adversary’s view is indistinguishable. We do this via a sequence of computationally indistinguishable hybrids.

In the first hybrid \(\mathsf {Hyb}_1\), we start with the real world.

Then, in \(\mathsf {Hyb}_2\), we switch the \(\mathsf {SPSS.ZK}\) proofs used by all honest parties in round 2 to simulated proofs. This hybrid is computationally indistinguishable from the previous hybrid by the security of the \(\mathsf {SPSS.ZK}\) system.

In \(\mathsf {Hyb}_3\), we switch all the non-malleable commitments sent by honest parties to be commitments of 0 rather than the randomness. Recall that since the proofs are already simulated, they remain valid even though the corresponding statements are no longer true. Also, this hybrid is computationally indistinguishable from the previous hybrid by the security of the non-malleable commitment scheme.

Then, in \(\mathsf {Hyb}_4\), the simulator extracts the adversary’s randomness (used in protocol \(\mathsf {\pi ^{SM}}\)) by a brute force break of the non-malleable commitment. The simulator aborts if the extracted values don’t reconstruct the protocol messages correctly. These two hybrids are indistinguishable because, by the soundness of the proof system, the extraction succeeds except with negligible probability. One technicality here is that since we are giving simulated proofs at this point, we cannot rely on soundness anymore. To get around this, from the very first hybrid, we maintain the invariant that in every hybrid, the value committed by the adversary using the non-malleable commitments can be used to reconstruct the messages used in the semi-malicious protocol. Therefore, at this point, as in Sect. 4, we need the time taken to break the non-malleable commitment scheme \(\mathsf {T^{Brk}_{Com}}\) to be less than the time against which the zero knowledge property holds - \(\mathsf {T_{ZK}}\). We elaborate on this in the formal proof.

Then, in \(\mathsf {Hyb}_5\) we run the simulator of \(\mathsf {\pi ^{SM}}\) using the extracted values to generate the protocol messages. This hybrid is indistinguishable from the previous one by the security of \(\mathsf {\pi ^{SM}}\). Once again, in order to ensure correctness of the extracted values, we require the running time of the simulator - which is \(\mathsf {T^{Brk}_{Com}}\) - to be less than the time against which the semi-malicious protocol \(\mathsf {\pi ^{SM}}\) is secure. This is because the reduction to the security of \(\mathsf {\pi ^{SM}}\) must itself extract the adversary’s randomness for the protocol \(\mathsf {\pi ^{SM}}\), and hence runs in time \(\mathsf {T^{Brk}_{Com}}\), against which the semi-malicious protocol must remain secure.
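The extraction-and-check step underlying \(\mathsf {Hyb}_4\) can be sketched as follows. The interfaces are hypothetical: ext_com stands in for \(\mathsf {Ext.Com}\) (and runs in time \(\mathsf {T^{Brk}_{Com}}\)), verify_zk for the \(\mathsf {SPSS.ZK}\) verifier, and MSG1/MSG2 for the next-message functions of \(\mathsf {\pi ^{SM}}\); the \(\mathsf {correct}\) flag corresponds to the variable used in the formal hybrids of Sect. 5.3.

```python
def randomness_extraction_phase(malicious, honest, transcript,
                                ext_com, verify_zk, MSG1, MSG2):
    """Sketch of the "Randomness Extraction" / "Special Abort" check (hypothetical interfaces)."""
    extracted, correct = {}, 1
    for j in malicious:
        # Parties whose round-2 proofs do not verify cause a regular abort, handled elsewhere.
        if not all(verify_zk(i, j) for i in honest):
            continue
        openings = {ext_com(i, j) for i in honest}         # brute-force, time T^Brk_Com
        r_j = next(iter(openings))
        tau1 = transcript["after_round_1"]
        consistent = (len(openings) == 1 and
                      transcript["msg1"][j] == MSG1(None, r_j) and
                      transcript["msg2"][j] == MSG2(None, tau1, r_j))
        if not consistent:
            correct = 0                                     # triggers "Special Abort"
        else:
            extracted[j] = r_j
    # If correct == 1, the extracted randomness can safely be handed to pi^SM's simulator.
    return correct, extracted
```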

Finally, \(\mathsf {Hyb}_5\) corresponds to the ideal world. Notice that our simulation is in fact straight-line. There are some slight technicalities that arise and we elaborate on this in the formal proof. We now refer the reader to the formal protocol construction.

5.2 Construction

As in Sect. 4, we first list some notation and the primitives used before describing the construction.

Notation:

  • \({\lambda } \) denotes the security parameter.

  • \(\mathsf {SPSS.ZK}= (\mathsf {ZK}_{1},\mathsf {ZK}_{2},\mathsf {ZK}_{3})\) is a two message zero knowledge argument with super polynomial strong simulation (SPSS-ZK). The zero knowledge property holds against all adversaries running in time \(\mathsf {T_{ZK}}\). Let \(\mathsf {Sim^{ZK}}\) denote the simulator that produces simulated ZK proofs and let \(\mathsf {T^{Sim}_{ZK}}\) denote its running time. [25] give a construction of an \(\mathsf {SPSS.ZK}\) scheme satisfying these properties that can be based on one of the following sub-exponential assumptions: (1) DDH; (2) Quadratic Residuosity; (3) \(N^{th}\) Residuosity.

  • \(\mathsf {NMCom}= (\mathsf {NMCom}_{1}^R,\mathsf {{NMCom}^S_{1}}, \mathsf {NMCom}_{2}^S)\) is a two message concurrent non-malleable commitment scheme with respect to commitment in the simultaneous message model. Here, \(\mathsf {NMCom}_{1}^R,\mathsf {{NMCom}^S_{1}}\) denote the first message of the receiver and sender respectively while \(\mathsf {NMCom}_{2}^S\) denotes the second message of the sender. It is secure against all adversaries running in time \(\mathsf {T^{Sec}_{Com}}\), but can be broken by adversaries running in time \(\mathsf {T^{Brk}_{Com}}\). Let \(\mathsf {Ext.Com}\) denote a brute force algorithm running in time \(\mathsf {T^{Brk}_{Com}}\) that can break the commitment scheme just using the first round messages. [25] give a construction of an \(\mathsf {NMCom}\) scheme satisfying these properties that can be based on one of the following sub-exponential assumptions: (1) DDH; (2) Quadratic Residuosity; (3) \(N^{th}\) Residuosity.

  • \(\mathsf {\pi ^{SM}}\) is a sub-exponentially secure 2 round MPC protocol that is secure against semi-malicious adversaries. This protocol is secure against all adversaries running in time \(\mathsf {T_{SM}}\). Let \((\mathsf {MSG}_{1},\mathsf {MSG}_{2})\) denote the algorithms used by any party to compute the messages in each of the two rounds and \(\mathsf {OUT}\) denote the algorithm to compute the final output. Further, let’s assume that this protocol \(\mathsf {\pi ^{SM}}\) runs over a broadcast channel. Let \({\mathcal S} = ({\mathcal S} _1,{\mathcal S} _2)\) denote the simulator for the protocol \(\mathsf {\pi ^{SM}}\) - that is, \({\mathcal S} _i\) is the simulator’s algorithm to compute the \(i^{th}\) round messages. Also, we make the following assumptions about the protocol structure that are satisfied by the instantiations:

    1. Since the protocol is for input-less functionalities, we assume that \({\mathcal S} _1\) is identical to the algorithm \(\mathsf {MSG}_{1}\) used by honest parties to generate their first message.

    2. The algorithm \(\mathsf {MSG}_{2}\) doesn’t use any new randomness that was not already used in the algorithm \(\mathsf {MSG}_{1}\). This is similar to the assumption used in Sect. 4.

In order to realize our protocol, we require that \(\mathsf {poly} (\lambda )< \mathsf {T^{Sim}_{ZK}}< \mathsf {T^{Sec}_{Com}}< \mathsf {T^{Brk}_{Com}}< \mathsf {T_{ZK}}, \mathsf {T_{SM}}\).
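For intuition, one illustrative (and purely assumed) way to realize this hierarchy is to pick constants \(0< c_1< c_2< c_3< c_4 < 1\) and instantiate the primitives so that

\[ \mathsf {T^{Sim}_{ZK}} = 2^{\lambda ^{c_1}}, \quad \mathsf {T^{Sec}_{Com}} = 2^{\lambda ^{c_2}}, \quad \mathsf {T^{Brk}_{Com}} = 2^{\lambda ^{c_3}}, \quad \mathsf {T_{ZK}} = \mathsf {T_{SM}} = 2^{\lambda ^{c_4}}, \]

assuming each primitive can indeed be instantiated at the corresponding security level, which is what the sub-exponential assumptions in Theorem 2 are meant to provide; the concrete exponents are not fixed by the construction.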

The construction of the protocol is described in Fig. 4. We assume broadcast channels. In our construction, we use proofs for an NP language that we elaborate on below.

NP language L is characterized by the following relation R.

Statement : \(\mathsf {st}= (\mathsf {c}_1, \hat{\mathsf {c}}_1, \mathsf {c}_2, \mathsf {msg}_1, \mathsf {msg}_2, \tau )\)

Witness : \(\mathsf {w}=(\mathsf {r}, \mathsf {r}_\mathsf {c})\)

\(R(\mathsf {st},\mathsf {w})=1\) if and only if :

  • \(\hat{\mathsf {c}}_1 = \mathsf {{NMCom}^S_{1}}(\mathsf {r};\mathsf {r}_\mathsf {c})\) AND

  • \(\mathsf {c}_2 = \mathsf {NMCom}_{2}^S(\mathsf {r}, \mathsf {c}_1 ;\mathsf {r}_\mathsf {c})\) AND

  • \(\mathsf {msg}_1 = \mathsf {MSG}_{1}(\bot ; \mathsf {r})\) AND

  • \(\mathsf {msg}_2 = \mathsf {MSG}_{2}(\bot , \tau ; \mathsf {r})\)

That is, the messages \((\mathsf {c}_1,\hat{\mathsf {c}}_1, \mathsf {c}_2)\) form a non-malleable commitment of \(\mathsf {r}\) such that \(\mathsf {msg}_1\) and \(\mathsf {msg}_2\) are the first and second round messages generated by running the protocol \(\mathsf {\pi ^{SM}}\) with input \(\bot \) and randomness \(\mathsf {r}\), where the protocol transcript so far is \(\tau \).
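In other words, membership in L can be checked as in the following sketch, where NMCom1_S, NMCom2_S, MSG1 and MSG2 are hypothetical stand-ins for the corresponding algorithms of \(\mathsf {NMCom}\) and \(\mathsf {\pi ^{SM}}\).

```python
def relation_R(st, w, NMCom1_S, NMCom2_S, MSG1, MSG2):
    """Sketch of the relation R for language L (hypothetical interfaces).

    st = (c1, c1_hat, c2, msg1, msg2, tau): the receiver's first NMCom message, the
    sender's two NMCom messages, the prover's two pi^SM messages and the transcript.
    w  = (r, r_c): the pi^SM randomness and the commitment randomness.
    """
    c1, c1_hat, c2, msg1, msg2, tau = st
    r, r_c = w
    return (c1_hat == NMCom1_S(r, rand=r_c) and       # first sender message commits to r
            c2 == NMCom2_S(r, c1, rand=r_c) and       # second sender message, w.r.t. c1
            msg1 == MSG1(None, rand=r) and            # input-less: input is bottom
            msg2 == MSG2(None, tau, rand=r))          # consistent with the transcript tau
```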

In the protocol, let’s assume that every party has an associated identity \(\mathsf {id}\). For any session \(\mathsf {sid}\), each party generates its non-malleable commitment using the tag \((\mathsf {id}|| \mathsf {sid})\).

Fig. 4. 2 Round MPC protocol \(\pi _1\) for input-less randomized functionality f.

The correctness of the protocol follows from the correctness of the protocol \(\mathsf {\pi ^{SM}}\), the non-malleable commitment scheme \(\mathsf {NMCom}\) and the zero knowledge proof system \(\mathsf {SPSS.ZK}\).

5.3 Security Proof

In this section, we formally prove Theorem 2.

Consider an adversary \({\mathcal A} \) who corrupts t parties where \(t<n\). For each party \(\mathsf {P}_i\), let’s say that the size of randomness used in the protocol \(\mathsf {\pi ^{SM}}\) is \(p(\lambda )\) for some polynomial p. That is, \(|\mathsf {r}_i| = p(\lambda )\). The strategy of the simulator \(\mathsf {Sim}\) against a malicious adversary \({\mathcal A} \) is described in Fig. 5.

Here, notice that since there is no input, the simulator gets the output \(\mathsf {y}\) from the ideal functionality right at the beginning. It still has to instruct the functionality to deliver the output to the honest parties.

Fig. 5. Simulation strategy in the 2 round protocol.

We now show that the simulation strategy described in Fig. 5 is successful against all malicious PPT adversaries. That is, the view of the adversary along with the output of the honest parties is computationally indistinguishable in the real and ideal worlds. We will show this via a series of computationally indistinguishable hybrids where the first hybrid \(\mathsf {Hyb}_1\) corresponds to the real world and the last hybrid \(\mathsf {Hyb}_5\) corresponds to the ideal world.

We prove indistinguishability of these hybrids via similar reductions as those in Sect. 4. Please refer to the full version for these reductions.

  1. \(\mathsf {Hyb}_1\) : In this hybrid, consider a simulator \(\mathsf {Sim_{Hyb}}\) that plays the role of the honest parties. \(\mathsf {Sim_{Hyb}}\) runs in polynomial time.

  2. \(\mathsf {Hyb}_2\) : This hybrid is identical to the previous hybrid except that in Round 2, \(\mathsf {Sim_{Hyb}}\) now computes simulated \(\mathsf {SPSS.ZK}\) proofs as done in Round 2 in Fig. 5. Here, \(\mathsf {Sim_{Hyb}}\) runs in time \(\mathsf {T^{Sim}_{ZK}}\).

  3. \(\mathsf {Hyb}_3\) : This hybrid is identical to the previous hybrid except that \(\mathsf {Sim_{Hyb}}\) now computes all the \((\hat{\mathsf {c}}^j_{1,i},\mathsf {c}^j_{2,i})\) as non-malleable commitments of \(0^{p(\lambda )}\) as done in Round 2 in Fig. 5. Once again, \(\mathsf {Sim_{Hyb}}\) runs in time \(\mathsf {T^{Sim}_{ZK}}\).

  4. \(\mathsf {Hyb}_4\) : In this hybrid, the simulator \(\mathsf {Sim_{Hyb}}\) also runs the “Randomness Extraction” phase and the “Special Abort” phase in steps 2 and 4 in Fig. 5. Now, \(\mathsf {Sim_{Hyb}}\) runs in time \(\mathsf {T^{Brk}_{Com}}\).

  5. \(\mathsf {Hyb}_5\) : In this hybrid, if the value of the variable \(\mathsf {correct}= 1\), \(\mathsf {Sim_{Hyb}}\) now computes the second round message of the protocol \(\mathsf {\pi ^{SM}}\) using the simulator algorithm \({\mathcal S} _2\) as done by \(\mathsf {Sim}\) in the ideal world. \(\mathsf {Sim_{Hyb}}\) also instructs the ideal functionality to deliver outputs to the honest parties as done by \(\mathsf {Sim}\). This hybrid is now the same as the ideal world. Once again, \(\mathsf {Sim_{Hyb}}\) runs in time \(\mathsf {T^{Brk}_{Com}}\).

6 Three Round Concurrently Secure MPC

Let f be any functionality. Consider n parties \(\mathsf {P}_1,\ldots ,\mathsf {P}_n\) with inputs \({\mathsf x} _1,\ldots ,{\mathsf x} _n\) respectively who wish to compute f on their joint inputs by running a concurrently secure multiparty computation (MPC) protocol. Let \(\pi ^{SM}\) be any 3 round protocol that runs without any setup for the above task and is secure against adversaries that can be completely malicious in the first round, semi-malicious in the next two rounds, and can corrupt up to \((n-1)\) parties. In this section, we show how to generically transform \(\pi ^{SM}\) into a 3 round concurrently secure protocol \(\pi ^{\mathsf {Conc}}\) without setup, with super-polynomial simulation, that is secure against malicious adversaries which can corrupt up to \((n-1)\) parties. Formally, we prove the following theorem:

Theorem 3

Assuming sub-exponentially secure:

  • A, where A \( \in \{DDH, \ Quadratic \ Residuosity, \ N^{th} \ Residuosity\}\) AND

  • 3 round MPC protocol for any functionality f that is stand-alone secure against malicious adversaries in the first round and semi-malicious adversaries in the next two rounds,

the protocol presented in Fig. 2 is a 3 round concurrently secure MPC protocol without any setup with super-polynomial simulation for any functionality f, secure against malicious adversaries.

We can instantiate the underlying MPC protocol with the constructions of [7, 11] to get the following corollary:

Corollary 3

Assuming sub-exponentially secure:

  • A, where A \( \in \{DDH, \ Quadratic \ Residuosity, \ N^{th} \ Residuosity\}\) AND

  • B, where B \( \in \{LWE, \ Indistinguishability \ Obfuscation\}\),

the protocol presented in Fig. 2 is a 3 round concurrently secure MPC protocol without any setup with super-polynomial simulation for any functionality f, secure against malicious adversaries.

We essentially prove that the same protocol from Sect. 4 is also concurrently secure. The proof is fairly simple and not too different from the proof of stand-alone security, because the simulation strategy as well as all reductions are straight-line. The only use of rewinding occurs (implicitly) within the proof of non-malleability, which we carefully combine with identities to ensure that the protocol remains concurrently secure. For the sake of completeness, we write out the protocol and the proof in their entirety in the full version.

7 Two Round Concurrently Secure MPC for Input-Less Functionalities

Let f be any input-less randomized functionality. Consider n parties \(\mathsf {P}_1,\ldots ,\mathsf {P}_n\) who wish to compute f by running a concurrently secure multiparty computation (MPC) protocol. Let \(\pi ^{SM}\) be any 2 round protocol that runs without any setup for the above task and is secure against semi-malicious adversaries that can corrupt up to \((n-1)\) parties. In this section, we show how to generically transform \(\pi ^{SM}\) into a 2 round concurrently secure protocol \(\pi ^{\mathsf {Conc}}_1\) without setup, with super-polynomial simulation, secure against malicious adversaries that can corrupt up to \((n-1)\) parties. Formally, we prove the following theorem:

Theorem 4

Assuming sub-exponentially secure:

  • A, where A \( \in \{DDH, \ Quadratic \ Residuosity, \ N^{th} \ Residuosity\}\) AND

  • 2 round MPC protocol for any functionality f that is stand-alone secure against semi-malicious adversaries,

the protocol presented in Fig. 4 is a 2 round concurrently secure MPC protocol without any setup with super-polynomial simulation for any input-less randomized functionality f, secure against malicious adversaries.

We can instantiate the underlying MPC protocol with the 2 round construction of [11] to get the following corollary:

Corollary 4

Assuming sub-exponentially secure:

  • A, where A \( \in \{DDH, \ Quadratic \ Residuosity, \ N^{th} \ Residuosity\}\) AND

  • Indistinguishability Obfuscation,

the protocol presented in Fig. 4 is a 2 round concurrently secure MPC protocol without any setup with super-polynomial simulation for any input-less randomized functionality f.

We essentially prove that the same protocol from Sect. 5 is also concurrently secure. The proof is fairly simple and not too different from the proof of stand-alone security, because the simulation strategy as well as all reductions are straight-line. The only use of rewinding occurs (implicitly) within the proof of non-malleability, which we carefully combine with identities to ensure that the protocol remains concurrently secure. For the sake of completeness, we write out the protocol and the proof in their entirety in the full version.