On Cut-and-Choose Oblivious Transfer and Its Variants
Abstract
Motivated by the recent progress in improving the efficiency of secure computation, we study cut-and-choose oblivious transfer—a basic building block of state-of-the-art constant-round two-party secure computation protocols that was introduced by Lindell and Pinkas (TCC 2011). In particular, we study the question of realizing cut-and-choose oblivious transfer and its variants in the OT-hybrid model. Towards this, we provide new definitions of cut-and-choose oblivious transfer (and its variants) that suffice for its application in cut-and-choose techniques for garbled-circuit-based two-party protocols. Furthermore, our definitions conceptually simplify previous definitions, including those proposed by Lindell (Crypto 2013), Huang et al. (Crypto 2014), and Lindell and Riva (Crypto 2014). Our main result is an efficient realization (under our new definitions) of cut-and-choose OT and its variants with small concrete communication overhead in an OT-hybrid model. Among other things, this implies that we can base cut-and-choose OT and its variants on a variety of assumptions, including those that are believed to be resilient to quantum attacks. By contrast, previous constructions of cut-and-choose OT and its variants relied on DDH and could not take advantage of OT extension. Also, our new definitions lead us to more efficient constructions for multistage cut-and-choose OT—a variant proposed by Huang et al. (Crypto 2014) that is useful in the multiple-execution setting.
Keywords
Cut-and-choose oblivious transfer · OT extension · Concrete efficiency

1 Introduction
Secure two-party computation is rapidly moving from theory to practice. While the basic approach for semi-honest security, garbled circuits [33], is extensively studied and largely settled, security against malicious players has recently seen significant improvements. The main technique for securing garbled circuit protocols against malicious adversaries is cut-and-choose, formalized and proven secure by Lindell and Pinkas [23]. A line of work [11, 22, 23, 24, 26, 31] has focused on reducing the concrete overhead of the cut-and-choose approach: it is possible to guarantee probability of cheating \(\le 2^{-\sigma }\) using exactly \(\sigma \) garbled circuits.
The above works have been motivated by the impression that the major overhead of secure two-party computation arises from the generation, transmission, and evaluation of garbled circuits (especially for functions having large circuit size). Indeed, the work of Frederiksen and Nielsen [7] showed that the cost of circuit communication and computation for oblivious two-party AES is approximately \(80\,\%\) of the total cost; likewise, Kreuter et al. [19] showed that circuit generation and evaluation for large circuits takes \(99.999\,\%\) of the execution time.
Recent works of [10, 21] consider the multiple-execution setting, where two parties compute the same function on possibly different inputs, either in parallel or sequentially. These works show that to evaluate the same function t times, it is possible to reduce the number of garbled circuits to \(O(\sigma /\log t)\). In concrete terms, this corresponds to a drastic reduction in the number of garbled circuits. For instance, when \(t = 3500\) and \(\sigma = 40\), the work of [10, 21] shows a cut-and-choose technique that reduces the number of garbled circuits to less than 8 per execution. Thus it is reasonable to say that the overhead due to generation, transmission, and evaluation of garbled circuits has been significantly reduced.
Of greater concern is the fact that these state-of-the-art protocols are unlikely to perform well in settings where the inputs of even one of the parties are large (because they use public key operations proportional to the total size of the inputs of both parties). It is worth noting that although techniques exist to reduce the number of public key operations required, at least for one of the parties (most notably amortization via oblivious transfer (OT) extension [12, 14, 29]), the state-of-the-art two-party secure computation protocols are simply not able to take advantage of these amortization techniques.
If one restricts attention to constant-round protocols with good concrete efficiency, there are very few alternatives [23, 26] that require a reduced number of public key operations. For instance, the protocols of [23, 26] use public key operations only for the (seed) OTs (which can be amortized using OT extension). Furthermore, at least in the single-execution setting, the techniques of [23, 26] can easily be merged with state-of-the-art cut-and-choose techniques to reduce the number of public key operations. However, this results in a considerable overhead in communication complexity (by a factor of \(\sigma \)) for proving input consistency of the circuit generator. More importantly, the techniques of [23, 26] do not adapt well to the state-of-the-art cut-and-choose techniques for the multiple-execution setting, and require strong assumptions such as a programmable random oracle. Specifically, the “XOR-tree encoding schemes” technique employed in [23, 26] to avoid the selective failure attack no longer appears to work with standard garbling techniques. On the other hand, a natural generalization of cut-and-choose OT, namely the multistage cut-and-choose OT proposed in [10, 21], can handle the selective failure attack in the multiple-execution setting (cf. Sect. 1.2).
Unfortunately, the only known constructions of cut-and-choose OT and its variants rely on DDH, and consequently use public key operations proportional to the size of the cut-and-choose OT instance. This is further amplified by the fact that known cut-and-choose OT protocols require regular exponentiations, which are more expensive than even fixed-base exponentiations. (Note that, on the other hand, the DDH-based zero-knowledge protocols used to ensure input consistency in [22, 24] require only fixed-base exponentiations.)
In this work we address these issues. Our contributions include:
– Pinning down the exact formulation of cut-and-choose OT and its variants that suffices for its applications.

– Basing cut-and-choose OT on a wide variety of assumptions (including LWE, RSA, DDH).

– Showing how to efficiently “extend” cut-and-choose OT (à la OT extension).

– An approach for porting “XOR-tree encoding schemes” to the multiple-execution setting while preserving their efficiency.
Our new definition of cut-and-choose OT (and its variants) has the following salient features:
– Treats cut-and-choose OT (and its variants) as a reactive functionality. This allows us to construct efficient protocols for multistage cut-and-choose OT.

– Requires ideal-process simulation for a corrupt receiver but only privacy against a corrupt sender. This allows us to realize cut-and-choose OT (and its variants) with low concrete communication complexity.
1.1 Cut-and-Choose Oblivious Transfer and Its Variants
We provide an overview of cut-and-choose OT and its variants. In the following, let \(\lambda \) (resp. \(\sigma \)) denote the computational (resp. statistical) security parameter.
Cut-and-Choose Oblivious Transfer. Cut-and-choose oblivious transfer (CCOT) [24], denoted \(\mathcal {F}_{\mathrm {ccot}}\) (see Fig. 1), is an extension of standard OT. The sender inputs n pairs of strings, and the receiver inputs n selection bits to select one string out of each pair of sender strings. However, the receiver also inputs a set J of size n/2 consisting of the indices where it wants both of the sender’s inputs to be revealed. Note that for indices not contained in J, only those sender inputs that correspond to the receiver’s selection bits are revealed.
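To make the interface concrete, the input/output behavior just described can be modeled by a short sketch (illustrative Python; the function name and data representation are ours, not the paper's):

```python
# Sketch of the cut-and-choose OT functionality F_ccot (illustrative model).
# The sender supplies n pairs of strings; the receiver supplies n selection
# bits and a check set J of size n/2.
def f_ccot(pairs, selection_bits, J):
    n = len(pairs)
    assert len(selection_bits) == n and len(J) == n // 2
    out = []
    for j in range(n):
        if j in J:
            # Check index: both of the sender's strings are revealed.
            out.append(pairs[j])
        else:
            # Evaluation index: only the selected string is revealed.
            out.append(pairs[j][selection_bits[j]])
    return out
```

Note that the model says nothing about how this is realized; it only fixes what each party learns.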
Remark 1
Using a PRG, it is possible to obtain OT on long strings given ideal access to OT on short strings of length \(\lambda \) [12]. This length-extension technique is also applicable to cut-and-choose OT and its variants. Furthermore, in applications to secure computation, sender input strings (i.e., garbled circuit keys) are of length \(\lambda \). Therefore, we assume w.l.o.g. that sender input strings are all of length \(\lambda \).
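The length-extension idea can be sketched as follows: one OT transfers a short seed, and the long strings are sent masked by the PRG output. This is an illustrative sketch (we use SHA-256 in counter mode as a stand-in PRG; the function names are ours):

```python
import hashlib

def prg(seed: bytes, nbytes: int) -> bytes:
    # Stand-in PRG: SHA-256 in counter mode (for illustration only).
    out = b""
    ctr = 0
    while len(out) < nbytes:
        out += hashlib.sha256(seed + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:nbytes]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Sender: to transfer long strings (m0, m1) using one OT on short seeds
# (k0, k1), send the two masked strings c0, c1 in the clear.
def sender_extend(m0, m1, k0, k1):
    return xor(m0, prg(k0, len(m0))), xor(m1, prg(k1, len(m1)))

# Receiver: having obtained k_b from the short-string OT, recover m_b.
def receiver_extend(c_b, k_b):
    return xor(c_b, prg(k_b, len(c_b)))
```

Only \(\lambda \)-bit seeds pass through the OT; the receiver cannot unmask the string it did not select.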
Batch Single-Choice CCOT. In applications to secure computation, one needs single-choice CCOT, where the receiver is restricted to inputting the same selection bit in all the n/2 instances where it receives exactly one out of two sender strings. Furthermore, it is crucial that the subset J input by the receiver is the same across each instance of single-choice CCOT. This variant, called batch single-choice CCOT, can be efficiently realized under DDH [24].
Multistage CCOT. To handle the multiple (parallel) execution setting, a new variant of \(\mathcal {F}^\star _{\mathrm {ccot}}\) called batch single-choice multistage cut-and-choose oblivious transfer was proposed in [10]. For the sake of simplicity, we refer to this primitive as multistage cut-and-choose oblivious transfer and denote it by \(\mathcal {F}^\star _{\mathrm {mcot}}\). At a high level, this variant differs from \(\mathcal {F}^\star _{\mathrm {ccot}}\) in that the receiver can now input multiple sets \(E_1,\ldots , E_t\) (where J is now implicitly defined as \([n]\setminus \cup _{k\in [t]} E_k\)), and make independent selections for each of \(E_1,\ldots , E_t\). In fact, the above definition reflects the cut-and-choose technique employed in [10, 21] for the multiple-execution setting. The technique proceeds by first choosing a subset of the n garbled circuits to be checked, and then partitioning the remaining garbled circuits into t evaluation “buckets”. An information-theoretic reduction of \(\mathcal {F}^\star _{\mathrm {mcot}}\) to t instances of \(\mathcal {F}^\star _{\mathrm {ccot}}\) with total communication cost \(O(nt^2\lambda )\) was shown in [10].
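Restricted to a single receiver input wire per execution, the selection behavior of \(\mathcal {F}^\star _{\mathrm {mcot}}\) can be sketched as follows (illustrative model, our naming; in general the receiver has many input bits per execution):

```python
# Sketch of multistage CCOT for one receiver input wire per execution.
# eval_sets = [E_1, ..., E_t]; selection_bits[k] is the choice bit for
# execution k+1. J is implicitly [n] \ (E_1 ∪ ... ∪ E_t).
def f_mcot(pairs, eval_sets, selection_bits):
    n, t = len(pairs), len(eval_sets)
    J = set(range(n)) - set().union(*eval_sets)
    out = {}
    for j in J:
        out[j] = pairs[j]                         # check index: both strings
    for k in range(t):
        for j in eval_sets[k]:
            out[j] = pairs[j][selection_bits[k]]  # execution k's choice bit
    return out
```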
For lack of space, we present only the multistage cut-and-choose OT functionality in Fig. 2. Note that \(\mathcal {F}^\star _{\mathrm {mcot}}\) generalizes the modified batch single-choice CCOT of [22] (simply by setting \(t = 1\)) as well as the batch single-choice CCOT of [24] (by setting \(t=1\), forcing \(|J| = n/2\), and setting all \(\phi _j^k\) values to \(0^\sigma \)).
1.2 Selective Failure Attacks
In garbled circuit protocols, OT is used to enable the circuit generator (referred to as the sender) \(S\) to transfer the input keys of the garbled circuit corresponding to the circuit evaluator (referred to as the receiver) \(R\)’s inputs. However, when \(S\) is malicious, this can lead to a “selective failure” attack. To explain this problem in more detail, consider the following naïve scheme. For simplicity, assume that \(R\) has only one input bit b, and let the keys corresponding to \(R\)’s input in the jth garbled circuit be \((x_{j,0}, x_{j,1})\). In the following, let \({\textsf {com}}\) be a commitment scheme.

– \(S\) sends \(({\textsf {com}}(x_{1,0}), {\textsf {com}}(x_{1,1})), \ldots , ({\textsf {com}}(x_{n,0}), {\textsf {com}}(x_{n,1}))\) to \(R\).

– \(S\) and \(R\) participate in a single instance of \(\mathcal {F}_{\mathrm {OT}}\), where \(S\)’s input is \(((d_{1,0},\ldots ,d_{n,0}), (d_{1,1},\ldots ,d_{n,1}))\), with \(d_{j,c}\) the decommitment corresponding to \({\textsf {com}}(x_{j,c})\), and \(R\)’s input is b. \(R\) obtains \((d_{1,b},\ldots ,d_{n,b})\) from \(\mathcal {F}_{\mathrm {OT}}\).

– Then \(R\) sends check indices \(J \subseteq [n]\) to \(S\).

– \(S\) sends \(\{d_{j,0},d_{j,1}\}_{j\in J}\) to \(R\).
The selective failure attack operates as follows: \(S\) supplies \(((d_{1,0},\ldots ,d_{n,0}),(d_{1,1}',\ldots ,d_{n,1}'))\), where each \(d_{i,0}\) is a valid decommitment for \({\textsf {com}}(x_{i,0})\) while each \(d_{i,1}'\) is not a valid decommitment for \({\textsf {com}}(x_{i,1})\). Then, when \(R\) sends its check indices, \(S\) responds with \(\{d_{j,0},d_{j,1}\}_{j\in J}\), where \(d_{j,0}\) and \(d_{j,1}\) are valid decommitments for \({\textsf {com}}(x_{j,0})\) and \({\textsf {com}}(x_{j,1})\), respectively. Suppose \(R\)’s input equals 0. In this case, \(R\) does not detect any inconsistency, continues the protocol, and obtains the output. Suppose instead that \(R\)’s input equals 1. Now \(R\) will not obtain \(x_{j,1}\) for any \(j\in [n]\), since it receives invalid decommitments. If \(R\) aborts, then \(S\) learns that \(R\)’s input bit equals 1; in any case, \(R\) cannot obtain the final output. That is, the ideal process and the real process can be distinguished when \(R\)’s input equals 1, and the protocol is insecure since \(S\) can force an abort depending on \(R\)’s input.
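The leakage can be seen in a toy model of the naïve scheme above (illustrative Python; we abstract commitments and decommitments as validity flags):

```python
# Toy model of the selective failure attack on the naive scheme: a cheating
# S makes every d_{j,1} invalid. R's observable behavior then depends on its
# secret input bit b.
def r_continues(b, sender_cheats=True):
    # R receives the decommitments d_{1,b}, ..., d_{n,b} from the OT.
    received_valid = (b == 0) or not sender_cheats
    # The check indices always verify: S opens fresh valid pairs there.
    checks_pass = True
    return checks_pass and received_valid  # True = continue, False = abort

assert r_continues(b=0) is True      # R proceeds: S concludes b = 0
assert r_continues(b=1) is False     # R aborts:   S concludes b = 1
assert r_continues(b=1, sender_cheats=False) is True
```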
Approaches Based on “XOR-Tree Encoding Schemes”. The first solution to the selective failure attack was proposed in [6, 15, 23], where the idea was to randomly encode \(R\)’s input and then augment the circuit with a supplemental subcircuit (e.g., an “XOR-tree”) that performs the decoding to compute \(R\)’s actual input. Note that a “selective failure”-type attack can still be mounted by \(S\), but the use of encoding ensures that the event that \(R\) aborts due to the attack is almost statistically independent of its actual input. The basic XOR-tree encoding scheme incurs a multiplicative overhead of \(\sigma \) in the number of OTs and increases the circuit size by \(\sigma \) XOR gates. The “random combinations” XOR-tree encoding [23, 25, 30] incurs a total overhead of \(m' = \max (4m,8\sigma )\) in the number of OTs, where m is the length of \(R\)’s input, plus an additional \(0.3\,mm'\) XOR gates. (Note that use of the free-XOR technique [18] can nullify the cost of the additional XOR gates.) Finally, [13] uses \(\sigma \)-wise independent generators to provide a rate-1 encoding of inputs which can be decoded using an \(\mathbf {NC}^0\) circuit.
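For intuition on why the basic XOR-tree encoding works, a single input bit is split into \(\sigma \) random shares that XOR back to the bit; an attack on any one OT then triggers an abort with probability independent of the input. A sketch (our notation, not the paper's):

```python
import random

SIGMA = 40  # statistical security parameter

def encode(b):
    # Split R's input bit b into SIGMA random shares that XOR to b; the
    # "XOR-tree" subcircuit inside the garbled circuit recomputes b.
    shares = [random.randrange(2) for _ in range(SIGMA - 1)]
    shares.append(b ^ (sum(shares) % 2))
    return shares

def decode(shares):
    return sum(shares) % 2

# S attacks a single OT (say, the one for share 0): R aborts iff that
# share equals 1. Any single share is uniform regardless of b, so the
# abort event is independent of R's actual input.
def abort_rate(b, trials=20000):
    return sum(encode(b)[0] for _ in range(trials)) / trials
```

Learning b itself would require S to attack all \(\sigma \) OTs consistently, which is what drives the \(2^{-\sigma }\)-style bounds.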
Approaches Based on CCOT. CCOT forces \(S\) to “commit” to all keys corresponding to \(R\)’s input and reveals a subset of these keys corresponding to \(R\)’s input, but without \(S\) knowing which subset of keys was revealed. This allows us to intertwine the OT and the circuit checks, and avoids the need to augment the original circuit with a supplemental decoding subcircuit. That is, selective failure attacks are “caught” along with the check for incorrectly constructed circuits, and this results in a simpler security analysis.
Approaches for the Multiple-Execution Setting. While either approach seems sufficient to thwart the selective failure attack, the CCOT-based approach offers a qualitative advantage in the multiple parallel execution setting. First, let us provide an overview of the cut-and-choose technique in the multiple-execution setting [10, 21]. \(S\) sends n garbled circuits, and \(R\) picks a check set \(J\subseteq [n]\). The garbled circuits in the check set will eventually be opened by \(S\). The garbled circuits which are not check circuits are randomly partitioned into t evaluation “buckets” denoted by \(E_1, \ldots , E_t\). We now explain the difficulty in adapting XOR-tree encoding schemes to this cut-and-choose technique.
Observe that when using standard garbling schemes [23, 33] in a two-party garbled circuits protocol, the OT step needs to be carried out before the garbled circuits are sent. This is necessary for the simulator to generate correctly faked garbled circuits (using \(R\)’s inputs extracted from the OT) in the simulation for a corrupt \(R\). For simplicity, assume that \(R\) has exactly one input bit (which may vary across different executions). Now, when using XOR-tree encoding schemes, we need to enforce that in each execution \(R\) inputs the same choice in all the OTs. Batching the OTs together for each execution could be implemented if \(S\) knew which circuits are going to be evaluation circuits for each execution, but \(R\) cannot reveal which circuits are evaluation circuits, because this would allow a corrupt \(S\) to transmit well-formed check circuits and ill-formed evaluation circuits. Thus, it is unclear how to apply XOR-tree encoding schemes and ensure that a corrupt \(R\) chooses the same inputs for the evaluation circuits within an execution.
A generalization of CCOT called multistage CCOT (Fig. 2) is well-suited to the multiple parallel execution setting. Indeed, multistage CCOT \(\mathcal {F}^\star _{\mathrm {mcot}}\) takes as input (1) from \(S\): all input keys corresponding to \(R\)’s inputs in each of the n garbled circuits, and (2) from \(R\): the sets \(E_1,\ldots , E_t\) along with independent choice bits for each of the t executions. Thus \(\mathcal {F}^\star _{\mathrm {mcot}}\) avoids the selective failure attack in the same way CCOT does in the single-execution setting. Further, it ensures that \(R\) is forced to choose the same inputs within each execution.
Remark 2
Surprisingly, CCOT has a significant advantage over XOR-tree encoding schemes only in the parallel execution setting. In the sequential execution setting, it is unclear how to use CCOT, since \(R\)’s inputs for each of its executions are not available at the beginning of the protocol; it appears necessary to perform the OT for each execution after all the garbled circuits have been sent. One may then use adaptively secure garbling schemes [2, 3] (e.g., in the programmable random oracle model) to enable the simulator to generate correctly faked garbled circuits in the simulation for a corrupt \(R\). Assuming that the garbling is adaptively secure, XOR-tree encoding schemes suffice to circumvent the selective failure attack in the multiple sequential setting. This also applies to the multiple parallel setting.
1.3 Overview of Definitions and Constructions
As mentioned in the Introduction, all known constructions of CCOT rely on DDH and thus make heavy use of public key operations. A natural approach to remedy this situation is to try to construct CCOT in an OT-hybrid model and then use OT extension techniques [12, 29].
Basing CCOT on OT. A first idea is to use general OT-based 2PC (e.g., [14]) to realize CCOT, but it is not clear that this would result in a CCOT protocol with good concrete efficiency. Note that the circuit implementing CCOT has very small depth, and that \(S\)’s inputs are of length \(O(n\lambda )\) while \(R\)’s are of length O(n) (where the big-Oh hides small constants). The protocols of [23, 26] do not perform well, since there is a multiplicative overhead of (at least) \(\lambda \sigma \) over the instance size (i.e., \(O(n\lambda )\)) simply because of garbling (factor \(\lambda \)) and cut-and-choose (factor \(\sigma \)). The protocols of [10, 22, 24] already rely on CCOT, and the instance size of the CCOT required inside these 2PC protocols is larger than the CCOT instance we wish to realize. Since the circuit has very small constant depth, it is possible to employ non-constant-round solutions [29], but this still incurs a factor \(\lambda \) overhead due to the use of authenticated OTs. Employing information-theoretic garbled circuit variants [15, 17] in the protocols of [23, 26] still incurs a factor \(\sigma \) overhead due to cut-and-choose. In summary, none of the above is satisfactory for implementing CCOT, as they all incur at least a concrete factor \(\min (\lambda ,\sigma )\) multiplicative overhead.
To explain the intuition behind our definitions and constructions, we start with the seemingly close relationship between CCOT and 2-out-of-3 OT. At first glance, it seems that it must be easy to construct CCOT from 2-out-of-3 OT. For example, for each index, we can let \(S\) input the pair of real input keys along with a “dummy check value” as its three inputs to 2-out-of-3 OT, and then let \(R\) pick two out of the three values (i.e., both keys if it is a check circuit, or the dummy check value along with the key that corresponds to \(R\)’s real input). There are multiple issues with making this idea work in the presence of malicious adversaries. Perhaps the most important issue is that this idea still would not help us achieve our goal of showing a reduction from CCOT to 1-out-of-2 OT. More precisely, we do not know how to construct efficient protocols for 2-out-of-3 OT from 1-out-of-2 OT. Consider the following toy example.
Inputs: \(S\) holds \((x_0,x_1,x_2)\) and \(R\) holds \(b_1\in \{0,2\},b_2 \in \{1,2\}\).

– \(S\) sends \((x_0, x_2)\) to \(\mathcal {F}_{\mathrm {OT}}\) and \(R\) sends \(b_1\) to \(\mathcal {F}_{\mathrm {OT}}\).

– \(S\) sends \((x_1, x_2)\) to \(\mathcal {F}_{\mathrm {OT}}\) and \(R\) sends \(b_2\) to \(\mathcal {F}_{\mathrm {OT}}\).
The problem with the protocol above is that simulation extraction will fail with probability 1/2, since a malicious \(S\) may input different values for \(x_2\) in each of the two queries to \(\mathcal {F}_{\mathrm {OT}}\). Note that even forcing \(S\) to send \(h = \tilde{H}(x_2)\) to \(R\), where \(\tilde{H}\) is a collision-resistant hash function (or an extractable commitment), does not help the simulator. On the other hand, this hash value does enable \(R\) to detect an inconsistency if (1) \(S\) supplied two different values for \(x_2\) in the two queries to \(\mathcal {F}_{\mathrm {OT}}\) and (2) \(R\) picked the \(x_2\) value which is not consistent with h. However, if \(R\) aborts on detecting the inconsistency, this leaks information.
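The toy reduction and its inconsistency problem can be seen concretely in a small model (illustrative Python with idealized OT calls; all names are ours):

```python
def ideal_ot(pair, choice_index):
    # Ideal 1-out-of-2 OT: the receiver learns exactly pair[choice_index].
    return pair[choice_index]

def toy_2_out_of_3(ot1_inputs, ot2_inputs, b1, b2):
    # S's inputs: ot1_inputs = (x0, x2), ot2_inputs = (x1, x2).
    # R's inputs: b1 in {0, 2} and b2 in {1, 2}.
    y1 = ideal_ot(ot1_inputs, 0 if b1 == 0 else 1)
    y2 = ideal_ot(ot2_inputs, 0 if b2 == 1 else 1)
    return y1, y2

x0, x1, x2, x2_bad = b"x0", b"x1", b"x2", b"x2'"

# Honest S: R asking for (x0, x2) indeed receives them.
assert toy_2_out_of_3((x0, x2), (x1, x2), b1=0, b2=2) == (x0, x2)

# Malicious S supplies inconsistent copies of x2 to the two OT calls.
# Which copy R receives depends on R's secret choices, so aborting on a
# mismatch (e.g., against a published hash of x2) leaks those choices.
assert toy_2_out_of_3((x0, x2), (x1, x2_bad), b1=2, b2=1)[0] == x2
assert toy_2_out_of_3((x0, x2), (x1, x2_bad), b1=0, b2=2)[1] == x2_bad
```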
Our main observation is that the attacks on the toy protocol are very similar to the selective failure attacks discussed in Sect. 1.2. Motivated by this, one may attempt to use “XOR-tree encoding schemes” to avoid the selective failure attacks, and to construct CCOT directly from 1-out-of-2 OT. However, note that the encoding schemes alone do not suffice to prevent selective failure attacks; they need to be used along with a supplemental decoding circuit. Here our key observation is that the known encoding schemes (possibly with the exception of [32]) used to prevent selective failure attacks [13, 23] can be decoded using (a circuit that performs) only XOR operations. Thus, one may use the free-XOR technique [18] to get rid of the need for a supplemental decoding circuit, and instead perform the XOR operations directly on strings. Indeed, the above idea can be successfully applied to prevent the selective failure attacks that could be mounted on the toy protocol, and can also be extended to yield a protocol for CCOT. Although the resulting CCOT protocol is simulatable against a malicious receiver, we unfortunately do not know how to simulate a corrupt sender (specifically, to extract the sender’s input).
Relaxing CCOT. Our main observation is that for the application to 2PC, full simulation against a corrupt sender is not required; only privacy is. This is because \(S\)’s inputs to the 2PC are typically extracted via ZK (or the mechanism used for input consistency checks), and the inputs to the CCOT are just random garbled keys which are unrelated to its real input. Note that in 2PC protocols that use CCOT [10, 22, 24], the following three steps happen after the CCOT protocol is completed: (1) \(S\) sends all the garbled circuits, (2) \(R\) reveals the identity of the evaluation circuits, and (3) \(S\) reveals the keys corresponding to its input for the evaluation circuits. Consider the second step, namely that \(R\) reveals the identity of the evaluation circuits. This is a relatively subtle step, since a malicious \(R\) may claim (a) that a check circuit is an evaluation circuit, or (b) that an evaluation circuit is a check circuit. Both conditions need to be handled carefully: in case (a), a corrupt \(R\), upon receiving \(S\)’s input keys in step (3), would be able to evaluate the garbled circuits on several inputs of its choice; case (b) is problematic when simulating a corrupt \(R\), as the simulator does not know which circuits to generate correctly and which ones to fake. Therefore, 2PC protocols that use CCOT require \(R\) to “prove” the identity of the check/evaluation circuits. In [10, 22], this is done via “check values” and “check-set values”. We use similar ideas in our protocols: if \(j\in [n]\) is such that \(j \not \in J\), then \(R\) receives some dummy check value \(\phi _j\), and if \(j \in J\), then \(R\) receives “check-set values” \(x_{j,0}, x_{j,1}\) which correspond to \(S\)’s inputs. Thus, \(R\) can prove the identity of check/evaluation circuits simply by sending the “check values” \(\{\phi _j\}_{j \not \in J}\) and the “check-set values” \(\{ x_{j,0}, x_{j,1} \}_{j \in J}\).
Observe that this step does not reveal any information about \(R\)’s input bits \(\{b_j\}_{j\not \in J}\) to \(S\). To capture this step in the ideal functionality, we include an explicit “reveal” phase.
Motivated by the discussion above, we formulate a new definition for CCOT and its variants. Our definitions pose CCOT and its variants as reactive functionalities, and in particular include a “reveal phase” in which \(R\)’s evaluation set \([n]\setminus J\) is simply revealed to \(S\) by the functionality. More precisely, in the reveal phase we allow \(R\) to decide whether it wants to abort or to reveal J. Note that in the case of \(\mathcal {F}^\star _{\mathrm {mcot}}\), the evaluation sets \(E_1,\ldots ,E_t\) are revealed to \(S\) by the functionality. This in particular allows us to eliminate the “check values” from the definitions of \(\mathcal {F}^\star _{\mathrm {ccot}}\) [22] and \(\mathcal {F}^\star _{\mathrm {mcot}}\) [10], and allows us to present protocols for (the reactive variant of) \(\mathcal {F}^\star _{\mathrm {mcot}}\) that are more efficient than prior constructions [10]. We formulate CCOT as a reactive functionality because step (1), in which \(S\) sends all the garbled circuits, happens immediately after the CCOT step and before step (2), in which \(R\) reveals the identity of the evaluation circuits. It is easy to see that this relaxed formulation suffices for applications to secure computation.
Discussion. Such relaxed definitions, in particular those requiring only privacy against a corrupt sender, are not at all uncommon for OT and its variants (cf. [1, 28]) or PIR (cf. [4, 20]). Similarly, [8] propose “keyword OT” protocols in a client-server setting, and require one to simulate the view of the server (which acts as the sender) alone, without considering its joint distribution with the honest client’s output. For another example, consider [11], who use a CDH-based OT protocol that achieves privacy (but is not known to be simulatable) against a malicious sender, and yet this suffices for their purpose of constructing efficient 2PC protocols.
2 Definitions
We will use the following definitions (loosely based on analogous definitions for keyword OT [8]) for CCOT as well as its variants. For convenience, we define these as security notions for an arbitrary functionality F; in our theorem statements we will then take F to be CCOT or one of its variants.
Definition 1
(Correctness). If both parties are honest, then, after running the protocol on inputs (X, Y), the receiver outputs Z such that \(Z = F(X,Y)\).
Definition 2
(Receiver’s privacy: indistinguishability). Let \(\sigma \) be a statistical security parameter. Then, for any ppt \(S'\) executing the sender’s part and for any inputs \(X,Y,Y'\), the statistical distance between the views that \(S'\) sees on input X, in the case that the receiver inputs Y and in the case that it inputs \(Y'\), is bounded by \(2^{-\sigma +O(1)}\).
Definition 3
(Sender’s privacy: comparison with the ideal model). For every ppt machine \(R'\) substituting for the receiver in the real protocol, there exists a ppt machine \(R''\) that plays the receiver’s role in the ideal implementation, such that on any inputs (X, Y), the view of \(R'\) is computationally indistinguishable from the output of \(R''\). (In the semi-honest model \(R' = R\).)
Definition 4
A protocol \(\pi \) securely realizes functionality F with sender-simulatability and receiver-privacy if it satisfies Definitions 1, 2, and 3.
An XOR-tree encoding scheme is a tuple of algorithms \((\mathsf {En}, \mathsf {En}', \mathsf {De}, \mathsf {De}')\) (with public randomness \(\omega _0\)) satisfying the following properties:
1. Algorithm \(\mathsf {En}\) takes input \(\{ (x_0^i, x_1^i) \}_{i \in [m]}\) and produces pairs of random \(\lambda \)-bit strings \(\{ (u_0^\ell , u_1^\ell ) \}_{\ell \in [m']}\) s.t. for all \(\ell ,\ell ' \in [m']\), it holds that \(u_0^{\ell '} {\oplus }u_1^{\ell '} = u_0^\ell {\oplus }u_1^\ell \).

2. Algorithm \(\mathsf {En}'\) takes input \(\mathbf {b} = (b_1,\ldots ,b_m) \in \{0,1\}^m\) and outputs \(\{ b_\ell '\}_{\ell \in [m']}\).
3. For every \(\mathbf {b} = (b_1,\ldots ,b_m) \in \{0,1\}^m\) and every \(\{ (x_0^i, x_1^i) \}_{ i \in [m] }\) it holds that$$\begin{aligned} \mathrm {Pr}\left[ \begin{array}{l} \{b_\ell '\}_{\ell \in [m']} \leftarrow \mathsf {En}'(\mathbf {b});\\ \{(u_0^\ell , u_1^\ell )\}_{\ell \in [m']} \leftarrow \mathsf {En}(\{(x_0^i,x_1^i)\}_{i \in [m] }) \end{array} :\mathsf {De}( \{ u_{b_\ell '}^\ell \}_{\ell \in [m']}) = \{ x_{b_i}^i \}_{i \in [m] } \right] = 1.\end{aligned}$$We sometimes abuse notation and allow \(\mathsf {De}\) to take sets of pairs of strings as input, in which case we require that for every \(\{ (x_0^i, x_1^i) \}_{ i \in [m] }\) it holds that$$\begin{aligned} \mathrm {Pr}\left[ \begin{array}{l} \{(u_0^\ell , u_1^\ell )\}_{\ell \in [m']} \leftarrow \mathsf {En}(\{(x_0^i,x_1^i)\}_{i \in [m]}) : \\ \qquad \qquad \qquad \mathsf {De}( \{ (u_0^\ell , u_{1}^\ell ) \}_{\ell \in [m']}) = \{ ( x_{0}^i, x_1^i ) \}_{i \in [m] } \end{array} \right] = 1. \end{aligned}$$

4. For every \(\mathbf {b}\), it holds that \(\mathrm {Pr}[\mathsf {De}'(\mathsf {En}'(\mathbf {b})) = \mathbf {b}] = 1\).

5. Algorithms \(\mathsf {De},\mathsf {De}'\) can be implemented by using (a tree of) XOR gates only.

6. For every disjunctive predicate \(P(\cdot )\), the following holds: (1) if P involves at most \(\sigma - 1\) literals, then \(\Pr [P (\mathsf {En}'(\mathbf {b})) = 1]\) is completely independent of \(\mathbf {b}\); (2) otherwise, \(\Pr [P (\mathsf {En}'(\mathbf {b})) = 1] \ge 1 - 2^{-\sigma +1}\).
7. For every \(\{(x_0^i, x_1^i)\}_{i \in [m] }\), every (possibly unbounded) adversary \(\mathcal {A}'\), and every \(\{b_\ell '\}_{\ell \in [m']} \in \{0,1\}^{m'}\), there exists a ppt algorithm \(\mathcal {S}'\) such that the following holds:$$\begin{aligned} \begin{array}{l} \mathrm {Pr}[ \{ (u_0^\ell , u_1^\ell ) \}_{\ell \in [m']} \leftarrow \mathsf {En}(\{(x_0^i,x_1^i)\}_{i \in [m] }) : \mathcal {A}'( \{ b_\ell '\}_{\ell \in [m']}, \{ u_{b_\ell '}^\ell \}_{\ell \in [m']} ) = 1 ] = \\ \quad \mathrm {Pr}\left[ \begin{array}{l} (b_1,\ldots ,b_m)\leftarrow \mathsf {De}'(\{b_\ell '\}_{\ell \in [m'] } );\\ \{ \tilde{u}^\ell \}_{\ell \in [m']} \leftarrow \mathcal {S}'( \{ b_\ell '\}_{\ell \in [m']} , \{x^i_{b_i} \}_{i \in [m] } )\end{array} : \mathcal {A}'(\{ b_\ell '\}_{\ell \in [m']}, \{ \tilde{u}^\ell \}_{\ell \in [m']}) = 1 \right] \end{array}\!. \end{aligned}$$(This, in particular, implies that \(\mathcal {A}'\) obtains no information about \(\{x_{1-b_i}^i\}_{i \in [m]}\).)
Algorithms \((\mathsf {En},\mathsf {De},\mathsf {En}',\mathsf {De}')\) for the basic XOR-tree encoding scheme [23] are simple and implicit in our basic CCOT construction (cf. Fig. 4). For the random combinations XOR-tree encoding [23], algorithm \(\mathsf {En}'\) is simply a random linear mapping (i.e., the public randomness \(\omega _0\) defines this random linear mapping; see, e.g., [23, 30] for more details). Finally, for the \(\sigma \)-wise independent generators XOR-tree encoding, the algorithm \(\mathsf {En}'\) depends on the generator (i.e., the public randomness \(\omega _0\) defines this generator), which can be implemented using XOR gates only [27]. Note that in all of the above, \(\mathsf {En}'\) essentially creates a \((\sigma -1)\)-wise independent encoding of its input, and thus Property 6 holds (see also Lemma 1). In all our constructions, \(\mathsf {En}\) simply maps its inputs to pairs of random strings such that the XOR of the two strings within a pair is always some fixed \(\varDelta \). Algorithms \(\mathsf {De},\mathsf {De}'\) are deterministic and simply reverse the respective encoding algorithms \(\mathsf {En},\mathsf {En}'\). Note that \(\mathsf {De},\mathsf {De}'\) (acting respectively on the outputs of \(\mathsf {En},\mathsf {En}'\)) are naturally defined by the supplemental decoding circuit that decodes the XOR-tree encoding, and thus can be implemented using XOR gates only. We point out that algorithm \(\mathsf {De}'\) is used only in the simulation, to extract \(R\)’s input from its XOR-tree encoded form. Finally, Property 7 is justified by the fact that in XOR-tree encoding schemes as used in standard two-party secure computation protocols, the receiver \(R\) obtains (via OTs) only one of the two keys at each position of the encoding, and these keys reveal the output keys of the supplemental decoding circuit (which correspond exactly to the output of the decoding) and nothing else.
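Under our reading of the above, the basic scheme (for \(m = 1\)) can be sketched as follows. This is a hypothetical instantiation, not the paper's Fig. 4 protocol: for brevity we fold the correction values into \(\mathsf {En}\) so that \({\bigoplus }_\ell u_0^\ell = x_0\), whereas the protocol sends corrections separately.

```python
import secrets

LAMBDA = 16  # key length in bytes (λ = 128 bits); m = 1 input bit, m' = σ
SIGMA = 40   # statistical security parameter

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def En(x0, x1):
    # Pairs of random strings with fixed pairwise difference Δ = x0 ⊕ x1,
    # subject to XOR_ℓ u_0^ℓ = x0 (assumption made for a self-contained sketch).
    delta = xor(x0, x1)
    u0s = [secrets.token_bytes(LAMBDA) for _ in range(SIGMA - 1)]
    last = x0
    for u in u0s:
        last = xor(last, u)
    u0s.append(last)
    return [(u, xor(u, delta)) for u in u0s]

def En_prime(b):
    # σ random bits whose XOR equals the input bit b.
    enc = [secrets.randbits(1) for _ in range(SIGMA - 1)]
    enc.append(b ^ (sum(enc) % 2))
    return enc

def De(selected):
    # XOR-only decoding: XOR_ℓ u_{b'_ℓ}^ℓ = x0 ⊕ (XOR_ℓ b'_ℓ)·Δ = x_b.
    acc = bytes(LAMBDA)
    for u in selected:
        acc = xor(acc, u)
    return acc

def De_prime(enc):
    # Used only by the simulator, to extract b from its encoding.
    return sum(enc) % 2
```

One can check that Properties 3 and 4 hold for this sketch: selecting \(u_{b_\ell '}^\ell \) for an encoding of b and XORing them yields exactly \(x_b\).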
3 Constructions

– Chooses random \(\varDelta ',\{u_0^\ell , u_2^\ell \}_{\ell \in [\sigma ]}\) and sets for all \(\ell \in [\sigma ]\), value \(u_1^\ell = u_0^\ell {\oplus }\varDelta '\).

– For each \(\ell \in [\sigma ]\), acting as \(\mathcal {F}_{\mathrm {OT}}\), obtains values \(\{c_0^\ell , c_1^\ell \}\) and returns answers from \(\{u_0^\ell , u_1^\ell , u_2^\ell \}\) exactly as in the protocol.

– If there exists \(\ell \in [\sigma ]\) such that \(c_0^\ell = c_1^\ell = 0\), then set \(J = \{1\}\) and send J to the trusted party and receive back \((x_0,x_1)\). Now set \(\varDelta _0' = x_0 {\oplus }{\bigoplus }_\ell u_0^\ell \) and \(\varDelta _1' = x_1 {\oplus }\varDelta ' {\oplus }{\bigoplus }_\ell u_0^\ell \). Pick random \(\varDelta _\phi '\) and random \(h' \leftarrow \{0,1\}^\lambda \). Finally, send \(\varDelta _0',\varDelta _1',\varDelta _\phi ',h'\) to R.

– Else if for all \(\ell \in [\sigma ]\), it holds that \(c_0^\ell \ne c_1^\ell \), then for each \(\ell \in [\sigma ]\) compute \(b_\ell '\) such that \(c_{b_\ell '}^\ell = 0\). Extract \(b' = {\bigoplus }_\ell b_\ell '\). Set \(J = \emptyset \), send \((J,b=b')\) to the trusted party and receive back \(x_b\). If \(b = 0\), set \(\varDelta _0' = x_0 {\oplus }{\bigoplus }_\ell u_0^\ell \). Else if \(b= 1\), set \(\varDelta _1' = x_1 {\oplus }\varDelta ' {\oplus }{\bigoplus }_\ell u_0^\ell \). Pick random \(\varDelta _{1-b}', \varDelta _\phi ' \leftarrow \{0,1\}^\lambda \), and set \(h' = H(\varDelta _\phi ' {\oplus }{\bigoplus }_\ell u_2^\ell )\). Finally, send \(\varDelta _0',\varDelta _1',\varDelta _\phi ',h'\) to R.

– Else set \(J = \emptyset \) and choose random \(b' \leftarrow \{0,1\}\) and send \((J, b = b')\) to the trusted party. Receive back \(x_b\). If \(b = 0\), set \(\varDelta _0' = x_0 {\oplus }{\bigoplus }_\ell u_0^\ell \). Else if \(b= 1\), set \(\varDelta _1' = x_1 {\oplus }\varDelta ' {\oplus }{\bigoplus }_\ell u_0^\ell \). Pick random \(\varDelta _{1-b}', \varDelta _\phi '\leftarrow \{0,1\}^\lambda \), and set \(h' = H(\varDelta _\phi ' {\oplus }{\bigoplus }_\ell u_2^\ell )\). Finally, send \(\varDelta _0',\varDelta _1',\varDelta _\phi ',h'\) to R.

– In the reveal phase, if R sends \((J',\varPsi ',\varPhi ')\) such that \(J' \ne J\) or the values \(\varPsi ',\varPhi '\) are not consistent with the values above, then abort the reveal phase. Else, send “reveal” to the trusted party.
It is easy to see that the above simulation is indistinguishable from the real execution. Indeed, if there exists any \(\ell \) such that \(c_0^\ell = c_1^\ell = 0\), then corrupt R learns \(\varDelta \) but does not obtain \(u_2^\ell \). Therefore, in this case it misses at least one additive share of \(\phi \), and since \(h = H(\phi )\) reveals no information (unless H is queried on \(\phi \)), \(\phi \) is statistically hidden from corrupt R. Thus, this case corresponds to \(J \ne \emptyset \), since R could potentially know both \(x_0\) and \(x_1\) (it knows \(\varDelta \) and potentially at least one of \(u_0^\ell , u_1^\ell \) for each \(\ell \in [\sigma ]\)) but not \(\phi \). On the other hand, if for all \(\ell \in [\sigma ]\) it holds that \(c_0^\ell \ne c_1^\ell \), then it is easy to see that the extracted input \(b'\) equals R’s input \(b_1\), and that the rest of the simulation is indistinguishable from the real execution. Finally, the remaining case (i.e., there exists \(\ell \in [\sigma ]\) such that \(c_0^\ell = c_1^\ell = 1\) and there does not exist \(\ell ' \in [\sigma ]\) such that \(c_0^{\ell '} = c_1^{\ell '} = 0\)) is when R obtains only \(\phi \) and neither \(x_0\) nor \(x_1\). This case is rather straightforward to handle; the simulator supplies \(J = \emptyset \) (since R knows \(\phi \)) and a random choice bit \(b'\). This works because there exists some \(\ell \in [\sigma ]\) such that R obtains neither \(u_0^\ell \) nor \(u_1^\ell \). As a result, both \(x_0\) and \(x_1\) are information-theoretically hidden from it.
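The simulator's three-way case analysis on R's OT selections can be sketched as follows. The helper name `extract` and the representation of the selections as pairs \((c_0^\ell , c_1^\ell )\) are ours; this is a hedged illustration of the extraction logic, not the paper's code.

```python
import secrets

def extract(choices):
    """Case analysis on R's OT selections [(c0_l, c1_l)], as in the simulator.
    Returns (J, b): J = {1} when R can learn both sender inputs; otherwise
    J = {} and b is the extracted (or, in the degenerate case, random) bit."""
    if any(c0 == c1 == 0 for c0, c1 in choices):
        # R learns Delta but misses a share of phi: check-style behaviour.
        return {1}, None
    if all(c0 != c1 for c0, c1 in choices):
        # Honest-style behaviour: b'_l is the index of the 0 selection.
        b = 0
        for c0, c1 in choices:
            b ^= 0 if c0 == 0 else 1
        return set(), b
    # Some l has c0 = c1 = 1: both x0 and x1 are hidden, any bit works.
    return set(), secrets.randbits(1)

# An honest R choosing b = 1 via a (sigma-1)-wise independent XOR-sharing:
sigma, b = 5, 1
bits = [secrets.randbits(1) for _ in range(sigma - 1)]
bits.append(b ^ (sum(bits) % 2))
honest = [(0, 1) if bit == 0 else (1, 0) for bit in bits]
J, extracted = extract(honest)
assert J == set() and extracted == b
```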
Privacy Against Corrupt Sender. Note that except in the reveal phase, information flows only from S to R. If S is honest, then reveals made by R do not leak any information. (Recall that J is revealed to S in the real as well as the ideal execution.) We have to show that even a corrupt S does not learn any information about \(b_1\). Clearly, when \(J\ne \emptyset \), R’s actions are independent of its input \(b_1\) and thus do not leak any information. On the other hand, when \(J = \emptyset \), observe that R does not reveal \(\tilde{x}_{b_1}\), and thus S only learns whether \(\varPsi = \varPhi = \emptyset \) or not. This translates to learning information about R’s input \(b_1\) only if for some (possibly many) \(\ell \in [\sigma ]\), S provided \((u_0^\ell , u_2^\ell )\) in one instance of \(\mathcal {F}_{\mathrm {OT}}\) and \((u_1^\ell , \hat{u}_2^\ell )\) in the other instance with \(u_2^\ell \ne \hat{u}_2^\ell \). This is because such a strategy would allow S to learn whether R input \(c_0^\ell = 1\) (in which case R does not abort) or \(c_1^\ell = 1\) (in which case R does abort), and consequently leak information about \(b_\ell '\) (i.e., depending on which of \(c_0^\ell , c_1^\ell \) was 0 when \(J = \emptyset \)). More generally, such a strategy allows S to learn any disjunctive predicate of R’s selections \(\{c_0^\ell , c_1^\ell \}_\ell \). To prove that such a strategy does not help S, we use the following easy lemma.
Lemma 1
([13]). Let \(\mathsf {En}' : \{0,1\}^m \rightarrow \{0,1\}^{m'}\) be such that for any \(\mathbf {b} \in \{0,1\}^m\), it holds that \(\mathsf {En}'(\mathbf {b})\) is a \(\kappa \)-wise independent encoding of \(\mathbf {b}\). Then for every disjunctive predicate \(P(\cdot )\) the following holds: (1) If P involves at most \(\kappa \) literals, then \(\mathrm {Pr}[P(\mathsf {En}'(\mathbf {b})) = 1]\) is completely independent of \(\mathbf {b}\). (2) Otherwise, \(\mathrm {Pr}[P(\mathsf {En}'(\mathbf {b})) = 1 ]\ge 1-2^{-\kappa }\).
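For the XOR-sharing encoding, which is \((\sigma -1)\)-wise independent, Lemma 1 can be verified by exhaustive enumeration at small parameters. The following sketch (all helper names are ours) checks both parts for \(\sigma = 4\), i.e., \(\kappa = 3\):

```python
from itertools import product

def encodings(b, sigma):
    """All XOR-sharings of bit b into sigma bits: a (sigma-1)-wise
    independent encoding of b."""
    return [bits for bits in product([0, 1], repeat=sigma)
            if sum(bits) % 2 == b]

def disj(bits, literals):
    """Disjunctive predicate: OR over literals (index, wanted_value)."""
    return any(bits[i] == v for i, v in literals)

def prob(b, literals, sigma):
    encs = encodings(b, sigma)
    return sum(disj(e, literals) for e in encs) / len(encs)

sigma = 4                                  # kappa = sigma - 1 = 3
small = [(0, 1), (1, 0), (2, 1)]           # at most kappa literals
big = [(0, 1), (1, 1), (2, 1), (3, 1)]     # more than kappa literals

# (1) At most kappa literals: the probability is independent of b.
assert prob(0, small, sigma) == prob(1, small, sigma)
# (2) More than kappa literals: the probability is at least 1 - 2^-kappa.
assert min(prob(0, big, sigma), prob(1, big, sigma)) >= 1 - 2 ** -(sigma - 1)
```

For the `big` predicate the OR fails only on the all-zero sharing, which exists only for b = 0, so the predicate fires with probability 7/8 or 1, matching the \(1-2^{-\kappa }\) bound exactly.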
We prove that the protocol in Fig. 5 securely realizes single-choice CCOT with sender-simulatability and receiver-privacy. We start by observing that correctness follows from inspection of the protocol.
Simulating Corrupt Receiver. The simulation is quite similar to the simulation of the CCOT construction presented in Fig. 4. The main difference now is that R may attempt to use different \(b_\ell '\) values for \(j, j' \in [n]\) (where \(b_\ell '\) is defined as the value R inputs to \(\mathcal {F}_{\mathrm {OT}}\) in Step 3). However, the key observation is that it receives only one key \(K_{\ell , b_\ell '}\) in \(\{K_{\ell ,0}, K_{\ell ,1}\}\). Therefore, even if it attempts to deviate and obtain \(e_{j',b_\ell ''}^\ell \) for \(b_\ell '' \ne b_\ell '\), it still cannot decrypt, since it does not possess the secret key \(K_{\ell ,b_\ell ''}\). Semantic security of the encryption allows us to argue that if such a deviation happens, then the value of \(\phi _{j'}\) is hidden from R. Therefore, in this case the simulator can simply add \(j'\) to J, and the simulation can be completed. It is instructive to note that when such a deviation happens, R will neither be able to provide \(\{x_{j',0}, x_{j',1}\}\) nor the value \(\phi _{j'}\), and thus will get rejected by S during the reveal phase.

– Chooses random \(\{\varDelta _j'\}_{j\in [n]},\{ K_{\ell ,0}, K_{\ell ,1} \}_{\ell \in [\sigma ]}, \{u_{j,0}^\ell , u_{j,2}^\ell \}_{j \in [n], \ell \in [\sigma ]}\) and sets for all \(j \in [n], \ell \in [\sigma ]\), value \(u_{j,1}^\ell = u_{j,0}^\ell {\oplus }\varDelta _j'\).

– For each \(\ell \in [\sigma ]\), acting as \(\mathcal {F}_{\mathrm {OT}}\), obtains the value \(b_\ell '\), returns key \(K_{\ell ,b_\ell '}\), and sets \(e_{j,b_\ell '}^\ell = \mathsf {Enc}(K_{\ell ,b_\ell '},u_{j,2}^\ell )\), \(e^\ell _{j, 1-b_\ell '} = \mathsf {Enc}(K_{\ell ,1-b_\ell '},\mathbf {0})\). Computes \(b' = {\bigoplus }_\ell b_\ell '\).

– For each \(j \in [n], \ell \in [\sigma ]\), acting as \(\mathcal {F}_{\mathrm {OT}}\), obtains values \(\{c_{j,0}^\ell , c_{j,1}^\ell \}\) and returns answers using values \(u_{j,0}^\ell , u_{j,1}^\ell , e_{j,b_\ell '}^\ell , e^\ell _{j, 1-b_\ell '}\) (computed as above) exactly as in the protocol.

– Initialize \(J = \emptyset \). For each \(j \in [n]\): If there exists \(\ell \in [\sigma ]\) such that \(c_{j,0}^\ell = c_{j,1}^\ell = 0\), then add j to J.

– Initialize \(\mathsf {flag}= 0\). For each \(j \not \in J\): If there exists \(\ell \in [\sigma ]\) for which \(c_{j,b_\ell '}^\ell = 0 \) and \(c_{j,1-b_\ell '}^\ell = 1\) do not both hold, then add j to J and set \(\mathsf {flag}= 1\).

– Send \((J, b')\) to the trusted party and receive \(\{x_{j,0}, x_{j,1}\}_{j \in J}\) and \(\{x_{j,b'}\}_{j \not \in J}\).

– For each \(j \in J\), do: (1) set \(\varDelta _{j,0}' = x_{j,0} {\oplus }{\bigoplus }_\ell u_{j,0}^\ell \) and \(\varDelta _{j,1}' = x_{j,1} {\oplus }\varDelta _j' {\oplus }{\bigoplus }_\ell u_{j,0}^\ell \), and (2) pick random \(\varDelta _{j,\phi }'\) and random \(h_j' \leftarrow \{0,1\}^\lambda \).

– For each \(j \not \in J\), do: (1) if \(b' = 0\), set \(\varDelta _{j,0}' = x_{j,0} {\oplus }{\bigoplus }_\ell u_{j,0}^\ell \), (2) else if \(b' = 1\), set \(\varDelta _{j,1}' = x_{j,1} {\oplus }\varDelta _j' {\oplus }{\bigoplus }_\ell u_{j,0}^\ell \), and (3) pick random \(\varDelta _{j,1-b'}', \varDelta _{j,\phi }' \leftarrow \{0,1\}^\lambda \), and set \(h_j' = H(\varDelta _{j,\phi }' {\oplus }{\bigoplus }_\ell u_{j,2}^\ell )\).

– Send \(\{\varDelta _{j,0}',\varDelta _{j,1}',\varDelta _{j,\phi }',h_j'\}_{j \in [n]}\) to R.

– In the reveal phase, if \(\mathsf {flag}= 1\) or if R sends \((J',\varPsi ',\varPhi ')\) such that \(J' \ne J\) or the values \(\varPsi ', \varPhi '\) are not consistent with the values above, then abort the reveal phase. Else, send “reveal” to the trusted party.
We show that the simulation is indistinguishable from the real execution. First, note that if for some \(j \in [n], \ell \in [\sigma ]\), it holds that \(c_{j,0}^\ell = c_{j,1}^\ell = 0\), then R receives both \(u_{j,0}^\ell \) and \(u_{j,1}^\ell \), and therefore knows \(\varDelta _j\). In this case, it is safe to presume that R will end up knowing both \(x_{j,0}\) as well as \(x_{j,1}\) (since for any \(\ell '\), if it receives even one of \(u_{j,0}^{\ell '}, u_{j,1}^{\ell '}\) it will know the other as well since it knows \(\varDelta _j\)). Therefore, \(\mathcal {S}\) includes j in J and obtains both \(x_{j,0}, x_{j,1}\) from the trusted party. Now \(\mathcal {S}\) can carry out the simulation whether or not R obtained both \(x_{j,0}\) and \(x_{j,1}\).
Next, suppose that for some \(j \in [n]\), there is no \(\ell \in [\sigma ]\) for which \(c_{j,0}^\ell = c_{j,1}^\ell = 0\), and yet there exists some \(\ell \in [\sigma ]\) for which \(c_{j,b_\ell '}^\ell = 0\) or \(c_{j,1-b_\ell '}^\ell = 1\) does not hold. In this case, it is easy to see that R will not be able to produce both \(x_{j,0}, x_{j,1}\) (since it is missing one of \(u_{j,0}^\ell , u_{j,1}^\ell \)) in the real execution. Further, it can be shown that R cannot produce \(\phi _j\) either, except with negligible probability. This is because (1) R does not obtain \(e_{j, b_\ell '}^\ell \), (2) R has no information about the plaintext encrypted as \(e_{j,1-b_\ell '}^\ell \), and (3) \(h_j = H(\phi _j)\) does not reveal any information about \(\phi _j\) except with statistically negligible probability (i.e., unless H is queried on \(\phi _j\)). Point (2) above trivially holds in the simulation because \(e_{j,1-b_\ell '}^\ell \) encrypts \(\mathbf {0}\) instead of \(u_{j,2}^\ell \). On the other hand, in the real execution, observe that R does not possess the key \(K_{\ell , 1-b_\ell '}\). It follows from a straightforward reduction to the semantic security of the encryption scheme that the real execution is indistinguishable from the simulation. In particular, in this case R will not be able to produce \((J',\varPsi ',\varPhi ')\) that will be accepted by S in the real execution, which is equivalent to \(\mathcal {S}\) sending abort in the ideal execution.
Finally, suppose that for every \(j \in [n]\), either (1) for all \(\ell \in [\sigma ]\) it holds that \(c_{j,0}^\ell = c_{j,1}^\ell = 0\), or (2) for all \(\ell \in [\sigma ]\) it holds that \(c_{j,b_\ell '}^\ell = 0\) and \(c_{j,1-b_\ell '}^\ell = 1\). This indeed corresponds to honest behavior on the part of R. Specifically, in case (1), we have \(j \in J\), and in case (2), we have \(j \not \in J\). This is exactly how the simulator constructs J. It remains to be shown that in this case, any reveal \((J',\varPsi ',\varPhi ')\) such that \(J' \ne J\) or such that \(\varPsi ',\varPhi '\) are not consistent with the simulation will be rejected by S in the real execution. This follows from: (a) Any \(j \in J\) cannot be claimed by R to not be in the check set, because in this case R does not have any information about \(\phi _j\) (other than \(H(\phi _j)\), which leaks no information unless H is queried on \(\phi _j\)). (b) Any \(j \not \in J\) cannot be claimed by R to be in the check set, because in this case R obtains exactly one of \(\{ u_{j,0}^\ell , u_{j,1}^\ell \}\) for every \(\ell \in [\sigma ]\) and thus is able to reconstruct at most one of \(\{ x_{j,0}, x_{j,1} \}\). This concludes the proof of security against corrupt receiver.
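The simulator's construction of the check set J and the flag from R's OT selections, as described above, can be sketched as follows. The function name `build_check_set` and the toy values are ours, not the paper's; the three test indices model an honest check index, an honest evaluation index, and a deviating index.

```python
def build_check_set(all_choices, b_bits):
    """Simulator's construction of J and flag from R's OT selections.
    all_choices[j][l] = (c0, c1) for instance j and OT index l;
    b_bits[l] = b'_l, R's XOR-sharing of its choice bit."""
    J, flag = set(), 0
    for j, choices in enumerate(all_choices):
        if any(c0 == c1 == 0 for c0, c1 in choices):
            J.add(j)                      # R may learn both x_{j,0}, x_{j,1}
        elif not all(c[b] == 0 and c[1 - b] == 1
                     for c, b in zip(choices, b_bits)):
            J.add(j)                      # deviation: reveal phase will abort
            flag = 1
    return J, flag

b_bits = [0, 1, 1]
check = [(0, 0), (0, 0), (0, 0)]          # honest check behaviour
evaluate = [(0, 1), (1, 0), (1, 0)]       # honest evaluation behaviour
deviate = [(0, 1), (0, 1), (1, 0)]        # wrong selection at l = 1
J, flag = build_check_set([check, evaluate, deviate], b_bits)
assert J == {0, 2} and flag == 1
```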
Privacy Against Corrupt Sender. The proof of privacy against corrupt sender is very similar to the corresponding proof for (the basic) CCOT. Specifically, note that except in the reveal phase, information flows only from S to R. Next, note that if S is honest, then the reveals made by R in the reveal phase do not leak any information about R’s input \(b_1\). (Recall that J is revealed to S in the real as well as the ideal execution.) It remains to be shown that even a corrupt S does not learn any information about \(b_1\). Clearly, for \(j \in J\), R’s actions are independent of its input \(b_1\) and thus do not leak any information. On the other hand, for \(j \not \in J\), observe that R does not reveal \(\tilde{x}_{j,b_1}\) (i.e., in the reveal phase), and thus the only information learnt by S is whether \(\varPsi = \varPhi = \emptyset \) or not. This translates to learning information about R’s input \(b_1\) only if for some (possibly many) \(j \in [n] , \ell \in [\sigma ]\), S provided \((u_{j,0}^\ell , u_{j,2}^\ell )\) in one instance of \(\mathcal {F}_{\mathrm {OT}}\) and \((u_{j,1}^\ell , \hat{u}_{j,2}^\ell )\) in the other instance with \(u_{j,2}^\ell \ne \hat{u}_{j,2}^\ell \). This is because such a strategy would allow S to learn whether R input \(c_{j,0}^\ell = 1\) (in which case R does not abort) or \(c_{j,1}^\ell = 1\) (in which case R does abort), and consequently leak information about \(b_{\ell }'\) (i.e., depending on which of \(c_{j,0}^\ell , c_{j,1}^\ell \) was 0 when \(j \not \in J\)). More generally, such a strategy allows S to learn any disjunctive predicate of R’s selections \(\{c_{j,0}^\ell , c_{j,1}^\ell \}_\ell \).
We describe a protocol for \(\mathcal {F}^{\mathrm {bat,sin}}_{\mathrm {ccot}}\) in the \(\mathcal {F}_{\mathrm {OT}}\)-hybrid model that makes use of an arbitrary XOR-tree encoding scheme in Fig. 6. The protocol itself is a straightforward extension combining ideas from the protocols in Figs. 4 and 5 while abstracting away the underlying encoding scheme. We now prove that the protocol \(\pi _{\mathrm {ccot}}^{\mathrm {bat,sin}}\) described in Fig. 6 realizes batch single-choice CCOT with sender-simulatability and receiver-privacy. We start by observing that correctness follows from the correctness properties of the XOR-tree encoding schemes (specifically, Property 3).

– For each \(j \in [n]\): samples uniform \(\{ ( \hat{x}_{j,0}^{(i)}, \hat{x}_{j,1}^{(i)} ) \}_{i \in [m]}\) and uniformly random \(\omega _j\), and computes \(\{ ( u_{j,0}^\ell , u_{j,1}^\ell ) \}_{\ell \in [m']} \leftarrow \mathsf {En}_{\omega _0}(\{ (\hat{x}_{j,0}^{(i)}, \hat{x}_{j,1}^{(i)} ) \}_{i\in [m]}; \omega _j)\).

– Chooses random \(\{ ( K_{\ell ,0}, K_{\ell ,1} ) \}_{\ell \in [m']}, \{ u_{j,2}^\ell \}_{j \in [n] , \ell \in [m']}\).

– For each \(\ell \in [m']\), acting as \(\mathcal {F}_{\mathrm {OT}}\), obtains the value \(b_\ell '\), returns key \(K_{\ell ,b_\ell '}\), and sets \(e_{j,b_\ell '}^\ell = \mathsf {Enc}(K_{\ell ,b_\ell '},u_{j,2}^\ell )\), \(e^\ell _{j, 1-b_\ell '} = \mathsf {Enc}(K_{\ell ,1-b_\ell '},\mathbf {0})\). Computes \((b_1,\ldots ,b_m) = \mathsf {De}'( \{b_\ell ' \}_{\ell \in [m']})\).

– For each \(j \in [n], \ell \in [m']\), acting as \(\mathcal {F}_{\mathrm {OT}}\), obtains values \(\{c_{j,0}^\ell , c_{j,1}^\ell \}\) and returns answers using values \(u_{j,0}^\ell , u_{j,1}^\ell , e_{j,b_\ell '}^\ell , e^\ell _{j, 1-b_\ell '}\) (computed as above) exactly as in the protocol.

– Initialize \(J = \emptyset \). For each \(j \in [n]\): If there exists \(\ell \in [m']\) such that \(c_{j,0}^\ell = c_{j,1}^\ell = 0\), then add j to J.

– Initialize \(\mathsf {flag}= 0\). For each \(j \not \in J\): If there exists \(\ell \in [m']\) for which \(c_{j,b_\ell '}^\ell = 0 \) and \(c_{j,1-b_\ell '}^\ell = 1\) do not both hold, then add j to J and set \(\mathsf {flag}= 1\).

– Send \((J, \{b_i\}_{i\in [m]})\) to the trusted party and receive back \(\{ ( x_{j,0}^{(i)}, x_{j,1}^{(i)} ) \}_{i \in [m], j \in J}\) and \(\{x_{j,b_i}^{(i)}\}_{ i \in [m], j \not \in J}\).

– For each \(j \in J\), do: (1) for each \(i \in [m]\), set \(\varDelta _{j,0}'^{(i)} = x_{j,0}^{(i)} {\oplus }\hat{x}_{j,0}^{(i)}\) and \(\varDelta _{j,1}'^{(i)} = x_{j,1}^{(i)} {\oplus }\hat{x}_{j,1}^{(i)}\), and (2) pick random \(\varDelta _{j,\phi }'\) and random \(h_j' \leftarrow \{0,1\}^\lambda \).

– For each \(j \not \in J\), do: (1) for each \(i \in [m]\): set \(\varDelta _{j,b_i}'^{(i)} = x_{j,b_i}^{(i)} {\oplus }\hat{x}_{j,b_i}^{(i)}\) and pick random \(\varDelta _{j,1-b_i}'^{(i)} \leftarrow \{0,1\}^\lambda \), and (2) pick random \( \varDelta _{j,\phi }' \leftarrow \{0,1\}^\lambda \), and set \(h_j' = H(\varDelta _{j,\phi }' {\oplus }{\bigoplus }_\ell u_{j,2}^\ell )\).

– For each \(j \in [n]\): send \(\{ \varDelta _{j,0}'^{(i)}, \varDelta _{j,1}'^{(i)} \}_{i \in [m]}, \varDelta _{j,\phi }', h_j'\) to R.

– In the reveal phase, if \(\mathsf {flag}= 1\) or if R sends \((J',\varPsi ',\varPhi ')\) such that \(J' \ne J\) or the values \(\varPsi ', \varPhi '\) are not consistent with the values above, then abort the reveal phase. Else, send “reveal” to the trusted party.
We show that the simulation is indistinguishable from the real execution. First, note that if for some \(j \in [n], \ell \in [m']\), it holds that \(c_{j,0}^\ell = c_{j,1}^\ell = 0\), then R receives both \(u_{j,0}^\ell \) and \(u_{j,1}^\ell \), but does not obtain \(u_{j,2}^\ell \). In this case, it is safe to presume that R will end up knowing both \(x_{j,0}^{(i)}\) and \(x_{j,1}^{(i)}\), but R definitely misses an additive share of (and consequently has no information about) \(\phi _j\). Therefore, \(\mathcal {S}\) includes j in J and obtains both \(x_{j,0}^{(i)}, x_{j,1}^{(i)}\) from the trusted party. This allows \(\mathcal {S}\) to carry out a correct simulation irrespective of whether or not R obtained both \(x_{j,0}^{(i)}\) and \(x_{j,1}^{(i)}\).
Next, suppose that for some \(j \in [n]\), there is no \(\ell \in [m']\) for which \(c_{j,0}^\ell = c_{j,1}^\ell = 0\), and yet there exists some \(\ell \in [m']\) for which \(c_{j,b_\ell '}^\ell = 0\) or \(c_{j,1-b_\ell '}^\ell = 1\) does not hold. In this case, we claim that R will not be able to produce both \(x_{j,0}^{(i)}, x_{j,1}^{(i)}\) in the real execution. This follows from the properties of the XOR-tree encoding schemes and the fact that R misses one of \(u_{j,0}^\ell , u_{j,1}^\ell \). Further, it can be shown that R cannot produce \(\phi _j\) either, except with negligible probability. This is because (1) R does not obtain \(e_{j, b_\ell '}^\ell \), (2) R has no information about the plaintext encrypted as \(e_{j,1-b_\ell '}^\ell \), and (3) \(h_j = H(\phi _j)\) does not reveal any information about \(\phi _j\) except with statistically negligible probability (i.e., unless H is queried on \(\phi _j\)). Point (2) above trivially holds in the simulation because \(e_{j,1-b_\ell '}^\ell \) encrypts \(\mathbf {0}\) instead of \(u_{j,2}^\ell \). On the other hand, in the real execution, observe that R does not possess the key \(K_{\ell , 1-b_\ell '}\). It then follows from a straightforward reduction to the semantic security of the encryption scheme that the real execution is indistinguishable from the simulation. In particular, in this case R will not be able to produce \((J',\varPsi ',\varPhi ')\) that will be accepted by S in the real execution, which is equivalent to \(\mathcal {S}\) sending abort in the ideal execution.
Finally, suppose that for every \(j \in [n]\), either (1) for all \(\ell \in [m']\) it holds that \(c_{j,0}^\ell = c_{j,1}^\ell = 0\), or (2) for all \(\ell \in [m']\) it holds that \(c_{j,b_\ell '}^\ell = 0\) and \(c_{j,1-b_\ell '}^\ell = 1\). This indeed corresponds to honest behavior on the part of R. Specifically, in case (1), we have \(j \in J\), and in case (2), we have \(j \not \in J\). This is exactly how the simulator constructs J. It remains to be shown that in this case, any reveal \((J',\varPsi ',\varPhi ')\) such that \(J' \ne J\) or such that \(\varPsi ',\varPhi '\) are not consistent with the simulation will be rejected by S in the real execution. This follows from: (a) Any \(j \in J\) cannot be claimed by R to not be in the check set, because in this case R does not have any information about \(\phi _j\) (other than \(H(\phi _j)\), which leaks no information unless H is queried on \(\phi _j\)). (b) Any \(j \not \in J\) cannot be claimed by R to be in the check set, because in this case R obtains exactly one of \(\{ u_{j,0}^\ell , u_{j,1}^\ell \}\) for every \(\ell \in [m']\) and thus, by Property 7 of XOR-tree encoding schemes, is able to reconstruct at most one of \(\{ x_{j,0}^{(i)}, x_{j,1}^{(i)} \}\).
Privacy Against Corrupt Sender. The proof of privacy against corrupt sender is very similar to the corresponding proof for (the basic) CCOT. Specifically, note that except in the reveal phase, information flows only from S to R. Next, note that if S is honest, then the reveals made by R in the reveal phase do not leak any information about R’s input \(b_1,\ldots ,b_m\). (Recall that J is revealed to S in the real as well as the ideal execution.) It remains to be shown that even a corrupt S does not learn any information about \(b_1,\ldots ,b_m\). Clearly, for \(j \in J\), R’s actions are independent of its inputs \(b_1,\ldots ,b_m\) and thus do not leak any information. On the other hand, for \(j \not \in J\), observe that R does not reveal any information about \(\{ \tilde{x}_{j,b_i}^{(i)} \}_{i \in [m]}\) (i.e., in the reveal phase), and thus the only information learnt by S is whether \(\varPsi = \varPhi = \emptyset \) or not. This translates to learning information about R’s inputs \(b_1,\ldots ,b_m\) only if for some (possibly many) \(j \in [n] , \ell \in [m']\), S provided \((u_{j,0}^\ell , u_{j,2}^\ell )\) in one instance of \(\mathcal {F}_{\mathrm {OT}}\) and \((u_{j,1}^\ell , \hat{u}_{j,2}^\ell )\) in the other instance with \(u_{j,2}^\ell \ne \hat{u}_{j,2}^\ell \). This is because such a strategy would allow S to learn whether R input \(c_{j,0}^\ell = 1\) (in which case R does not abort) or \(c_{j,1}^\ell = 1\) (in which case R does abort), and consequently leak information about \(b_{\ell }'\) (i.e., depending on which of \(c_{j,0}^\ell , c_{j,1}^\ell \) was 0 when \(j \not \in J\)). More generally, such a strategy allows S to learn any disjunctive predicate of R’s selections \(\{c_{j,0}^\ell , c_{j,1}^\ell \}_\ell \).
To prove that such a strategy does not help S, we make use of Property 6 of XOR-tree encoding schemes. Thus we have that if S supplied inconsistent values (i.e., \(u_2^\ell , \hat{u}_2^\ell \)) in at most \(\sigma -1\) instances, then S does not learn any information about \(b_1,\ldots ,b_m\) in the reveal phase. Further, even if S supplied inconsistent values in all instances, then with all but negligible probability (exponentially small in \(\sigma \)) R will abort in the reveal phase (irrespective of R’s true input \(b_1,\ldots ,b_m\)). This concludes the proof of privacy against corrupt sender.

For every \(j \in [n]\) for which there exists a unique \(\alpha \in [t]\) such that \(j \in E_\alpha '\), add j to \(E_\alpha \).

For every \(j \in [n]\) for which there exist distinct \(\alpha ,\beta \in [t]\) such that \(j \in E_\alpha ' \cap E_\beta '\), add j to J and set \(\mathsf {flag}= 1\).

– If \(j \in E_\alpha '\) for some unique \(\alpha \in [t]\), then R does not have any information about \(\phi _j^\beta \) for any \(\beta \ne \alpha \). Thus, it can successfully reveal \((E_1'',\ldots , E_t'')\) with \(j \in E_\beta ''\) for \(\beta \ne \alpha \) only with probability negligible in \(\sigma \). More precisely, in this case R will not be able to provide \(\varPhi _\beta \) consistent with \((E_1'',\ldots ,E_t'')\). That is, if \(j \in E_\alpha '\) for some unique \(\alpha \in [t]\), then for every reveal \((E_1'',\ldots ,E_t'')\) that is accepted by the sender, it must hold that \(j \in E_\alpha ''\). Stated differently, if for every \(j \in [n]\) there exists a unique \(\alpha \in [t]\) such that \(j \in E_\alpha '\), then R can successfully reveal \((E_1'',\ldots ,E_t'')\) only for \((E_1'',\ldots , E_t'') = (E_1',\ldots , E_t')\). Recall that in this case the simulator \(\mathcal {S}\) set \(\mathsf {flag}= 0\) and thus will reveal \((E_1,\ldots ,E_t) = (E_1',\ldots ,E_t')\) in the reveal phase. Therefore, in this case the real protocol is indistinguishable from the ideal simulation.

– If \(j \in E_\alpha ' \cap E_\beta '\) for \(\alpha \ne \beta \), then one of \(x_{j,0}^{(i,\beta )},x_{j,1}^{(i,\beta )}\) (alternatively, one of \(x_{j,0}^{(i,\alpha )},x_{j,1}^{(i,\alpha )}\)) is information-theoretically hidden from R. Thus, it can successfully reveal \((E_1'',\ldots , E_t'')\) with \(j \in E_\beta ''\) (resp. \(j\in E_\alpha ''\)) only if it guesses the missing value, i.e., with probability negligible in \(\lambda \). More precisely, in this case R will not be able to provide \(\varPsi _\beta \) (resp. \(\varPsi _\alpha \)) consistent with \((E_1'',\ldots ,E_t'')\). In other words, if \(j \in E_\alpha ' \cap E_\beta '\) for \(\alpha \ne \beta \), then for every reveal \((E_1'',\ldots ,E_t'')\) that is accepted by the sender, it must hold that \(j \not \in \cup _k E_k''\). Indeed, it can be observed that any reveal by R will be rejected by S. In particular, R cannot reveal \(j \not \in \cup _k E_k''\) either, since in this case it will be required to produce both \(x_{j,0}^{(i,k)}, x_{j,1}^{(i,k)}\) for every \(k \in [t]\). As pointed out earlier, R cannot do this except with negligible probability for \(k \in \{\alpha ,\beta \}\). Recall that in this case the simulator \(\mathcal {S}\) set \(\mathsf {flag}= 1\) and thus will abort in the reveal phase. Therefore, in this case the real protocol is indistinguishable from the ideal simulation.
Additional Optimizations. Instead of sending the values \(\varPsi = \{ \tilde{x}_{j,0}, \tilde{x}_{j,1} \}_{j \in J}\) and \(\varPhi = \{ \tilde{\phi }_j \}_{j \not \in J} \), R could send \(( J , H'( \varPsi ), H'' ( \varPhi ) )\) to S, where \(H', H''\) are modeled as collision-resistant hash functions (alternatively, random oracles). These optimizations apply in a straightforward way to the other constructions we present; we omit detailing them to keep the exposition clear.
In applications to secure computation, full receiver simulation in CCOT is also not required: we require only privacy, i.e., we do not need to consider the joint distribution of the receiver’s view and the sender’s inputs. This is because the sender’s inputs are just random keys for the garbled circuits, and in the simulation of the 2PC protocol it is the simulator that generates these keys. On the other hand, extracting the receiver’s inputs is crucial in order to enable the simulator to generate correctly faked garbled circuits. Our definitions therefore require full receiver simulation (including extraction). Fortunately, achieving full receiver simulation comes with only a small multiplicative overhead.
Summary of Efficiency. All our protocols are presented in the \(\mathcal {F}_{\mathrm {OT}}\)-hybrid model and thus can take advantage of OT extension techniques. Further, using standard leveraging techniques (such as the ones used in [9]), the OT extension of [14], the XOR-tree encoding scheme of [13], and the constructions in Figs. 4, 5, 6, and 7, one can obtain a rate-1/6 construction for \(\mathcal {F}^{\mathrm {bat,sin}}_{\mathrm {ccot}}\) (in the non-programmable RO model) with sender-simulatability and receiver-privacy as in Definition 4. In concrete terms, it is easy to verify that the additional overhead of realizing \(\mathcal {F}^{\mathrm {bat,sin}}_{\mathrm {ccot}}\) is \(\le 6 \cdot \max (4m,8\sigma )\). The efficiency of our CCOT protocol in the single execution setting is comparable to that of the XOR-tree encodings of [23], but is clearly better than DDH-based CCOT [22, 24] since we take advantage of OT extension (under the assumption that correlation-robust hash functions exist [12, 14, 29]). Finally, we can realize \(\mathcal {F}_{\mathrm {mcot}}^+\) (in the non-programmable random oracle model) with sender-simulatability and receiver-privacy as in Definition 4 while bearing an overhead of at most t over the cost of realizing \(\mathcal {F}^{\mathrm {bat,sin}}_{\mathrm {ccot}}\), where t denotes the number of executions.
References
 1.Aiello, W., Ishai, Y., Reingold, O.: Priced oblivious transfer: how to sell digital goods. In: Pfitzmann, B. (ed.) EUROCRYPT 2001. LNCS, vol. 2045, pp. 119–135. Springer, Heidelberg (2001) CrossRefGoogle Scholar
 2.Applebaum, B., Ishai, Y., Kushilevitz, E., Waters, B.: Encoding functions with constant online rate or how to compress garbled circuits keys. In: Canetti, R., Garay, J.A. (eds.) CRYPTO 2013, Part II. LNCS, vol. 8043, pp. 166–184. Springer, Heidelberg (2013) CrossRefGoogle Scholar
 3.Bellare, M., Hoang, V.T., Rogaway, P.: Adaptively secure garbling with applications to one-time programs and secure outsourcing. In: Wang, X., Sako, K. (eds.) ASIACRYPT 2012. LNCS, vol. 7658, pp. 134–153. Springer, Heidelberg (2012)
 4.Cachin, C., Micali, S., Stadler, M.A.: Computationally private information retrieval with polylogarithmic communication. In: Stern, J. (ed.) EUROCRYPT 1999. LNCS, vol. 1592, pp. 402–414. Springer, Heidelberg (1999)
 5.David, B.M., Nishimaki, R., Ranellucci, S., Tapp, A.: Generalizing efficient multiparty computation. In: Lehmann, A., Wolf, S. (eds.) Information Theoretic Security. LNCS, vol. 9063, pp. 15–32. Springer, Heidelberg (2015)
 6.Feige, U., Kilian, J., Naor, M.: A minimal model for secure computation (extended abstract). In: STOC, pp. 554–563 (1994)
 7.Frederiksen, T.K., Jakobsen, T.P., Nielsen, J.B.: Faster maliciously secure two-party computation using the GPU. In: Abdalla, M., De Prisco, R. (eds.) SCN 2014. LNCS, vol. 8642, pp. 358–379. Springer, Heidelberg (2014)
 8.Freedman, M.J., Ishai, Y., Pinkas, B., Reingold, O.: Keyword search and oblivious pseudorandom functions. In: Kilian, J. (ed.) TCC 2005. LNCS, vol. 3378, pp. 303–324. Springer, Heidelberg (2005)
 9.Garay, J.A., Ishai, Y., Kumaresan, R., Wee, H.: On the complexity of UC commitments. In: Nguyen, P.Q., Oswald, E. (eds.) EUROCRYPT 2014. LNCS, vol. 8441, pp. 677–694. Springer, Heidelberg (2014)
 10.Huang, Y., Katz, J., Kolesnikov, V., Kumaresan, R., Malozemoff, A.J.: Amortizing garbled circuits. In: Garay, J.A., Gennaro, R. (eds.) CRYPTO 2014, Part II. LNCS, vol. 8617, pp. 458–475. Springer, Heidelberg (2014)
 11.Huang, Y., Katz, J., Evans, D.: Efficient secure two-party computation using symmetric cut-and-choose. In: Canetti, R., Garay, J.A. (eds.) CRYPTO 2013, Part II. LNCS, vol. 8043, pp. 18–35. Springer, Heidelberg (2013)
 12.Ishai, Y., Kilian, J., Nissim, K., Petrank, E.: Extending oblivious transfers efficiently. In: Boneh, D. (ed.) CRYPTO 2003. LNCS, vol. 2729, pp. 145–161. Springer, Heidelberg (2003)
 13.Ishai, Y., Kushilevitz, E., Ostrovsky, R., Prabhakaran, M., Sahai, A.: Efficient non-interactive secure computation. In: Paterson, K.G. (ed.) EUROCRYPT 2011. LNCS, vol. 6632, pp. 406–425. Springer, Heidelberg (2011)
 14.Ishai, Y., Prabhakaran, M., Sahai, A.: Founding cryptography on oblivious transfer – efficiently. In: Wagner, D. (ed.) CRYPTO 2008. LNCS, vol. 5157, pp. 572–591. Springer, Heidelberg (2008)
 15.Kilian, J.: Founding cryptography on oblivious transfer. In: STOC, pp. 20–31 (1988)
 16.Kolesnikov, V., Kumaresan, R.: Improved OT extension for transferring short secrets. In: Canetti, R., Garay, J.A. (eds.) CRYPTO 2013, Part II. LNCS, vol. 8043, pp. 54–70. Springer, Heidelberg (2013)
 17.Kolesnikov, V.: Gate evaluation secret sharing and secure one-round two-party computation. In: Roy, B. (ed.) ASIACRYPT 2005. LNCS, vol. 3788, pp. 136–155. Springer, Heidelberg (2005)
 18.Kolesnikov, V., Schneider, T.: Improved garbled circuit: free XOR gates and applications. In: Aceto, L., Damgård, I., Goldberg, L.A., Halldórsson, M.M., Ingólfsdóttir, A., Walukiewicz, I. (eds.) ICALP 2008, Part II. LNCS, vol. 5126, pp. 486–498. Springer, Heidelberg (2008)
 19.Kreuter, B., Shelat, A., Shen, C.: Billion-gate secure computation with malicious adversaries. In: USENIX Security Symposium (2012)
 20.Kushilevitz, E., Ostrovsky, R.: Replication is NOT needed: SINGLE database, computationally-private information retrieval. In: FOCS, pp. 364–373 (1997)
 21.Lindell, Y., Riva, B.: Cut-and-choose Yao-based secure computation in the online/offline and batch settings. In: Garay, J.A., Gennaro, R. (eds.) CRYPTO 2014, Part II. LNCS, vol. 8617, pp. 476–494. Springer, Heidelberg (2014)
 22.Lindell, Y.: Fast cut-and-choose based protocols for malicious and covert adversaries. In: Canetti, R., Garay, J.A. (eds.) CRYPTO 2013, Part II. LNCS, vol. 8043, pp. 1–17. Springer, Heidelberg (2013)
 23.Lindell, Y., Pinkas, B.: An efficient protocol for secure two-party computation in the presence of malicious adversaries. In: Naor, M. (ed.) EUROCRYPT 2007. LNCS, vol. 4515, pp. 52–78. Springer, Heidelberg (2007)
 24.Lindell, Y., Pinkas, B.: Secure two-party computation via cut-and-choose oblivious transfer. In: Ishai, Y. (ed.) TCC 2011. LNCS, vol. 6597, pp. 329–346. Springer, Heidelberg (2011)
 25.Lindell, Y., Pinkas, B., Smart, N.P.: Implementing two-party computation efficiently with security against malicious adversaries. In: Ostrovsky, R., De Prisco, R., Visconti, I. (eds.) SCN 2008. LNCS, vol. 5229, pp. 2–20. Springer, Heidelberg (2008)
 26.Mohassel, P., Riva, B.: Garbled circuits checking garbled circuits: more efficient and secure two-party computation. In: Canetti, R., Garay, J.A. (eds.) CRYPTO 2013, Part II. LNCS, vol. 8043, pp. 36–53. Springer, Heidelberg (2013)
 27.Mossel, E., Shpilka, A., Trevisan, L.: On ε-biased generators in NC0. In: FOCS, pp. 136–145 (2003)
 28.Naor, M., Pinkas, B.: Efficient oblivious transfer protocols. In: SODA, pp. 448–457 (2001)
 29.Nielsen, J.B., Nordholt, P.S., Orlandi, C., Burra, S.S.: A new approach to practical active-secure two-party computation. In: Safavi-Naini, R., Canetti, R. (eds.) CRYPTO 2012. LNCS, vol. 7417, pp. 681–700. Springer, Heidelberg (2012)
 30.Pinkas, B., Schneider, T., Smart, N.P., Williams, S.C.: Secure two-party computation is practical. In: Matsui, M. (ed.) ASIACRYPT 2009. LNCS, vol. 5912, pp. 250–267. Springer, Heidelberg (2009)
 31.Shelat, A., Shen, C.: Two-output secure computation with malicious adversaries. In: Paterson, K.G. (ed.) EUROCRYPT 2011. LNCS, vol. 6632, pp. 386–405. Springer, Heidelberg (2011)
 32.Shelat, A., Shen, C.: Fast two-party secure computation with minimal assumptions. In: CCS, pp. 523–534 (2013)
 33.Yao, A.: How to generate and exchange secrets. In: FOCS, pp. 162–167 (1986)