
1 Introduction

Oblivious transfer (OT), introduced by Rabin [37], is one of the most fundamental primitives in cryptography and a complete primitive for secure multi-party computation [14, 43]. Oblivious transfer protocols are known to exist assuming (several types of) families of trapdoor permutations [12, 18], learning with errors [35], decisional Diffie-Hellman [1, 33], computational Diffie-Hellman [4] and quadratic residuosity [28]. While some constructions of OT in the literature immediately yield a full-fledged OT, others only yield a “weak” form of OT, that is later “amplified” into a full-fledged one.

In this paper we introduce a new notion for a “weak form of OT”, and show how to amplify this “weak OT” into full-fledged OT. This notion is more “fine grained” than some previously suggested notions, which allows us to obtain OT in scenarios that could not be handled by previous works. Our approach is suitable both for the computational and for the information theoretic settings (i.e., whether or not the dishonest parties are assumed to be computationally bounded).

1.1 Our Results

We start by presenting our results in the information theoretic setting, and then move to the computational one.

1.1.1 The Information Theoretic Setting

The information theoretic analogue of a two-party protocol between parties \(\mathsf {A} \) and \(\mathsf {B} \), is a “channel”: namely, a quadruple of random variables \(C=((V^\mathsf {A} ,O^{\mathsf {A} }),(V^\mathsf {B} ,O^{\mathsf {B} }))\), with the interpretation that when “activating” (or “calling”) the channel C, party \(\mathrm {P}\in \left\{ \mathsf {A} ,\mathsf {B} \right\} \) receives his “output” \(O^{\mathsf {P} }\) and his “view” \(V^\mathsf {P} \). In other words, “activating a channel” is analogous to running a two-party protocol with fresh randomness. (We assume that the view \(V^\mathsf {P} \) contains the output \(O^{\mathsf {P} }\)).

Log-Ratio Leakage (Channels). We are interested in the special case where the channel \(C=((V^\mathsf {A} ,O^{\mathsf {A} }),(V^\mathsf {B} ,O^{\mathsf {B} }))\) has Boolean outputs (i.e., \(O^{\mathsf {A} },O^{\mathsf {B} }\in \left\{ 0,1\right\} \)), and assume for simplicity that the channel is balanced, meaning that for both \(\mathsf {P} \in \left\{ \mathsf {A} ,\mathsf {B} \right\} \), \(O^{\mathsf {P} }\) is uniformly distributed. Such channels are parameterized by their agreement and leakage:

  • A channel C has \(\alpha \)-agreement if \(\Pr [O^{\mathsf {A} }=O^{\mathsf {B} }] = \tfrac{1}{2}+ \alpha \). (Without loss of generality, \(\alpha \ge 0\), as otherwise one of the parties can flip his output).

  • The leakage of party \(\mathsf {B} \) in C is the distance between the distributions \(V^\mathsf {A} |_{O^{\mathsf {A} }= O^{\mathsf {B} }}\) and \(V^\mathsf {A} |_{O^{\mathsf {A} }\ne O^{\mathsf {B} }}\). (Note that these two distributions are well defined if \(\alpha \in [0,\tfrac{1}{2})\)). The leakage of party \(\mathsf {A} \) is defined in an analogous way, and the leakage of C is the maximum of the two leakages.

This approach (with somewhat different notation) was taken by past work [40, 41], using statistical distance as the distance measure.

Loosely speaking, leakage measures how well a party can distinguish the case \(\left\{ O^{\mathsf {A} }=O^{\mathsf {B} }\right\} \) from the case \(\left\{ O^{\mathsf {A} }\ne O^{\mathsf {B} }\right\} \). As each party knows his output, this can be thought of as the “amount of information” on the input of one party that leaks to the other party.Footnote 1

We will measure leakage using a different distance measure, which we refer to as “log-ratio distance”.

Definition 1.1

(Log-Ratio distance). Two numbers \(p_0,p_1 \in [0,1]\) satisfy \(p_0 \mathbin {{\mathop {\approx }\limits ^\mathrm{R}}}_{\epsilon ,\delta } p_1\) if for both \(b \in \left\{ 0,1\right\} \): \(p_b \le e^{\epsilon } \cdot p_{1-b} + \delta \). Two distributions \(D_0,D_1\) over the same domain \(\varOmega \), are \((\epsilon ,\delta )\) -log-ratio-close (denoted \(D_0 \mathbin {{\mathop {\approx }\limits ^\mathrm{R}}}_{\epsilon ,\delta } D_1\)) if for every \(A \subseteq \varOmega \):

$$\begin{aligned} \Pr [D_0 \in A] \mathbin {{\mathop {\approx }\limits ^\mathrm{R}}}_{\epsilon ,\delta } \Pr [D_1 \in A]. \end{aligned}$$

We use the notation \(D_0 \mathbin {{\mathop {\approx }\limits ^\mathrm{S}}}_{\delta } D_1\) to say that the statistical distance between \(D_0\) and \(D_1\) is at most \(\delta \). Log-ratio distance is a generalization of statistical distance as \(\mathbin {{\mathop {\approx }\limits ^\mathrm{S}}}_{\delta }\) is the same as \(\mathbin {{\mathop {\approx }\limits ^\mathrm{R}}}_{0,\delta }\). This measure of distance was popularized by its use in the differential privacy framework [10] (that we discuss in Sect. 1.1.3).

Loosely speaking, log-ratio distance considers the “log-ratio function” \(L_{D_0||D_1}(x) :=\log \frac{\Pr \left[ D_0=x\right] }{\Pr \left[ D_1=x\right] }\), and the two distributions are \((\epsilon ,\delta )\)-log-ratio-close if this function is in the interval \([-\epsilon ,\epsilon ]\) with probability \(1-\delta \). As such, it can be seen as a “cousin” of relative entropy (also known as Kullback–Leibler (KL) divergence), which measures the expectation of the log-ratio function.

Note that for \(\epsilon \in [0,1]\), \(D_0 \mathbin {{\mathop {\approx }\limits ^\mathrm{R}}}_{\epsilon ,0} D_1\) implies \(D_0 \mathbin {{\mathop {\approx }\limits ^\mathrm{R}}}_{0,2\epsilon } D_1\), but the converse is not true, and the condition \(D_0 \mathbin {{\mathop {\approx }\limits ^\mathrm{R}}}_{\epsilon ,0} D_1\) gives a tighter handle on the distance between independent samples of the distributions (as we explain in detail in Sect. 2.1).
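For distributions over a finite domain, log-ratio closeness can be checked without enumerating all subsets A: for a fixed \(\epsilon \), the constraint is hardest to satisfy on the set \(A = \{x : \Pr [D_0 = x] > e^{\epsilon } \cdot \Pr [D_1 = x]\}\), exactly as in the computation of the \(\delta \) parameter in differential privacy. The following Python sketch illustrates this (the function names are ours; distributions are dictionaries mapping atoms to probabilities):

```python
import math

def delta_at_eps(p, q, eps):
    # Smallest delta with Pr[p in A] <= e^eps * Pr[q in A] + delta for all A;
    # the maximizing set is A = {x : p(x) > e^eps * q(x)}.
    return sum(max(p_x - math.exp(eps) * q.get(x, 0.0), 0.0)
               for x, p_x in p.items())

def log_ratio_close(p, q, eps, delta):
    # D0 ~R_{eps,delta} D1 requires the bound in both directions.
    return delta_at_eps(p, q, eps) <= delta and delta_at_eps(q, p, eps) <= delta

def statistical_distance(p, q):
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(x, 0.0) - q.get(x, 0.0)) for x in support)

# U_{1/2} vs U_{1/2+0.05}: (eps,0)-close for eps = ln(0.5/0.45) ~ 0.105,
# while the statistical distance is only 0.05.
D0, D1 = {0: 0.5, 1: 0.5}, {0: 0.45, 1: 0.55}
```

Note that, consistent with the remark above, an \((\epsilon ,0)\)-close pair such as \(D_0,D_1\) has statistical distance well below \(2\epsilon \).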

We use the log-ratio distance to measure leakage in channels. This leads to the following definition (in which we substitute “log-ratio distance” as a distance measure).

Definition 1.2

(Log-ratio leakage, channels, informal). A channel \(C=((V^\mathsf {A} ,O^{\mathsf {A} }),(V^\mathsf {B} ,O^{\mathsf {B} }))\) has log-ratio leakage \((\epsilon ,\delta )\), denoted \((\epsilon ,\delta )\)-leakage, if for both \(\mathsf {P} \in \left\{ \mathsf {A} ,\mathsf {B} \right\} \):

$$\begin{aligned} V^\mathsf {P} |_{O^{\mathsf {A} }=O^{\mathsf {B} }} \mathbin {{\mathop {\approx }\limits ^\mathrm{R}}}_{\epsilon ,\delta } V^\mathsf {P} |_{O^{\mathsf {A} }\ne O^{\mathsf {B} }} . \end{aligned}$$

This definition is related to (and inspired by) the differential privacy framework [10]. In the terminology of differential privacy, it can be restated as follows: let E be the indicator variable for the event \(\left\{ O^{\mathsf {A} }=O^{\mathsf {B} }\right\} \). For both \(\mathsf {P} \in \left\{ \mathsf {A} ,\mathsf {B} \right\} \), the “mechanism” \(V^\mathsf {P} \) is \((\epsilon ,\delta )\)-differentially private with regards to the “secret”/“database” E.

Channels of Small Log-Ratio Leakage Imply OT. Wullschleger [41] considered channels with small leakage (measured by statistical distance). Using our terminology, he showed that for \(\alpha \in [0,\tfrac{1}{2})\) and \(\epsilon \in [0,1]\) with \(\epsilon \) “sufficiently smaller than” \(\alpha ^2\), a channel with \(\alpha \)-agreement and \((0,\epsilon )\)-leakage yields OT. This can be interpreted as saying that if the leakage \(\epsilon \) is sufficiently smaller than the agreement \(\alpha \), then the channel yields OT. We prove the following “fine grained” amplification result, which is restated with precise notation in Theorem 4.2.

Theorem 1.3

(Channels of small log-ratio leakage imply OT, informal). There exists a constant \(c_1>0\) such that the following holds for every \(\epsilon ,\delta ,\alpha \) with \( c_1 \cdot \epsilon ^2 \le \alpha < 1/8 \) and \(\delta \le \epsilon ^2\): a channel C that has \(\alpha \)-agreement and \((\epsilon ,\delta )\)-\({\text {leakage}}\) yields OT (of statistical security).

For simplicity, let us focus on Theorem 1.3 in the case that \(\delta =0\). Two distributions that are \((\epsilon ,0)\)-log-ratio close may have statistical distance \(\epsilon \), and so a channel with \((\epsilon ,0)\)-leakage can only be assumed to have \((0,\epsilon )\)-leakage (when measuring leakage in statistical distance). Nevertheless, in contrast to [41], Theorem 1.3 allows the leakage parameter \(\epsilon \) to be larger than the agreement parameter \(\alpha \).Footnote 2

The above can be interpreted as saying that when the leakage is “well behaved” (that is, when the \(\delta \) parameter in log-ratio distance is sufficiently small), OT can be obtained even from a channel whose leakage \(\epsilon \) is much larger than the agreement \(\alpha \). This property will be the key for our applications in Sect. 1.1.3.

Triviality of Channels with Large Leakage. We now observe that the relationship between \(\epsilon \) and \(\alpha \) in Theorem 1.3 is best possible (up to constants). Namely, a channel with agreement that is asymptotically smaller than the one allowed in Theorem 1.3 does not necessarily yield OT.

Theorem 1.4

(Triviality of channels with large leakage, informal). There exists a constant \(c_2>0\), such that the following holds for every \(\epsilon >0\): there exists a two-party protocol (with no inputs) that when it ends, party \(\mathsf {P} \in \left\{ \mathsf {A} ,\mathsf {B} \right\} \) outputs \(O^{\mathsf {P} }\) and sees view \(V^\mathsf {P} \), and the induced channel \(C=((V^\mathsf {A} ,O^{\mathsf {A} }),(V^\mathsf {B} ,O^{\mathsf {B} }))\) has \((c_2 \cdot \epsilon ^2)\)-agreement and \((\epsilon ,0)\)-leakage.

Together, the two theorems say that our characterization of “weak-OT” using agreement \(\alpha \) and \((\epsilon ,0)\)-log-ratio leakage has a “threshold behavior” at \(\alpha \approx \epsilon ^2\): if \(\alpha \ge c_1 \cdot \epsilon ^2\) then the channel yields OT, and if \(\alpha \le c_2 \cdot \epsilon ^2\) then such a channel can be simulated by a two-party protocol with no inputs (and thus cannot yield OT with information theoretic security). The proof of Theorem 1.4 uses a variant of the well-known randomized response approach of Warner [38].

1.1.2 The Computational Setting

We consider a no-input, Boolean output, two-party protocol \(\varPi =(\mathsf {A} ,\mathsf {B} )\). Namely, both parties receive a security parameter \(1^\kappa \) as a common input, get no private input, and both output one bit. We denote the output of party \(\mathsf {P} \) by \(O^{\mathsf {P} }_\kappa \), and its view by \(V^\mathsf {P} _\kappa \). In other words, an instantiation of \(\varPi (1^\kappa )\) can be thought of as inducing a channel \(C_\kappa =((V^\mathsf {A} _\kappa ,O^{\mathsf {A} }_\kappa ),(V^\mathsf {B} _\kappa ,O^{\mathsf {B} }_\kappa ))\). Similar to the information theoretic setting, protocol \(\varPi \) has \(\alpha \)-\({\text {agreement}}\) if for every \(\kappa \in {\mathbb {N}}\): \(\Pr \left[ O^{\mathsf {A} }_\kappa = O^{\mathsf {B} }_\kappa \right] = 1/2 + \alpha (\kappa )\).

Log-Ratio Leakage (Protocols). We extend the definition of log-ratio leakage to the computational setting (where adversaries are ppt machines), using the simulation paradigm.

Definition 1.5

(Log-ratio leakage, protocols, informal). A two-party no-input Boolean output protocol \(\varPi = (\mathsf {A} ,\mathsf {B} )\) has Comp-log-ratio leakage \((\epsilon ,\delta )\), denoted \((\epsilon ,\delta )\)-comp-leakage, if there exists an “ideal channel” ensemble \(\widetilde{C}=\left\{ \widetilde{C}_\kappa =((V^{\widetilde{\mathsf {A} }}_\kappa ,O^{\widetilde{\mathsf {A} }}_\kappa ),(V^{\widetilde{\mathsf {B} }}_\kappa ,O^{\widetilde{\mathsf {B} }}_\kappa ))\right\} _{\kappa \in {\mathbb {N}}}\) such that the following holds:

  • For every \(\kappa \in {\mathbb {N}}\): the channel \(\widetilde{C}_\kappa \) has \((\epsilon (\kappa ),\delta (\kappa ))\)-leakage.

  • For every \(\mathsf {P} \in \left\{ \mathsf {A} ,\mathsf {B} \right\} \): the ensembles \(\left\{ {V^\mathsf {P} _\kappa ,O^{\mathsf {A} }_\kappa ,O^{\mathsf {B} }_\kappa }\right\} _{\kappa \in {\mathbb {N}}}\) and \(\left\{ V^{\widetilde{\mathsf {P} }}_\kappa ,O^{\widetilde{\mathsf {A} }}_\kappa ,O^{\widetilde{\mathsf {B} }}_\kappa \right\} _{\kappa \in {\mathbb {N}}}\) are computationally indistinguishable.Footnote 3

Protocols of Small Log-Ratio Leakage Imply OT. We prove the following computational analogue of Theorem 1.3.

Theorem 1.6

(Amplification of protocols with small log-ratio leakage, informal). There exists a constant \(c_1>0\) such that the following holds for all functions \(\epsilon ,\delta ,\alpha \) with \( c_1 \cdot \epsilon (\kappa )^2 \le \alpha (\kappa ) < 1/8\), \(\delta (\kappa ) \le \epsilon (\kappa )^2\) and \(1/\alpha (\kappa ) \in {\text {poly}}(\kappa )\): a ppt protocol that has \(\alpha \)-agreement and \((\epsilon ,\delta )\)-\({\text {comp-leakage}}\) yields OT (of computational security).

Triviality of Protocols with Large Leakage. An immediate corollary of Theorem 1.4 is that the relationship between \(\epsilon \) and \(\alpha \) in Theorem 1.6 is best possible (up to constants).

Corollary 1.7

(Triviality of protocols with large leakage, informal). There exists a constant \(c_2>0\), such that the following holds for every function \(\epsilon \) with \(\epsilon (\kappa ) > 0\): there exists a ppt protocol that has \(( c_2 \cdot \epsilon ^2)\)-agreement and \((\epsilon ,0)\)-leakage.

1.1.3 Application: Characterization of Two-Party Differentially Private Computation

We use our results to characterize the complexity of differentially private two-party computation of the XOR function, answering an open question posed by [17, 22]. The framework of differential privacy typically studies a “one-party” setup, where a “curator” wants to answer statistical queries on a database without compromising the privacy of individual users whose information is recorded as rows in the database [10]. In this paper, we are interested in two-party differentially-private computation (defined in [32]). This setting is closely related to the setting of secure function evaluation: the parties \(\mathsf {A} \) and \(\mathsf {B} \) have private inputs x and y, and wish to compute some functionality f(x, y) without compromising the privacy of their inputs. In secure function evaluation, this intuitively means that parties do not learn any information about the other party’s input that cannot be inferred from their own inputs and outputs. This guarantee is sometimes very weak: for example, for the XOR function \(f(x,y)=x \oplus y\), secure function evaluation completely reveals the inputs of the parties (as a party that knows x and f(x, y) can infer y). Differentially private two-party computation aims to give some nontrivial security even in such cases (at the cost of compromising the accuracy of the outputs).

Definition 1.8

(Differentially private computation [32]). A ppt two-party protocol \(\varPi = (\mathsf {A} ,\mathsf {B} )\) over input domain \(\left\{ 0,1\right\} ^n\times \left\{ 0,1\right\} ^n\) is \(\epsilon \)-DP, if for all ppt nonuniform machines \(\mathsf {B} ^*\) and \(\mathsf {D} \), and every \(x,x' \in \left\{ 0,1\right\} ^n\) with \({\text {Ham}}(x,x') =1\): let \(V^{\mathsf {B} ^*}_\kappa (x)\) be the view of \(\mathsf {B} ^*\) in a random execution of \((\mathsf {A} (x),\mathsf {B} ^*)(1^\kappa )\), then

$$\Pr \left[ \mathsf {D} (V^{\mathsf {B} ^*}_\kappa (x)) = 1\right] \le e^{\epsilon (\kappa )} \cdot \Pr \left[ \mathsf {D} (V^{\mathsf {B} ^*}_\kappa (x')) = 1\right] + {\text {neg}}(\kappa ),$$

and the same holds for the secrecy of \(\mathsf {B} \).

Such a protocol is semi-honest \(\epsilon \)-DP, if the above is only guaranteed for semi-honest adversaries (i.e., for \(\mathsf {B} ^*= \mathsf {B} \)).

In this paper, we are interested in functionalities f, in which outputs are single bits (as in the case of the XOR function). In this special case, the accuracy of a protocol can be measured as follows:

Definition 1.9

(accuracy). A ppt two-party protocol \(\varPi = (\mathsf {A} ,\mathsf {B} )\) over input domain \(\left\{ 0,1\right\} ^n\times \left\{ 0,1\right\} ^n\) with outputs \(O^{\mathsf {A} }(x,y),O^{\mathsf {B} }(x,y) \in \left\{ 0,1\right\} \) has perfect agreement if for every \((x,y) \in \left\{ 0,1\right\} ^n\times \left\{ 0,1\right\} ^n\) and every \(\kappa \in {\mathbb {N}}\), in a random execution of the protocol \((\mathsf {A} (x),\mathsf {B} (y))(1^\kappa )\), it holds that \(\Pr [O^{\mathsf {A} }(x,y)=O^{\mathsf {B} }(x,y)]=1\).

The protocol implements a functionality f over input domain \(\left\{ 0,1\right\} ^n\times \left\{ 0,1\right\} ^n\) with \(\alpha \)-accuracy, if for every \(\kappa \in {\mathbb {N}}\), every \(\mathsf {P} \in \left\{ \mathsf {A} ,\mathsf {B} \right\} \), and every \((x,y) \in \left\{ 0,1\right\} ^n\times \left\{ 0,1\right\} ^n\), in a random execution of the protocol \((\mathsf {A} (x),\mathsf {B} (y))(1^\kappa )\), it holds that \(\Pr [O^{\mathsf {P} }(x,y)=f^{\mathsf {P} }(x,y)]=\tfrac{1}{2}+ \alpha (\kappa )\).

A natural question is what assumptions are needed for two-party differentially private computation achieving a certain level of accuracy/privacy (for various functionalities). A sequence of works showed that for certain tasks, achieving high accuracy requires one-way functions [3, 6, 16, 31]; some cannot even be instantiated in the random-oracle model [21]; and some cannot be black-box reduced to key agreement [29]. See Sect. 1.2 for more details on these results. In this work we fully answer the above question for the XOR function.

Consider the functionality \(f_{\alpha }(x,y)\) which outputs \(x \oplus y \oplus U_{1/2- \alpha }\) (where \(U_{1/2-\alpha }\) is an independent biased coin which is one with probability \(1/2 - \alpha \)). Assuming OT, there exists a two-party protocol that securely implements \(f_{\alpha }\), and this protocol is \(\epsilon \)-DP for \(\epsilon = \varTheta (\alpha )\). This is the best possible differential privacy that can be achieved for accuracy \(\alpha \). At the other extreme, a \(\varTheta (\epsilon ^2)\)-accurate, \(\epsilon \)-differentially private protocol for computing XOR can be constructed (with information theoretic security) using the so-called randomized response approach of Warner [38], as shown in [16]. Thus, it is natural to ask whether OT follows from \(\alpha \)-accurate, \(\epsilon \)-DP computation of XOR, for intermediate choices of \(\epsilon ^2 \ll \alpha \ll \epsilon \). In this paper, we completely resolve this problem and prove that OT is implied for every such intermediate choice.
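The \(\varTheta (\epsilon ^2)\)-accurate information theoretic protocol can be sketched concretely (this is our rendering of Warner’s randomized response, not necessarily the exact protocol of [16]): each party sends its input bit flipped with probability \(1/(1+e^{\epsilon })\), and both parties output the XOR of the two transmitted bits. Agreement is then perfect, each message on its own is \(\epsilon \)-DP with respect to the sender’s input, and the accuracy works out to \((e^{\epsilon }-1)^2/(2(e^{\epsilon }+1)^2) = \varTheta (\epsilon ^2)\):

```python
import math
import random

def rr_xor(x, y, eps, rng):
    # Each party transmits its bit flipped with probability 1/(1 + e^eps);
    # both parties output the XOR of the two (public) messages.
    keep = math.exp(eps) / (1.0 + math.exp(eps))  # Pr[message = true bit]
    x_msg = x if rng.random() < keep else 1 - x
    y_msg = y if rng.random() < keep else 1 - y
    return x_msg ^ y_msg

def rr_accuracy(eps):
    # alpha such that Pr[output = x XOR y] = 1/2 + alpha: the output is
    # correct iff both messages or neither message was flipped.
    keep = math.exp(eps) / (1.0 + math.exp(eps))
    return keep ** 2 + (1.0 - keep) ** 2 - 0.5
```

For small \(\epsilon \) the accuracy is approximately \(\epsilon ^2/8\), matching the folklore bound of Theorem 1.11.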

Differentially Private XOR to OT, a Tight Characterization.

Theorem 1.10

(Differentially private XOR to OT, informal). There exists a constant \(c_1>0\) such that the following holds for all functions \(\epsilon ,\alpha \) with \(\alpha \ge c_1 \cdot \epsilon ^2\) and \(1/\alpha \in {\text {poly}}\): the existence of a perfect-agreement, \(\alpha \)-accurate, semi-honest \(\epsilon \)-DP ppt protocol for computing XOR implies OT (of computational security).

The above improves upon Goyal et al. [17], who gave a positive answer if the accuracy \(\alpha \) is the best possible: if \(\alpha \ge c \cdot \epsilon \) for a constant c. It also improves (in the implication) upon Haitner et al. [22], who showed that \(c\cdot \epsilon ^2\)-correct \(\epsilon \)-DP XOR implies (infinitely-often) key agreement. Finally, our result allows \(\epsilon \) and \(\alpha \) to be functions of the security parameter (and furthermore, allows \(\alpha \) and \(\epsilon \) to be polynomially small in the security parameter), whereas previous reductions [17, 22] only hold for constant values of \(\epsilon \) and \(\alpha \). Our characterization is tight, as OT does not follow from protocols with \(\alpha \in o( \epsilon ^2)\).

Theorem 1.11

(Triviality of differentially private XOR with large leakage. Folklore, see [16]). There exists a constant \(c_2>0\) such that for every function \(\epsilon \) there exists a ppt protocol for computing XOR with information-theoretic \(\epsilon \)-DP, perfect agreement, and accuracy \(c_2\cdot \epsilon ^2\).Footnote 4

Perspective. Most of the work on differentially private mechanisms/protocols is in the information theoretic setting (using the addition of random noise). There are, however, examples where using computational definitions of differential privacy together with cryptographic assumptions yields significantly improved accuracy and privacy compared to those that can be achieved in the information theoretic setting (e.g., the inner product and the Hamming distance functionalities [31]; see more references in the related work section below). Understanding the minimal assumptions required in this setting is a fundamental open problem. In this paper, we completely resolve this problem for the special case of the XOR function. We stress that the XOR function is the canonical example of a function f(x, y) where the security guarantee given by secure function evaluation is very weak. More precisely, for \(f(x,y)=x \oplus y\), the security guaranteed by secure function evaluation is meaningless, and the protocol in which both parties reveal their private inputs is considered secure. Differential privacy can be used to provide a meaningful definition of security in such cases, and we believe that the tools that we developed for the XOR function can be useful in arguing about the minimal assumptions required for other functionalities. As a first step, we provide a sufficient condition under which our approach applies to other functionalities \(g:\left\{ 0,1\right\} ^n \times \left\{ 0,1\right\} ^n \rightarrow \left\{ 0,1\right\} \).

Extending the Result to any Function that Is Not Monotone Under Relabeling. We can use our results on the XOR function to achieve OT from differentially private and sufficiently accurate computation of a wide class of functions that are not “monotone under relabeling”. A function \(g:\left\{ 0,1\right\} ^n \times \left\{ 0,1\right\} ^n \rightarrow \left\{ 0,1\right\} \) is monotone under relabeling if there exist two bijective functions \(\sigma _x,\sigma _y:[2^n]\rightarrow \left\{ 0,1\right\} ^n\) such that for every \(x \in \left\{ 0,1\right\} ^n\) and \(i\le j \in [2^n]\):

$$\begin{aligned} g(x,\sigma _y(i))\le g(x,\sigma _y(j)), \end{aligned}$$

and, for every \(y \in \left\{ 0,1\right\} ^n\) and \(i\le j \in [2^n]\):

$$\begin{aligned} g(\sigma _x(i),y)\le g(\sigma _x(j),y). \end{aligned}$$

We observe that every function g that is not monotone under relabeling has an “embedded XOR”, meaning that there exist \(x_0,x_1,y_0,y_1 \in \left\{ 0,1\right\} ^n\) such that for every \(b,c \in \left\{ 0,1\right\} \), \(g(x_b,y_c)=b \oplus c\). It follows that a two-party protocol that computes g can be used to obtain a two-party protocol that computes XOR (with some loss in privacy), which yields OT by our earlier results.
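For small domains, the existence of an embedded XOR can be checked by brute force over quadruples of inputs. A small illustrative sketch (the function names and the XNOR/AND examples are ours):

```python
from itertools import product

def find_embedded_xor(g, xs, ys):
    # Search for x0, x1, y0, y1 with g(x_b, y_c) = b XOR c for all b, c.
    for x0, x1 in product(xs, repeat=2):
        for y0, y1 in product(ys, repeat=2):
            if all(g((x0, x1)[b], (y0, y1)[c]) == b ^ c
                   for b in (0, 1) for c in (0, 1)):
                return (x0, x1), (y0, y1)
    return None  # no embedded XOR: g may be monotone under relabeling

# XNOR is not monotone under relabeling, and indeed embeds an XOR;
# AND is monotone under relabeling, and embeds none.
xnor = lambda x, y: 1 - (x ^ y)
land = lambda x, y: x & y
```

For the one-bit XNOR, the search returns the quadruple \(x_0=0, x_1=1, y_0=1, y_1=0\); for AND it returns nothing, as expected.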

1.2 Related Work

Information-Theoretic OT. Oblivious transfer protocols are also widely studied in their information theoretic forms [7, 8, 34, 36, 39]. In this form, an OT is simply a pair of jointly distributed random variables \((V_\mathsf {A} ,V_\mathsf {B} )\) (a “channel”), and a pair of unbounded parties \((\mathsf {A} ,\mathsf {B} )\) has access to independent samples from this pair (from each sample \((v_\mathsf {A} ,v_\mathsf {B} )\), party \(\mathsf {P} \) gets the value \(v_\mathsf {P} \)). Interestingly, in the information theoretic form, we do have a “simple” notion of weak OT that is complete: such a pair can either be used to construct full-fledged (information theoretically secure) OT, or is trivial—there exists a protocol that generates these views. Unfortunately, these reductions are inherently inefficient: the parties wait for an event that might have arbitrarily small probability to occur, and thus, at least in their most general form, the reductions cannot be translated into the computational setting.

Hardness Amplification. Amplifying the security of weak primitives into “fully secure” ones is an important paradigm in cryptography, as well as in other key fields of theoretical computer science. The most notable such works in cryptography are the amplification of one-way functions [15, 20, 42], key-agreement protocols [26], and interactive arguments [19, 25]. Among the above, amplification of key-agreement protocols (KA) is the most similar to the OT amplification we consider in this paper. In particular, we do have a “simple” (non-distributional) notion of weak KA [26]. This is done by reduction to the information theoretic notion of key agreement. What enables this reduction to go through is that, unlike the case of information theoretic OT, the amplification of information theoretic KA is efficient, since it only uses the designated output of the (weak) KA (and not the parties’ views).

Minimal Assumptions for Differentially Private Symmetric Computation. An accuracy parameter \(\alpha \) is trivial with respect to a given functionality f and differential privacy parameter \(\epsilon \), if a protocol computing f with such accuracy and privacy exists information theoretically (i.e., with no computational assumptions). The accuracy parameter is called optimal, if it matches the bound achieved in the client-server model. Gaps between the trivial and optimal accuracy parameters have been shown in the multiparty case for count queries [3, 6] and in the two-party case for inner product and Hamming distance functionalities [31]. [21] showed that the same holds also when a random oracle is available to the parties, implying that non-trivial protocols (achieving non-trivial accuracy) for computing these functionalities cannot be black-box reduced to one-way functions.

[16] initiated the study of Boolean functions, showing a gap between the optimal and trivial accuracy for the XOR and AND functionalities, and that non-trivial protocols imply one-way functions. [27] showed that non-interactive randomized response is optimal among all information theoretic protocols. [29] showed that optimal protocols for computing XOR or AND cannot be black-box reduced to key agreement.

[17] showed that an optimal protocol (with the best possible parameters) computing the XOR can be viewed as a form of weak OT, which, according to Wullschleger [41], yields full-fledged OT. For our choice of parameters, however, the security guarantee is too weak, and it is essential that we amplify the security correctly.

Very recently, [22] showed that a non-trivial protocol for computing XOR (i.e., with accuracy better than \(\epsilon ^2\)) implies infinitely-often key-agreement protocols. Their reduction, however, only holds for constant values of \(\epsilon \), and is non-black-box. Finally, [2, 24] gave a criterion proving the necessity of OT for computationally secure function evaluation, for a select class of functions.

Paper Organization

Due to space limitations, some of the technical details appear in the full version of this paper [23]. In Sect. 2 we give an overview of the main ideas used in the proof. In Sect. 3 we give some preliminaries and state some earlier work that we use. In Sect. 4 we give our amplification results that convert protocols with small log-ratio leakage into OT. The proofs of our results on two-party differentially private computation of the XOR function, and on functions that are not monotone under relabeling, are omitted from this version.

2 Our Technique

In this section we give a high level overview of our main ideas and technique.

2.1 Usefulness of Log-Ratio Distance

Recall that the leakage we consider is measured using log-ratio distance, and not statistical distance. We survey some advantages of log-ratio distance over statistical distance.

As is common in “hardness amplification”, our construction will apply the original channel/protocol many times (using fresh randomness). Given a distribution X, let \(X^{\ell }\) denote the distribution of \(\ell \) independent samples from X. A natural question is how the distance between \(X^\ell \) and \(Y^\ell \) relates to the distance between X and Y. For concreteness, assume that \(\mathsf {\textsc {SD}}(X,Y)=\epsilon \) (where \(\mathsf {\textsc {SD}}\) denotes statistical distance) and that we are interested in taking \(\ell ={c}/{\epsilon ^2}\) repetitions, where \(c>0\) is a very small constant. Consider the following two examples (in the following we use \(U_p\) to denote a coin which is one with probability p):

  • \(X_1=U_0\) and \(Y_1=U_{\epsilon }\). In this case, \(\mathsf {\textsc {SD}}(X_1^{\ell },Y_1^{\ell })=1-(1-\epsilon )^{\ell } \approx 1-e^{-c/\epsilon }\) which approaches one for small \(\epsilon \).

  • \(X_2=U_{1/2}\) and \(Y_2=U_{1/2+\epsilon }\). In this case, \(\mathsf {\textsc {SD}}(X_2^{\ell },Y_2^{\ell })=\eta \), where \(\eta \approx \sqrt{c}\) is a small constant that is independent of \(\epsilon \), and can be made as small as we want by decreasing c.

There is a large gap in the behavior of the two examples. In the first, the distance is very close to one, while in the second it is very close to zero. This means that when we estimate \(\mathsf {\textsc {SD}}(X^{\ell },Y^{\ell })\) in terms of \(\mathsf {\textsc {SD}}(X,Y)\), we have to take a pessimistic bound corresponding to the first example, which is far from the truth in case our distributions behave as in the second example.

Loosely speaking, log-ratio distance provides a “fine grained” view that distinguishes the above two cases. Note that \(X_2 \mathbin {{\mathop {\approx }\limits ^\mathrm{R}}}_{O(\epsilon ),0} Y_2\), whereas there is no finite c for which \(X_1 \mathbin {{\mathop {\approx }\limits ^\mathrm{R}}}_{c,0} Y_1\). For X, Y such that \(X \mathbin {{\mathop {\approx }\limits ^\mathrm{R}}}_{\epsilon ,\delta } Y\) for \(\delta =0\) (or more generally, for \(\delta \ll \epsilon \)) we get the behavior of the second example under repetitions, yielding better control on the resulting statistical distance. More precisely, it is not hard to show that if \(X \mathbin {{\mathop {\approx }\limits ^\mathrm{R}}}_{\epsilon ,0} Y\) then for \(\ell =c/\epsilon ^2\) it holds that \(X^{\ell } \mathbin {{\mathop {\approx }\limits ^\mathrm{S}}}_{O(\sqrt{c \cdot \ln (1/c)})} Y^{\ell }\).Footnote 5 A more precise statement and proof are given in Theorem 3.5.Footnote 6
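The gap between the two examples can be computed exactly, since for i.i.d. bits the statistical distance depends only on the Hamming weight of the sample. A short Python check (the parameter choices \(\epsilon =0.01\), \(c=0.04\) are ours, for illustration):

```python
from math import comb

def sd_iid_bits(p, q, ell):
    # Exact statistical distance between ell i.i.d. Bernoulli(p) bits and
    # ell i.i.d. Bernoulli(q) bits, grouping strings by Hamming weight.
    return 0.5 * sum(
        comb(ell, k) * abs(p ** k * (1 - p) ** (ell - k)
                           - q ** k * (1 - q) ** (ell - k))
        for k in range(ell + 1))

eps, c = 0.01, 0.04
ell = int(c / eps ** 2)                       # 400 repetitions
sd_first = sd_iid_bits(0.0, eps, ell)         # U_0 vs U_eps: close to one
sd_second = sd_iid_bits(0.5, 0.5 + eps, ell)  # U_{1/2} vs U_{1/2+eps}: small
```

With these parameters the first distance is \(1-(1-\epsilon )^{\ell } \approx 0.98\), while the second stays a small constant, as claimed above.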

2.2 The Amplification Protocol

In this section we give a high level overview of the proof of Theorem 1.3. The starting point is a channel \(C=((V^\mathsf {A} ,O^{\mathsf {A} }),(V^\mathsf {B} ,O^{\mathsf {B} }))\) that has \(\alpha \)-\({\text {agreement}}\), and \((\epsilon ,\delta )\)-\({\text {leakage}}\). (A good example to keep in mind is the channel from Footnote 2). For simplicity of exposition, let us assume that \(\delta =0\) (the same proof will go through if \(\delta \) is sufficiently small). Our goal is to obtain OT if \(\alpha \ge c_1 \cdot \epsilon ^2\) for some constant \(c_1\), which we will choose to be sufficiently large.

Wullschleger [41] showed that a balanced channel with \(\alpha '\)-agreement and \((0,\epsilon ')\)-leakage (that is, \(\epsilon '\) leakage measured in statistical distance) implies OT if \(\epsilon ' \le c_{\mathsf {Wul}}\cdot (\alpha ')^2\) for some constant \(c_{\mathsf {Wul}}>0\). Thus, we are looking for a protocol that starts with a channel that has \((\epsilon ,0)\)-leakage and \(\alpha \)-agreement, where \(\epsilon \) is larger than \(\alpha \), and produces a channel with \((0,\epsilon ')\)-leakage and \(\alpha '\)-agreement, where \(\epsilon '\) is smaller than \(\alpha '\). We will use the following protocol, achieving \(\alpha ' \ge 1/5\) and an arbitrarily small constant \(\epsilon '>0\).Footnote 7

Protocol 2.1

(\(\varDelta _\ell ^C = (\widetilde{\mathsf {A} },\widetilde{\mathsf {B} })\), amplification of log-ratio leakage)

  • Channel: \(C=((V^\mathsf {A} ,O^{\mathsf {A} }),(V^\mathsf {B} ,O^{\mathsf {B} }))\).

  • Parameter: number of samples \(\ell \).

  • Operation: Do until the protocol produces output:

  1.

    The parties activate the channel C \(\ell \) times. Let \( \overline{O}^\mathsf {A} \) and \(\overline{O}^\mathsf {B} \) be the (\(\ell \)-bit) outputs.

  2.

    \(\widetilde{\mathsf {A} }\) sends the (unordered) set \(\mathcal{{S}}= \{ \overline{O}^\mathsf {A} , \overline{O}^\mathsf {A} \oplus 1^\ell \}\) to \(\widetilde{\mathsf {B} }\).

  3.

    \(\widetilde{\mathsf {B} }\) informs \(\widetilde{\mathsf {A} }\) whether \(\overline{O}^\mathsf {B} \in \mathcal{{S}}\).

    If positive, party \(\widetilde{\mathsf {A} }\) outputs zero if \( \overline{O}^\mathsf {A} \) is the (lex.) smallest element in \(\mathcal{{S}}\), and one otherwise. Party \(\widetilde{\mathsf {B} }\) does the same with respect to \(\overline{O}^\mathsf {B} \). (And the protocol halts.)
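To make the mechanics concrete, here is a minimal Monte-Carlo sketch of Protocol 2.1 over a toy balanced channel with \(\alpha \)-agreement. The views are omitted, and the channel, seed, and parameter values are illustrative assumptions, not part of the paper:

```python
import random

def toy_channel(alpha):
    """One activation of a toy balanced channel with alpha-agreement:
    both output bits are uniform and agree with probability 1/2 + alpha."""
    a = random.getrandbits(1)
    b = a if random.random() < 0.5 + alpha else 1 - a
    return a, b

def protocol_2_1(alpha, ell):
    """One execution of Protocol 2.1 on the toy channel.  The parties retry
    until the l-bit outputs agree everywhere or disagree everywhere.  Since
    a string and its complement differ in the first bit, 'output 0 iff own
    string is the lex-min of S' reduces to outputting one's own first bit."""
    while True:
        pairs = [toy_channel(alpha) for _ in range(ell)]
        if len({a ^ b for a, b in pairs}) == 1:  # XOR pattern is 0^l or 1^l
            return pairs[0][0], pairs[0][1]

random.seed(0)
alpha = 0.05
ell = int(1 / (4 * alpha))  # l = 1/(4*alpha) = 5
runs = 20000
agree = sum(a == b for a, b in (protocol_2_1(alpha, ell) for _ in range(runs))) / runs
predicted = 1 / (1 + ((0.5 - alpha) / (0.5 + alpha)) ** ell)
assert abs(agree - predicted) < 0.02
assert agree >= 0.5 + 1 / 5  # amplified agreement alpha' >= 1/5
```

The empirical agreement matches the closed-form expression \(\left( 1+\left( (\tfrac{1}{2}-\alpha )/(\tfrac{1}{2}+\alpha )\right) ^{\ell }\right) ^{-1}\) computed next.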

Let \(\varDelta = \varDelta _\ell ^C\) for \(\ell =1/(4\alpha )\). We first observe that \(\varDelta \) halts in a given iteration iff the event \(E=\left\{ \overline{O}^\mathsf {A} \oplus \overline{O}^\mathsf {B} \in \left\{ 0^{\ell },1^{\ell }\right\} \right\} \) occurs. Note that \(\Pr [E] \ge 2^{-\ell }\), and thus the expected running time of \(\varDelta \) is \(O(2^{\ell })=2^{O(1/\alpha )}\) (jumping ahead, the expected running time can be improved to \({\text {poly}}(1/\alpha )\), see Sect. 2.2.1).

We also observe that the outputs of the two parties agree iff, in the final (halting) iteration, it holds that \( \overline{O}^\mathsf {A} =\overline{O}^\mathsf {B} \). Thus, the agreement of \(\varDelta \) is given by:

$$\begin{aligned} \Pr [ \overline{O}^\mathsf {A} =\overline{O}^\mathsf {B} |E]&= \frac{(\tfrac{1}{2}+\alpha )^{\ell }}{(\tfrac{1}{2}+\alpha )^{\ell } + (\tfrac{1}{2}-\alpha )^{\ell }} = \left( 1+\left( \frac{\tfrac{1}{2}-\alpha }{\tfrac{1}{2}+\alpha }\right) ^{\ell } \right) ^{-1} \\&\approx \frac{1}{1+e^{-4\alpha \ell }} \ge \frac{1}{1+e^{-1}} \ge \tfrac{1}{2}+ \alpha ', \end{aligned}$$

for \(\alpha ' \ge 1/5\).
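The chain of (in)equalities above can be verified numerically for a few illustrative values of \(\alpha \) (a sanity check, not part of the proof):

```python
import math

# For l = 1/(4*alpha) the exponent 4*alpha*l equals 1, so the approximation
# step compares the exact agreement with 1/(1+e^{-1}) ~ 0.731 >= 1/2 + 1/5.
for alpha in (0.01, 0.02, 0.05):
    ell = 1 / (4 * alpha)
    exact = 1 / (1 + ((0.5 - alpha) / (0.5 + alpha)) ** ell)
    approx = 1 / (1 + math.exp(-4 * alpha * ell))
    assert abs(exact - approx) < 0.01    # the "approximately equals" step
    assert exact >= 1 / (1 + math.exp(-1))
    assert exact >= 0.5 + 1 / 5          # agreement 1/2 + alpha', alpha' >= 1/5
```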

In order to understand the leakage of \(\varDelta \), we examine the views of the parties in the final iteration of \(\varDelta \) (it is clear that the views of the previous iterations yield no information). Let us denote this part of a view v by \({\text {final}}(v)\). We are interested in understanding the log-ratio distance between \({\text {final}}(V^{\widetilde{\mathsf {A} }}|_{O^{\widetilde{\mathsf {A} }}=O^{\widetilde{\mathsf {B} }}})\) and \({\text {final}}(V^{\widetilde{\mathsf {A} }}|_{O^{\widetilde{\mathsf {A} }}\ne O^{\widetilde{\mathsf {B} }}})\). Observe that \({\text {final}}(V^{\widetilde{\mathsf {A} }}|_{O^{\widetilde{\mathsf {A} }}=O^{\widetilde{\mathsf {B} }}})\) is a (deterministic) function of \(\ell \) independent samples from \(V^\mathsf {A} |_{O^{\mathsf {A} }=O^{\mathsf {B} }}\) (i.e., the function that appends \(\{ \overline{O}^\mathsf {A} , \overline{O}^\mathsf {A} \oplus 1^\ell \}\) to the view), and \({\text {final}}(V^{\widetilde{\mathsf {A} }}|_{O^{\widetilde{\mathsf {A} }}\ne O^{\widetilde{\mathsf {B} }}})\) is the same deterministic function of \(\ell \) independent samples from \(V^\mathsf {A} |_{O^{\mathsf {A} }\ne O^{\mathsf {B} }}\). Thus, by data processing, it suffices to bound the distance between \(\ell \) independent samples from \(V^\mathsf {A} |_{O^{\mathsf {A} }=O^{\mathsf {B} }}\) and \(\ell \) independent samples from \(V^\mathsf {A} |_{O^{\mathsf {A} }\ne O^{\mathsf {B} }}\). By assumption, C has \((\epsilon ,0)\)-\({\text {leakage}}\), which means that

$$\begin{aligned} V^\mathsf {A} |_{O^{\mathsf {A} }=O^{\mathsf {B} }} \mathbin {{\mathop {\approx }\limits ^\mathrm{R}}}_{\epsilon ,0} V^\mathsf {A} |_{O^{\mathsf {A} }\ne O^{\mathsf {B} }}. \end{aligned}$$

In the previous section we showed that by choosing a sufficiently small constant \(c>0\) and taking \(\ell =c/\epsilon ^2\) repetitions of a pair of distributions with \((\epsilon ,0)\)-log-ratio distance, we obtain two distributions with statistical distance that is an arbitrarily small constant \(\epsilon '>0\). Here we consider \(\ell =1/(4\alpha ) \le 1/(4 c_1\cdot \epsilon ^2)\) repetitions, and therefore

$$\begin{aligned} {\text {final}}(V^{\widetilde{\mathsf {A} }}|_{O^{\widetilde{\mathsf {A} }}=O^{\widetilde{\mathsf {B} }}})\mathbin {{\mathop {\approx }\limits ^\mathrm{S}}}_{\epsilon '} {\text {final}}(V^{\widetilde{\mathsf {A} }}|_{O^{\widetilde{\mathsf {A} }}\ne O^{\widetilde{\mathsf {B} }}}). \end{aligned}$$

By picking \(c_1\) to be sufficiently large, we can ensure that the leakage of \(\varDelta \) satisfies \(\epsilon ' \le c_{\mathsf {Wul}}\cdot (\alpha ')^2\), as required.

2.2.1 Efficient Amplification

The (expected) running time of \(\varDelta _\ell \) is \(2^{O(\ell )}\), which for the above choice of \(\ell =\varTheta (1/\alpha )\) equals \(2^{O(1/\alpha )}\). To be useful in a setting where the running time is limited, e.g., in the computational setting, this dependency restricts us to “large” values of \(\alpha \). Fortunately, Protocol 2.1 can be modified so that its (expected) running time is only polynomial in \(1/\alpha \).

Intuitively, rather than making \(\ell \) invocations of C at once and hoping that the tuple of invocations happens to be useful, i.e., \( \overline{O}^\mathsf {A} \oplus \overline{O}^\mathsf {B} \in \left\{ 0^{\ell },1^{\ell }\right\} \), the efficient protocol combines smaller useful tuples of invocations, i.e., tuples with \( \overline{O}^\mathsf {A} \oplus \overline{O}^\mathsf {B} \in \left\{ 0^{\ell '},1^{\ell '}\right\} \) for some \(\ell ' < \ell \), into a useful tuple of \(\ell \) invocations. The advantage is that failing to generate the smaller useful tuples only “wastes” \(\ell '\) invocations of C. By recursively sampling the \(\ell '\)-tuples via the same approach, we get a protocol whose expected running time is \(O(\ell ^2)\) (rather than \(2^{O(\ell )}\)).

The actual protocol implements the above intuition in the following way: on parameter d, protocol \(\varLambda _d\) mimics the interaction of the inefficient protocol \(\varDelta _{2^d}\) (i.e., the inefficient protocol with sample parameter \(2^d\)). It does so by using \(\varDelta _2\) to combine the outputs of two executions of \(\varLambda _{d-1}\). Effectively, this call to \(\varDelta _2\) combines the two useful \(2^{d-1}\)-tuples produced by the executions of \(\varLambda _{d-1}\) into a single useful \(2^{d}\)-tuple.

Let \(\varLambda _0^C =C\), and recursively define \(\varLambda _d\), for \(d>0\), as follows:

Protocol 2.2

(\(\varLambda _d^C = (\widehat{\mathsf {A} },\widehat{\mathsf {B} })\), efficient amplification of log-ratio leakage)

  • Channel: C.

  • Parameter: log-number of samples d.

  • Operation: The parties interact in \(\varDelta _2^{(\varLambda _{d-1}^C)}\).

By induction, the expected running time of \(\varLambda _{d}^C\) is \(4^d\). A more careful analysis yields that the view of \(\varLambda _{d}^C\) can be simulated by the view of \(\varDelta _{2^d}^C\). Indeed, there are exactly \(2^d\) useful invocations of C in an execution of \(\varLambda _{d}^C\): invocations whose value was not ignored by the parties, and their distribution is exactly the same as that of the \(2^d\) useful invocations of C in \(\varDelta _{2^d}^C\). Hence, using \(\varLambda _d^C\) with \(d= \log (1/(4\alpha ))\), we get a protocol whose expected running time is polynomial in \(1/\alpha \) and guarantees the same level of agreement and security as \(\varDelta _{1/(4\alpha )}\).
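The recursion can be sketched as follows. Here `useful_pair` mimics \(\varLambda _d\) over a toy balanced channel, returning only the two output bits; the channel, parameters, and seed are illustrative assumptions:

```python
import random

def toy_channel(alpha):
    """Toy balanced channel with alpha-agreement (views omitted)."""
    a = random.getrandbits(1)
    b = a if random.random() < 0.5 + alpha else 1 - a
    return a, b

def useful_pair(d, alpha, counter):
    """Output bits of one execution of Lambda_d on the toy channel.
    Lambda_0 = C; Lambda_d runs Delta_2 on two executions of Lambda_{d-1}:
    retry until the two sub-executions have the same agree/disagree status
    (a1 XOR b1 == a2 XOR b2); the lex-min output rule then reduces to
    keeping the first pair of bits."""
    if d == 0:
        counter[0] += 1
        return toy_channel(alpha)
    while True:
        a1, b1 = useful_pair(d - 1, alpha, counter)
        a2, b2 = useful_pair(d - 1, alpha, counter)
        if a1 ^ b1 == a2 ^ b2:
            return a1, b1

random.seed(1)
alpha, d, runs = 0.05, 3, 5000   # mimics Delta_{2^d} with 2^d = 8 samples
counter = [0]
results = [useful_pair(d, alpha, counter) for _ in range(runs)]
agree = sum(a == b for a, b in results) / runs
ell = 2 ** d
predicted = 1 / (1 + ((0.5 - alpha) / (0.5 + alpha)) ** ell)
assert abs(agree - predicted) < 0.03     # same agreement as Delta_{2^d}
assert counter[0] / runs < 4 ** (d + 1)  # ~4^d channel calls, not 2^(2^d)
```

The empirical agreement matches that of \(\varDelta _{2^d}\), while the number of channel activations per execution stays polynomial in \(2^d\).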

2.3 The Computational Case

So far, we considered information theoretic security. In order to prove Theorem 1.6 (that considers security against ppt adversaries) we note that Definition 1.5 (of computational leakage) is carefully set up to allow the argument of the previous section to be extended to the computational setting. Using the efficient protocol above, the reduction goes through as long as \(\alpha \) is a noticeable function of the security parameter.

2.4 Two-Party Differentially Private XOR Implies OT

In this section we explain the main ideas that are used in the proof of Theorem 1.10. Our goal is to show that a perfectly complete, \(\alpha \)-accurate, semi-honest \(\epsilon \)-DP protocol for computing XOR implies OT, if \(\alpha \ge c \cdot \epsilon ^2\) for a sufficiently large constant c. In order to prove this, we will show that such a protocol can be used to give a two-party protocol that has \(\alpha \)-agreement and (computational) \((\epsilon ,0)\)-leakage. Such a protocol yields OT by our earlier results.Footnote 8

We remark that there are two natural definitions of “computational differential privacy” in the literature, using either computational indistinguishability or simulation [32]. Definition 1.8 uses indistinguishability, while for our purposes it is more natural to work with simulation (as simulation enables us to “switch back and forth” between the information theoretic setting and the computational setting). In general, these two definitions are not known to be equivalent. For functionalities like XOR, where the inputs of both parties are single bits, however, the two definitions are equivalent by the work of [32]. This means that when considering differential privacy of the XOR function, we can imagine that we are working in an information theoretic setting, in which there is a trusted party, that upon receiving the inputs x, y of the parties, provides party \(\mathsf {P} \), with its output \(O^{\mathsf {P} }\) and view \(V^\mathsf {P} \). We will use the following protocol to obtain a “channel” with \(\alpha \)-agreement and \((\epsilon ,0)\)-leakage.

Protocol 2.3

(DP-XOR to channel)

  1.

    \(\mathsf {A} \) samples \(X \leftarrow \left\{ 0,1\right\} \) and \(\mathsf {B} \) samples \(Y \leftarrow \left\{ 0,1\right\} \).

  2.

    The parties apply the differentially private protocol for computing XOR, using inputs X and Y respectively, and receive outputs \(O^{\mathsf {A} }_{DP},O^{\mathsf {B} }_{DP}\) respectively.

  3.

    \(\mathsf {A} \) sends \(R\leftarrow \left\{ 0,1\right\} \) to \(\mathsf {B} \).

  4.

    \(\mathsf {A} \) outputs \(O^{\mathsf {A} }=X \oplus R\) and \(\mathsf {B} \) outputs \(O^{\mathsf {B} }=O^{\mathsf {B} }_{DP} \oplus Y \oplus R\).

The intuition behind this protocol is that if \(O^{\mathsf {B} }_{DP}=X \oplus Y\), then \(O^{\mathsf {B} }=(X \oplus Y) \oplus Y \oplus R = X \oplus R = O^{\mathsf {A} }\). This means that the channel induced by this protocol inherits \(\alpha \)-agreement from the \(\alpha \)-accuracy of the original protocol. In Sect. 4 we show that this channel “inherits” log-ratio leakage of \((\epsilon ,0)\) from the fact that the original protocol is \(\epsilon \)-DP.
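The following sketch instantiates Protocol 2.3 with a hypothetical \(\epsilon \)-DP XOR protocol based on randomized response (our illustrative stand-in, not the protocol of Theorem 1.10), and checks that the induced channel's agreement matches the DP protocol's accuracy:

```python
import math
import random

def dp_xor(x, y, eps):
    """A toy eps-DP protocol for XOR (hypothetical, via randomized response):
    B learns x XOR y flipped with probability 1/(1+e^eps); A learns nothing."""
    correct = random.random() < math.e ** eps / (1 + math.e ** eps)
    o_b = (x ^ y) if correct else 1 - (x ^ y)
    return None, o_b

def protocol_2_3(eps):
    """Protocol 2.3: turn the DP-XOR protocol into a channel."""
    x, y, r = (random.getrandbits(1) for _ in range(3))
    _, o_b_dp = dp_xor(x, y, eps)
    return x ^ r, o_b_dp ^ y ^ r  # (O^A, O^B)

random.seed(2)
eps = 0.5
runs = 100000
agree = sum(a == b for a, b in (protocol_2_3(eps) for _ in range(runs))) / runs
# Randomized response is (1/2 + alpha)-accurate for
# alpha = (e^eps - 1)/(2(e^eps + 1)), and the channel inherits alpha-agreement.
alpha = (math.e ** eps - 1) / (2 * (math.e ** eps + 1))
assert abs(agree - (0.5 + alpha)) < 0.01
```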

3 Preliminaries

3.1 Notation

We use calligraphic letters to denote sets, uppercase for random variables and functions, and lowercase for values. For \(a,b\in {\mathbb R}\), let \(a\pm b\) stand for the interval \([a-b,a+b]\). For \(n\in {\mathbb {N}}\), let \([n] = \left\{ 1,\ldots ,n\right\} \) and \((n) = \left\{ 0,\ldots ,n\right\} \). The Hamming distance between two strings \(x,y\in \left\{ 0,1\right\} ^n\) is defined by \({\text {Ham}}(x,y) = \left| \left\{ i\in [n] :x_i \ne y_i\right\} \right| \). Let \({\text {poly}}\) denote the set of all polynomials, let ppt stand for probabilistic polynomial time, let pptm denote a ppt TM (Turing machine), and let \(\mathsf{ppt}^\mathsf{NU} \) stand for a non-uniform pptm. A function \(\nu :{\mathbb {N}}\rightarrow [0,1]\) is negligible, denoted \(\nu (n) = {\text {neg}}(n)\), if \(\nu (n)<1/p(n)\) for every \(p\in {\text {poly}}\) and large enough n.

3.2 Distributions and Random Variables

Given a distribution, or random variable, D, we write \(x\leftarrow D\) to indicate that x is selected according to D. Given a finite set \(\mathcal{{S}}\), let \(s\leftarrow \mathcal{{S}}\) denote that s is selected according to the uniform distribution over \(\mathcal{{S}}\). The support of D, denoted \({\text {Supp}}(D)\), is defined as \(\left\{ u\in {\mathord {\mathcal {U}}}: D(u)>0\right\} \). We will use the following distance measures.

Statistical Distance.

Definition 3.1

(statistical distance). The statistical distance between two distributions P, Q over the same domain \({\mathord {\mathcal {U}}}\) (denoted by \(\mathsf {\textsc {SD}}(P,Q)\)) is defined to be:

$$\begin{aligned} \mathsf {\textsc {SD}}(P,Q)=\max _{\mathcal {A}\subseteq {\mathord {\mathcal {U}}}} |\Pr [P \in \mathcal {A}]-\Pr [Q \in \mathcal {A}]| . \end{aligned}$$

We say that P, Q are \(\epsilon \) -close (denoted by \(P \mathbin {{\mathop {\approx }\limits ^\mathrm{S}}}_{\epsilon } Q\)) if \(\mathsf {\textsc {SD}}(P,Q) \le \epsilon \).
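For distributions over a finite domain, the maximum in the definition is attained by the event \(\{u : P(u) > Q(u)\}\), so SD equals half the \(\ell _1\)-distance. A minimal sketch (the example distributions are ours):

```python
def statistical_distance(p, q):
    """SD of two finite distributions given as dicts mapping atoms to
    probabilities; equals half the L1 distance between the two."""
    support = set(p) | set(q)
    return sum(abs(p.get(u, 0) - q.get(u, 0)) for u in support) / 2

p = {0: 0.5, 1: 0.5}
q = {0: 0.8, 1: 0.2}
assert abs(statistical_distance(p, q) - 0.3) < 1e-12
assert statistical_distance(p, p) == 0
```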

We use the following fact; see [23] for the proof.

Proposition 3.2

Let \(0<\epsilon<\mu < 1\), and let (X, Y), \((\tilde{X}, \tilde{Y})\) be two pairs of random variables over the same domain \(\mathcal {X}\times \mathcal {Y}\), such that \(\mathsf {\textsc {SD}}((X,Y), (\tilde{X},\tilde{Y})) \le \epsilon \). Let \(E_0,E_1 \subseteq \mathcal {X}\times \mathcal {Y}\) be two sets such that for every \(b \in \left\{ 0,1\right\} \), \(\Pr \left[ (X,Y)\in E_b\right] \ge \mu \). Then \(\mathsf {\textsc {SD}}(\tilde{X}|_{\left\{ (\tilde{X},\tilde{Y}) \in E_0\right\} }, \tilde{X}|_{\left\{ (\tilde{X},\tilde{Y}) \in E_1\right\} })\le \mathsf {\textsc {SD}}(X|_{\left\{ (X,Y) \in E_0\right\} }, X|_{\left\{ (X,Y) \in E_1\right\} }) +4\epsilon /\mu .\)

Log-Ratio Distance. We will also be interested in the following natural notion of “log-ratio distance” which was popularized by the literature on differential privacy.

Definition 3.3

(Log-Ratio distance). Two numbers \(p_0,p_1 \ge 0\) satisfy \(p_0 \mathbin {{\mathop {\approx }\limits ^\mathrm{R}}}_{\epsilon ,\delta } p_1\) if for both \(b \in \left\{ 0,1\right\} \): \(p_b \le e^{\epsilon } \cdot p_{1-b} + \delta \). Two distributions P, Q over the same domain \({\mathord {\mathcal {U}}}\) are \((\epsilon ,\delta )\) -log-ratio-close (denoted \(P \mathbin {{\mathop {\approx }\limits ^\mathrm{R}}}_{\epsilon ,\delta } Q\)) if for every \(\mathcal {A}\subseteq {\mathord {\mathcal {U}}}\):

$$\begin{aligned} \Pr [P \in \mathcal {A}] \mathbin {{\mathop {\approx }\limits ^\mathrm{R}}}_{\epsilon ,\delta } \Pr [Q \in \mathcal {A}]. \end{aligned}$$

We let \(\mathbin {{\mathop {\approx }\limits ^\mathrm{R}}}_\epsilon \) stand for \(\mathbin {{\mathop {\approx }\limits ^\mathrm{R}}}_{\epsilon ,0}\).
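For finite distributions, the quantification over all events \(\mathcal {A}\) can be evaluated exactly: the worst event for each direction collects the atoms where one probability exceeds \(e^{\epsilon }\) times the other. A small sketch (the example distributions are ours):

```python
import math

def log_ratio_delta(p, q, eps):
    """Smallest delta for which p ~R_{eps,delta} q, for finite distributions
    given as dicts mapping atoms to probabilities."""
    support = set(p) | set(q)
    d_pq = sum(max(0.0, p.get(u, 0) - math.e ** eps * q.get(u, 0)) for u in support)
    d_qp = sum(max(0.0, q.get(u, 0) - math.e ** eps * p.get(u, 0)) for u in support)
    return max(d_pq, d_qp)

p = {0: 0.6, 1: 0.4}
q = {0: 0.4, 1: 0.6}
eps = math.log(1.5)  # 0.6 <= 1.5 * 0.4 in both directions
assert log_ratio_delta(p, q, eps) < 1e-12              # (eps, 0)-close
assert abs(log_ratio_delta(p, q, 0.0) - 0.2) < 1e-12   # with eps = 0 this is SD
```

The second assertion illustrates the fact noted below that \(\mathbin {{\mathop {\approx }\limits ^\mathrm{R}}}_{0,\delta }\) coincides with statistical distance \(\delta \).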

It is immediate that \(D_0 \mathbin {{\mathop {\approx }\limits ^\mathrm{S}}}_{\delta } D_1\) iff \(D_0 \mathbin {{\mathop {\approx }\limits ^\mathrm{R}}}_{0,\delta } D_1\), and that \(D_0 \mathbin {{\mathop {\approx }\limits ^\mathrm{R}}}_{\epsilon ,\delta } D_1\) implies \(D_0 \mathbin {{\mathop {\approx }\limits ^\mathrm{S}}}_{(e^{\epsilon }-1)+\delta } D_1\), and note that for \(\epsilon \in [0,1]\), \(e^{\epsilon }-1 =O(\epsilon )\). It is also immediate that the log-ratio distance respects data processing.

Fact 3.4

Assume \(P \mathbin {{\mathop {\approx }\limits ^\mathrm{R}}}_{\epsilon ,\delta } Q\), then \(f(P) \mathbin {{\mathop {\approx }\limits ^\mathrm{R}}}_{\epsilon ,\delta } f(Q)\) for any (possibly randomized) function f.

Log-Ratio Distance Under Independent Repetitions. As demonstrated by the framework of differential privacy, this notion of “relative distance” is often a very convenient distance measure between distributions, as it behaves nicely when considering independent executions. Specifically, letting \(D^{\ell }\) denote \(\ell \) independent copies of D, the following holds:

Theorem 3.5

(Relative distance under independent repetitions). If \(D_0 \mathbin {{\mathop {\approx }\limits ^\mathrm{R}}}_{\epsilon ,\delta } D_1\) then for every \(\ell \ge 1\), and every \(\delta ' \in (0,1)\)

$$\begin{aligned} D_0^{\ell } \mathbin {{\mathop {\approx }\limits ^\mathrm{R}}}_{({\eta (\epsilon ,\ell ,\delta ')},\ell \delta + \delta ')} D_1^{\ell }, \end{aligned}$$

where \(\eta (\epsilon ,\ell ,\delta ')=\ell \cdot \epsilon (e^\epsilon -1)+\epsilon \cdot \sqrt{2 \ell \cdot \ln (1/\delta ')}\).

We remark that Theorem 3.5 can also be derived by the (much more complex) result on “boosting differential privacy” [11]. However, it can be easily derived directly by a Hoeffding bound, as is done in the full version of this paper.
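As a sanity check of Theorem 3.5, one can compute the exact minimal \(\delta \) for products of small finite distributions and compare it with the bound \(\eta \) (the Bernoulli parameters below are illustrative):

```python
import itertools
import math

def min_delta(p, q, eps):
    """Smallest delta with p ~R_{eps,delta} q (finite distributions as dicts)."""
    support = set(p) | set(q)
    one_way = lambda a, b: sum(
        max(0.0, a.get(u, 0) - math.e ** eps * b.get(u, 0)) for u in support)
    return max(one_way(p, q), one_way(q, p))

def bern_product(prob_one, ell):
    """Product of ell i.i.d. Bernoulli(prob_one) bits, as dict over bit-tuples."""
    return {bits: math.prod(prob_one if b else 1 - prob_one for b in bits)
            for bits in itertools.product((0, 1), repeat=ell)}

eps, ell, delta_prime = 0.1, 10, 0.01
# D0, D1 with pointwise ratios bounded by e^eps, hence (eps, 0)-log-ratio close.
p0 = 0.3
p1 = p0 * math.e ** eps
eta = ell * eps * (math.e ** eps - 1) + eps * math.sqrt(2 * ell * math.log(1 / delta_prime))
# Theorem 3.5 then promises (eta, delta')-closeness of the products.
assert min_delta(bern_product(p0, ell), bern_product(p1, ell), eta) <= delta_prime
```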

Definition 3.6

(Computational indistinguishability). Two distribution ensembles \(X=\left\{ X_\kappa \right\} _{\kappa \in {\mathbb {N}}}\), \(Y=\left\{ Y_\kappa \right\} _{\kappa \in {\mathbb {N}}}\) are [resp., non-uniformly] computationally indistinguishable, denoted \(X \mathbin {{\mathop {\approx }\limits ^\mathrm{C}}}Y\) [resp., \(X \mathbin {{\mathop {\approx }\limits ^\mathrm{nuC}}}Y\)] if for every ppt [resp., \(\mathsf{ppt}^\mathsf{NU} \)] \(\mathsf {D} \):

$$\begin{aligned} |\Pr [\mathsf {D} (1^\kappa ,X_\kappa )=1]-\Pr [\mathsf {D} (1^\kappa ,Y_\kappa )=1]| \le {\text {neg}}(\kappa ) . \end{aligned}$$

3.3 Protocols

Let \(\varPi =(\mathsf {A} ,\mathsf {B} )\) be a two-party protocol. Protocol \(\varPi \) is ppt if the running time of both \(\mathsf {A} \) and \(\mathsf {B} \) is polynomial in their input length. We denote by \((\mathsf {A} (x_\mathsf {A} ),\mathsf {B} (x_\mathsf {B} ))(z)\) a random execution of \(\varPi \) with private inputs \((x_\mathsf {A} ,x_\mathsf {B} )\), and common input z. At the end of such an execution, party \(\mathsf {P} \in \left\{ \mathsf {A} ,\mathsf {B} \right\} \) obtains his view \(V^\mathsf {P} (x_\mathsf {A} ,x_\mathsf {B} ,z)\), which may also contain a “designated output” \(O^\mathsf {P} (x_\mathsf {A} ,x_\mathsf {B} ,z)\) (if the protocol specifies such an output). A protocol has Boolean output if each party outputs a bit.

3.4 Two-Output Functionalities and Channels

A two-output functionality is just a random function that outputs a tuple of two values in a predefined domain. In the following we omit the two-output term from the notation.

Channels. A channel is simply a no-input functionality with designated output bits. We naturally identify channels with the random variable that characterizes their output.

Definition 3.7

(Channels). A channel is a no-input Boolean functionality whose output pair is of the form \(((V^\mathsf {A} ,O^{\mathsf {A} }),(V^\mathsf {B} ,O^{\mathsf {B} }))\) and for both \(\mathsf {P} \in \left\{ \mathsf {A} ,\mathsf {B} \right\} \), \(O^{\mathsf {P} }\) is Boolean and determined by \(V^\mathsf {P} \). A channel has agreement \(\alpha \) if \(\Pr \left[ O^{\mathsf {A} }= O^{\mathsf {B} }\right] = \tfrac{1}{2}+ \alpha \). A channel ensemble \(\left\{ C_\kappa \right\} _{\kappa \in {\mathbb {N}}}\) has agreement \(\alpha \) if \(C_\kappa \) has agreement \(\alpha (\kappa )\) for every \(\kappa \).

It is convenient to view a channel as the experiment in which there are two parties \(\mathsf {A} \) and \(\mathsf {B} \). Party \(\mathsf {A} \) receives “output” \(O^{\mathsf {A} }\) and “view” \(V^\mathsf {A} \), and party \(\mathsf {B} \) receives “output” \(O^{\mathsf {B} }\) and “view” \(V^\mathsf {B} \).

We identify a no-input Boolean output protocol with the channel “induced” by its semi-honest execution.

Definition 3.8

(The protocol’s channel). For a no-input Boolean output protocol \(\varPi \), we define the channel \({\text {CHN}} (\varPi )\) by \({\text {CHN}} (\varPi ) = ((V^\mathsf {A} ,O^{\mathsf {A} }),(V^\mathsf {B} ,O^{\mathsf {B} }))\), for \(V^\mathsf {P} \) and \(O^{\mathsf {P} }\) being the view and output of party \(\mathsf {P} \) in a random execution of \(\varPi \). Similarly, for protocol \(\varPi \) whose only input is a security parameter, let \({\text {CHN}} (\varPi ) = \left\{ {\text {CHN}} (\varPi )_\kappa ={\text {CHN}} ({\varPi (1^\kappa )})\right\} _{\kappa \in {\mathbb {N}}}\).

All protocols we construct in this work are oblivious, in the sense that given oracle access to a channel, the parties only make use of the channel output (though the channel’s view becomes part of the party’s view).Footnote 9

3.5 Secure Computation

We use the standard notion of securely computing a functionality, cf., [13].

Definition 3.9

(Secure computation). A two-party protocol securely computes a functionality f, if it does so according to the real/ideal paradigm. We add the term perfectly/statistically/computationally/non-uniform computationally, if the simulator's output is, respectively, identical to, statistically close to, computationally indistinguishable from, or non-uniformly indistinguishable from the real distribution. The protocol has the above notions of security against semi-honest adversaries if its security is only guaranteed to hold against an adversary that follows the prescribed protocol. Finally, for the case of perfectly secure computation, we naturally apply the above notion also to the non-asymptotic case: a protocol with no security parameter perfectly computes a functionality f.

A two-party protocol securely computes a functionality ensemble f in the g -hybrid model, if it does so according to the above definition when the parties have access to a trusted party computing g. All the above adjectives naturally extend to this setting.

3.6 Oblivious Transfer

The (one-out-of-two) oblivious transfer functionality is defined as follows.

Definition 3.10

(oblivious transfer functionality \(f_{{\text {OT}}}\)). The oblivious transfer functionality over \(\left\{ 0,1\right\} \times (\left\{ 0,1\right\} ^*)^2\) is defined by \(f_{{\text {OT}}}(i,(\sigma _0,\sigma _1)) = (\perp ,\sigma _i)\).

A protocol is a \(*\)-secure OT, for \(*\in \{\text {semi-honest statistically/computationally/}\)\(\text {computationally non-uniform}\}\), if it computes the \(f_{{\text {OT}}}\) functionality with \(*\) security.

3.7 Two-Party Differential Privacy

We consider differential privacy in the 2-party setting.

Definition 3.11

(Differentially private functionality). A functionality f over input domain \(\left\{ 0,1\right\} ^n\times \left\{ 0,1\right\} ^n\) is \(\epsilon \)-\({\text {DP}}\), if the following holds: let \((V^\mathsf {A} _{x,y},V^\mathsf {B} _{x,y}) = f(x,y)\), then for every \(x,x'\) with \({\text {Ham}}(x,x') =1\), \(y\in \left\{ 0,1\right\} ^n\) and \(v\in {\text {Supp}}(V^\mathsf {B} _{x,y})\):

$$\Pr \left[ V^\mathsf {B} _{x,y}= v\right] \le e^ \epsilon \cdot \Pr \left[ V^\mathsf {B} _{x',y} = v\right] ,$$

and for every \(y,y'\) with \({\text {Ham}}(y,y') =1\), \(x\in \left\{ 0,1\right\} ^n\) and \(v\in {\text {Supp}}(V^\mathsf {A} _{x,y})\):

$$\Pr \left[ V^\mathsf {A} _{x,y}=v\right] \le e^ \epsilon \cdot \Pr \left[ V^\mathsf {A} _{x,y'} = v\right] .$$

Note that the above definition is equivalent to asking that \(V^\mathsf {B} _{x,y} \mathbin {{\mathop {\approx }\limits ^\mathrm{R}}}_\epsilon V^\mathsf {B} _{x',y}\) for any \(x,x'\) with \({\text {Ham}}(x,x') =1\) and y, and analogously for the view of \(\mathsf {A} \), for \(\mathbin {{\mathop {\approx }\limits ^\mathrm{R}}}_{\epsilon }\) being the log-ratio distance of Definition 3.3.

We also remark that a more general definition also allows an additive error \(\delta \) in the above, making the functionality \((\epsilon ,\delta )\)-\({\text {DP}}\). However, for the sake of simplicity, we focus on the simpler notion of \(\epsilon \)-\({\text {DP}}\) stated above.

Definition 3.12

(Differentially private computation). A ppt two-output protocol \(\varPi = (\mathsf {A} ,\mathsf {B} )\) over input domain \(\left\{ 0,1\right\} ^n\times \left\{ 0,1\right\} ^n\) is \(\epsilon \)-\({\text {IND-DP}}\) if the following holds for every \(\mathsf{ppt}^\mathsf{NU} \) \(\mathsf {B} ^*\), \(\mathsf {D} \) and \(x,x' \in \left\{ 0,1\right\} ^n\) with \({\text {Ham}}(x,x') =1\): let \({V^\mathsf {B} }^*_{x}\) be the view of \(\mathsf {B} ^*\) in a random execution of \( (\mathsf {A} (x),\mathsf {B} ^*)(1^\kappa )\), then

$$\Pr \left[ \mathsf {D} ({V^\mathsf {B} }^*_{x}) = 1\right] \le e^{\epsilon (\kappa )} \cdot \Pr \left[ \mathsf {D} ({V^\mathsf {B} }^*_{x'}) = 1\right] + {\text {neg}}(\kappa ),$$

and the same holds for the secrecy of \(\mathsf {B} \).

Such a protocol is semi-honest \(\epsilon \) -\({\text {IND-DP}}\), if the above is only guaranteed to hold for semi-honest adversaries (i.e., for \(\mathsf {B} ^*= \mathsf {B} \)).

3.8 Passive Weak Binary Symmetric Channels

We rely on the work of Wullschleger [41] that shows that certain channels imply oblivious transfer. The following notion of a “passive weak binary symmetric channel”, adjusted to our formulation, was studied in [41].

Definition 3.13

(Passive weak binary symmetric channels, WBSC, [41]). A \((\mu ,\epsilon _0,\epsilon _1,p,q)\)-\({\text {WBSC}}\) is a channel \(C = ((V^\mathsf {A} ,O^{\mathsf {A} }),(V^\mathsf {B} ,O^{\mathsf {B} }))\) such that the following holds:

  • Correctness: \(\Pr \left[ O^{\mathsf {A} }=0\right] \in [\tfrac{1}{2}-\mu /2, \tfrac{1}{2}+\mu /2]\)

    and for every \(b_\mathsf {A} \in \left\{ 0,1\right\} \), \(\Pr \left[ O^{\mathsf {B} }\ne O^{\mathsf {A} }\mid O^{\mathsf {A} }=b_\mathsf {A} \right] \in [\epsilon _0,\epsilon _1]\).

  • Receiver security: \((V^\mathsf {A} ,O^{\mathsf {A} })|_{O^{\mathsf {B} }=O^{\mathsf {A} }} \mathbin {{\mathop {\approx }\limits ^\mathrm{S}}}_p (V^\mathsf {A} ,O^{\mathsf {A} })|_{O^{\mathsf {B} }\ne O^{\mathsf {A} }}\).Footnote 10

  • Sender security: for every \(b_\mathsf {B} \in \left\{ 0,1\right\} \), \(V^\mathsf {B} |_{O^{\mathsf {B} }=b_\mathsf {B} , O^{\mathsf {A} }=0} \mathbin {{\mathop {\approx }\limits ^\mathrm{S}}}_q V^\mathsf {B} |_{O^{\mathsf {B} }=b_\mathsf {B} ,O^{\mathsf {A} }=1}\).

The following was proven in [41].

Theorem 3.14

(WBSC implies oblivious transfer). There exists a protocol \(\varDelta \) such that the following holds. Let \(\epsilon ,\epsilon _0 \in (0,1/2), p \in (0,1)\) be such that \(150(1-(1-p)^2)<(1-\frac{2\epsilon ^2}{\epsilon ^2+(1-\epsilon )^2})^2\), and \(\epsilon _0 \le \epsilon \). Let C be a \((0,\epsilon _0,\epsilon _0,p,p)\)-\({\text {WBSC}}\). Then \(\varDelta (1^\kappa ,\epsilon )\) is a semi-honest statistically secure OT in the C-hybrid model, and its running time is polynomial in \(\kappa \), \(1/\epsilon \) and \(1/(1-2\epsilon )\). Furthermore, the parties in \(\varDelta \) only make use of the output bits of the channel.

Theorem 3.14 considers channels with \(\mu =0\), and \(\epsilon _0=\epsilon _1\). This is equivalent to saying that the channel is balanced (i.e., each of the output bits is uniform) and has \(\alpha \)-agreement, for \(\alpha =\tfrac{1}{2}-\epsilon _0\). When stated in this form, Theorem 3.14 says that such a channel implies OT if \(p=O(\alpha ^2)\), and in particular, it is required that \(p<\alpha \).
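To get a feel for the parameters, one can plug concrete values into the hypothesis of Theorem 3.14 (the values below are illustrative):

```python
def wbsc_condition(eps0, p):
    """Numeric check of the hypothesis of Theorem 3.14 with eps = eps0:
    150*(1-(1-p)^2) < (1 - 2*eps^2/(eps^2+(1-eps)^2))^2."""
    lhs = 150 * (1 - (1 - p) ** 2)
    rhs = (1 - 2 * eps0 ** 2 / (eps0 ** 2 + (1 - eps0) ** 2)) ** 2
    return lhs < rhs

# For eps0 = 0.4 we have alpha = 1/2 - eps0 = 0.1, and the hypothesis
# indeed demands p = O(alpha^2), with a small hidden constant.
assert wbsc_condition(0.4, 4e-4)       # p well below alpha^2 = 0.01: OK
assert not wbsc_condition(0.4, 0.01)   # p = alpha^2 itself is already too large
```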

3.8.1 Specialized Passive Weak Binary Symmetric Channels

We will be interested in a specific choice of parameters for passive WBSC’s, and for this choice, it will be more convenient to work with the following stronger notion of a channel (that is easier to state and argue about, as security is defined in the same terms for both parties).

Definition 3.15

(Specialized passive weak binary symmetric channels). An \((\epsilon _0,p)\)-\({\text {SWBSC}}\) is a channel \(C = ((V^\mathsf {A} ,O^{\mathsf {A} }),(V^\mathsf {B} ,O^{\mathsf {B} }))\) such that the following holds:

  • Correctness: \(\Pr \left[ O^{\mathsf {A} }=0\right] =\tfrac{1}{2}\), and for every \(b_\mathsf {A} \in \left\{ 0,1\right\} \),

    \(\Pr \left[ O^{\mathsf {B} }\ne O^{\mathsf {A} }\mid O^{\mathsf {A} }=b_\mathsf {A} \right] = \epsilon _0\).

  • Receiver security: \(V^\mathsf {A} |_{O^{\mathsf {A} }=O^{\mathsf {B} }} \mathbin {{\mathop {\approx }\limits ^\mathrm{S}}}_p V^\mathsf {A} |_{O^{\mathsf {A} }\ne O^{\mathsf {B} }}\).

  • Sender security: \(V^\mathsf {B} |_{O^{\mathsf {B} }=O^{\mathsf {A} }} \mathbin {{\mathop {\approx }\limits ^\mathrm{S}}}_p V^\mathsf {B} |_{O^{\mathsf {B} }\ne O^{\mathsf {A} }}\).

Proposition 3.16

An \((\epsilon _0,p)\)-\({\text {SWBSC}}\) is a \((0,\epsilon _0,\epsilon _0,2p,2p)\)-\({\text {WBSC}}\).

The proof for Proposition 3.16 appears in the full version.

3.9 Additional Inequalities

The following fact is proven in the full version of this paper.

Proposition 3.17

The following holds for every \(b\in (0,1/2)\) and \(\ell \in {\mathbb {N}}\) such that \(b\ell < 1/4\).

$$\frac{(1/2+b)^\ell }{(1/2+b)^\ell +(1/2-b)^\ell } \in [\tfrac{1}{2}(1+b\ell ),\tfrac{1}{2}(1+3b\ell ) ].$$
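Proposition 3.17 is easy to check numerically on a grid of valid parameters (a sanity check, not a proof):

```python
def ratio(b, ell):
    """The quantity bounded by Proposition 3.17."""
    return (0.5 + b) ** ell / ((0.5 + b) ** ell + (0.5 - b) ** ell)

# Check the claimed interval on a grid of (b, ell) with b * ell < 1/4.
for ell in range(1, 50):
    for i in range(1, 100):
        b = i / 1000
        if b * ell < 0.25:
            r = ratio(b, ell)
            assert 0.5 * (1 + b * ell) <= r <= 0.5 * (1 + 3 * b * ell)
```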

4 Amplification of Channels with Small Log-Ratio Leakage

In this section we formally define log-ratio leakage and prove our amplification results. We start in Sect. 4.1 with the information theoretic setting, in which we restate and prove Theorems 1.3 and 1.4. In the full version of this paper we extend our result to the computational setting, restating and proving Theorem 1.6.

4.1 The Information Theoretic Setting

We start with a definition of log-ratio leakage (restating Definition 1.2 with more formal notation).

Definition 4.1

(Log-ratio leakage). A channel \(((O^{\mathsf {A} },V^\mathsf {A} ),(O^{\mathsf {B} },V^\mathsf {B} ))\) has \((\epsilon ,\delta )\)-leakage if

  • Receiver security: \(V^\mathsf {A} |_{O^{\mathsf {A} }=O^{\mathsf {B} }} \mathbin {{\mathop {\approx }\limits ^\mathrm{R}}}_{\epsilon ,\delta } V^\mathsf {A} |_{O^{\mathsf {A} }\ne O^{\mathsf {B} }}\).

  • Sender security: \(V^\mathsf {B} |_{O^{\mathsf {A} }=O^{\mathsf {B} }} \mathbin {{\mathop {\approx }\limits ^\mathrm{R}}}_{\epsilon ,\delta } V^\mathsf {B} |_{O^{\mathsf {A} }\ne O^{\mathsf {B} }}\).

The following theorem is a formal restatement of Theorem 1.3.

Theorem 4.2

(Small log-ratio leakage implies OT). There exists an (oblivious) ppt protocol \(\varDelta \) and constant \(c_1>0\) such that the following holds. Let \(\epsilon ,\delta \in [0,1]\) be such that \(\delta \le \epsilon ^2\), and let \(\alpha \le \alpha _{\max }<1/8\) be such that \(\alpha \ge \max \left\{ c_1 \cdot \epsilon ^2,\alpha _{\max }/2\right\} \). Then for any channel C with \((\epsilon ,\delta )\)-\({\text {leakage}}\) and \(\alpha \)-agreement, protocol \(\varDelta ^C(1^\kappa ,1^{\left\lfloor 1/\alpha _{\max } \right\rfloor })\) is a semi-honest statistically secure OT in the C-hybrid model.

Before proving Theorem 4.2, we first show that it is tight. The proof of the following theorem is given in the full paper.

Theorem 4.3

(Triviality of channels with large leakage). There exists a constant \(c_2>0\) such that for every \(\epsilon >0\) there is a two-party protocol (with no inputs) where at the end of the protocol, every party \(\mathsf {P} \in \left\{ \mathsf {A} ,\mathsf {B} \right\} \) has output \(O^{\mathsf {P} }\) and view \(V^\mathsf {P} \). Moreover, the induced channel \(C=((V^\mathsf {A} ,O^{\mathsf {A} }),(V^\mathsf {B} ,O^{\mathsf {B} }))\) has \(\alpha \)-agreement and \((\epsilon ,0)\)-leakage, for \(\alpha \ge c_2 \cdot \epsilon ^2\).

Together, the two theorems show that if \(\alpha \ge c_1 \cdot \epsilon ^2\) then the channel yields OT, and if \(\alpha \le c_2 \cdot \epsilon ^2\) then such a channel can be simulated by a two-party protocol with no inputs (and thus cannot yield OT with information theoretic security).

The proof of Theorem 4.2 is an immediate consequence of the following two lemmata.

Recall (Definition 3.8) that \({\text {CHN}} (\varPi )\) denotes the channel induced by a random execution of the no-input, Boolean output protocol \(\varPi \).

Lemma 4.4

(Gap amplification). There exists an (oblivious) ppt protocol \(\varDelta \) and constant \(c_1>0\) such that the following holds. Let \(\epsilon ,\delta ,\alpha ,\alpha _{\max }\) be parameters satisfying requirements in Theorem 4.2 with respect to \(c_1\). Let C be a channel with \((\epsilon ,\delta )\)-\({\text {leakage}}\) and \(\alpha \)-agreement, let \(\ell = 2^{(\left\lfloor \log 1/\alpha _{\max } \right\rfloor -2)}\) and let \(\widetilde{C}={\text {CHN}} (\varDelta ^C(1^\ell ))\). Then

  • \(\widetilde{C}\) has \(\widetilde{\alpha }\in [1/32,3/8]\)-\({\text {agreement}}\).

  • For any \(\delta '\in (0,1)\): \(\widetilde{C}\) has \((\widetilde{\epsilon },\widetilde{\delta })\)-\({\text {leakage}}\) for \(\widetilde{\epsilon }=2\ell \epsilon ^2+\epsilon \sqrt{2\ell \ln ({1/\delta '})}\) and \(\widetilde{\delta }= \delta '+\ell \delta \).

Definition 4.5

(Bounded execution). Given a Boolean output protocol \(\varPi \) and \(n\in {\mathbb {N}}\), let \(\mathsf {bound} _n(\varPi )\) be the variant of \(\varPi \) in which, if the protocol has not halted after n steps, it halts and the parties output uniform independent bits.

Lemma 4.6

(Large Gap to OT). There exist an (oblivious) ppt protocol \(\varDelta \) and constants \(n,c > 0\) such that the following holds: let \(\varPi \) be a protocol of expected running time at most t that induces a channel C with \(\alpha \in [1/32,3/8]\)-agreement, and \((\epsilon ,\delta )\)-\({\text {leakage}}\) for \(\epsilon ,\delta \le c\).

Then \(\varDelta ^{C'}(1^\kappa )\) is a semi-honest statistically secure OT in the \(C' = {\text {CHN}} (\mathsf {bound} _{n\cdot t}(\varPi ))\) hybrid model.

We prove the above two lemmata in the following subsections, but first we use them to prove Theorem 4.2.

Proof

(Proof of Theorem 4.2). Let \(\ell = 2^{(\left\lfloor \log 1/\alpha _{\max } \right\rfloor -2)}\). By Lemma 4.4, there exists an expected polynomial-time protocol \(\varLambda \) such that \(\varLambda ^C(1^\ell )\) induces a channel \(\widetilde{C}\) of \(\widetilde{\alpha }\in [1/32,3/8]\)-\({\text {agreement}}\), and \((\widetilde{\epsilon },\widetilde{\delta })\)-\({\text {leakage}}\) for \(\widetilde{\epsilon }=2\ell \epsilon ^2+\epsilon \sqrt{2\ell \ln ({1/\delta '})}\) and \(\widetilde{\delta }= \delta '+\ell \delta \), for any \(\delta '\in (0,1)\).

Let \(t \in {\text {poly}}\) be a polynomial that bounds the expected running time of \(\varLambda \). By Lemma 4.6, there exist universal constants \(n, c\) and a ppt protocol \(\varDelta \), such that if

$$\begin{aligned}&\widetilde{\epsilon }=2\ell \epsilon ^2+\epsilon \sqrt{2\ell \ln ({1/\delta '})}\le c&\text {and}&\widetilde{\delta }= \delta '+\ell \delta \le c \end{aligned}$$
(1)

then the protocol \(\varGamma \), defined by \(\varGamma ^C(1^\kappa , 1^{\left\lfloor 1/\alpha _{\max } \right\rfloor }) = \varDelta ^{C'}(1^{\kappa })\) for \(C'= {\text {CHN}} (\mathsf {bound} _{{n\cdot t(\ell )}}(\varLambda ^{C}(1^\ell )))\), is a semi-honest statistically secure OT. Hence, we conclude the proof by noting that Eq. (1) holds by setting \(\delta '=\ell \delta \) and choosing \(c_1\) (the constant in Theorem 4.2) to be sufficiently large.
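To see that the choice \(\delta '=\ell \delta \) behaves as claimed for reasonable parameters, the following sketch evaluates \(\widetilde{\epsilon }\) and \(\widetilde{\delta }\) of Eq. (1). The concrete values of \(\epsilon ,\delta ,\alpha _{\max }\) are illustrative choices of ours, not constants from the paper.

```python
import math

def amplified_leakage(eps, delta, alpha_max):
    """Evaluate the leakage parameters of Lemma 4.4 under the
    choice delta' = ell * delta made in the proof of Theorem 4.2."""
    ell = 2 ** (math.floor(math.log2(1 / alpha_max)) - 2)
    delta_p = ell * delta                       # the choice delta' = ell * delta
    eps_t = 2 * ell * eps**2 + eps * math.sqrt(2 * ell * math.log(1 / delta_p))
    delta_t = delta_p + ell * delta             # = 2 * ell * delta
    return ell, eps_t, delta_t

# Illustrative parameters: eps = 1e-3 and delta <= eps^2, as the theorem requires.
ell, eps_t, delta_t = amplified_leakage(1e-3, 1e-7, 0.1)
print(ell, eps_t, delta_t)  # both leakage parameters come out small
```

For these parameters \(\ell =2\) and both \(\widetilde{\epsilon }\) and \(\widetilde{\delta }\) are far below any fixed constant c, consistent with Eq. (1) holding once \(c_1\) is large enough.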

Lemma 4.6 is proved in Sect. 4.1.3 using the amplification result of [41]. Toward proving Lemma 4.4, our main technical contribution, we start in Sect. 4.1.1 by presenting an inefficient protocol implementing the desired channel. In Sect. 4.1.2 we show how to bootstrap the above protocol into an efficient one.

4.1.1 Inefficient Amplification

The following protocol implements the channel stated in Lemma 4.4, but its running time is exponential in \(1/\alpha _{\max }\).

Protocol 4.7

(Protocol \(\varDelta ^C= (\widetilde{\mathsf {A} },\widetilde{\mathsf {B} })\) )

  • Oracle: channel \(C=((V^\mathsf {A} ,O^{\mathsf {A} }),(V^\mathsf {B} ,O^{\mathsf {B} }))\).

  • Input: \(1^\ell \).

  • Operation: The parties repeat the following process until it produces outputs:

  1. The parties (jointly) call the channel C for \(\ell \) times. Let \(\overline{o}^{\mathsf {A} }=(o^{\mathsf {A} }_1,\ldots ,o^{\mathsf {A} }_\ell ),\overline{o}^{\mathsf {B} }= (o^{\mathsf {B} }_1,\ldots ,o^{\mathsf {B} }_\ell )\) be the outputs.

  2. \(\widetilde{\mathsf {A} }\) computes \(\mathcal {S}=\left\{ \overline{o}^{\mathsf {A} }, 1^\ell \oplus \overline{o}^{\mathsf {A} }\right\} \) and sends it, ordered lexicographically, to \(\widetilde{\mathsf {B} }\).

  3. \(\widetilde{\mathsf {B} }\) informs \(\widetilde{\mathsf {A} }\) whether \(\overline{o}^{\mathsf {B} }\in \mathcal {S}\). If so, both parties output the index of their tuple in \(\mathcal {S}\) (and the protocol ends).
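To make the sampling loop concrete, here is a Monte-Carlo sketch of Protocol 4.7 over a toy balanced channel with \(\alpha \)-agreement. The channel model and all parameters are our illustrative choices, and the views are omitted.

```python
import random

def call_channel(alpha):
    """Toy balanced channel with alpha-agreement: each output is uniform
    and the outputs agree with probability 1/2 + alpha (views omitted)."""
    o_a = random.randint(0, 1)
    o_b = o_a if random.random() < 0.5 + alpha else 1 - o_a
    return o_a, o_b

def protocol_4_7(alpha, ell):
    """One execution of Protocol 4.7 on the toy channel."""
    while True:
        pairs = [call_channel(alpha) for _ in range(ell)]
        vec_a = tuple(p[0] for p in pairs)
        vec_b = tuple(p[1] for p in pairs)
        comp_a = tuple(1 - b for b in vec_a)
        s = sorted({vec_a, comp_a})        # the set S, in lexicographic order
        if vec_b in s:                     # B's vector hit S: produce outputs
            return s.index(vec_a), s.index(vec_b)

random.seed(0)
alpha, ell = 0.1, 2
runs = [protocol_4_7(alpha, ell) for _ in range(20000)]
agree = sum(a == b for a, b in runs) / len(runs)
pred = (0.5 + alpha)**ell / ((0.5 + alpha)**ell + (0.5 - alpha)**ell)
print(agree, pred)  # empirical agreement tracks the predicted ratio
```

The empirical agreement of the induced channel matches the ratio derived in the agreement analysis below.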

We show that the channel induced by protocol \(\varDelta ^C(1^\ell )\) satisfies all the requirements of Lemma 4.4 apart from its expected running time (which is exponential in \(\ell \)).

Let \(\widetilde{C}= {\text {CHN}} (\varDelta ^C(1^\ell ))=((V^{\widetilde{\mathsf {A} }},O^{\widetilde{\mathsf {A} }}),(V^{\widetilde{\mathsf {B} }},O^{\widetilde{\mathsf {B} }}))\). The following function outputs the calls to C made in the final iteration in \(\widetilde{C}\).

Definition 4.8

(Final calls). For \(c\in {\text {Supp}}(\widetilde{C})\) let \({\text {final}}(c)\) denote the output of the \(\ell \) calls to C made in the final iteration in c.

We make the following observation about the final calls.

Claim 4.9

The following holds for \(((\cdot , \overline{O}^\mathsf {A} ),(\cdot ,\overline{O}^\mathsf {B} )) = {\text {final}}(\widetilde{C}= ((\cdot ,O^{\widetilde{\mathsf {A} }}),(\cdot ,O^{\widetilde{\mathsf {B} }})))\).

  • \(O^{\widetilde{\mathsf {A} }}=O^{\widetilde{\mathsf {B} }}\) iff \( \overline{O}^\mathsf {A} =\overline{O}^\mathsf {B} \).

  • Let \(C^\ell = ((\cdot ,(O^{\mathsf {A} })^\ell ), (\cdot ,(O^{\mathsf {B} })^\ell ))\) be the random variable induced by taking \(\ell \) copies of C and let E be the event that \((O^{\mathsf {B} })^\ell \in \left\{ ( O^{\mathsf {A} })^\ell , (O^{\mathsf {A} })^\ell \oplus 1^\ell \right\} \). Then \({\text {final}}(\widetilde{C}) \equiv C^\ell |_E\).

Proof

Immediate by construction.

Agreement.

Claim 4.10

(Agreement). \( \Pr \left[ O^{\widetilde{\mathsf {A} }}=O^{\widetilde{\mathsf {B} }}\right] \in [17/32 , 7/8]\).

Proof

By Claim 4.9,

$$\begin{aligned} \Pr \left[ O^{\widetilde{\mathsf {A} }}=O^{\widetilde{\mathsf {B} }}\right]&= \frac{\Pr \left[ (O^{\mathsf {A} })^\ell =(O^{\mathsf {B} })^\ell \mid E\right] }{\Pr \left[ (O^{\mathsf {A} })^\ell =(O^{\mathsf {B} })^\ell \mid E\right] +\Pr \left[ (O^{\mathsf {A} })^\ell \oplus (O^{\mathsf {B} })^\ell = 1^\ell \mid E\right] }\\&= \frac{\Pr \left[ (O^{\mathsf {A} })^\ell =(O^{\mathsf {B} })^\ell \right] }{\Pr \left[ (O^{\mathsf {A} })^\ell =(O^{\mathsf {B} })^\ell \right] +\Pr \left[ (O^{\mathsf {A} })^\ell \oplus (O^{\mathsf {B} })^\ell = 1^\ell \right] }\nonumber \\&=\frac{(1/2+\alpha )^\ell }{(1/2+\alpha )^\ell +(1/2-\alpha )^\ell }.\nonumber \end{aligned}$$
(2)

Since, \(\ell = 2^{(\left\lfloor \log 1/\alpha _{\max } \right\rfloor -2)}\) and \(\alpha _{\max }/2 \le \alpha \le \alpha _{\max }\), we get that \(1/4 \ge \ell \cdot \alpha \ge 1/16\). By Proposition 3.17,

$$\begin{aligned} \frac{(1/2+\alpha )^\ell }{(1/2+\alpha )^\ell +(1/2-\alpha )^\ell } \in [\tfrac{1}{2}(1+\alpha \ell ),\tfrac{1}{2}(1+3\alpha \ell ) ] \end{aligned}$$
(3)

Thus, \(\Pr \left[ O^{\widetilde{\mathsf {A} }}=O^{\widetilde{\mathsf {B} }}\right] \in [17/32 , 7/8]\), which concludes the proof.
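Proposition 3.17 itself is stated earlier in the paper; its consequence, the containment in Eq. (3), is easy to sanity-check numerically over the regime \(1/16\le \alpha \ell \le 1/4\) used above. A quick sketch (the grid values are our choice):

```python
def agreement_ratio(alpha, ell):
    """The agreement of the induced channel, as in Eq. (2)."""
    hi, lo = (0.5 + alpha)**ell, (0.5 - alpha)**ell
    return hi / (hi + lo)

# Check Eq. (3): ratio lies in [ (1 + a*l)/2, (1 + 3*a*l)/2 ] when a*l is
# in [1/16, 1/4], over a grid of ell values and products x = alpha * ell.
ok = True
for ell in (1, 2, 4, 16, 64, 256):
    for x in (1 / 16, 1 / 10, 1 / 8, 1 / 5, 1 / 4):
        alpha = x / ell
        r = agreement_ratio(alpha, ell)
        ok &= 0.5 * (1 + x) <= r <= 0.5 * (1 + 3 * x)
print(ok)
```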

Leakage.

Claim 4.11

(Leakage). \(\widetilde{C}\) has \((\widetilde{\epsilon },\widetilde{\delta })\)-\({\text {leakage}}\), where \(\widetilde{\epsilon }=2\ell \epsilon ^2+\epsilon \sqrt{2\ell \ln ({1/\delta '})}\) and \(\widetilde{\delta }= \delta '+\ell \delta \) for every \(\delta '\in (0,1)\).

Proof

We need to prove that for both \(\mathsf {P} \in \left\{ \mathsf {A} ,\mathsf {B} \right\} \):

$$\begin{aligned} V^{\widetilde{\mathsf {P} }}|_{O^{\widetilde{\mathsf {A} }}=O^{\widetilde{\mathsf {B} }}} \mathbin {{\mathop {\approx }\limits ^\mathrm{R}}}_{(\widetilde{\epsilon },\widetilde{\delta })} V^{\widetilde{\mathsf {P} }}|_{O^{\widetilde{\mathsf {A} }}\ne O^{\widetilde{\mathsf {B} }}} \end{aligned}$$
(4)

By assumption C has \((\epsilon ,\delta )\)-\({\text {leakage}}\). Thus, by Theorem 3.5,

$$\begin{aligned} (V^\mathsf {P} )^\ell |_{( O^{\mathsf {A} })^\ell = (O^{\mathsf {B} })^\ell } \mathbin {{\mathop {\approx }\limits ^\mathrm{R}}}_{(\widetilde{\epsilon },\widetilde{\delta })} (V^\mathsf {P} )^\ell |_{( O^{\mathsf {A} })^\ell = (O^{\mathsf {B} })^\ell \oplus 1^\ell } \end{aligned}$$
(5)

Let \((( \overline{V}^\mathsf {A} , \overline{O}^\mathsf {A} ),(\overline{V}^\mathsf {B} ,\overline{O}^\mathsf {B} )) = {\text {final}}(\widetilde{C})\). By the above and Claim 4.9,

$$\begin{aligned} \overline{V}^\mathsf {P} |_{O^{\widetilde{\mathsf {A} }}=O^{\widetilde{\mathsf {B} }}} \mathbin {{\mathop {\approx }\limits ^\mathrm{R}}}_{(\widetilde{\epsilon },\widetilde{\delta })} \overline{V}^\mathsf {P} |_{O^{\widetilde{\mathsf {A} }}\ne O^{\widetilde{\mathsf {B} }}} \end{aligned}$$
(6)

Equation (4) now follows by a data processing argument: let f be the randomized function that on input \(v\in {\text {Supp}}(\overline{V}^\mathsf {P} )\) outputs a random sample from \(V^{\widetilde{\mathsf {P} }}|_{\overline{V}^\mathsf {P} = v}\). It is easy to verify that \(f(\overline{V}^\mathsf {P} |_{O^{\widetilde{\mathsf {A} }}=O^{\widetilde{\mathsf {B} }}})\equiv V^{\widetilde{\mathsf {P} }}|_{O^{\widetilde{\mathsf {A} }}=O^{\widetilde{\mathsf {B} }}}\) and \(f(\overline{V}^\mathsf {P} |_{O^{\widetilde{\mathsf {A} }}\ne O^{\widetilde{\mathsf {B} }}})\equiv V^{\widetilde{\mathsf {P} }}|_{O^{\widetilde{\mathsf {A} }}\ne O^{\widetilde{\mathsf {B} }}}\). Thus Eq. (4) follows by Fact 3.4.

4.1.2 Efficient Amplification

We now show how to make Protocol 4.7 more efficient in terms of \(\alpha \). The resulting protocol runs in polynomial time even if \(\alpha \) is inverse polynomial. The efficient amplification protocol is defined as follows. Let \(\varDelta \) be the (inefficient) protocol from Protocol 4.7.

Protocol 4.12

[ Protocol \(\varLambda ^C= (\widehat{\mathsf {A} },\widehat{\mathsf {B} })\) ]

  • Oracle: Channel C.

  • Parameter: Recursion depth d.

  • Operation: The parties interact in \(\varDelta ^{\varLambda ^{C}(d-1)}(2)\), letting \(\varLambda ^{C}(0) = C\).

We show that the channel induced by protocol \(\varLambda ^C(d)\) satisfies all the requirements of Lemma 4.4. But first we show that the expected running time of \(\varLambda ^C(d)\) is \(O(4^d)\), and therefore the protocol that on input \(1^\ell \) invokes \(\varLambda ^C(\log \ell )\) is ppt, as stated in Lemma 4.4.
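A minimal sketch of the recursion, over the same kind of toy \(\alpha \)-agreement channel as before (an illustrative model of ours), also lets one count oracle calls and compare against the \(O(4^d)\) bound proved next:

```python
import random

class ToyChannel:
    """Toy balanced channel with alpha-agreement; counts oracle calls."""
    def __init__(self, alpha):
        self.alpha, self.calls = alpha, 0
    def __call__(self):
        self.calls += 1
        o_a = random.randint(0, 1)
        o_b = o_a if random.random() < 0.5 + self.alpha else 1 - o_a
        return o_a, o_b

def delta2(channel):
    """Protocol 4.7 with ell = 2: retry until B's pair is A's pair or its complement."""
    while True:
        (a0, b0), (a1, b1) = channel(), channel()
        if (b0, b1) in {(a0, a1), (1 - a0, 1 - a1)}:
            s = sorted({(a0, a1), (1 - a0, 1 - a1)})
            return s.index((a0, a1)), s.index((b0, b1))

def lam(channel, d):
    """Protocol 4.12: Lambda^C(d) = Delta^{Lambda^C(d-1)}(2), with Lambda^C(0) = C."""
    if d == 0:
        return channel()
    return delta2(lambda: lam(channel, d - 1))

random.seed(1)
c = ToyChannel(alpha=0.05)
n_runs, d = 2000, 3
outs = [lam(c, d) for _ in range(n_runs)]
avg_calls = c.calls / n_runs
print(avg_calls)  # stays below the 4**d bound of Claim 4.13
```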

Running Time.

Claim 4.13

(Expected running time). Let C be a channel. Then for any \(d\in {\mathbb {N}}\), the expected running time of \(\varLambda ^C(d)\) is at most \(O(4^d)\).

We will use the following claim:

Claim 4.14

For any channel C, \(\varDelta ^C(2)\) makes in expectation at most 4 calls to C.

Proof

Let C be a channel with \(\alpha \in [-1/2,1/2]\)-agreement. Let \( \overline{O}^\mathsf {A} =(O^{\mathsf {A} }_1,O^{\mathsf {A} }_2)\) and \(\overline{O}^\mathsf {B} =(O^{\mathsf {B} }_1,O^{\mathsf {B} }_2)\) denote the outputs of two invocations of C, respectively. By construction, \(\varDelta ^C(2)\) concludes on the event \(E=\left\{ (O^{\mathsf {B} }_1,O^{\mathsf {B} }_2)\in \{ \overline{O}^\mathsf {A} ,1^2\oplus \overline{O}^\mathsf {A} \}\right\} \). It is clear that \(\Pr \left[ E\right] =(\tfrac{1}{2}+\alpha )^2+(\tfrac{1}{2}-\alpha )^2=\tfrac{1}{2}+2\alpha ^2\ge \tfrac{1}{2}\). Thus, since each iteration performs two calls to C and concludes with probability at least \(\tfrac{1}{2}\), the expected number of calls performed by \(\varDelta ^C(2)\) is at most 4.

We now prove Claim 4.13 using the above claim.

Proof

(Proof of Claim 4.13). For \(d\in {\mathbb {N}}\), let T(d) denote the expected runtime of \(\varLambda ^C(d)\). By Claim 4.14,

$$\begin{aligned} T(d)=4\cdot {T(d-1)}+O(1), \end{aligned}$$
(7)

letting \(T(0)=1\). Thus, \(T(d)\in O(4^d)\).
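Unrolling Eq. (7) with \(T(0)=1\) and an (illustrative) additive constant c gives the closed form \(T(d)=4^d+c\cdot (4^d-1)/3\in O(4^d)\); a one-line check:

```python
def T(d, c=7, t0=1):
    """The recurrence T(d) = 4*T(d-1) + c of Eq. (7), with T(0) = t0."""
    return t0 if d == 0 else 4 * T(d - 1, c, t0) + c

# Closed form: T(d) = 4**d * t0 + c * (4**d - 1) / 3, hence T(d) is O(4**d).
for d in range(8):
    assert T(d) == 4**d + 7 * (4**d - 1) // 3
print("closed form matches")
```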

Let \(\widehat{C}_d= {\text {CHN}} (\varLambda ^C(d))= ((V^{\widehat{\mathsf {A} }}_d,O^{\widehat{\mathsf {A} }}_d),(V^{\widehat{\mathsf {B} }}_d,O^{\widehat{\mathsf {B} }}_d))\). The following function outputs the “important” calls to C made in \(\widehat{C}_d\): the ones used to set the final outcome.

Let \(\circ \) denote vector concatenation.

Definition 4.15

(Important calls). For \(d\in {\mathbb {N}}\) and \(c\in {\text {Supp}}(\widehat{C}_d)\), let \({\text {final}}(c) = (c_0,c_1)\) be the two calls to \(\varLambda ^C(d-1)\) made in the final execution of \(\varDelta ^{\varLambda ^C(d-1)}(2)\) in c. Define \({\text {important}}(c) = {\text {important}}(c_0) \circ {\text {important}}(c_1)\), letting \({\text {important}}(c) = c\) for \(c\in {\text {Supp}}(\widehat{C}_0)\).

Similarly to the analysis of the inefficient protocol, the crux is the following observation about the important calls.

Claim 4.16

Let \(d\in {\mathbb {N}}\) and set \(\ell =2^d\). The following holds for \(((\cdot , \overline{O}^\mathsf {A} ),(\cdot ,\overline{O}^\mathsf {B} )) = {\text {important}}(\widehat{C}_d= ((\cdot ,O^{\widehat{\mathsf {A} }}),(\cdot ,O^{\widehat{\mathsf {B} }})))\).

  • \(O^{\widehat{\mathsf {A} }}=O^{\widehat{\mathsf {B} }}\) iff \( \overline{O}^\mathsf {A} =\overline{O}^\mathsf {B} \).

  • Let \(C^\ell = ((\cdot ,(O^{\mathsf {A} })^\ell ), (\cdot ,(O^{\mathsf {B} })^\ell ))\) be the random variable induced by taking \(\ell \) copies of C and let E be the event that \((O^{\mathsf {B} })^\ell \in \left\{ ( O^{\mathsf {A} })^\ell , (O^{\mathsf {A} })^\ell \oplus 1^\ell \right\} \). Then \({\text {important}}(\widehat{C}_d) \equiv C^\ell |_E\).

We prove Claim 4.16 below, but first use it for proving Lemma 4.4.

Agreement.

Claim 4.17

(Agreement). \( \Pr \left[ O^{\widehat{\mathsf {A} }}=O^{\widehat{\mathsf {B} }}\right] \in [17/32 , 7/8]\).

Proof

The proof follows by Claim 4.16, using the same lines as the proof that Claim 4.10 follows from Claim 4.9.

Leakage.

Claim 4.18

(Leakage). \(\widehat{C}_d\) has \((\widetilde{\epsilon },\widetilde{\delta })\)-\({\text {leakage}}\) for \(\ell =2^d\), where \(\widetilde{\epsilon }=2\ell \epsilon ^2+\epsilon \sqrt{2\ell \ln ({1/\delta '})}\) and \(\widetilde{\delta }= \delta '+\ell \delta \) for every \(\delta '\in (0,1)\).

Proof

The proof follows by Claim 4.16 and a data processing argument, using similar lines to the proof that Claim 4.11 follows from Claim 4.9.

Proving Lemma 4.4.

Proof

(Proof of Lemma 4.4). Consider the protocol \(\mathrm {T}^C(1^\ell ) = \varLambda ^C(\left\lfloor \log \ell \right\rfloor )\). The proof that \(\mathrm {T}\) satisfies the requirements of Lemma 4.4 immediately follows by Claims 4.13, 4.17 and 4.18.

Proving Claim 4.16.

Proof

(Proof of Claim 4.16). First note that the first item in the claim immediately follows by construction. We now prove the second item.

Let \(d\in {\mathbb {N}}\) and let \(\ell =2^d\). For \(C^{\ell }= ((\cdot ,(O^{\mathsf {A} })^\ell ), (\cdot ,(O^{\mathsf {B} })^\ell ))\), let \(D_\ell \) be the distribution of \(C^{\ell }|_{\left\{ (O^{\mathsf {B} })^\ell \in \left\{ ( O^{\mathsf {A} })^\ell , (O^{\mathsf {A} })^\ell \oplus 1^\ell \right\} \right\} }\). We need to prove that

$$\begin{aligned} {\text {important}}(\widehat{C}_d) \equiv D_\ell \end{aligned}$$

We prove the claim by induction on d. The base case \(d=1\) follows by Claim 4.9.

Fix \(d>1\). For \(j\in \left\{ 0,1\right\} \), let \(\widehat{C}_{d-1,j}\) be an invocation of the channel on parameter \(d-1\) and let \(((\cdot , \overline{O}^\mathsf {A} _j),(\cdot ,\overline{O}^\mathsf {B} _j)) = {\text {important}}(\widehat{C}_{d-1,j})\). By the induction hypothesis,

$$\begin{aligned} {\text {important}}(\widehat{C}_{d-1,j})\equiv D_{\ell /2} \end{aligned}$$
(8)

The key observation is that by construction, the event \({\text {final}}(\widehat{C}_d)= \widehat{C}_{d-1,0}\circ \widehat{C}_{d-1,1}\) occurs if and only if,

$$\begin{aligned} \overline{O}^\mathsf {B} _0\circ \overline{O}^\mathsf {B} _1\in \left\{ \overline{O}^\mathsf {A} _0 \circ \overline{O}^\mathsf {A} _1,1^\ell \oplus \overline{O}^\mathsf {A} _0 \circ \overline{O}^\mathsf {A} _1\right\} \end{aligned}$$
(9)

Recall this means that,

$${\text {important}}(\widehat{C}_d)=\big ({\text {important}}(\widehat{C}_{d-1,0}) \circ {\text {important}}(\widehat{C}_{d-1,1})\big )\mid _{\overline{E}}$$

where \(\overline{E}={\left\{ {\overline{O}^\mathsf {B} _0\circ \overline{O}^\mathsf {B} _1\in \left\{ \overline{O}^\mathsf {A} _0 \circ \overline{O}^\mathsf {A} _1,1^\ell \oplus \overline{O}^\mathsf {A} _0 \circ \overline{O}^\mathsf {A} _1\right\} }\right\} }\). The above observations yields that \({\text {important}}(\widehat{C}_d) \equiv D_\ell \).

4.1.3 From Channels with Large Gap to OT

Definition 4.19

A channel \(C=((V^\mathsf {A} ,O^{\mathsf {A} }),(V^\mathsf {B} ,O^{\mathsf {B} }))\) is balanced if \(\Pr \left[ O^{\mathsf {A} }= 1\right] = \Pr \left[ O^{\mathsf {B} }= 1\right] = \tfrac{1}{2}\).

We use the following claim.

Claim 4.20

Let \(C=((V^\mathsf {A} ,O^{\mathsf {A} }),(V^\mathsf {B} ,O^{\mathsf {B} }))\) be a balanced channel that has \(\alpha \in [\alpha _{\min }, \alpha _{\max }]\)-agreement and \((\epsilon ,\delta )\)-\({\text {leakage}}\). Then C is an \((\epsilon _0,p)\)-\({\text {WBSC}}\) for some \(\epsilon _0 \in [ \tfrac{1}{2}-\alpha _{\max },\tfrac{1}{2}-\alpha _{\min }]\), and \(p=2\epsilon +\delta \).

Proof

For every \(\mathsf {P} \in \left\{ \mathsf {A} ,\mathsf {B} \right\} \) we have that \(V^{\mathsf {P} }|_{O^{\mathsf {A} }=O^{\mathsf {B} }} \mathbin {{\mathop {\approx }\limits ^\mathrm{R}}}_{(\epsilon ,\delta )} V^{\mathsf {P} }|_{O^{\mathsf {A} }\ne O^{\mathsf {B} }}\); thus, by definition, it follows that \(V^{\mathsf {P} }|_{O^{\mathsf {A} }=O^{\mathsf {B} }}\mathbin {{\mathop {\approx }\limits ^\mathrm{S}}}_{(2\epsilon +\delta )} V^{\mathsf {P} }|_{O^{\mathsf {A} }\ne O^{\mathsf {B} }}\), and the claim holds.
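The step from \((\epsilon ,\delta )\) log-ratio closeness to \((2\epsilon +\delta )\) statistical closeness can be illustrated on a toy pair of distributions whose pointwise log-probability-ratio is at most \(\epsilon \) everywhere (so \(\delta =0\)). The construction below is our own example, assuming that reading of the log-ratio definition:

```python
import math

def stat_dist(p, q):
    """Statistical (total variation) distance between two finite distributions."""
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

def max_log_ratio(p, q):
    return max(abs(math.log(a / b)) for a, b in zip(p, q))

eps = 0.1
t = (1 - math.exp(-eps)) / 2   # largest shift keeping all log-ratios within eps
p, q = [0.5 + t, 0.5 - t], [0.5, 0.5]
assert max_log_ratio(p, q) <= eps + 1e-12
sd = stat_dist(p, q)
print(sd, 2 * eps)  # SD is well within the 2*eps + delta bound (here delta = 0)
```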

The following claim states that given a channel with bounded leakage and agreement, we can construct a new protocol, using the old one, that has the same leakage and agreement while additionally being balanced.

Claim 4.21

There exists a constant-time single-oracle-call protocol \(\varDelta \) such that for every channel C, the channel \(\widetilde{C}\) induced by \(\varDelta ^C\) is balanced and has the same agreement and leakage as C.

Protocol 4.22

[Protocol \(\varDelta = (\widetilde{\mathsf {A} },\widetilde{\mathsf {B} })\) ]

  • Oracle: Channel C.

  • Operation:

  1. The parties (jointly) call the channel C. Let \(o^{\mathsf {A} }\) and \(o^{\mathsf {B} }\) denote their outputs, respectively.

  2. \(\widetilde{\mathsf {A} }\) sends \(r \leftarrow \left\{ 0,1\right\} \) to \(\widetilde{\mathsf {B} }\).

  3. \(\widetilde{\mathsf {A} }\) outputs \(o^{\mathsf {A} } \oplus r\) and \(\widetilde{\mathsf {B} }\) outputs \( o^{\mathsf {B} } \oplus r\).

Proof

(Proof of Claim 4.21). Let \(\widetilde{C}={\text {CHN}} (\varDelta ^C)\). By construction, \(\widetilde{C}\) is balanced and has the same agreement as C. Finally, by a data processing argument, \(\widetilde{C}\) has the same leakage as C.
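A minimal simulation of Protocol 4.22 over a toy biased channel (an illustrative model of ours) shows both properties at once: the induced channel is balanced, while the agreement is preserved.

```python
import random

def biased_channel(alpha, bias=0.8):
    """Toy unbalanced channel: A's output is 1 with probability `bias`,
    and the two outputs agree with probability 1/2 + alpha."""
    o_a = 1 if random.random() < bias else 0
    o_b = o_a if random.random() < 0.5 + alpha else 1 - o_a
    return o_a, o_b

def protocol_4_22(alpha):
    o_a, o_b = biased_channel(alpha)
    r = random.randint(0, 1)      # A's random masking bit, sent to B
    return o_a ^ r, o_b ^ r       # XOR-ing a shared bit preserves (dis)agreement

random.seed(3)
alpha = 0.1
runs = [protocol_4_22(alpha) for _ in range(40000)]
p_one = sum(a for a, _ in runs) / len(runs)        # should be close to 1/2
agree = sum(a == b for a, b in runs) / len(runs)   # should be close to 1/2 + alpha
print(p_one, agree)
```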

Proving Lemma 4.6.

Proof

(Proof of Lemma 4.6). Set \(n = 10^8\), let \(C = {\text {CHN}} (\varPi )\), let \(\varPi ' = \mathsf {bound} _{n\cdot t}(\varPi )\) and let \(C'= {\text {CHN}} (\varPi ')\). By Markov's inequality,

$$\begin{aligned} C' \mathbin {{\mathop {\approx }\limits ^\mathrm{S}}}_{1/n} C \end{aligned}$$
(10)

By Claim 4.21, there exists a protocol \(\varDelta \) such that \(\varDelta ^C\) is balanced and has the same leakage and agreement as C. Moreover, since \(\varDelta \) makes only one call to the channel C, by a data processing argument,

$$\begin{aligned} {\text {CHN}} (\varDelta ^{C'}) \mathbin {{\mathop {\approx }\limits ^\mathrm{S}}}_{1/n} {\text {CHN}} (\varDelta ^C) \end{aligned}$$
(11)

By Claim 4.21, \(\varDelta ^{C'}\) is also balanced. Claim 4.20 yields that \(\varDelta ^C\) is an \((\epsilon ',p)\)-\({\text {WBSC}}\) for some \(\epsilon '\in [1/8,15/32]\) and \(p=2\epsilon +\delta \). Hence, using Proposition 3.2, we get that \(\varDelta ^{C'}\) is an \((\epsilon _0,\overline{p})\)-\({\text {WBSC}}\), for \(\epsilon _0 = \epsilon '+1/10^8\) and \(\overline{p}=p+4/10^7\).

In the following we use Theorem 3.14 to show that \(\varDelta ^{C'}\) can be used to construct semi-honest statistically secure OT. To do this, we need to prove that

$$\begin{aligned} 150(1-(1-2\overline{p})^2)<(1-\frac{2\epsilon _0^2}{\epsilon _0^2+(1-\epsilon _0)^2})^2 \end{aligned}$$
(12)

Indeed, since \((1-2\frac{\epsilon _0^2}{\epsilon _0^2+(1-\epsilon _0)^2})^2\ge 1/100\), for \(\delta '=1/10^{7}\) it holds that, for small enough c,

$$\begin{aligned} (1-(1-2\overline{p})^2) \le 4\overline{p}&\le 4p+2/10^6 \le 8\epsilon + 4\delta +2/10^6\\&\le 12c+2/10^6\nonumber \\&<1/(150\cdot 100).\nonumber \end{aligned}$$
(13)

Therefore, \(\varDelta ^{C'}\) satisfies the requirements of Theorem 3.14. Let \(\varGamma \) be the protocol guaranteed in Theorem 3.14, and let \(\widetilde{\varGamma }^{C'}(1^\kappa ) = \varGamma ^{\varDelta ^{C'}}(1^\kappa , 49/100)\). By Eq. (12) and Theorem 3.14, \(\widetilde{\varGamma }^{C'}(1^\kappa )\) is a statistically secure semi-honest OT. Since \(\epsilon _0\) is bounded away from 0 and 1/2 by constants, the running time of \(\widetilde{\varGamma }\) is polynomial in \(\kappa \).
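The arithmetic behind Eqs. (12) and (13) can be checked directly; the value of c below is an illustrative choice (the lemma only requires c to be small enough), and the noise rate is taken at the worst case \(\epsilon _0 = 15/32\):

```python
import math

def lhs(p_bar):
    """Left-hand side of Eq. (12)."""
    return 150 * (1 - (1 - 2 * p_bar)**2)

def rhs(eps0):
    """Right-hand side of Eq. (12)."""
    return (1 - 2 * eps0**2 / (eps0**2 + (1 - eps0)**2))**2

c = 1e-6                    # illustrative; any small enough c works
eps, delta = c, c           # worst case under eps, delta <= c
p = 2 * eps + delta         # from Claim 4.20
p_bar = p + 4 / 10**7       # loss from the truncation step
eps0 = 15 / 32              # worst-case noise rate
# The identity 1 - (1 - 2p)^2 = 4p(1 - p) <= 4p starts the chain in Eq. (13).
assert math.isclose(1 - (1 - 2 * p_bar)**2, 4 * p_bar * (1 - p_bar))
assert rhs(eps0) >= 1 / 100     # the bound used in the text
assert lhs(p_bar) < rhs(eps0)   # Eq. (12) holds for this c
print("Eq. (12) verified for c =", c)
```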

4.2 The Computational Setting

Due to space limitations, the remainder of this section appears in the full version of this paper [23].

Conclusion and Open Problems

A natural open problem is to characterize the (Boolean) AND differentially private functionality. That is, to show a similar dichotomy that characterizes which accuracy and leakage require OT.

More generally, the task of understanding and characterizing other (non-Boolean) differentially private functionalities, such as Hamming distance and inner product, remains open.