
1 Introduction

Secure multi-party computation [27, 57] enables mutually distrusting parties to securely compute a function of their private data. General secure computation is impossible in the information-theoretic plain model for most cryptographically interesting functionalities, even when parties are semi-honest [3, 31, 36, 41–43]. This necessitates restrictions on the power of the adversaries, for example, honest majority [6, 12, 21, 50], computational hardness assumptions [27, 33] or physical cryptographic resources, like noisy channels [4, 17, 19, 37, 38], correlated private randomness [19, 38, 44, 54], trusted resources [10, 34] or tamper-proof hardware [11, 23, 28, 35, 46].

Using cryptographic resources like noisy channels, it is possible to securely compute arbitrary functionalities with unconditional security guarantees, even against malicious computationally unbounded adversaries [4, 17, 19, 37, 38]. Aside from unconditional security, this line of work also offers advantages in efficiency [5, 45, 48]. Additionally, all invocations of the noisy channel can be performed in an offline phase that is independent of the target functionality to be securely computed [54]. But the security analysis of these protocols crucially hinges on accurate knowledge of the channel characteristic. An inaccurately estimated or, even worse, adversarially determined channel characteristic can violate the security guarantees of known secure computation protocols that rely on noisy channels. We broadly call such channels unreliable noisy channels.

Over the last three decades, considerable effort has been focused on performing information-theoretic secure multi-party computation using unreliable noisy channels, but with limited success. Weak forms of oblivious transfer (OT) [7, 8, 17, 22, 55] and noisy channels [16, 19, 20, 22, 47, 55, 56] have been leveraged to perform secure computation with strong security guarantees, but only for limited settings of parameters. For example, the notion of an unfair noisy channel allows both the adversarial sender and the receiver to increase their knowledge of the other party’s outputs or inputs to the channel. This model captures extremely general physical systems. Unfortunately, strong impossibility results exist for unfair channels [22], thus significantly limiting the potential set of feasible parameters (see Fig. 1).

Fig. 1.

Parameter space for unfair binary symmetric channels. The honest channel flips the input symbol with probability \(\alpha \), where \(0<\alpha <1/2\). Both the sender and the receiver can make the channel more reliable with flip probability \(\beta \), where \(0<\beta \leqslant \alpha \).

Faced with these daunting impossibility results, in this work we ask whether security is possible in meaningful relaxations of the unfair noisy channel model. In particular, we study an unreliable noisy channel model, namely elastic noisy channels, where only one party, either the receiver or sender, but not both, can increase their knowledge of the other party’s inputs and outputs to the channel. We show that an elastic noisy channel with sender advantage is equivalent to an elastic noisy channel with receiver advantage (see Sect. 5), and thus in the sequel, we focus on the case where the receiver can increase its knowledge of the sender’s inputs to the channel. Such a study is motivated, for example, by transmission and reception of information over physical wireless channels between physically separated parties. This is because in physical wireless systems, thermal noise is always present at the receiver’s end and cannot be observed by a physically distant sender. Thus, the sender, even if malicious, cannot anticipate the entire error introduced at the receiver antenna. However, an adversarial receiver, on the other hand, can install a large super-cooled antenna to make its reception more reliable than the reception available to an honest receiver that uses an inexpensive antenna.

While this scenario is one example, our study is primarily motivated from a theoretical standpoint, in the face of severe impossibility results for the full unfair channel setting, where very little progress has been made despite decades of research. Interestingly, our elastic channel model avoids the impossibility results of [22] and, hence, holds the promise of yielding secure multi-party computation protocols based on a wide range of parameters. Nevertheless, previous works achieve only quite weak results in the elastic noisy channel setting.

Our main result pertains to the realization of information-theoretic secure multi-party computation using \({(\alpha ,\beta )}\text {-}\mathsf {BSC} \), a binary symmetric channel where, informally, an honest receiver obtains the sender’s input bit flipped with probability \(\alpha \), while the adversarial receiver obtains the sender’s input bit flipped only with probability \(\beta \), where \(0<\beta \leqslant \alpha <1/2\). Figure 2 shows the set of feasible parameters that can be achieved using the best previous techniques of [22, 55]. The figure also illustrates the much larger set of \((\alpha ,\beta )\) pairs for which it is possible to achieve secure multi-party computation on \({(\alpha ,\beta )}\text {-}\mathsf {BSC} \) using the techniques we develop in this paper. As a concrete example, if the best antenna in the market incurs only \(5\,\%\) error, then prior techniques need to assume that the honest receiver incurs at most \(14\,\%\) error. Our protocols, on the other hand, work even when the honest reception error is as high as \(30\,\%\).

New Ideas. The crux of this significant gain in feasibility parameters is a new perspective on how to securely realize OT from unreliable noisy channels. Over the last several decades, a common underlying theme of previous constructions has been a reduction from unreliable noisy channels to weak OT using two-repetition of the underlying channel and the rejection sampling technique of [17] and, subsequently, amplifying the weak OT to a full-fledged OT [17, 22, 55]. The first reduction in this approach, we find, leads to a significant loss in parameters. We, instead, reduce from unreliable noisy channels to a correlated private randomness that provides extremely weak guarantees and ensures only a rudimentary essence of OT. In this respect, as a departure from prior techniques, our target correlated private randomness is closer to the notion of universal OT as proposed by Cachin [8]. Then, we morph this elemental correlated private randomness into a weak variant of OT using the weak converse of Shannon’s Channel Coding Theorem [26, 52] as utilized by [40] and fuzzy extractors [24]. Next, this weak variant of OT is amplified to (full-fledged) OT using techniques similar to those proposed in [55]. Section 1.2 provides a summary of our technical contributions and intuition of the protocol designs.

Looking ahead, we believe that the techniques introduced in this paper are of independent interest and are likely to find use in other areas of cryptography where noisy channels are analyzed.

Fig. 2.

Space of parameters \((\beta ,\alpha )\), where \(0<\beta \leqslant \alpha <1/2\), for which we construct a secure computation protocol from \({(\alpha ,\beta )}\text {-}\mathsf {BSC} \). The smaller dark region is the space for which such protocols can be obtained using prior techniques from [22, 55] combined.

1.1 Our Contributions

Our main contribution is to design protocols that securely realize oblivious transfer and therefore secure multi-party computation, from elastic binary symmetric channels. Before summarizing our results, we explain the notion of elastic channels.

Elastic Channels. We will model elastic variants of noisy channels as consisting of a pair of noisy channels where the channel for the honest receiver is a degradation of the channel for the adversarial receiver. In general, we view an \((\alpha , \beta )\)-\(\mathsf {BSC} \) as a pair of channels, such that the honest receiver has reception over a \(\mathsf {BSC} \) with flip probability \(\alpha \), and an adversarial receiver has reception over a \(\mathsf {BSC} \) with flip probability \(\beta \leqslant \alpha \).

General Secure Computation. We prove that general secure computation is possible for a large range of parameters of elastic binary symmetric channels. In particular, we obtain oblivious transfer (OT) using elastic noisy channels, and then the OT functionality can be used to obtain general secure computation [10, 27, 36, 57]. Our main theorem is as follows:

Theorem 1

(Elastic BSC Completeness). There exists a universal constant \(c \in (0,1)\), such that for all \(0<\beta \leqslant \alpha <1/2\), if \(\alpha < \left( 1 + \left( 4\beta (1-\beta )\right) ^{-1/2}\right) ^{-1}\) then there exists a protocol \(\varPi _{\alpha ,\beta }\) such that, \(\varPi _{\alpha ,\beta }\) securely realizes the OT functionality \({\mathcal F} _{\mathsf {OT}}\) when given access to \(\left( {(\alpha ,\beta )}\text {-}\mathsf {BSC} \right) ^{\otimes {\kappa }}\) channels with at most \(2^{-{\kappa } ^c}\) simulation error, where \({\kappa }\) is the security parameter, with information-theoretic unconditional security against malicious adversaries.

Refer to Fig. 2 for a summary of the parameter space in Theorem 1 and a comparison of our results with results from previous work. Henceforth, we will use \(\ell (\beta ) {\;{:}=\;} \left( 1+\left( 4\beta (1-\beta )\right) ^{-1/2}\right) ^{-1}\).
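As a quick numerical sanity check (the helper name `ell` is ours, not from the paper), the limiting curve can be evaluated directly; for \(\beta = 0.05\) it reproduces the roughly \(30\,\%\) honest-error tolerance quoted in the introduction.

```python
def ell(beta: float) -> float:
    # Limiting curve from Theorem 1: l(beta) = (1 + (4*beta*(1-beta))^(-1/2))^(-1).
    return 1.0 / (1.0 + (4.0 * beta * (1.0 - beta)) ** -0.5)

# An adversarial reception error of 5% allows honest error up to ~30%:
print(round(ell(0.05), 3))  # → 0.304
```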

In addition to elastic noisy channels, both parties also communicate over reliable communication channels in our protocols. These reliable channels can be constructed from the (elastic) noisy channels themselves via standard techniques from error-correcting codes (e.g., using polar codes [1, 2, 29]).

Furthermore, we can strengthen our completeness theorems using techniques from [32, 34, 40] to achieve constant rate: that is, our protocols can produce \(\varTheta ({\kappa })\) OTs with only \(O({\kappa })\) total communication and only \(O({\kappa })\) calls to the underlying elastic binary symmetric channels.

Corollary 1

(Constant Rate Elastic BSC Completeness). For all \(0<\beta \leqslant \alpha <1/2\), if \(\alpha < \left( 1 + \left( 4\beta (1-\beta )\right) ^{-1/2}\right) ^{-1}\) then, there exists a protocol \(\varPi _{\alpha ,\beta }\) and constants \(c_{\alpha ,\beta },d_{\alpha ,\beta }\) such that, \(\varPi _{\alpha ,\beta }\) securely realizes \({\mathcal F} _{\mathsf {OT}} ^{\otimes m}\) when given access to \(\left( {(\alpha ,\beta )}\text {-}\mathsf {BSC} \right) ^{\otimes {\kappa }}\) channels with at most \(2^{-{\kappa } ^{c_{\alpha ,\beta }}}\) simulation error and \(m=d_{\alpha ,\beta }{\kappa } \).

1.2 Technical Overview

While our protocols have many ingredients and require a careful analysis, in this section we try to explain the core ideas in our scheme.

A New Take on Previous Approaches. We begin by re-interpreting previous approaches to realize oblivious transfer from noisy channels. Our new understanding of these methods helps abstract out their essence and better illustrate the bottlenecks in our setting. Then, we develop key ideas to achieve oblivious transfer even from channels with adversarial receiver-controlled characteristic, for a large range of parameters of such channels.

To obtain OT from a perfect \(\mathsf {BSC} \), a natural starting point is to have the sender pick appropriate codewords (typically simple repetition codes) and send them over the \(\mathsf {BSC} \) to the receiver. The receiver must then partition the received outputs into two sets establishing two “virtual” channels with the following property: There exists a threshold R, such that one of the virtual channels has capacity \(C^* > R\), while the other channel has capacity \(\tilde{C} < R\). Moreover, the sender will be unable to tell which virtual channel is which.

In the protocol, the sender pushes information across the virtual channels at rate equal to R. The receiver recovers the information that is transmitted over the virtual channel with capacity \(C^* > R\). But, he incurs errors decoding the information transmitted over the virtual channel with capacity \(\tilde{C} < R\) because the weak converse of Shannon’s Channel Coding Theorem [26, 52] kicks in. This decoding error can be amplified using fuzzy extractors [24], to completely erase the other message and guarantee statistical hiding.

But, we would like to design protocols that remain secure even given an \((\alpha , \beta )\text {-}\mathsf {BSC}\). In the following, we will use \({\alpha }\text {-}\mathsf {BSC} \) to denote the channel used by the honest receiver; and \({\beta }\text {-}\mathsf {BSC} \) to denote the channel used by the adversarial receiver. Intuitively, the correctness of our protocol needs to be ensured even for an honest receiver who uses a channel prescribed as the “minimum system requirement” of the protocol description (the \({\alpha }\text {-}\mathsf {BSC} \)). We also require that the same protocol be secure even against an adversarial receiver who can reduce the noise level significantly (using the \({\beta }\text {-}\mathsf {BSC} \)). Again, we will think of the problem as forcing the receiver to establish two virtual channels of noticeably different capacities. We require the capacity \(C^*\) of the better virtual channel established by the receiver using \({\alpha }\text {-}\mathsf {BSC} \), to be higher than the capacity \(\tilde{C}\) of the worse virtual channel established by any adversarial receiver using the \({\beta }\text {-}\mathsf {BSC} \). The sender will code at a suitable rate intermediate to \(C^*\) and \(\tilde{C}\). Then, more information will be received over the \(C^*\) capacity channel in the honest scenario, than the information received over one of the two virtual channels (of capacity at most \(\tilde{C}\)) created by the adversarial receiver. This will give oblivious transfer.

Challenges in Our Setting. Let us re-examine our quantitative goal: Suppose the error of the best (adversarial) receiver in the market is \(2\,\%\), but honest receivers have \(20\,\%\) error. The adversarial receiver can obtain much more information than the honest receiver, without the sender’s knowledge. Yet, we want to establish two virtual channels such that the capacity of the better virtual channel established using the \({\alpha }\text {-}\mathsf {BSC} \), is higher than the capacity of the worse virtual channel established by any adversarial receiver using the \({\beta }\text {-}\mathsf {BSC} \). Such an adversarial receiver is allowed to behave arbitrarily, in particular, it could distribute its total capacity equally between the two channels. Ensuring a capacity gap between the better honest and the worse adversarial capacities in this situation seems to be a tall order. Indeed, previously the results of Wullschleger [55] could achieve this gap only if the honest receiver had an error of at most \(9\,\%\).

Towards a Solution. Our first step is to try and relax this goal. Instead of directly shooting for 2-choose-1 oblivious transfer, we try to obtain a weaker form of oblivious transfer, namely \((n, 1, n-1)\) OT, where a sender has n messages, an honest receiver gets to choose 1 message, but a dishonest receiver gets \(n-1\) messages of his choice. The sender gets no output. Using the ‘virtual channel’ intuition presented above, we want the receiver to set up n virtual channels (for some constant n), with a threshold R such that at least one of the n virtual channels set up by the honest receiver has capacity \(C^* > R\), while at least one of the n virtual channels set up by the adversarial receiver has capacity \(\tilde{C} < R\). At this point, we have divided our objective into the following two sub-problems:

  1. Reduce \((n, 1, n-1)\) OT to \((\alpha , \beta )\text {-}\mathsf {BSC} \)

  2. Reduce 2-choose-1 OT to \((n, 1, n-1)\) OT

The second result has been considered in the works of [18, 51] and can also be demonstrated using techniques presented in [20, 22, 55] for the setting of weak erasure channels. While this reduction is not the focus of our work, for completeness we provide a protocol securely realizing OT from \((n, 1, n-1)\) OT in the full version, achieving security against malicious adversaries.

Now our main goal is to demonstrate the first reduction. Our next question is: what could be some reasonable ways to take an \((\alpha , \beta )\text {-}\mathsf {BSC} \) and build several virtual channels out of it with varying reliabilities?

A New Kind of Channel Decomposition. A logical starting point is to have the sender send \(\lambda \) repetitions of his bit over fresh instantiations of the \((\alpha , \beta )\text {-}\mathsf {BSC}\), and list all possible outputs obtained by the receiver. Each possible output could be used by the receiver to define a “virtual channel”. On sending \(\lambda \) repetitions of a bit b, if the receiver obtains \(\lambda \) identical bits, then his confidence about the original bit b is extremely high. This is the most reliable channel, and will be set to be the choice channel (with capacity \(C^*\)) by the honest receiver.

Since errors are independently added at each invocation of the \((\alpha , \beta )\text {-}\mathsf {BSC} \), all receiver outputs with the same number of zeroes, irrespective of the positions of these zeroes, convey the same amount of information to the receiver. Thus, such outputs can be classified into the same equivalence class/virtual channel. Furthermore, for \(\eta \in \{0, 1, \cdots , \lfloor \lambda /2\rfloor \}\), let \({\mathbb S} _{\eta }\) denote all output strings with either \(\eta \) zeroes, or \(\eta \) ones. That is, \({\mathbb S} _\eta \) includes all pairs of output strings of the form \(\{0^{\eta }1^{\lambda -\eta }\), \(0^{\lambda -\eta }1^{\eta }\}\) and their permutations. This results in the creation of \(\lfloor \frac{\lambda }{2}\rfloor + 1\) binary symmetric channels of noticeably different capacities, such that the ‘best’ virtual channel of an honest receiver consists of outputs solely from \({\mathbb S} _0\). It is easy to see that the sender, who gets no output from the \(\mathsf {BSC} \), cannot distinguish between the various virtual channels created by the receiver.
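This decomposition can be made concrete with a small sketch of ours (illustrative only; we take \(\lambda = 3\) and odd \(\lambda \) so that majority decoding is unambiguous): it groups the \(2^\lambda \) possible outputs into the classes \({\mathbb S} _\eta \) and computes each resulting sub-channel's probability and flip probability.

```python
from itertools import product

def decompose(lam: int, p: float):
    """For lam repetitions of a bit over a BSC(p), group outputs into
    classes S_eta (eta = number of minority symbols) and return, per
    class, (probability of landing in the class, flip probability).
    Assumes lam is odd so majority decoding has no ties."""
    classes = {}
    for y in product([0, 1], repeat=lam):
        k = sum(y)                          # ones are flips, given input 0
        eta = min(k, lam - k)
        pr = p ** k * (1 - p) ** (lam - k)  # P(this output | input 0)
        mass, err = classes.get(eta, (0.0, 0.0))
        # majority-decoding to 1 is an error when the input was 0:
        classes[eta] = (mass + pr, err + (pr if k > lam - k else 0.0))
    return {eta: (mass, err / mass) for eta, (mass, err) in classes.items()}

sub = decompose(3, 0.2)
```

Note that, as in the text, the sub-channel \({\mathbb S} _0\) (all symbols identical) has by far the smallest flip probability, while classes with larger \(\eta \) are noisier.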

For security against an adversarial receiver, it suffices to ensure that the capacity of the virtual channel created using values in \({\mathbb S} _0\) corresponding to the \({\alpha }\text {-}\mathsf {BSC} \), is higher than the average capacity (over all possible channels) over all the outputs assembled by an adversarial receiver when he uses the \({\beta }\text {-}\mathsf {BSC} \). We note that the receiver is never allowed to discard any of the outputs he received; he must necessarily divide and distribute them all into his virtual channels.

On analyzing this approach, we find that, in fact, as we increase \(\lambda \), the situation improves for many parameters \(\alpha , \beta \). While both the average adversarial and the best honest capacities increase as \(\lambda \) increases, the best honest capacity increases faster. Eventually, the best honest capacity becomes better than the average adversarial capacity and we obtain the following result (see Fig. 4 for an example illustration of this phenomenon): For any constants \(0< \beta \leqslant \alpha < {\left( 1 + {\big (4\beta (1-\beta )\big )}^{-1/2}\right) }^{-1}\), there exists an efficiently computable constant \(\lambda \in {\mathbb N} \) for which the above property holds. Figure 3 plots the space of these parameters for various values of \(\lambda \) and the limiting curve \(\ell (\beta )\).
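The capacity comparison behind this claim can be sketched numerically (a simplified calculation of ours, restricted to odd \(\lambda \) and using the standard BSC capacity formula \(1 - h(\varepsilon )\)); for \((\alpha ,\beta )=(1/3,1/6)\), the honest \({\mathbb S} _0\) capacity overtakes the average adversarial capacity exactly at \(\lambda = 7\), matching the caption of Fig. 4.

```python
from math import comb, log2

def h(x: float) -> float:
    return 0.0 if x in (0.0, 1.0) else -x * log2(x) - (1 - x) * log2(1 - x)

def best_honest_capacity(alpha: float, lam: int) -> float:
    # Sub-channel S_0: all-identical outputs of lam repetitions over BSC(alpha).
    e0 = alpha ** lam / (alpha ** lam + (1 - alpha) ** lam)
    return 1 - h(e0)

def avg_adversarial_capacity(beta: float, lam: int) -> float:
    # Expected capacity over sub-channels S_eta under BSC(beta); lam odd.
    total = 0.0
    for eta in range((lam + 1) // 2):
        a = beta ** eta * (1 - beta) ** (lam - eta)
        b = beta ** (lam - eta) * (1 - beta) ** eta
        q = comb(lam, eta) * (a + b)          # probability of class S_eta
        total += q * (1 - h(b / (a + b)))     # b/(a+b) is its flip probability
    return total

alpha, beta = 1 / 3, 1 / 6
gap = [best_honest_capacity(alpha, l) > avg_adversarial_capacity(beta, l)
       for l in (1, 3, 5, 7)]   # C* exceeds the adversarial average only at lam=7
```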

Fig. 3.

For \(\lambda \in \{2^1,\cdots ,2^7\}\), the space of points \((\beta ,\alpha )\) for which the capacity of the virtual channel created using values in \({\mathbb S} _0\) corresponding to the \({\alpha }\text {-}\mathsf {BSC} \) is higher than the average capacity (over all possible channels) of the outputs assembled by an adversarial receiver using the \({\beta }\text {-}\mathsf {BSC} \). The limiting curve \(\ell {\left( \beta \right) } \) is also plotted.

Although this completes our high-level overview, making these ideas work requires a careful use of the weak converse of Shannon’s Channel Coding Theorem, fuzzy extractors and other protocol tools, as well as a careful setting of parameters. Refer to Sect. 3 for more details about our construction.

Commitments. En route to proving Theorem 1, we show that it is possible to obtain string commitments from any \((\alpha , \beta )\text {-}\mathsf {BSC} \), where \(0< \beta \leqslant \alpha <1\). Using techniques from [32, 34, 40], we can also obtain string commitments at a constant rate. We stress that we can obtain commitments from any \((\alpha , \beta )\) elastic \(\mathsf {BSC} \) for all parameters \(0< \beta \leqslant \alpha < 1\), unlike our completeness result. Our result is formally stated in the following theorem:

Theorem 2

There exists a universal constant \(c \in (0,1)\), such that for all \(0<\beta \leqslant \alpha <1/2\), there exist a protocol \(\varPi _{\alpha ,\beta }\) and a constant \(d \in (0,1)\) such that \(\varPi _{\alpha ,\beta }\) securely realizes the string commitment functionality for strings of length \(d{\kappa } \), \({\mathcal F} _{\mathsf {com}} (d{\kappa })\), when given access to \(\left( {(\alpha ,\beta )}\text {-}\mathsf {BSC} \right) ^{\otimes {\kappa }}\) channels, with at most \(2^{-{\kappa } ^c}\) simulation error, where \({\kappa }\) is the security parameter, with information-theoretic unconditional security against malicious adversaries.

On Adversarial Senders. Finally, we note that noisy channels where only the sender can make the transmission more reliable (that is, sender-elastic binary symmetric channels) reduce to the case of elastic noisy channels with an adversarial receiver (receiver-elastic channels), via a one-to-one transformation presented in Sect. 5. This transformation is optimal and tight.

Fig. 4.

Obtaining best honest capacity \(C^*\) higher than average adversarial capacity \(\widetilde{C}\) for \({(\alpha ,\beta )}\text {-}\mathsf {BSC} \), where \((\alpha ,\beta )=(1/3,1/6)\). Each graph represents the capacity profile of sub-channels in the decomposition of \((V,\widehat{V})\), where \(\lambda \in \{1,\cdots ,7\}\). The lighter bars denote the adversarial receiver case and the darker bars represent the honest receiver case. When \(\lambda =7\), \(C^* > \widetilde{C}\).

1.3 Prior Work

There is a large body of literature on constructing secure computation based on noisy channels [16, 17, 19, 32, 38–40]. An elastic noisy channel, whose characteristic can be altered by adversarial parties, cannot be modeled as a functionality considered by the completeness theorems of [38, 40, 44]. However, the following channels in the literature are related to the notion of elastic channels.

  • Unfair Noisy Channels. Unfair noisy channels were formally defined by Damgård et al. [22]: in an unfair noisy channel, both the sender and the receiver can change the channel characteristic. Furthermore, the work of [22] showed strong impossibility results in this model. Several works considered performing secure computation from such unfair noisy channels [16, 17, 19, 20, 22, 55, 56]. The feasibility parameters achieved by these works are a small fraction of the parameters not covered by the impossibility result of [22].

  • Weak OT with one-sided leakage. The closest notion to elastic channels is that of weak OT by Wullschleger [55]. This is an oblivious transfer which allows either sender or receiver leakage, but not both. It also allows incorrect output with some probability. It was shown in [55] that OT reduces to weak OT with one-sided leakage for a subset of leakage and error parameters. It is possible to reduce such a weak OT to elastic noisy channels via the techniques in [20, 22, 56]. To our knowledge, these give the best known completeness results using techniques implicit in prior work, in the setting of elastic \(\mathsf {BSC} \). These parameters are denoted as ‘Best Prior Work’ in Fig. 2.

Comparison of Techniques. Prior works on unfair noisy channels rely on the technique of [17] which invokes the channel twice to transmit a 2-repetition of the input bit. This implements an erroneous version of unfair oblivious transfer. Subsequently, this erroneous unfair OT is amplified to full-fledged OT. Surprisingly, we find that the first reduction in this approach is significantly lossy in parameters, especially when applied to the setting of elastic channels.

Thus, in a departure from previous techniques, our first target is to obtain a set of \(n \geqslant 2\) channels where the honest receiver can obtain information on at least one channel, while even an adversarial receiver cannot obtain information on more than \(n-1\) channels. To realize such channels, we do not restrict ourselves to 2-repetitions only. A comparison of our parameter space against previous work is illustrated in Fig. 2.

2 Preliminaries

In this section, we introduce some basic definitions and notation, and recall some preliminaries for use in the paper.

Throughout the paper, \({\kappa }\) will denote the security parameter. We represent the set \(\{1,\cdots ,n\}\) by [n]. The set of all size-k subsets of a set S is represented by \({ \left( \begin{matrix} {S}\\ {k} \end{matrix}\right) }\). A vector of length n is represented by \((x_1,\cdots ,x_n)=x_{[n]}\). For \(S=\{i_1,\cdots ,i_{\left| {S}\right| }\}\subseteq [n]\), we represent \(x_S = (x_{i_1},\cdots ,x_{i_{\left| {S}\right| }})\). We use \(\text {Ber} (p)\) to represent a sample from a Bernoulli distribution with parameter p.

2.1 Elastic Functionalities

We model elastic variants of noisy channels as a pair of noisy channels where the channel for the honest receiver is a degradation of the channel for the adversarial receiver. The input (say, bit b) is first transmitted over a more reliable (adversarial) channel to obtain leakage z. Then, z is transmitted over a second channel (z is further degraded) to obtain the honest receiver output \(\tilde{b}\), such that \(\tilde{b}\) is, effectively, the result of transmitting b over a less reliable channel. The honest receiver obtains output \(\tilde{b}\) and the adversarial receiver obtains the leakage z as well as \(\tilde{b}\). Note that in our modeling, the leakage z is strictly more informative than the honest receiver output \(\tilde{b}\). This is exactly why we chose to model elastic channels as degradation channels, as it allows a more intuitive analysis. We formalize this notion, as follows, for specific instances of elastic noisy channels.

Definition 1

(Elastic Binary Symmetric Channel). Let \(\text {Ber} (p)\) be a sample from a Bernoulli distribution with parameter p. For any \(0<\beta \leqslant \alpha <1/2\), an \({(\alpha ,\beta )}\text {-}\mathsf {BSC} \) channel is defined as follows.

figure a

Let B, Z and \(\tilde{B}\) be the random variables corresponding to b, z and \(\tilde{b}\), respectively. We have \(\tilde{B} = B \oplus \text {Ber} (\alpha )\) and \(Z = B \oplus \text {Ber} (\beta )\), such that \(B \rightarrow Z \rightarrow \tilde{B}\).
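As an illustration (a sketch of ours, not part of the paper's formal definition), the degradation chain can be sampled directly: the residual noise \(\gamma \) of the second channel must satisfy \(\beta (1-\gamma ) + (1-\beta )\gamma = \alpha \), i.e. \(\gamma = (\alpha -\beta )/(1-2\beta )\).

```python
import random

def residual_noise(alpha: float, beta: float) -> float:
    # gamma solving beta*(1-gamma) + (1-beta)*gamma = alpha, so that
    # degrading BSC(beta) output by BSC(gamma) yields BSC(alpha) overall.
    return (alpha - beta) / (1 - 2 * beta)

def elastic_bsc(b: int, alpha: float, beta: float, rng: random.Random):
    """One use of an (alpha, beta)-BSC as the chain B -> Z -> B~."""
    assert 0 < beta <= alpha < 0.5
    z = b ^ (rng.random() < beta)                          # adversarial view
    bt = z ^ (rng.random() < residual_noise(alpha, beta))  # honest output
    return z, bt

z, bt = elastic_bsc(0, 1 / 3, 1 / 6, random.Random(0))
```

For \((\alpha ,\beta )=(1/3,1/6)\) the residual noise is \(\gamma = 1/4\), and the composed flip probability is indeed \(\alpha = 1/3\).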

Definition 2

( \( {(n,k,\ell )}\text {-} \mathsf {OT} \) ). For \(0<k\leqslant \ell <n\), \( {(n,k,\ell )}\text {-} \mathsf {OT} \) is defined as:

figure b

2-choose-1 bit OT is equivalent to \( {(2,1,1)}\text {-} \mathsf {OT} \).

2.2 Basic Information Theory

Entropy. The entropy of a distribution X is defined as \(H(X) = \mathop {{\mathbb E}} \nolimits _{x\sim X}[ -\lg \mathop {\mathsf {P}} \nolimits _{x'\sim X}[x'=x]]\). Given a joint distribution (X, Y), the mutual information is: \(I(X;Y) = H(X) + H(Y) - H(X,Y)\).

Channel Capacity. The capacity of a channel W is defined to be \(I(W) = \max _{X} I(X;W(X))\), where X is any probability distribution over the input space. If W is output symmetric, then \(I(W) = I(U;W(U))\), where U is the uniform distribution over the input space.

For \(0\leqslant \varepsilon \leqslant 1\), the capacity of \( {\varepsilon }\text {-} \mathsf {BEC} \) is \(I( {\varepsilon }\text {-} \mathsf {BEC} )=1-\varepsilon \); and the capacity of \({\varepsilon }\text {-}\mathsf {BSC} \) is \(I({\varepsilon }\text {-}\mathsf {BSC} )=1-h(\varepsilon )\), where \(h(x){\;{:}=\;}-x\lg (x)-(1-x)\lg (1-x)\) is the binary entropy.
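The two capacity formulas above transcribe directly into code (our own helper names):

```python
from math import log2

def h(x: float) -> float:
    """Binary entropy h(x) = -x lg(x) - (1-x) lg(1-x)."""
    return 0.0 if x in (0.0, 1.0) else -x * log2(x) - (1 - x) * log2(1 - x)

def capacity_bec(eps: float) -> float:
    return 1 - eps        # binary erasure channel

def capacity_bsc(eps: float) -> float:
    return 1 - h(eps)     # binary symmetric channel

print(capacity_bec(0.25))            # → 0.75
print(round(capacity_bsc(0.25), 4))  # → 0.1887
```

Note that a BSC with flip probability 1/2 has zero capacity, while the corresponding erasure channel still has capacity 1/2.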

\(\mathbf {(A,B)\rightarrow (A,C)}\). For joint distributions (A, B) and (A, C), if there exists f such that the distributions (A, f(B)) and (A, C) are identical, then we say \((A,B)\rightarrow (A,C)\). We say that \((A,B)\equiv (A,C)\) if \((A,B)\rightarrow (A,C)\) and \((A,C)\rightarrow (A,B)\).

\(\mathbf {(J, W_J)}\). A channel \((J,W_J)\) is defined as follows:

On input x, sample \(j\sim J(x)\) and sample \(z\sim W_j(x)\). Output (j, z). We say that a channel \(W\equiv (J,W_J)\) if the distributions \((X,W(X)) \equiv (X,J(X), W_{J(X)}(X))\) for all input distributions X.

A binary-input memoryless channel with transition probabilities (W|0) and (W|1) for input symbols 0 and 1, respectively, is called output-symmetric if the probabilities of these two distributions are permutations of each other.

If \(I(X;J(X))=0\) and all \(W_j\) channels are output symmetric, then the capacity of the channel W is \(I(W) = \mathop {{\mathbb E}} \nolimits _{j\sim J}[I(W_j)]\), where J is a fixed distribution over indices (say, J(0)).

Polar Codes. There are explicit rate-achieving polar codes with efficient encoding and decoding algorithms for \( {\varepsilon }\text {-} \mathsf {BEC} \) and \({\varepsilon }\text {-}\mathsf {BSC} \), for \(0\leqslant \varepsilon \leqslant 1\) [1, 2, 29].

Definition 3

(Discrete Memoryless Channel). A discrete channel is defined to be a system \(W : {\mathcal X} \rightarrow {\mathcal Y} \) between a sender and a receiver with sender (input) alphabet \({\mathcal X} \), receiver (output) alphabet \({\mathcal Y} \) and a probability transition matrix W(y|x) specifying the probability of obtaining output \(y \in {\mathcal Y} \) conditioned on input \(x \in {\mathcal X} \). The channel is said to be memoryless if the output distribution depends only on the current input and is conditionally independent of previous channel inputs and outputs.

Imported Theorem 1

(Efficient Polar Codes [29]). There is an absolute constant \(\mu <\infty \) such that the following holds. Let W be a binary-input memoryless output-symmetric channel with capacity I(W). Then there exists \(a_W < \infty \) such that for all \(\varepsilon >0\) and all powers of two \(N \geqslant a_W/\varepsilon ^\mu \), there exists a deterministic \(\mathsf {poly} (N)\) time construction of a binary linear code of block length N and rate at least \(I(W)-\varepsilon \) and a deterministic \(N\cdot \mathsf {poly} (\log N)\) decoding algorithm for the code with block error probability at most \(2^{-N^{0.49}}\) for communication over W.

Leftover Hash Lemma. The min-entropy of a discrete random variable X is defined to be \(H_\infty (X) = -\log \max _{x\in \mathsf {Supp} (X)} \mathop {\mathsf {P}} [X=x]\). For a joint distribution (AB), the average min-entropy of A w.r.t. B is defined as \(\tilde{H}_\infty (A|B) = -\log (\mathop {{\mathbb E}} _{b\sim B} \left[ 2^{-H_\infty (A|B=b)}\right] ) \).

Imported Lemma 1

(Generalized Leftover Hash Lemma (LHL) [24]). Let \(\{H_x : {\{0,1\}} ^n \rightarrow {\{0,1\}} ^{\ell }\}_{x \in X}\) be a family of universal hash functions. Then, for any joint distribution (W, I): \(\mathsf {SD} \left( {(H_X(W), X, I)},{({\mathcal U} _{\ell }, X, I)}\right) \leqslant \frac{1}{2}\sqrt{2^{-\tilde{H}_{\infty }(W|I)}2^{\ell }}.\)
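Numerically, the lemma's bound is easy to evaluate (an illustrative helper of ours): extracting \(\ell \) bits from a source with average min-entropy m leaves statistical distance at most \(\frac{1}{2}\sqrt{2^{\ell -m}}\).

```python
def lhl_distance_bound(min_entropy: float, out_len: int) -> float:
    # 0.5 * sqrt(2^(out_len - min_entropy)) from the generalized LHL.
    return 0.5 * 2.0 ** ((out_len - min_entropy) / 2)

# Hashing a source with 128 bits of average min-entropy down to 64 bits
# leaves the output within 2^-33 of uniform (jointly with the hash seed):
print(lhl_distance_bound(128, 64) == 2.0 ** -33)  # → True
```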

Weak Converse of Shannon’s Channel Coding Theorem. Let \(W^{\otimes N}\) denote N independent instances of a channel W with input alphabet \({\{0,1\}} \). Let the capacity of the channel W be C, for a constant \(C > 0\). Let \(\mathcal {C} \subseteq {\{0,1\}} ^N\) be a code of rate \(R \in (0,1)\) with \(R > C\). Then, if the sender transmits a random codeword over \(W^{\otimes N}\), the probability of error of the receiver in predicting the transmitted codeword is \(P_e \geqslant 1 - \frac{1}{NR} - \frac{C}{R}\).
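For instance (an illustrative calculation of ours, not a figure from the paper), coding at rate \(R = 0.6\) over \(N = 1000\) uses of a channel of capacity \(C = 0.5\) already forces a constant decoding-error probability:

```python
def weak_converse_error_lb(capacity: float, rate: float, n: int) -> float:
    # P_e >= 1 - 1/(n*rate) - capacity/rate, for coding at rate > capacity.
    assert rate > capacity
    return 1 - 1 / (n * rate) - capacity / rate

print(round(weak_converse_error_lb(0.5, 0.6, 1000), 3))  # → 0.165
```

It is exactly this guaranteed residual error, amplified via fuzzy extractors, that erases the non-chosen message in the protocols of Sect. 1.2.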

2.3 Chernoff-Hoeffding Bound for Hypergeometric Distribution

Imported Theorem 2

(Multiplicative Chernoff Bound for Binomial Random Variables [13, 30]). Let \(X_1, X_2, \ldots X_n\) be independent random variables taking values in [0, 1]. Let \(X = \sum _{i \in [n]} X_i\), and let \(\mu = {\mathbb E} [X]\) denote the expected value of X. Then, for any \(\delta >0\), the following hold.

  • \(\mathrm {Pr} [X > (1+\delta )\mu ] < \exp \left( -n\mathsf {D} _{\mathsf {KL}}\left( {\mu (1+\delta )}\Vert {\mu }\right) \right) \).

  • \(\mathrm {Pr} [X < (1-\delta )\mu ] < \exp \left( -n\mathsf {D} _{\mathsf {KL}}\left( {\mu (1-\delta )}\Vert {\mu }\right) \right) \).
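The upper-tail bound can be checked against the exact binomial tail. A sketch, reading \(\mathsf {D} _{\mathsf {KL}}\left( {\mu (1+\delta )}\Vert {\mu }\right) \) as a divergence between Bernoulli parameters (so the per-trial mean p plays the role of \(\mu \) below):

```python
import math

def kl_bernoulli(a: float, b: float) -> float:
    """D_KL(Ber(a) || Ber(b)) in nats."""
    return a * math.log(a / b) + (1 - a) * math.log((1 - a) / (1 - b))

def binom_tail_above(n: int, p: float, k: int) -> float:
    """Exact Pr[Bin(n, p) > k]."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k + 1, n + 1))

n, p, delta = 200, 0.3, 0.25
tail  = binom_tail_above(n, p, math.floor(n * p * (1 + delta)))
bound = math.exp(-n * kl_bernoulli(p * (1 + delta), p))
# the exact tail is dominated by the Chernoff bound
```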

Imported Theorem 3

(Multiplicative Chernoff Bound for Hypergeometric Random Variables [14, 30]). If X is a random variable with hypergeometric distribution, then it satisfies the Chernoff bounds given in Imported Theorem 2.

2.4 Constant Rate OT Generation

Imported Theorem 4

([32]). Let \(\pi \) be a protocol which UC-securely realizes \({\mathcal F} _{\mathsf {OT}}\) in the f-hybrid with simulation error \(o(1)\). Then there exists a protocol \(\rho \) which UC-securely realizes \({\mathcal F} _{\mathsf {OT}} ^{\otimes m}\) in the \(f^{\otimes n}\)-hybrid with simulation error \(\mathsf {negl} ({\kappa })\), such that \(n=\mathsf {poly} ({\kappa })\) and \(m=\varTheta (n)\).

3 Binary Symmetric Channels

3.1 Channel Decomposition

In an \((\alpha , \beta )\text {-}\mathsf {BSC}\), the capacity of each channel invocation in the adversarial receiver case is higher than the capacity when the receiver is honest. Despite this bottleneck, our aim is to (non-interactively) synthesize n new noisy channels such that the highest capacity of these channels when interacting with an honest receiver surpasses the capacity of at least one channel obtained by any adversarial receiver. Intuitively, this is achieved by decomposing the original elastic noisy channel into sub-channels such that the sub-channels are “receiver identifiable.” Details are provided in the following paragraphs.

It is not evident how to directly decompose an elastic BSC into receiver identifiable sub-channels with the above property. So, we construct a different channel from BSC channels and, in turn, we decompose that channel.

Consider the channel \(C_\varepsilon \) (parameterized by \(\lambda \in {\mathbb N} \)) defined below. Given input bit b from the sender, pass \(b^\lambda \) through \(\left( {\varepsilon }\text {-}\mathsf {BSC} \right) ^{\otimes \lambda }\), i.e. \(\lambda \) independent copies of \({\varepsilon }\text {-}\mathsf {BSC} \), and provide the output string to the receiver. The receiver receives an output string \(\tilde{b}_{[\lambda ]}\in {\{0,1\}} ^\lambda \).

Let \(\mathsf {id} (s)\) represent the number of minority bits in \(s\in {\{0,1\}} ^\lambda \).Footnote 8 So, we have \(\mathsf {id} :{\{0,1\}} ^\lambda \rightarrow \left\{ 0,\cdots ,\left\lfloor {\lambda /2}\right\rfloor \right\} \). Define \(S_i\subseteq {\{0,1\}} ^\lambda \) as the set of all strings \(s\in {\{0,1\}} ^\lambda \) such that \(\mathsf {id} (s)=i\). Given an output string \(\tilde{b}_{[\lambda ]}\) of the channel \(C_\varepsilon \), we interpret it as the output of the \(\mathsf {id} (\tilde{b}_{[\lambda ]})\)-th sub-channel.

Now, note that the sub-channel which takes as input \(\{0^\lambda ,1^\lambda \}\) and outputs a string in \(S_i\) is (isomorphic to) an \({\varepsilon _i}\text {-}\mathsf {BSC} \) channel, for each \(i\in \left\{ 0,\ldots ,\left\lfloor {\lambda /2}\right\rfloor \right\} \), where:

$$\begin{aligned} \varepsilon _i := \frac{\varepsilon ^{\lambda -i}\cdot {(1-\varepsilon )}^i}{\varepsilon ^{\lambda -i}\cdot {(1-\varepsilon )}^i + (1-\varepsilon )^{\lambda -i} \cdot \varepsilon ^i} = \frac{\varepsilon ^{\lambda -2i}}{\varepsilon ^{\lambda -2i} + (1-\varepsilon )^{\lambda -2i}} \end{aligned}$$

Note that \(\varepsilon _i\) is an increasing function of i. The probability that the i-th sub-channel is stochastically selected by \(C_\varepsilon \) is:

$$ p_i(\varepsilon ) := { \left( \begin{matrix} {\lambda }\\ {i} \end{matrix}\right) } \left( \varepsilon ^{\lambda -i} (1-\varepsilon )^i + \varepsilon ^i(1-\varepsilon )^{\lambda -i}\right) $$

Now, intuitively, we have decomposed \(C_\varepsilon \), a channel synthesized from \({\varepsilon }\text {-}\mathsf {BSC} \), into a convex linear combination of receiver identifiable sub-channels. More concretely, we have shown that: \(C_\varepsilon \equiv \sum _{i=0}^{\left\lfloor {\lambda /2}\right\rfloor } p_i(\varepsilon )\cdot \left( {\varepsilon _i}\text {-}\mathsf {BSC} \right) \).
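This decomposition can be verified numerically. A sketch (odd \(\lambda \) keeps every output string strictly majority-0 or majority-1, so the \(p_i\) sum to one exactly; the helper names are ours):

```python
from math import comb

def p_i(e: float, lam: int, i: int) -> float:
    """Probability that C_eps routes to the i-th sub-channel."""
    return comb(lam, i) * (e**(lam - i) * (1 - e)**i + e**i * (1 - e)**(lam - i))

def eps_i(e: float, lam: int, i: int) -> float:
    """Crossover probability of the i-th sub-channel."""
    x = e**(lam - 2 * i)
    return x / (x + (1 - e)**(lam - 2 * i))

lam, e = 7, 1/6
ps = [p_i(e, lam, i) for i in range(lam // 2 + 1)]
assert abs(sum(ps) - 1.0) < 1e-12                    # convex combination
assert all(eps_i(e, lam, i) < eps_i(e, lam, i + 1)   # eps_i increasing in i
           for i in range(lam // 2))

# mixing the sub-channel errors recovers the majority-decoding
# error of (e-BSC)^{⊗lam}
maj = sum(comb(lam, k) * e**k * (1 - e)**(lam - k)
          for k in range(lam // 2 + 1, lam + 1))
mix = sum(p_i(e, lam, i) * eps_i(e, lam, i) for i in range(lam // 2 + 1))
assert abs(mix - maj) < 1e-12
```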

Now, for any \(0<\beta \leqslant \alpha <1/2\), we consider the \({(\alpha ,\beta )}\text {-}\mathsf {BSC} \) channel. Analogous to the channel \(C_\varepsilon \), we consider the channel \(C_{\alpha ,\beta }\). This is identical to the channel \(C_\varepsilon \) with \(\varepsilon =\alpha \) when the receiver is honest, and with \(\varepsilon =\beta \) when the receiver is adversarial (here \(\alpha _i\) and \(\beta _i\) denote \(\varepsilon _i\) evaluated at \(\varepsilon =\alpha \) and \(\varepsilon =\beta \) respectively). The maximum capacity of sub-channels in the honest receiver case is: \(C^*=1-h(\alpha _0)\), where \(h(x)=-x\lg (x)-(1-x)\lg (1-x)\) is the binary entropy function. The average capacity of sub-channels in the adversarial receiver case is:

$$\widetilde{C} =1 - \sum _{i=0}^{\left\lfloor {\lambda /2}\right\rfloor } p_i(\beta )\cdot h(\beta _i)$$

If we have \(C^* > \widetilde{C}\), then the best capacity obtainable from the \({\alpha }\text {-}\mathsf {BSC} \) exceeds the average malicious capacity obtainable from the \({\beta }\text {-}\mathsf {BSC} \). We set \(n=1/p_0(\alpha )\) and create n instantiations of the channel \(C_{\alpha ,\beta }\). Then one of the sub-channels in the honest receiver case has capacity \(C^*\), while the average capacity of sub-channels in the adversarial receiver case is \(\widetilde{C}\). So, out of the n sub-channels, there is one sub-channel in the honest receiver case which has capacity higher than some sub-channel in the adversarial receiver case.

The next question is: for what \((\alpha , \beta )\) does there exist a \(\lambda \) such that \(C^* > \widetilde{C}\)? In the following lemma, we show that, if \(\alpha <\ell (\beta ):=\left( 1+\left( 4\beta (1-\beta )\right) ^{-1/2}\right) ^{-1}\), then such a \(\lambda \) exists.

For \(\alpha =1/3\) and \(\beta =1/6\), Fig. 4 illustrates the receiver identifiable decomposition of \(C_{\alpha ,\beta }\) for increasing values of \(\lambda \) until \(C^* > \widetilde{C}\).
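For these parameters the crossover can be computed directly. A sketch (the helper names are ours; odd \(\lambda \) keeps the decomposition exact):

```python
from math import comb, log2

def h(p: float) -> float:
    """Binary entropy."""
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

def eps_i(e, lam, i):
    x = e**(lam - 2 * i)
    return x / (x + (1 - e)**(lam - 2 * i))

def p_i(e, lam, i):
    return comb(lam, i) * (e**(lam - i) * (1 - e)**i + e**i * (1 - e)**(lam - i))

def capacities(alpha, beta, lam):
    c_star  = 1 - h(eps_i(alpha, lam, 0))               # best honest sub-channel
    c_tilde = 1 - sum(p_i(beta, lam, i) * h(eps_i(beta, lam, i))
                      for i in range(lam // 2 + 1))     # average malicious sub-channel
    return c_star, c_tilde

alpha, beta = 1/3, 1/6
lam = next(l for l in range(1, 100, 2)                  # scan odd lam
           if capacities(alpha, beta, l)[0] > capacities(alpha, beta, l)[1])
# lam == 7 for these parameters
```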

Lemma 1

For constants \(0< \alpha <\ell (\beta ):=\left( 1+\left( 4\beta (1-\beta )\right) ^{-1/2}\right) ^{-1}\), given an \((\alpha , \beta )\text {-}\mathsf {BSC}\), there exists a constant \(\lambda \in {\mathbb N} \) such that it is possible for the receiver to sender-obliviously construct channels where the maximum capacity \(C^*\) of one sub-channel in the honest receiver case, over \({\alpha }\text {-}\mathsf {BSC} \), is greater than the average capacity \(\widetilde{C}\) of all sub-channels in the adversarial receiver case, over \({\beta }\text {-}\mathsf {BSC} \).

Proof

Consider an elastic binary symmetric channel \({(\alpha ,\beta )}\text {-}\mathsf {BSC} \). For a given value of \(\lambda \in {\mathbb N} \), define \(\pi :{\{0,1\}} \rightarrow {\{0,1\}} ^\lambda \) as \(\pi (b)=b^\lambda \) (i.e. \(\lambda \) repetitions of the bit b). Corresponding to this, we obtain channels \((V,\widehat{V})\) for the honest and adversarial receiver, respectively. Write \(S = \left\{ 0,\ldots ,\left\lfloor {\lambda /2}\right\rfloor \right\} \). We have \(C^* = 1-h(\alpha ^{{\left( \lambda \right) }} _0)\) and \(\widetilde{C} = 1 - \sum _{i\in S} p^{{\left( \lambda \right) }} _i(\beta )h(\beta ^{{\left( \lambda \right) }} _i)\). Define two functions: \(h^*(x^{{\left( \lambda \right) }}) := h(x^{{\left( \lambda \right) }} _0)\) and \(\tilde{h}(x^{{\left( \lambda \right) }}):=\sum _{i\in S}p^{{\left( \lambda \right) }} _i(x)h(x^{{\left( \lambda \right) }} _i)\). Note that \(C^* = 1-h^*(\alpha ^{{\left( \lambda \right) }})\) and \(\widetilde{C} = 1-\tilde{h}(\beta ^{{\left( \lambda \right) }})\). Consider the following manipulation, where the first inequality uses \(h(y) > 2y\) for \(y\in (0,1/2)\):

$$\begin{aligned} \tilde{h}(x^{{\left( \lambda \right) }})&= \sum _{i\in S} p^{{\left( \lambda \right) }} _i(x) h(x^{{\left( \lambda \right) }} _i) > 2\sum _{i\in S} p^{{\left( \lambda \right) }} _i(x) \cdot x^{{\left( \lambda \right) }} _i \\&= 2\sum _{i\in S} { \left( \begin{matrix} {\lambda }\\ {i} \end{matrix}\right) } x^i(1-x)^i \cdot x^{\lambda -2i} \geqslant \sum _{i\in S}{ \left( \begin{matrix} {\lambda }\\ {i} \end{matrix}\right) } x^{\lambda -i}(1-x)^i \end{aligned}$$

The final sum is the lower tail of a binomial distribution with success probability \((1-x)\) and mean \((1-x)\lambda \). Using the anti-concentration bound from [15]:

$$\begin{aligned} \tilde{h}(x^{{\left( \lambda \right) }})&> \frac{1}{\lambda ^2}\exp \left( -\lambda \mathsf {D} _{\mathsf {KL}}\left( {1/2}\Vert {x}\right) \right) \\&= h\left( h^{-1}\left( \frac{1}{\lambda ^2\exp \left( \lambda \mathsf {D} _{\mathsf {KL}}\left( {1/2}\Vert {x}\right) \right) }\right) \right) \end{aligned}$$

Next, we use the inequality \(h^{-1}(x) \geqslant x / \left( 2\log (6/x)\right) \) from [9]. Set \(t(x) = x / \left( 2\log (6/x)\right) \). This gives \(\tilde{h}(x^{{\left( \lambda \right) }})> h\left( t\left( \frac{1}{\lambda ^2\exp \left( \lambda \mathsf {D} _{\mathsf {KL}}\left( {1/2}\Vert {x}\right) \right) }\right) \right) \). For any \(x\in (0,1/2)\), consider \(\lambda \rightarrow \infty \). We analyze the behavior of \(t\left( \frac{1}{\lambda ^2\exp \left( \lambda \mathsf {D} _{\mathsf {KL}}\left( {1/2}\Vert {x}\right) \right) }\right) \).
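The inequality from [9] is easy to verify numerically. A sketch (we read log as \(\log _2\), and use that h is increasing on [0, 1/2], so \(h^{-1}(x) \geqslant t(x)\) iff \(h(t(x)) \leqslant x\)):

```python
from math import log2

def h(p: float) -> float:
    """Binary entropy."""
    return -p * log2(p) - (1 - p) * log2(1 - p)

def t(x: float) -> float:
    """Claimed lower bound on h^{-1}(x): x / (2*log2(6/x))."""
    return x / (2 * log2(6 / x))

for x in (0.01, 0.1, 0.3, 0.5, 0.9):
    assert 0 < t(x) <= 0.5
    assert h(t(x)) <= x        # equivalent to h^{-1}(x) >= t(x)
```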

Define a such that: \( t\left( \frac{1}{\lambda ^2\exp \left( \lambda \mathsf {D} _{\mathsf {KL}}\left( {1/2}\Vert {x}\right) \right) }\right) {=:} \frac{1}{1+\left( \frac{1}{a}-1\right) ^\lambda } = a^{{\left( \lambda \right) }} _0, \) so that \(\tilde{h}(x^{{\left( \lambda \right) }}) > h(a^{{\left( \lambda \right) }} _0) = h^*(a^{{\left( \lambda \right) }})\); note also the lower bound \( t\left( \frac{1}{\lambda ^2\exp \left( \lambda \mathsf {D} _{\mathsf {KL}}\left( {1/2}\Vert {x}\right) \right) }\right) \geqslant \frac{1}{\lambda ^3\exp \left( \lambda \mathsf {D} _{\mathsf {KL}}\left( {1/2}\Vert {x}\right) \right) \mathsf {polylog} (\lambda )} \). Observe that under these conditions, as \(\lambda \rightarrow \infty \), \(a \rightarrow a^*:=\frac{1}{1 + \exp \left( \mathsf {D} _{\mathsf {KL}}\left( {1/2}\Vert {x}\right) \right) } = \frac{1}{1+\frac{1}{\sqrt{4x(1-x)}}}\). Now for any fixed x and \(y<a^*\) (as defined above), for all sufficiently large \(\lambda \in {\mathbb N} \) we have \(\tilde{h}(x^{{\left( \lambda \right) }})>h^*(y^{{\left( \lambda \right) }})\).

This shows that for \(0<\beta \leqslant \alpha <\left( 1+\left( 4\beta (1-\beta )\right) ^{-1/2}\right) ^{-1}\), there exists a constant \(\lambda _{\alpha ,\beta }\) such that for \(\lambda \geqslant \lambda _{\alpha ,\beta }\) we have \(\tilde{h}(\beta ^{{\left( \lambda \right) }})>h^*(\alpha ^{{\left( \lambda \right) }})\), i.e. \(C^*>\widetilde{C}\). Furthermore, this bound is tight.
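The limit \(a^*\) indeed matches the threshold \(\ell \) from the lemma statement. A quick check (a sketch, with \(\mathsf {D} _{\mathsf {KL}}\) in nats to pair with exp):

```python
from math import exp, log, sqrt

def d_kl_half(x: float) -> float:
    """D_KL(Ber(1/2) || Ber(x)) in nats."""
    return 0.5 * log(0.5 / x) + 0.5 * log(0.5 / (1 - x))

for x in (0.05, 1/6, 0.25, 0.4):
    a_star = 1 / (1 + exp(d_kl_half(x)))          # limit from the proof
    ell    = 1 / (1 + 1 / sqrt(4 * x * (1 - x)))  # threshold in Lemma 1
    assert abs(a_star - ell) < 1e-12
```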

3.2 Semi-honest Completeness of \((\alpha , \beta )\text {-}\mathsf {BSC}\) for \(0< \beta \leqslant \alpha < \ell (\beta ) \)

Consider the channel \(V_\epsilon \) (parameterized by \(\lambda \in {\mathbb N} \)) which, on input a bit b, passes \(b^\lambda \) through \({(\epsilon \text {-}\mathsf {BSC})}^{\otimes \lambda }\); this is the channel \(C_\epsilon \) of Sect. 3.1. Then, for the channels \((V, \widehat{V})\) constructed by sending a \(\lambda \)-repetition code via an \((\alpha , \beta )\text {-}\mathsf {BSC}\), let \(C^* := \max _{j \in \mathsf {Supp}(J)}I(V_j)\) and \(\widetilde{C} := I(\widehat{V})\). We use Lemma 1 to compute \(\lambda _{\alpha , \beta }\) corresponding to \(\alpha , \beta \) where \(0< \beta \leqslant \alpha < \ell (\beta ) \), such that \(C^* > \widetilde{C}\), and use the capacity-inverting encoding \(\pi _{\alpha ,\beta }(b) = b^{\lambda _{\alpha ,\beta }}\). For ease of notation, we will use \(\lambda \) to represent \(\lambda _{\alpha ,\beta }\).

Let n be an integer such that \(n = \frac{1}{{\alpha ^\lambda + {(1-\alpha )}^\lambda } - \epsilon }\), for a suitably small constant \(\epsilon > 0\). Let \(\delta = \frac{{c^*}_{h}}{\tilde{c}_m} - 1\), where \(c^*_h\) and \(\tilde{c}_m\) denote the maximum honest capacity \(C^*\) and the average malicious capacity \(\widetilde{C}\), respectively. Pick a polar code of rational rate r where \({\tilde{c}_m}(1 + \delta /3)< r < {\tilde{c}_m}(1 + 2\delta /3)\), and block-length \({\kappa }/n\). Let \(\mathsf {enc, dec}\) denote the encoding and decoding algorithms of this polar code. Then, Fig. 5 gives a protocol to UC-securely realize n-choose-1 OT using an \((\alpha , \beta )\text {-}\mathsf {BSC}\), in the semi-honest setting.

Fig. 5. n-choose-1 bit OT from \((\alpha , \beta )\)-BSC for \(0< \beta \leqslant \alpha < \ell (\beta ) \).

Correctness. It is easy to see that the protocol correctly implements n-choose-1 oblivious transfer.

Lemma 2

For all \(0< \beta \leqslant \alpha < \ell (\beta ) \), for all \((x_1, x_2, \ldots x_n) \in {\{0,1\}} ^{n}\) and \(c \in [n]\), the output of \({\mathcal R} \) equals \(x_c\) with probability at least \((1 - 2^{-{\kappa } ^{0.4}})\).

Proof

When the sender and the receiver are both honest, the expected fraction of receiver outputs in \(\{0^\lambda , 1^\lambda \}\) is \({\alpha ^\lambda + {(1-\alpha )}^\lambda }\). Then, the probability that the receiver obtains less than a \(1/n = {{\alpha ^\lambda + {(1-\alpha )}^\lambda } - \epsilon }\) fraction of outputs in \(\{0^\lambda , 1^\lambda \}\) is at most \(2^{-\frac{\epsilon ^2{\kappa }}{\alpha ^{\lambda }+(1-\alpha )^\lambda }}\), by the Chernoff bound. Moreover, by Imported Theorem 1, the probability of a decoding error when each of the \({\kappa } \) codewords of block length \({\kappa }/n\) is sent at a rate a constant factor below capacity is at most \({\kappa } \cdot 2^{-({\kappa }/n)^{0.49}}\).

It is easy to see that, conditioned on the receiver obtaining at least a \(1/n = {{\alpha ^\lambda + {(1-\alpha )}^\lambda } - \epsilon }\) fraction of outputs in \(\{0^\lambda , 1^\lambda \}\) and no decoding error, the protocol is always correct. Thus, the output of \({\mathcal R} \) equals \(x_c\) with probability at least \((1 - 2^{-{\kappa } ^{0.4}})\).

Receiver Security. The semi-honest simulation strategy \(\mathsf {Sim}_{{\mathcal S}} \) is given in Fig. 6.

Fig. 6. Sender simulation strategy for n-choose-1 bit OT.

Lemma 3

The simulation error for the semi-honest sender is at most \(2^{-\frac{\epsilon ^2{\kappa }}{\alpha ^{\lambda }+(1-\alpha )^\lambda }}\).

Proof

The view of the sender is \(V_{{\mathcal S}}:= \{(x_1, x_2, \ldots x_n), b_{[{\kappa } ^2]}, S_1, S_2, \ldots S_n\}\).

First, the probability of abort in the real view is at most \(2^{-\frac{\epsilon ^2{\kappa }}{\alpha ^{\lambda }+(1-\alpha )^\lambda }}\). Note that the simulator never aborts. But, conditioned on the receiver not aborting, we argue that the simulated sender view is identical to the real view.

For each \(i \in [{\kappa } ^2]\), the event that \(\tilde{b}_{i, [\lambda ]} \in \{0^{\lambda }, 1^{\lambda }\}\) is i.i.d., over the randomness of the \((\alpha , \beta )\text {-}\mathsf {BSC}\) as well as the receiver. For any fixed size s such that \({\kappa } ^2/n \leqslant s \leqslant {\kappa } ^2\), in the view of the sender, the set I with \(|I| = s\) is a random subset of \([{\kappa } ^2]\) of size s, and \(S_c\) is a random subset of I of size \({\kappa } ^2/n\). The other sets form a random equal partition of \([{\kappa } ^2] \setminus S_c\), and thus all the sets are a random equal partition of \([{\kappa } ^2]\). Thus, in this case the simulation is perfect.

Thus, the simulation error is exactly equal to the probability of abort, which is at most \(2^{-\frac{\epsilon ^2{\kappa }}{\alpha ^{\lambda }+(1-\alpha )^\lambda }}\).

Sender Security. The semi-honest simulation strategy \(\mathsf {Sim}_{{\mathcal R}} \) is given in Fig. 7.

Fig. 7. Receiver simulation strategy for n-choose-1 bit OT.

Lemma 4

The simulation error for the semi-honest receiver is at most \(2^{-{\kappa } \delta /4}\).

Proof

The view of the receiver is \(V_{{\mathcal R}}:= \{c, \theta , \tilde{b}_{[{\kappa } ^2],[\lambda ]}, z_{[{\kappa } ^2],[\lambda ]}, r_{[n]}\}\). The values \(\tilde{b}_{[{\kappa } ^2],[\lambda ]}, z_{[{\kappa } ^2],[\lambda ]}\) are generated using the honest sender strategy. There is no abort from the sender side in either the \((\alpha , \beta )\text {-}\mathsf {BSC}\) hybrid or the simulated view.

Consider the channel \(S_c\), composed of \({\kappa } \) sub-channels of block-length \(({\kappa }/n)\), each of capacity at least \(c^*_h\). Recall that \(B \rightarrow Z \rightarrow \tilde{B}\), where \(B, Z, \tilde{B}\) are random variables denoting the sender input, leakage and receiver output respectively. Thus, the capacity of any sub-channel of \(S_c\) can only increase when the receiver obtains additional leakage. Hence, the capacity of each sub-channel of \(S_c\) is at least \(c^*_h = \tilde{c}_m (1+\delta )\), even when the semi-honest receiver changes the channel characteristic. The channels \(S_\ell \) for \(\ell \in [n] \setminus \{c\}\) are constructed by sampling sets of \({\kappa } \) sub-channels at random, without replacement, from the remaining set. Since the overall average capacity for the adversarial receiver (semi-honest, but able to change the channel characteristic) is at most \(\tilde{c}_m\), the average capacity of any sub-channel in this remaining set is at most \(\tilde{c}_m(n-1-\delta )/(n-1)\). Then, there are at least a constant fraction \((n-1-\delta )/(n-1)\) sub-channels in this remaining set, each with capacity at most \(\tilde{c}_m < r\).

Now, consider the event that there exists a channel \(S_\ell \) for \(\ell \in [n] \setminus \{c\}\), such that for more than \(({\kappa }- \sqrt{{\kappa }})\) sub-channels in \(S_\ell \), the sub-channel capacity is greater than \(\tilde{c}_m\). This event occurs with probability at most \(2^{-{\kappa }/3}\). We argue that, conditioned on this event not happening, the simulated view is \((n-1)2^{-{\kappa } \delta /3}\)-close to the receiver view in the \((\alpha , \beta )\text {-}\mathsf {BSC}\) hybrid.

For a channel with capacity c and a code of rate \(r>c\), the weak converse of Shannon’s channel coding theorem proves that the decoding error is at least \(1 - \frac{c}{r}\) (up to vanishing terms), and therefore the min-entropy is at least \(h_2(1 - \frac{c}{r})\). Then, an application of the Leftover Hash Lemma gives us that for a randomly chosen universal hash function h, if \(\sqrt{{\kappa }}\) sub-channels have constant min-entropy \({>}\delta /2\), the hash value is \(2^{-{\kappa } \delta /3}\)-close to uniform. Thus for all channels \(S_\ell \) where \(\ell \in [n] \setminus \{c\}\), the output \(r_\ell \) is \(2^{-{\kappa } \delta /3}\)-close to uniform. Moreover, \(r_c\) is computed using the honest sender strategy, so the random variable \(r_c\) is identical in the \((\alpha , \beta )\text {-}\mathsf {BSC}\) hybrid and simulated views. Thus, the total simulation error is at most \((n-1)2^{-{\kappa } \delta /3} + 2^{-{\kappa }/3} \leqslant n\cdot 2^{-{\kappa } \delta /3} < 2^{-{\kappa } \delta /4}\), for sufficiently large \({\kappa }\).

3.3 Special-Malicious Completeness of \((\alpha , \beta )\text {-}\mathsf {BSC}\) for \(0< \beta \leqslant \alpha < \ell (\beta ) \)

In fact, it is not difficult to prove that the protocol in Fig. 5 yields \((n, 1, n-1)\) OT in a special-malicious setting. In this setting, the receiver is allowed to behave maliciously, whereas the sender must (semi-)honestly send a repetition code in the first step of the protocol, and after this step the sender is allowed to behave maliciously. Please refer to the full version for a formal proof.

Fig. 8. UC-secure \({\mathcal F} _{\mathsf {com}}\) from \((\alpha , \beta )\)-BSC for \(0< \beta \leqslant \alpha < 1\).

4 Full Malicious Completeness of Binary Symmetric Channels

4.1 \({\mathcal F} _{\mathsf {com}}\) from \((\alpha , \beta )\)-BSC for \(0< \beta \leqslant \alpha < 1/2\)

The protocol is presented in Fig. 8, in terms of a polar code \(\mathcal {C}\) over the binary alphabet, with block-length \({\kappa } \), rate \(1 - o(1)\) and minimum distance \(\omega ({\kappa } ^{4/5})\).

Intuitively, the sender picks a codeword from the appropriate code and sends a 2-repetition of the codeword over the BSC to the receiver. The commitment is statistically hiding because the capacity of the receiver's channel is less than the rate of the code, and therefore there is a constant prediction error for each codeword \(\mathbf {c} _i\) for \(i \in [{\kappa } ]\). The commitment is statistically binding because the sender cannot flip too many bits, or report too many ‘bad’ indices to the receiver: if he does, he is caught with overwhelming probability, and if he flips only a few bits, the minimum distance of the code ensures they still decode, and hence hash, to the same value.

Correctness. For honest sender strategy, using a Chernoff bound, it is possible to show that the sizes of \(I_1\) and \(I_2\) are bounded by \({(1-\alpha )}^2({\kappa } +{\kappa } ^{2/3})\) and \(2\alpha (1-\alpha )({\kappa } +{\kappa } ^{2/3})\) respectively, with probability at least \(1 - 2\cdot 2^{-{\kappa }/3}\). Thus, when \({\mathcal S} \) and \({\mathcal R} \) are both honest, \({\mathcal R} \) accepts \(\mathsf {Reveal}(\mathsf {Commit}(b))\) for any \(b \in {\{0,1\}} \) with probability at least \(1 - 2^{-{\kappa }/4}\).

Receiver Security (Statistical Binding/Extractability). It suffices to consider a dummy sender \({\mathcal S} \) and malicious environment \({\mathcal Z} _{\mathcal S} \), such that the dummy sender forwards all messages from \({\mathcal Z} _{\mathcal S} \) to the honest receiver/simulator, and vice-versa.

Without loss of generality, the simulation strategy \(\mathsf {Sim}_{{\mathcal S}} \) can be viewed as interacting directly with \({\mathcal Z} _{\mathcal S} \). \(\mathsf {Sim}_{{\mathcal S}} \) is described in Fig. 9.

Fig. 9. Sender simulation strategy for \({\mathcal F} _{\mathsf {com}}\).

Lemma 5

The simulation error for the malicious sender is at most \(2^{-{\kappa } ^{0.5}}\).

Proof

First, note that both the real and ideal views reject with probability 1 when \(\mathbf {c} '_i\) is not a valid codeword, for any \(i \in [{\kappa } ]\). Next, if \(|I_{i,1}| > 2{\kappa } ^{2/3}\) or \(|I_{i,2}| > 2{\kappa } ^{2/3}\), then the real view rejects with probability at least \((1 - 2^{-{\kappa } ^{2/3}})\), whereas the ideal view always rejects.

Conditioned on the receiver not rejecting, it remains to argue that the bit \(b'\) extracted by the simulator (and later output to the receiver) is distributed identically in the hybrid and ideal worlds. Conditioned on not rejecting, for each \(i \in [{\kappa } ]\), the distance between \(\mathbf {c} '_i\) and \(\mathbf {c} _i\) is at most \(|I_{i,1}| + |I_{i,2}| \leqslant 4{\kappa } ^{2/3}\). Then, because the code has minimum distance \(\omega ({\kappa } ^{4/5})\), the nearest codeword \(\tilde{\mathbf {c}}_i\) to \(\mathbf {c} _i\) is actually \(\mathbf {c} '_i\) itself. Therefore, the bit \(b' = y \oplus h(\tilde{\mathbf {c}}_1, \tilde{\mathbf {c}}_2, \ldots \tilde{\mathbf {c}}_{\kappa }) = y \oplus h(\mathbf {c} '_{1}, \mathbf {c} '_{2}, \ldots \mathbf {c} '_{{\kappa }})\) is distributed identically in the hybrid and ideal worlds in this case.

Thus the simulation error is at most \(2\cdot 2^{-{\kappa } ^{2/3}} < 2^{-{\kappa } ^{0.5}}\).

Sender Security (Statistical Hiding/Equivocability). It suffices to consider a dummy receiver \({\mathcal R} \) and malicious environment \({\mathcal Z} _{\mathcal R} \), such that the dummy receiver forwards all messages from \({\mathcal Z} _{\mathcal R} \) to the honest sender/simulator, and vice-versa.

Without loss of generality, the simulation strategy \(\mathsf {Sim}_{{\mathcal R}} \) can be viewed as interacting directly with \({\mathcal Z} _{\mathcal R} \). \(\mathsf {Sim}_{{\mathcal R}} \) is described in Fig. 10.

Fig. 10. Receiver simulation strategy for \({\mathcal F} _{\mathsf {com}}\).

Lemma 6

The simulation error for the malicious receiver is at most \(2\cdot 2^{-{\kappa }}\).

Proof

For all \(i \in [{\kappa } ]\) and honestly generated \(\mathbf {c} _i\), a constant fraction \(2\beta (1-\beta )\) of the repeated bit-pairs in \(\tilde{\mathbf {c}}_{i,[2]}\) arrive as 01 or 10, which count as erasures. Thus, the capacity of each such channel is at most \(1 - 2\beta (1-\beta )\). Since the rate of the code sent over the channel is \(1 - o(1)\), the entropy in the received string is at least \(1 - \frac{1 - 2\beta (1-\beta )}{1 - o(1)} \approx 2\beta (1-\beta )\) per bit. Therefore, via the Leftover Hash Lemma, \(h(\mathbf {c} _1, \mathbf {c} _2, \ldots \mathbf {c} _{\kappa })\) is \(2^{-{\kappa }}\)-close to uniform, and therefore y is \(2^{-{\kappa }}\)-close to uniform.
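The \(2\beta (1-\beta )\) erasure fraction can be checked by simulation. A sketch with a seeded RNG (not the protocol code itself; the helper name is ours):

```python
import random

def send_twice(bits, beta, rng):
    """Transmit each bit twice over independent beta-BSC uses; return received pairs."""
    flip = lambda b: b ^ (rng.random() < beta)
    return [(flip(b), flip(b)) for b in bits]

rng = random.Random(7)
beta, n = 1/6, 200_000
pairs = send_twice([rng.randrange(2) for _ in range(n)], beta, rng)
# pairs received as 01 or 10 reveal nothing about the sent bit: erasures
erasure_frac = sum(p in ((0, 1), (1, 0)) for p in pairs) / n
# expected fraction 2*beta*(1-beta) = 5/18, about 0.278
```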

Moreover, with probability at least \(1 - 2^{-{\kappa }}\), it is possible to efficiently find a different set of codewords \(\mathbf {c} '_i\) which hash to a different bit, for the same outputs \(\tilde{\mathbf {c}}_i\) and \(\tilde{\mathbf {z}}_i\) of the receiver.

4.2 Malicious Completeness of \((\alpha , \beta )\text {-}\mathsf {BSC}\) for \(0< \beta \leqslant \alpha < \ell (\beta ) \)

To make the protocol in Sect. 3.3 secure against a general malicious sender instead of only a special-malicious one, we must ensure correctness of the repetition code sent in Step 1 by the sender. To ensure this, we make use of the commitment protocol \({\mathcal F} _{\mathsf {com}}\).

Fig. 11. 2-choose-1 bit OT from \((\alpha , \beta )\text {-}\mathsf {BSC}\) for \(0< \beta \leqslant \alpha < \ell (\beta ) \).

Fig. 12. \({{\mathcal F} _{\mathsf {\widetilde{OT}}}}^{(\delta )} \) Functionality.

The functionality \({\mathcal F} _{\mathsf {com}}\) can be constructed from any \((\alpha , \beta )\text {-}\mathsf {BSC}\) as demonstrated in Sect. 4.1. The sender and receiver use \({\mathcal F} _{\mathsf {com}}\) to toss random coins, and then implement a cut-and-choose based protocol to implement Step 1 of the special-malicious protocol. The protocol is presented in Fig. 11 in the \({\mathcal F} _{\mathsf {com}}\) and \((\alpha , \beta )\text {-}\mathsf {BSC}\) hybrids. The protocol (including commitments) always uses the \((\alpha , \beta )\text {-}\mathsf {BSC}\) from the sender to the receiver. Since OT can be reversed, this demonstrates fixed-role completeness of \((\alpha , \beta )\text {-}\mathsf {BSC}\) for \(0< \beta \leqslant \alpha < \ell (\beta ) \). Step 1 of the protocol in Sect. 3.3 is modified as follows.

Analysis. The sender and receiver use \({\mathcal F} _{\mathsf {com}}\) to toss common random coins.

In step 1, the sender sends \(\lambda \)-repetitions of \({\kappa } ^{6}\) bits over the \((\alpha , \beta )\text {-}\mathsf {BSC}\). Additionally, he sends a commitment to each of these bits. Then, the parties pick a random subset, consisting of half of the values sent in step 1, and the sender is required to reveal these values.

Next, out of the remaining \({\kappa } ^{6}/2\) commitments, both parties pick a random subset of size \({\kappa } ^5\). Then, with probability at least \((1 - 1/{\kappa })\), this subset is such that at most \({\kappa } ^{3.1}\) of the values committed to do not match the repetition code (that is, the statistical check would have passed). If the sender and receiver pick a random set of \({\kappa } ^2\) values out of this set of \({\kappa } ^5\) values, then with probability at least \((1 - 1/{\kappa } ^{1.2})\), all of them are correct repetition codes.

Therefore, we obtain a statistical OT which fails with probability at most \(2/{\kappa } ^{1.2}\); we call such a functionality, which fails with vanishing probability, \({{\mathcal F} _{\mathsf {\widetilde{OT}}}}^{(\delta )}\), and it is formally described in Fig. 12. This functionality \({{\mathcal F} _{\mathsf {\widetilde{OT}}}}^{(\delta )}\) can then be compiled using [32, 34] to obtain constant-rate OT, following [40]. We provide the details of this compiler in the full version.

This completes the proof of Theorem 1.

5 Conclusion

It is an interesting open problem to explore whether our completeness results extend to parameters \(\alpha > \ell (\beta ) \), or if there are impossibility results for this setting.

Unfair channels [22] give a theoretical model, general enough to capture many realistic noisy channels. However, in light of strong impossibility results for the completeness of unfair channels, we weaken the adversarial model resulting in what we call elastic noisy channels.

We show that this model circumvents the impossibility results in the unfair channel setting, and show a wide range of parameters for which elastic channels can be used to securely realize OT. We believe our techniques are of independent interest and can be leveraged, along with other ideas, to close the gap between the known feasible and infeasible parameters in the unfair channel setting.

5.1 Sender-Elastic Channels Reduction to (Receiver-) Elastic Channels

We can reduce a sender-elastic BSC to a (receiver-) elastic BSC in the following manner. Suppose Alice is the sender and sends a bit b through the sender-elastic BSC. She receives leakage \(b\oplus E_1\), where \(E_1 \sim \text {Ber} (\beta )\). Bob, the receiver, obtains \(C = b \oplus E_1 \oplus E_2\), where \(E_2\sim \text {Ber} (\gamma )\) is such that \(\text {Ber} (\alpha ) \equiv \text {Ber} (\beta ) \oplus \text {Ber} (\gamma )\).

We reverse this channel using the following technique. Bob defines \(T := C \oplus R\), where R is a uniform random bit, and sends T to Alice. Alice now defines \(S := b\oplus T\). Now, interpret R as the bit sent and S as the received bit. Since \(S = R \oplus E_1 \oplus E_2\), this is an \({(\alpha ,\gamma )}\text {-}\mathsf {BSC} \) channel: an honest Alice sees noise \(E_1\oplus E_2 \sim \text {Ber} (\alpha )\), while an adversarial Alice, who remembers \(E_1\), faces only \(E_2 \sim \text {Ber} (\gamma )\). It can also be formally argued that this one-to-one transformation is tight.
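The reduction can be sanity-checked by simulation. A sketch with a seeded RNG (variable names follow the text; the \(\text {Ber} (\beta )\oplus \text {Ber} (\gamma )\equiv \text {Ber} (\alpha )\) convolution gives \(\alpha = \beta (1-\gamma )+\gamma (1-\beta )\)):

```python
import random

def reverse_once(b, beta, gamma, rng):
    """One use of the reversal, with Alice's input bit b."""
    e1 = int(rng.random() < beta)    # Alice's leakage noise: she sees b ^ e1
    e2 = int(rng.random() < gamma)   # residual noise toward Bob
    c = b ^ e1 ^ e2                  # Bob's channel output
    r = rng.randrange(2)             # Bob's fresh uniform bit: the new "sent" bit
    t = c ^ r                        # sent back to Alice in the clear
    s = b ^ t                        # Alice's new "received" bit
    return r, s, e1

rng = random.Random(1)
beta, gamma, n = 1/6, 1/4, 100_000
alpha = beta * (1 - gamma) + gamma * (1 - beta)   # = 1/3 here
err = err_with_leakage = 0
for _ in range(n):
    r, s, e1 = reverse_once(rng.randrange(2), beta, gamma, rng)
    err += (r != s)                        # honest Alice: noise Ber(alpha)
    err_with_leakage += (r != s ^ e1)      # Alice using her leakage e1: noise Ber(gamma)
```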