
1 Introduction

In recent years, there has been a rich line of work studying how to enhance the computational capabilities of probabilistic polynomial-time players by making assumptions on hardware [33]. Two types of hardware assumptions in particular have had tremendous impact on recent research: tamper-proof hardware tokens and physically unclonable functions (PUFs).

The tamper-proof hardware token model introduced by Katz [24] relies on the simple and well-accepted assumption that it is possible to physically protect a computing machine so that it can only be accessed as a black box, via oracle calls (think of smart cards, for example). Since its introduction, this model has been studied extensively and its power is now largely understood. Tamper-proof hardware tokens allow one to obtain strong security notions and very efficient constructions, in some cases without requiring computational assumptions. In particular, the even more challenging case of stateless tokens, initiated by [6], has been investigated further in [1, 10, 11, 14, 15, 19, 22, 23, 26].

1.1 Physically Unclonable Functions

Physically Unclonable Functions (PUFs) were introduced by Pappu et al. [28, 29], but their actual potential has been understood only in recent years (see Footnote 1). Growing excitement over such physical random oracles has generated various (and sometimes incompatible) interpretations of the actual features and formalizations of PUFs.

Very roughly, a PUF is an object that can be queried by translating an input into a specific physical stimulation, and then by translating the physical effects of the stimulation to an output through a measurement. The primary appealing properties of PUFs include: (1) constructing two PUFs with similar input-output behavior is believed to be impossible (i.e. unclonability), and (2) the output of a PUF on a given input is seemingly unpredictable, i.e., one cannot “learn” the behavior of an honestly-generated PUF on any specific input without actually querying the PUF on that input.

There is a lot of exciting ongoing research on concrete constructions of PUFs based on various technologies. As such, in this work a PUF can only be described in an abstract way, with the aim of establishing target properties for PUF designers.

However, when formally modeling a PUF, one might (incorrectly) assume that a PUF guarantees properties that unfortunately exceed what is achievable in real-world scenarios. For example, assuming that the output of a genuine PUF is purely random is clearly excessive, while relying on min-entropy is certainly a safer and more conservative assumption. Various papers have proposed different models and even attempted to unify them. The interested reader can refer to [2] for detailed discussions about PUF models and their connections to properties of actual PUFs. We stress that in this work we will consider the use of PUFs in the UC model of [5]. Informally, this means that we want to study protocols that can securely compose with other protocols that may be executing concurrently.

1.2 UC Security Based on Physically Unclonable Functions

Starting with the work of Brzuska et al. [4], a series of papers have explored UC-secure computation based on physically unclonable functions. The goal of this line of cryptographic research has been to build protocols secure in progressively stronger models.

The Trusted PUFs of Brzuska et al. [4]. Brzuska et al. [4] began the first general attempts to add PUFs to the simulation paradigm of secure computation. They allowed any player (malicious or honest) to create only well-formed PUFs. As already mentioned, the output of a well-formed PUF on any arbitrary input is typically assumed to have sufficient min-entropy. Furthermore, on being queried with the same input, a well-formed PUF can be assumed to always produce identical (or sufficiently close) outputs. Applying error-tolerant fuzzy extractors [9] to the output ensures that each invocation of the PUF generates a (non-programmable) random string that can be reproduced by querying the PUF again with the same input. Brzuska et al. demonstrated how to obtain unconditional UC secure computation for any functionality in this model.

The Malicious PUFs of Ostrovsky et al. [27]. Ostrovsky et al. [27] then showed that the constructions of [4] become insecure if the adversary can produce a malicious PUF that deviates from the behavior of an honest PUF. For instance, a malicious PUF could produce outputs according to a pseudo-random function rather than relying on physical phenomena, or it could simply refuse to answer a query. They also showed that it is possible to UC-securely compute any functionality using (potentially malicious) PUFs if one is willing to additionally make computational assumptions. They left open the problem of achieving unconditional UC-secure computation for any functionality using malicious PUFs.

Damgård and Scafuro [8] showed that unconditional UC secure commitments can be obtained even in the presence of malicious PUFs (see Footnote 2).

The Fully Malicious but Stateless PUFs of Dachman-Soled et al. [7]. More recently, it was shown by Dachman-Soled et al. [7] that unconditional UC security for general functionalities is impossible if the adversary is allowed to create malicious PUFs that can maintain state. They also gave a complementary feasibility result in an intermediate model where PUFs are allowed to be malicious, but are required to be stateless.

We note that the impossibility result of [7] crucially relies on (malicious) PUFs being able to maintain a priori unbounded state.

Thus, the impossibility seems interesting theoretically, but its impact on practical scenarios is unclear. In the real world, this result implies that unconditional UC secure computation of all functionalities is impossible in a model where an honest player is unable to distinguish maliciously created PUFs with gigantic memory from honest (and therefore completely stateless) PUFs. One could argue that this pushes the power of the adversary beyond the reach of current technology. On the other hand, the protocol of [7] breaks down completely if the adversary can generate a maliciously created PUF with even one bit of memory and pass it off as a stateless (honest) PUF. This gap forms the starting point for our work.

1.3 Our Contributions

The current state-of-the-art leaves open the following question:

Can we achieve UC-secure computation with malicious PUFs that are allowed to have a priori bounded state?

In the main contribution of this work, we answer this question in the affirmative. We show that not only is it possible to obtain UC-secure computation for any functionality, as proven in [27] under computational assumptions, but that this can be done with unconditional security, without relying on any computational assumptions. This brings us to our first main result, which we now state informally.

Informal Theorem 1

For any two party functionality \({\mathcal F} \), there exists a protocol \(\pi \) that unconditionally and UC-securely realizes \({\mathcal F} \) in the malicious bounded-stateful PUF model.

As our second contribution, we introduce a new adversarial model for PUF-based protocols. Here, in addition to allowing the adversary to generate malicious stateless PUFs, we also allow him to encapsulate other (honestly generated) PUFs inside his own (malicious, stateless) PUF, even without knowing the functionality of the inner PUFs. This allows the outer malicious PUF to make black-box (or oracle) calls to the inner PUFs that it encapsulates. In particular, the outer malicious PUF could answer honest queries by first making oracle calls to its inner PUFs, and generating its own output as a function of the output of the inner PUFs on these queries. An honest party interacting with such a malicious PUF need not be able to tell whether the PUF is malicious and possibly encapsulates other PUFs, or whether it is honest.

In this new adversarial model (see Footnote 3), we require all PUFs to be stateless. We will refer to this as the malicious encapsulated PUF model. It is interesting to note that all previously known protocols (even for limited functionalities such as commitments) suffer explicit attacks in this stronger malicious encapsulated (stateless) PUF model.

As our other main result, we develop techniques to obtain unconditional UC-secure computation in the malicious encapsulated PUF model.

Informal Theorem 2

For any two party functionality \({\mathcal F} \), there exists a protocol \(\pi \) that unconditionally and UC-securely realizes \({\mathcal F} \) in the malicious encapsulated (stateless) PUF model.

Table 1 compares our results with prior work. Our feasibility result in the malicious bounded-stateful PUF model and our feasibility result in the malicious encapsulated-stateless PUF model directly improve upon the works of [4, 7]. Indeed, each of our two results strengthens the power of the adversaries of [4, 7] in one meaningful and natural direction while still achieving the same unconditional results as [4, 7]. A natural question is whether our techniques for defeating malicious bounded-stateful PUFs can be composed with our techniques for defeating malicious encapsulated-stateless PUFs, to obtain unconditional UC-security for any functionality against adversaries that can construct malicious bounded-stateful encapsulated PUFs. While we do not a priori see any conceptual obstacle to obtaining such an even stronger feasibility result, the resulting construction would be extremely complex and highly tedious to analyze. Therefore, we defer such a stronger claim to future work, hoping that follow-up research will achieve a more direct and elegant construction.

Table 1. The symbol \(\checkmark \) (resp. \(\times \)) indicates that the construction satisfies (resp. does not satisfy) the corresponding security guarantee.

1.4 Our Techniques

The starting point for our constructions is the UC-secure OT protocol of [7], which itself builds upon the works of [4, 27]. We begin by giving a simplified description of the construction in [7].

Suppose a sender \({\mathcal S} \) with inputs \((m_0, m_1)\) and a receiver \({\mathcal R} \) with input bit b want to run a UC secure OT protocol in the malicious stateless PUF model. Then, \({\mathcal S} \) generates a PUF and sends it to the receiver. The receiver queries the PUF on a random challenge string c, records the output r, and then returns the PUF to \({\mathcal S} \). Then, the sender sends two random strings \((x_0, x_1)\) to the receiver. In turn, the receiver picks \(x_b\) and sends \(v = c \oplus x_b\) to the sender. The sender uses \(\mathsf {PUF}(v \oplus x_0)\) to mask his input \(m_0\) and \(\mathsf {PUF}(v \oplus x_1)\) to mask his input \(m_1\), and sends both masked values to the receiver. Here \(\mathsf {PUF}(\cdot )\) denotes the output of the PUF on the given input. Since \({\mathcal R} \) had to return the PUF before \((x_0, x_1)\) were revealed, with overwhelming probability, \({\mathcal R} \) only knows \(r = \mathsf {PUF}(v \oplus x_b)\), and can output one and only one of the masked sender inputs.
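To make the message flow concrete, the following Python sketch simulates an honest execution of this simplified protocol, modeling the PUF as a keyed hash so that its output is reproducible by whoever holds it. All names (IdealPUF, run_simplified_ot, N) are illustrative and not from the paper; a real PUF is a physical object, not code.

```python
import os, hashlib

N = 16  # challenge/response length in bytes (illustrative)

class IdealPUF:
    """Toy stand-in for an honest PUF: unpredictable but reproducible responses."""
    def __init__(self):
        self._key = os.urandom(32)          # models the unclonable physical randomness
    def eval(self, challenge: bytes) -> bytes:
        return hashlib.sha256(self._key + challenge).digest()[:N]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def run_simplified_ot(m0: bytes, m1: bytes, b: int) -> bytes:
    puf = IdealPUF()                        # sender creates PUF_s and ships it to the receiver
    c = os.urandom(N)                       # receiver queries PUF_s on a random challenge c,
    r = puf.eval(c)                         # records the response r, and returns PUF_s
    x0, x1 = os.urandom(N), os.urandom(N)   # sender then reveals random strings (x0, x1)
    v = xor(c, x1 if b else x0)             # receiver sends v = c XOR x_b
    y0 = xor(m0, puf.eval(xor(v, x0)))      # sender masks m0 with PUF(v XOR x0)
    y1 = xor(m1, puf.eval(xor(v, x1)))      # sender masks m1 with PUF(v XOR x1)
    return xor(y1 if b else y0, r)          # receiver unmasks only m_b, since r = PUF(v XOR x_b)

assert run_simplified_ot(b"A" * N, b"B" * N, 0) == b"A" * N
assert run_simplified_ot(b"A" * N, b"B" * N, 1) == b"B" * N
```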

Enhancing [7] in the Stateless PUF Model. Though this was a simplified overview of the protocol in [7], it helps us to explain a subtle assumption required in their simulation strategy against a malicious sender. In particular, the simulator against a malicious sender must return the PUF to the sender before the sender picks random messages \((x_0, x_1)\). However, it is evident that in order to extract both messages \((m_0, m_1)\), the simulator must know \((x_0 \oplus x_1)\), and in particular know the response of the PUF on challenges \((c, c \oplus x_0 \oplus x_1)\) for some known string c.

But the simulator only learns \((x_0, x_1)\) after sending the PUF back to \({\mathcal S} \). Thus, in order to successfully extract the input of \({\mathcal S} \), the simulator should have the ability to make these queries even after the PUF has been returned to the malicious sender. This means that the PUF is supposed to remain accessible and untouched even when it is again in the hands of its malicious creator. We believe this is a very strong assumption that clearly deviates from real scenarios where the state of a PUF can easily be changed (e.g., by damaging it).

Fig. 1. Protocol \(\varPi ^1\) for 2-choose-1 OT in the malicious stateless PUF model.

Our protocol in Fig. 1 gets rid of this strong assumption on the simulator: we give a new sender simulation strategy that does not need to query the PUF once it is back in the hands of the malicious sender \({\mathcal S} \). This is also a first step towards obtaining security against bounded-stateful PUFs. In the protocol of [7], if the PUF created by a malicious \({\mathcal S} \) is stateful, then \({\mathcal S} \), upon receiving the PUF back, can first change the state of the PUF (say, to output \(\bot \) everywhere), and only then reveal the values \((x_0, x_1)\). In this case, no simulation strategy will be able to extract the inputs of the sender.

We change the protocol in [7] by having \({\mathcal S} \) commit to the random values \((x_0, x_1)\) at the beginning of the protocol, using a UC-secure commitment scheme. These values are decommitted only after \({\mathcal R} \) returns the PUF to \({\mathcal S} \), so the scheme still remains UC-secure against a malicious receiver. Moreover, the simulator against a malicious sender can now use the straight-line extractor guaranteed by the UC-secure commitment scheme to extract the values \((x_0, x_1)\), and query the PUF on challenges of the form \((c, c \oplus x_0 \oplus x_1)\) for some string c. It then sets \(v = c \oplus x_0\) and sends it to \({\mathcal S} \). Now, the sender masks are \(\mathsf {PUF}(v \oplus x_0)\) and \(\mathsf {PUF}(v \oplus x_1)\), which are nothing but \(\mathsf {PUF}(c)\) and \(\mathsf {PUF}(c \oplus x_0 \oplus x_1)\), both of which were already known to the sender simulator before returning the PUF to \({\mathcal S} \). This simulation strategy works (with the simulator requiring only black-box access to the malicious PUF’s code) even if the PUF is later broken or its state is reset in any way. This protocol is described formally and proven secure in Sect. 3.
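The following sketch illustrates the sender-simulator strategy just described, reusing the toy IdealPUF, xor, and N from the previous listing. The extract_from_commitment and receive_masked_values callbacks are hypothetical placeholders for the straight-line extractor of the UC-secure commitment scheme and for the interaction with the malicious sender, respectively.

```python
import os

def simulate_against_sender(puf, extract_from_commitment, receive_masked_values):
    # 1. Straight-line extract (x0, x1) from the sender's UC commitment.
    x0, x1 = extract_from_commitment()
    # 2. While still holding PUF_s, query it on c and on c XOR x0 XOR x1 for a random c.
    c = os.urandom(N)
    r0 = puf.eval(c)
    r1 = puf.eval(xor(c, xor(x0, x1)))
    # 3. Return PUF_s to the sender and send v = c XOR x0; no later PUF access is needed,
    #    even if the sender now resets or destroys the PUF.
    v = xor(c, x0)
    y0, y1 = receive_masked_values(v)        # sender replies with its two masked inputs
    # 4. Unmask both: PUF(v XOR x0) = PUF(c) = r0 and PUF(v XOR x1) = PUF(c XOR x0 XOR x1) = r1.
    m0, m1 = xor(y0, r0), xor(y1, r1)
    return m0, m1                            # both sender inputs, handed to the ideal functionality
```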

UC Security with Bounded Stateful PUFs. A malicious PUF is allowed to maintain state, and can generate outputs (including \(\bot \)) as a function of not only the current query but also the previous queries that it received as input. This allows for some attacks on the protocol we just described, but they can be prevented by carefully interspersing coin-tossing with the protocol. Please see Sect. 4 for more details.

A stateful PUF created by the sender can also record information about the queries made by the receiver, and replay this information to a malicious sender when he inputs a secret challenge. Indeed, for PUFs with unbounded state, it is this ability to record queries that makes oblivious transfer impossible. However, we only consider PUFs that have a-priori bounded state. In this case, it is possible to design a protocol, parameterized by an upper bound on the size of the state of the PUF, that in effect exhausts the possible state space of such a malicious PUF. Our protocol then carefully uses this additional entropy to mask the inputs of the honest party.

More specifically, we repeat the OT protocol described before (with an additional coin-tossing phase) K times in parallel, using the same (possibly malicious, stateful) \(\mathsf {PUF}\), for sufficiently large \(K > \ell \) (where \(\ell \) denotes the upper bound on the state of the PUF). At this point, what we require essentially boils down to a one-sided malicious oblivious transfer extractor. This is a gadget that would yield a single OT from K leaky OTs, such that the single OT remains secure even when a malicious sender can ask for \(\ell \) bits of universal leakage across all these OTs. This setting is incomparable to previously studied OT extractors [17, 21] because: (a) we require a protocol that is secure against malicious (not just semi-honest) adversaries, and (b) the system has only one-sided leakage, i.e., a corrupt sender can request \(\ell \) bits of leakage, but a corrupt receiver does not obtain any leakage at all.

For simplicity, we consider the setting of one-sided receiver leakage (instead of sender leakage). It is possible to consider this because OT is reversible. To protect against a malicious receiver that may obtain \(\ell \) bits of universal leakage, the sender picks different random inputs for each OT execution, and then uses a strong randomness extractor to extract min-entropy and mask his inputs. We show that this in fact suffices to statistically hide the input messages of the sender. Please see Sect. 5 for a more detailed overview and construction.

UC Security with Encapsulated PUFs. We demonstrate the feasibility of UC secure computation, in a model where a party may (maliciously) encapsulate one or more PUFs that it obtained from honest parties, inside a malicious stateless PUF of its choice. We stress that our protocol itself does not require honest parties to encapsulate PUFs within each other.

To describe our techniques, we begin by revisiting the protocol in Fig. 1, described at the beginning of this overview. Suppose parties could maliciously encapsulate some honest PUFs inside a malicious PUF. Then a malicious receiver in this protocol, when it is supposed to return the sender’s PUF \(\mathsf {PUF}_s\), could instead return a different malicious PUF \(\widehat{\mathsf {PUF}}_s\). In this case, the receiver would easily learn both inputs of the sender. But as correctly pointed out in prior work [7, 8], the sender can deflect such attacks by probing and recording the output of \(\mathsf {PUF}_s\) on some random input(s) (known as Test Queries) before sending it to the receiver. Later, the sender can check whether \(\widehat{\mathsf {PUF}}_s\) answers all Test Queries correctly.

A malicious receiver, however, may create \(\widehat{\mathsf {PUF}}_s\) encapsulating \(\mathsf {PUF}_s\), programmed to forward most outer queries to \(\mathsf {PUF}_s\) and echo its output externally, in order to pass the sender’s test. At the same time, \(\widehat{\mathsf {PUF}}_s\) may have its own malicious procedure for evaluating some of the other external queries. In particular, the “unpredictability” of \(\widehat{\mathsf {PUF}}_s\) may break down completely on these queries.

It turns out that the security of the sender in the basic OT protocol of Fig. 1 hinges on the unpredictability of the output of \(\mathsf {PUF}_s\) (in this situation, \(\widehat{\mathsf {PUF}_s}\)) on a single “special challenge query”, which we will denote by \(\mathfrak {s}\). It is completely feasible for a receiver to create a malicious encapsulating PUF \(\widehat{\mathsf {PUF}_s}\) that passes the sender tests, and yet whose output on this special query \(\mathfrak {s}\) is completely known to the receiver, thereby breaking sender security.

We overcome this issue by ensuring that \(\mathfrak {s}\) is chosen using a coin toss, and is completely unknown to the receiver until after he has sent \(\widehat{\mathsf {PUF}}_s\) (possibly a malicious encapsulating PUF) back to the sender. Intuitively, this means that \(\widehat{\mathsf {PUF}}_s\) will either not pass the sender tests, or will, with high probability, forward the query \(\mathfrak {s}\) to the inner PUF and echo its output (thereby ensuring that the output of the PUF on input \(\mathfrak {s}\) is unpredictable for the receiver). An additional subtlety that arises is that the receiver might use an incorrect \(\mathfrak {s}\) in the protocol (instead of using the output of the coin toss): the receiver is forced to use the correct \(\mathfrak {s}\) via a special cut-and-choose mechanism. For a more detailed overview and construction, please see Sect. 6.

UC-Secure Commitments Against Encapsulation Attacks. Finally, UC-secure commitments against encapsulation attacks play a crucial role in our UC-secure OT protocol in the encapsulation model. However, the basic commitment protocol of [8] is insecure in this stronger model, and we therefore modify the protocol of [8] to achieve UC-security in this scenario. In a nutshell, this is done by having the receiver send an additional PUF at the end of the protocol, and forcing any malicious committer to query this additional PUF on the committer’s input bit. We then show that even an encapsulating (malicious) committer will have to carry out this step honestly in order to complete the commit phase. A simulator can then extract the adversary’s committed value by observing the queries of the malicious committer to this additional PUF. We illustrate in detail how prior constructions of UC-secure commitments fail in the PUF encapsulation model in Sect. 7. Our UC-secure commitment protocol in the encapsulated malicious (stateless) PUF model is also described in Sect. 7.

1.5 Organization

The rest of this paper is organized as follows. In Sect. 2, we discuss PUFs and other preliminaries relevant to our protocols. In Sect. 3, we describe an improved version of the protocol in [7], in the stateless PUF model. In Sects. 4 and 5, we boost this protocol to obtain security in the bounded stateful PUF model. In Sects. 6 and 7, we discuss protocols that are secure in the PUF encapsulation model. In Appendix A, we discuss the formal modelling of our PUFs. The complete models and proofs that could not be included in this version owing to space restrictions can be found in the full version of the paper.

2 Preliminaries

2.1 Physically Unclonable Functions

A PUF is a noisy physical source of randomness. The randomness property comes from an uncontrollable manufacturing process. A PUF is evaluated with a physical stimulus, called the challenge, and its physical output, called the response, is measured. Since the processes involved are physical, the function implemented by a PUF cannot (necessarily) be modeled as a mathematical function, nor can it be considered computable in PPT. Moreover, the output of a PUF is noisy: querying a PUF twice with the same challenge could yield distinct responses within a small Hamming distance of each other. Furthermore, the response need not be random-looking; rather, it is a string drawn from a distribution with high min-entropy. Prior work has shown that, using fuzzy extractors, one can eliminate the noisiness of the PUF and make its output uniformly random. For simplicity, we assume this in the body of the paper and give a detailed description in the full version.
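As a rough illustration of the fuzzy-extractor step mentioned above, the toy code-offset construction below (with a repetition code) turns a noisy PUF response into a reproducible string together with public helper data. The parameters and helper names are ours, and a repetition code is used only for illustration; the constructions of [9] use proper error-correcting codes and randomness extractors.

```python
import os, hashlib

T = 5  # each data bit is repeated T times; majority decoding corrects up to T//2 flips per block

def gen(response_bits):
    """Gen: from a PUF response w, derive a key R and public helper data P."""
    k = len(response_bits) // T
    data_bits = [byte & 1 for byte in os.urandom(k)]                # k fresh random bits
    codeword = [bit for bit in data_bits for _ in range(T)]         # repetition-code encoding
    helper = [w ^ c for w, c in zip(response_bits, codeword)]       # P = w XOR codeword
    R = hashlib.sha256(bytes(data_bits)).digest()                   # hash stands in for an extractor
    return R, helper

def rep(noisy_bits, helper):
    """Rep: reproduce R from a slightly different response w' and the helper data P."""
    offset = [w ^ p for w, p in zip(noisy_bits, helper)]            # noisy codeword w' XOR P
    decoded = [int(sum(offset[i:i + T]) > T // 2)                   # majority-decode each block
               for i in range(0, len(offset), T)]
    return hashlib.sha256(bytes(decoded)).digest()

w = [byte & 1 for byte in os.urandom(50)]                           # "PUF response" (50 bits)
R, P = gen(w)
w_noisy = list(w); w_noisy[7] ^= 1                                  # one bit of measurement noise
assert rep(w_noisy, P) == R                                         # the same key is reproduced
```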

A PUF-family is a pair of (not necessarily efficient) algorithms \(\mathsf{Sample} \) and \(\mathsf{Eval} \). Algorithm \(\mathsf{Sample} \) abstracts the PUF fabrication process and works as follows. On input the security parameter, it outputs a PUF-index \(\mathsf{id} \) from the PUF-family satisfying the security properties (that we define soon) according to the security parameter. Algorithm \(\mathsf{Eval} \) abstracts the PUF-evaluation process. On input a challenge \(q \), it evaluates the PUF on \(q \) and outputs the response \(a \) of length \(rg\), where \(rg\) denotes the range. Without loss of generality, we assume that the challenge space of a PUF is the full set of strings of a certain length.

Security of PUFs. Following [4], we consider only the two main security properties of PUFs: unclonability and unpredictability. Informally, unpredictability means that the output of the PUF is statistically indistinguishable from a uniform random string. Formally, unpredictability is modeled via an entropy condition on the PUF distribution: given that a PUF has been measured on a polynomial number of challenges, the response of the PUF evaluated on a new challenge still has a significant amount of entropy. For simplicity, in the rest of this paper we treat a PUF as unpredictable if its output on any given input appears uniformly random.

Informally, unclonability states that in a protocol consisting of several parties, only the party in whose possession the PUF is can evaluate the PUF. When a party sends a PUF to a different party, it can no longer evaluate the PUF until it gets the PUF back. Thus, a party not in possession of a PUF cannot predict the output of the PUF on an input for which it did not query the PUF, unless it maliciously created the PUF. A formal definition of unclonability is given in the full version of this paper.

A PUF can be modeled as an ideal functionality \({\mathcal F} _{\mathsf {PUF}}\), which mimics the behavior of the PUF in the real world. We formally define ideal functionalities corresponding to honestly generated and various kinds of maliciously generated PUFs in Appendix A. We summarize these here: the model for honestly generated PUFs and for malicious stateless/stateful PUFs has been explored in prior work [7, 27], and we introduce the model for encapsulated PUFs.

  • An honestly generated PUF can be created according to a sampling algorithm \(\mathsf{Sample} \), and evaluated honestly using an evaluation algorithm \(\mathsf{Eval} \). The output of an honestly generated PUF is unpredictable even to the party that created it, i.e., even the creator cannot predict the output of an honestly generated PUF on any given input without querying the PUF on that input.

  • A malicious stateless PUF, on the other hand, can be created by the adversary substituting an \(\mathsf{Eval} _{mal}\) procedure of his choice for the honest \(\mathsf{Eval} \) procedure. Whenever an (honest) party in possession of this PUF evaluates the PUF, it runs the stateless procedure \(\mathsf{Eval} _{mal}(c)\) instead of \(\mathsf{Eval} (c)\) (and cannot distinguish \(\mathsf{Eval} _{mal}(c)\) from \(\mathsf{Eval} (c)\) unless they are distinguishable with black-box access to the PUF). The output of such a PUF cannot depend on previous queries; moreover, an adversary that created the PUF but does not possess it cannot learn the queries made to the PUF while it was out of its possession. We adapt the definitions from [27], where \(\mathsf{Eval} _{mal}\) is a polynomial-time algorithm with oracle access to \(\mathsf{Eval} \). This is done to model the fact that the \(\mathsf{Eval} _{mal}\) algorithm can access an (honest) source of randomness \(\mathsf{Eval} \), and can arbitrarily modify its output using any polynomial-time strategy.

  • A malicious stateful PUF can be created by the adversary substituting a stateful \(\mathsf{Eval} _{mal}\) procedure of his choice for the honest \(\mathsf{Eval} \) procedure. Whenever a party in possession of this PUF evaluates the PUF, it runs the stateful procedure \(\mathsf{Eval} _{mal}(c)\) instead of \(\mathsf{Eval} (c)\). Thus, the output of a stateful malicious PUF can possibly depend on previous queries; moreover, an adversary that created a PUF can learn previous queries made to the \(\mathsf {PUF}\) by querying it, say, on a secret input. Here \(\mathsf{Eval} _{mal}\) is a polynomial-time stateful Turing Machine with oracle access to \(\mathsf{Eval} \). Again, this is done to model the fact that the \(\mathsf{Eval} _{mal}\) algorithm can access an (honest) source of randomness, \(\mathsf{Eval} \), and arbitrarily modify its output using any polynomial-time strategy. Malicious stateful PUFs can further be of two types:

    • Bounded Stateful. Such a PUF can maintain a-priori bounded memory/state (which it may rewrite, as long as the total memory is bounded).

    • Unbounded Stateful. Such a PUF can maintain unbounded memory/state.

  • A malicious encapsulating PUF can possibly encapsulate other (honestly generated) PUFs inside it (see Footnote 4), without knowing the functionality of these inner PUFs. Such a PUF \(\mathsf {PUF}_\texttt {mal} \) can make black-box calls to the inner PUFs, and generate its outputs as a function of the output of the inner (honest) PUFs.

    This is modeled by having the adversary substitute an \(\mathsf{Eval} _{mal}\) procedure of his choice for the honest \(\mathsf{Eval} \) procedure in the \(\mathsf {PUF}_\texttt {mal} \) that it creates, where as usual \(\mathsf{Eval} _{mal}\) is a polynomial-time Turing Machine with oracle access to \(\mathsf{Eval} \). Similar to the two previous bullets, this is done to model the fact that the \(\mathsf{Eval} _{mal}\) algorithm can access an (honest) source of randomness, \(\mathsf{Eval} \), and arbitrarily modify its output using any polynomial-time strategy.

    In addition, \(\mathsf{Eval} _{mal}\) can also make oracle calls to polynomially many other (honestly generated) procedures \(\mathsf{Eval} _1, \mathsf{Eval} _2, \ldots \mathsf{Eval} _{M}\) that are contained in PUFs \(\mathsf {PUF}_1, \mathsf {PUF}_2, \ldots \mathsf {PUF}_M\), for any a-priori unbounded \(M = \mathsf {poly} (n)\). These correspond to honestly generated PUFs that the adversary may be encapsulating within its own malicious PUF. Thus, on some input c, the \(\mathsf{Eval} _{mal}\) procedure may make oracle calls to \(\mathsf{Eval} _1, \mathsf{Eval} _2, \ldots \mathsf{Eval} _{M}\) on polynomially many inputs, and compute its output as a function of the outputs of the \(\mathsf{Eval}, \mathsf{Eval} _1, \mathsf{Eval} _2, \ldots \mathsf{Eval} _{M}\) procedures. Of course, we ensure that the adversary’s \(\mathsf{Eval} _{mal}\) procedure can make calls to some honestly generated procedure \(\mathsf{Eval} _i\) only if the adversary owns the PUF \(\mathsf {PUF}_i\) implementing the \(\mathsf{Eval} _i\) procedure when creating the encapsulating malicious PUF. Furthermore, when the adversary passes such a PUF to an honest party, the adversary “loses ownership” of \(\mathsf {PUF}_i\) and is no longer allowed to access the \(\mathsf{Eval} _i\) procedure; this is similar to the unclonability requirement. This is modeled by assigning an owner to each PUF, and on passing an outer (encapsulating) PUF to an honest party, the adversary must automatically pass all the inner (encapsulated) honest PUFs. Whenever an honest party is in possession of such an adversarial PUF \(\mathsf {PUF}_\texttt {mal} \) and evaluates it, it receives the output of \(\mathsf{Eval} _{mal}\). When the adversary is allowed to construct encapsulating PUFs, we restrict all PUFs to be stateless. Therefore, the model with encapsulating PUFs is incomparable with the model with bounded-stateful malicious PUFs. Further details on the modeling of malicious stateless PUFs that may encapsulate other stateless PUFs are provided in Appendix A. (A toy sketch of such an encapsulating \(\mathsf{Eval} _{mal}\) procedure is given right after this list.)
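The toy sketch below illustrates the encapsulation model just described: a stateless malicious \(\mathsf{Eval} _{mal}\) with oracle access to an encapsulated honest PUF forwards almost all queries to the inner PUF (and would therefore pass random test queries), yet answers one adversarially chosen challenge with a value known to its creator. All names are illustrative; the formal functionality is given in Appendix A.

```python
import os, hashlib

def make_honest_eval():
    key = os.urandom(32)                     # stands in for the unclonable physical randomness
    return lambda challenge: hashlib.sha256(key + challenge).digest()

def make_encapsulating_eval_mal(inner_eval, target_challenge, planted_response):
    """A stateless malicious Eval_mal that encapsulates one honest PUF."""
    def eval_mal(challenge: bytes) -> bytes:
        if challenge == target_challenge:
            return planted_response          # output fully predictable to the PUF's creator
        return inner_eval(challenge)         # every other query is forwarded and echoed
    return eval_mal

eval_s = make_honest_eval()                  # honest PUF_s, created and probed by the sender
target = os.urandom(16)                      # the challenge the adversary cares about
eval_hat = make_encapsulating_eval_mal(eval_s, target, b"\x00" * 32)

t = os.urandom(16)                           # a random test query by the sender...
assert eval_hat(t) == eval_s(t)              # ...is answered consistently with PUF_s
assert eval_hat(target) == b"\x00" * 32      # yet the target response is known to the adversary
```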

To simplify notation, we write \(\mathsf {PUF}\leftarrow \mathsf{Sample} (1^K)\), \(r = \mathsf {PUF}(c)\) and assume that \(\mathsf {PUF}\) is a deterministic function with random output.
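In code, this simplified notation corresponds to treating an honest PUF as a lazily sampled random function: each fresh challenge receives a uniformly random response, repeated queries return the same value, and not even the creator can predict a response without querying. The class and function names below are illustrative only.

```python
import os

RESPONSE_LEN = 32

class LazyPUF:
    """Honest PUF as a deterministic function with random output, via lazy sampling."""
    def __init__(self):
        self._table = {}                           # responses are sampled on first use
    def __call__(self, challenge: bytes) -> bytes:
        if challenge not in self._table:           # fresh challenge: draw a uniform response
            self._table[challenge] = os.urandom(RESPONSE_LEN)
        return self._table[challenge]              # repeated queries return the same value

def sample(security_parameter: int) -> LazyPUF:    # PUF <- Sample(1^n)
    return LazyPUF()

puf = sample(128)
c = os.urandom(16)
assert puf(c) == puf(c)                            # r = PUF(c) is well defined and reproducible
```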

2.2 UC Secure Computation

The UC framework, introduced by [5], is a strong framework that gives security guarantees even when protocols are arbitrarily composed.

Commitments. A UC-secure commitment scheme \(\mathsf {UC\text {-}Com}\) consists of the usual commitment and decommitment algorithms, along with (straight-line) procedures allowing the simulator to extract the committed value of the adversary and to equivocate a value that the simulator committed to. We denote these by \((\mathsf {UC\text {-}Com.Commit},\) \(\mathsf {UC\text {-}Com.Decommit},\mathsf {UC\text {-}Com.Extract},\mathsf {UC\text {-}Com.Equivocate})\). Damgård and Scafuro [8] realized unconditional UC secure commitments using stateless PUFs, in the malicious stateful PUF model.

OT. Ideal 2-choose-1 oblivious transfer (OT) is a two-party functionality that takes two inputs \(m_0, m_1\) from a sender and a bit b from a receiver. It outputs \(m_b\) to the receiver and \(\bot \) to the sender. We use \({\mathcal F} _{\mathsf {ot}}\) to denote this functionality. Given UC oblivious transfer, it is possible to obtain UC secure two-party computation of any functionality.

Formal definitions of these functionalities and background on prior results are provided in the full version of this paper.

3 Unconditional UC Security with (Malicious) Stateless PUFs

As a warm up, we start by considering malicious stateless PUFs as in [7] and we strengthen their protocol in order to achieve security even when the simulator does not have access to a malicious PUF that is in possession of the adversary that created it.

Construction. Let n denote the security parameter. The protocol \(\varPi ^1\) in Fig. 2 UC-securely and unconditionally realizes 2-choose-1 OT in the malicious stateless PUF model, between a sender \({\mathcal S} \) and receiver \({\mathcal R} \), with the following restrictions:

  1. The random variables \((x_0, x_1)\) are chosen by \({\mathcal S} \) independently of \(\mathsf {PUF}_s\) (see Footnote 5).

  2. A (malicious) \({\mathcal R} \) returns to \({\mathcal S} \) the same PUF, \(\mathsf {PUF}_s\), that it received (see Footnote 6).

We enforce these restrictions in this section only for simplicity and modularity purposes. We remove them in Sects. 4 and 6 respectively.

Fig. 2. Protocol \(\varPi ^1\) for 2-choose-1 OT in the malicious stateless PUF model.

Our protocol makes black-box use of a UC-commitment scheme, denoted by the algorithms \(\mathsf {UC\text {-}Com.Commit}\) and \(\mathsf {UC\text {-}Com.Decommit}\). We use \(\mathsf {UC\text {-}Com.}\mathsf {Commit}(a,b)\) to denote a commitment to the concatenation of strings a and b. UC-secure commitments can be unconditionally realized in the malicious stateless PUF model [8]. Formally, we prove the following theorem:

Theorem 1

The protocol \(\varPi ^1\) in Fig. 2 unconditionally UC-securely realizes \({\mathcal F} _{\mathsf {ot}} \) in the malicious stateless PUF model.

This protocol is essentially the protocol of Dachman-Soled et al. [7], modified to enable correct extraction of the sender’s input. The protocol as specified in [7], even though private, does not allow for straight-line extraction of the sender’s input messages, unless one is willing to make the strong assumption that the simulator can make queries to a (malicious) PUF that an adversary created, even when this malicious PUF is in the adversary’s possession (i.e., the adversary is forced not to update nor to damage/destroy the PUF).

Our main modification is to have the sender commit to his values \((x_0, x_1)\) using a UC-secure commitment scheme. In this case, it is possible for the simulator to extract \((x_0, x_1)\) in a straight-line manner from the commitment, and therefore extract the sender’s input while it remains hidden from a real receiver. The rest of the proof follows in the same manner as [7]; recall that we already gave an overview in Sect. 1.4. The formal proofs of correctness and security can be found in the full version of this paper.

4 UC-Security with (Bounded-Stateful Malicious) PUFs

Overview. A malicious stateful PUF can generate outputs as a function of its previous input queries. For the (previous) protocol in Fig. 2, note that in Step 2, \(\mathsf {Sim}_{\mathcal S} \) makes two queries \((c_1, c_2)\) to the PUF such that \((c_1 \oplus c_2) = (x_0 \oplus x_1)\), where \((x_0, x_1)\) are the sender’s random strings. On the other hand, an honest receiver makes two queries \((c_1, c_2)\) to the PUF such that \((c_1 \oplus c_2) = \mathsf {rv}\), for an independent random variable \(\mathsf {rv}\).

Therefore, when combined with the sender’s view, the joint distribution of the evaluation queries made to the PUF by \(\mathsf {Sim}_{\mathcal S} \), differs from the joint distribution of the evaluation queries made to the PUF by an honest receiver. Thus, a malicious sender can distinguish the two worlds by having a malicious stateful PUF compute a reply to \(c_2\) depending on the value of the previous challenge \(c_1\). We will call these attacks of Type I. In this section, we will describe a protocol secure against all possible attacks where a stateful PUF computes responses to future queries as a function of prior queries.

Fig. 3. Protocol \(\varPi _{K}\) for \(K \) 2-choose-1 OTs (with at most \(\ell \)-bounded leakage) in the malicious stateful PUF model. The changes from the protocol in Fig. 2 are underlined.

A stateful PUF created by the sender can also record information about the queries made by the receiver, and replay this information to a malicious sender when he inputs a secret challenge. For PUFs with bounded state, we view these as ‘leakage’ attacks, by considering all information recorded and replayed by a PUF as leakage. We will call these attacks of Type II. We describe a protocol secure against general bounded stateful PUFs (i.e., secure against attacks of both Type I and Type II) in Sect. 5.

Our Strategy. Let \(\ell \) denote a polynomial upper bound on the size of the memory of any malicious PUF created by the sender \({\mathcal S} \). Our strategy to obtain secure oblivious transfer from any PUF with \(\ell \)-bounded state is as follows: We use (the same) \(\mathsf {PUF}_s\) created by the sender, to execute \(K = \varTheta (\ell )\) oblivious transfers in parallel. In our new protocol in Fig. 3, we carefully intersperse an additional round of coin tossing with our basic protocol from Fig. 2, to obtain security against attacks of Type I.

Specifically, we modify the protocol of Fig. 2 as follows: instead of having \({\mathcal S} \) generate the random strings \((x_0, x_1)\), we set the protocol up so that both \({\mathcal S} \) and the receiver \({\mathcal R} \) generate XOR shares of \((x_0, x_1)\). Furthermore, \({\mathcal R} \) generates his shares only after obtaining the PUF and a commitment to sender shares from \({\mathcal S} \). In such a case, the PUF created by \({\mathcal S} \) must necessarily be independent of the receiver shares and consequently, also independent of \((x_0, x_1)\).
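A minimal sketch of this interspersed coin-tossing step is given below: the sender commits to its shares first, the receiver reveals its shares only afterwards, and \((x_0, x_1)\) are defined as the XOR of the two, so they are independent of any PUF the sender created. The hash-based commitment here is only a toy stand-in for the UC-secure commitment scheme used in the actual protocol.

```python
import os, hashlib

N = 16  # length of each share in bytes (illustrative)

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(u ^ v for u, v in zip(a, b))

def coin_toss_x0_x1():
    # Sender picks its shares and commits to them (after sending PUF_s to the receiver).
    s0, s1 = os.urandom(N), os.urandom(N)
    nonce = os.urandom(16)
    com = hashlib.sha256(nonce + s0 + s1).digest()            # toy stand-in for UC-Com.Commit
    # Only after seeing the commitment does the receiver pick and send its own shares.
    r0, r1 = os.urandom(N), os.urandom(N)
    # Once the receiver has returned the PUF, the sender opens the commitment.
    assert hashlib.sha256(nonce + s0 + s1).digest() == com    # receiver checks the opening
    # Both parties derive (x0, x1), which are independent of PUF_s by construction.
    return xor(s0, r0), xor(s1, r1)

x0, x1 = coin_toss_x0_x1()
```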

Recall from Sect. 3 that the simulator against a malicious sender succeeds if it can obtain the output of the PUF on queries of the form \((c, c \oplus x_0 \oplus x_1)\) for a random c, whereas an honest receiver can only make queries of the form \((c_1, c_2)\) for randomly chosen \((c_1, c_2)\). Since \((x_0, x_1)\) appear to be distributed uniformly at random to the PUF, the distributions of \((c, c \oplus x_0 \oplus x_1)\) and \((c_1, c_2)\) are also statistically indistinguishable to the PUF (see Footnote 7). Therefore, the sender simulator succeeds whenever the honest receiver does not abort, and this suffices to prove security against a malicious sender.

Finally, we note that the simulation strategy against a malicious receiver remains similar to the one in Sect. 3, even if the receiver has the ability to create PUFs with unbounded state.

Construction. The protocol \(\varPi _{K}\) in Fig. 3 allows us to use an \(\ell \)-bounded stateful PUF to obtain \(K \) secure (but one-sided leaky) oblivious transfers, such that a malicious sender can obtain at most \(\ell \) bits of additional universal leakage on the joint distribution of the receiver’s choice bits \((b_1, b_2, \ldots b_{K})\). Our protocol makes black-box use of a UC-commitment scheme, denoted by the algorithms \(\mathsf {UC\text {-}Com.Commit}\) and \(\mathsf {UC\text {-}Com.Decommit}\) (see Footnote 8). UC-secure commitments can be unconditionally realized in the malicious stateful PUF model [8].

Theorem 2

The protocol \(\varPi _{K}\) unconditionally UC-securely realizes \(K \) instances of OT (\({\mathcal F} _{\mathsf {ot}} ^{[\otimes K ]}\)) in the \(\ell \)-bounded-stateful PUF model, except that a malicious sender can obtain at most \(\ell \) bits of additional universal leakage on the joint distribution of the receiver’s choice bits over all of \({\mathcal F} _{\mathsf {ot}} ^{[\otimes K ]}\).

Correctness is immediate from inspection, and the complete proof of security is in the full version of the paper.

5 One-Sided Correlation Extractors with Malicious Security

From Sect. 4, in the \(\ell \)-bounded stateful PUF model, we obtain \(K \) leaky oblivious transfers, such that the sender can obtain \(\ell \) bits of universal leakage on the joint distribution of the receiver’s choice bits over all \(K \) oblivious transfers.

Because OT is reversible [37], it suffices to consider a reversed version of the above setting, i.e., where the receiver can obtain \(\ell \) bits of additional universal leakage on the joint distribution of all the sender’s messages over all \(K \) oblivious transfers. More formally, the leakage model we consider is as follows:

One-Sided Leakage Model for Correlation Extractors. Here, we begin by describing our leakage model for OT correlations formally, and then we define one-sided correlation extractors for OT. Our leakage model is as follows:

  1. \(K \)-OT Correlation Generation Phase: For \(i \in [K ]\), the sender \({\mathcal S} \) obtains \((x_0^i, x_1^i) \in {\{0,1\}} ^2\) and the receiver \({\mathcal R} \) gets \((b_i, x_{b_i}^i)\).

  2. Corruption and Leakage Phase: A malicious adversary corrupts the receiver and sends a leakage function \(L: {\{0,1\}} ^{2K} \rightarrow {\{0,1\}} ^{t_R}\). It receives \(L(\{(x_0^i, x_1^i)\}_{i \in [K ]})\).

Let (X, Y) be a random OT correlation (i.e., \(X = (x_0, x_1), Y = (r, x_r)\), where \((x_0, x_1, r)\) are sampled uniformly at random). We denote the \(t_R\)-leaky version of \((X, Y)^K \) described above as \(((X, Y)^K)^{[t_R]}\).

Definition 1

( \((n, p, t_R, \epsilon )\) One-Sided Malicious OT-Extractor). An \((n, p, t_R, \epsilon )\) one-sided malicious OT-extractor is an interactive protocol between 2 parties S and R with access to \(((X, Y)^n)^{[t_R]}\) described above. The protocol implements p independent copies of secure oblivious transfer instances with error \(\epsilon \).

In other words, we want the output oblivious transfer instances to satisfy the standard \(\epsilon \)-correctness and \(\epsilon \)-privacy requirements for OT. In more detail, the correctness requirement is that the receiver output is correct in all p instances of OT with probability at least \((1-\epsilon )\). The privacy requirement is that in every instance of the output OT protocol, a corrupt sender cannot output the receiver’s choice bit, and a corrupt receiver cannot output the ‘other message’ of the sender with probability more than \(\frac{1}{2} + \epsilon \).

Theorem 3

(Extracting a Single OT). There exists a \((2\ell +2n+1, 1, \ell , 2^{-n})\) one-sided OT extractor according to Definition 1.

Theorem 4

(High Production Rate). There exists a \((2\ell +2n, \frac{n}{\log ^2 n}, \ell , \frac{1}{n\log n})\) one-sided OT extractor according to Definition 1.

We prove these theorems by giving a construction and proof of security of such extractors in the following sections. We will make use of strong seeded extractors in our construction, and we define such extractors below.

Definition 2

(Strong seeded extractors). A function \(\mathsf {Ext}: {\{0,1\}} ^n \times {\{0,1\}} ^d \rightarrow {\{0,1\}} ^m\) is called a strong seeded extractor for entropy k if for any (n, k)-source X and an independent random variable Y that is uniform over \({\{0,1\}} ^d\), it holds that \((\mathsf {Ext}(X, Y), Y) \approx (U_m, Y)\).

Here, \(U_m\) is a random variable that is uniformly distributed over m bit strings and is independent of Y, namely \((U_m, Y)\) is a product distribution. In particular, it is known [12, 18, 34] how to construct strong seeded extractors for any entropy \(k = \varOmega (1)\) with seed length \(d = O(\log n)\) and \(m = 0.99\,k\) output bits.
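To make the interface of Definition 2 concrete, the sketch below instantiates a strong seeded extractor from universal hashing (via the leftover hash lemma): each output bit is the GF(2) inner product of the source with a shifted window of the seed (a Toeplitz-style family). Note that this simple variant needs a long seed, \(d = n + m - 1\), unlike the \(O(\log n)\)-seed constructions of [12, 18, 34]; it is included only as an illustration.

```python
import os

def to_bits(data: bytes, n: int):
    return [(data[i // 8] >> (i % 8)) & 1 for i in range(n)]

def toeplitz_extract(x_bits, seed_bits, m: int):
    """Ext: {0,1}^n x {0,1}^(n+m-1) -> {0,1}^m; output bit j = <seed[j:j+n], x> over GF(2)."""
    n = len(x_bits)
    assert len(seed_bits) == n + m - 1
    return [sum(s & x for s, x in zip(seed_bits[j:j + n], x_bits)) % 2 for j in range(m)]

n, m = 256, 128
x = to_bits(os.urandom(n // 8), n)                           # a sample from the (n, k)-source
seed = to_bits(os.urandom((n + m - 1 + 7) // 8), n + m - 1)  # public uniform seed Y
r = toeplitz_extract(x, seed, m)                             # (Ext(X, Y), Y) is close to (U_m, Y)
```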

Construction. In Fig. 4, we give the basic construction of an OT extractor that securely obtains a single oblivious transfer from \(K = (2\ell +2n)\) OTs, when a receiver can obtain at most \(\ell \) bits of universal leakage from the joint distribution of sender inputs over all the OTs.

Fig. 4. \((2\ell +2n, 1, \ell , 2^{-{n}})\) one-sided malicious correlation extractor.

Correctness is immediate from inspection. Intuitively, the protocol is secure against \(\ell \) bits of universal (joint) leakage because setting \(K = 2\ell + 2n\) still leaves n bits of high entropy even when the receiver can obtain \(2\ell + n\) bits of leakage. Moreover, with \(\ell \) bits of additional universal leakage over all pairs of sender inputs \((m_0^1, m_1^1, m_0^2,\) \(m_1^2, \ldots m_0^K, m_1^K)\), the strong seeded extractor extracts an output that is statistically close to uniform, and this suffices to mask the sender input.
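The sketch below shows one plausible reading of this masking step, under our own simplifying assumptions (the exact construction is the one in Fig. 4): the receiver re-aligns its base-OT choice bits with its choice bit b, and the sender masks each of its two messages with an extractor applied to the corresponding column of its random base-OT inputs, so that the receiver can unmask exactly one of them. The one-bit ext() helper is a toy stand-in for the strong seeded extractor.

```python
import os, hashlib

K = 64  # number of leaky base OTs (2l + 2n in the notation above)

def ext(column_bits, seed: bytes) -> int:
    """Toy one-bit extractor stand-in (see the listing after Definition 2 for a real sketch)."""
    return hashlib.sha256(seed + bytes(column_bits)).digest()[0] & 1

# Base OT correlations: the sender holds (x0_i, x1_i); the receiver holds c_i and x_{c_i}^i.
x0 = [b & 1 for b in os.urandom(K)]
x1 = [b & 1 for b in os.urandom(K)]
c  = [b & 1 for b in os.urandom(K)]
received = [x1[i] if c[i] else x0[i] for i in range(K)]

def extract_single_ot(m0: int, m1: int, b: int) -> int:
    d = [b ^ c[i] for i in range(K)]                        # receiver re-aligns its choice bits
    seed0, seed1 = os.urandom(16), os.urandom(16)           # public extractor seeds
    col0 = [x1[i] if d[i] else x0[i] for i in range(K)]     # known to the receiver iff b = 0
    col1 = [x0[i] if d[i] else x1[i] for i in range(K)]     # known to the receiver iff b = 1
    y0 = m0 ^ ext(col0, seed0)                              # sender masks m0
    y1 = m1 ^ ext(col1, seed1)                              # sender masks m1
    # Receiver side: its received column equals col0 if b = 0 and col1 if b = 1.
    return (y1 ^ ext(received, seed1)) if b else (y0 ^ ext(received, seed0))

assert extract_single_ot(0, 1, 0) == 0 and extract_single_ot(0, 1, 1) == 1
```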

The formal proof of security can be found in the full version of this paper.

High Production Rate: It is possible to obtain an improved production rate at the cost of higher simulation error. This follows using techniques developed in prior work [17, 36], and the details can be found in the full version.

6 UC Secure Computation in the Malicious Encapsulation Model

Let us consider the stateless protocol described in Sect. 3. In this protocol, the receiver must query the \(\mathsf {PUF}_s\) that he obtained from the sender on a random challenge c, before returning \(\mathsf {PUF}_s\) to the sender. A malicious receiver cannot have queried \(\mathsf {PUF}_s\) on both c and \((c \oplus x_0 \oplus x_1)\), because \((x_0 \oplus x_1)\) is chosen by the sender, independently and uniformly at random, and is revealed only after the receiver has returned \(\mathsf {PUF}_s\). If a malicious receiver were restricted to honestly returning the PUF generated by the sender, then by unpredictability of \(\mathsf {PUF}_s\), the output of \(\mathsf {PUF}_s\) on \((c \oplus x_0 \oplus x_1)\) would be a completely unpredictable uniform random variable from the point of view of the receiver, and this sufficed to prove sender security.

Fig. 5. OT in the malicious stateless PUF model with encapsulation. We underline all differences from the protocol in the stateless malicious PUF model.

However, if a malicious receiver had no such restriction, it could possibly generate a malicious PUF \(\widehat{\mathsf {PUF}}\) of his own and give it to the sender, in place of the sender’s PUF that it was actually supposed to return. The output of \(\widehat{\mathsf {PUF}}\) would no longer remain unpredictable to the receiver and this would lead to a total break of security. As already pointed out in [7], this can be fixed by having the sender make “test queries” to the PUF he generates, before sending the PUF to the receiver. Indeed, when \(\widehat{\mathsf {PUF}}\) is generated by the receiver independently of \(\mathsf {PUF}_s\), the response of \(\widehat{\mathsf {PUF}}\) on the sender’s random test query will not match the response of \(\mathsf {PUF}_s\) and the sender will catch such a cheating receiver with overwhelming probability.

However there could be a different attack: a malicious receiver can construct \(\widehat{\mathsf {PUF}}\) encapsulating \(\mathsf {PUF}_s\), such that \(\widehat{\mathsf {PUF}}\) redirects all test queries to \(\mathsf {PUF}_s\) (and outputs the value output by \(\mathsf {PUF}_s\) on the evaluation query), whereas it maliciously answers all protocol queries. In order to rule this out, we ensure that the protocol queries (i.e., the input c that the receiver must query \(\mathsf {PUF}_s\) with) are generated uniformly at random, by using coin-tossing, combined with cut-and-choose tests to ensure that they are properly used. This is done carefully to ensure that the test queries and protocol queries are identically distributed in the view of \(\widehat{\mathsf {PUF}}\) (and are revealed only after the receiver has sent \(\widehat{\mathsf {PUF}}\) to the sender).

This ensures that if a maliciously generated \(\widehat{\mathsf {PUF}}\) correctly answers all test queries, then with overwhelming probability it must necessarily have answered at least one evaluation query correctly according to the output of \(\mathsf {PUF}_s\). At this point, an OT combiner is used to obtain one secure instance of OT.

Let the security parameter be n. The protocol in Fig. 5 UC-securely realizes 2-choose-1 OT in a stronger model, where a malicious party is allowed to create malicious PUFs that encapsulate other honest PUFs (see Sect. 2.1). We emphasize that our protocol does not require that honest parties must have the capability to encapsulate PUFs, yet it is secure even when adversarial parties can create encapsulated PUFs. The protocol uses a UC-commitment scheme, secure in the malicious stateless encapsulated PUF model. We use \(\mathsf{Com} \) to denote the ideal functionality for such a scheme. We construct such a scheme in Sect. 7.

Though the commitment scheme we construct is UC-secure, it is not immediately clear that it composes with the rest of the OT protocol, for the same reasons as were described in Sect. 4. Namely, the UC framework seemingly does not capture the possibility of transferring PUFs across sub-protocols; thus, we would like to ensure that our UC-commitment scheme composes with the rest of the protocol even if PUFs created for the commitment scheme are used elsewhere.

As in Sect. 4, this can be resolved by requiring both parties to return PUFs to their respective creators at the end of the decommitment phase, with the creators performing simple verification checks to ensure that the correct PUF was returned. If any party fails to return the PUF, the other party aborts the protocol. Therefore, parties cannot pass off PUFs used by some party in a previous sub-protocol as new PUFs in a different sub-protocol.

Correctness

Claim

For all \((m_0, m_1) \in {\{0,1\}} ^2\) and \(\mathsf {b} \in {\{0,1\}} \), the output of \({\mathcal R} \) equals \(m_b\).

Proof

If \(b = 0\), \(\mathsf {bc} ^i = \mathsf {b} ^i\) for all i, and the receiver computes:

$$\begin{aligned} m_0'&= S_0 \mathop {\bigoplus }\nolimits _{i \in n \setminus I}r^i = S_0 \mathop {\bigoplus }\nolimits _{i \in n \setminus I} \mathsf {PUF}_s(c^i) = S_0 \mathop {\bigoplus }\nolimits _{i \in n \setminus I} \mathsf {PUF}_s(\mathsf {val}^i \oplus x^i_{\mathsf {b} _i}) \\&= m_0 \mathop {\bigoplus }\nolimits _{i \in n \setminus I} \mathsf {PUF}_s(\mathsf {val}^i \oplus x^i_{\mathsf {bc} _i}) \mathop {\bigoplus }\nolimits _{i \in n \setminus I} \mathsf {PUF}_s(\mathsf {val}^i \oplus x^i_{\mathsf {b} _i}) = m_0. \end{aligned}$$

If \(b = 1\), \(1-\mathsf {bc} ^i = \mathsf {b} ^i\) for all i, and the receiver computes:

$$\begin{aligned} m_1'&= S_1 \mathop {\bigoplus }\nolimits _{i \in n \setminus I}r^i = S_1 \mathop {\bigoplus }\nolimits _{i \in n \setminus I} \mathsf {PUF}_s(c^i) = S_1 \mathop {\bigoplus }\nolimits _{i \in n \setminus I} \mathsf {PUF}_s(\mathsf {val}^i \oplus x^i_{b_i}) \\&= m_1 \mathop {\bigoplus }\nolimits _{i \in n \setminus I} \mathsf {PUF}_s(\mathsf {val}^i \oplus x^i_{1-\mathsf {bc} _i}) \mathop {\bigoplus }\nolimits _{i \in n \setminus I} \mathsf {PUF}_s(\mathsf {val}^i \oplus x^i_{\mathsf {b} _i}) = m_1. \end{aligned}$$

The formal proof of security can be found in the full version of the paper.

7 UC Commitments in the Malicious Encapsulation Model

In this section we construct unconditional UC commitments using stateless PUFs. The model we consider is incomparable to that of [8], since in our model an adversary can encapsulate honest PUFs (see Sect. 2.1) when creating malicious stateless encapsulated PUFs. Note that the protocol does not require any honest party to have the ability to encapsulate PUFs, but is secure against parties that do have this ability.

We note that it suffices to construct an extractable commitment scheme that is secure against encapsulation. Indeed, given such a scheme, Damgård and Scafuro [8] show that it is possible to compile the extractable commitment scheme using an additional ideal commitment scheme, to obtain a UC commitment scheme that is secure in the malicious stateless PUF model. Since the compiler of [8] does not require any additional PUFs at all, if the extractable commitment and the ideal commitment are secure against encapsulation attacks, then so is the resulting UC commitment.

Extractable Commitments. We describe how to construct an extractable bit commitment scheme \(\mathsf {ExtCom}= (\mathsf {ExtCom.Commit},\mathsf {ExtCom.Decommit},\mathsf {ExtCom.}\mathsf {Extract})\) that is secure in the malicious stateless PUF model with encapsulation. We start with the extractable commitment scheme of [8] that is secure against malicious PUFs in the non-encapsulated setting. It crucially relies on the fact that the initial PUF (let us call it \(\mathsf{PUF} _r\)) sent by the receiver cannot be replaced by the committer (as that would be caught using a previously computed test query). To perform extraction, the simulator against a malicious committer observes the queries made by the committer to \(\mathsf{PUF} _r\) and extracts the committer’s bit. However, in the encapsulated setting, the malicious committer could encapsulate the receiver’s PUF inside another PUF (let us call it \(\widehat{\mathsf{PUF} _r}\)) that, for all but one query, answers with the output of \(\mathsf{PUF} _r\). For the value that the committer is actually required to query on, \(\widehat{\mathsf{PUF} _r}\) responds with a maliciously chosen value. Observe that in the protocol description this query is chosen only by the committer, and hence this is an actual attack. Consequently, except with negligible probability, all the receiver’s test queries will be answered by \(\widehat{\mathsf{PUF} _r}\) with the output of the receiver’s original PUF \(\mathsf{PUF} _r\). On the other hand, since the target query is no longer forwarded by \(\widehat{\mathsf{PUF} _r}\) to the receiver’s original PUF, the simulator does not get access to the target query and hence cannot extract the committer’s bit.

To overcome this issue, we develop a new technique that forces the malicious committer to reveal the target query to the simulator (but not to the honest receiver). After the committer returns \(\widehat{\mathsf{PUF} _r}\), the receiver creates a new PUF (let us call it \(\mathsf{PUF} _R\)). Now, using the commitment, the receiver queries \(\mathsf{PUF} _R\) on two values, one of which is guaranteed to be the output of \(\widehat{\mathsf{PUF} _r}\) on the target query. The receiver stores these two outputs and sends \(\mathsf{PUF} _R\) to the committer. The malicious committer now has to query \(\mathsf{PUF} _R\) with \(\widehat{\mathsf{PUF} _r}\)’s output on his target query and commit to the value output by \(\mathsf{PUF} _R\) (using an ideal commitment scheme). In the decommitment phase, using the previously stored values and the committer’s input bit, the receiver can verify that the committer indeed queried \(\mathsf{PUF} _R\) on the correct value. Observe that since the receiver has precomputed the desired output, the malicious committer will not be able to produce an honest decommitment if he tampers with \(\mathsf{PUF} _R\) and produces a different output. Therefore, the malicious committer must indeed query \(\mathsf{PUF} _R\), and this can be observed by the simulator and used to extract the committer’s bit. Our scheme is described in Fig. 6. We show that this scheme is correct, statistically hiding, and extractable, and give further details in the full version.

Fig. 6. Protocol for Extractable Commitment in the malicious stateless PUF model with encapsulation.
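As a rough illustration of the extraction idea described above, the sketch below shows how a simulator that observes the committer's queries to \(\mathsf{PUF} _R\) (through the ideal PUF functionality) can recover the committed bit by matching the observed query against the two responses precomputed for the candidate target queries. The interface and names are ours; the actual protocol is in Fig. 6.

```python
def extract_committed_bit(observed_queries_to_puf_R, expected_response_for_bit):
    """expected_response_for_bit[b] is the returned PUF's answer on the candidate target
    query for bit b, precomputed by the receiver/simulator before handing over PUF_R."""
    for query in observed_queries_to_puf_R:
        for bit in (0, 1):
            if query == expected_response_for_bit[bit]:
                return bit                    # the committer was forced to query PUF_R here
    return None                               # no matching query: the commitment will not verify

# Usage sketch: a committer holding bit 1 must query PUF_R on expected_response_for_bit[1].
expected = {0: b"response-for-bit-0", 1: b"response-for-bit-1"}
assert extract_committed_bit([b"response-for-bit-1"], expected) == 1
```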