1 Introduction

Quantum computation is the subject of intense research due to the potential of quantum computers to efficiently solve problems which are believed to be intractable for classical computers. The current focus of experiments aiming to realize scalable quantum computation is to demonstrate a quantum computational advantage. In other words, this means performing a quantum computation in order to solve a problem which is proven to be classically intractable, based on plausible complexity-theoretic assumptions. Examples of such problems, suitable for near-term experiments, include boson sampling [1], instantaneous quantum polynomial time (IQP) computations [2] and others [3,4,5]. The prospect of achieving these tasks has ignited a flurry of experimental efforts [6,7,8,9]. However, while demonstrating a quantum computational advantage is an important milestone towards scalable quantum computing, it also raises a significant challenge:

If aquantum experiment solves aproblem which is proven to be intractable for classical computers, how can one verify the outcome of the experiment?

The first researcher to formalise the above “paradox” as a complexity-theoretic question was Gottesman, at a 2004 conference [10]. It was then promoted, in 2007, as a complexity challenge by Aaronson, who asked: “If a quantum computer can efficiently solve a problem, can it also efficiently convince an observer that the solution is correct? More formally, does every language in the class of quantumly tractable problems (\(\mathsf {BQP}\)) admit an interactive proof where the prover is in \(\mathsf {BQP}\) and the verifier is in the class of classically tractable problems (\(\mathsf {BPP}\))?” [10]. Vazirani then emphasized the importance of this question, not only from the perspective of complexity theory, but also from a philosophical point of view [11]. In 2007, he raised the question of whether quantum mechanics is a falsifiable theory, and suggested that a computational approach could answer this question. This perspective was explored in depth by Aharonov and Vazirani in [12]. They argued that although many of the predictions of quantum mechanics have been experimentally verified to remarkable precision, all of those predictions involved systems of low complexity: few particles, or few degrees of freedom for the quantum mechanical system. The same technique of “predict and verify” quickly becomes infeasible for systems of even a few hundred interacting particles, due to the exponential overhead of classically simulating quantum systems. So what if, they ask, the predictions of quantum mechanics start to differ significantly from the real world in the high-complexity regime? How would we be able to check this? Thus, the fundamental question is whether there exists a verification procedure for quantum mechanical predictions which is efficient for arbitrarily large systems.

In trying to answer this question we return to complexity theory. The primary complexity class that we are interested in is \(\mathsf {BQP}\), which, as mentioned above, is the class of problems that can be solved efficiently by a quantum computer. The analogous class for classical computers with randomness is denoted \(\mathsf {BPP}\). Finally, concerning verification, we have the class \(\mathsf {MA}\), which stands for Merlin-Arthur. This consists of problems whose solutions can be verified by a \(\mathsf {BPP}\) machine when given a proof string, called a witness.Footnote 1 \(\mathsf {BPP}\) is contained in \(\mathsf {BQP}\), since any problem which can be solved efficiently on a classical computer can also be solved efficiently on a quantum computer. Additionally, \(\mathsf {BPP}\) is contained in \(\mathsf {MA}\) since any \(\mathsf {BPP}\) problem admits a trivial empty witness. Both of these containments are believed to be strict, though this is still unproven.

What about the relationship between \(\mathsf {BQP}\) and \(\mathsf {MA}\)? There are problems which are known to be contained in both classes and believed to lie outside of \(\mathsf {BPP}\). One such example is factoring. Shor’s polynomial-time quantum algorithm for factoring demonstrates that the problem is in \(\mathsf {BQP}\) [14]. Additionally, for any number to be factored, the witness simply consists of a list of its prime factors, thus showing that the problem is also in \(\mathsf {MA}\). In general, however, it is believed that \(\mathsf {BQP}\) is not contained in \(\mathsf {MA}\) [15, 16]. The conjectured relationship between these complexity classes is illustrated in Fig. 1.

Fig. 1 Suspected relationship between BQP and MA

What this tells us is that, very likely, there do not exist witnesses certifying the outcomes of general quantum experiments.Footnote 2 We therefore turn to a generalization of \(\mathsf {MA}\) known as an interactive-proof system. This consists of two entities: a verifier and a prover. The verifier is a \(\mathsf {BPP}\) machine, whereas the prover has unbounded computational power. Given a problem for which the verifier wants to check a reported solution, the verifier and the prover interact for a number of rounds which is polynomial in the size of the input to the problem. At the end of this interaction, the verifier should accept a valid solution with high probability and reject, with high probability, otherwise. The class of problems which admit such a protocol is denoted \(\mathsf {IP}\).Footnote 3 In contrast to \(\mathsf {MA}\), instead of having a single proof string for each problem, one has a transcript of back-and-forth communication between the verifier and the prover.

If we are willing to allow our notion of verification to include such interactive protocols, then one would like to know whether \(\mathsf {BQP}\) is contained in \(\mathsf {IP}\). Unlike the relation between \(\mathsf {BQP}\) and \(\mathsf {MA}\), it is, in fact, the case that \(\mathsf {BQP} \subseteq \mathsf {IP}\), which means that every problem which can be efficiently solved by a quantum computer admits an interactive-proof system. One might be tempted to think that this settles the question of verification; however, the situation is more subtle. Recall that in \(\mathsf {IP}\), the prover is computationally unbounded, whereas for our purposes we require the prover to be restricted to \(\mathsf {BQP}\) computations. Hence, the question that we would like answered and, arguably, the main open problem concerning quantum verification is the following:

Problem 1 (Verifiability of \(\mathsf {BQP}\) computations)

Does every problem in \(\mathsf {BQP}\) admit an interactive-proof system in which the prover is restricted to \(\mathsf {BQP}\) computations?

As mentioned, this complexity-theoretic formulation of the problem was considered by Gottesman, Aaronson and Vazirani [10, 11] and, in fact, Aaronson has offered a $25 prize for its resolution [10]. While the question remains open, one does arrive at a positive answer through slight alterations of the interactive-proof system. Specifically, if the verifier interacts with two or more \(\mathsf {BQP}\)-restricted provers, instead of one, and the provers are not allowed to communicate with each other during the protocol, then it is possible to efficiently verify arbitrary \(\mathsf {BQP}\) computations [18,19,20,21,22,23,24]. Alternatively, in the single-prover setting, if we allow the verifier to have a constant-size quantum computer and the ability to send/receive quantum states to/from the prover, then it is again possible to verify all polynomial-time quantum computations [25,26,27,28,29,30,31,32,33]. Note that in this case, while the verifier is no longer fully “classical”, its computational capability is still restricted to \(\mathsf {BPP}\), since simulating a constant-size quantum computer can be done in constant time. These scenarios are depicted in Fig. 2.

Fig. 2 Models for verifiable quantum computation

The primary technique that has been employed in most, though not all, of these settings to achieve verification is known as blindness. This entails delegating a computation to the provers in such a way that they cannot distinguish this computation from any other of the same size, unconditionally.Footnote 4 Intuitively, verification then follows by having most of these computations be tests or traps which the verifier can check. If the provers attempt to deviate, they will have a high chance of triggering these traps, prompting the verifier to reject.

In this paper, we review all of these approaches to verification. We broadly classify the protocols as follows:

  1. Single-prover prepare-and-send. These are protocols in which the verifier has the ability to prepare quantum states and send them to the prover. They are covered in Section 2.

  2. Single-prover receive-and-measure. In this case, the verifier receives quantum states from the prover and has the ability to measure them. These protocols are presented in Section 3.

  3. Multi-prover entanglement-based. In this case, the verifier is fully classical, however it interacts with more than one prover. The provers are not allowed to communicate during the protocol. Section 4 is devoted to these protocols.

From the complexity-theoretic perspective, the protocols from the first two sections are classified as \(\mathsf {QPIP}\) (quantum prover interactive proofs) protocols, i.e. protocols in which the verifier has a minimal quantum device and can send or receive quantum states. Conversely, the entanglement-based protocols are classified as \(\mathsf {MIP^{*}}\) (multi-prover interactive proofs with entanglement) protocols, in which the verifier is classical and interacts with provers that share entanglement.Footnote 5

After reviewing the major approaches to verification, in Section 5 we address a number of related topics. In particular, while all of the protocols from Sections 2–4 are concerned with the verification of general \(\mathsf {BQP}\) computations, in Section 5.1 we mention sub-universal protocols, designed to verify only a particular subclass of quantum computations. Next, in Section 5.2 we discuss an important practical aspect of verification: fault tolerance. We comment on the possibility of making protocols resistant to noise which could affect any of the involved quantum devices. This is an important consideration for any realistic implementation of a verification protocol. Finally, in Section 5.3 we outline some of the existing experimental implementations of these protocols.

Throughout the review, we assume familiarity with the basics of quantum information theory and some elements of complexity theory. However, we provide a brief overview of these topics, as well as other notions that are used in this review (such as measurement-based quantum computing), in the appendix, Section A. Note also that we will be referencing complexity classes such as \(\mathsf {BQP}\), \(\mathsf {QMA}\), \(\mathsf {QPIP}\) and \(\mathsf {MIP^{*}}\). Definitions for all of these are provided in Section A of the appendix. We begin with a short overview of blind quantum computing.

1.1 Blind Quantum Computing

The concept of blind computing is highly relevant to quantum verification. Here, we simply give a succinct outline of the subject. For more details, see the review of blind quantum computing protocols by Fitzsimons [34], as well as [35,36,37,38,39]. Note that, while the review of Fitzsimons covers all of the material presented in this section (and more), we restate the main ideas so that our review is self-contained, and also in order to establish some of the notation that is used throughout the rest of the paper.

Blindness is related to the idea of computing on encrypted data [40]. Suppose a client has some input x and would like to compute a function f of that input; however, evaluating the function directly is computationally infeasible for the client. Luckily, the client has access to a server with the ability to evaluate \(f(x)\). The problem is that the client does not trust the server with the input x, since it might involve private or secret information (e.g. medical records, military secrets, proprietary information, etc.). The client does, however, have the ability to encrypt x, using some encryption procedure \(\mathcal {E}\), to a ciphertext \(y \leftarrow \mathcal {E}(x)\). As long as this encryption procedure hides x sufficiently well, the client can send y to the server and receive in return (potentially after some interaction with the server) a string z which decrypts to \(f(x)\). In other words, \(f(x) \leftarrow \mathcal {D}(z)\), where \(\mathcal {D}\) is a decryption procedure that can be performed efficiently by the client.Footnote 6 The encryption procedure can, roughly, provide two types of security: computational or information-theoretic. Computational security means that the protocol is secure as long as certain computational assumptions are true (for instance, that the server is unable to invert one-way functions). Information-theoretic security (sometimes referred to as unconditional security), on the other hand, guarantees that the protocol is secure even against a server of unbounded computational power. See [45] for more details on these topics.

In the quantum setting, the situation is similar to that of \(\mathsf {QPIP}\) protocols: the client is restricted to \(\mathsf {BPP}\) computations, but has some limited quantum capabilities, whereas the server is a \(\mathsf {BQP}\) machine. Thus, the client would like to delegate \(\mathsf {BQP}\) functions to the server, while keeping the input and the output hidden. The first solution to this problem was provided by Childs [35]. His protocol achieves information-theoretic security but also requires the client and the server to exchange quantum messages for a number of rounds that is proportional to the size of the computation. This was later improved in a protocol by Broadbent et al. [36], known as universal blind quantum computing (UBQC), which maintained information-theoretic security but reduced the quantum communication to a single message from the client to the server. UBQC still requires the client and the server to have a total communication which is proportional to the size of the computation, however, apart from the first quantum message, the interaction is purely classical. Let us now state the definition of perfect, or information-theoretic, blindness from [36]:

Definition 1 (Blindness)

Let P be a delegated quantum computation protocol involving a client and a server. The client draws the input from the random variable X. Let \(L(X)\) be any function of this random variable. We say that the protocol is blind, while leaking at most \(L(X)\), if, on the client’s input X, for any \(l \in Range(L)\), the following two properties hold when given \(l = L(X)\):

  1. The distribution of the classical information obtained by the server in P is independent of X.

  2. Given the distribution of classical information described in 1, the state of the quantum system obtained by the server in P is fixed and independent of X.

The definition is essentially saying that the server’s “view” of the protocol should be independent of the input, when given the length of the input. This view consists, on the one hand, of the classical information he receives, which is independent of X, given \(L(X)\). On the other hand, for any fixed choice of this classical information, his quantum state should also be independent of X, given \(L(X)\). Note that the definition can be extended to the case of multiple servers as well. To provide intuition for how a protocol can achieve blindness, we will briefly recap the main ideas from [35, 36]. We start by considering the quantum one-time pad.

Quantum One-Time Pad

Suppose we have two parties, Alice and Bob, and Alice wishes to send one qubit, \(\rho \), to Bob such that all information about \(\rho \) is kept hidden from a potential eavesdropper, Eve. For this to work, we will assume that Alice and Bob share two classical random bits, denoted \(b_{1}\) and \(b_{2}\), that are known only to them. Alice will then apply the operation \(\mathsf {X}^{b_{1}} \textsf {Z}^{b_{2}}\) (the quantum one-time pad) to \(\rho \), resulting in the state \(\mathsf {X}^{b_{1}} \textsf {Z}^{b_{2}} \rho \textsf {Z}^{b_{2}} \textsf {X}^{b_{1}}\), and send this state to Bob. If Bob then also applies \(\mathsf {X}^{b_{1}} \textsf {Z}^{b_{2}}\) to the state he received, he will recover \(\rho \). What happens if Eve intercepts the state that Alice sends to Bob? Because Eve does not know the random bits \(b_{1}\) and \(b_{2}\), the state that she will intercept will be:

$$ \frac{1}{4} \sum\limits_{b_{1}, b_{2} \in \{0, 1\}} \textsf{X}^{b_{1}} \textsf{Z}^{b_{2}} \rho \textsf{Z}^{b_{2}} \textsf{X}^{b_{1}} $$
(1)

However, it can be shown that for any single-qubit state \(\rho \):

$$ \frac{1}{4} \sum\limits_{b_{1}, b_{2} \in \{0, 1\}} \textsf{X}^{b_{1}} \textsf{Z}^{b_{2}} \rho \textsf{Z}^{b_{2}} \textsf{X}^{b_{1}} = I/2 $$
(2)

In other words, the state that Eve intercepts is the totally mixed state, irrespective of the original state \(\rho \). But the totally mixed state is, by definition, the state of maximal uncertainty. Hence, Eve cannot recover any information about \(\rho \), regardless of her computational power. Note that for this argument to work, and in particular for (2) to be true, Alice and Bob’s shared bits must be uniformly random. If Alice wishes to send n qubits to Bob, then as long as Alice and Bob share \(2n\) random bits, they can simply perform the same procedure for each of the n qubits. Equation (2) generalizes to the multi-qubit case so that for an n-qubit state \(\rho \) we have:

$$ \frac{1}{4^{n}} \sum\limits_{\mathbf{b_{1}}, \mathbf{b_{2}} \in \{0, 1\}^{n}} \textsf{X}(\mathbf{b_{1}}) \textsf{Z}(\mathbf{b_{2}}) \rho \textsf{Z}(\mathbf{b_{2}}) \textsf{X}(\mathbf{b_{1}}) = I/2^{n} $$
(3)

Here, \(\mathbf {b_{1}}\) and \(\mathbf {b_{2}}\) are n-bit vectors, \(\mathsf {X}(\mathbf {b}) = \bigotimes \limits _{i = 1}^{n} \textsf {X}^{\mathbf {b(i)}}\), \(\mathsf {Z}(\mathbf {b}) = \bigotimes \limits _{i = 1}^{n} \mathsf {Z}^{\mathbf {b(i)}}\) and I is the \(2^{n}\)-dimensional identity matrix.
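Equation (2) is straightforward to check numerically. Below is a minimal sketch (the function and variable names are our own) which averages the padded state over the four possible key pairs and confirms that Eve's view is the maximally mixed state:

```python
import itertools
import numpy as np

# Single-qubit Pauli matrices
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def pad_average(rho):
    """Average X^{b1} Z^{b2} rho Z^{b2} X^{b1} over all four key pairs."""
    total = np.zeros_like(rho)
    for b1, b2 in itertools.product([0, 1], repeat=2):
        U = np.linalg.matrix_power(X, b1) @ np.linalg.matrix_power(Z, b2)
        total += U @ rho @ U.conj().T
    return total / 4

# Check Eq. (2) on a random single-qubit pure state
psi = np.random.randn(2) + 1j * np.random.randn(2)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())
print(np.allclose(pad_average(rho), I2 / 2))  # True: Eve sees I/2
```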

Childs’ Protocol for Blind Computation

Now suppose Alice has some n-qubit state \(\rho \) and wants a quantum circuit \(\mathcal {C}\) to be applied to this state, with the output measured in the computational basis. However, she only has the ability to store n qubits, prepare qubits in the \({\left \vert {0}\right \rangle }\) state, swap any two qubits, or apply a Pauli \(\mathsf {X}\) or \(\mathsf {Z}\) to any of the n qubits. So, in general, she will not be able to apply an arbitrary quantum circuit \(\mathcal {C}\), or perform measurements. Bob, on the other hand, does not have these limitations as he is a \(\mathsf {BQP}\) machine and thus able to perform universal quantum computations. How can Alice delegate the application of \(\mathcal {C}\) to her state without revealing any information about it, apart from its size, to Bob? The answer is provided by Childs’ protocol [35]. Before presenting the protocol, recall that any quantum circuit, \(\mathcal {C}\), can be expressed as a combination of Clifford operations and \(\mathsf {T}\) gates. Additionally, Clifford operations normalise Pauli gates. All of these notions are defined in the appendix, Section A.

First, Alice will one-time pad her state and send the padded state to Bob. As mentioned, this will reveal no information to Bob about \(\rho \). Next, Alice instructs Bob to start applying the gates in \(\mathcal {C}\) to the padded state. Apart from the \(\mathsf {T}\) gates, all other operations in \(\mathcal {C}\) will be Clifford operations, which normalise the Pauli gates.Footnote 7 Thus, if Alice’s padded state is \(\mathsf {X}(\mathbf {b_{1}}) \mathsf {Z}(\mathbf {b_{2}}) \rho \mathsf {Z}(\mathbf {b_{2}}) \mathsf {X}(\mathbf {b_{1}})\) and Bob applies the Clifford unitary \(U_{C}\), the resulting state will be:

$$ U_{C} \textsf{X}(\mathbf{b_{1}}) \textsf{Z}(\mathbf{b_{2}}) \rho \textsf{Z}(\mathbf{b_{2}}) \textsf{X}(\mathbf{b_{1}}) U_{C}^{\dagger} = \textsf{X}(\mathbf{b^{\prime}_{1}}) \textsf{Z}(\mathbf{b^{\prime}_{2}}) U_{C} \rho U_{C}^{\dagger} \textsf{Z}(\mathbf{b^{\prime}_{2}}) \textsf{X}(\mathbf{b^{\prime}_{1}}) $$
(4)

Here, \(\mathbf {b^{\prime }_{1}}\) and \(\mathbf {b^{\prime }_{2}}\) are linearly related to \(\mathbf {b_{1}}\) and \(\mathbf {b_{2}}\), meaning that Alice can compute them using only xor operations. This gives her an updated pad for her state. If \(\mathcal {C}\) consisted exclusively of Clifford operations, then Alice would only need to keep track of the updated pad (also referred to as the Pauli frame) after each gate. Once Bob returns the state, she simply undoes the one-time pad using the updated key that she computed, and recovers \(\mathcal {C} \rho \mathcal {C}^{\dagger }\). Of course, this will not work if \(\mathcal {C}\) contains \(\mathsf {T}\) gates, since, up to an overall phase, we have that:

$$ \textsf{T} \textsf{X}^{a} = \textsf{X}^{a} \textsf{Z}^{a} \textsf{S}^{a} \textsf{T} $$
(5)

where \(\mathsf {S} = \textsf {T}^{2}\) is not a Pauli gate (the additional \(\mathsf {Z}^{a}\) factor is harmless, since it can be absorbed into the \(\mathsf {Z}\) part of the pad). In other words, if we try to commute the \(\mathsf {T}\) operation with the one-time pad we will get an unwanted \(\mathsf {S}\) gate applied to the state. Worse, the \(\mathsf {S}\) will have a dependency on one of the secret pad bits for that particular qubit. This means that if Alice asks Bob to apply an \(\mathsf {S}^{a}\) operation she will reveal one of her pad bits. Fortunately, as explained in [35], there is a simple way to remedy this problem. After each \(\mathsf {T}\) gate, Alice asks Bob to return the quantum state to her. Suppose that Bob had to apply a \(\mathsf {T}\) on qubit j. Alice then applies a new one-time pad on that qubit. If the previous pad had no \(\mathsf {X}\) gate applied to j, she will swap this qubit with a dummy state that does not take part in the computation,Footnote 8 otherwise she leaves the state unchanged. She then returns the state to Bob and asks him to apply an \(\mathsf {S}\) gate to qubit j. Since this operation is always applied after a \(\mathsf {T}\) gate, it does not reveal any information about Alice’s pad. Bob’s operation will therefore cancel the unwanted \(\mathsf {S}\) gate when it appears, and otherwise it will act on a qubit which does not take part in the computation. The state should then be sent back to Alice so that she can undo the swap operation if it was performed. Once all the gates in \(\mathcal {C}\) have been applied, Bob is instructed to measure the resulting state in the computational basis and return the classical outcomes to Alice. Since the quantum output was one-time padded, the classical outcomes will also be one-time padded. Alice will then undo the pad and recover her desired output.
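The Pauli-frame bookkeeping described above is easy to check directly. The following sketch (the helper equal_up_to_phase and the choice of the \(\mathsf {H}\) gate as a representative Clifford are our own) verifies a Clifford key-update rule as well as the \(\mathsf {T}\)-gate obstruction of (5):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1, -1]).astype(complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
S = np.diag([1, 1j])
T = np.diag([1, np.exp(1j * np.pi / 4)])

def equal_up_to_phase(A, B):
    """Check A = e^{i*phi} B for some global phase phi."""
    j = np.argmax(np.abs(B))
    phase = A.flat[j] / B.flat[j]
    return np.allclose(A, phase * B)

pow_ = np.linalg.matrix_power

# Clifford key update: H X^a Z^b = X^b Z^a H (up to phase), so for an H gate
# Alice simply swaps her two pad bits -- a purely classical update.
for a in (0, 1):
    for b in (0, 1):
        assert equal_up_to_phase(H @ pow_(X, a) @ pow_(Z, b),
                                 pow_(X, b) @ pow_(Z, a) @ H)

# Non-Clifford obstruction, Eq. (5): T X^a = X^a Z^a S^a T (up to phase).
# Commuting T past the pad leaves behind a key-dependent S^a gate.
for a in (0, 1):
    assert equal_up_to_phase(T @ pow_(X, a),
                             pow_(X, a) @ pow_(Z, a) @ pow_(S, a) @ T)
print("Pauli-frame identities verified")
```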

While Childs’ protocol provides an elegant solution to the problem of quantum computing on encrypted data, it has significant requirements in terms of Alice’s quantum capabilities. If Alice’s input is fully classical, i.e. some state \({\left \vert {x}\right \rangle }\), where \(x \in \{0,1\}^{n}\), then Alice would only require a constant-size quantum memory. Even so, the protocol requires Alice and Bob to exchange multiple quantum messages. This, however, is not the case with UBQC which limits the quantum communication to one quantum message sent from Alice to Bob at the beginning of the protocol. Let us now briefly state the main ideas of that protocol.

Universal Blind Quantum Computation (UBQC)

In UBQC the objective is to not only hide the input (and output) from Bob, but also the circuit which will act on that inputFootnote 9 [36]. As in the previous case, Alice would like to delegate to Bob the application of some circuit \(\mathcal {C}\) on her input (which, for simplicity, we will assume is classical). This time, however, we view \(\mathcal {C}\) as an MBQC computation.Footnote 10 By considering some universal graph state, \({\left \vert {G}\right \rangle }\), such as the brickwork state (see Fig. 17), Alice can convert \(\mathcal {C}\) into a description of \({\left \vert {G}\right \rangle }\) (the graph G) along with the appropriate measurement angles for the qubits in the graph state. By the property of the universal graph state, the graph G would be the same for all circuits \(\mathcal {C^{\prime }}\) having the same number of gates as \(\mathcal {C}\). Hence, if she were to send this description to Bob, it would not reveal to him the circuit \(\mathcal {C}\), merely an upper bound on its size. It is, in fact, the measurement angles and the ordering of the measurements (known as flow) that uniquely characterise \(\mathcal {C}\) [46]. But the measurement angles are chosen assuming all qubits in the graph state were initially prepared in the \({\left \vert {+}\right \rangle }\) state. Since these are \(\mathsf {X}\textsf {Y}\)-plane measurements, as explained in Section A, the probabilities for the two possible outcomes depend only on the difference between the measurement angle and the preparation angle of the state, which is 0 in this case.Footnote 11 Suppose that each qubit, indexed i, in the cluster state, were instead prepared in the state \(\left \vert {+_{\theta _{i}}}\right \rangle \). Then, if the original measurement angle for qubit i was \(\phi _{i}\), to preserve the relative angles, the new value would be \(\phi _{i} + \theta _{i}\). If the values for \(\theta _{i}\) are chosen at random, then they effectively act as a one-time pad for the original measurement angles \(\phi _{i}\). This means that if Bob does not know the preparation angles of the qubits and were instructed to measure them at the updated angles \(\phi _{i} + \theta _{i}\), to him, these angles would be indistinguishable from random, irrespective of the values of \(\phi _{i}\). He would, however, learn the measurement outcomes of the MBQC computation. But there is a simple way to hide this information as well. One can flip the probabilities of the measurement outcomes for a particular state by performing a \(\pi \) rotation around the \(\mathsf {Z}\) axis. In other words, the updated measurement angles will be \(\delta _{i} = \phi _{i} + \theta _{i} + r_{i}\pi \), where \(r_{i}\) is sampled randomly from \(\{0, 1\}\).

To recap, UBQC works as follows:

  • (1) Alice chooses an input x and a quantum computation \(\mathcal {C}\) that she would like Bob to perform on \({\left \vert {x}\right \rangle }\).

  • (2) She converts x and \(\mathcal {C}\) into a pair \((G, \{\phi _{i}\}_{i})\), where \({\left \vert {G}\right \rangle }\) is an N-qubit universal graph state (with an established ordering for measuring the qubits), \(N = O(|\mathcal {C}|)\) and \(\{\phi _{i}\}_{i}\) is the set of computation angles allowing for the MBQC computation of \(\mathcal {C}\left \vert {x}\right \rangle \).

  • (3) She picks, uniformly at random, values \(\theta _{i}\), with i going from 1 to N, from the set \(\{0, \pi /4, 2\pi /4, ..., 7\pi /4\}\) as well as values \(r_{i}\) from the set \(\{0, 1\}\).

  • (4) She then prepares the states \({\left \vert {+_{\theta _{i}}}\right \rangle }\) and sends them to Bob, who is instructed to entangle them, using \(\mathsf {CZ}\) operations, according to the graph structure G.

  • (5) Alice then asks Bob to measure the qubits at the angles \(\delta _{i} = \phi ^{\prime }_{i} + \theta _{i} + r_{i} \pi \) and return the measurement outcomes to her. Here, \(\phi ^{\prime }_{i}\) is an updated version of \(\phi _{i}\) that incorporates corrections resulting from previous measurements, as in the description of MBQC given in Section A.

  • (6) After all the measurements have been performed, Alice undoes the \(r_{i}\) one-time padding of the measurement outcomes, thus recovering the true outcome of the computation.

The protocol is illustrated schematically in Fig. 3, reproduced from [47] (the variables \(b_{1}\), \(b_{2}\), \(b_{3}\) indicate measurement outcomes).

Fig. 3 Universal blind quantum computation

We can see that as long as Bob does not know the values of the \(\theta _{i}\) and \(r_{i}\) variables, the measurements he is asked to perform, as well as their outcomes, will appear totally random to him. The reason why Bob cannot learn the values of \(\theta _{i}\) and \(r_{i}\) from the qubits prepared by Alice is the fact that, in quantum mechanics, one cannot distinguish between non-orthogonal states. In fact, a subsequent paper by Dunjko and Kashefi shows that Alice can utilize any two states that are neither identical nor orthogonal in order to perform UBQC [48].
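To make step (5) of the protocol concrete, here is a minimal sketch of the only computation Alice performs during the classical interaction, namely the choice of the reported angle \(\delta _{i}\); the variable names and the sample value of \(\phi ^{\prime }_{i}\) are our own:

```python
import math
import random

ANGLES = [k * math.pi / 4 for k in range(8)]  # {0, pi/4, ..., 7pi/4}

def delta(phi_prime, theta, r):
    """Measurement angle delta_i = phi'_i + theta_i + r_i*pi sent to Bob."""
    return (phi_prime + theta + r * math.pi) % (2 * math.pi)

# Alice's secrets for a single qubit i of the graph state
theta = random.choice(ANGLES)   # preparation angle: hides phi'_i from Bob
r = random.randint(0, 1)        # hides the true measurement outcome
phi_prime = math.pi / 4         # corrected computation angle (illustrative)

print(delta(phi_prime, theta, r))
# Since phi'_i is itself a multiple of pi/4, for uniformly random theta and r
# the reported delta is uniform over the eight angles: Bob learns nothing.
```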

2 Prepare-and-Send Protocols

We start by reviewing \(\mathsf {QPIP}\) protocols in which the only quantum capability of the verifier is to prepare and send constant-size quantum states to the prover (no measurement). The verifier must use this capability in order to delegate the application of some \(\mathsf {BQP}\) circuit, \(\mathcal {C}\), on an input \({\left \vert {\psi }\right \rangle }\).Footnote 12 Through interaction with the prover, the verifier will attempt to certify that the correct circuit was indeed applied on her input, with high probability, aborting the protocol otherwise.

There are three major approaches that fit this description and we devote a subsection to each of them:

  1. Section 2.1: two protocols based on quantum authentication, developed by Aharonov et al. [25, 26].

  2. Section 2.2: a trap-based protocol, developed by Fitzsimons and Kashefi [27].

  3. Section 2.3: a scheme based on repeating indistinguishable runs of tests and computations, developed by Broadbent [28].

In the context of prepare-and-send protocols, it is useful to provide more refined notions of completeness and soundness than the ones in the definition of a \(\mathsf {QPIP}\) protocol. This is because, apart from knowing that the verifier wishes to delegate a \(\mathsf {BQP}\) computation to the prover, we also know that it prepares a particular quantum state and sends it to the prover, who is supposed to apply a unitary operation to it (corresponding to the quantum circuit associated with the \(\mathsf {BQP}\) computation). This extra information allows us to define \(\delta \)-correctness and \(\epsilon \)-verifiability. We start with the latter:

Definition 2 (\(\epsilon \)-verifiability)

Consider a delegated quantum computation protocol between a verifier and a prover and let the verifier’s quantum state be \(\left \vert {\psi }\right \rangle \left \vert {flag}\right \rangle \), where \(\left \vert {\psi }\right \rangle \) is the input state to the protocol and \({\left \vert {flag}\right \rangle }\) is a flag state denoting whether the verifier accepts (\({\left \vert {flag}\right \rangle } = {\left \vert {acc}\right \rangle }\)) or rejects (\({\left \vert {flag}\right \rangle } = {\left \vert {rej}\right \rangle }\)) at the end of the protocol. Consider also the quantum channel \(Enc_{s}\) (encoding), acting on the verifier’s state, where s denotes a private random string, sampled by the verifier from some distribution p(s). Let \(\mathcal {P}_{honest}\) denote the CPTP map corresponding to the honest action of the prover in the protocol (i.e. following the instructions of the verifier) acting on the verifier’s state. Additionally, define:

$$ P_{incorrect}^{s} = (I - {\left\vert{{\Psi}^{s}_{out}}\right\rangle}{\left\langle{{\Psi}^{s}_{out}}\right\vert}) \otimes {\left\vert{acc^{s}}\right\rangle}{\left\langle{acc^{s}}\right\vert} $$
(6)

as a projection onto the orthogonal complement of the correct output:

$$ {\left\vert{{\Psi}^{s}_{out}}\right\rangle}{\left\langle{{\Psi}^{s}_{out}}\right\vert} = Tr_{flag} (\mathcal{P}_{honest} (Enc_{s}({\left\vert{\psi}\right\rangle}{\left\langle{\psi}\right\vert} \otimes {\left\vert{acc}\right\rangle}{\left\langle{acc}\right\vert}))) $$
(7)

and on acceptance for the flag state:

$$ {\left\vert{acc^{s}}\right\rangle}{\left\langle{acc^{s}}\right\vert} = Tr_{input} (\mathcal{P}_{honest} (Enc_{s}({\left\vert{\psi}\right\rangle}{\left\langle{\psi}\right\vert} \otimes {\left\vert{acc}\right\rangle}{\left\langle{acc}\right\vert}))) $$
(8)

We say that such a protocol is \(\epsilon \)-verifiable (with \(0\leq \epsilon \leq 1\)) if, for any action \(\mathcal {P}\) of the prover, we have that:Footnote 13

$$ Tr \left( {\sum}_{s} p(s) P_{incorrect}^{s} \mathcal{P}(Enc_{s}({\left\vert{\psi}\right\rangle}{\left\langle{\psi}\right\vert} \otimes {\left\vert{acc}\right\rangle}{\left\langle{acc}\right\vert})) \right) \leq \epsilon $$
(9)

Essentially, this definition says that the probability of the protocol’s output being incorrect and the verifier accepting should be bounded by \(\epsilon \). As a simple mathematical statement, we would write this as a bound on the joint distribution:

$$ Pr(incorrect, accept ) \leq \epsilon $$
(10)

One could also ask whether \(Pr(incorrect | accept)\) should also be upper bounded. Indeed, it would seem like this conditional distribution is a better match for our intuition regarding the “probability of accepting an incorrect outcome”. However, giving a sensible upper bound for the conditional distribution can be problematic. To understand why, note that we can express the conditional distribution as:

$$ Pr(incorrect | accept ) = \frac{Pr(incorrect, accept )}{Pr(accept)} $$
(11)

Now, it is true that if \(Pr(accept)\) is close to 1 and the joint distribution is upper bounded, then the conditional distribution will also be upper bounded. Suppose, however, that \(Pr(accept) = 2^{-O(|\mathcal {C}|)}\). In other words, the probability of acceptance is exponentially small in the size of the delegated computation.Footnote 14 In this case, to upper bound the conditional distribution, it must be that the joint probability is also inverse-exponential in the size of the computation. But this is a highly unusual condition, for it would mean that the prover is more likely to deceive the verifier for smaller computations than for larger ones. Moreover, as we will see with the presented protocols, it is typical for the joint probability to be upper bounded by a quantity that is independent of the size of the computation. For this reason, approaches to verification will either bound \(Pr(incorrect, accept)\), or provide a bound for \(Pr(incorrect | accept)\) conditioned on the fact that \(Pr(accept)\) is close to 1 (see [18] for an example of this).
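To make this concrete, suppose (purely as an illustration) that \(Pr(accept) = 2^{-|\mathcal {C}|}\) while the joint probability obeys the bound in (10). The best conditional bound one can then infer is:

$$ Pr(incorrect | accept) = \frac{Pr(incorrect, accept)}{Pr(accept)} \leq \epsilon \cdot 2^{|\mathcal{C}|} $$

which is vacuous unless \(\epsilon \) itself shrinks exponentially with \(|\mathcal {C}|\).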

We now define \(\delta \)-correctness:

Definition 3 (\(\delta \)-correctness)

Consider a delegated quantum computation protocol between a verifier and a prover. Using the notation from Definition 2, and letting:

$$ P_{correct}^{s} = {\left\vert{{\Psi}^{s}_{out}}\right\rangle}{\left\langle{{\Psi}^{s}_{out}}\right\vert} \otimes {\left\vert{acc^{s}}\right\rangle}{\left\langle{acc^{s}}\right\vert} $$
(12)

be the projection onto the correct output and on acceptance for the flag state, we say that such a protocol is \(\delta \)-correct (with \(0\leq \delta \leq 1\)), if for all strings s we have that:

$$ Tr \left( P_{correct}^{s} \mathcal{P}_{honest}(Enc_{s}({\left\vert{\psi}\right\rangle}{\left\langle{\psi}\right\vert} \otimes {\left\vert{acc}\right\rangle}{\left\langle{acc}\right\vert})) \right) \geq \delta $$
(13)

This definition says that when the prover behaves honestly, the verifier obtains the correct outcome, with high probability, for any possible choice of its secret parameters.

If a prepare-and-send protocol has both \(\delta \)-correctness and \(\epsilon \)-verifiability, for some \(\delta > 0\), \(\epsilon < 1\), it will also have completeness \(\delta (1/2 + 1/poly(n))\) and soundness \(\epsilon \) as a \(\mathsf {QPIP}\) protocol, where n is the size of the input. The reason for the asymmetry in completeness and soundness is that in the definition of \(\delta \)-correctness we require that the output quantum state of the protocol is \(\delta \)-close to the output quantum state of the desired computation. But the computation outcome is dictated by a measurement of this state, which succeeds with probability at least \(1/2 + 1/poly(n)\), from the definition of \(\mathsf {BQP}\). Combining these facts leads to \(\delta (1/2 + 1/poly(n))\) completeness. It follows that for this to be a valid \(\mathsf {QPIP}\) protocol it must be that \(\delta (1/2 + 1/poly(n)) - \epsilon \geq 1/poly(n)\), for all inputs. For simplicity, we will instead require \(\delta /2 - \epsilon \geq 1/poly(n)\), which implies the previous inequality. As we will see, for all prepare-and-send protocols \(\delta = 1\). This condition is easy to achieve by simply designing the protocol so that the honest behaviour of the prover leads to the correct unitary being applied to the verifier’s quantum state. Therefore, the main challenge with these protocols will be to show that \(\epsilon \leq 1/2 - 1/poly(n)\).

2.1 Quantum Authentication-Based Verification

This subsection is dedicated to the two protocols presented in [25, 26] by Aharonov et al. These protocols are extensions of Quantum Authentication Schemes (QAS), a security primitive introduced in [52] by Barnum et al. A QAS is a scheme for transmitting a quantum state over an insecure quantum channel and being able to indicate whether the state was corrupted or not. More precisely, a QAS involves a sender and a receiver. The sender has some quantum state \({\left \vert {\psi }\right \rangle }{\left \vert {flag}\right \rangle }\) that it would like to send to the receiver over an insecure channel. The state \({\left \vert {\psi }\right \rangle }\) is the one to be authenticated, while \({\left \vert {flag}\right \rangle }\) is an indicator state used to check whether the authentication was performed successfully. We will assume that \({\left \vert {flag}\right \rangle }\) starts in the state \({\left \vert {acc}\right \rangle }\). It is also assumed that the sender and the receiver share some classical key k, drawn from a probability distribution \(p(k)\). To be able to detect the effects of the insecure channel on the state, the sender will first apply some encoding procedure \(Enc_{k}\), thus obtaining \(\rho = {\sum }_{k} p(k) Enc_{k}({\left \vert {\psi }\right \rangle }{\left \vert {acc}\right \rangle })\). This state is then sent over the quantum channel, where it can be tampered with by an eavesdropper, resulting in a new state \(\rho ^{\prime }\). The receiver will then apply a decoding procedure to this state, resulting in \(Dec_{k}(\rho ^{\prime })\), and decide whether to accept or reject by measuring the flag subsystem.Footnote 15 Similar to verification, this protocol must satisfy two properties:

  1. \(\delta \)-correctness. Intuitively, this says that if the state sent through the channel was not tampered with, then the receiver should accept with high probability (at least \(\delta \)), irrespective of the keys used. More formally, for \(0 \leq \delta \leq 1\), let:

    $$P_{correct} = {\left\vert{\psi}\right\rangle}{\left\langle{\psi}\right\vert} \otimes {\left\vert{acc}\right\rangle}{\left\langle{acc}\right\vert} $$

    be the projector onto the correct state \({\left \vert {\psi }\right \rangle }\) and on acceptance for the flag state. Then, it must be the case that for all keys k:

    $$Tr \left( P_{correct} Dec_{k}(Enc_{k}({\left\vert{\psi}\right\rangle}{\left\langle{\psi}\right\vert} \otimes {\left\vert{acc}\right\rangle}{\left\langle{acc}\right\vert} )) \right) \geq \delta $$
  2. \(\epsilon \)-security. This property states that for any deviation that the eavesdropper applies to the sent state, the probability that the resulting state is far from ideal and the receiver accepts is small. Formally, for \(0 \leq \epsilon \leq 1\), let:

    $$P_{incorrect} = (I - {\left\vert{\psi}\right\rangle}{\left\langle{\psi}\right\vert}) \otimes {\left\vert{acc}\right\rangle}{\left\langle{acc}\right\vert} $$

    be the projector onto the orthogonal complement of the correct state \({\left \vert {\psi }\right \rangle }\), and on acceptance, for the flag state. Then, it must be the case that for any CPTP action, \(\mathcal {E}\), of the eavesdropper, we have:

    $$Tr \left( P_{incorrect} {\sum}_{k} p(k) Dec_{k} (\mathcal{E}(Enc_{k}({\left\vert{\psi}\right\rangle}{\left\langle{\psi}\right\vert} \otimes {\left\vert{acc}\right\rangle}{\left\langle{acc}\right\vert} ))) \right) \leq \epsilon $$

To make the similarities between QAS and prepare-and-send protocols more explicit, suppose that, in the above scheme, the receiver were trying to authenticate the state \(U{\left \vert {\psi }\right \rangle }\) instead of \({\left \vert {\psi }\right \rangle }\), for some unitary U. In that case, we could view the sender as the verifier at the beginning of the protocol, the eavesdropper as the prover and the receiver as the verifier at the end of the protocol. This is illustrated in Fig. 4, reproduced from [47]. If one could therefore augment a QAS scheme with the ability of applying a quantum circuit on the state, while keeping it authenticated, then one would essentially have a prepare-and-send verification protocol. This is what is achieved by the two protocols of Aharonov et al. (Fig. 4).

Fig. 4 QAS-based verification

Clifford-QAS VQC

The first protocol, named Clifford QAS-based Verifiable Quantum Computing (Clifford-QAS VQC), is based on a QAS which uses Clifford operations in order to perform the encoding procedure. Strictly speaking, this protocol is not a prepare-and-send protocol, since, as we will see, it involves the verifier performing measurements as well. However, it is a precursor to the second protocol from [25, 26], which is a prepare-and-send protocol; this is why we review the Clifford-QAS VQC protocol here.

Let us start by explaining the authentication scheme first. As before, let \(\left \vert {\psi }\right \rangle {\left \vert {flag}\right \rangle }\) be the state that the sender wishes to send to the receiver and k be their shared random key. We will assume that \(\left \vert {\psi }\right \rangle \) is an n-qubit state, while \(\left \vert {flag}\right \rangle \) is an m-qubit state. Let \(t = n + m\) and let \(\mathfrak {C}_{t}\) be the set of t-qubit Clifford operations.Footnote 16 We also assume that each possible key, k, can specify a unique t-qubit Clifford operation, denoted \(C_{k}\).Footnote 17 The QAS works as follows:

  • (1) The sender performs the encoding procedure \(Enc_{k}\). This consists of applying the Clifford operation \(C_{k}\) to the state \({\left \vert {\psi }\right \rangle }{\left \vert {acc}\right \rangle }\).

  • (2) The state is sent through the quantum channel.

  • (3) The receiver applies the decoding procedure \(Dec_{k}\) which consists of applying \(C_{k}^{\dagger }\) to the received state.

  • (4) The receiver measures the flag subsystem and accepts if it is in the \({\left \vert {acc}\right \rangle }\) state.

We can see that this protocol has correctness \(\delta = 1\), since the sender’s and receiver’s operations are exact inverses of each other and, when there is no intervention from the eavesdropper, they will perfectly cancel out. It is also not too difficult to show that the protocol achieves security \(\epsilon = 2^{-m}\). We will include a sketch proof of this result, as all other proofs of security for prepare-and-send protocols rely on similar ideas. Aharonov et al. start from the following lemma:

Lemma 1 (Clifford twirl)

Let \(P_{1}\), \(P_{2}\) be two operators from the n-qubit Pauli group, such that \(P_{1} \neq P_{2}\).Footnote 18 For any n-qubit density matrix \(\rho \) it is the case that:

$$ {\sum}_{C \in \mathfrak{C}_{n}} C^{\dagger} P_{1} C \rho C^{\dagger} P_{2} C = 0 $$
(16)
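Before seeing how the lemma is used, here is a quick numerical sanity check for \(n = 1\) (a sketch with our own helper functions): it generates the 24 single-qubit Cliffords, modulo global phase, as products of \(\mathsf {H}\) and \(\mathsf {S}\), and confirms that the twirled sum vanishes for \(P_{1} = \mathsf {X}\), \(P_{2} = \mathsf {Z}\):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
S = np.diag([1, 1j])
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1, -1]).astype(complex)

def canonical(U):
    """Canonical form of U modulo global phase (used to deduplicate)."""
    j = np.flatnonzero(~np.isclose(U.flatten(), 0))[0]
    return tuple((U / U.flat[j]).round(8).flatten())

# Generate the 24 single-qubit Cliffords (mod phase) as products of H and S
cliffords = {canonical(np.eye(2, dtype=complex)): np.eye(2, dtype=complex)}
frontier = list(cliffords.values())
while frontier:
    new = []
    for U in frontier:
        for G in (H, S):
            V = G @ U
            key = canonical(V)
            if key not in cliffords:
                cliffords[key] = V
                new.append(V)
    frontier = new
assert len(cliffords) == 24

# Lemma 1 with P1 = X, P2 = Z, applied to a random density matrix rho
M = np.random.randn(2, 2) + 1j * np.random.randn(2, 2)
rho = M @ M.conj().T
rho /= np.trace(rho)
twirl = sum(C.conj().T @ X @ C @ rho @ C.conj().T @ Z @ C
            for C in cliffords.values())
print(np.allclose(twirl, 0))  # True
```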

To see how this lemma is applied, recall that any CPTP map admits a Kraus decomposition, so we can express the eavesdropper’s action as:

$$ \mathcal{E}(\rho) = {\sum}_{i} K_{i} \rho K_{i}^{\dagger} $$
(17)

where \(\{ K_{i} \}_{i}\) is the set of Kraus operators, satisfying:

$$ {\sum}_{i} K_{i}^{\dagger} K_{i} = I $$
(18)

Additionally, recall that the n-qubit Pauli group is a basis for all \(2^{n} \times 2^{n}\) matrices, which means that we can express each Kraus operator as:

$$ K_{i} = {\sum}_{j} \alpha_{ij} P_{j} $$
(19)

where j ranges over all indices for n-qubit Pauli operators and \(\{ \alpha _{ij} \}_{i,j}\) is a set of complex numbers such that:

$$ {\sum}_{ij} \alpha_{ij} \alpha^{*}_{ij} = 1 $$
(20)

For simplicity, assume that the phase information of each Pauli operator, i.e. whether it is \(+ 1\), \(-1\), \(+i\) or \(-i\), is absorbed in the \(\alpha _{ij}\) terms. One can then re-express the eavesdropper’s deviation as:

$$ \mathcal{E}(\rho) = {\sum}_{ijk} \alpha_{ij} \alpha^{*}_{ik} P_{j} \rho P_{k} $$
(21)
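As a sanity check of (17)–(21), the following sketch computes the Pauli coefficients \(\alpha _{ij}\) for the Kraus operators of a concrete channel (amplitude damping, with a damping parameter chosen arbitrarily for illustration) and verifies the normalization (20):

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1, -1]).astype(complex)

def pauli_basis(n):
    """All 4^n n-qubit Pauli operators (tensor products of I, X, Y, Z)."""
    ops = [np.eye(1, dtype=complex)]
    for _ in range(n):
        ops = [np.kron(P, Q) for P in ops for Q in (I2, X, Y, Z)]
    return ops

def pauli_coefficients(K, n):
    """Coefficients alpha_j with K = sum_j alpha_j P_j, as in Eq. (19)."""
    return np.array([np.trace(P @ K) / 2 ** n for P in pauli_basis(n)])

# Example: Kraus operators of the amplitude-damping channel (gamma = 0.3,
# chosen arbitrarily for this illustration)
g = 0.3
K0 = np.array([[1, 0], [0, np.sqrt(1 - g)]], dtype=complex)
K1 = np.array([[0, np.sqrt(g)], [0, 0]], dtype=complex)

alphas = [pauli_coefficients(K, 1) for K in (K0, K1)]
# Trace preservation, Eq. (18), implies the normalization of Eq. (20)
print(sum(np.sum(np.abs(a) ** 2) for a in alphas))  # 1.0
```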

We would now like to use Lemma 1 to see how this deviation affects the encoded state. Given that the encoding procedure involves applying a random Clifford operation to the initial state, which we will denote \(\left \vert {{\Psi }_{in}}\right \rangle = \left \vert {\psi }\right \rangle \left \vert {acc}\right \rangle \), the state received by the eavesdropper will be:

$$ \rho = \frac{1}{|\mathfrak{C}_{t}|} {\sum}_{l} C_{l} {\left\vert{{\Psi}_{in}}\right\rangle}{\left\langle{{\Psi}_{in}}\right\vert} C_{l}^{\dagger} $$
(22)

Acting with \(\mathcal {E}\) on this state and using (21) yields:

$$ \mathcal{E}(\rho) = \frac{1}{|\mathfrak{C}_{t}|} {\sum}_{ijkl} \alpha_{ij} \alpha^{*}_{ik} P_{j} C_{l} {\left\vert{{\Psi}_{in}}\right\rangle}{\left\langle{{\Psi}_{in}}\right\vert} C_{l}^{\dagger} P_{k} $$
(23)

The receiver takes this state and applies the decoding operation, which involves inverting the Clifford that was applied by the sender. This will produce the state:

$$ \frac{1}{|\mathfrak{C}_{t}|} {\sum}_{ijkl} \alpha_{ij} \alpha^{*}_{ik} C_{l}^{\dagger} P_{j} C_{l} {\left\vert{{\Psi}_{in}}\right\rangle}{\left\langle{{\Psi}_{in}}\right\vert} C_{l}^{\dagger} P_{k} C_{l} $$
(24)

Finally, using Lemma 1 we can see that all terms which act with different Pauli operations on both sides (i.e. \(j \neq k\)) will vanish, resulting in:

$$ \sigma = \frac{1}{|\mathfrak{C}_{t}|} {\sum}_{ijl} \alpha_{ij} \alpha^{*}_{ij} C_{l}^{\dagger} P_{j} C_{l} {\left\vert{{\Psi}_{in}}\right\rangle}{\left\langle{{\Psi}_{in}}\right\vert} C_{l}^{\dagger} P_{j} C_{l} $$
(25)

Let us take a step back and understand what happened. We saw that any general map can be expressed as a combination of Pauli operators acting on both sides of the target state, \(\rho \). Importantly, the Pauli operators on both sides need not be equal. However, if the target state is an equal mixture of Clifford terms acting on some other state (in our case \(\left \vert {{\Psi }_{in}}\right \rangle \left \langle {{\Psi }_{in}}\right \vert \)), which are then “undone” by the decoding procedure, the Clifford twirl lemma makes all non-equal Pauli terms vanish. In the resulting state, σ, we notice that each Pauli term is conjugated by Clifford operators from the set \(\mathfrak {C}_{t}\). We know that conjugating a Pauli matrix by a Clifford operator results in a new Pauli matrix. Moreover, we know that for all non-identity \(P_{j}\) it is the case that:

$$ \frac{1}{|\mathfrak{C}_{t}|} {\sum}_{l} C_{l}^{\dagger} P_{j} C_{l} \; \rho \; C_{l}^{\dagger} P_{j} C_{l} = \frac{1}{4^{t} - 1} {\sum}_{i, P_{i} \neq I} P_{i} \rho P_{i} $$
(26)

In other words, averaging over the Clifford group results in an equal mixture of all non-identity Pauli operations. From this, and since \(\alpha _{ij}\alpha ^{*}_{ij} = |\alpha _{ij}|^{2}\) is a positive real number and \({\sum }_{ij} \alpha _{ij}\alpha ^{*}_{ij} = 1\), the resulting state is a uniform convex combination of Pauli operators acting on the initial state. Mathematically, this means:

$$ \sigma = \beta {\left\vert{{\Psi}_{in}}\right\rangle}{\left\langle{{\Psi}_{in}}\right\vert} + \frac{1 - \beta}{4^{t} - 1} {\sum}_{i, P_{i} \neq I} P_{i} {\left\vert{{\Psi}_{in}}\right\rangle}{\left\langle{{\Psi}_{in}}\right\vert} P_{i} $$
(27)

where \(0 \leq \beta \leq 1\).

The last element in the proof is to compute \(Tr(P_{incorrect} \sigma )\). Since the first term in the mixture is the ideal state, we will be left with:

$$ Tr(P_{incorrect} \sigma) = \frac{1 - \beta}{4^{t} - 1} {\sum}_{i, P_{i} \neq I} Tr(P_{incorrect} P_{i} {\left\vert{{\Psi}_{in}}\right\rangle}{\left\langle{{\Psi}_{in}}\right\vert} P_{i}) $$
(28)

The terms in the summation will be non-zero whenever \(P_{i}\) acts trivially on the flag subsystem, i.e. whenever its flag part consists only of \(\mathsf {I}\) and \(\mathsf {Z}\) operators, since these leave the (computational-basis) \({\left \vert {acc}\right \rangle }\) state invariant. The number of such terms can be computed to be exactly \(4^{n} 2^{m} - 1\) and, using the fact that \(t = m + n\) and \(1 - \beta \leq 1\), we have:

$$ Tr(P_{incorrect} \sigma) \leq (1 - \beta) \frac{4^{n} 2^{m} - 1}{4^{m+n} - 1} \leq \frac{1}{2^{m}} $$
(29)

concluding the proof.

As mentioned, in all prepare-and-send protocols we assume that the verifier will prepare some state \({\left \vert {\psi }\right \rangle }\) on which it wants to apply a quantum circuit denoted \(\mathcal {C}\). Since we are assuming that the verifier has a constant-size quantum device, the state \({\left \vert {\psi }\right \rangle }\) will be a product state, i.e. \({\left \vert {\psi }\right \rangle } = {\left \vert {\psi _{1}}\right \rangle } \otimes {\left \vert {\psi _{2}}\right \rangle } \otimes ... \otimes {\left \vert {\psi _{n}}\right \rangle }\). For simplicity, assume each \({\left \vert {\psi _{i}}\right \rangle }\) is one qubit, though any constant number of qubits is allowed. In Clifford-QAS VQC the verifier will use the prover as an untrusted quantum storage device. Specifically, each \({\left \vert {\psi _{i}}\right \rangle }\), from \({\left \vert {\psi }\right \rangle }\), will be paired with a constant-size flag system in the accept state, \({\left \vert {acc}\right \rangle }\), resulting in a block of the form \({\left \vert {block_{i}}\right \rangle } = {\left \vert {\psi _{i}}\right \rangle }{\left \vert {acc}\right \rangle }\). Each block will be encoded by having a random Clifford operation applied on top of it. The verifier prepares these blocks, one at a time, for all \(i \in \{1, ..., n\}\), and sends them to the prover. The prover is then asked to return pairs of blocks to the verifier so that she may apply gates from \(\mathcal {C}\) on them (after undoing the Clifford operations). The verifier then applies new random Clifford operations on the blocks and sends them back to the prover. The process continues until all gates in \(\mathcal {C}\) have been applied.

But what if the prover corrupts the state or deviates in some way? This is where the QAS enters the picture. Since each block has a random Clifford operation applied, the idea is to have the verifier use the Clifford QAS scheme to ensure that the quantum state remains authenticated after each gate in the quantum circuit is applied. In other words, if the prover attempts to deviate at any point resulting in a corrupted state, this should be detected by the authentication scheme. Putting everything together, the protocol works as follows:

  • (1) Suppose the input state that the verifier intends to prepare is \({\left \vert {\psi }\right \rangle } = {\left \vert {\psi _{1}}\right \rangle } \otimes {\left \vert {\psi _{2}}\right \rangle } \otimes ... \otimes {\left \vert {\psi _{n}}\right \rangle }\), where each \(\left \vert {\psi _{i}}\right \rangle \) is a one-qubit state.Footnote 19 Also let \(\mathcal {C}\) be the quantum circuit that the verifier wishes to apply on \(\left \vert {\psi }\right \rangle \). The verifier prepares (one block at a time) the state \(\left \vert {\psi }\right \rangle \left \vert {flag}\right \rangle = \left \vert {block_{1}}\right \rangle \otimes \left \vert {block_{2}}\right \rangle \otimes ... \otimes \left \vert {block_{n}}\right \rangle \), where \(\left \vert {block_{i}}\right \rangle = \left \vert {\psi _{i}}\right \rangle \left \vert {acc}\right \rangle \) and each \(\left \vert {acc}\right \rangle \) state consists of a constant number m of qubits. Additionally, let the size of each block be \(t = m + 1\).

  • (2) The verifier applies a random Clifford operation, from the set \(\mathfrak {C}_{t}\) on each block and sends it to the prover.

  • (3) The verifier requests a pair of blocks, \(({\left \vert {block_{i}}\right \rangle }, {\left \vert {block_{j}}\right \rangle })\), from the prover, in order to apply a gate from \(\mathcal {C}\) on the corresponding qubits, \((\left \vert {\psi _{i}}\right \rangle , \left \vert {\psi _{j}}\right \rangle )\). Once the blocks have been received, the verifier undoes the random Clifford operations and measures the flag registers, aborting if these are not in the \(\left \vert {acc}\right \rangle \) state. Otherwise, the verifier performs the gate from \(\mathcal {C}\), applies new random Clifford operations on each block and sends them back to the prover. This step repeats until all gates in \(\mathcal {C}\) have been performed.

  • (4) Once all gates have been performed, the verifier requests all the blocks (one by one) in order to measure the output. As in the previous step, the verifier will undo the Clifford operations first and measure the flag registers, aborting if any of them are not in the \(\left \vert {acc}\right \rangle \) state.

We can see that the security of this protocol reduces to the security of the Clifford QAS. Moreover, it is also clear that if the prover behaves honestly, then the verifier will obtain the correct output state exactly. Hence:

Theorem 1

For a fixed constant \(m > 0\), Clifford-QAS VQC is a prepare-and-send \(\mathsf {QPIP}\) protocol having correctness \(\delta = 1\) and verifiability \(\epsilon = 2^{-m}\).

Poly-QAS VQC

The second protocol in [25, 26] is referred to as Polynomial QAS-based Verifiable Quantum Computing (Poly-QAS VQC). It improves upon the previous protocol by removing the interactive quantum communication between the verifier and the prover, reducing it to a single round of quantum messages sent at the beginning of the protocol. To encode the input, this protocol uses a specific type of quantum error correcting code known as a polynomial CSS code [53]. We will not elaborate on the technical details of these codes as that is beyond the scope of this review. We only mention a few basic characteristics which are necessary in order to understand the Poly-QAS VQC protocol. The polynomial CSS codes operate on qudits instead of qubits. A q-qudit is simply a quantum state in a q-dimensional Hilbert space. The generalized computational basis for this space is given by \(\{ {\left \vert {i}\right \rangle } \}_{0 \leq i < q}\). The code takes a q-qudit, \({\left \vert {i}\right \rangle }\), as well as \({\left \vert {0}\right \rangle }\) states, and encodes them into a state of \(t = 2d + 1\) qudits as follows:

$$ E {\left\vert{i}\right\rangle}{\left\vert{0}\right\rangle}^{\otimes t - 1} = {\sum}_{p, deg(p) \leq d, p(0) = i} {\left\vert{p(\alpha_{1})}\right\rangle}{\left\vert{p(\alpha_{2})}\right\rangle} ... {\left\vert{p(\alpha_{t})}\right\rangle} $$
(30)

where E is the encoding unitary, p ranges over polynomials of degree at most d over the field \(F_{q}\) of integers \(mod \; q\), and \(\{ \alpha _{j} \}_{j \leq t}\) is a fixed set of t distinct non-zero values from \(F_{q}\) (it is assumed that \(q > t\)). The code can detect errors on at most d qudits and can correct errors on up to \(\lfloor \frac {d}{2} \rfloor \) qudits (hence \(\lfloor \frac {d}{2} \rfloor \) is the weight of the code). Importantly, the code is transversal for Clifford operations. Aharonov et al. consider a slight variation of this scheme, called a signed polynomial code, which allows one to randomize over different polynomial codes. The idea is to have the encoding (and decoding) procedure also depend on a key \(k \in \{-1, + 1\}^{t}\) as follows:

$$ E_{k} {\left\vert{i}\right\rangle}{\left\vert{0}\right\rangle}^{\otimes t - 1} = {\sum}_{p, deg(p) \leq d, p(0) = i} {\left\vert{k_{1} p(\alpha_{1})}\right\rangle}{\left\vert{k_{2} p(\alpha_{2})}\right\rangle} ... {\left\vert{k_{t} p(\alpha_{t})}\right\rangle} $$
(31)
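A small classical sketch can help visualize the codewords appearing in (31). With toy parameters of our own choosing (and taking the constraint in the sum to be \(p(0) = i\)), it enumerates the basis states in the superposition and shows that flipping a single sign in the key changes the code:

```python
from itertools import product

q, d = 5, 1                  # toy parameters (assumed): field F_5, degree <= 1
t = 2 * d + 1                # t = 3 qudits per encoded qudit
alphas = [1, 2, 3]           # t distinct non-zero points of F_q

def codewords(i, key):
    """Computational-basis terms |k_1 p(a_1)> ... |k_t p(a_t)> appearing in
    the superposition E_k|i>|0>^{t-1} of Eq. (31), with p(0) = i."""
    terms = set()
    for coeffs in product(range(q), repeat=d):
        def p(x):
            return (i + sum(c * x ** (j + 1)
                            for j, c in enumerate(coeffs))) % q
        terms.add(tuple((kj * p(a)) % q for kj, a in zip(key, alphas)))
    return terms

plain = codewords(2, (1, 1, 1))      # unsigned polynomial code
signed = codewords(2, (1, -1, 1))    # one flipped sign in the key
print(len(plain))                    # q^d = 5 terms in the superposition
print(plain == signed)               # False: the sign key changes the code
```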

The signed polynomial CSS code can be used to create a simple authentication scheme having security \(\epsilon = 2^{-d}\). This works by having the sender encode the state \({\left \vert {{\Psi }_{in}}\right \rangle } = {\left \vert {\psi }\right \rangle }{\left \vert {0}\right \rangle }^{\otimes t - 1}\), where \({\left \vert {\psi }\right \rangle }\) is a qudit to be authenticated, in the signed code and then one-time padding the encoded state. Note that the \({\left \vert {0}\right \rangle }^{\otimes t - 1}\) part of the state is acting as a flag system. We are assuming that the sender and the receiver share both the sign key of the code and the key for the one-time padding. The one-time padded state is then sent over the insecure channel. The receiver undoes the pad and applies the inverse of the encoding operation. It then measures the last \(t - 1\) qudits, accepting if and only if they are all in the \({\left \vert {0}\right \rangle }\) state. Proving security is similar to the Clifford QAS and relies on two results:

Lemma 2 (Pauli twirl)

Let \(P_{1}\), \(P_{2}\) be two operators from the n-qudit Pauli group, denoted \(\mathcal{P}_{n}\), such that \(P_{1} \neq P_{2}\). For any n-qudit density matrix \(\rho \) it is the case that:

$$ {\sum}_{Q \in \mathcal{P}_{n}} Q^{\dagger} P_{1} Q \; \rho \; Q^{\dagger} P_{2}^{\dagger} Q = 0 $$
(32)

This result is identical to the Clifford twirl lemma, except the Clifford operations are replaced with Pauli operators.Footnote 20 The result is also valid for qubits.
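Since the lemma is stated abstractly, a quick numerical sanity check may help. The following sketch verifies the single-qubit case of the Pauli twirl by brute force (the variable names and the choice of a random pure-state density matrix are ours):

```python
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1, -1]).astype(complex)
Y = 1j * X @ Z
paulis = [I, X, Y, Z]  # the single-qubit Pauli group, up to phases

rng = np.random.default_rng(1)
v = rng.normal(size=2) + 1j * rng.normal(size=2)
rho = np.outer(v, v.conj()); rho /= np.trace(rho)  # a random density matrix

for a, P1 in enumerate(paulis):
    for b, P2 in enumerate(paulis):
        if a == b:
            continue
        twirl = sum(Q.conj().T @ P1 @ Q @ rho @ Q.conj().T @ P2.conj().T @ Q
                    for Q in paulis)
        assert np.allclose(twirl, 0)  # cross terms vanish for P1 != P2
print("Pauli twirl verified on one qubit")
```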

Lemma 3 (Signed polynomial code security)

Let \(\rho = {\left \vert {\psi }\right \rangle }{\left \langle {\psi }\right \vert } \otimes ({\left \vert {0}\right \rangle }{\left \langle {0}\right \vert })^{\otimes t - 1}\) be a state which will be encoded in the signed polynomial code, let \(P = (I - {\left \vert {\psi }\right \rangle }{\left \langle {\psi }\right \vert }) \otimes ({\left \vert {0}\right \rangle }{\left \langle {0}\right \vert })^{\otimes t - 1}\) be the projector onto the orthogonal complement of \({\left \vert {\psi }\right \rangle }\) together with the flag system being in the state \({\left \vert {0}\right \rangle }^{\otimes t-1}\), and let Q be a non-identity Pauli operation on t qudits. Then it is the case that:

$$ \frac{1}{2^{t}} {\sum}_{k \in \{-1, + 1\}^{t}} Tr \left( P \;\; E_{k}^{\dagger} Q E_{k} \; \rho \; E_{k}^{\dagger} Q^{\dagger} E_{k} \right) \leq \frac{1}{2^{t-1}} $$
(33)

Using these two results, and the ideas from the Clifford QAS scheme, it is not difficult to prove the security of the above described authentication scheme. As before, the eavesdropper’s map is decomposed into Kraus operators which are then expanded into Pauli operations. Since the sender’s state is one-time padded (and the receiver will undo the one-time pad), the Pauli twirl lemma will turn the eavesdropper’s deviation into a convex combination of Pauli deviations acting on the encoded state:

$$ {\sum}_{Q} \beta_{Q} \; Q \, E_{k} {\left\vert{{\Psi}_{in}}\right\rangle}{\left\langle{{\Psi}_{in}}\right\vert} E_{k}^{\dagger} \, Q^{\dagger} $$
(34)
which can be split into the identity and non-identity Pauli terms:

$$ \beta_{I} \; E_{k} {\left\vert{{\Psi}_{in}}\right\rangle}{\left\langle{{\Psi}_{in}}\right\vert} E_{k}^{\dagger} + {\sum}_{Q \neq I} \beta_{Q} \; Q \, E_{k} {\left\vert{{\Psi}_{in}}\right\rangle}{\left\langle{{\Psi}_{in}}\right\vert} E_{k}^{\dagger} \, Q^{\dagger} $$
(35)
where \(\beta _{Q}\) are positive real coefficients satisfying:

$$ \beta_{I} + {\sum}_{Q \neq I} \beta_{Q} = 1 $$
(36)
The receiver takes this state and applies the inverse encoding operation which, averaged over the sign keys (these being unknown to the eavesdropper), results in:

$$ \rho = \beta_{I} {\left\vert{{\Psi}_{in}}\right\rangle}{\left\langle{{\Psi}_{in}}\right\vert} + \frac{1}{2^{t}} {\sum}_{k \in \{-1, + 1\}^{t}} {\sum}_{Q \neq I} \beta_{Q} \; E_{k}^{\dagger} Q E_{k} {\left\vert{{\Psi}_{in}}\right\rangle}{\left\langle{{\Psi}_{in}}\right\vert} E_{k}^{\dagger} Q^{\dagger} E_{k} $$
(37)
But now we know that \(\epsilon = Tr(P_{incorrect} \rho )\), and using Lemma 3 together with the facts that \(Tr(P_{incorrect} {\left \vert {{\Psi }_{in}}\right \rangle }{\left \langle {{\Psi }_{in}}\right \vert }) = 0\) and that the \(\beta _{Q}\) coefficients sum to 1 we end up with:

$$ \epsilon \leq \frac{1}{2^{t-1}} \leq \frac{1}{2^{d}} $$
(38)

There are two more aspects to be mentioned before giving the steps of the Poly-QAS VQC protocol. The first is that the encoding procedure for the signed polynomial code is implemented using the following interpolation operation:

$$ D_{k} {\left\vert{i}\right\rangle} {\left\vert{k_{2} p(\alpha_{2})}\right\rangle} ... {\left\vert{k_{d + 1} p(\alpha_{d + 1})}\right\rangle} {\left\vert{0}\right\rangle}^{\otimes d} = {\left\vert{k_{1} p(\alpha_{1})}\right\rangle} ... {\left\vert{k_{t} p(\alpha_{t})}\right\rangle} $$
(39)

The inverse operation \(D_{k}^{\dagger }\) can be thought of as a decoding of one term from the superposition in (31). Akin to Lemma 3, the signed polynomial code has the property that, when averaging over all sign keys, k, if such a term had a non-identity Pauli applied to it, then upon decoding it with \(D_{k}^{\dagger }\), the probability that its last d qudits are all \(\left \vert {0}\right \rangle \) states (i.e. that the deviation goes undetected) is upper bounded by \(2^{-d}\).

The second aspect is that, as mentioned, the signed polynomial code is transversal for Clifford operations. However, in order to apply non-Clifford operations it is necessary to measure encoded states together with so-called magic states (which will also be encoded). This manner of performing gates is known as gate teleportation [54]. The target state, on which we want to apply a non-Clifford operation, and the magic state are first entangled using a Clifford operation and then the magic state is measured in the computational basis. The effect of the measurement is to have a non-Clifford operation applied on the target state, along with Pauli errors which depend on the measurement outcome. For the non-Clifford operations, Aharonov et al. use Toffoli gates.Footnote 21
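Gate teleportation is easiest to see in the simpler, single-qubit setting of a \(\mathsf{T}\) gate performed via the magic state \(\mathsf{T}{\left\vert{+}\right\rangle}\); this is only the qubit analogue of the qudit Toffoli construction used here, and the variable names below are ours. The sketch checks that, for both measurement outcomes, applying the outcome-dependent Clifford correction yields \(\mathsf{T}{\left\vert{\psi}\right\rangle}\):

```python
import numpy as np

T = np.diag([1, np.exp(1j * np.pi / 4)])
S = np.diag([1, 1j])
X = np.array([[0, 1], [1, 0]], dtype=complex)

magic = T @ (np.ones(2, dtype=complex) / np.sqrt(2))  # magic state T|+>

rng = np.random.default_rng(2)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)                            # random data qubit

# entangle magic (qubit 0, control) with data (qubit 1, target) via CNOT
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
state = CNOT @ np.kron(magic, psi)

for m in (0, 1):  # the two possible outcomes of measuring the data qubit
    out = state.reshape(2, 2)[:, m]        # post-measurement ancilla state
    out /= np.linalg.norm(out)
    if m == 1:
        out = X @ S.conj().T @ out         # outcome-dependent Clifford fix
    overlap = abs(np.vdot(T @ psi, out))   # compare with T|psi>, up to phase
    print(m, np.isclose(overlap, 1.0))     # True for both outcomes
```

The outcome-dependent correction is exactly the kind of "errors which depend on the measurement outcome" mentioned above; in the protocol the verifier, who knows the keys, tells the prover which corrections to apply.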

Given all of these, the Poly-QAS VQC protocol works as follows:

  • (1) Suppose the input state that the verifier intends to prepare is \({\left \vert {\psi }\right \rangle } = {\left \vert {\psi _{1}}\right \rangle } \otimes {\left \vert {\psi _{2}}\right \rangle } \otimes ... \otimes {\left \vert {\psi _{n}}\right \rangle }\), where each \(\left \vert {\psi _{i}}\right \rangle \) is a q-qudit. Also suppose that the verifier wishes to apply the quantum circuit \(\mathcal {C}\) on \(\left \vert {\psi }\right \rangle \), which contains L Toffoli gates. The verifier prepares the state \(\left \vert {{\Psi }_{in}}\right \rangle = \left \vert {\psi _{1}}\right \rangle \left \vert {0}\right \rangle ^{t-1} \otimes \left \vert {\psi _{2}}\right \rangle \left \vert {0}\right \rangle ^{t-1} \otimes ... \otimes \left \vert {\psi _{n}}\right \rangle \left \vert {0}\right \rangle ^{t-1} \otimes \left \vert {M_{1}}\right \rangle \left \vert {0}\right \rangle ^{3t-3} \otimes ... \otimes \left \vert {M_{L}}\right \rangle \left \vert {0}\right \rangle ^{3t-3}\), where \(t = 2d + 1\) and each \(\left \vert {M_{i}}\right \rangle \) is a 3-qudit magic state, used for performing Toffoli gates. Groups of t qudits will comprise a block as follows. The first n blocks are simply \(\left \vert {block_{i}}\right \rangle =\left \vert {\psi _{i}}\right \rangle \left \vert {0}\right \rangle ^{t-1}\), with \(i \in \{1, ..., n\}\). Next, we have the states of the form \(\left \vert {M_{i}}\right \rangle \left \vert {0}\right \rangle ^{3t-3}\), each of which consists of 3 blocks. Each block, from such a state, will consist of one qudit from \(\left \vert {M_{i}}\right \rangle \) and a \(\left \vert {0}\right \rangle ^{t-1}\) state. Note that we can no longer represent these blocks as pure states, since the 3 qudits of a \(\left \vert {M_{i}}\right \rangle \) state are entangled. So, to summarize, each block contains one qudit from either the state \(\left \vert {\psi }\right \rangle \) or a magic state \(\left \vert {M_{i}}\right \rangle \), together with a flag system, \(\left \vert {0}\right \rangle ^{t-1}\).

  • (2) The verifier encodes each block in a signed polynomial code with a randomly chosen key \(k \in \{-1, + 1\}^{t}\) (the same key for each block) and then quantum one-time pads each block (using different keys for the padding of each block). The blocks are prepared and encoded in sequence (the verifier has the ability to process 3 blocks, or \(3t\) qudits, at a time) and then sent to the prover.

  • (3) When applying Clifford operations, the verifier simply asks the prover to apply the gates in a transversal fashion. Since Clifford operations normalise Pauli operators, the verifier then updates the one-time pad keys similarly to Childs’ protocol (see Section 1.1); a sketch of these key-update rules is given after the protocol steps.

  • (4) When applying a Toffoli gate, the verifier asks the prover to measure 3 blocks, comprising a magic state, in the computational basis and report the measurement outcomes. It is assumed that the magic state was entangled, using a Clifford operation, with 3 target blocks on which the Toffoli gate is to be applied. The verifier undoes the (classical) one-time padding of the measurement outcomes and expects each of the 3 groups of measurement outcomes (associated with each of the 3 blocks) to be of the form \([ k_{1} p(\alpha _{1}), ..., k_{t} p(\alpha _{t})]\). The verifier then takes these classical strings and turns them into states of the form \(\left \vert {\phi }\right \rangle = \left \vert {k_{1} p(\alpha _{1})}\right \rangle ... \left \vert {k_{t} p(\alpha _{t})}\right \rangle \) (using her constant-sized quantum computer).Footnote 22 She then applies \(D_{k}^{\dagger }\) on each of these \(\left \vert {\phi }\right \rangle \) states and checks that the last d qudits, of each state, are \(\left \vert {0}\right \rangle \), aborting otherwise. Assuming no abort has occurred, the verifier instructs the prover to perform the appropriate Pauli corrections resulting from the gate teleportation.

  • (5) Once all gates have been performed, the verifier instructs the prover to measure all blocks in the computational basis. As in step 4, the verifier will then de-one-time pad the outcomes, apply \(D_{k}^{\dagger }\) to each state of the form \(\left \vert {\phi }\right \rangle \) (prepared from these outcomes), and check that the last d qudits are \(\left \vert {0}\right \rangle \), aborting otherwise.
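To illustrate the classical bookkeeping in step 3, here is a minimal sketch of pad-key updates under Clifford gates. The rules are stated for qubits rather than the qudits used in this protocol, the helper name is ours, and two-qubit gates such as \(\mathsf{CNOT}\) (which act on pairs of keys) are omitted:

```python
def update_pad_keys(gate, a, b):
    """Track one-time-pad keys through a Clifford gate G, using
    G X^a Z^b = X^a' Z^b' G (up to a global phase)."""
    if gate == "H":          # H X = Z H and H Z = X H: the keys swap
        return b, a
    if gate == "S":          # S X = (iXZ) S: the Z key picks up a
        return a, a ^ b
    if gate in ("X", "Z"):   # Paulis change the pad only by a phase
        return a, b
    raise ValueError(f"unknown gate: {gate}")

# example: a Hadamard followed by an S gate on a qubit padded with X^1 Z^0
a, b = 1, 0
for g in ["H", "S"]:
    a, b = update_pad_keys(g, a, b)
print(a, b)  # (0, 1): the new keys the verifier stores
```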

The protocol is schematically illustrated in Fig. 5.

Fig. 5 Poly-QAS VQC

As with the previous protocol, the security is based on the security of the authentication scheme. However, there is a significant difference. In the Clifford-QAS VQC protocol, one could always assume that the state received by the verifier was the correctly encoded state with a deviation on top that was independent of this encoding. However, in the Poly-QAS VQC protocol, the quantum state is never returned to the verifier and, moreover, the prover’s instructed actions on this state are adaptive based on the responses of the verifier. Since the prover is free to deviate at any point throughout the protocol, if we try to commute all of his deviations to the end (i.e. view the output state as the correct state resulting from an honest run of the protocol, with a deviation on top that is independent of the secret parameters), we find that the output state will have a deviation on top which depends on the verifier’s responses. Since the verifier’s responses depend on the secret keys, we cannot directly use the security of the authentication scheme to prove that the protocol is \(2^{-d}\)-verifiable.

The solution, as explained in [26], is to consider the state of the entire protocol comprising the prover’s system, the verifier’s system and the transcript of all classical messages exchanged during the protocol. For a fixed interaction transcript, the prover’s attacks can be commuted to the end of the protocol. This is because, if the transcript is fixed, there is no dependency of the prover’s operations on the verifier’s messages. We simply view all of his operations as unitaries acting on the joint system of his private memory, the input quantum state and the transcript. One can then use Lemma 2 and Lemma 3 to bound the projection of this state onto the incorrect subspace with acceptance. The whole state, however, will be a mixture of all possible interaction transcripts, but since each term is bounded and the probabilities of the terms in the mixture must add up to one, it follows that the protocol is \(2^{-d}\)-verifiable:

Theorem 2

For a fixed constant \(d > 0\), Poly-QAS VQC is a prepare-and-send \(\mathsf {QPIP}\) protocol having correctness \(\delta = 1\) and verifiability \(\epsilon = 2^{-d}\).

Let us briefly summarize the two protocols in terms of the verifier’s resources. In both protocols, if one fixes the security parameter, \(\epsilon \), the verifier must have an \(O(\log(1/\epsilon))\)-size quantum computer. Additionally, both protocols are interactive with the total amount of communication (number of messages times the size of each message) being upper bounded by \(O(|\mathcal {C}| \cdot \log(1/\epsilon ))\), where \(\mathcal {C}\) is the quantum circuit to be performed.Footnote 23 However, in Clifford-QAS VQC, this communication is quantum whereas in Poly-QAS VQC only one quantum message is sent at the beginning of the protocol and the rest of the interaction is classical.

Before ending this subsection, we also mention the result of Broadbent et al. from [55]. This result generalises the use of quantum authentication codes for achieving verification of delegated quantum computation (not limited to decision problems). Moreover, the authors prove the security of these schemes in the universal composability framework, which allows for secure composition of cryptographic protocols and primitives [56].

2.2 Trap-Based Verification

In this subsection we discuss Verifiable Universal Blind Quantum Computing (VUBQC), which was developed by Fitzsimons and Kashefi in [27]. The protocol is written in the language of MBQC and relies on two essential ideas. The first is that an MBQC computation can be performed blindly, using UBQC, as described in Section 1.1. The second is the idea of embedding checks or traps in a computation in order to verify that it was performed correctly. Blindness will ensure that these checks remain hidden and so any deviation by the prover will have a high chance of triggering a trap. Notice that this is similar to the QAS-based approaches where the input state has a flag subsystem appended to it in order to detect deviations and the whole state has been encoded in some way so as to hide the input and the flag subsystem. This will lead to a similar proof of security. However, as we will see, the differences arising from using MBQC and UBQC lead to a reduction in the quantum resources of the verifier. In particular, in VUBQC the verifier requires only the ability to prepare single qubit states, which will be sent to the prover, in contrast to the QAS-based protocols which required the verifier to have a constant-size quantum computer.

Recall the main steps for performing UBQC. The client, Alice, sends qubits of the form \({\left \vert {+_{\theta _{i}}}\right \rangle }\) to Bob, the server, and instructs him to entangle them according to a graph structure, G, corresponding to some universal graph state. She then asks him to measure qubits in this graph state at angles \(\delta _{i} = \phi ^{\prime }_{i} + \theta _{i} + r_{i} \pi \), where \(\phi ^{\prime }_{i}\) is the corrected computation angle and \(r_{i} \pi \) acts as a random \(\mathsf {Z}\) operation which flips the measurement outcome. Alice will use the measurement outcomes, denoted \(b_{i}\), provided by Bob to update the computation angles for future measurements. Throughout the protocol, Bob’s perspective is that the states, measurements and measurement outcomes are indistinguishable from random. Once all measurements have been performed, Alice will undo the \(r_{i}\) padding of the final outcomes and recover her output. Of course, UBQC does not provide any guarantee that the output she gets is the correct one, since Bob could have deviated from her instructions.
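Before introducing the traps, it is worth seeing why UBQC hides the computation angles: given uniformly random \(\theta_i\) and \(r_i\), the reported angle \(\delta_i\) is uniformly distributed regardless of \(\phi'_i\). A quick numerical sketch of this fact, with sample values and names of our own choosing:

```python
import numpy as np
from collections import Counter

angles = [k * np.pi / 4 for k in range(8)]  # the 8 allowed angles
rng = np.random.default_rng(3)

phi = np.pi / 4          # some fixed corrected computation angle phi'_i
counts = Counter()
for _ in range(80000):
    theta = rng.choice(angles)              # Alice's secret theta_i
    r = rng.integers(2)                     # Alice's secret r_i
    delta = (phi + theta + r * np.pi) % (2 * np.pi)
    counts[int(round(delta / (np.pi / 4))) % 8] += 1
print(sorted(counts.values()))  # ~10000 each: delta reveals nothing about phi
```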

Transitioning to VUBQC, we will identify Alice as the verifier and Bob as the prover. To augment UBQC with the ability to detect malicious behaviour on the prover’s part, the verifier will introduce traps in the computation. How will she do this? Recall that the qubits which will comprise \({\left \vert {G}\right \rangle }\) need to be entangled with the \(\mathsf {CZ}\) operation. Of course, for \(\mathsf{XY}\)-plane states \(\mathsf {CZ}\) does indeed entangle the states. However, if either qubit, on which \(\mathsf {CZ}\) acts, is \({\left \vert {0}\right \rangle }\) or \({\left \vert {1}\right \rangle }\), then no entanglement is created. So suppose that we have a \({\left \vert {+_{\theta }}\right \rangle }\) qubit whose neighbours, according to G, are computational basis states. Then, this qubit will remain disentangled from the rest of the qubits in \({\left \vert {G}\right \rangle }\). This means that if the qubit is measured at its preparation angle, the outcome will be deterministic. The verifier can exploit this fact to certify that the prover is performing the correct measurements. Such states are referred to as trap qubits, whereas the \({\left \vert {0}\right \rangle }\), \({\left \vert {1}\right \rangle }\) neighbours are referred to as dummy qubits. Importantly, as long as G’s structure remains that of a universal graph stateFootnote 24 and as long as the dummy qubits and the traps are chosen at random, adding these extra states as part of the UBQC computation will not affect the blindness of the protocol. The implication of this is that the prover will be completely unaware of the positions of the traps and dummies. The traps effectively play a role that is similar to that of the flag subsystem in the authentication-based protocols. The dummies, on the other hand, are there to ensure that the traps do not get entangled with the rest of the qubits in the graph state. They also serve another purpose. When a dummy is in a \({\left \vert {1}\right \rangle }\) state, and a \(\mathsf {CZ}\) acts on it and a trap qubit, in the state \({\left \vert {+_{\theta }}\right \rangle }\), the effect is to “flip” the trap to \({\left \vert {-_{\theta }}\right \rangle }\) (alternatively \({\left \vert {-_{\theta }}\right \rangle }\) would have been flipped to \({\left \vert {+_{\theta }}\right \rangle }\)). This means that if the trap is measured at its preparation angle, \(\theta \), the measurement outcome will also be flipped, with respect to the initial preparation. Conversely, if the dummy was initially in the state \({\left \vert {0}\right \rangle }\), then no flip occurs. Traps and dummies, therefore, serve to also certify that the prover is performing the \(\mathsf {CZ}\) operations correctly. Thus, by using the traps (and the dummies), the verifier can check both the prover’s measurements and his entangling operations and hence verify his MBQC computation.
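The following minimal sketch (with our own sample angle) checks both trap behaviours at once: a \(\mathsf{CZ}\) with a \({\left\vert{0}\right\rangle}\) dummy leaves the trap in \({\left\vert{+_{\theta}}\right\rangle}\), while a \({\left\vert{1}\right\rangle}\) dummy flips it to \({\left\vert{-_{\theta}}\right\rangle}\), so in either case measuring at the preparation angle is deterministic:

```python
import numpy as np

CZ = np.diag([1, 1, 1, -1]).astype(complex)
theta = 3 * np.pi / 4                                      # a sample trap angle
plus_t = np.array([1,  np.exp(1j * theta)]) / np.sqrt(2)   # |+_theta>
minus_t = np.array([1, -np.exp(1j * theta)]) / np.sqrt(2)  # |-_theta>
zero, one = np.eye(2, dtype=complex)

for dummy, expected in ((zero, plus_t), (one, minus_t)):
    state = CZ @ np.kron(plus_t, dummy)   # "entangle" trap with its dummy
    # the result is still a product state: the trap is untouched for a |0>
    # dummy and flipped to |-_theta> for a |1> dummy
    overlap = abs(np.vdot(np.kron(expected, dummy), state))
    print(np.isclose(overlap, 1.0))       # True in both cases
```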

We are now ready to present the steps of VUBQC:

  • (1) The verifier chooses an input x and a quantum computation \(\mathcal {C}\) that she would like the prover to perform on \({\left \vert {x}\right \rangle }\).Footnote 25

  • (2) She converts x and \(\mathcal {C}\) into a pair \((G, \{\phi _{i}\}_{i})\), where \({\left \vert {G}\right \rangle }\) is an N-qubit universal graph state (with an established ordering for measuring the qubits), which admits an embedding of T traps and D dummies. We therefore have that \(N = T + D + Q\), where \(Q = O(|\mathcal {C}|)\) is the number of computation qubits used for performing \(\mathcal {C}\) and \(\{\phi _{i}\}_{i \leq Q}\) is the associated set of computation angles.Footnote 26

  • (3) Alice picks, uniformly at random, values \(\theta _{i}\), with i going from 1 to \(T+Q\), from the set \(\{0, \pi /4, 2\pi /4, ... 7\pi /4\}\) as well as values \(r_{i}\) from the set \(\{0, 1\}\) for the trap and computation qubits.

  • (4) She then prepares the \(T+Q\) states \({\left \vert {+_{\theta _{i}}}\right \rangle }\), as well as D dummy qubits which are states chosen at random from \(\{ {\left \vert {0}\right \rangle }, {\left \vert {1}\right \rangle } \}\). All these states are sent to Bob, who is instructed to entangle them, using \(\mathsf {CZ}\) operations, according to the graph structure G.

  • (5) Alice then asks Bob to measure the qubits as follows: computation qubits will be measured at \(\delta _{i} = \phi ^{\prime }_{i} + \theta _{i} + r_{i} \pi \), where \(\phi ^{\prime }_{i}\) is an updated version of \(\phi _{i}\) that incorporates corrections resulting from previous measurements; trap qubits will be measured at \(\delta _{i} = \theta _{i} + r_{i} \pi \); dummy qubits are measured at randomly chosen angles from \(\{0, \pi /4, 2\pi /4, ... 7\pi /4\}\). This step is interactive as Alice needs to update the angles of future measurements based on past outcomes. The number of rounds of interaction is proportional to the depth of \(\mathcal {C}\). If any of the trap measurements produce incorrect outcomes, Alice will abort upon completion of the protocol.

  • (6) Assuming all trap measurements succeeded, after all the measurements have been performed, Alice undoes the \(r_{i}\) one-time padding of the measurement outcomes, thus recovering the outcome of the computation.

The protocol is illustrated schematically in Fig. 6, where all the parameters have been labelled by their position, \((i, j)\), in a rectangular cluster state.

Fig. 6 Verifiable universal blind quantum computing

One can see that VUBQC has correctness \(\delta = 1\), since if the prover behaves honestly then all trap measurements will produce the correct result and the computation will have been performed correctly. What about verifiability? We will first answer this question for the case where there is a single trap qubit (T = 1) at a uniformly random position in \({\left \vert {G}\right \rangle }\), denoted \({\left \vert {+_{\theta _{t}}}\right \rangle }\). Adopting a similar notation to that from [27], we let:

$$ \mathcal{B}_{j} (\nu) = {\sum}_{\mathbf{s}} p_{\nu,j}(\mathbf{s}) {\left\vert{\mathbf{s}}\right\rangle}{\left\langle{\mathbf{s}}\right\vert} \otimes \rho_{\nu, j}^{\mathbf{s}} $$
(40)

denote the outcome density operator of all classical and quantum messages exchanged between the verifier and the prover throughout the protocol, excluding the last round of measurements (which corresponds to measuring the output of the computation). Additionally, \(\nu \) denotes the set of secret parameters of Alice (i.e. the positions of the traps and dummies as well as the sets \(\{ \phi _{i} \}_{i}\), \(\{ \theta _{i} \}_{i}\) and \(\{ r_{i} \}_{i}\)); j ranges over the possible strategies of the proverFootnote 27 with \(j = 0\) corresponding to the honest strategy; \(\mathbf {s}\) is a binary vector which ranges over all possible corrected values of the measurement outcomes sent by the prover; lastly, \(\rho _{\nu , j}^{\mathbf {s}}\) is the state of the unmeasured qubits, representing the output state of the computation (prior to the final measurement). To match Definition 2, one also considers:

$$ P^{\nu}_{incorrect} = (I - \mathcal{C} {\left\vert{x}\right\rangle}{\left\langle{x}\right\vert} \mathcal{C}^{\dagger}) \otimes {\left\vert{+^{\nu}_{\theta_{t}}}\right\rangle} {\left\langle{+^{\nu}_{\theta_{t}}}\right\vert} $$
(41)

to be the projection onto the orthogonal complement of the correct output together with the trap state being projected onto acceptance. The dependence on \(\nu \), for the trap qubit, arises because the acceptance outcome depends on the states of the dummy neighbours for that qubit. This is because if one of the dummies is \({\left \vert {1}\right \rangle }\), the \(\mathsf {CZ}\) operation has the effect of flipping \({\left \vert {+_{\theta _{t}}}\right \rangle }\) to \({\left \vert {-_{\theta _{t}}}\right \rangle }\). Moreover, \(\nu \) also encodes the position of this trap, in the graph state, as well as the \(\mathsf {Z}\) flip specified by the \(r_{i}\) parameter, for \(i=t\). One then needs to find an \(\epsilon \) such that:

$$ Tr \left( {\sum}_{\nu} p(\nu)P^{\nu}_{incorrect} \mathcal{B}_{j}(\nu) \right) \leq \epsilon $$
(42)

This is done in a manner similar to the proof of security for the Poly-QAS VQC scheme of the previous section.Footnote 28 Specifically, one fixes the interaction transcript for the protocol. This just means fixing the measurement angles \(\delta _{i}\), and then considering all possible transcripts compatible with the fixed angles. One can do this because UBQC guarantees that the prover learns nothing from the interaction except for, at most, an upper bound on \(|\mathcal {C}|\). This means that there will be multiple transcripts compatible with the same values for the \(\delta _{i}\) angles. It also means that any deviation that the prover performs is independent of the secret parameters of the verifier (though it can depend on the \(\delta _{i}\) angles) and can therefore be commuted to the end of the protocol. The outcome density operator \(\mathcal {B}_{j}(\nu )\) can then be expressed as the ideal outcome with a CPTP deviation, \(\mathcal {E}_{j}\), on top, that is independent of \(\nu \):

$$ \mathcal{B}_{j}(\nu) = \mathcal{E}_{j}(\mathcal{B}_{0}(\nu)) $$
(43)

The deviation \(\mathcal {E}_{j}\) is then decomposed into Kraus operators which, in turn, are decomposed into Pauli operators leading to:

$$ \mathcal{B}_{j}(\nu) = {\sum}_{k,l,m} \alpha_{kl}(j)\alpha^{*}_{km}(j) \; P_{l} \mathcal{B}_{0}(\nu) P_{m} $$
(44)

where \(\alpha _{kl}(j)\) (and their conjugates) are the complex coefficients for the Pauli operators. This summation can be split into the terms that act as identity on \(\mathcal {B}_{0}(\nu )\) and those that do not. Supposing that the terms which act trivially have total weight \(\beta\), with \(0 \leq \beta \leq 1\), we then have:

$$ \mathcal{B}_{j}(\nu) = \beta\mathcal{B}_{0}(\nu) + (1 - \beta){\sum}_{k,l,m} \alpha_{kl}(j)\alpha^{*}_{km}(j) \; P_{l} \mathcal{B}_{0}(\nu) P_{m} $$
(45)

where the second term is summing over Pauli operators that act non-trivially. We use this to compute the probability of accepting an incorrect outcome, noting that \(P_{incorrect}^{\nu } \mathcal {B}_{0}(\nu ) = 0\):

$$\begin{array}{@{}rcl@{}} &&Tr \left( {\sum}_{\nu} p(\nu)P^{\nu}_{incorrect} \mathcal{B}_{j}(\nu) \right)\\ &=& (1 - \beta) Tr \left( {\sum}_{\nu} {\sum}_{k,l,m} p(\nu)P^{\nu}_{incorrect} (\alpha_{kl}(j) \alpha^{*}_{km}(j) \; P_{l} \mathcal{B}_{0}(\nu) P_{m}) \right) \end{array} $$
(46)

We now use the fact that \(P^{\nu }_{incorrect} = (I - \mathcal {C} {\left \vert {x}\right \rangle }{\left \langle {x}\right \vert } \mathcal {C}^{\dagger }) \otimes {\left \vert {+^{\nu }_{\theta _{t}}}\right \rangle } {\left \langle {+^{\nu }_{\theta _{t}}}\right \vert }\) and keep only the projection onto the trap qubit. The projection onto the space orthogonal to the correct state is a trace decreasing operation and also \((1 - \beta ) \leq 1\) hence:

$$\begin{array}{@{}rcl@{}} &&Tr \left( {\sum}_{\nu} p(\nu)P^{\nu}_{incorrect} \mathcal{B}_{j}(\nu) \right)\\ &\leq& Tr \left( {\sum}_{\nu} p(\nu) {\left\vert{+^{\nu}_{\theta_{t}}}\right\rangle} {\left\langle{+^{\nu}_{\theta_{t}}}\right\vert} \; {\sum}_{k,l,m} \alpha_{kl}(j)\alpha^{*}_{km}(j) \; P_{l} \mathcal{B}_{0}(\nu) P_{m} \right) \end{array} $$
(47)

The summation over \(\nu \) can be broken into two summations: one over the position of the trap (and the dummies) and one over the remaining parameters. This latter sum makes the reduced state appear totally mixed to the prover (a fact which is ensured by UBQC). The above expression then becomes:

$$ Tr \left( {\sum}_{\nu^{t}} p(\nu^{t}) {\left\vert{+^{\nu^{t}}_{\theta_{t}}}\right\rangle} {\left\langle{+^{\nu^{t}}_{\theta_{t}}}\right\vert} \; {\sum}_{k,l,m} \alpha_{kl}(j) \alpha^{*}_{km}(j) \; P_{l} \left({\left\vert{+_{\theta_{t}}^{\nu^{t}}}\right\rangle} {\left\langle{+_{\theta_{t}}^{\nu^{t}}}\right\vert} \otimes (I/Tr(I))\right) P_{m} \right) $$
(48)

where \(\nu ^{t}\) denotes the secret parameters for the trap qubit and consists of \(\theta _{t}\), \(r_{t}\) and the position of the trap in the graph. But notice that, on the identity system, the terms in which \(l \neq m\) will have no contribution to the summation. This is because at least one of the Pauli terms (either \(P_{l}\) or \(P_{m}\)) will act on the identity system. Since Pauli operators are traceless, when taking the trace these terms will be zero. For the trap system we will have:

$$ Tr \left( {\sum}_{\nu^{t}} p(\nu^{t}) {\left\vert{+^{\nu^{t}}_{\theta_{t}}}\right\rangle} {\left\langle{+^{\nu^{t}}_{\theta_{t}}}\right\vert} \; P_{l} {\left\vert{+_{\theta_{t}}^{\nu^{t}}}\right\rangle} {\left\langle{+_{\theta_{t}}^{\nu^{t}}}\right\vert} P_{m} \right) \,=\, {\sum}_{\nu^{t}} p(\nu^{t}) {\left\langle{+^{\nu^{t}}_{\theta_{t}}}\right\vert} P_{l} {\left\vert{+^{\nu^{t}}_{\theta_{t}}}\right\rangle} {\left\langle{+^{\nu^{t}}_{\theta_{t}}}\right\vert} P_{m} {\left\vert{+^{\nu^{t}}_{\theta_{t}}}\right\rangle} $$
(49)

Note that we are taking \(p(\nu ^{t})\) to be the uniform distribution over these parameters. By summing over \(\theta _{t}\) and \(r_{t}\), the above expression becomes zero whenever \(l \neq m\). This is a result of the Pauli twirl Lemma 2. Thus, only terms in which \(l = m\) will remain. Substituting this back into expression (48) leads to:

$$ Tr \left( {\sum}_{\nu^{t}} p(\nu^{t}) {\left\vert{+^{\nu^{t}}_{\theta_{t}}}\right\rangle} {\left\langle{+^{\nu^{t}}_{\theta_{t}}}\right\vert} \; {\sum}_{k,l} |\alpha_{kl}(j)|^{2} \; P_{l} ({\left\vert{+_{\theta_{t}}^{\nu^{t}}}\right\rangle} {\left\langle{+_{\theta_{t}}^{\nu^{t}}}\right\vert} \otimes (I/Tr(I))) P_{l} \right) $$
(50)

In other words, the resulting state is a convex combination of Pauli deviations. The position of the trap is completely randomised so that it is equally likely that any of the N qubits is the trap. Therefore, in the above summation, there will be N terms (corresponding to the N possible positions of the trap), one of which will be zero (the one in which the non-trivial Pauli deviations act on the trap qubit). Hence:

$$ Tr \left( {\sum}_{\nu} p(\nu)P^{\nu}_{incorrect} \mathcal{B}_{j}(\nu) \right) \leq \frac{N-1}{N} = 1 - \frac{1}{N} $$
(51)

We have found that for the case of a single trap qubit, out of the total N qubits, one has \(\epsilon = 1 - \frac {1}{N}\).
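The vanishing of the cross terms in (49) can also be checked numerically. The sketch below averages \({\left\langle{+_{\theta}}\right\vert} P_{l} {\left\vert{+_{\theta}}\right\rangle} {\left\langle{+_{\theta}}\right\vert} P_{m} {\left\vert{+_{\theta}}\right\rangle}\) over the eight allowed angles (the \(r_{t}\pi\) shift maps this set of angles to itself, so it is already accounted for):

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1, -1]).astype(complex)
Y = 1j * X @ Z
paulis = [I2, X, Y, Z]

angles = [k * np.pi / 4 for k in range(8)]  # the possible theta_t values
trap = lambda th: np.array([1, np.exp(1j * th)]) / np.sqrt(2)

for l in range(4):
    for m in range(4):
        if l == m:
            continue
        avg = np.mean([np.vdot(trap(th), paulis[l] @ trap(th)) *
                       np.vdot(trap(th), paulis[m] @ trap(th))
                       for th in angles])
        assert np.isclose(avg, 0)  # cross terms vanish, as Eq. (49) requires
print("only the l = m terms survive the averaging")
```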

If, however, there are multiple trap qubits, the bound improves. Specifically, for a type of resource state called dotted-triple graph, the number of traps can be a constant fraction of the total number of qubits, yielding \(\epsilon = 8/9\). If the protocol is then repeated a constant number of times, d, with the verifier aborting if any of these runs gives incorrect trap outcomes, it can be shown that \(\epsilon = (8/9)^{d}\) [57]. Alternatively, if the input state and computation are encoded in an error correcting code of distance d, then one again obtains \(\epsilon = (8/9)^{d}\). This is useful if one is interested in a quantum output, or a classical bit string output. If, instead, one would only like a single bit output (i.e. the outcome of the decision problem) then sequential repetition and taking the majority outcome is sufficient. The fault tolerant encoding need not be done by the verifier. Instead, the prover will simply be instructed to prepare a larger resource state which also offers topological error-correction. See [27, 58, 59] for more details. An important observation, however, is that the fault tolerant encoding, just like in the Poly-QAS VQC protocol, is used only to boost security and not for correcting deviations arising from faulty devices. This latter case is discussed in Section 5.2. To sum up:

Theorem 3

For a fixed constant \(d > 0\), VUBQC is a prepare-and-send \(\mathsf {QPIP}\) protocol having correctness \(\delta = 1\) and verifiability \(\epsilon = (8/9)^{d}\).

It should be noted that in the original construction of the protocol, the fault tolerant encoding, used for boosting security, required the use of a resource state having \(O(|\mathcal {C}|^{2})\) qubits. The importance of the dotted-triple graph construction is that it achieves the same level of security while keeping the number of qubits linear in \(|\mathcal {C}|\). The same effect is achieved by a composite protocol which combines the Poly-QAS VQC scheme, from the previous section, with VUBQC [51]. This works by having the verifier run small instances of VUBQC in order to prepare the encoded blocks used in the Poly-QAS VQC protocol. Because of the blindness property, the prover does not learn the secret keys used in the encoded blocks. The verifier can then run the Poly-QAS VQC protocol with the prover, using those blocks. This hybrid approach illustrates how composition can lead to more efficient protocols. In this case, the composite protocol maintains a single qubit preparation device for the verifier (as opposed to an \(O(\log(1/\epsilon))\)-size quantum computer) while also achieving linear communication complexity. We will encounter other composite protocols when reviewing entanglement-based protocols in Section 4.

Lastly, let us explicitly state the resources and overhead of the verifier throughout the VUBQC protocol. As mentioned, the verifier requires only a single-qubit preparation device, capable of preparing states of the form \(\left \vert {+_{\theta }}\right \rangle \), with \(\theta \in \{0, \pi /4, 2\pi /4, ... 7\pi /4 \}\), and \(\left \vert {0}\right \rangle \), \(\left \vert {1}\right \rangle \). The number of qubits needed is \(O(|\mathcal {C}|)\). After the qubits have been sent to the prover, the two interact classically and the size of the communication is also \(O(|\mathcal {C}|)\).

2.3 Verification Based on Repeated Runs

The final prepare-and-send protocol we describe is the one defined by Broadbent in [28]. While the previous approaches relied on hiding a flag subsystem or traps in either the input or the computation, this protocol has the verifier alternate between different runs designed to either test the behaviour of the prover or perform the desired quantum computation. We will refer to this as the Test-or-Compute protocol. From the prover’s perspective, the possible runs are indistinguishable from each other, thus making him unaware if he is being tested or performing the verifier’s chosen computation. Specifically, suppose the verifier would like to delegate the quantum circuit \(\mathcal {C}\) to be applied on the \({\left \vert {0}\right \rangle }^{\otimes n}\) state,Footnote 29 where n is the size of the input. The verifier then chooses randomly between three possible runs:

  • Computation run. The verifier delegates \(\mathcal {C} {\left \vert {0}\right \rangle }^{\otimes n}\) to the prover.

  • X-test run. The verifier delegates the identity computation on the \({\left \vert {0}\right \rangle }^{\otimes n}\) state to the prover.

  • Z-test run. The verifier delegates the identity computation on the \({\left \vert {+}\right \rangle }^{\otimes n}\) state to the prover.

It turns out that this suffices in order to test against any possible malicious behavior of the prover, with high probability.

In more detail, the protocol uses a technique for quantum computing on encrypted data, described in [60], which is similar to Childs’ protocol from Section 1.1, except it does not involve two-way quantum communication. The verifier will one-time pad either the \(\left \vert {0}\right \rangle ^{\otimes n}\) state or the \(\left \vert {+}\right \rangle ^{\otimes n}\) state and send the qubits to the prover. The prover is then instructed to apply the circuit \(\mathcal {C}\), which consists of the gates \(\mathsf {X}\), \(\mathsf {Z}\), \(\mathsf {H}\), \(\mathsf {T}\), \(\mathsf {CNOT}\). As we know, the Clifford operations commute (or normalise) with the one-time pad, so the verifier would only need to appropriately update the one-time pad to account for this. However, \(\mathsf {T}\) gates do not commute with the pad. In particular, commuting them past the \(\mathsf {X}\) gates introduces unwanted \(\mathsf {S}\) operations. To resolve this issue, the verifier will use a particular gadget which will allow the prover to apply \(\mathsf {T}\) and correct for \(\mathsf {S}\) at the same time. This gadget is shown in Fig. 7, reproduced from [28].

Fig. 7 Gadget for performing T on one-time padded state, reproduced from [28]

The gadget works in a gate teleportation fashion. For each qubit, labelled j, on which the prover should apply a \(\mathsf {T}\) gate, the verifier sends a qubit of the form \(\mathsf {X}^{d} \textsf {Z}^{c} \textsf {S}^{y} \textsf {T} {\left \vert {+}\right \rangle }\), as well as the classical bit \(x = a \oplus c \oplus y\), where a is the \(\mathsf {X}\) padding of qubit j and c, d and y are chosen at random. The verifier then instructs the prover to apply a \(\mathsf {CNOT}\) between the sent qubit and qubit j, effectively entangling them, and then measure qubit j in the computational basis. Lastly, the verifier instructs the prover to apply an \(\mathsf {S}^{x}\) gate to the sent qubit. The end result is that this qubit will be the same as the de-one-time padded qubit j but with a \(\mathsf {T}\) and a new one-time pad acting on it. Importantly, the new pad is kept secret from the prover.

The \(\mathsf {T}\) gate gadget allows the verifier to control the application of either a non-Clifford operation or a Clifford operation through gate teleportation. For note that if the verifier does not apply a \(\mathsf {T}\) gate on the qubit sent to the prover, the resulting computation is Clifford. This is what allows the verifier to switch between the computation run and the two test runs. The prover cannot distinguish between the two cases, since his side of the gadget is identical in both instances. Thus, in a test run, the computation the prover performs will be Clifford and the verifier can simply update the one-time pad, of the input, accordingly. There is, however, one complication. In an \(\mathsf {X}\)-test run, the input is \({\left \vert {0}\right \rangle }^{\otimes n}\) and should remain this way until the end of the circuit, up to qubit flips resulting from the one-time pad. But any Hadamard gate in the circuit will map \({\left \vert {0}\right \rangle }\) to \({\left \vert {+}\right \rangle }\). The same is true for \(\mathsf {Z}\)-test runs, where \({\left \vert {+}\right \rangle }\) states can be mapped to \({\left \vert {0}\right \rangle }\). To resolve this issue, Broadbent uses the following identities:

$$ \textsf{H} \textsf{T}^{2} \textsf{H} \textsf{T}^{2} \textsf{H} \textsf{T}^{2} \textsf{H} = \textsf{H} $$
(52)
$$ \textsf{H} \textsf{H} \textsf{H} \textsf{H} = I $$
(53)

The idea is to have the prover implement each Hadamard operation in \(\mathcal {C}\) by applying four \(\mathsf {H}\) gates alternating with \(\mathsf {S} = \textsf {T}^{2}\) gates. Each \(\mathsf {T}^{2}\) operation is performed by using the \(\mathsf {T}\) gate gadget twice. When the verifier chooses a computation run, she will apply the \(\mathsf {T}\) gates in the gadget and therefore, via (52), this leads to a Hadamard operation. Conversely, in a test run, no \(\mathsf {T}\) gates are applied, hence, from (53), no Hadamard operation will act on the target qubit. Since the output is always measured, by the prover, in the computational basis, in an \(\mathsf {X}\)-test run the verifier simply checks that the de-one-time padded output is \({\left \vert {0}\right \rangle }^{\otimes n}\).
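Both identities are easy to check numerically; note that (52) holds up to a global phase (\(e^{i\pi/4}\)), which is irrelevant here. A short sketch:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
T = np.diag([1, np.exp(1j * np.pi / 4)])
S = T @ T                          # S = T^2

lhs = H @ S @ H @ S @ H @ S @ H    # Eq. (52), with each T^2 written as S
phase = lhs[0, 0] / H[0, 0]        # Eq. (52) holds up to this global phase
print(np.allclose(lhs, phase * H), np.isclose(abs(phase), 1))  # True True
print(np.allclose(H @ H @ H @ H, np.eye(2)))                   # Eq. (53)
```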

There is, in fact, an additional testing step being performed during an \(\mathsf {X}\)-test run. Consider the \(\mathsf {T}\) gadget for such a run in Fig. 8, reproduced from [28].

Fig. 8 T gate gadget in X-test run, reproduced from [28]

Note that the measurement bit, c, provided by the prover to the verifier should be the XOR of the original \(\mathsf {X}\) padding of the input and the updated \(\mathsf {X}\) padding of the input. Checking the value of this bit allows the verifier to test that the gadget was applied correctly.

What about the \(\mathsf {Z}\)-test run? In that case, the output should be the \({\left \vert {+}\right \rangle }^{\otimes n}\) which, upon measurement, should collapse with equal probability into any of the \(2^{n}\) possible binary strings. The verifier does not test for this. Instead, each \(\mathsf {H}\) gate that should be applied by the prover has the effect of switching between the \(\mathsf {Z}\)-test run and the \(\mathsf {X}\)-test run. Thus, whenever a Hadamard is applied to a qubit, during a \(\mathsf {Z}\)-test run, the verifier switches to an \(\mathsf {X}\)-test run until the next Hadamard operation. In the \(\mathsf {X}\)-test runs, the verifier will use the \(\mathsf {T}\) gate gadget from Fig. 8. These are the only checks that are performed in the \(\mathsf {Z}\)-test run.

Putting everything together, the protocol starts by having the verifier choose an input x and a quantum computation \(\mathcal {C}\) to act on this input (that includes preparing the input from the \(\left \vert {0}\right \rangle ^{\otimes n}\) state). Then, the verifier will randomly choose to perform one of the following:

  • Computation run. The verifier one-time pads the \({\left \vert {0}\right \rangle }^{\otimes n}\) state and sends it to the prover. The prover is then instructed to apply \(\mathcal {C}\) on this state, such that for each \(\mathsf {T}\) gate in the circuit the prover and the verifier interact in order to perform the \(\mathsf {T}\) gadget. Additionally, any \(\mathsf {H}\) in \(\mathcal {C}\) is performed as in (52). For Clifford operations, the verifier updates the one-time pad of the state accordingly. The prover is instructed to measure the output state of the circuit in the computational basis and return the outcome to the verifier. The verifier undoes the padding of this outcome and accepts if the output of the circuit indicates acceptance.

  • X-test run. The verifier one-time pads the \({\left \vert {0}\right \rangle }^{\otimes n}\) state and sends it to the prover. As in the computation run, for each \(\mathsf {T}\), the verifier and the prover will interact to run the \(\mathsf {T}\) gate gadget. In this case, however, the verifier will use the \(\mathsf {T}\) gate gadget from Fig. 8, making the circuit effectively act as identity and checking that the prover is performing these gadgets correctly (rejecting otherwise). Additionally, the \(\mathsf {H}\) gates in \(\mathcal {C}\) will also act as identity, from (53), as described previously. The verifier updates the one-time padding of the state accordingly for all gates in the circuit. Once the circuit is finished, the prover is instructed to measure the output in the computational basis and report the outcome to the verifier. The verifier accepts if the de-one-time padded output is \(\left \vert {0}\right \rangle ^{\otimes n}\).

  • Z-test run. The verifier one-time pads the \({\left \vert {+}\right \rangle }^{\otimes n}\) state and sends it to the prover. As in the \(\mathsf {X}\)-test run, the \(\mathsf {T}\) gate gadgets will act as identity. The \(\mathsf {H}\) operations that the prover performs will temporarily switch the \(\mathsf {Z}\)-test run into an \(\mathsf {X}\)-test run, in which the verifier uses the gadget from Fig. 8 to check that the prover implemented it correctly. Any subsequent \(\mathsf {H}\) will switch back to a \(\mathsf {Z}\)-test run. Additionally, the verifier updates the one-time padding of the state accordingly for all gates in the circuit. The prover is instructed to measure the output in the computational basis and report the outcome to the verifier, however in this case the verifier discards the output.

The asymmetry between the \(\mathsf {X}\)-test run and the \(\mathsf {Z}\)-test run stems from the fact that the output is always measured in the computational basis. This means that an incorrect output is one which has been bit-flipped. In turn, this implies that only \(\mathsf {X}\) and \(\mathsf {Y}\) operations on the output will act as deviations, since \(\mathsf {Z}\) effectively acts as identity on computational basis states. If the circuit \(\mathcal {C}\) does not contain any Hadamard gates and hence, the computation takes place entirely in the computational basis, then the \(\mathsf {X}\)-test is sufficient for detecting such deviations. However, when Hadamard gates are present, this is no longer the case since deviations can occur in the conjugate basis, \(({\left \vert {+}\right \rangle }, {\left \vert {-}\right \rangle })\), as well. This is why the \(\mathsf {Z}\)-test is necessary. Its purpose is to check that the prover’s operations are performed correctly when switching to the conjugate basis. For this reason, a Hadamard gate will switch a \(\mathsf {Z}\)-test run into an \(\mathsf {X}\)-test run which provides verification using the \(\mathsf {T}\) gate gadget.

In terms of the correctness of the protocol, we can see that if the prover behaves honestly then the correct outcome is obtained in the computation run and the verifier will accept the test runs, hence \(\delta = 1\).Footnote 30 For verifiability, the analysis is similar to the previous protocols. Suppose that \({\left \vert {\psi }\right \rangle }\) is either the \({\left \vert {0}\right \rangle }^{\otimes n}\) or the \({\left \vert {+}\right \rangle }^{\otimes n}\) state representing the input. Additionally, assuming there are t \(\mathsf {T}\) gates in \(\mathcal {C}\) (including the ones used for performing the Hadamards), let \({\left \vert {\phi }\right \rangle }\) be the state of the t qubits that the verifier sends for the \(\mathsf {T}\) gate gadgets. Then, denoting as a and b the \(\mathsf{X}\) and \(\mathsf{Z}\) padding keys, the one-time padded state that the prover receives is:

$$ \mathsf{X}^{a} \mathsf{Z}^{b} \left( {\left\vert{\psi}\right\rangle}{\left\langle{\psi}\right\vert} \otimes {\left\vert{\phi}\right\rangle}{\left\langle{\phi}\right\vert} \right) \mathsf{Z}^{b} \mathsf{X}^{a} $$
(54)
The prover is then instructed to follow the steps of the protocol in order to run the circuit \(\mathcal {C}\). Note that all of the operations that he is instructed to perform are Clifford operations. This is because any non-Clifford operation from \(\mathcal {C}\) is performed with the \(\mathsf {T}\) gate gadgets (which require only Clifford operations) and the states from \({\left \vert {\phi }\right \rangle }\), prepared by the verifier. In following the notation from [28], we denote the honest action of the protocol as C. As in the previous protocols, the prover’s deviation can be commuted to the end of the protocol, so that it acts on top of the correct state. After expressing the deviation map in terms of Pauli operators one gets:

$$ {\sum}_{i,j} \alpha_{i} \alpha^{*}_{j} \; P_{i} \, \mathsf{X}^{a'} \mathsf{Z}^{b'} \, C \left( {\left\vert{\psi}\right\rangle}{\left\langle{\psi}\right\vert} \otimes {\left\vert{\phi}\right\rangle}{\left\langle{\phi}\right\vert} \right) C^{\dagger} \, \mathsf{Z}^{b'} \mathsf{X}^{a'} \, P_{j} $$
(55)

where \(a'\) and \(b'\) denote the updated padding keys.
Note that we have also commuted C past the one-time pad so that it acts on the state \({\left \vert {\psi }\right \rangle }{\left \vert {\phi }\right \rangle }\), rather than on the one-time padded versions of these states. This is possible precisely because C is a Clifford operation and therefore normalises Pauli operations. One can then assume that the verifier performs the decryption of the padding before the final measurement, yielding:

$$ {\sum}_{i,j} \alpha_{i} \alpha^{*}_{j} \; \mathsf{Z}^{b'} \mathsf{X}^{a'} P_{i} \mathsf{X}^{a'} \mathsf{Z}^{b'} \, C \left( {\left\vert{\psi}\right\rangle}{\left\langle{\psi}\right\vert} \otimes {\left\vert{\phi}\right\rangle}{\left\langle{\phi}\right\vert} \right) C^{\dagger} \, \mathsf{Z}^{b'} \mathsf{X}^{a'} P_{j} \mathsf{X}^{a'} \mathsf{Z}^{b'} $$
(56)
We now use the Pauli twirl from Lemma 2 (averaging over the padding keys) to get:

$$ {\sum}_{i} |\alpha_{i}|^{2} \; P_{i} \, C \left( {\left\vert{\psi}\right\rangle}{\left\langle{\psi}\right\vert} \otimes {\left\vert{\phi}\right\rangle}{\left\langle{\phi}\right\vert} \right) C^{\dagger} \, P_{i} $$
(57)
which is a convex combination of Pauli attacks acting on the correct output state. If we now denote M to be the set of non-benign Pauli attacks (i.e. attacks which do not act as identity on the output of the computation), then one of the test runs will reject with probability:

$$ {\sum}_{P_{i} \in M} |\alpha_{i}|^{2} $$
(58)

This is because non-benign Pauli \(\mathsf {X}\) or \(\mathsf {Y}\) operations are detected by the \(\mathsf {X}\)-test run, whereas non-benign Pauli \(\mathsf {Z}\) operations are detected by the \(\mathsf {Z}\)-test run. Since either test occurs with probability \(1/3\), it follows that the probability of the verifier accepting an incorrect outcome is at most \(2/3\), hence \(\epsilon = 2/3\).

Note that when discussing the correctness and verifiability of the Test-or-Compute protocol, we have slightly abused the terminology, since this protocol does not rigorously match the established definitions for correctness and verifiability that we have used for the previous protocols. The reason for this is the fact that in the Test-or-Compute protocol there is no additional flag or trap subsystem to indicate failure. Rather, the verifier detects malicious behaviour by alternating between different runs. It is therefore more appropriate to view the Test-or-Compute protocol simply as a \(\mathsf {QPIP}\) protocol having a constant gap between completeness and soundness:

Theorem 4

Test-or-Compute is a prepare-and-send \(\mathsf {QPIP}\) protocol having completeness \(8/9\) and soundness \(7/9\).

In terms of the verifier’s quantum resources, we notice that, as with the VUBQC protocol, the only requirement is the preparation of single qubit states. All of these states are sent in the first round of the protocol, the rest of the interaction being completely classical.

2.4 Summary of Prepare-and-Send Protocols

The protocols, while different, have the common feature that they all use blindness or have the potential to be blind protocols. Out of the four presented protocols, only the Poly-QAS VQC and the Test-or-Compute protocols are not explicitly blind since, in both cases, the computation is revealed to the server. However, it is relatively easy to make the protocols blind by encoding the circuit into the input (which is one-time padded). Hence, one can say that all protocols achieve blindness.

This feature is essential in the proof of security for these protocols. Blindness combined with either the Pauli twirl Lemma 2 or the Clifford twirl Lemma 1 has the effect of reducing any deviation of the prover to a convex combination of Pauli attacks. Each protocol then has a specific way of detecting such an attack. In the Clifford-QAS VQC protocol, the convex combination is turned into a uniform combination and the attack is detected by a flag subsystem associated with a quantum authentication scheme. A similar approach is employed in the Poly-QAS VQC protocol, using a quantum authentication scheme based on a special type of quantum error correcting code. The VUBQC protocol utilizes trap qubits and either sequential repetition or encoding in an error correcting code to detect Pauli attacks. Finally, the Test-or-Compute protocol uses a hidden identity computation acting on either the \({\left \vert {0}\right \rangle }^{\otimes n}\) or \({\left \vert {+}\right \rangle }^{\otimes n}\) states, in order to detect the malicious behaviour of the prover.

Because of these differences, each protocol will have different “quantum requirements” for the verifier. For instance, in the authentication-based protocols, the verifier is assumed to be a quantum computer operating on a quantum memory of size \(O(\log(1/\epsilon ))\), where \(\epsilon \) is the desired verifiability of the protocol. In VUBQC and Test-or-Compute, however, the verifier only requires a device capable of preparing single-qubit states. Additionally, out of all of these protocols, only Clifford-QAS VQC requires 2-way quantum communication, whereas the other three require the verifier to send only one quantum message at the beginning of the protocol, while the rest of the communication is classical. These facts, together with the communication complexities of the protocols, are shown in Table 1.

Table 1 Comparison of prepare-and-send protocols

As mentioned, if we want to make the Poly-QAS VQC and Test-or-Compute protocols blind, the verifier will hide her circuit by incorporating it into the input. The input would then consist of an encoding of \(\mathcal {C}\) and an encoding of x. The prover would be asked to perform controlled operations from the part of the input containing the description of \(\mathcal {C}\), to the part containing x, effectively acting with \(\mathcal {C}\) on x. We stress that in this case, the protocols would have a communication complexity of \(O(|\mathcal {C}| \cdot \log(1/\epsilon ))\), just like VUBQC and Clifford-QAS VQC.Footnote 31

3 Receive-and-Measure Protocols

The protocols presented so far have utilized a verifier with a trusted preparation device (and potentially a trusted quantum memory) interacting with a prover having the capability of storing and performing operations on arbitrarily large quantum systems. In this section, we explore protocols in which the verifier possesses a trusted measurement device. The point of these protocols is to have the prover prepare a specific quantum state and send it to the verifier. The verifier’s measurements have the effect of either performing the quantum computation or extracting the outcome of the computation. An illustration of receive-and-measure protocols is shown in Fig. 9.

Fig. 9 Receive-and-measure protocols

For prepare-and-send protocols we saw that blindness was an essential feature for achieving verifiability. While most of the receive-and-measure protocols are blind as well, we will see that it is possible to perform verification without hiding any information about the input or computation, from the prover. Additionally, while in prepare-and-send protocols the verifier was sending an encoded or encrypted quantum state to the prover, in receive-and-measure protocols, the quantum state received by the verifier is not necessarily encoded or encrypted. Moreover, this state need not contain a flag or a trap subsystem. For this reason, we can no longer consistently define \(\epsilon \)-verifiability and \(\delta \)-correctness, as we did for prepare-and-send protocols. Instead, we will simply view receive-and-measure protocols as \(\mathsf {QPIP}\) protocols.

The protocols presented in this section are:

  1. Section 3.1: a measurement-only protocol developed by Morimae and Hayashi that employs ideas from MBQC in order to perform verification [31].

  2. Section 3.2: a post hoc verification protocol, developed by Morimae and Fitzsimons [29, 61] (and independently by Hangleiter et al. [30]).

There is an additional receive-and-measure protocol by Gheorghiu et al. [33] which we refer to as Steering-based VUBQC. That protocol, however, is similar to the entanglement-based GKW protocol from Section 4.1. We will therefore review Steering-based VUBQC in that subsection by comparing it to the entanglement-based protocol.

3.1 Measurement-Only Verification

In this section we discuss the measurement-only protocol from [31], which we shall simply refer to as the measurement-only protocol. This protocol uses MBQC to perform the quantum computation, like the VUBQC protocol from Section 2.2; however, the manner in which verification is performed is more akin to Broadbent’s Test-or-Compute protocol from Section 2.3. This is because, just like in the Test-or-Compute protocol, the measurement-only approach has the verifier alternate between performing the computation or testing the prover’s operations.

The key idea for this protocol is the fact that graph states can be completely specified by a set of stabilizer operators. This fact is explained in Section A. To reiterate the main points, recall that for a graph G, with associated graph state \(\left \vert {G}\right \rangle \), if we denote as \(V(G)\) the set of vertices in G and as \(N_{G}(v)\) the set of neighbours for a given vertex v, then the generators for the stabilizer group of \(\left \vert {G}\right \rangle \) are:

$$ K_{v} = \textsf{X}_{v} {\prod}_{w \in N_{G}(v)} \textsf{Z}_{w} $$
(59)

for all \(v \in V(G)\). In other words, the \(K_{v}\) operators generate the entire group of operators, O, such that \(O{\left \vert {G}\right \rangle } = {\left \vert {G}\right \rangle }\).

When viewed as observables, stabilizers allow one to test that an unknown quantum state is in fact a particular graph state \({\left \vert {G}\right \rangle }\), with high probability. This is done by measuring random stabilizers of \(\left \vert {G}\right \rangle \) on multiple copies of the unknown quantum state. If all measurements return the \(+ 1\) outcome, then the unknown state is close in trace distance to \(\left \vert {G}\right \rangle \). This is related to a concept known as self-testing, which is the idea of determining whether an unknown quantum state and an unknown set of observables are close to a target state and observables, based on observed statistics. We postpone a further discussion of this topic to the next section, since self-testing is ubiquitous in entanglement-based protocols.
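As an illustration, the sketch below builds the graph state for a 3-vertex line (rather than a 2D cluster state), checks that all generators from (59) have expectation value \(+1\), and shows that a single \(\mathsf{Z}\) error flips one of them to \(-1\):

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.diag([1., -1.])
CZ = np.diag([1., 1., 1., -1.])
kron = lambda *ops: reduce(np.kron, ops)

# |G> for the 3-vertex line graph 0-1-2: CZ_01 CZ_12 |+++>
G = kron(CZ, I2) @ kron(I2, CZ) @ (np.ones(8) / np.sqrt(8))

# the generators K_v = X_v prod_{w in N_G(v)} Z_w of Eq. (59)
K = [kron(X, Z, I2), kron(Z, X, Z), kron(I2, Z, X)]
print([np.isclose(G @ Kv @ G, 1.0) for Kv in K])    # all True

bad = kron(Z, I2, I2) @ G                  # a Z error on vertex 0
print(np.isclose(bad @ K[0] @ bad, -1.0))  # K_0 now gives -1: error flagged
```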

As mentioned, the measurement-only protocol involves a testing phase and a computation phase. The prover will be instructed to prepare multiple copies of a 2D cluster state, \(\left \vert {G}\right \rangle \), and send them, qubit by qubit, to the verifier. The verifier will then randomly use one of these copies to perform the MBQC computation, whereas the other copies are used for testing that the correct cluster state was prepared.Footnote 32 This testing phase will involve checking all possible stabilizers of \({\left \vert {G}\right \rangle }\). In particular, the verifier will divide the copies to be tested into two groups, which we shall refer to as the \(\mathsf {X}\textsf {Z}\) group and the \(\mathsf {Z}\textsf {X}\) group. In the \(\mathsf {X}\textsf {Z}\) group of states, the verifier will measure the qubits according to the 2D cluster structure, starting with an \(\mathsf {X}\) operator in the upper left corner of the lattice and then alternating between \(\mathsf {X}\) and \(\mathsf {Z}\). In the \(\mathsf {Z}\textsf {X}\) group, she will measure the dual operators by swapping \(\mathsf {X}\) with \(\mathsf {Z}\). The two cases are illustrated in Fig. 10.

Fig. 10 Stabilizer measurements

Together, the measurement outcomes of the two groups can be used to infer the outcomes of all stabilizer measurements defined by the \(K_{v}\) operators. For instance, given that the measurement outcomes for the qubits take values \(\pm 1\), to compute the outcome of a \(K_{v}\) measurement, for some node v that is measured with \(\mathsf {X}\), the verifier simply takes the product of the measurement outcomes for all nodes in \(\{v\} \cup N_{G}(v)\). These tests allow the verifier to certify that the prover is indeed preparing copies of the state \({\left \vert {G}\right \rangle }\). She can then use one of these copies to run the computation. Since the prover does not know which state the verifier will use for the computation, any deviation he implements has a high chance of being detected by one of the verifier’s tests. Hence, the protocol works as follows:

  • (1) The verifier chooses an input x and a quantum computation \(\mathcal {C}\).

  • (2) She instructs the prover to prepare \(2k + 1\) copies of a 2D cluster state, \({\left \vert {G}\right \rangle }\), for some constant k, and send all of the qubits, one at a time, to the verifier.

  • (3) The verifier randomly picks one copy to run the computation of \(\mathcal {C}\) on x in an MBQC fashion. The remaining \(2k\) copies are randomly divided into the \(\mathsf {X}\mathsf {Z}\) group and the \(\mathsf {Z}\mathsf {X}\) group and measured, as described above, so as to check the stabilizers of \(\left \vert {G}\right \rangle \).

  • (4) If all stabilizer measurement outcomes are successful (i.e. produced the outcome \(+ 1\)), then the verifier accepts the outcome of the computation, otherwise she rejects.
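To make the testing phase concrete, here is a minimal Python sketch (our own illustration, not part of the protocol specification in [31]) of how the verifier infers a \(K_{v}\) outcome from the \(\pm 1\) single-qubit results on one tested copy; the \(2 \times 2\) lattice, the sublattice assignment and the outcomes are all hypothetical.

```python
# Illustrative sketch: inferring K_v stabilizer outcomes from the +/-1
# results on one tested copy. In the XZ-group pattern, v is measured
# with X and its neighbours with Z; the lattice below is a toy 2x2 grid.

from math import prod

# Adjacency of a hypothetical 2x2 grid graph.
neighbours = {
    (0, 0): [(0, 1), (1, 0)],
    (0, 1): [(0, 0), (1, 1)],
    (1, 0): [(0, 0), (1, 1)],
    (1, 1): [(0, 1), (1, 0)],
}

# Sublattice measured with X in the XZ group (the rest are measured with Z).
x_vertices = [(0, 0), (1, 1)]

# Example +/-1 outcomes from one copy, consistent with an honest |G>.
outcomes = {(0, 0): +1, (0, 1): -1, (1, 0): -1, (1, 1): +1}

def stabilizer_outcome(v):
    """Inferred K_v value: product of outcomes over {v} union N_G(v)."""
    return outcomes[v] * prod(outcomes[w] for w in neighbours[v])

# For an honestly prepared copy, every inferred K_v outcome is +1.
print(all(stabilizer_outcome(v) == +1 for v in x_vertices))  # True
```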

As with all protocols, completeness follows immediately, since if the prover behaves honestly, the verifier will accept the outcome of the computation. In the case of soundness, Hayashi and Morimae treat the problem as a hypothesis test. In other words, in the testing phase of the protocol the verifier is checking the hypothesis that the prover prepared \(2k + 1\) copies of the state \({\left \vert {G}\right \rangle }\). Hayashi and Morimae then prove the following theorem:

Theorem 5

Let \(1/(2k + 1) \leq \alpha \leq 1\) be the verifier’s confidence level in the testing phase of the measurement-only protocol. Then, the state used by the verifier for the computation, denoted \(\rho \), satisfies:

$$ {\left\langle{G}\right\vert} \rho {\left\vert{G}\right\rangle} \geq 1 - \frac{1}{\alpha(2k + 1)} $$
(60)

This theorem essentially shows that if the verifier accepts in the testing phase, then as the number of copies of \({\left \vert {G}\right \rangle }\) requested by the verifier increases, the state \(\rho \) used by the verifier for the computation becomes close in trace distance to the ideal state, \({\left \vert {G}\right \rangle }\). The confidence level, \(\alpha \), represents the maximum acceptance probability for the verifier, such that the computation state, \(\rho \), does not satisfy (60). Essentially this represents the probability for the verifier to accept a computation state that is far from ideal. Hayashi and Morimae argue that the lower bound, \(\alpha \geq 1/(2k + 1)\), is tight, because if the prover corrupts one of the \(2k + 1\) states sent to the verifier, there is a \(1/(2k + 1)\) chance that the corrupted state is the one used for the computation, and is hence never tested, in which case the verifier accepts.

If one now denotes with C the POVM that the verifier applies on the computation state in order to perform the computation of \(\mathcal {C}\), then it is the case that:

$$ \lvert Tr(C \rho) - Tr(C {\left\vert{G}\right\rangle}{\left\langle{G}\right\vert}) \rvert \leq \frac{1}{\sqrt{\alpha(2k + 1)}} $$
(61)

What this means is that the distribution of measurement outcomes for the state \(\rho \), sent by the prover in the computation run, is almost indistinguishable from the distribution of measurement outcomes for the ideal state \({\left \vert {G}\right \rangle }\). The soundness of the protocol is therefore upper bounded by \(\frac {1}{\sqrt {\alpha (2k + 1)}}\). This implies that to achieve soundness below \(\epsilon \), for some \(\epsilon > 0\), the number of copies that the prover would have to prepare scales as \(O \left (\frac {1}{\alpha } \cdot \frac {1}{\epsilon ^{2}} \right )\).
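As a quick numerical illustration of this scaling (our own arithmetic, with arbitrary toy values for \(\alpha\) and \(\epsilon\)), one can compute the smallest number of copies \(2k+1\) for which the soundness bound drops below \(\epsilon\):

```python
# Toy illustration: smallest number of copies 2k+1 such that the soundness
# bound 1/sqrt(alpha*(2k+1)) is at most eps, i.e. alpha*(2k+1) >= 1/eps^2.

from math import ceil

def copies_needed(alpha, eps):
    total = ceil(1.0 / (alpha * eps ** 2))
    return total if total % 2 == 1 else total + 1  # round up to the 2k+1 form

print(copies_needed(alpha=0.5, eps=0.1))  # 201 copies for these toy values
```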

In terms of the quantum capabilities of the verifier, she only requires a single qubit measurement device capable of measuring the observables: \(\mathsf {X}, \textsf {Y}, \textsf {Z}, (\textsf {X} + \textsf {Y})/\sqrt {2}, (\textsf {X} - \textsf {Y})/\sqrt {2}\). Recently, however, Morimae, Takeuchi and Hayashi have proposed a similar protocol which uses hypergraph states [32]. These states have the property that one can perform universal quantum computations by measuring only the Pauli observables (\(\mathsf {X}\), \(\mathsf {Y}\) and \(\mathsf {Z}\)). Hypergraph states are generalizations of graph states in which the vertices of the graph are linked by hyperedges, which can connect more than two vertices. Hence, the entangling of qubits is done with a generalized \(\mathsf {CZ}\) operation involving multiple qubits. The protocol itself is similar to the one from [31], as the prover is required to prepare many copies of a hypergraph state and send them to the verifier. The verifier will then test all but one of these states using stabilizer measurements and use the remaining one to perform the MBQC computation. For a computation, \(\mathcal {C}\), the protocol has completeness lower bounded by \(1 - |\mathcal {C}| e^{-|\mathcal {C}|}\) and soundness upper bounded by \(1/\sqrt {|\mathcal {C}|}\). The communication complexity is higher than the previous measurement-only protocol, as the prover needs to send \(O(|\mathcal {C}|^{21})\) copies of the \(O(|\mathcal {C}|)\)-qubit graph state, leading to a total communication cost of \(O(|\mathcal {C}|^{22})\). We end with the following result:

Theorem 6

The measurement-only protocols are receive-and-measure \(\mathsf {QPIP}\) protocols having an inverse polynomial gap between completeness and soundness.

3.2 Post Hoc Verification

The protocols we have reviewed so far have all been based on cryptographic primitives. There were reasons to believe, in fact, that any quantum verification protocol would have to use some form of encryption or hiding. This is due to the parallels between verification and authentication, which were outlined in Section 2. However, it was shown that this is not the case when Morimae and Fitzsimons, and independently Hangleiter et al., proposed a protocol for post hoc quantum verification [29, 30]. The name “post hoc” refers to the fact that the protocol is non-interactive: it requires only a single round of back and forth communication between the prover and the verifier. Moreover, verification is performed after the computation has been carried out. It should be mentioned that the first post hoc protocol was proposed in [22], by Fitzsimons and Hajdušek; however, that protocol utilizes multiple quantum provers, and we review it in Section 4.3.

In this section, we will present the post hoc verification approach, referred to as 1S-Post-hoc, from the perspective of the Morimae and Fitzsimons paper [29]. The reason for choosing their approach, over that of Hangleiter et al., is that the entanglement-based post hoc protocols, from Section 4.3, are also described using similar terminology to the Morimae and Fitzsimons paper. The protocol of Hangleiter et al. is essentially identical to the Morimae and Fitzsimons one, except it is presented from the perspective of certifying the ground state of a gapped, local Hamiltonian. Their certification procedure is then used to devise a verification protocol for a class of quantum simulation experiments, with the purpose of demonstrating a quantum computational advantage [30].

The starting point is the complexity class \(\mathsf {QMA}\), for which we have stated the definition in Section A. Recall that one can think of \(\mathsf {QMA}\) as the class of problems for which the solution can be checked by a \(\mathsf {BQP}\) verifier receiving a quantum state \({\left \vert {\psi }\right \rangle }\), known as a witness, from a prover. We also stated the definition of the k-local Hamiltonian problem, a complete problem for the class \(\mathsf {QMA}\), in Definition 9. We mentioned that for \(k = 2\) the problem is \(\mathsf {QMA}\)-complete [64]. For the post hoc protocol, Morimae and Fitzsimons consider a particular type of 2-local Hamiltonian known as an \(\mathsf {X}\textsf {Z}\)-Hamiltonian.

To define an \(\mathsf {X}\textsf {Z}\)-Hamiltonian we introduce some helpful notation. Consider an n-qubit operator S, which we shall refer to as an \(\mathsf {X}\textsf {Z}\)-term, such that \(S = \bigotimes _{j = 1}^{n} P_{j}\), with \(P_{j} \in \{I, \textsf {X}, \textsf {Z}\}\). Denote \(w_{X}(S)\) as the \(\mathsf {X}\)-weight of S, representing the total number of j’s for which \(P_{j} = \textsf {X}\). Similarly denote \(w_{Z}(S)\) as the \(\mathsf {Z}\)-weight of S. An \(\mathsf {X}\textsf {Z}\)-Hamiltonian is then a 2-local Hamiltonian of the form \(H = {\sum }_{i} a_{i} S_{i}\), where the \(a_{i}\)’s are real numbers and the \(S_{i}\)’s are \(\mathsf {X}\textsf {Z}\)-terms having \(w_{X}(S_{i}) + w_{Z}(S_{i}) \leq 2\).

The 1S-Post-hoc protocol starts with the observation that \(\mathsf {BQP} \subseteq \mathsf {QMA}\). This means that any problem in \(\mathsf {BQP}\) can be viewed as an instance of the 2-local Hamiltonian problem. Therefore, for any language \(L \in \mathsf {BQP}\) and input x, there exists an \(\mathsf {X}\textsf {Z}\)-Hamiltonian, H, such that the smallest eigenvalue of H is less than a when \(x \in L\), or larger than b when \(x \not \in L\), where a and b are a pair of numbers satisfying \(b - a \geq 1/poly(|x|)\). Hence, the lowest energy eigenstate of H (also referred to as its ground state), denoted \({\left \vert {\psi }\right \rangle }\), is a quantum witness for \(x \in L\). In a \(\mathsf {QMA}\) protocol, the prover would be instructed to send this state to the verifier. The verifier then performs a measurement on \({\left \vert {\psi }\right \rangle }\) to estimate its energy, accepting if the estimate is below a and rejecting otherwise. However, we are interested in a verification protocol for \(\mathsf {BQP}\) problems where the verifier has minimal quantum capabilities. This means that there will be two requirements: the verifier can only perform single-qubit measurements; the prover is restricted to \(\mathsf {BQP}\) computations. The 1S-Post-hoc protocol satisfies both of these constraints.

The first requirement is satisfied because estimating the energy of a quantum state, \({\left \vert {\psi }\right \rangle }\), with respect to an \(\mathsf {X}\textsf {Z}\)-Hamiltonian H, can be done by measuring one of the observables \(S_{i}\) on the state \({\left \vert {\psi }\right \rangle }\). Specifically, it is shown in [65] that if one chooses the local term \(S_{i}\) according to the probability distribution given by the normalized terms \(|a_{i}|\), and measures \({\left \vert {\psi }\right \rangle }\) with the \(S_{i}\) observable, this provides an estimate for the energy of \({\left \vert {\psi }\right \rangle }\). Since H is an \(\mathsf {X}\textsf {Z}\)-Hamiltonian, this entails performing at most two measurements, each of which can be either an \(\mathsf {X}\) measurement or a \(\mathsf {Z}\) measurement. This implies that the verifier need only perform single-qubit measurements.
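The following toy sketch (our own illustration, not the construction from [65]) shows this sampling-based energy estimation for a hypothetical two-qubit \(\mathsf{X}\mathsf{Z}\)-Hamiltonian; for simplicity, the measurement outcomes are simulated from the exact expectation values rather than obtained from a device:

```python
# Illustrative sketch: estimate the energy of |psi> under a toy 2-qubit
# XZ-Hamiltonian by sampling a local term S_i with probability
# |a_i| / sum_j |a_j| and averaging sign(a_i) * outcome * sum_j |a_j|.

import numpy as np

rng = np.random.default_rng(0)
I2 = np.eye(2); X = np.array([[0.0, 1], [1, 0]]); Z = np.diag([1.0, -1.0])

terms = [(0.5, np.kron(X, Z)), (-0.8, np.kron(Z, I2))]  # hypothetical H
psi = np.array([1.0, 0, 0, 0])                          # |00>, for illustration

norm = sum(abs(a) for a, _ in terms)
probs = [abs(a) / norm for a, _ in terms]

def energy_estimate(n_samples):
    total = 0.0
    for _ in range(n_samples):
        i = rng.choice(len(terms), p=probs)
        a, S = terms[i]
        exp_val = psi @ S @ psi                  # <psi|S_i|psi>
        outcome = 1 if rng.random() < (1 + exp_val) / 2 else -1
        total += np.sign(a) * outcome * norm     # unbiased estimator of <H>
    return total / n_samples

exact = sum(a * (psi @ S @ psi) for a, S in terms)
print(energy_estimate(20000), "vs exact", exact)  # both ~ -0.8
```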

For the second requirement, one needs to show that for any \(\mathsf {BQP}\) computation, there exists an \(\mathsf {X}\textsf {Z}\)-Hamiltonian such that the ground state can be prepared by a polynomial-size quantum circuit. Suppose the computation that the verifier would like to delegate is denoted as \(\mathcal {C}\) and the input for this computation is x. Given what we have mentioned above, regarding the local Hamiltonian problem, it follows that there exists an \(\mathsf {X}\textsf {Z}\)-Hamiltonian H and numbers a and b, with \(b - a \geq 1/poly(|x|)\), such that if \(\mathcal {C}\) accepts x with high probability then the ground state of H has energy below a, otherwise it has energy above b. It was shown in [64, 66, 67] that starting from \(\mathcal {C}\) and x one can construct an \(\mathsf {X}\textsf {Z}\)-Hamiltonian satisfying this property and which also has a ground state that can be prepared by a \(\mathsf {BQP}\) machine. The ground state is known as the Feynman-Kitaev clock state. To describe this state, suppose the circuit \(\mathcal {C}\) has T gates (i.e. \(T=|\mathcal {C}|\)) and that these gates, labelled in the order in which they are applied, are denoted \(\{U_{i}\}_{i = 0}^{T}\). For \(i = 0\) we assume \(U_{0} = I\). The Feynman-Kitaev state is the following:

$$ {\left\vert{\psi}\right\rangle} = \frac{1}{\sqrt{T + 1}} \sum\limits_{t = 0}^{T} U_{t} U_{t-1} ... U_{0} {\left\vert{x}\right\rangle} {\left\vert{1^{t} 0^{T - t}}\right\rangle} $$
(62)

This is essentially a superposition over all time steps, t, of the time evolved state in the circuit \(\mathcal {C}\). Hence, the state can be prepared by a \(\mathsf {BQP}\) machine. The \(\mathsf {X}\textsf {Z}\)-Hamiltonian is then a series of 2-local constraints that are all simultaneously satisfied by this state.
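To make (62) concrete, here is a small numerical sketch (our own) that builds the Feynman-Kitaev state for a hypothetical one-qubit circuit with two gates; the gates and the input are arbitrary illustrative choices:

```python
# Illustrative sketch of the Feynman-Kitaev clock state (62) for a toy
# one-qubit circuit C = (U_1 = H, U_2 = T) on input x = 0. The clock
# register uses the unary encoding |1^t 0^(T-t)>.

import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
T = np.diag([1, np.exp(1j * np.pi / 4)])
gates = [np.eye(2), H, T]                    # U_0 = I, then U_1, U_2
T_steps = len(gates) - 1

def clock_basis_state(t, T_steps):
    """Unary clock state |1^t 0^(T-t)> as a 2^T-dimensional vector."""
    bits = "1" * t + "0" * (T_steps - t)
    vec = np.zeros(2 ** T_steps); vec[int(bits, 2)] = 1.0
    return vec

x = np.array([1.0, 0.0])                     # input |0>
psi = np.zeros(2 * 2 ** T_steps, dtype=complex)
state = x.astype(complex)
for t in range(T_steps + 1):
    state = gates[t] @ state                 # U_t ... U_0 |x>
    psi += np.kron(state, clock_basis_state(t, T_steps))
psi /= np.sqrt(T_steps + 1)

print(np.linalg.norm(psi))                   # 1.0: a valid quantum state
```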

We can now present the steps of the 1S-Post-hoc protocol:

  • (1) The verifier chooses a quantum circuit, \(\mathcal {C}\), and an input x to delegate to the prover.

  • (2) The verifier computes the terms \(a_{i}\) of the \(\mathsf {X}\textsf {Z}\)-Hamiltonian, \(H = {\sum }_{i} a_{i} S_{i} \), having as a ground state the Feynman-Kitaev state associated with \(\mathcal {C}\) and x, denoted \(\left \vert {\psi }\right \rangle \).

  • (3) The verifier instructs the prover to send her \({\left \vert {\psi }\right \rangle }\), qubit by qubit.

  • (4) The verifier chooses one of the \(\mathsf {X}\textsf {Z}\)-terms \(S_{i}\), according to the normalized distribution \(\{|a_{i}|\}_{i}\), and measures it on \({\left \vert {\psi }\right \rangle }\). She accepts if the measurement indicates the energy of \(\left \vert {\psi }\right \rangle \) is below a.

Note that the protocol is not blind, since the verifier informs the prover about both the computation \(\mathcal {C}\) and the input x.

As mentioned, the essential properties that any \(\mathsf {QPIP}\) protocol should satisfy are completeness and soundness. For the post hoc protocol, these follow immediately from the local Hamiltonian problem. Specifically, we know that there exist a and b such that \(b - a \geq 1/poly(|x|)\). When \(\mathcal {C}\) accepts x with high probability, the state \(\left \vert {\psi }\right \rangle \) will be an eigenstate of H having eigenvalue smaller than a. Otherwise, any state, when measured under the H observable, will have an energy greater than b. Of course, the verifier is not computing the exact energy of \(\left \vert {\psi }\right \rangle \) under H, merely an estimate. This is because she is measuring only one local term from H. However, it is shown in [29] that the precision of her estimate is also inverse polynomial in \(|x|\). Therefore:

Theorem 7

1S-Post-hoc is a receive-and-measure \(\mathsf {QPIP}\) protocol having an inverse polynomial gap between completeness and soundness.

The only quantum capability of the verifier is the ability to measure single qubits in the computational and Hadamard bases (i.e. measuring the \(\mathsf {Z}\) and \(\mathsf {X}\) observables). The protocol, as described, suggests that it is sufficient for the verifier to measure only two qubits. However, since the completeness-soundness gap decreases with the size of the input, in practice one would perform a sequential repetition of this protocol in order to boost this gap. It is easy to see that, for a protocol with a completeness-soundness gap of \(1/p(|x|)\), for some polynomial p, in order to achieve a constant gap of at least \(1 - \epsilon \), where \(\epsilon > 0\), the protocol needs to be repeated \(O(p(|x|) \cdot log(1/\epsilon ))\) times. It is shown in [30, 68] that \(p(|x|)\) is \(O(|\mathcal {C}|^{2})\), hence the protocol should be repeated \(O(|\mathcal {C}|^{2} \cdot log(1/\epsilon ))\) times and this also gives us the total number of measurements for the verifier.Footnote 33 Note, however, that this assumes that each run of the protocol is independent of the previous one (in other words, that the states sent by the prover to the verifier in each run are uncorrelated). Therefore, the \(O(|\mathcal {C}|^{2} \cdot log(1/\epsilon ))\) overhead should be taken as an i.i.d. (independent and identically distributed states) estimate. This is, in fact, mentioned explicitly in the Hangleiter et al. result, where they explain that the prover should prepare “a number of independent and identical copies of a quantum state” [30]. Thus, when considering the most general case of a malicious prover that does not obey the i.i.d. constraint, one requires a more thorough analysis involving non-independent runs, as is done in the measurement-only protocol [31] or the steering-based VUBQC protocol [33].
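As a toy illustration of this repetition count (our own arithmetic; the hidden constant and the example values are arbitrary):

```python
# Number of sequential repetitions under the i.i.d. assumption:
# O(|C|^2 * log(1/eps)), with an unspecified constant c (here c = 1).

from math import ceil, log

def repetitions(circuit_size, eps, c=1.0):
    return ceil(c * circuit_size ** 2 * log(1 / eps))

print(repetitions(circuit_size=100, eps=0.01))  # 46052 for these toy values
```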

3.3 Summary of Receive-and-Measure Protocols

Receive-and-measure protocols are quite varied in the way in which they perform verification. The measurement-only protocols use stabilizers to test that the prover prepared a correct graph state and then have the verifier use this state to perform an MBQC computation. The 1S-Post-hoc protocol relies on the entirely different approach of estimating the ground state energy of a local Hamiltonian. Lastly, the steering-based VUBQC protocol, which we detail in Section 4.1, differs from these other two approaches by having the verifier remotely prepare the VUBQC states on the prover’s side and then perform trap-based verification. Having such varied techniques leads to significant differences in the total number of measurements performed by the verifier, as we illustrate in Table 2.

Table 2 Comparison of receive-and-measure protocols

Of course, the number of measurements is not the only metric we use in comparing the protocols. Another important aspect is how many observables the verifier should be able to measure. The 1S-Post-hoc protocol is optimal in that sense, since the verifier need only measure \(\mathsf {X}\) and \(\mathsf {Z}\) observables. Next is the hypergraph state measurement-only protocol which requires all three Pauli observables. Lastly, the other two protocols require the verifier to be able to measure the \(\mathsf {X}\textsf {Y}\)-plane observables \(\mathsf {X}\), \(\mathsf {Y}\), \((\textsf {X}+\textsf {Y})/\sqrt {2}\) and \((\textsf {X}-\textsf {Y})/\sqrt {2}\) plus the \(\mathsf {Z}\) observable.

Finally, we compare the protocols in terms of blindness, which we have seen plays an important role in prepare-and-send protocols. For receive-and-measure protocols, the 1S-Post-hoc protocol is the only one that is not blind. While this is our first example of a verification protocol that does not hide the computation and input from the prover, it is not the only one. In the next section, we review two other post hoc protocols that are also not blind.

4 Entanglement-Based Protocols

The protocols discussed in the previous sections have been either prepare-and-send or receive-and-measure protocols. Both types employ a verifier with some minimal quantum capabilities interacting with a single \(\mathsf {BQP}\) prover. In this section we explore protocols which utilize multiple non-communicating provers that share entanglement and a fully classical verifier. The main idea will be for the verifier to distribute a quantum computation among many provers and verify its correct execution from correlations among the responses of the provers.

We classify the entanglement-based approaches as follows:

  1. Section 4.1: three protocols which make use of the CHSH game, the first one developed by Reichardt et al. [18], the second by Gheorghiu et al. [19] and the third by Hajdušek, Pérez-Delgado and Fitzsimons.

  2. Section 4.2: a protocol based on self-testing graph states, developed by McKague [21].

  3. Section 4.3: two post hoc protocols, one developed by Fitzsimons and Hajdušek [22] and another by Natarajan and Vidick [23].

Unlike the previous sections where, for the most part, each protocol was based on a different underlying idea for performing verification, entanglement-based protocols are either based on some form of rigid self-testing or on testing local Hamiltonians via the post hoc approach. In fact, as we will see, even the post hoc approaches employ self-testing. Of course, there are distinguishing features within each of these broad categories, but due to their technical specificity, we choose to label the protocols in this section by the initials of the authors.

Since self-testing plays such a crucial role in entanglement-based protocols, let us provide a brief description of the concept. The idea of self-testing was introduced by Mayers and Yao in [69], and is concerned with characterising the shared quantum state and observables of n non-communicating players in a non-local game. A non-local game is one in which a referee (which we will later identify with the verifier) will ask questions to the n players (which we will identify with the provers) and, based on their responses, decide whether they win the game or not. Importantly, we are interested in games where there is a quantum strategy that outperforms a classical strategy. By a classical strategy, we mean that the players can only produce local correlations.Footnote 34 Conversely, in a quantum strategy, the players are allowed to share entanglement in order to produce non-local correlations and achieve a higher win rate. Even so, there is a limit to how well the players can perform in the game. In other words, the optimal quantum strategy has a certain probability of winning the game, which may be less than 1. Self-testing results are concerned with non-local games in which the optimal quantum strategy is unique, up to local isometries on the players’ systems. This means that if the referee observes a near maximal win rate for the players in the game, she can conclude that they are using the optimal strategy and can therefore characterise their shared state and their observables, up to local isometries. More formally, we give the definition of self-testing, adapted from [70] and using notation similar to that of [23]:

Definition 4 (Self-testing)

Let G denote a game involving n non-communicating players denoted \(\{ P_{i} \}_{i = 1}^{n}\). Each player will receive a question from a set Q and reply with an answer from a set A. Thus, each \(P_{i}\) can be viewed as a mapping from Q to A. There exists some condition establishing which combinations of answers to the questions constitute a win for the game. Let \(\omega ^{*}(G)\) denote the maximum winning probability of the game for players obeying quantum mechanics.

The mappings \(P_{i}\) are implemented by a measurement strategy \(S = ({\left \vert {\psi }\right \rangle }, \{ {O_{i}^{j}} \}_{ij})\) consisting of a state \({\left \vert {\psi }\right \rangle }\) shared among the n players and local observables \(\{ {O_{i}^{j}} \}_{j}\), for each player \(P_{i}\). We say that the game G self-tests the strategy S, with robustness \(\epsilon = \epsilon (\delta )\), for some \(\delta > 0\), if, for any strategy \(\tilde {S} = ({\left \vert {\tilde {\psi }}\right \rangle }, \{ \tilde {O}_{i}^{j} \}_{ij})\) achieving winning probability \(\omega ^{*}(G) - \epsilon \) there exists a local isometry \({\Phi } = \bigotimes _{i = 1}^{n} {\Phi }_{i}\) and a state \({\left \vert {junk}\right \rangle }\) such that:

$$ TD({\Phi}({\left\vert{\tilde{\psi}}\right\rangle}), {\left\vert{junk}\right\rangle}{\left\vert{\psi}\right\rangle}) \leq \delta $$
(63)

and for all j:

$$ TD \left( {\Phi} \left( \bigotimes_{i = 1}^{n} \tilde{O}_{i}^{j} {\left\vert{\tilde{\psi}}\right\rangle} \right), {\left\vert{junk}\right\rangle} \bigotimes_{i = 1}^{n} {O_{i}^{j}} {\left\vert{\psi}\right\rangle} \right) \leq \delta $$
(64)

Note that TD denotes trace distance, and is defined in Section A.

4.1 Verification Based on CHSH Rigidity

RUV Protocol

In [71], Tsirelson gave an upper bound for the total amount of non-local correlations shared between two non-communicating parties, as predicted by quantum mechanics. In particular, consider a two-player game consisting of Alice and Bob. Alice is given a binary input, labelled a, and Bob is given a binary input, labelled b. They each must produce a binary output and we label Alice’s output as x and Bob’s output as y. Alice and Bob win the game iff \(a \cdot b = x \oplus y\). The two are not allowed to communicate during the game; however, they are allowed to share classical or quantum correlations (in the form of entangled states). This defines a non-local game known as the CHSH game [72]. The optimal classical strategy for winning the game achieves a success probability of \(75\%\), whereas, as Tsirelson proved, any quantum strategy achieves a success probability of at most \(cos^{2}(\pi /8) \approx 85.3\%\). This maximal winning probability, in the quantum case, can in fact be achieved by having Alice and Bob do the following. First, they will share the state \(\left \vert {{\Phi }_{+}}\right \rangle = (\left \vert {00}\right \rangle + \left \vert {11}\right \rangle ) / \sqrt {2}\). If Alice receives input \(a = 0\), then she will measure the Pauli \(\mathsf {X}\) observable on her half of the \(\left \vert {{\Phi }_{+}}\right \rangle \) state, otherwise (when \(a = 1\)) she measures the Pauli \(\mathsf {Z}\) observable. Bob, on input \(b = 0\), measures \((\textsf {X}+\textsf {Z})/\sqrt {2}\) on his half of the Bell pair, and on input \(b = 1\), he measures \((\textsf {X}-\textsf {Z})/\sqrt {2}\). We refer to this strategy as the optimal quantum strategy for the CHSH game.
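This optimal strategy can be checked numerically. The sketch below (our own illustration) computes the win rate directly from the correlators \(\langle A_{a} \otimes B_{b} \rangle\) on \(\left\vert{\Phi_{+}}\right\rangle\), recovering \(cos^{2}(\pi/8)\):

```python
# Numerical check: the ideal CHSH observables on |Phi+> win with
# probability cos^2(pi/8) ~ 0.8536.

import numpy as np

X = np.array([[0, 1], [1, 0]]); Z = np.diag([1.0, -1.0])
phi_plus = np.array([1.0, 0, 0, 1.0]) / np.sqrt(2)

A = {0: X, 1: Z}                                         # Alice's observables
B = {0: (X + Z) / np.sqrt(2), 1: (X - Z) / np.sqrt(2)}   # Bob's observables

win = 0.0
for a in (0, 1):
    for b in (0, 1):
        corr = phi_plus @ np.kron(A[a], B[b]) @ phi_plus  # <A_a (x) B_b>
        # With +/-1 outcomes, x XOR y = 0 iff the product of outcomes is +1,
        # so P(win) = (1 + corr)/2 if a*b = 0, and (1 - corr)/2 if a*b = 1.
        p = (1 + corr) / 2 if a * b == 0 else (1 - corr) / 2
        win += p / 4                                      # uniform inputs

print(win, np.cos(np.pi / 8) ** 2)                        # both ~0.8536
```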

McKague, Yang and Scarani proved a converse of Tsirelson’s result, by showing that if one observes two players winning the CHSH game with a near \(cos^{2}(\pi /8)\) probability, then it can be concluded that the players’ shared state is close to a Bell pair and their observables are close to the ideal observables of the optimal strategy (Pauli \(\mathsf {X}\) and \(\mathsf {Z}\), for Alice, and \((\textsf {X} + \textsf {Z})/\sqrt {2}\) and \((\textsf {X} - \textsf {Z})/\sqrt {2}\), for Bob) [73]. This is effectively a self-test for a Bell pair. Reichardt, Unger and Vazirani then proved a more general result for self-testing a tensor product of multiple Bell states as well as the observables acting on these states [18].Footnote 35 It is this latter result that is relevant for the RUV protocol so we give a more formal statement for it:

Theorem 8

Suppose two players, Alice and Bob, are instructed to play n sequential CHSH games. Let the inputs, for Alice and Bob, be given by the n-bit strings \(\mathbf{a}, \mathbf{b} \in \{0,1\}^{n}\). Additionally, let \(S = ({\left \vert {\tilde {\psi }}\right \rangle }, \tilde {A}(\mathbf {a}), \tilde {B}(\mathbf {b}))\) be the strategy employed by Alice and Bob in playing the n CHSH games, where \({\left \vert {\tilde {\psi }}\right \rangle }\) is their shared state and \(\tilde {A}(\mathbf {a})\) and \(\tilde {B}(\mathbf {b})\) are their respective observables, for inputs \(\mathbf {a}, \mathbf {b}\).

Suppose Alice and Bob win at least \(n(1-\epsilon )cos^{2}(\pi /8)\) games, with \(\epsilon = poly(\delta , 1/n)\) for some \(\delta > 0\), such that \(\epsilon \rightarrow 0\) as \(\delta \rightarrow 0\) or \(n \rightarrow \infty \). Then, there exist a local isometry \({\Phi } = {\Phi }_{A} \otimes {\Phi }_{B}\) and a state \(\left \vert {junk}\right \rangle \) such that:

$$ TD({\Phi}({\left\vert{\tilde{\psi}}\right\rangle}), {\left\vert{junk}\right\rangle} {\left\vert{{\Phi}_{+}}\right\rangle}^{\otimes n}) \leq \delta $$
(65)

and:

$$ TD \left( {\Phi} \left( \tilde{A}(\mathbf{a}) \otimes \tilde{B}(\mathbf{b}) {\left\vert{\tilde{\psi}}\right\rangle} \right), {\left\vert{junk}\right\rangle} A(\mathbf{a}) \otimes B(\mathbf{b}) {\left\vert{{\Phi}_{+}}\right\rangle}^{\otimes n} \right) \leq \delta $$
(66)

where \(A(\mathbf {a}) = \bigotimes \limits _{i = 1}^{n} P(\mathbf {a(i)})\), \(B(\mathbf {b}) = \bigotimes \limits _{i = 1}^{n} Q(\mathbf {b(i)})\) and \(P(0)=\textsf {X}\), \(P(1)=\textsf {Z}\), \(Q(0)=(\mathsf {X}+\mathsf {Z})/\sqrt {2}\), \(Q(1)=(\mathsf {X}-\mathsf {Z})/\sqrt {2}\).

What this means is that, up to a local isometry, the players share a state which is close in trace distance to a tensor product of Bell pairs and their measurements are close to the ideal measurements. This result, known as CHSH game rigidity, is the key idea for performing multi-prover verification using a classical verifier. We will refer to the protocol in this section as the RUV protocol.

Before giving the description of the protocol, let us first look at an example of gate teleportation, which we also mentioned when presenting the Poly-QAS VQC protocol of Section 2.1. Suppose two parties, Alice and Bob, share a Bell state \(\left \vert {{\Phi }_{+}}\right \rangle \). Bob applies a unitary U on his share of the entangled state so that the joint state becomes \((I \otimes U) \left \vert {{\Phi }_{+}}\right \rangle \). Alice now takes an additional qubit, labelled \(\left \vert {\psi }\right \rangle \), and measures this qubit together with her half of the \(\left \vert {{\Phi }_{+}}\right \rangle \) state in the Bell basis given by the states:

$${\left\vert{{\Phi}_{+}}\right\rangle} = \frac{{\left\vert{00}\right\rangle} + {\left\vert{11}\right\rangle}}{\sqrt{2}} \;\;\;\; {\left\vert{{\Phi}_{-}}\right\rangle} = \frac{{\left\vert{00}\right\rangle} - {\left\vert{11}\right\rangle}}{\sqrt{2}} $$
$${\left\vert{{\Psi}_{+}}\right\rangle} = \frac{{\left\vert{01}\right\rangle} + {\left\vert{10}\right\rangle}}{\sqrt{2}} \;\;\;\; {\left\vert{{\Psi}_{-}}\right\rangle} = \frac{{\left\vert{01}\right\rangle} - {\left\vert{10}\right\rangle}}{\sqrt{2}} $$

The outcome of this measurement will be two classical bits which we label \(b_{1}\) and \(b_{2}\). After the measurement, the state on Bob’s system will be \(U \textsf {X}^{b_{1}} \textsf {Z}^{b_{2}} \left \vert {\psi }\right \rangle \). Essentially, Bob has a one-time padded version of \(\left \vert {\psi }\right \rangle \) with the U gate applied.
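The following numerical sketch (our own) verifies this gate-teleportation identity for an arbitrary illustrative choice of U and \(\left\vert{\psi}\right\rangle\), checking, for each Bell outcome, that Bob's post-measurement state equals \(U \textsf{X}^{b_{1}} \textsf{Z}^{b_{2}} \left\vert{\psi}\right\rangle\) up to a global phase:

```python
# Illustrative check of gate teleportation: project qubits 1,2 of
# |psi>_1 (x) (I (x) U)|Phi+>_{23} onto each Bell state and compare Bob's
# qubit with U X^b1 Z^b2 |psi>.

import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)
I2 = np.eye(2, dtype=complex)

U = np.array([[1, 0], [0, np.exp(1j * np.pi / 4)]])  # arbitrary example gate
psi = np.array([0.6, 0.8], dtype=complex)            # arbitrary input qubit

phi_plus = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
bell = {                                             # (b1, b2) -> Bell state
    (0, 0): phi_plus,                                            # Phi+
    (1, 0): np.array([0, 1, 1, 0], dtype=complex) / np.sqrt(2),  # Psi+
    (0, 1): np.array([1, 0, 0, -1], dtype=complex) / np.sqrt(2), # Phi-
    (1, 1): np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2), # Psi-
}

state = np.kron(psi, np.kron(I2, U) @ phi_plus)      # 8-dim joint state
M = state.reshape(4, 2)          # rows: qubits 1,2; column: Bob's qubit 3

for (b1, b2), b in bell.items():
    bob = b.conj() @ M                               # Bob's state, unnormalized
    bob = bob / np.linalg.norm(bob)
    target = U @ np.linalg.matrix_power(X, b1) @ np.linalg.matrix_power(Z, b2) @ psi
    print((b1, b2), np.isclose(abs(np.vdot(bob, target)), 1.0))  # all True
```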

We now describe the RUV protocol. It uses two quantum provers but can be generalized to any number of provers greater than two. Suppose that Alice and Bob are the two provers. They are allowed to share an unbounded amount of quantum entanglement but are not allowed to communicate during the protocol. A verifier will interact classically with both of them in order to delegate and check an arbitrary quantum computation specified by the quantum circuit \(\mathcal {C}\). The protocol consists in alternating randomly between four sub-protocols:

  • CHSH games. In this subprotocol, the verifier will simply play CHSH games with Alice and Bob. To be precise, the verifier will repeatedly instruct Alice and Bob to perform the ideal measurements of the CHSH game. She will collect the answers of the two provers (which we shall refer to as CHSH statistics) and after a certain number of games, will compute the win rate of the two provers. The verifier is interested in the case when Alice and Bob win close to the maximum number of games as predicted by quantum mechanics. Thus, at the start of the protocol she takes \(\epsilon = poly(1/|\mathcal {C}|)\) and accepts the statistics produced by Alice and Bob if and only if they win at least a fraction \((1 - \epsilon )cos^{2}(\pi /8)\) of the total number of games. Using the rigidity result, this implies that Alice and Bob share a state which is close to a tensor product of perfect Bell states (up to a local isometry). This step is schematically illustrated in Fig. 11.

  • State tomography. This time the verifier will instruct Alice to perform the ideal CHSH game measurements, as in the previous case. However, she instructs Bob to measure his halves of the entangled states so that they collapse to a set of resource states which will be used to perform gate teleportation. The resource states are chosen so that they are universal for quantum computation. Specifically, in the RUV protocol, the following resource states are used: \(\{ \mathsf {P}\left \vert {0}\right \rangle , (\mathsf {HP})_{2} \left \vert {{\Phi }_{+}}\right \rangle , (\mathsf {GY})_{2} \left \vert {{\Phi }_{+}}\right \rangle , \textsf {CNOT}_{2,4}\mathsf {P}_{2} \mathsf {Q}_{4} (\left \vert {{\Phi }_{+}}\right \rangle \otimes \left \vert {{\Phi }_{+}}\right \rangle ) : \mathsf {P}, \mathsf {Q} \in \{\textsf {X}, \textsf {Y}, \textsf {Z}, I \} \}\), where \(\mathsf {G} = exp \left (-i \frac {\pi }{8} \textsf {Y}\right )\) and the subscripts indicate on which qubits the operators act. Assuming Alice and Bob do indeed share Bell states, Bob’s measurements will collapse Alice’s states to the same resource states (up to a one-time padding known to the verifier). Alice’s measurements on these states are used to check Bob’s preparation, effectively performing state tomography on the resource states.

  • Process tomography. This subprotocol is similar to the state tomography one, except the roles of Alice and Bob are reversed. The verifier instructs Bob to perform the ideal CHSH game measurements. Alice, on the other hand, is instructed to perform Bell basis measurements on pairs of qubits. As in the previous subprotocol, Bob’s measurement outcomes are used to tomographically check that Alice is indeed performing the correct measurements.

  • Computation. The final subprotocol combines the previous two. Bob is asked to perform the resource preparation measurements, while Alice is asked to perform Bell basis measurements. This effectively makes Alice perform the desired computation through repeated gate teleportation.

Fig. 11 Ideal CHSH game strategy

An important aspect, in proving the correctness of the protocol, is the local similarity of pairs of subprotocols. For instance, Alice cannot distinguish between the CHSH subprotocol and the state tomography one, or between the process tomography one and computation. This is because, in those situations, she is asked to perform the same operations on her side, while being unaware of what Bob is doing. Moreover, since the verifier can test all but the computation part, if Alice deviates there will be a high probability of her deviation being detected. The same is true for Bob. In this way, the verifier can, essentially, enforce that the two players behave honestly and thus perform the correct quantum computation. Note that this is not the same as the blindness property, discussed in relation to the previous protocols. The RUV protocol does, however, possess that property as well. This follows from a more involved argument regarding the way in which the computation by teleportation is performed.

It should be noted that there are only two constraints imposed on the provers: that they cannot communicate once the protocol has commenced and that they produce close to quantum optimal win-rates for the CHSH games. Importantly, there are no constraints on the quantum systems possessed by the provers, which can be arbitrarily large. Similarly, there are no constraints on what measurements they perform or what strategy they use in order to respond to the verifier. In spite of this, the rigidity result shows that for the provers to produce statistics that are accepted by the verifier, they must behave according to the ideal strategy (up to local isometry). Having the ability to fully characterise the provers’ shared state and their strategies in this way is what allows the verifier to check the correctness of the delegated quantum computation. This approach, of giving a full characterisation of the states and observables of the provers, is a powerful technique which is employed by all the other entanglement-based protocols, as we will see.

In terms of practically implementing such a protocol, there are two main considerations: the amount of communication required between the verifier and the provers and the required quantum capabilities of the provers. For the latter, it is easy to see that the RUV protocol requires both provers to be universal quantum computers (i.e. \(\mathsf {BQP}\) machines), having the ability to store multiple quantum states and perform quantum circuits on these states. In terms of the communication complexity, since the verifier is restricted to \(\mathsf {BPP}\), the amount of communication must scale polynomially with the size of the delegated computation. It was computed in [19], that this communication complexity is of the order \(O(|\mathcal {C}|^{c})\), with \(c > 8192\). Without even considering the constant factors involved, this scaling is far too large for any sort of practical implementation in the near future.Footnote 36

There are essentially two reasons for the large exponent in the scaling of the communication complexity. The first, as mentioned by the authors, is that the bounds derived in the rigidity result are not tight and could possibly be improved. The second, and arguably more important, reason stems from the rigidity result itself. In Theorem 8, notice that \(\epsilon = poly(\delta , 1/n)\) and \(\epsilon \rightarrow 0\) as \(n \rightarrow \infty \). We also know that the provers need to win a fraction \((1-\epsilon )cos^{2}(\pi /8)\) of CHSH games, in order to pass the verifier’s checks. Thus, the completeness-soundness gap of the protocol will be determined by \(\epsilon \). But since, for fixed \(\delta \), \(\epsilon \) is essentially inverse polynomial in n, the completeness-soundness gap will also be inverse polynomial in n. Hence, one requires polynomially many repetitions in order to boost the gap to constant.

We conclude with:

Theorem 9

The RUV protocol is an \(\mathsf {MIP^{*}}\) protocol achieving an inverse polynomial gap between completeness and soundness.

GKW Protocol

As mentioned, in the RUV protocol the two quantum provers must be universal quantum computers. One could ask whether this is a necessity or whether there is a way to reduce one of the provers to be non-universal. In a paper by Gheorghiu, Kashefi and Wallden [19] it was shown that the latter option is indeed possible. This leads to a protocol which we shall refer to as the GKW protocol. The protocol is based on the observation that one could use the state tomography subprotocol of RUV in such a way that one prover is remotely preparing single qubit states for the other prover. The preparing prover would then only be required to perform single qubit measurements and, hence, would not need the full capabilities of a universal quantum computer. The specific single qubit states that are chosen can be the ones used in the VUBQC protocol of Section 2.2. This latter prover can then be instructed to perform the VUBQC protocol with these states. Importantly, because the provers are not allowed to communicate, this preserves the blindness requirement of VUBQC. We will refer to the preparing prover as the sender and the remaining prover as the receiver. Once again, we assume the verifier wishes to delegate to the provers the evaluation of some quantum circuit \(\mathcal {C}\).

The protocol, therefore, has a two-step structure:

  • (1) Verified preparation. This part is akin to the state tomography subprotocol of RUV. The verifier is trying to certify the correct preparation of states \(\{ {\left \vert {+_{\theta }}\right \rangle } \}_{\theta }\) and \({\left \vert {0}\right \rangle }\), \({\left \vert {1}\right \rangle }\), where \(\theta \in \{0, \pi /4, ..., 7\pi /4 \}\). Recall that these are the states used in VUBQC. We shall refer to them as the resource states. This is done by self-testing a tensor product of Bell pairs and the observables of the two provers using CHSH games and the rigidity result of Theorem 8.Footnote 37 As in the RUV protocol, the verifier will play multiple CHSH games with the provers. This time, however, each game will be an extended CHSH game (as defined in [18]) in which the verifier will ask each prover to measure an observable from the set \(\{ \mathsf {X}, \mathsf {Y}, \mathsf {Z}, (\mathsf {X} \pm \mathsf {Z})/\sqrt {2}, (\mathsf {Y} \pm \mathsf {Z})/\sqrt {2}, (\mathsf {X} \pm \mathsf {Y})/\sqrt {2} \}\). Alternatively, this can be viewed as the verifier choosing to play one of 6 possible CHSH games defined by the observables in that set.Footnote 38 These observables are sufficient for obtaining the desired resource states. In particular, measuring the \(\mathsf {X}\), \(\mathsf {Y}\), and \((\textsf {X} \pm \textsf {Y}) / \sqrt {2}\) observables on the Bell pairs will collapse the entangled qubits to states of the form \(\{ {\left \vert {+_{\theta }}\right \rangle } \}_{\theta }\), while measuring \(\mathsf {Z}\) will collapse them to \({\left \vert {0}\right \rangle }\), \({\left \vert {1}\right \rangle }\). The verifier accepts if the provers win a fraction \((1-\epsilon )cos^{2}(\pi /8)\) of the CHSH games, where \(\epsilon = poly(\delta , 1/|\mathcal {C}|)\), and \(\delta > 0\) is the desired trace distance between the reduced state on the receiver’s side and the ideal state consisting of the required resource states in tensor product, up to a local isometry (\(\epsilon \rightarrow 0\) as \(\delta \rightarrow 0\) or \(|\mathcal {C}| \rightarrow \infty \)). The verifier will also instruct the sender prover to perform additional measurements so as to carry out the remote preparation on the receiver’s side. This verified preparation is illustrated in Fig. 12.

  • (2) Verified computation. This part involves verifying the actual quantum computation, \(\mathcal {C}\). Once the resource states have been prepared on the receiver’s side, the verifier will perform the VUBQC protocol with that prover as if she had sent him the resource states. She accepts the outcome of the computation if all trap measurements succeed, as in VUBQC.

Fig. 12 Verified preparation

Note the essential difference, in terms of the provers’ requirements, between this protocol and the RUV protocol. In the RUV protocol, both provers had to perform entangling measurements on their side. However, in the GKW protocol, the sender prover is required to only perform single qubit measurements. This means that the sender prover can essentially be viewed as an untrusted measurement device, whereas the receiver is the only universal quantum computer. For this reason, the GKW protocol is also described as a device-independent [75, 76] verification protocol. This stems from comparing it to VUBQC or the receive-and-measure protocols of Section 3, where the verifier had a trusted preparation or measurement device. In this case, the verifier essentially has a measurement device (the sender prover) which is untrusted.

Of course, performing the verified preparation subprotocol and combining it with VUBQC raises some questions. For starters, in the VUBQC protocol, the state sent to the prover is assumed to be an ideal state (i.e. an exact tensor product of states of the form \(\left \vert {+_{\theta }}\right \rangle \) or \(\left \vert {0}\right \rangle \), \(\left \vert {1}\right \rangle \)). However, in this case the preparation stage is probabilistic in nature and therefore the state of the receiver will be \(\delta \)-close to the ideal tensor product state, for some \(\delta > 0\). How is the completeness-soundness gap of the VUBQC protocol affected by this? Stated differently, is VUBQC robust to deviations in the input state? A second aspect is that, since the resource state is prepared by the untrusted sender, even though it is \(\delta \)-close to ideal, it can, in principle, be correlated with the receiving prover’s system. Do these initial correlations affect the security of the protocol?

Both of these issues are addressed in the proofs of the GKW protocol. Firstly, assume that in the VUBQC protocol the prover receives a state which is \(\delta \)-close to ideal and uncorrelated with his private system. Any action of the prover can, in the most general sense, be modelled as a CPTP map. Since CPTP maps can only decrease the trace distance between states, the output of this action will be \(\delta \)-close to the output in the ideal case. It follows from this that the probabilities of the verifier accepting a correct or incorrect result change by at most \(O(\delta )\). As long as \(\delta > 1/poly(|\mathcal {C}|)\) (for a suitably chosen polynomial), the protocol remains a valid \(\mathsf {QPIP}\) protocol.

Secondly, assume now that the \(\delta \)-close resource state is correlated with the prover’s private system, in VUBQC. It would seem that the prover could, in principle, exploit this correlation in order to convince the verifier to accept an incorrect outcome. However, it is shown that this is, in fact, not the case, as long as the correlations are small. Mathematically, let \(\rho _{VP}\) be the state comprising of the resource state and the prover’s private system. In the ideal case, this state should be a product state of the form \(\rho _{V} \otimes \rho _{P}\), where \(\rho _{V} = {\left \vert {\psi _{id}}\right \rangle }{\left \langle {\psi _{id}}\right \vert }\) is the ideal resource state and \(\rho _{P}\) the prover’s system. However, in the general case the state can be entangled. In spite of this, it is known that:

$$ TD(Tr_{P}(\rho_{VP}), {\left\vert{\psi_{id}}\right\rangle}{\left\langle{\psi_{id}}\right\vert}) \leq \delta $$
(67)

Using a result known as the gentle measurement lemma [18], one can show that this implies:

$$ TD(\rho_{VP}, {\left\vert{\psi_{id}}\right\rangle}{\left\langle{\psi_{id}}\right\vert} \otimes Tr_{V}(\rho_{VP})) \leq O(\sqrt{\delta}) $$
(68)

In other words, the joint system of resource states and the prover’s private memory is \(O(\sqrt {\delta })\)-close to the ideal system. Once again, as long as \(\delta > 1/poly(|\mathcal {C}|)\) (for a suitably chosen polynomial), the protocol is a valid \(\mathsf {QPIP}\) protocol.

These two facts essentially show that the GKW protocol is a valid entanglement-based protocol, as long as sufficient tests are performed in the verified preparation stage so that the system of resource states is close to the ideal resource states. As with the RUV protocol, this implies a large communication overhead, with the communication complexity being of the order \(O(|\mathcal {C}|^{c})\), where \(c > 2048\). One therefore has:

Theorem 10

The GKW protocol is an \(\mathsf {MIP^{*}}\) protocol achieving an inverse polynomial gap between completeness and soundness.

Before concluding this section, we describe the steering-based VUBQC protocol that we referenced in Section 3. As mentioned, the GKW protocol can be viewed as a protocol involving a verifier with an untrusted measurement device interacting with a quantum prover. In a subsequent paper, Gheorghiu, Wallden and Kashefi addressed the setting in which the verifier’s device becomes trusted [33]. They showed that one can define a self-testing game for Bell states which involves steering correlations [77] as opposed to non-local correlations. Steering correlations arise in a two-player setting in which one of the players is trusted to measure certain observables. This extra piece of information allows for the characterisation of Bell states with comparatively fewer statistics than in the non-local case. The steering-based VUBQC protocol, therefore, has exactly the same structure as the GKW protocol. First, the verifier uses this steering-based game, between her measurement device and the prover, to certify that the prover prepared a tensor product of Bell pairs. She then measures some of the Bell pairs so as to remotely prepare the resource states of VUBQC on the prover’s side and then performs the trap-based verification. As mentioned in Section 3, the protocol has a communication complexity of \(O(|\mathcal {C}|^{13} log(|\mathcal {C}|))\) which is clearly an improvement over \(O(|\mathcal {C}|^{2048})\). This improvement stems from the trust added to the measurement device. However, the overhead is still too great for any practical implementation.

HPDF Protocol

Independently from the GKW approach, Hajdušek, Pérez-Delgado and Fitzsimons developed a protocol which also combines the CHSH rigidity result with the VUBQC protocol. This protocol, which we refer to as the HPDF protocol, has the same structure as GKW in the sense that it is divided into a verified preparation stage and a verified computation stage. The major difference is that the number of non-communicating provers is on the order \(O(poly(|\mathcal {C}|))\), where \(\mathcal {C}\) is the computation that the verifier wishes to delegate. Essentially, there is one prover for each Bell pair that is used in the verified preparation stage. This differs from the previous two approaches in that the verifier knows, a priori, that there is a tensor product structure of states. She then needs to certify that these states are close, in trace distance, to Bell pairs. The advantage of assuming the existence of the tensor product structure, instead of deriving it through the RUV rigidity result, is that the overhead of the protocol is drastically reduced. Specifically, the total number of provers, and hence the total communication complexity of the protocol, is of the order \(O(|\mathcal {C}|^{4} log(|\mathcal {C}| ))\).

We now state the steps of the HPDF protocol. We will refer to one of the provers as the verifier’s untrusted measurement device. This is akin to the sender prover in the GKW protocol. The remaining provers are the ones which will “receive” the states prepared by the verifier and subsequently perform the quantum computation.

  • (1) Verified preparation. The verifier is trying to certify the correct preparation of the resource states \(\{ {\left \vert {+_{\theta }}\right \rangle } \}_{\theta }\) and \({\left \vert {0}\right \rangle }\), \(\left \vert {1}\right \rangle \), where \(\theta \in \{0, \pi /4, ..., 7\pi /4 \}\). The verifier instructs each prover to prepare a Bell pair and send one half to her untrusted measurement device. For each received state, she will randomly measure one of the following observables: \(\{ \mathsf {X}, \mathsf {Y}, \mathsf {Z}, (\mathsf {X} + \mathsf {Z})/\sqrt {2}, (\mathsf {Y} + \mathsf {Z})/\sqrt {2}, (\mathsf {X} + \mathsf {Y})/\sqrt {2}, (\mathsf {X} - \mathsf {Y})/\sqrt {2} \}\). Each prover is either instructed to randomly measure an observable from the set \(\{ \mathsf {X}, \mathsf {Y}, \mathsf {Z} \}\) or to not perform any measurement at all. The latter case corresponds to the qubits which are prepared for the computation stage. The verifier will compute correlations between the measurement outcomes of her device and the provers and accept if these correlations are above some threshold parametrized by \(\epsilon = poly(\delta , 1/|\mathcal {C}|)\) (\(\epsilon \rightarrow 0\) as \(\delta \rightarrow 0\) or \(|\mathcal {C}| \rightarrow \infty \)), where \(\delta > 0\) is the desired trace distance between the reduced state on the receiving provers’ sides and the ideal state consisting of the required resource states in tensor product, up to a local isometry.

  • (2) Verified computation. Assuming the verifier accepted in the previous stage, she instructs the provers that have received the resource states to act as a single prover. The verifier then performs the VUBQC protocol with that prover as if she had sent him the resource states. She accepts the outcome of the computation if all trap measurements succeed, as in VUBQC.

In their paper, Hajdušek et al. have proved that the procedure in the verified preparation stage of their protocol constitutes a self-testing procedure for Bell states. This procedure self-tests individual Bell pairs, as opposed to the CHSH rigidity theorem which self-tests a tensor product of Bell pairs. In this case, however, the tensor product structure is already given by having the \(O(|\mathcal {C}|^{4} log(|\mathcal {C}| ))\) non-communicating provers. The correctness of the verified computation stage follows from the robustness of the VUBQC protocol, as mentioned in the previous section. One therefore has the following:

Theorem 11

The HPDF protocol is an \(\mathsf {MIP^{*}[poly]}\) protocol achieving an inverse polynomial gap between completeness and soundness.

4.2 Verification Based on Self-Testing Graph States

We saw, in the HPDF protocol, that having multiple non-communicating provers presents a certain advantage in characterising the shared state of these provers, due to the tensor product structure of the provers’ Hilbert spaces. This approach not only leads to simplified proofs, but also to a reduced overhead in characterising this state, when compared to the CHSH rigidity Theorem 8, from [18].

Another approach which takes advantage of this tensor product structure is the one of McKague from [21]. In his protocol, as in HPDF, the verifier will interact with \(O(poly(|\mathcal {C}|))\) provers. Specifically, there are multiple groups of \(O(|\mathcal {C}|)\) provers, each group jointly sharing a graph state \({\left \vert {G}\right \rangle }\). In particular, each prover should hold only one qubit from \({\left \vert {G}\right \rangle }\). The central idea is for the verifier to instruct the provers to measure their qubits to either test that the provers are sharing the correct graph state or to perform an MBQC computation of \(\mathcal {C}\). This approach is similar to the stabilizer measurement-only protocol of Section 3.1 and, just like in that protocol or the Test-or-Compute or RUV protocols, the verifier will randomly alternate between tests and computation.

Before giving more details about this verification procedure, we first describe the type of graph state that is used in the protocol and the properties which allow this state to be useful for verification. McKague considers \(\left \vert {G}\right \rangle \) to be a triangular graph state, which is a type of universal cluster state. What this means is that the graph G on which the state is based is a triangular lattice (a planar graph with triangular faces). An example is shown in Fig. 13. For each vertex v in G we have that:

$$ X_{v} Z_{N(v)} \left\vert{G}\right\rangle = \left\vert{G}\right\rangle $$
(69)

where \(N(v)\) denotes the neighbors of the vertex v. In other words, \(\left \vert {G}\right \rangle \) is stabilized by the operators \(K_{v} = X_{v} Z_{N(v)}\), for all vertices v. This is important for the testing part of the protocol as it means that measuring the observable \(K_{v}\) will always yield the outcome \(+1\). Another important property is:

$$ X_{\tau} Z_{N(\tau)} \left\vert{G}\right\rangle = -\left\vert{G}\right\rangle $$
(70)

where \(\tau \) is a set of 3 neighboring vertices which comprise a triangle in the graph G (and \(N(\tau )\) are the odd neighbors of those verticesFootnote 39). This implies that measuring \(T_{\tau } = X_{\tau } Z_{N(\tau )}\) produces the outcome \(-1\). Triangular graph states are universal for quantum computation, as explained in [21, 78], by performing local measurements (with corrections) on the vertex qubits using the observables \(\mathsf {R}(\theta ) = cos(\theta ) \textsf {X} + sin(\theta ) \textsf {Z}\), where \(\theta \in \{0, \pi /4, ..., 7\pi /4 \}\).
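These stabilizer relations can be checked numerically on the smallest example, a single triangle (our own illustration; a lone triangle is not itself a triangular lattice, but it suffices to verify (69) and (70)). Note that for a lone triangle \(N(\tau)\) is empty, so \(T_{\tau} = \textsf{X} \otimes \textsf{X} \otimes \textsf{X}\):

```python
# Numerical check of (69) and (70) for the graph state of a single triangle,
# i.e. |G> = CZ_01 CZ_12 CZ_02 |+++>.

import numpy as np

X = np.array([[0, 1], [1, 0]]); Z = np.diag([1.0, -1.0])

def op(paulis):                         # tensor product of 1-qubit operators
    out = np.array([[1.0]])
    for P in paulis:
        out = np.kron(out, P)
    return out

def cz(i, j, n=3):                      # CZ between qubits i and j of n
    d = np.ones(2 ** n)
    for k in range(2 ** n):
        bits = format(k, f"0{n}b")
        if bits[i] == "1" and bits[j] == "1":
            d[k] = -1
    return np.diag(d)

plus3 = np.ones(8) / np.sqrt(8)         # |+++>
G = cz(0, 1) @ cz(1, 2) @ cz(0, 2) @ plus3

K0 = op([X, Z, Z])                      # K_v = X_v Z_N(v) for v = 0
T_tau = op([X, X, X])                   # X_tau Z_N(tau), with N(tau) empty

print(np.allclose(K0 @ G, G))           # True:  K_v |G>   =  |G>
print(np.allclose(T_tau @ G, -G))       # True:  T_tau |G> = -|G>
```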

Fig. 13 Triangular lattice graph

We now have the necessary elements to describe McKague’s protocol. The verifier considers a triangular graph state \(\left \vert {G}\right \rangle \) for the computation she wishes to verify. Let \(n = O(|\mathcal {C}|)\) denote the number of vertices in G. In the ideal case, there will be multiple groups of n provers and, in each group, every prover should have one of the qubits of this graph (entangled with its neighbors). Denote T as the number of triangles (consisting of 3 neighboring vertices) in G and \(N_{G} = 3n + T\). The protocol’s setting is shown in Fig. 14.

Fig. 14 Verifier instructing some of the provers to perform measurements in McKague’s protocol

The verifier will choose one of the n groups of provers at random to perform the computation \(\mathcal {C}\). The computation is performed in an MBQC fashion. In other words, the verifier will pick appropriate measurement angles \(\{\theta _{v}\}_{v \in V(G)}\), for all vertices in G, as well as a partial order for the vertices. To perform the computation \(\mathcal {C}\), the verifier instructs the provers to measure the qubits of \({\left \vert {G}\right \rangle }\) with the observables \(\mathsf {R}(\theta _{v})\), defined above. The partial order establishes the temporal ordering of these measurements. Additionally, the \(\theta _{v}\) angles, for the \(\mathsf {R}(\theta _{v})\) measurements, should be updated so as to account for corrections arising from previous measurement outcomes. In other words, the angles \(\{\theta _{v}\}_{v \in V(G)}\), which we shall refer to as computation angles, are the ideal angles assuming no corrections. See Section A for more details about measurement-based quantum computations.

The remaining groups of provers are used to test that the correct graph state, \({\left \vert {G}\right \rangle }\), was prepared. This testing phase consists in the verifier randomly choosing to run one of the following sub-protocols:

  • Vertex test, chosen with probability \(\frac {n}{N_{G}}\). The verifier randomly picks a vertex v of the graph. She instructs the prover having the qubit corresponding to v to measure it in the \(\mathsf {X}\) basis. At the same time, she asks the provers having the neighboring qubits, \(N(v)\), to measure them in the \(\mathsf {Z}\) basis. The \(\pm 1\) outcomes of the provers are multiplied by the verifier and she checks to see whether this product is 1 (since she is essentially measuring the \(K_{v}\) stabilizer). If this is the case, she accepts, otherwise she rejects.

  • Triangle test, chosen with probability \(\frac {T}{N_{G}}\). Similar to the previous test, the verifier picks a random triangle in G, consisting of the vertices \(\tau \). She instructs the provers having these vertices to measure them in the \(\mathsf {X}\) basis, while their neighbors (N(τ)) are instructed to measure in the \(\mathsf {Z}\) basis. The verifier checks if the product of their outcomes is \(-1\) and if so, she accepts, otherwise she rejects.

  • \(\boldsymbol {R(\theta )}\) test, chosen with probability \(\frac {2n}{N_{G}}\). In this case the verifier first randomly picks a vertex v of the graph, a neighbor u of v (so \(u \in N(v)\)) and \(t \in \{-1, + 1\}\). She then randomly picks \(\mathsf {X}\) with probability \(p = \frac {cos(\theta _{v})}{cos(\theta _{v}) + |sin(\theta _{v})|}\) or \(\mathsf {Z}\) with probability \(1-p\), where \(\theta _{v}\) is the computation angle associated with v. If she chose \(\mathsf {X}\), then she queries the prover holding v to measure \(\mathsf {R}(t \theta _{v})\), and his neighbors (N(v)) to measure \(\mathsf {Z}\). She accepts if the product of their replies is \(+ 1\). If the verifier instead chose \(\mathsf {Z}\), then she instructs the prover holding v to measure \(t \mathsf {R}(t \theta _{v})\), the prover holding u to measure \(\mathsf {X}\) and the neighbors of u and v to measure \(\mathsf {Z}\). She accepts if the product of their outcomes is \(+ 1\).
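Here is a short Python sketch of how the verifier might select and score a test round. The probabilities and accept conditions follow the description above; the measurement channel to the provers is abstracted away, so this is an illustration of the classical sampling layer rather than a full implementation:

import math
import random

def sample_test(n, num_triangles):
    # Choose a test with probabilities n/N_G, T/N_G and 2n/N_G,
    # where N_G = 3n + T.
    return random.choices(["vertex", "triangle", "R(theta)"],
                          weights=[n, num_triangles, 2 * n])[0]

def r_theta_basis(theta_v):
    # For the R(theta) test: pick X with probability p, otherwise Z.
    p = math.cos(theta_v) / (math.cos(theta_v) + abs(math.sin(theta_v)))
    return "X" if random.random() < p else "Z"

def accept(test, outcomes):
    # Accept iff the product of the provers' +/-1 outcomes matches the
    # expected eigenvalue: -1 for the triangle test, +1 otherwise.
    expected = -1 if test == "triangle" else +1
    return math.prod(outcomes) == expected

# Toy round on K3 (3 vertices, 1 triangle):
test = sample_test(3, 1)
print(test, accept("vertex", [+1, -1, -1]))   # (+1)(-1)(-1) = +1 -> True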

Together, these three tests effectively perform a self-test of the graph state \({\left \vert {G}\right \rangle }\) and the provers’ observables. Specifically, McKague showed the following:

Theorem 12

For a triangular graph G, having n vertices, suppose that n provers, performing the strategy \(S = ({\left \vert {\tilde {\psi }}\right \rangle }, \{\tilde {O}_{i}^{j}\}_{ij})\), succeed in the test described above with probability \(1 - \epsilon \), where \(\epsilon = poly(\delta , 1/n)\) for some \(\delta > 0\) and \(\epsilon \rightarrow 0\) as \(\delta \rightarrow 0\) or \(n \rightarrow \infty \). The strategy involves sharing the state \({\left \vert {\tilde {\psi }}\right \rangle }\) and measuring the observables \(\{\tilde {O}_{i}^{j}\}_{ij}\), where each prover, i, has observables \(\{\tilde {O}_{i}^{j}\}_{j}\). Then there exists a local isometry \({\Phi } = \bigotimes _{i = 1}^{n} {\Phi }_{i}\) and a state \({\left \vert {junk}\right \rangle }\) such that:

$$ TD({\Phi}({\left\vert{\tilde{\psi}}\right\rangle}), {\left\vert{junk}\right\rangle} {\left\vert{G}\right\rangle}) \leq \delta $$
(71)

and for all j:

$$ TD \left( {\Phi} \left( \bigotimes_{i = 1}^{n} \tilde{O}_{i}^{j} {\left\vert{\tilde{\psi}}\right\rangle} \right), {\left\vert{junk}\right\rangle} \bigotimes_{i = 1}^{n} {O_{i}^{j}} {\left\vert{G}\right\rangle} \right) \leq \delta $$
(72)

where for all i, \({O_{i}^{j}} \in \{ \mathsf {R}(\theta ) | \; \theta \in \{0, \pi /4, ... , 7\pi /4\} \}\).Footnote 40

Note that the verifier will ask the provers to perform the same types of measurements in both the testing phase and the computation phase of the protocol. This means that, at the level of each prover, the testing and computation phases are indistinguishable. Moreover, the triangular graph state \({\left \vert {G}\right \rangle }\), being a universal resource for measurement-based computation, will be the same for computations of the same size. Therefore, the protocol is blind in the sense that each prover, on its own, is unaware of what computation is being performed. In summary, the protocol consists of the verifier choosing to perform one of the following:

  • Computation. In this case, the verifier instructs the provers to perform the MBQC computation of \(\mathcal {C}\) on the graph state \({\left \vert {G}\right \rangle }\), as described above.

  • Testing \({\left \vert {G}\right \rangle }\). In this case, the verifier will randomly choose between one of the three tests described above, accepting if and only if the test succeeds.

It is therefore the case that:

Theorem 13

McKague’s protocol is an \(\mathsf {MIP^{*}}[poly]\) protocol having an inverse polynomial gap between completeness and soundness.

As with the previous approaches, the reason for the inverse polynomial gap between completeness and soundness is the use of a self-test with robustness \(\epsilon = poly(1/n)\) (and \(\epsilon \rightarrow 0\) as \(n \rightarrow \infty \)). In turn, this leads to a polynomial overhead for the protocol as a whole. Specifically, McKague showed that the total number of required provers and communication complexity, for a quantum computation \(\mathcal {C}\), is of the order \(O(|\mathcal {C}|^{22})\). Note, however, that each of the provers must only perform a single-qubit measurement. Hence, apart from the initial preparation of the graph state \({\left \vert {G}\right \rangle }\), the individual provers are not universal quantum computers, merely single-qubit measurement devices.

4.3 Post Hoc Verification

In Section 3.2 we reviewed a protocol by Morimae and Fitzsimons for post hoc verification of quantum computation. That protocol, however, involved a single quantum prover and a verifier with a measurement device. In this section, we review two post hoc protocols for the multi-prover setting having a classical verifier. We start with the first such protocol, due to Fitzsimons and Hajdušek.

FH Protocol

Similar to the 1S-Post-hoc protocol from Section 3.2, the protocol of Fitzsimons and Hajdušek, which we shall refer to as the FH protocol, also makes use of the local Hamiltonian problem stated in Definition 9. As mentioned, this problem is complete for the class \(\mathsf {QMA}\), which consists of problems that can be decided by a \(\mathsf {BQP}\) verifier receiving a witness state from a prover. Importantly, the size of the witness state is polynomial in the size of the input to the problem. However, Fitzsimons and Vidick proposed a protocol for the k-local Hamiltonian problem (and hence any \(\mathsf {QMA}\) problem), involving 5 provers, in which the quantum state received by the verifier is of constant size [79]. That protocol is the basis for the FH protocol and so we start with a description of it.

Suppose that the k-local Hamiltonian is of the form \(H = {\sum }_{i} H_{i}\), acting on a system of n qubits and each \(H_{i}\) is a k-local, n-qubit projector. For fixed a and b, such that \(b-a \geq 1/poly(n)\), the verifier should accept if there exists a state \({\left \vert {\psi }\right \rangle }\) such that \({\left \langle {\psi }\right \vert } H {\left \vert {\psi }\right \rangle } \leq a\) and reject if for all states \({\left \vert {\psi }\right \rangle }\) it is the case that \({\left \langle {\psi }\right \vert } H {\left \vert {\psi }\right \rangle } \geq b\). Suppose we are in the acceptance case and let \({\left \vert {\psi }\right \rangle }\) be the witness state. The 5 provers must share a version of \({\left \vert {\psi }\right \rangle }\) encoded in the 5-qubit error correcting code,Footnote 41 denoted \({\left \vert {\psi }\right \rangle }_{L}\). Specifically, for each logical qubit of \({\left \vert {\psi }\right \rangle }_{L}\), each prover should hold one of its constituent physical qubits. The verifier will then check that the provers are indeed sharing this state, accepting if this is the case and rejecting otherwise. She will also perform an energy measurement on the state, to estimate if it has energy above b or below a. To do this she will, with equal probability, choose to either test that the shared state of the provers has energy below a or that the provers share a state encoded in the 5-qubit code:

  • Energy measurement. In this case, the verifier will pick a random term \(H_{i}\), from H, and ask each prover for k qubits corresponding to the logical states on which \(H_{i}\) acts. The verifier will then perform a two-outcome measurement, defined by the operators \(\{H_{i}, I - H_{i} \}\), on the received qubits. As in the 1S-Post-hoc protocol, this provides an estimate for the energy of \(\left \vert {\psi }\right \rangle \). The verifier accepts if the measurement outcome indicates the state has energy below a.

  • Encoding measurement. In this case the verifier will choose at random between two subtests. In the first subtest, she will choose j at random from 1 to n and ask each prover to return the physical qubits comprising the j’th logical qubit. She then measures these qubits to check whether their joint state lies within the code space, accepting if it does and rejecting otherwise. In the second subtest, the verifier chooses a random set, S, of 3 values between 1 and n. She also picks one of the values at random, labelled j. The verifier then asks a randomly chosen prover for the physical qubits of the logical states indexed by the values in S, while asking the remaining provers for their shares of logical qubit j. As an example, if the set contains the values \(\{1, 5, 8 \}\), then the verifier picks one of the 5 provers at random and asks him for his shares (physical qubits) of logical qubits 1, 5 and 8 from \(\left \vert {\psi }\right \rangle \). Assuming that the verifier also picked the random value 8 from the set, then she will ask the remaining provers for their shares of logical qubit 8. The verifier then measures logical qubit j (or 8, in our example) and checks if it is in the code subspace, accepting if it is and rejecting otherwise. The purpose of this second subtest is to guarantee that the provers respond with different qubits when queried.

One can see that when the witness state exists and the provers follow the protocol, the verifier will indeed accept with high probability. On the other hand, Fitzsimons and Vidick show that when there is no witness state, the provers will fail at convincing the verifier to accept with high probability. This is because they cannot simultaneously provide qubits yielding the correct energy measurements and also have their joint state be in the correct code space. This also illustrates why their protocol required testing both of these conditions. If one wanted to simplify the protocol, so as to have a single prover providing the qubits for the verifier’s \(\{H_{i}, I - H_{i} \}\) measurement, then it is no longer possible to prove soundness. The reason is that even if there does not exist a \({\left \vert {\psi }\right \rangle }\) having energy less than a for H, the prover could still find a group of k qubits which minimize the energy constraint for the specific \(H_{i}\) that the verifier wishes to measure. The second subtest prevents this from happening, with high probability, since it forces the provers to consistently provide the requested indexed qubits from the state \({\left \vert {\psi }\right \rangle }\).

Note that for a \(\mathsf {BQP}\) computation, defined by the quantum circuit \(\mathcal {C}\) and input x, the state \({\left \vert {\psi }\right \rangle }\) in the Fitzsimons and Vidick protocol becomes the Feynman-Kitaev state of that circuit, as described in Section 3.2. The FH protocol essentially takes the Fitzsimons and Vidick protocol for \(\mathsf {BQP}\) computations and alters it by making the verifier classical. This is achieved using an approach of Ji [81] which allows for the two tests to be performed using only classical interaction with the provers. The idea is based on self-testing and is similar to the rigidity of the CHSH game. To understand this approach let us first examine the stabilizer generators, \(\{ g_{i} \}_{i = 1}^{4}\) for the code space of the 5-qubit code, shown in Table 3. Notice that they all involve only Pauli \(\mathsf {X}\), \(\mathsf {Z}\) or identity operators. In particular, the operator acting on the fifth qubit is always either \(\mathsf {X}\) or \(\mathsf {Z}\). Ji then considers rotating this operator so that \(\mathsf {X} \rightarrow \textsf {X}^{\prime }\) and \(\mathsf {Z} \rightarrow \textsf {Z}^{\prime }\), where \(\mathsf {X}^{\prime } = (\textsf {X} + \textsf {Z})/\sqrt {2}\) and \(\mathsf {Z}^{\prime } = (\textsf {X} - \textsf {Z})/\sqrt {2}\), resulting in the new operators \(\{ g^{\prime }_{i} \}_{i = 1}^{4}\) shown in Table 4.

Table 3 Generators for 5-qubit code
Table 4 Generators with fifth operator rotated

The new operators satisfy a useful property. For any state \({\left \vert {\phi }\right \rangle }\) in the code space of the 5-qubit code, it is the case that:

$$ {\sum}_{i} {\left\langle{\phi}\right\vert} g^{\prime}_{i} {\left\vert{\phi}\right\rangle} = 4 \sqrt{2} $$
(73)

This is similar to the CHSH game. In the CHSH game, the ideal strategy involves Alice measuring either \(\mathsf {X}\) or \(\mathsf {Z}\), and Bob measuring either \(\mathsf {X}^{\prime }\) or \(\mathsf {Z}^{\prime }\), on the maximally entangled state \({\left \vert {{\Phi }_{+}}\right \rangle }\). These observables and the Bell state satisfy:

$$ {\left\langle{{\Phi}_{+}}\right\vert} \textsf{X} \textsf{X}^{\prime} + \textsf{X} \textsf{Z}^{\prime} + \textsf{Z} \textsf{X}^{\prime} - \textsf{Z} \textsf{Z}^{\prime} {\left\vert{{\Phi}_{+}}\right\rangle} = 2 \sqrt{2} $$
(74)

It can be shown that having observables which satisfy this relation implies that Alice and Bob win the CHSH game with the (quantum) optimal probability of success \(cos^{2}(\pi /8)\). Analogous to the CHSH game, the stabilizers \(\{ g^{\prime }_{i} \}_{i = 1}^{4}\), viewed as observables, can be used to define a 5-player non-local game, in which the optimal quantum strategy involves measuring these observables on a state encoded in the 5-qubit code. Moreover, just like in the CHSH game, observing the players achieve the maximum quantum win-rate for the game implies that, up to local isometry, the players are following the ideal quantum strategy. We will not detail the game, except to say that it involves partitioning the 5 provers into two sets, one consisting of four provers and the other of the remaining prover. Such a bipartition of a state encoded in the 5-qubit code yields a state which is isometric to a Bell pair. This means that the 5-player game is essentially self-testing a maximally entangled state, hence the similarity to the CHSH game. This then allows a classical verifier, interacting with the 5 provers, to perform the encoding test of the Fitzsimons and Vidick protocol.
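The Tsirelson value in (74) is easy to verify numerically; a minimal numpy check (our own illustration):

import numpy as np

X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])
Xp = (X + Z) / np.sqrt(2)                      # X'
Zp = (X - Z) / np.sqrt(2)                      # Z'
phi = np.array([1., 0., 0., 1.]) / np.sqrt(2)  # |Phi_+>

chsh = (np.kron(X, Xp) + np.kron(X, Zp) +
        np.kron(Z, Xp) - np.kron(Z, Zp))
print(phi @ chsh @ phi)                        # 2.828... = 2*sqrt(2)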

We have discussed how a classical verifier can test that the 5 provers share a state encoded in the logical space of the 5-qubit code. But to achieve the functionality of the Fitzsimons and Vidick protocol, one needs to also delegate to the provers the measurement of a local term \(H_{i}\) from the Hamiltonian. This is again possible using the 5-player non-local game. Firstly, it can be shown that, without loss of generality, each \(H_{i}\), in the k-local Hamiltonian, can be expressed as a linear combination of terms comprised entirely of I, \(\mathsf {X}\) and \(\mathsf {Z}\). This means that the Hamiltonian itself is a linear combination of such terms, \(H = {\sum }_{i} a_{i} S_{i}\), where \(a_{i}\) are real coefficients and \(S_{i}\) are k-local \(\mathsf {X}\mathsf {Z}\)-terms. This is akin to the \(\mathsf {X}\mathsf {Z}\)-Hamiltonian from the 1S-Post-hoc protocol.

Given this fact, the verifier can measure one of the \(S_{i}\) terms, in order to estimate the energy of the ground state, instead of measuring \(\{ H_{i}, I - H_{i} \}\). She will pick an \(S_{i}\) term at random and ask the provers to measure the constituent Pauli observables in \(S_{i}\). However, the verifier will also alternate these measurements with the stabilizer measurements of the non-local game, rejecting if the provers do not achieve the maximal non-local value of the game. This essentially forces the provers to perform the correct measurements (Fig. 15).

Fig. 15: Verifier interacting with the 5 provers
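The classical post-processing for this energy estimate can be sketched as follows. This is our own illustration under the decomposition \(H = {\sum }_{i} a_{i} S_{i}\) above; measure_term is a hypothetical callback standing in for the delegated measurement, and in the real protocol these queries are interleaved with the stabilizer measurements of the non-local game:

import random

def estimate_energy(terms, measure_term, shots=1000):
    # terms: list of (a_i, S_i) pairs; estimates sum_i a_i <S_i>.
    weights = [abs(a) for a, _ in terms]
    total_weight = sum(weights)
    acc = 0.0
    for _ in range(shots):
        # Sample a term with probability proportional to |a_i| ...
        a, term = random.choices(terms, weights=weights)[0]
        # ... and weight the provers' +/-1 outcome by the coefficient's
        # sign; each shot is then an unbiased estimate of <H>/total_weight.
        acc += (1 if a > 0 else -1) * measure_term(term)
    return total_weight * acc / shots

# Toy usage with a fake deterministic "prover": <X1X2> = 1, <Z1Z2> = -1.
toy_H = [(0.5, "X1 X2"), (-0.25, "Z1 Z2")]
fake = lambda term: +1 if term == "X1 X2" else -1
print(estimate_energy(toy_H, fake))            # 0.5*1 + (-0.25)*(-1) = 0.75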

To summarize, the FH protocol is a version of the Fitzsimons and Vidick protocol which restricts the provers to be \(\mathsf {BQP}\) machines and uses Ji’s techniques, based on non-local games, to make the verifier classical. The steps of the FH protocol are as follows:

  • (1) The verifier instructs the provers to share the Feynman-Kitaev state, associated with her circuit \(\mathcal {C}\), encoded in the 5-qubit error correcting code, as described above. We denote this state as \(\left \vert {\psi }\right \rangle _{L}\). The provers are then split up and not allowed to communicate. The verifier then considers a k-local Hamiltonian having \(\left \vert {\psi }\right \rangle _{L}\) as a ground state, as well as the threshold values a and b, with \(b - a > 1/poly(|\mathcal {C}|)\).

  • (2) The verifier chooses to either perform the energy measurement or the encoding measurement as described above. For the energy measurement she asks the provers to measure a randomly chosen \(\mathsf {X}\mathsf {Z}\)-term from the local Hamiltonian. The verifier accepts if the outcome indicates that the energy of \(\left \vert {\psi }\right \rangle _{L}\) is below a. For the encoding measurement the verifier instructs the provers to perform the measurements of the 5-player non-local game. She accepts if the provers win the game, indicating that their shared state is correctly encoded.

One therefore has:

Theorem 14

The FH protocol is an \(\mathsf {MIP^{*}}\) protocol achieving an inverse polynomial gap between completeness and soundness.

There are two significant differences between this protocol and the previous entanglement-based approaches. The first is that the protocol does not use self-testing to enforce that the provers are performing the correct operations in order to implement the computation \(\mathcal {C}\). Instead, the computation is checked indirectly by using the self-testing result to estimate the ground-state energy of the k-local Hamiltonian. This then provides an answer to the considered \(\mathsf {BQP}\) computation viewed as a decision problem.Footnote 42 The second difference is that the protocol is not blind. In all the previous approaches, the provers had to share an entangled state which was independent of the computation, up to a certain size. However, in the FH protocol, the state that the provers need to share depends on which quantum computation the verifier wishes to perform.

In terms of communication complexity, the protocol, as described, would involve only 2 rounds of interaction between the verifier and the provers. However, since the completeness-soundness gap is inverse polynomial, and therefore decreases with the size of the computation, it becomes necessary to repeat the protocol multiple times to properly differentiate between the accepting and rejecting cases. On the one hand, the local Hamiltonian itself has an inverse polynomial gap between the two cases of acceptance and rejection. As shown in [30, 68], for the Hamiltonian resulting from a quantum circuit, \(\mathcal {C}\), that gap is \(1/|\mathcal {C}|^{2}\). To boost this gap to constant, the provers must share \(O(|\mathcal {C}|^{2})\) copies of the Feynman-Kitaev state.

On the other hand, the self-testing result has an inverse polynomial robustness. This means that estimating the energy of the ground state is done with a precision which scales inverse polynomially in the number of qubits of the state. More precisely, according to Ji’s result, the scaling should be \(1/O(N^{16})\), where N is the number of qubits on which the Hamiltonian acts [81]. This means that the protocol should be repeated on the order of \(O(N^{16})\) times, in order to boost the completeness-soundness gap to constant.

NV Protocol

The second entanglement-based post hoc protocol was developed by Natarajan and Vidick [23] and we therefore refer to it as the NV protocol. The main ideas of the protocol are similar to those of the FH protocol. However, Natarajan and Vidick prove a self-testing result having constant robustness and use it in order to perform the energy estimation of the ground state for the local Hamiltonian.

The statement of their general self-testing result is too involved to state here, so instead we reproduce a corollary to their result (also from [23]) that is used for the NV protocol. This corollary involves self-testing a tensor product of Bell pairs:

Theorem 15

For any integer n there exists a two-player non-local game, known as the Pauli braiding test (PBT), with \(O(n)\)-bit questions and \(O(1)\)-bit answers, satisfying the following:

Let \(S = ({\left \vert {\tilde {\psi }}\right \rangle }, \tilde {A}(\mathbf {a}), \tilde {B}(\mathbf {b}))\) be the strategy employed by two players (Alice and Bob) in playing the game, where \({\left \vert {\tilde {\psi }}\right \rangle }\) is their shared state and \(\tilde {A}(\mathbf {a})\) and \(\tilde {B}(\mathbf {b})\) are their respective (multi-qubit) observables when given n-bit questions \(\mathbf {a}\) and \(\mathbf {b}\), respectively. Suppose Alice and Bob win the Pauli braiding test with probability \(\omega ^{*}(PBT) - \epsilon \), for some \(\epsilon > 0\) (note that \(\omega ^{*}(PBT) = 1\)). Then there exist \(\delta = poly(\epsilon )\), a local isometry \({\Phi } = {\Phi }_{A} \otimes {\Phi }_{B}\) and a state \({\left \vert {junk}\right \rangle }\) such that:

$$ TD({\Phi}({\left\vert{\tilde{\psi}}\right\rangle}), {\left\vert{junk}\right\rangle} {\left\vert{{\Phi}_{+}}\right\rangle}^{\otimes n}) \leq \delta $$
(75)
$$ TD \left( {\Phi} \left( \tilde{A}(\mathbf{a}) \otimes \tilde{B}(\mathbf{b}) {\left\vert{\tilde{\psi}}\right\rangle} \right), {\left\vert{junk}\right\rangle} \textsf{X}(\mathbf{a}) \otimes \textsf{Z}(\mathbf{b}) {\left\vert{{\Phi}_{+}}\right\rangle}^{\otimes n} \right) \leq \delta $$
(76)

where \(\mathsf {X}(\mathbf {a}) = \bigotimes \limits _{i = 1}^{n} \textsf {X}^{\mathbf {a(i)}}\) and \(\mathsf {Z}(\mathbf {b}) = \bigotimes \limits _{i = 1}^{n} \textsf {Z}^{\mathbf {b(i)}}\).

This theorem is essentially a self-testing result for a tensor product of Bell states, and Pauli \(\mathsf {X}\) and \(\mathsf {Z}\) observables, achieving a constant robustness. The Pauli braiding test is used in the NV protocol in a similar fashion to Ji’s result, from the previous subsection, in order to certify that a set of provers are sharing a state that is encoded in a quantum error correcting code. Again, this relies on a bi-partition of the provers into two sets, such that an encoded state shared across the bi-partition is equivalent to a Bell pair.

Let us first explain the general idea of the Pauli braiding test for self-testing n Bell pairs and n-qubit observables. We have a referee that is interacting with two players, labelled Alice and Bob. The test consists of three subtests which are chosen at random by the referee (a sketch of the referee’s classical checks follows the list). The subtests are:

  • Linearity test. In this test, the referee will randomly pick a basis setting, W, from the set \(\{ \textsf {X}, \textsf {Z} \}\). She then randomly chooses two strings \(\mathbf {a_{1}}, \mathbf {a_{2}} \in \{ 0, 1 \}^{n}\) and sends them to Alice. With equal probability, the referee takes \(\mathbf {b_{1}}\) to be either \(\mathbf {a_{1}}\), \(\mathbf {a_{2}}\) or \(\mathbf {a_{1}} \oplus \mathbf {a_{2}}\). She also randomly chooses a string \(\mathbf {b_{2}} \in \{ 0, 1 \}^{n}\) and sends the pair \((\mathbf {b_{1}}, \mathbf {b_{2}})\) to Bob.Footnote 43 Alice and Bob are then asked to measure the observables \(W(\mathbf {a_{1}})\), \(W(\mathbf {a_{2}})\) and \(W(\mathbf {b_{1}})\), \(W(\mathbf {b_{2}})\), respectively, on their shared state. We denote Alice’s outcomes as \(a_{1}\), \(a_{2}\) and Bob’s outcomes as \(b_{1}\), \(b_{2}\). If \(\mathbf {b_{1}} = \mathbf {a_{1}}\) (or \(\mathbf {b_{1}}=\mathbf {a_{2}}\), respectively), the referee checks that \(b_{1} = a_{1}\) (or \(b_{1} = a_{2}\), respectively). If \(\mathbf {b_{1}} = \mathbf {a_{1}} \oplus \mathbf {a_{2}}\), she checks that \(b_{1} = a_{1} a_{2}\). This test is checking, on the one hand, that when Alice and Bob measure the same observables, they should get the same outcome (which is what should happen if they share Bell states). On the other hand, and more importantly, it is checking the commutation and linearity of their operators, i.e. that \(W(\mathbf {a_{1}})W(\mathbf {a_{2}}) = W(\mathbf {a_{2}})W(\mathbf {a_{1}}) = W(\mathbf {a_{1}} \oplus \mathbf {a_{2}})\) (and similarly for Bob’s operators).

  • Anticommutation test. The referee randomly chooses two strings \(\mathbf {x}, \mathbf {z} \in \{ 0, 1 \}^{n}\), such that \(\mathbf {x} \cdot \mathbf {z} = 1 \; mod \; 2\), and sends them to both players. These strings define the observables \(\mathsf {X}(\mathbf {x})\) and \(\mathsf {Z}(\mathbf {z})\) which are anticommuting because of the imposed condition on \(\mathbf {x}\) and \(\mathbf {z}\). The referee then engages in a non-local game with Alice and Bob designed to test the anticommutation of these observables for both of their systems. This can be any game that tests this property, such as the CHSH game or the magic square game, described in [82, 83]. As an example, if the referee chooses to play the CHSH game, then Alice will be instructed to measure either \(\mathsf {X}(\mathbf {x})\) or \(\mathsf {Z}(\mathbf {z})\) on her half of the shared state, while Bob would be instructed to measure either \((\mathsf {X}(\mathbf {x}) + \mathsf {Z}(\mathbf {z}))/\sqrt {2}\) or \((\mathsf {X}(\mathbf {x}) - \mathsf {Z}(\mathbf {z}))/\sqrt {2}\). The test is passed if the players achieve the win condition of the chosen anticommutation game. Note that for the case of the magic square game, the condition can be achieved with probability 1 when the players implement the optimal quantum strategy. For this reason, if the chosen game is the magic square game, then \(\omega ^{*}(PBT) = 1\).

  • Consistency test. This test combines the previous two. The referee randomly chooses a basis setting, \(W \in \{ \textsf {X}, \textsf {Z} \}\), and two strings \(\mathbf {x}, \mathbf {z} \in \{ 0, 1 \}^{n}\). Additionally, let \(\mathbf {w} = \mathbf {x}\) if \(W = \mathsf {X}\), and \(\mathbf {w} = \mathbf {z}\) if \(W = \mathsf {Z}\). The referee sends W, \(\mathbf {x}\) and \(\mathbf {z}\) to Alice. With equal probability the referee will then choose to perform one of two subtests. In the first subtest, the referee sends \(\mathbf {x}, \mathbf {z}\) to Bob as well and plays the anticommutation game with both, such that Alice’s observable is \(W(\mathbf {w})\). As an example, if \(W = \mathsf {X}\) and the game is the CHSH game, then Alice would be instructed to measure \(\mathsf {X}(\mathbf {x})\), while Bob is instructed to measure either \((\mathsf {X}(\mathbf {x}) + \mathsf {Z}(\mathbf {z}))/\sqrt {2}\) or \((\mathsf {X}(\mathbf {x}) - \mathsf {Z}(\mathbf {z}))/\sqrt {2}\). This subtest essentially mimics the anticommutation test and is passed if the players achieve the win condition of the game. In the second subtest, which mimics the linearity test, the referee sends W, \(\mathbf {w}\) and a random string \(\mathbf {y} \in \{ 0, 1 \}^{n}\) to Bob, instructing him to measure \(W(\mathbf {w})\) and \(W(\mathbf {y})\). Alice is instructed to measure \(W(\mathbf {x})\) and \(W(\mathbf {z})\). The test is passed if Alice and Bob obtain the same result for the \(W(\mathbf {w})\) observable. For instance, if \(W = \mathsf {X}\), then both Alice and Bob will measure \(\mathsf {X}(\mathbf {x})\) and their outcomes for that measurement must agree.
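As an illustration, here is a sketch of the referee’s classical bookkeeping for the linearity test (our own code; query is a hypothetical channel to the players, returning one ±1 outcome per requested string). Note that the honest stub below is fully classical and still passes this subtest, which is precisely why the anticommutation test is also needed:

import random

def linearity_round(n, query):
    W = random.choice(["X", "Z"])
    a1 = [random.randint(0, 1) for _ in range(n)]
    a2 = [random.randint(0, 1) for _ in range(n)]
    case = random.choice(["a1", "a2", "xor"])
    b1 = {"a1": a1, "a2": a2,
          "xor": [x ^ y for x, y in zip(a1, a2)]}[case]
    b2 = [random.randint(0, 1) for _ in range(n)]
    out_a1, out_a2 = query("Alice", W, [a1, a2])   # +/-1 outcomes
    out_b1, _ = query("Bob", W, [b1, b2])
    if case == "xor":
        # b1 = a1 XOR a2: linearity demands b1's outcome = a1's * a2's.
        return out_b1 == out_a1 * out_a2
    return out_b1 == (out_a1 if case == "a1" else out_a2)

# Classical honest stub: both players answer a string s with (-1)^(r.s)
# for a shared random r, which satisfies linearity exactly.
n = 4
r = [random.randint(0, 1) for _ in range(n)]
stub = lambda player, W, queries: [(-1) ** sum(ri * si for ri, si in zip(r, s))
                                   for s in queries]
print(all(linearity_round(n, stub) for _ in range(100)))   # True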

Having observables that satisfy the linearity conditions of the test as well as the anticommutation condition implies that they are isometric to the actual \(\mathsf {X}\) and \(\mathsf {Z}\) observables acting on a maximally entangled state. This is what the Pauli braiding test checks and what is proven by the self-testing result of Natarajan and Vidick.

We can now describe the NV protocol. Similar to the FH protocol, for a quantum circuit, \(\mathcal {C}\), and an input, x, one considers the associated Feynman-Kitaev state, denoted \({\left \vert {\psi }\right \rangle }\). This is then used to construct a 2-local \(\mathsf {X}\textsf {Z}\)-Hamiltonian such that the ground state of this Hamiltonian is \({\left \vert {\psi }\right \rangle }\). As before, for some a and b, with \(b-a > 1/poly(|\mathcal {C}|)\), when \(\mathcal {C}\) accepts x we have that \({\left \langle {\psi }\right \vert } H {\left \vert {\psi }\right \rangle } < a\), otherwise \({\left \langle {\psi }\right \vert } H {\left \vert {\psi }\right \rangle } > b\). The verifier will instruct 7 provers to share a copy of the state \({\left \vert {\psi }\right \rangle }\), encoded in a 7-qubit quantum error correcting code known as Steane’s code. The provers are then asked to perform measurements so as to self-test an encoded state or perform an energy measurement on this state. The code space, for Steane’s code, is the 7-qubit subspace stabilized by all operators generated by \(\{g_{i}\}_{i = 1}^{6}\), where the generators are listed in Table 5.

Table 5 Generators for Steane’s 7-qubit code

The reason Natarajan and Vidick use this specific error correcting code is because it has two properties that are necessary for the application of their self-testing result. The first property is that each stabilizer generator is a tensor product of only the I, \(\mathsf {X}\) and \(\mathsf {Z}\) operators. This, of course, is true for the 5-qubit code as well. The second property is a symmetry condition: for each index \(i \in \{1, ..., 6\}\), there exists a pair of stabilizer generators, \(S_{\mathsf {X}}\) and \(S_{\mathsf {Z}}\), such that \(S_{\mathsf {X}}\) consists exclusively of I and \(\mathsf {X}\) operators and has an \(\mathsf {X}\) on position i, whereas \(S_{\mathsf {Z}}\) is identical to \(S_{\mathsf {X}}\) but with all \(\mathsf {X}\) operators replaced with \(\mathsf {Z}\). This property is not satisfied by the 5-qubit code and will allow the verifier to delegate to the provers measurements of the form \(\mathsf {X}(\mathbf {x})\) and \(\mathsf {Z}(\mathbf {z})\), where \(\mathbf {x}\) and \(\mathbf {z}\) are binary strings, as in the Pauli braiding test.
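Since the generators in Table 5 are all \(\mathsf {X}\)-type or \(\mathsf {Z}\)-type with identical supports, the symmetry property can be checked in a few lines. The snippet below uses the standard presentation of Steane’s generators, derived from the parity checks of the [7,4] Hamming code (our own reconstruction of the table’s content):

# Standard generators of Steane's code, from the Hamming(7,4) checks.
X_TYPE = ["IIIXXXX", "IXXIIXX", "XIXIXIX"]
Z_TYPE = ["IIIZZZZ", "IZZIIZZ", "ZIZIZIZ"]

# Symmetry property: for every qubit position there is an X-type
# generator with X at that position, whose Z-type twin is also a generator.
for i in range(7):
    s_x = next(g for g in X_TYPE if g[i] == "X")
    assert s_x.replace("X", "Z") in Z_TYPE
print("every position is covered by a matching (S_X, S_Z) pair")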

Putting everything together, the protocol works as follows. The verifier instructs the 7 provers to share an encoded instance of the Feynman-Kitaev state, \({\left \vert {\psi }\right \rangle }_{L}\), such that, for each logical qubit in \({\left \vert {\psi }\right \rangle }_{L}\), each prover will hold one of the 7 constituent physical qubits. She then chooses at random to perform one of the following:

  • Pauli braiding test. The verifier chooses one of the 7 provers at random to be Alice, while the remaining provers will take on the role of Bob. The verifier then performs the Pauli braiding test with Alice and Bob in order to self-test the logical qubits in \(\left \vert {\psi }\right \rangle _{L}\). As mentioned, each logical qubit, encoded in the 7-qubit code, is equivalent to a Bell pair under the chosen bi-partition. The Pauli braiding test is essentially checking that the provers have correctly encoded each of the qubits in \(\left \vert {\psi }\right \rangle \) and that they are correctly measuring \(\mathsf {X}\) and \(\mathsf {Z}\) observables. The verifier rejects if the provers do not pass the test.

  • Energy test. In this case, the verifier will pick an \(\mathsf {X}\textsf {Z}\)-term, S, from H, at random, and instruct the provers to measure this term on their shared state. Note that S consists of logical \(\mathsf {X}\) and \(\mathsf {Z}\) operators. This means that each prover will need to perform local measurements so that their joint measurement acts as either \(\mathsf {X}_{L}\) or \(\mathsf {Z}_{L}\), respectively. Additionally, \(\mathsf {X}_{L}\) and \(\mathsf {Z}_{L}\), for the 7-qubit code, are expressed as tensor products of physical \(\mathsf {X}\) and \(\mathsf {Z}\) operations. This means that each prover will be instructed to measure an operator of the form \(\mathsf {X}(\mathbf {x})\mathsf {Z}(\mathbf {z})\), on its physical qubits, where \(\mathbf {x}, \mathbf {z} \in \{0, 1\}^{N}\), and N is the total number of logical qubits on which H acts. The \(\pm 1\) outcome of this measurement is that prover’s share of S. The verifier then takes all of these \(\pm 1\) outcomes and multiplies them together, thus obtaining the outcome of measuring S itself (see the sketch after this list). She accepts if the outcome of the measurement indicates that the estimated energy of \(\left \vert {\psi }\right \rangle \) is below a and rejects otherwise.

  • Energy consistency test. This test is a combination of the previous two. As in the Pauli braiding test, the provers are bi-partitioned into two sets, one consisting of one prover, denoted Alice, and the other consisting of the other 6 provers, jointly denoted as Bob. The verifier now performs a test akin to the linearity test from Pauli braiding. She randomly chooses \(W \in \{ \mathsf {X}, \mathsf {Z} \}\), and sets \(\mathbf {w} = \mathbf {x}\) if \(W = \mathsf {X}\), and \(\mathbf {w} = \mathbf {z}\) if \(W = \mathsf {Z}\). She also chooses \(\mathbf {x}, \mathbf {z} \in \{0, 1\}^{N}\) according to the same distribution as in the energy test (i.e. as if she were instructing the provers to measure a random \(\mathsf {X}\mathsf {Z}\)-term from H). The verifier then does one of the following:

    • With probability \(1/2\), instructs Alice to measure the observables \(\mathsf {X}(\mathbf {x})\) and \(\mathsf {Z}(\mathbf {z})\). Additionally, the verifier chooses \(\mathbf {y} \in \{0, 1\}^{N}\) at random and instructs Bob to measure \(W(\mathbf {y})\) and \(W(\mathbf {y} \oplus \mathbf {w})\). If \(W = \mathsf {X}\), the verifier accepts if the product of Bob’s answers agrees with Alice’s answer for the \(\mathsf {X}(\mathbf {x})\) observable. If \(W = \mathsf {Z}\), the verifier accepts if the product of Bob’s answers agrees with Alice’s answer for the \(\mathsf {Z}(\mathbf {z})\) observable. Note that this is the case since the product of Bob’s observables should be \(W(\mathbf {w})\) if he is behaving honestly.

    • With probability \(1/4\), instructs Alice to measure \(W(\mathbf {y})\) and \(W(\mathbf {v})\), where \(\mathbf {y}, \mathbf {v} \in \{0, 1\}^{N}\) are chosen at random. Bob is instructed to measure \(W(\mathbf {y})\) and \(W(\mathbf {y} \oplus \mathbf {w})\). The verifier accepts if the outcomes of Alice and Bob for \(W(\mathbf {y})\) agree.

    • With probability \(1/4\), instructs Alice to measure \(W(\mathbf {y} \oplus \mathbf {w})\) and \(W(\mathbf {v})\), where \(\mathbf {y}, \mathbf {v} \in \{0, 1\}^{N}\) are chosen at random. Bob is instructed to measure \(W(\mathbf {y})\) and \(W(\mathbf {y} \oplus \mathbf {w})\). The verifier accepts if the outcomes of Alice and Bob for \(W(\mathbf {y} \oplus \mathbf {w})\) agree.
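A small sketch (our own illustration) of the energy test’s classical layer for a 2-local term acting with logical \(\mathsf {X}\) on qubit i and logical \(\mathsf {Z}\) on qubit j: since the logical \(\mathsf {X}\) and \(\mathsf {Z}\) of Steane’s code are transversal, the instruction strings \(\mathbf {x}\) and \(\mathbf {z}\) are simply indicator vectors, and the verifier multiplies the 7 provers’ ±1 answers:

import math

def instructions_for_term(i, j, num_logical):
    # x, z in {0,1}^N: measure X on logical qubit i and Z on logical
    # qubit j; every prover receives the same pair of strings.
    x = [int(q == i) for q in range(num_logical)]
    z = [int(q == j) for q in range(num_logical)]
    return x, z

def logical_outcome(prover_answers):
    # One +/-1 answer per prover; their product is the outcome of S.
    assert len(prover_answers) == 7
    return math.prod(prover_answers)

print(instructions_for_term(0, 3, 5))                  # ([1,0,0,0,0], [0,0,0,1,0])
print(logical_outcome([+1, -1, +1, +1, -1, +1, +1]))   # +1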

The self-testing result guarantees that if these tests succeed, the verifier obtains an estimate for the energy of the ground state. Importantly, unlike in the FH protocol, her estimate has constant precision. However, the protocol, as described up to this point, will still have an inverse polynomial completeness-soundness gap, given by the local Hamiltonian. Recall that this is because the Feynman-Kitaev state will have energy below a when \(\mathcal {C}\) accepts x with high probability, and energy above b otherwise, where \(b - a > 1/|\mathcal {C}|^{2}\). But one can easily boost the protocol to a constant gap between completeness and soundness by simply requiring the provers to share \(M = O(|\mathcal {C}|^{2})\) copies of the ground state. This new state, \({\left \vert {\psi }\right \rangle }^{\otimes M}\), would then be the ground state of a new Hamiltonian \(H^{\prime }\).Footnote 44 One then runs the NV protocol for this Hamiltonian. It should be mentioned that this Hamiltonian is no longer 2-local; however, all of the tests in the NV protocol apply to these general Hamiltonians as well (as long as each term is comprised of I, \(\mathsf {X}\) and \(\mathsf {Z}\) operators, which is the case for \(H^{\prime }\)). Additionally, the new Hamiltonian has a constant gap. The protocol therefore achieves a constant number of rounds of interaction with the provers (2 rounds) and we have that:

Theorem 16

The NV protocol is an \(\mathsf {MIP^{*}}\) protocol achieving a constant gap between completeness and soundness.

To then boost the completeness-soundness gap to \(1-\epsilon \), for some \(\epsilon > 0\), one can perform a parallel repetition of the protocol \(O(log(1/\epsilon ))\) times.
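The repetition count follows from a standard Chernoff–Hoeffding argument, sketched here: if a single run accepts with probability at least c in the completeness case and at most s in the soundness case, with constant gap \({\Delta } = c - s\), then a majority vote over k independent runs errs with probability at most

$$ e^{-k {\Delta}^{2}/2} \leq \epsilon, \quad \text{which holds whenever} \quad k \geq \frac{2 \; ln(1/\epsilon)}{{\Delta}^{2}} = O(log(1/\epsilon)) $$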

4.4 Summary of Entanglement-Based Protocols

We have seen that having non-communicating provers sharing entangled states allows for verification protocols with a classical client. What all of these protocols have in common is that they all make use of self-testing results. These essentially state that if a number of non-communicating players achieve a near optimal win rate in a non-local game, the strategy they employ in the game is essentially fixed, up to a local isometry. The strategy of the players consists of their shared quantum state as well as their local observables. Hence, self-testing results provide a precise characterisation for both.

This fact is exploited by the surveyed protocols in order to achieve verifiability. Specifically, we have seen that one approach is to define a number of non-local games so that by combining the optimal strategies of these games, the provers effectively perform a universal quantum computation. This is the approach employed by the RUV protocol [18]. Alternatively, the self-testing result can be used to check only for the correct preparation of a specific resource state. This resource state is then used by the provers to perform a quantum computation. How this is done depends on the type of resource state and on how the computation is delegated to the provers. For instance, one possibility is to remotely prepare the resource state used in the VUBQC protocol and then run the verification procedure of that protocol. This is the approach used by the GKW and HPDF protocols [19, 20]. Another possibility is to prepare a cluster state shared among many provers and then have each of those provers measure their states so as to perform an MBQC computation. This approach was used by McKague in his protocol [21]. Lastly, the self-tested resource state can be the ground state of a local Hamiltonian leading to the post hoc approaches employed by the FH and NV protocols.

Table 6 Comparison of entanglement-based protocols

We have seen that, depending on the approach that is used, there will be different requirements for the quantum operations of the provers. All protocols require that, collectively, the provers can perform \(\mathsf {BQP}\) computations; however, individually, some provers need not be universal quantum computers. Related to this is the issue of blindness. Again, depending on the approach used, some protocols achieve blindness and some do not. In particular, the post hoc protocols are not blind, since the computation and the input are revealed to the provers so that they can prepare the Feynman-Kitaev state.

We have also seen that the robustness of the self-testing game impacts the communication complexity of the protocol. Specifically, having robustness which is inverse polynomial in the number of qubits of the self-tested state leads to an inverse polynomial gap between completeness and soundness. In order to make this gap constant, the communication complexity of the protocol has to be made polynomial. This means that most protocols will have a relatively large overhead when compared to prepare-and-send or receive-and-measure protocols. Out of the surveyed protocols, the NV protocol is the only one which utilizes a self-testing result with constant robustness and therefore has a constant completeness-soundness gap. We summarize all of these facts in Table 6.Footnote 45

5 Outlook

5.1 Sub-Universal Protocols

So far we have presented protocols for the verification of universal quantum computations, i.e. protocols in which the provers are assumed to be \(\mathsf {BQP}\) machines. In the near future, however, quantum computers might be more limited in terms of the type of computations that they can perform. Examples of this include the class of so-called instantaneous quantum computations, denoted \(\mathsf {IQP}\), boson sampling and the one-pure-qubit model of quantum computation [1, 2, 84]. While not universal, these examples are still highly relevant since, assuming some plausible complexity-theoretic conjectures hold, they could solve certain problems or sample from certain distributions that are intractable for classical computers. One is therefore faced with the question of how to verify the correctness of outcomes resulting from these models. In particular, when considering an interactive protocol, the prover should be restricted to the corresponding sub-universal class of problems and yet still be able to prove statements to a computationally limited verifier. We will see that many of the considered approaches are adapted versions of the VUBQC protocol from Section 2.2. It should be noted, however, that the protocols themselves are not direct applications of VUBQC. In each instance, the protocol was constructed so as to adhere to the constraints of the model.

The first sub-universal verification protocol is for the one-pure (or one-clean) qubit model. A machine of this type takes as input a state of limited purity (for instance, a system comprising the totally mixed state and a small number of single-qubit pure states), and is able to coherently apply quantum gates. The model was considered in order to reflect the state of a quantum computer with noisy storage. In [85], Kapourniotis, Kashefi and Datta introduced a verification protocol for this model by adapting VUBQC to the one-pure-qubit setting. The verifier still prepares individual pure qubits, as in the original VUBQC protocol, however the prover holds a mixed state of limited purity at all times.Footnote 46 Additionally, the prover can inject or remove pure qubits from his state, during the computation, as long as this does not increase the total purity of the state. The resulting protocol has an inverse polynomial completeness-soundness gap. However, unlike the universal protocols we have reviewed, the constraints on the prover’s state do not allow for the protocol to be repeated. This means that the completeness-soundness gap cannot be boosted through repetition.

Another model, for which verification protocols have been proposed, is that of instantaneous quantum computations, or \(\mathsf {IQP}\) [2, 86]. An \(\mathsf {IQP}\) machine is one which can only perform unitary operations that are diagonal in the \(\mathsf {X}\) basis and therefore commute with each other. The name “instantaneous quantum computation” illustrates that there is no temporal structure to the quantum dynamics [2]. Additionally, the machine is restricted to measurements in the computational basis. It is important to mention that \(\mathsf {IQP}\) does not represent a decision class, like \(\mathsf {BQP}\), but rather a sampling class. The input to a sampling problem is a specification of a certain probability distribution and the output is a sample from that distribution. The class \(\mathsf {IQP}\), therefore, contains all distributions which can be sampled efficiently (in polynomial time) by a machine operating as described above. Under plausible complexity-theoretic assumptions, it was shown that this class is not contained in the set of distributions which can be efficiently sampled by a classical computer [86].

In [2], Shepherd and Bremner proposed a hypothesis test in which a classical verifier is able to check that the prover is sampling from an \(\mathsf {IQP}\) distribution. The verifier cannot, however, check that the prover sampled from the correct distribution. Nevertheless, the protocol serves as a practical tool for demonstrating a quantum computational advantage. The test itself involves an encoding, or obfuscation, scheme which relies on a computational assumption (i.e. it assumes that a particular problem is intractable for \(\mathsf {IQP}\) machines).

Another test of \(\mathsf {IQP}\) problems is provided by the approach of Hangleiter et al., from Section 3.2 [30]. Recall that this was essentially the 1S-Post-hoc protocol for certifying the ground state of a local Hamiltonian. Hangleiter et al. have the prover prepare multiple copies of the Feynman-Kitaev state of an \(\mathsf {IQP}\) circuit. They then use the post hoc protocol to certify that the prover prepared the correct state (measuring local terms from the Hamiltonian associated with that state) and then use one copy to sample from the output of the \(\mathsf {IQP}\) circuit. This is akin to the measurement-only approach of Section 3.1. In a subsequent paper, Bermejo-Vega et al. consider a subclass of sampling problems that are contained in \(\mathsf {IQP}\) and prove that this class is also hard to classically simulate (subject to standard complexity-theoretic assumptions). The problems can be viewed as preparing a certain entangled state and then measuring all qubits in a fixed basis. The authors provide a way to certify that the state prepared is close to the ideal one, by giving an upper bound on the trace distance. Moreover, the measurements required for this state certification can be made using local stabilizer measurements, for the considered architectures and settings [5].

Recently, another scheme has been proposed, by Mills et al. [87], which again adapts the VUBQC protocol to the \(\mathsf {IQP}\) setting. This eliminates the need for computational assumptions; however, it also requires the verifier to have a single-qubit preparation device. In contrast to VUBQC, the verifier need only prepare eigenstates of the \(\mathsf {Y}\) and \(\mathsf {Z}\) operators.

Yet another scheme derived from VUBQC was introduced in [88] for a model known as the Ising spin sampler. This is based on the Ising model, which describes a lattice of interacting spins in the presence of a magnetic field [89]. The Ising spin sampler is a translation invariant Ising model in which one measures the spins thus obtaining samples from the partition function of the model. Just like with \(\mathsf {IQP}\), it was shown in [90] that, based on complexity theoretic assumptions, sampling from the partition function is intractable for classical computers.

Lastly, Disilvestro and Markham proposed a verification protocol [91] for Spekkens’ toy model [92]. This is a local hidden variable theory which is phenomenologically very similar to quantum mechanics, though it cannot produce non-local correlations. The existence of the protocol, again inspired by VUBQC, suggests that Bell non-locality is not a necessary feature for verification protocols, at least in the setting in which the verifier has a trusted quantum device.

5.2 Fault Tolerance

The protocols reviewed in this paper have all been described in an ideal setting in which all quantum devices work perfectly and any deviation from the ideal behaviour is the result of malicious provers. This is not, however, the case in the real world. The primary obstacle, in the development of scalable quantum computers, is noise which affects quantum operations and quantum storage devices. As a solution to this problem, a number of fault tolerant techniques, utilizing quantum error detection and correction, have been proposed. Their purpose is to reduce the likelihood of the quantum computation being corrupted by imperfect gate operations. But while these techniques have proven successful in minimizing errors in quantum computations, it is not trivial to achieve the same effect for verification protocols. To clarify, while we have seen the use of quantum error correcting codes in verification protocols, their purpose was to either boost the completeness-soundness gap (in the case of prepare-and-send protocols), or to ensure an honest behaviour from the provers (in the case of entanglement-based post hoc protocols). The question we ask, therefore, is: how can one design a fault-tolerant verification protocol? Note that this question pertains primarily to protocols in which the verifier is not entirely classical (such as the prepare-and-send or receive-and-measure approaches) or in which one or more provers are assumed to be single-qubit devices (such as the GKW and HPDF protocols). For the remaining entanglement-based protocols, one can simply assume that the provers are performing all of their operations on top of a quantum error correcting code.

Let us consider what happens if, in the prepare-and-send and receive-and-measure protocols, the devices of the verifier and the prover are subject to noise.Footnote 47 If, for simplicity, we assume that the errors on these devices imply that each qubit will have a probability, p, of producing the same outcome as in the ideal setting, when measured, we immediately notice that the probability of n qubits producing the same outcomes scales as \(O(p^{n})\) (for instance, even for \(p = 0.99\), with \(n = 500\) this probability is below \(1\%\)). This means that, even if the prover behaves honestly, the computation is very unlikely to result in the correct outcome [19].

Ideally, one would like the prover to perform his operations in a fault tolerant manner. In other words, the prover’s state should be encoded in a quantum error correcting code, the gates he performs should result in logical operations being applied on his state and he should, additionally, perform error-detection (syndrome) measurements and corrections. But we can see that this is problematic to achieve. Firstly, in prepare-and-send protocols, the computation state of the prover is provided by the verifier. Who should then encode this state in the error-correcting code, the verifier or the prover? It is known that in order to suppress errors in a quantum circuit, \(\mathcal {C}\), each qubit should be encoded in a logical state having \(O(polylog(|\mathcal {C}|))\)-many qubits [93]. This means that if the encoding is performed by the verifier, she must have a quantum computer whose size scales poly-logarithmically with the size of the circuit that she would like to delegate. It is preferable, however, that the verifier has a constant-size quantum computer. Conversely, even if the prover performs the encoding, there is another complication. Since the verifier needs to encrypt the states she sends to the prover, and since her operations are susceptible to noise, the errors acting on these states will have a dependency on her secret parameters. This means that when the prover performs error-detection procedures he could learn information about these secret parameters and compromise the protocol.

For receive-and-measure protocols, one encounters a different obstacle. While the verifier’s measurement device is not actively malicious, if the errors occurring in this device are correlated with the prover’s operations in preparing the state, this can compromise the correctness of the protocol.

A number of fault-tolerant verification protocols have been proposed; however, they all overcome these limitations by making additional assumptions. For instance, one proposal, by Kapourniotis and Datta [88], for making VUBQC fault tolerant, uses a topological error-correcting code described in [58, 59]. The error-correcting code is specifically designed for performing fault-tolerant MBQC computations, which is why it is suitable for the VUBQC protocol. In the proposed scheme, the verifier still prepares single-qubit states, however there is an implicit assumption that the errors on these states are independent of the verifier’s secret parameters. The prover is then instructed to perform a blind MBQC computation in the topological code. The protocol described in [88] is used for a specific type of MBQC computation designed to demonstrate a quantum computational advantage. However, the authors argue that the techniques are general and could be applied for universal quantum computations.

A fault-tolerant version of the measurement-only protocol from Section 3.1 has also been proposed in [95]. The graph state prepared by the prover is encoded in an error-correcting code, such as the topological lattice used by the previous approaches. As in the ‘non-fault-tolerant’ version of the protocol, the prover is instructed to send many copies of this state which the verifier will test using stabilizer measurements. The verifier also uses one copy in order to perform her computation in an MBQC fashion. The protocol assumes that the errors occurring on the verifier’s measurement device are independent of the errors occurring on the prover’s devices.

More details, regarding the difficulties with achieving fault tolerance in \(\mathsf {QPIP}\) protocols, can be found in [26].

5.3 Experiments and Implementations

Protocols for verification will clearly be useful for benchmarking experiments implementing quantum computations. Experiments on a small number of qubits can be verified through brute-force simulation on a classical computer. However, as we have pointed out, this is not scalable, and so, in the long term, it is worthwhile to try and implement verification protocols on these devices. As a result, there have been proof-of-concept experiments that demonstrate the components necessary for verifiable quantum computing.

Inspired by the prepare-and-send VUBQC protocol, Barz et al. implemented a four-photon linear optical experiment, where the four-qubit linear cluster state was constructed from entangled pairs of photons produced through parametric down-conversion [96]. Within this cluster state, in different runs of the experiment, a trap qubit was placed in one of two possible locations, thus demonstrating some of the elements of the VUBQC protocol. However, it should be noted that the trap qubits are placed in the system through measurements on non-trap qubits within the cluster state, i.e. through measurements made on the other three qubits. Because of this, the analysis of the VUBQC protocol cannot be directly translated over to this setting, and bespoke analysis of possible deviations is required. In addition, the presence of entanglement between the photons was demonstrated through Bell tests that are performed blindly. This work also builds on a previous experimental implementation of blind quantum computation by Barz et al. [97].

With regards to receive-and-measure protocols, and in particular the measurement-only protocol of Section 3.1, Greganti et al. implemented some of the elements of these protocols with a four-photon experiment [98], similar to the experiment of Barz et al. mentioned above [96]. This demonstration builds on previous work in the experimental characterisation of stabiliser states [99]. In this case, two four-qubit cluster states were generated: the linear cluster state and the star graph state, where in the latter case the central qubit is entangled pairwise with every other qubit. In order to demonstrate the elements for measurement-only verification, traps can be placed in the state by suitable measurements made by the client. Furthermore, the linear cluster state and the star graph state can be used as computational resources for implementing single-qubit unitaries and an entangling gate, respectively.

Finally, preliminary steps have been taken towards an experimental implementation of the RUV protocol, from Section 4.1. Huang et al. implemented a simplified version of this protocol using sources of pairs of entangled photons [74]. Repeated CHSH tests were performed on thousands of pairs of photons, demonstrating a large violation of the CHSH inequality; a vital ingredient in the protocol of RUV. In between the many rounds of CHSH tests, state tomography, process tomography, and a computation were performed, with the latter being the factorisation of the number 15. Again, all of these elements are ingredients in the protocol; however, the entangled photons are created “on the fly”. In other words, in RUV, two non-communicating provers share a large number of maximally entangled states prior to the full protocol, but in this experiment these states are generated throughout.

6 Conclusions

The realization of the first quantum computers capable of outperforming classical computers at non-trivial tasks is fast approaching. All signs indicate that their development will follow a similar trajectory to that of classical computers. In other words, the first generation of quantum computers will comprise large servers that are maintained and operated by specialists working either in academia, industry or a combination of both. However, unlike with the first super-computers, the Internet opens up the possibility for users, all around the world, to interface with these devices and delegate problems to them. This has already been the case with the 5-qubit IBM machine [100], and more powerful machines are soon to follow [101, 102]. But how will these computationally restricted users be able to verify the results produced by the quantum servers? That is what the field of quantum verification aims to answer. Moreover, as mentioned before and as is outlined in [12], the field also aims to answer the more foundational question of: how do we verify the predictions of quantum mechanics in the large complexity regime?

In this paper, we have reviewed a number of protocols that address these questions. While none of them achieves the ultimate goal of the field, which is to have a classical client verify the computation performed by a single quantum server, each protocol provides a unique approach to verification and has its own advantages and disadvantages. We have seen that these protocols combine elements from a multitude of areas, including cryptography, complexity theory, error correction and the theory of quantum correlations. We have also seen that proof-of-concept experiments for some of these protocols have already been realized.

What all of the surveyed approaches have in common is that none of them is based on computational assumptions; in other words, they all perform verification unconditionally. Recently, however, there have been attempts to reduce the verifier’s requirements by incorporating computational assumptions: that is, the protocols operate under the assumption that certain problems are intractable for quantum computers. We have already mentioned an example: a protocol for verifying the sub-universal sampling class of \(\mathsf {IQP}\) computations, in which the verifier is entirely classical. Other examples include protocols for quantum fully homomorphic encryption [103, 104]. In these protocols, a client delegates a quantum computation to a server while trying to keep the input to the computation hidden. The use of computational assumptions allows these protocols to achieve this functionality using only one round of back-and-forth communication. However, in the referenced schemes, the client does require some minimal quantum capabilities. A recent modification of these schemes has been proposed in order to make the protocols verifiable as well [105]. Additionally, an even more recent paper introduces a protocol for quantum fully homomorphic encryption with an entirely classical client (again, based on computational assumptions) [106]. We can therefore see a new direction emerging in the field of delegated quantum computation. This recent success in developing protocols based on computational assumptions could very well lead to the first single-prover verification protocol with a classical client.

Another new direction, especially pertaining to entanglement-based protocols, is the development of self-testing results achieving constant robustness. This started with the work of Natarajan and Vidick, which was the basis of their protocol from Section 4.3 [23]. We saw, in Section 4, that all entanglement-based protocols rely, in one way or another, on self-testing results. Consequently, the robustness of these results greatly impacts the communication complexity and overhead of these protocols. Since most protocols were based on results having inverse polynomial robustness, this led to prohibitively large requirements in terms of quantum resources (see Table 6). However, subsequent work by Coladangelo et al., following up on the Natarajan and Vidick result, has led to two entanglement-based protocols which achieve near-linear overhead [24].Footnote 48 This is a direct consequence of using a self-testing result with constant robustness and combining it with the Test-or-Compute protocol of Broadbent from Section 2.3. Of the two protocols proposed by Coladangelo et al., only one is blind, so an open problem arising from their result is whether the second protocol can also be made blind. Another question is whether the protocols can be further optimized so that only one prover is required to perform universal quantum computations, in the spirit of the GKW protocol from Section 4.1.

We conclude by listing a number of other open problems that have been raised by the field of quantum verification. The resolution of these problems is relevant not just to quantum verification but to quantum information theory as a whole.

  • While the problem of a classical verifier delegating computations to a single prover is the main open problem of the field, we emphasize a more particular instance of this problem: can the proof that any problem in \(\mathsf {PSPACE}\)Footnote 49 admits an interactive proof system be adapted to show that any problem in \(\mathsf {BQP}\) admits an interactive proof system with a \(\mathsf {BQP}\) prover? The proof that \(\mathsf {PSPACE} = \mathsf {IP}\) (in particular the \(\mathsf {PSPACE} \subseteq \mathsf {IP}\) direction) uses error-correcting properties of low-degree polynomials to give a verification protocol for a \(\mathsf {PSPACE}\)-complete problem [107] (see the sketch after this list for an illustration of this property). We have seen that the Poly-QAS VQC scheme, presented in Section 2.1, also makes use of error-correcting properties of low-degree polynomials in order to perform quantum verification (albeit with a quantum error correcting code and a quantum verifier). Can these ideas lead to a classical verifier protocol for \(\mathsf {BQP}\) problems with a \(\mathsf {BQP}\) prover?

  • In all existing entanglement-based protocols, one assumes that the provers are not allowed to communicate during the protocol. However, this assumption is not enforced by physical constraints. Is it therefore possible to have an entanglement-based verification protocol in which the provers are space-like separated?Footnote 50 Note that, since all existing protocols require the verifier to query the two (or more) provers adaptively, it is not directly possible to make the provers space-like separated.

  • What is the optimal overhead (in terms of either the communication complexity or the resources of the verifier) in verification protocols? For all types of verification protocols, we have seen that, for a fixed completeness-soundness gap, the best achieved communication complexity is linear. For the prepare-and-send case, is it possible to have a protocol in which the verifier need only prepare a poly-logarithmic number of single qubits (in the size of the computation)? For the entanglement-based case, can the classical verifier send only poly-logarithmic sized questions to the provers? This latter question is related to the quantum \(\mathsf {PCP}\) conjecture [108].

  • Are there other models of quantum computation that are suitable for developing verification protocols? We have seen that the way in which we view quantum computations has a large impact on how we design verification protocols and what characteristics those protocols will have. Specifically, the separation between classical control and quantum resources in MBQC led to VUBQC, while the \(\mathsf {QMA}\)-completeness of the local Hamiltonian problem led to the post hoc approaches. Of course, all universal models are equivalent in terms of the computations which can be performed; however, each model provides a particular insight into quantum computation which can prove useful when devising new protocols. Can other models of quantum computation, such as the adiabatic model, the anyonic model etc., provide new insights?

  • We have seen that while certain verification protocols employ error-correcting codes, these are primarily used for boosting the completeness-soundness gap. Alternatively, for the protocols that do incorporate fault tolerance, in order to cope with noisy operations, there are additional assumptions, such as the noise in the verifier’s device being uncorrelated with the noise in the prover’s devices. The question, therefore, is: can one have a fault tolerant verification protocol, with a minimal quantum verifier, in the most general setting possible? By this we mean that there are no restrictions on the noise affecting the quantum devices in the protocol, other than those resulting from the standard assumptions of fault tolerant quantum computation (constant noise rate, local errors etc.). This question is addressed in more detail in [26]. Note that the question refers in particular to prepare-and-send and receive-and-measure protocols, since entanglement-based approaches are implicitly fault tolerant (one can assume that the provers perform their computations on top of error correcting codes).
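Returning to the first open problem above, the error-correcting property of low-degree polynomials underlying the \(\mathsf {PSPACE} \subseteq \mathsf {IP}\) proof can be stated very simply: two distinct polynomials of degree at most d over a field of size p agree on at most d points, so a verifier who evaluates a claimed polynomial at a uniformly random point detects any tampering with probability at least \(1 - d/p\). The following toy sketch illustrates only this property (the field size, degree and names are for illustration; it is not the actual protocol of [107]):

```python
import random

p = 2**31 - 1   # field size (a Mersenne prime), chosen for illustration
d = 5           # degree bound on the polynomials being compared

def poly_eval(coeffs, x):
    """Evaluate a polynomial (coefficients, low to high degree) at x, mod p."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % p
    return acc

honest = [random.randrange(p) for _ in range(d + 1)]
tampered = honest.copy()
tampered[3] = (tampered[3] + 1) % p   # a prover deviating in one coefficient

r = random.randrange(p)               # the verifier's random challenge point
detected = poly_eval(honest, r) != poly_eval(tampered, r)
# Pr[detected] >= 1 - d/p, since honest - tampered is a nonzero polynomial
# of degree at most d and hence has at most d roots in the field.
```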