1 Introduction

In this paper, we aim to automatically synthesize coupling proofs for probabilistic programs and properties. Originally designed for proving properties comparing two probabilistic programs—so-called relational properties—a coupling proof describes how to correlate two executions of the given programs, simulating both programs with a single probabilistic program. By reasoning about this combined, coupled process, we can often give simpler proofs of probabilistic properties for the original pair of programs.

A number of recent works have leveraged this idea to verify relational properties of randomized algorithms, including differential privacy [8, 10, 12], security of cryptographic protocols [9], convergence of Markov chains [11], robustness of machine learning algorithms [7], and more. Recently, Barthe et al. [6] showed how to reduce certain non-relational properties—which describe a single probabilistic program—to relational properties of two programs, by duplicating the original program or by sequentially composing it with itself.

While coupling proofs can simplify reasoning about probabilistic properties, they are not so easy to use; most existing proofs are carried out manually in relational program logics using interactive theorem provers. In a nutshell, the main challenge in a coupling proof is to select a correlation for each pair of corresponding sampling instructions, aiming to induce a particular relation between the outputs of the coupled process; this relation then implies the desired relational property. Just like finding inductive invariants in proofs for deterministic programs, picking suitable couplings in proofs can require substantial ingenuity.

To ease this task, we recently showed how to cast the search for coupling proofs as a program synthesis problem [1], giving a way to automatically find sophisticated proofs of differential privacy previously beyond the reach of automated verification. In the present paper, we build on this idea and present a general technique for constructing coupling proofs, targeting uniformity and probabilistic independence properties. Both are fundamental properties in the analysis of randomized algorithms, either in their own right or as prerequisites to proving more sophisticated guarantees; uniformity states that a randomized expression takes on all values in a finite range with equal probability, while probabilistic independence states that two probabilistic expressions are somehow uncorrelated—learning the value of one reveals no additional information about the value of the other.

Our techniques are inspired by the automated proofs of differential privacy we considered previously [1], but the present setting raises new technical challenges.

  • Non-lockstep execution. To prove differential privacy, the behavior of a single program is compared on two related inputs. To take advantage of the identical program structure, previous work restricted attention to synchronizing proofs, where the two executions can be analyzed assuming they follow the same control flow path. In contrast, coupling proofs for uniformity and independence often require relating two programs with different shapes, possibly following completely different control flows [6].

    To overcome this challenge, we take a different approach. Instead of incrementally finding couplings for corresponding pairs of sampling instructions—requiring the executions to be tightly synchronized—we first lift all sampling instructions to the front of the program and pick a coupling once and for all. The remaining execution of both programs can then be encoded separately, with no need for lockstep synchronization (at least for loop-free programs—looping programs require a more careful treatment).

  • Richer space of couplings. The heart of a coupling proof is selecting—among multiple possible options—a particular correlation for each pair of random sampling instructions. Random sampling in differentially private programs typically uses highly domain-specific distributions, like the Laplace distribution, which support a small number of useful couplings. Our prior work leveraged this feature to encode a collection of primitive couplings into the synthesis system. However, this is no longer possible when programs sample from distributions supporting richer couplings, like the uniform distribution. Since our approach coalesces all sampling instructions at the beginning of the program (more generally, at the head of the loop), we also need to find couplings for products of distributions.

    We address this problem in two ways. First, we allow couplings of two sampling instructions to be specified by an injective function f from one range to another. Second, we impose requirements—encoded as standard logical constraints—to ensure that f indeed represents a coupling; we call such couplings f-couplings.

  • More general class of properties. Finally, we consider a broad class of properties rather than just differential privacy. While we focus on uniformity and independence for concreteness, our approach can establish general equalities between products of probabilities, i.e., probabilistic properties of the form

    $$\begin{aligned} \prod _{i = 1}^m\Pr [ e_i \in E_i ] = \prod _{j = 1}^n\Pr [ e_j' \in E_j' ], \end{aligned}$$

    where \(e_i\) and \(e_j'\) are program expressions in the first and second programs respectively, and \(E_i\) and \(E_j'\) are predicates. As an example, we automatically establish a key step in the proof of Bertrand’s Ballot theorem [20].

Paper Outline. After overviewing our technique on a motivating example (Sect. 2), we detail our main contributions.

  • Proof technique: We introduce f-coupled postconditions, a form of postcondition for two probabilistic programs where random sampling instructions in the two programs are correlated by a function f. Using f-coupled postconditions, we present proof rules for establishing uniformity and independence of program variables, fundamental properties in the analysis of randomized algorithms (Sect. 3).

  • Reduction to constraint-based synthesis: We demonstrate how to automatically find coupling proofs by transforming our proof rules into logical constraints of the form \(\exists f.\ \forall X.\ \varphi \)—a synthesis problem. A satisfiable constraint shows the existence of a function f—essentially, a compact encoding of a coupling proof—implying the target property (Sect. 4).

  • Extension to looping programs: We extend our technique to reason about loops, by requiring synchronization at the loop head and finding a coupled invariant (Sect. 5).

  • Implementation and evaluation: We implement our technique and evaluate it on several case studies, automatically constructing coupling proofs for interesting properties of a variety of algorithms (Sect. 6).

We conclude by comparing our technique with related approaches (Sect. 7).

2 Overview and Illustration

2.1 Introducing f-Couplings

A Simple Example. We begin by illustrating f-couplings over two identical Bernoulli distributions, denoted by the following probability mass functions: \(\mu _1(x) = \mu _2(x) = 0.5\) for all \(x \in \mathbb {B}\) (where \(\mathbb {B} = \{ true , false \}\)). In other words, the distribution \(\mu _i\) returns \( true \) with probability 0.5, and \( false \) with probability 0.5.

An f-coupling for \(\mu _1,\mu _2\) is a function \(f: \mathbb {B}\rightarrow \mathbb {B}\) from the domain of the first distribution (\(\mathbb {B}\)) to the domain of the second (also \(\mathbb {B}\)); f should be injective and satisfy the monotonicity property: \(\mu _1(x) \le \mu _2(f(x))\) for all \(x \in \mathbb {B}\). In other words, f relates each element \(x \in \mathbb {B}\) with an element f(x) that has an equal or larger probability in \(\mu _2\). For example, consider the function \(f_\lnot \) defined as

$$\begin{aligned} f_\lnot (x) = \lnot x . \end{aligned}$$

This function relates \( true \) in \(\mu _1\) with \( false \) in \(\mu _2\), and vice versa. Observe that \(\mu _1(x) \leqslant \mu _2(f_\lnot (x))\) for all \(x \in \mathbb {B}\), satisfying the definition of an \(f_\lnot \)-coupling. We write \(\mu _1 \leftrightsquigarrow ^{f_\lnot } \mu _2\) when there is an \(f_\lnot \)-coupling for \(\mu _1\) and \(\mu _2\).

Using f-Couplings. An f-coupling can imply useful properties about the distributions \(\mu _1\) and \(\mu _2\). For example, suppose we want to prove that \(\mu _1( true ) = \mu _2( false )\). The fact that there is an \(f_\lnot \)-coupling of \(\mu _1\) and \(\mu _2\) immediately implies the equality: by the monotonicity property,

$$\begin{aligned} \mu _1( true ) \leqslant \mu _2(f_\lnot ( true )) = \mu _2( false )\\ \mu _1( false ) \leqslant \mu _2(f_\lnot ( false )) = \mu _2( true ) \end{aligned}$$

and therefore \(\mu _1( true ) = \mu _2( false )\). More generally, it suffices to find an f-coupling of \(\mu _1\) and \(\mu _2\) such that

$$\begin{aligned} \underbrace{\{(x,f(x)) \mid x\in \mathbb {B}\}}_{\varPsi _f} \subseteq \{(z_1,z_2) \mid z_1 = true \iff z_2 = false \}, \end{aligned}$$

where \(\varPsi _f\) is induced by f; in particular, the \(f_\lnot \)-coupling satisfies this property.
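
For this small example, the two conditions on f (injectivity and monotonicity) and the containment of \(\varPsi _{f_\lnot }\) in the target relation can be checked mechanically. The following Python sketch is purely illustrative—the distributions, the function, and the relation are hard-coded and are not part of our formal development:

```python
# Illustrative check of the f_not-coupling conditions for two fair Bernoulli
# distributions, plus containment of Psi_f in the target relation.
B = [True, False]
mu1 = {True: 0.5, False: 0.5}
mu2 = {True: 0.5, False: 0.5}

def f_not(x):
    return not x

# (i) injectivity of f_not on B
assert len({f_not(x) for x in B}) == len(B)
# (ii) monotonicity: mu1(x) <= mu2(f_not(x)) for all x
assert all(mu1[x] <= mu2[f_not(x)] for x in B)

# Psi_f lies inside {(z1, z2) | z1 = true <=> z2 = false},
# so the coupling implies mu1(true) = mu2(false).
psi_f = {(x, f_not(x)) for x in B}
assert all((z1 is True) == (z2 is False) for z1, z2 in psi_f)
```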

Fig. 1. Simulating a fair coin using an unfair one

2.2 Simulating a Fair Coin

Now, let’s use f-couplings to prove more interesting properties. Consider the program fairCoin in Fig. 1; the program simulates a fair coin by flipping a possibly biased coin that returns \( true \) with probability \(p \in (0,1)\), where p is a program parameter. Our goal is to prove that for any p, the output of the program is a uniform distribution—it simulates a fair coin. We consider two separate copies of fairCoin generating distributions \(\mu _1\) and \(\mu _2\) over the returned value x for the same bias p, and we construct a coupling showing \(\mu _1( true ) = \mu _2( false )\), that is, heads and tails have equal probability.

Constructing f-Couplings. At first glance, it is unclear how to construct an f-coupling; unlike the distributions in our simple example, we do not have a concrete description of \(\mu _1\) and \(\mu _2\) as uniform distributions (indeed, this is what we are trying to establish). The key insight is that we do not need to construct our coupling in one shot. Instead, we can specify a coupling for the concrete, primitive sampling instructions in the body of the loop—which we know sample from \(\mathsf {bern}(p)\)—and then extend it to an f-coupling for the whole loop and \(\mu _1, \mu _2\).

For each copy of fairCoin, we coalesce the two sampling statements inside the loop into a single sampling statement from the product distribution:

$$\begin{aligned} x, y \sim \mathsf {bern}(p) \times \mathsf {bern}(p) \end{aligned}$$

We have two such joint distributions \(\mathsf {bern}(p) \times \mathsf {bern}(p)\) to couple, one from each copy of fairCoin. We use the following function \(f_ swap : \mathbb {B}^2 \rightarrow \mathbb {B}^2\):

$$\begin{aligned} f_{ swap }(x,y) = (y,x) \end{aligned}$$

which exchanges the values of x and y. Since this is an injective function satisfying the monotonicity property

$$\begin{aligned} (\mathsf {bern}(p) \times \mathsf {bern}(p))(x, y) \leqslant (\mathsf {bern}(p) \times \mathsf {bern}(p))(f_{ swap }(x, y)) \end{aligned}$$

for all \((x, y) \in \mathbb {B} \times \mathbb {B}\) and \(p \in (0,1)\), we have an \(f_{ swap }\)-coupling for the two copies of \(\mathsf {bern}(p) \times \mathsf {bern}(p)\).
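
Both conditions for \(f_{ swap }\) can again be checked directly. The sketch below is illustrative only; the bias values tested are arbitrary, since p is a symbolic program parameter in our actual development:

```python
# Illustrative check that f_swap is an f-coupling of bern(p) x bern(p) with
# itself: f_swap is injective and pointwise monotone on the product pmf.
from itertools import product

def bern_pmf(p, x):
    return p if x else 1 - p

def product_pmf(p, x, y):
    return bern_pmf(p, x) * bern_pmf(p, y)

def f_swap(x, y):
    return (y, x)

pairs = list(product([True, False], repeat=2))
assert len({f_swap(*xy) for xy in pairs}) == len(pairs)            # injective
for p in (0.1, 0.3, 0.5, 0.9):                                      # arbitrary biases
    assert all(product_pmf(p, x, y) <= product_pmf(p, *f_swap(x, y))
               for x, y in pairs)                                    # monotone
    # the coupled samples also agree on the loop guard x = y:
    assert all((x == y) == (f_swap(x, y)[0] == f_swap(x, y)[1]) for x, y in pairs)
```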

Analyzing the Loop. To extend an \(f_{body}\)-coupling on loop bodies to the entire loop, it suffices to check a synchronization condition: the coupling from \(f_{body}\) must ensure that the loop guards are equal, so the two executions synchronize at the loop head. This holds in our case: every time the first program executes the statement \(x, y \sim \mathsf {bern}(p) \times \mathsf {bern}(p)\), we can think of (x, y) as non-deterministically set to some values (a, b), and the corresponding variables in the second program as set to \(f_{ swap }(a,b) = (b,a)\). The loop guards in the two programs are equivalent under this choice, since \(a = b\) is equivalent to \(b = a\), hence we can analyze the loops in lockstep. In general, couplings enable us to relate samples from a pair of probabilistic assignments as if they were selected non-deterministically, often avoiding quantitative reasoning about probabilities.

Our constructed coupling for the loop guarantees that (i) both programs exit the loop at the same time, and (ii) when the two programs exit the loop, x takes opposite values in the two programs. In other words, there is an \(f_{ loop }\)-coupling of \(\mu _1\) and \(\mu _2\) for some function \(f_{ loop }\) such that

$$\begin{aligned} \varPsi _{f_{ loop }} \subseteq \{(z_1,z_2) \mid z_1 = true \iff z_2 = false \}, \end{aligned}$$
(1)

implying \(\mu _1( true ) = \mu _2( false )\). Since both distributions are output distributions of \(\textsf {fairCoin}\)—hence \(\mu _1 = \mu _2\)—we conclude that fairCoin simulates a fair coin.

Note that our approach does not need to construct \(f_{loop}\) concretely—this function may be highly complex. Instead, we only need to show that \(\varPsi _{f_{ loop }}\) (or some over-approximation) lies inside the target relation in Formula 1.

Achieving Automation. Observe that once we have fixed an \(f_ body \)-coupling for the sampling instructions inside the loop body, checking that the \(f_ loop \)-coupling satisfies the conditions for uniformity (Formula 1) is essentially a program verification problem. Therefore, we can cast the problem of constructing a coupling proof as a logical problem of the form \(\exists f.\ \varphi \), where f is the coupling function we need to discover and \(\varphi \) is a constraint ensuring that (i) f indeed represents an f-coupling, and (ii) the f-coupling implies uniformity. Thus, we can use established synthesis-verification techniques to solve the resulting constraints (see, e.g., [2, 13, 27]).

3 A Proof Rule for Coupling Proofs

In this section, we develop a technique for constructing couplings and formalize proof rules for establishing uniformity and independence properties over program variables. We begin with background on probability distributions and couplings.

3.1 Distributions and Couplings

Distributions. A function \(\mu : B\rightarrow [0,1]\) defines a distribution over a countable set \(B\) if \(\sum _{b\in B} \mu (b) = 1\). We will often write \(\mu (A)\) for a subset \(A \subseteq B\) to mean \(\sum _{x \in A} \mu (x)\). We write \( dist (B)\) for the set of all distributions over \(B\).

We will need a few standard constructions on distributions. First, the support of a distribution \(\mu \) is defined as \( supp (\mu ) = \{b\in B\mid \mu (b) > 0\}\). Second, for a distribution on pairs \(\mu \in dist (B_1 \times B_2)\), the first and second marginals of \(\mu \), denoted \(\pi _1(\mu )\) and \(\pi _2(\mu )\) respectively, are distributions over \(B_1\) and \(B_2\):

$$\begin{aligned} \pi _1(\mu )(b_1) \triangleq \sum _{b_2 \in B_2}\mu (b_1,b_2) \qquad \qquad \qquad \quad \pi _2(\mu )(b_2) \triangleq \sum _{b_1 \in B_1}\mu (b_1,b_2) . \end{aligned}$$

Couplings. Let \(\varPsi \subseteq B_1 \times B_2\) be a binary relation. A \(\varPsi \)-coupling for distributions \(\mu _1\) and \(\mu _2\) over \(B_1\) and \(B_2\) is a distribution \(\mu \in dist (B_1 \times B_2)\) with (i) \(\pi _1(\mu ) = \mu _1\) and \(\pi _2(\mu ) = \mu _2\); and (ii) \( supp (\mu ) \subseteq \varPsi \). We write \(\mu _1 \leftrightsquigarrow ^{\varPsi } \mu _2\) when there exists a \(\varPsi \)-coupling between \(\mu _1\) and \(\mu _2\).
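
For finite distributions, the coupling conditions are easy to check directly. The following sketch is illustrative (the dictionary-based representation is ours, not part of the formal development); it verifies the two marginal conditions and the support condition for a candidate joint distribution, and then checks the \(f_\lnot \)-induced witness from Sect. 2.1:

```python
# Checking whether mu is a Psi-coupling of mu1 and mu2 over finite domains.
# Distributions map values (or pairs of values) to probabilities.
from collections import defaultdict

def marginals(mu):
    m1, m2 = defaultdict(float), defaultdict(float)
    for (b1, b2), pr in mu.items():
        m1[b1] += pr
        m2[b2] += pr
    return dict(m1), dict(m2)

def is_coupling(mu, mu1, mu2, psi, tol=1e-9):
    m1, m2 = marginals(mu)
    marg_ok = (all(abs(m1.get(b, 0.0) - p) <= tol for b, p in mu1.items()) and
               all(abs(m2.get(b, 0.0) - p) <= tol for b, p in mu2.items()))
    supp_ok = all(pair in psi for pair, pr in mu.items() if pr > 0)
    return marg_ok and supp_ok

# The f_not-coupling of two fair Bernoullis: put mass mu1(b) on each (b, not b).
mu1 = mu2 = {True: 0.5, False: 0.5}
witness = {(True, False): 0.5, (False, True): 0.5}
psi = {(True, False), (False, True)}
print(is_coupling(witness, mu1, mu2, psi))   # True
```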

An important fact is that an injective function \(f: B_1 \rightarrow B_2\) where \(\mu _1(b) \leqslant \mu _2(f(b))\) for all \(b\in B_1\) induces a coupling between \(\mu _1\) and \(\mu _2\); this follows from a general theorem by Strassen [28], see also [23]. We write \(\mu _1 \leftrightsquigarrow ^{f} \mu _2\) for \(\mu _1 \leftrightsquigarrow ^{\varPsi _f} \mu _2\), where \(\varPsi _f = \{(b_1, f(b_1)) \mid b_1 \in B_1\}\). The existence of a coupling can imply various useful properties about the two distributions. The following general fact will be the most important for our purposes—couplings can prove equalities between probabilities.

Proposition 1

Let \(E_1 \subseteq B_1\) and \(E_2 \subseteq B_2\) be two events, and let \(\varPsi _= \triangleq \{(b_1, b_2) \mid b_1 \in E_1 \iff b_2 \in E_2\}\). If \(\mu _1 \leftrightsquigarrow ^{\varPsi _=} \mu _2\), then \(\mu _1(E_1) = \mu _2(E_2)\).

3.2 Program Model

Our program model uses an imperative language with probabilistic assignments, where we can draw a random value from primitive distributions. We consider the easier case of loop-free programs first; we consider looping programs in Sect. 5.

Syntax. A (loop-free) program P is defined using the following grammar:

$$\begin{aligned} P \,{:}{:}{=}\ v \leftarrow exp \ \mid \ v \sim dexp \ \mid \ \texttt {if }\, bexp \,\texttt { then }\, P\, \texttt { else }\, P \ \mid \ P ; P \end{aligned}$$

where \(V\) is the set of variables that can appear in \(P\), \( exp \) is an expression over \(V\), and \( bexp \) is a Boolean expression over \(V\). A probabilistic assignment samples from a probability distribution defined by expression \( dexp \); for instance, \( dexp \) might be \(\mathsf {bern}(p)\), the Bernoulli distribution with probability p of returning \( true \). We use \(V^I\subseteq V\) to denote the set of input program variables, which are never assigned to. All other variables are assumed to be defined before use.

We make a few simplifying assumptions. First, distribution expressions only mention input variables \(V^I\); e.g., in the example above, \(\mathsf {bern}(p)\), we have \(p\in V^I\). Also, all programs are in static single assignment (ssa) form—each variable is assigned to only once—and are well-typed. These assumptions are relatively minor; they can be verified using existing tools, or lifted entirely at the cost of slightly more complexity in our encoding.

Semantics. A state s of a program \(P\) is a valuation of all of its variables, represented as a map from variables to values, e.g., s(x) is the value of \(x\in V\) in s. We extend this mapping to expressions: \(s( exp )\) is the valuation of \( exp \) in s, and \(s( dexp )\) is the probability distribution defined by \( dexp \) in s.

We use S to denote the set of all possible program states. As is standard [24], we can give a semantics of \(P\) as a function \(\llbracket P\rrbracket : S \rightarrow dist (S)\) from states to distributions over states. For an output distribution \(\mu = \llbracket P\rrbracket (s)\), we will abuse notation and write, e.g., \(\mu (x = y)\) to denote the probability of the event that the program returns a state s where \(s(x = y) = true \).

Self-Composition. We will sometimes need to simulate two separate executions of a program with a single probabilistic program. Given a program \(P\), we use \(P_i\) to denote a program identical to \(P\) but with all variables tagged with the subscript i. We can then define the self-composition: given a program \(P\), the program \(P_1; P_2\) first executes \(P_1\), and then executes the (separate) copy \(P_2\).

3.3 Coupled Postconditions

We are now ready to present the f-coupled postcondition, an operator for approximating the outputs of two coupled programs.

Strongest Postcondition. We begin by defining a standard strongest postcondition operator over single programs, treating probabilistic assignments as no-ops. Given a set of states \(Q \subseteq S\), we define \(\mathsf {post}\) as follows:

$$\begin{aligned} \mathsf {post}(v \leftarrow exp , Q)&= \{s[v \mapsto s( exp )] \mid s \in Q\}\\ \mathsf {post}(v \sim dexp , Q)&= Q\\ \mathsf {post}(\texttt {if }\;\, bexp \;\, \texttt { then }\; \,P \;\, \texttt { else }\; \,P',\;\, Q)&= \{s' \mid s \in Q, s' \in \mathsf {post}(P,s), s( bexp ) = true \}\\&\cup \{s' \mid s \in Q, s' \in \mathsf {post}(P',s), s( bexp ) = false \}\\ \mathsf {post}(P;P',Q)&= \mathsf {post}(P',\mathsf {post}(P,Q)) \end{aligned}$$

where \(s[v \mapsto c]\) is state s with variable v mapped to the value c.
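
A direct implementation of \(\mathsf {post}\) over a small program representation may help make the definition concrete. The sketch below is illustrative only—the tuple-based AST and the state representation are our own choices, not part of the formal development:

```python
# A minimal sketch of the strongest-postcondition operator `post`.
# Statements are tuples; expressions and guards are Python functions on states;
# states are dicts, and sets of states are lists of dicts.
def post(stmt, states):
    kind = stmt[0]
    if kind == 'assign':                      # ('assign', v, exp)
        _, v, exp = stmt
        return [{**s, v: exp(s)} for s in states]
    if kind == 'sample':                      # ('sample', v, dexp): a no-op for post
        return states
    if kind == 'if':                          # ('if', bexp, P, P')
        _, bexp, p_then, p_else = stmt
        return (post(p_then, [s for s in states if bexp(s)]) +
                post(p_else, [s for s in states if not bexp(s)]))
    if kind == 'seq':                         # ('seq', P, P')
        _, p1, p2 = stmt
        return post(p2, post(p1, states))
    raise ValueError(f'unknown statement kind: {kind}')

# Example: x ~ bern(0.5); y <- not x, started from both valuations of x.
prog = ('seq', ('sample', 'x', 'bern(0.5)'),
               ('assign', 'y', lambda s: not s['x']))
print(post(prog, [{'x': True}, {'x': False}]))
# [{'x': True, 'y': False}, {'x': False, 'y': True}]
```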

f-Coupled Postcondition. We rewrite programs so that all probabilistic assignments are combined into a single probabilistic assignment to a vector of variables appearing at the beginning of the program, i.e., an assignment of the form \(\varvec{v} \sim dexp \) in P and \(\varvec{v}' \sim dexp '\) in \(P'\), where \(\varvec{v},\varvec{v}'\) are vectors of variables. For instance, we can combine \(x \sim \mathsf {bern}(0.5); y\sim \mathsf {bern}(0.5)\) into the single statement \(x,y \sim \mathsf {bern}(0.5)\times \mathsf {bern}(0.5)\).

Let \(B, B'\) be the domains of \(\varvec{v}\) and \(\varvec{v}'\), \(f: B\rightarrow B'\) be a function, and \(Q \subseteq S \times S'\) be a set of pairs of input states, where S and \(S'\) are the states of \(P\) and \(P'\), respectively. We define the f-coupled postcondition operator \(\mathsf {cpost}\) as

$$\begin{aligned} \mathsf {cpost}(P, P', Q, f) \triangleq&\ \{ (t, t') \mid (s,s') \in Q,\ \varvec{b} \in B,\ t \in \mathsf {post}(P_D, \{ s[\varvec{v} \mapsto \varvec{b}] \}),\ t' \in \mathsf {post}(P'_D, \{ s'[\varvec{v}' \mapsto f(\varvec{b})] \}) \},\\&\ \text {where } s( dexp ) \leftrightsquigarrow ^{f} s'( dexp ') \text { for all } (s,s') \in Q , \end{aligned}$$
(2)

and \(P_D, P'_D\) denote the (deterministic) parts of \(P\) and \(P'\) following the initial sampling statements.

The intuition is that the values drawn from sampling assignments in both programs are coupled using the function f. Note that this operation non-deterministically assigns \(\varvec{v}\) from P with some values \(\varvec{b}\), and \(\varvec{v}'\) with \(f(\varvec{b})\). Then, the operation simulates the executions of the two programs. Formula 2 states that there is an f-coupling for every instantiation of the two distributions used in probabilistic assignments in both programs.

Example 1

Consider the simple program P defined as \(x \sim \mathsf {bern}(0.5); x = \lnot x\) and let \(f_\lnot (x) = \lnot x\). Then, \(\mathsf {cpost}(P,P,Q,f_\lnot )\) is \(\{(s,s') \mid s(x) = \lnot s'(x)\}\).
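
Continuing the sketch of \(\mathsf {post}\) above, the f-coupled postcondition for loop-free programs of this shape can be computed by enumerating the coupled samples. Again, the representation is illustrative:

```python
# A sketch of cpost for two programs of the form  v ~ dexp; P_D  and
# v' ~ dexp'; P'_D, reusing `post` from the earlier sketch.  `dom` is the
# (finite) domain B of the sampled variables.
def cpost(det1, det2, v1, v2, Q, f, dom):
    pairs = []
    for s1, s2 in Q:
        for b in dom:
            for t1 in post(det1, [{**s1, v1: b}]):
                for t2 in post(det2, [{**s2, v2: f(b)}]):
                    pairs.append((t1, t2))
    return pairs

# Example 1: P is  x ~ bern(0.5); x <- not x,  coupled with itself via f_not.
det = ('assign', 'x', lambda s: not s['x'])     # the deterministic remainder P_D
Q = [({}, {})]                                  # a single pair of input states
out = cpost(det, det, 'x', 'x', Q, lambda b: not b, [True, False])
assert all(t1['x'] == (not t2['x']) for t1, t2 in out)
```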

The main soundness theorem shows there is a probabilistic coupling of the output distributions with support contained in the coupled postcondition (we defer all proofs to the full version of this paper).

Theorem 1

Let programs P and \(P'\) be of the form \(\varvec{v} \sim dexp ; P_D\) and \(\varvec{v}' \sim dexp '; P'_D\), for deterministic programs \(P_D, P'_D\). Given a function \(f : B \rightarrow B'\) satisfying Formula 2, for every \((s,s') \in S \times S'\) we have \(\llbracket P\rrbracket (s) \leftrightsquigarrow ^{\varPsi } \llbracket P'\rrbracket (s')\), where \(\varPsi = \mathsf {cpost}(P, P', (s,s'), f)\).

3.4 Proof Rules for Uniformity and Independence

We are now ready to demonstrate how to establish uniformity and independence of program variables using f-coupled postconditions. We will continue to assume that random sampling commands have been lifted to the front of each program, and that f satisfies Formula 2.

Uniformity. Consider a program \(P\) and a variable \(v^* \in V\) of finite, non-empty domain \(B\). Let \(\mu = \llbracket P\rrbracket (s)\) for some state \(s\in S\). We say that variable \(v^*\) is uniformly distributed in \(\mu \) if \(\mu (v^* = b) = \frac{1}{|B|}\) for every \(b\in B\).

The following theorem connects uniformity with f-coupled postconditions.

Theorem 2

(Uniformity). Consider a program P with \(\varvec{v} \sim dexp \) as its first statement and a designated return variable \(v^* \in V\) with domain \(B\). Let \(Q = \{(s,s) \mid s \in S\}\) be the input relation. If we have

$$\begin{aligned} \mathsf {cpost}(P, P, Q, f) \subseteq \{ (s, s') \in S \times S \mid s(v^*) = b \iff s'(v^*) = b' \} \end{aligned}$$

for all \(b,b' \in B\), then for any input \(s \in S\) the final value of \(v^*\) is uniformly distributed over \(B\) in \(\llbracket P\rrbracket (s)\).

The intuition is that in the two f-coupled copies of P, the first \(v^*\) is equal to b exactly when the second \(v^*\) is equal to \(b'\). Hence, the probability of returning b in the first copy and \(b'\) in the second copy are the same. Repeating for every pair of values \(b,b'\), we conclude that \(v^*\) is uniformly distributed.

Example 2

Recall Example 1 and let \(b = true \) and \(b' = false \). We have

$$\mathsf {cpost}(P,P,Q,f_\lnot ) \subseteq \{ (s, s') \in S \times S \mid s(x) = b \iff s'(x) = b'\}.$$

This is sufficient to prove uniformity (the case with \(b = b'\) is trivial).

Independence. We now present a proof rule for independence. Consider a program P and two variables \(v^*,w^* \in V\) with domains \(B\) and \(B'\), respectively. Let \(\mu = \llbracket P\rrbracket (s)\) for some state \(s\in S\). We say that \(v^*,w^*\) are probabilistically independent in \(\mu \) if \(\mu (v^* = b\wedge w^* = b') = \mu (v^* = b) \cdot \mu (w^* = b')\) for every \(b\in B\) and \(b' \in B'\).
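
For finite output distributions, the independence property itself is easy to check numerically. The following helper is for intuition only; the list-of-pairs representation of a distribution is our own illustrative choice:

```python
# Checking probabilistic independence of two variables in a finite distribution,
# represented as a list of (state, probability) pairs.
from collections import defaultdict

def marginal(mu, var):
    out = defaultdict(float)
    for state, pr in mu:
        out[state[var]] += pr
    return dict(out)

def independent(mu, v, w, tol=1e-9):
    mv, mw = marginal(mu, v), marginal(mu, w)
    joint = defaultdict(float)
    for state, pr in mu:
        joint[(state[v], state[w])] += pr
    return all(abs(joint[(b, c)] - mv[b] * mw[c]) <= tol for b in mv for c in mw)

# Two independent fair coins:
mu = [({'x': b, 'y': c}, 0.25) for b in (True, False) for c in (True, False)]
print(independent(mu, 'x', 'y'))   # True
```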

The following theorem connects independence with f-coupled postconditions. We will self-compose two tagged copies of P, called \(P_1\) and \(P_2\).

Theorem 3

(Independence). Assume a program P and define the relation

$$ Q = \{(s, s_1\oplus s_2) \mid s \in S,\ s_i \in S_i,\ s(v) = s_i(v_i) \text { for all } v\in V^I\} , $$

where \(\oplus \) takes the union of two maps with disjoint domains. Fix some \(v^*, w^* \in V\) with domains \(B,B'\), and assume that for all \(b\in B\), \(b' \in B'\), there exists a function f such that \(\mathsf {cpost}(P,(P_1;P_2),Q,f)\) is contained in

$$ \{ (s', s_1' \oplus s_2') \mid s'(v^*) = b\wedge s'(w^*) = b' \iff s_1'(v^*_1) = b\wedge s_2'(w^*_2) = b' \} . $$

Then, \(w^*,v^*\) are independently distributed in \(\llbracket P\rrbracket (s)\) for all inputs \(s \in S\).

The idea is that under the coupling, the probability of P returning \(v^* = b\wedge w^* = b'\) is the same as the probability of \(P_1\) returning \(v^* = b\) and \(P_2\) returning \(w^* = b'\), for all values \(b,b'\). Since \(P_1\) and \(P_2\) are two independent executions of P by construction, this establishes independence of \(v^*\) and \(w^*\).

4 Constraint-Based Formulation of Proof Rules

In Sect. 3, we formalized the problem of constructing a coupling proof using f-coupled postconditions. We now automatically find such proofs by posing the problem as a constraint, where a solution gives a function f establishing our desired property.

4.1 Generating Logical and Probabilistic Constraints

Logical Encoding. We first encode program executions as formulas in first-order logic, using the following encoding function:

$$\begin{aligned} \mathsf {enc}(v \leftarrow exp )&\triangleq v = exp \\ \mathsf {enc}(v \sim dexp )&\triangleq true \\ \mathsf {enc}(\texttt {if } \;\, bexp \;\,\texttt { then }\;\, P\;\, \texttt { else }\;\, P')&\triangleq ( bexp \Rightarrow \mathsf {enc}(P)) \wedge (\lnot bexp \Rightarrow \mathsf {enc}(P'))\\ \mathsf {enc}(P;P')&\triangleq \mathsf {enc}(P) \wedge \mathsf {enc}(P') \end{aligned}$$

We assume a direct correspondence between expressions in our language and the first-order theory used for our encoding, e.g., linear arithmetic. Note that the encoding disregards probabilistic assignments, encoding them as \( true \); this mimics the semantics of our strongest postcondition operator \(\mathsf {post}\). Probabilistic assignments will be handled via a separate encoding of f-couplings.
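
The encoding can be implemented directly over the same tuple-based program representation sketched in Sect. 3.3, with expressions now mapping an environment of solver variables to solver terms. The following is an illustrative sketch using Z3's Python API, not the encoding of the actual tool:

```python
# A sketch of `enc`, producing a Z3 formula; probabilistic assignments become true.
from z3 import And, Implies, Not, BoolVal, Bools

def enc(stmt, env):
    kind = stmt[0]
    if kind == 'assign':                      # ('assign', v, exp)
        _, v, exp = stmt
        return env[v] == exp(env)             # relies on the ssa assumption
    if kind == 'sample':                      # probabilistic assignment
        return BoolVal(True)
    if kind == 'if':                          # ('if', bexp, P, P')
        _, bexp, p_then, p_else = stmt
        return And(Implies(bexp(env), enc(p_then, env)),
                   Implies(Not(bexp(env)), enc(p_else, env)))
    if kind == 'seq':                         # ('seq', P, P')
        _, p1, p2 = stmt
        return And(enc(p1, env), enc(p2, env))
    raise ValueError(f'unknown statement kind: {kind}')

# Example 4's program: x ~ bern(0.5); y <- not x
x, y = Bools('x y')
prog = ('seq', ('sample', 'x', 'bern(0.5)'),
               ('assign', 'y', lambda e: Not(e['x'])))
print(enc(prog, {'x': x, 'y': y}))            # And(True, y == Not(x))
```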

As expected, enc reflects the strongest postcondition post.

Lemma 1

Let P be a program and let \(\rho \) be any assignment of the variables. An assignment \(\rho '\) agreeing with \(\rho \) on all input variables \(V^I\) satisfies the constraint \(\mathsf {enc}(P)[\rho '/V]\) precisely when \(\mathsf {post}(P, \{\rho \}) = \{ \rho ' \}\), treating \(\rho ,\rho '\) as program states.

Uniformity Constraints. We can encode the conditions in Theorem 2 for showing uniformity as a logical constraint. For a program P and a copy \(P_1\), with first statements \(\varvec{v} \sim dexp \) and \(\varvec{v}_1 \sim dexp _1\), we define the constraints:

$$\begin{aligned} \forall b, b' \in B.\ \exists f.\ \forall V\cup V_1.\quad&\big (V^I = V^I_1 \wedge \mathsf {enc}(P) \wedge \mathsf {enc}(P_1) \wedge \varvec{v}_1 = f(\varvec{v})\big ) \implies \big (v^* = b \iff v^*_1 = b'\big )&(3)\\&\ V^I = V^I_1 \implies dexp \leftrightsquigarrow ^{f} dexp _1&(4) \end{aligned}$$

Note that this is a second-order formula, as it quantifies over the uninterpreted function f. The left side of the implication in Formula 3 encodes an f-coupled execution of P and \(P_1\), starting from equal initial states. The right side of this implication encodes the conditions for uniformity, as in Theorem 2.

Formula 4 ensures that there is an f-coupling between \( dexp \) and \( dexp _1\) for any initial state; recall that \( dexp \) may mention input variables \(V^I\). The constraint \( dexp \leftrightsquigarrow ^{f} dexp _1\) is not a standard logical constraint—intuitively, it is satisfied by an interpretation of f, \( dexp \), and \( dexp _1\) exactly when there is an f-coupling between the two interpreted distributions.

Example 3

The constraint

$$\begin{aligned} \exists f.\ \forall p, p'.\ p = p' \implies \mathsf {bern}(p) \leftrightsquigarrow ^{f} \mathsf {bern}(p') \end{aligned}$$

holds by setting f to the identity function \(\text {id}\), since for any \(p = p'\) we have an f-coupling \(\mathsf {bern}(p) \leftrightsquigarrow ^{\text {id}} \mathsf {bern}(p')\).

Example 4

Consider the program \(x \sim \mathsf {bern}(0.5); y = \lnot x\). The constraints for uniformity of y are

$$\begin{aligned} \forall b, b' \in \mathbb {B}.\ \exists f.\ \forall x, y, x_1, y_1.\quad&\big (y = \lnot x \wedge y_1 = \lnot x_1 \wedge x_1 = f(x)\big ) \implies \big (y = b \iff y_1 = b'\big )\\&\ \mathsf {bern}(0.5) \leftrightsquigarrow ^{f} \mathsf {bern}(0.5) \end{aligned}$$

Since there are no input variables, \(V^I= V^I_1\) is equivalent to \( true \).

Theorem 4

(Uniformity constraints). Fix a program P and variable \(v^* \in V\). Let \(\varphi \) be the uniformity constraints in Formulas 3 and 4. If \(\varphi \) is valid, then \(v^*\) is uniformly distributed in \(\llbracket P\rrbracket (s)\) for all \(s \in S\).

Independence Constraints. Similarly, we can characterize independence constraints using the conditions in Theorem 3. After transforming the program \(P_1; P_2\) to start with the single probabilistic assignment statement \(\varvec{v}_{1,2} \sim dexp _{1,2}\), combining probabilistic assignments in \(P_1\) and \(P_2\), we define the constraints:

$$\begin{aligned} \forall b \in B, b' \in B'.\ \exists f.\ \forall V\cup V_1\cup V_2.\quad&\Big (\textstyle \bigwedge _{v \in V^I}(v = v_1 \wedge v = v_2) \wedge \mathsf {enc}(P) \wedge \mathsf {enc}(P_1; P_2) \wedge \varvec{v}_{1,2} = f(\varvec{v})\Big ) \implies \big (v^* = b \wedge w^* = b' \iff v^*_1 = b \wedge w^*_2 = b'\big )&(5)\\&\ \textstyle \bigwedge _{v \in V^I}(v = v_1 \wedge v = v_2) \implies dexp \leftrightsquigarrow ^{f} dexp _{1,2}&(6) \end{aligned}$$

Theorem 5

(Independence constraints). Fix a program P and two variables \(v^*,w^* \in V\). Let \(\varphi \) be the independence constraints from Formulas 5 and 6. If \(\varphi \) is valid, then \(v^*,w^*\) are independent in \(\llbracket P\rrbracket (s)\) for all \(s \in S\).

4.2 Constraint Transformation

To solve our constraints, we transform them into the form \(\exists f.\ \forall X.\ \varphi \), where \(\varphi \) is a first-order formula. Such formulas can be viewed as synthesis problems, and are often solvable automatically using standard techniques.

We perform our transformation in two steps. First, we transform our constraint into the form \(\exists f.\ \forall X.\ \varphi _p\), where \(\varphi _p\) still contains the coupling constraint. Then, we replace the coupling constraint with a first-order formula by logically encoding primitive distributions as uninterpreted functions.

Quantifier Reordering. Our constraints are of the form \(\forall a, a'.\ \exists f.\ \forall X.\ \varphi \). Intuitively, this means that for every possible value of \(a,a'\), we want one function f satisfying \(\forall X.\ \varphi \). We can pull the existential quantifier \(\exists f\) to the outermost level by extending the function with additional parameters for \(a,a'\), thus defining a different function for every interpretation of \(a,a'\). For the uniformity constraints this transformation yields the following formulas:

$$\begin{aligned} \exists g.\ \forall b, b' \in B.\ \forall V\cup V_1.\quad&\big (V^I = V^I_1 \wedge \mathsf {enc}(P) \wedge \mathsf {enc}(P_1) \wedge \varvec{v}_1 = g(b, b', \varvec{v})\big ) \implies \big (v^* = b \iff v^*_1 = b'\big )&(7)\\&\ V^I = V^I_1 \implies dexp \leftrightsquigarrow ^{g(b,b',-)} dexp _1&(8) \end{aligned}$$

where \(g(b,b',-)\) is the function obtained by partially applying g to \(b\) and \(b'\).

Transforming Coupling Constraints. Our next step is to eliminate coupling constraints. To do so, we use the definition of f-coupling, which states that \(\mu _1 \leftrightsquigarrow ^{f} \mu _2\) if (i) f is injective and (ii) \(\mu _1(b) \leqslant \mu _2(f(b))\) for all b. The first constraint (injectivity) is straightforward. For the second point (monotonicity), we can encode distribution expressions—which represent functions to reals—as uninterpreted functions, which we then further constrain. For instance, the coupling constraint \(\mathsf {bern}(p) \leftrightsquigarrow ^{f} \mathsf {bern}(p')\) can be encoded as

$$\begin{aligned}&\forall x, y.\ f(x) = f(y) \implies x = y\\&\forall x.\ h(x) \leqslant h'(f(x))\\&h( true ) = p \wedge h( false ) = 1 - p\\&h'( true ) = p' \wedge h'( false ) = 1 - p' \end{aligned}$$

where \(h, h': \mathbb {B} \rightarrow \mathbb {R}^{\ge 0}\) are uninterpreted functions representing the probability mass functions of \(\mathsf {bern}(p)\) and \(\mathsf {bern}(p')\); note that the third constraint encodes the distribution \(\mathsf {bern}(p)\), which returns \( true \) with probability p and false with probability \(1-p\), and the fourth constraint encodes \(\mathsf {bern}(p')\).

Note that if we cannot encode the definition of the distribution in our first-order theory (e.g., if it requires non-linear constraints), or if we do not have a concrete description of the distribution, we can simply elide the last two constraints and under-constrain h and \(h'\). In Sect. 6 we use this feature to prove properties of a program encoding a Bayesian network, where the primitive distributions are unknown program parameters.
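
As an illustration, this encoding could be written with Z3's Python API roughly as follows. The variable names are ours and the sketch is not the actual tool's encoding; dropping the last two conjuncts leaves the distributions under-constrained, as described above:

```python
# Sketch: first-order encoding of the coupling constraint bern(p) <~>^f bern(p'),
# with f, h, h' left as uninterpreted functions.
from z3 import Function, BoolSort, RealSort, Bools, Reals, ForAll, Implies, And, BoolVal

B, R = BoolSort(), RealSort()
f  = Function('f', B, B)          # the coupling function to be synthesized
h  = Function('h', B, R)          # pmf of bern(p)
h1 = Function('h1', B, R)         # pmf of bern(p')
p, p1 = Reals('p p1')
x, y = Bools('x y')
T, F = BoolVal(True), BoolVal(False)

injective = ForAll([x, y], Implies(f(x) == f(y), x == y))
monotone  = ForAll([x], h(x) <= h1(f(x)))
pmf_h     = And(h(T) == p,  h(F) == 1 - p)     # elide these two conjuncts to leave
pmf_h1    = And(h1(T) == p1, h1(F) == 1 - p1)  # the distributions unconstrained
coupling  = And(injective, monotone, pmf_h, pmf_h1)
print(coupling)
```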

Theorem 6

(Transformation soundness). Let \(\varphi \) be the constraints generated for some program P. Let \(\varphi '\) be the result of applying the above transformations to \(\varphi \). If \(\varphi '\) is valid, then \(\varphi \) is valid.

Constraint Solving. After performing these transformations, we finally arrive at constraints of the form \(\exists f.\ \forall X.\ \varphi \), where \(\varphi \) is a first-order formula. These exactly match constraint-based program synthesis problems. In Sect. 6, we use smt solvers and enumerative synthesis to handle these constraints.

5 Dealing with Loops

So far, we have only considered loop-free programs. In this section, we extend our approach to programs with loops.

f-Coupled Postconditions and Loops. We consider programs of the form

$$\begin{aligned} \textsf {while }\;\, bexp \; P^{b }\end{aligned}$$

where \(P^{b }\) is a loop-free program that begins with the statement \(\varvec{v}\sim dexp \); our technique can also be extended to handle nested loops. We assume all programs terminate with probability 1 for any initial state; there are numerous systems for verifying this basic property automatically (see, e.g., [15,16,17]). To extend our f-coupled postconditions, we let \(\mathsf {cpost}(P, P', Q, f)\) be the smallest set I satisfying:

$$\begin{aligned} Q \,\cup \, \mathsf {cpost}(P^{b }, P'^{b }, I_{en }, f) \ \subseteq \ I , \end{aligned}$$

where \(I_{en } \triangleq \{ (s,s') \in I \mid s( bexp ) = true \}\).

Intuitively, the set I is the least inductive invariant for the two coupled programs running with synchronized loops. Theorem 1, which establishes that f-coupled postconditions result in couplings over output distributions, naturally extends to a setting with loops.

Constraint Generation. To prove uniformity, we generate constraints much like the loop-free case except that we capture the invariant I, modeled as a relation over the variables of both programs, using a Constrained Horn-Clause (chc) encoding. As is standard, we use \(V', V_1'\) to denote primed copies of program variables denoting their value after executing the body, and we assume that \(\mathsf {enc}(P^b)\) encodes a loop-free program as a transition relation from states over V to states over \(V'\).

$$\begin{aligned} \forall b, b' \in B.\ \exists f, I.\ \forall V\cup V_1\cup V'\cup V_1'.\quad&V^I = V^I_1 \implies I(V, V_1)\\&I(V, V_1) \wedge bexp \wedge bexp _1 \wedge \varvec{v}_1' = f(\varvec{v}') \wedge \mathsf {enc}(P^{b }) \wedge \mathsf {enc}(P^{b }_1) \implies I(V', V_1')\\&I(V, V_1) \implies ( bexp \iff bexp _1)\\&I(V, V_1) \wedge bexp \implies dexp \leftrightsquigarrow ^{f} dexp _1\\&I(V, V_1) \wedge \lnot bexp \implies (v^* = b \iff v^*_1 = b') \end{aligned}$$

The first three constraints encode the definition of \(\mathsf {cpost}\); the last two ensure that f constructs a coupling and that the invariant implies the uniformity condition when the loop terminates. Using the technique presented in Sect. 4.2, we can transform these constraints into the form \(\exists f, I.\ \forall X.\ \varphi \). That is, in addition to discovering the function f, we need to discover the invariant I.

Proving independence in looping programs poses additional challenges, as directly applying the self-composition construction from Sect. 3 requires relating a single loop with two loops. When the number of loop iterations is deterministic, however, we may simulate two sequentially composed loops with a single loop that interleaves the iterations (known as synchronized or cross product [4, 29]) so that we reduce the synthesis problem to finding a coupling for two loops.

6 Implementation and Evaluation

We now discuss our implementation and five case studies used for evaluation.

Fig. 2. Case study programs

Implementation. To solve formulas of the form \(\exists f.\ \forall X.\ \varphi \), we implemented a simple solver using a guess-and-check loop: We iterate through various interpretations of f, insert them into the formula, and check whether the resulting formula is valid. In the simplest case, we are searching for a function f from n-tuples to n-tuples. For instance, in Sect. 2.2, we discovered the function \(f(x,y) = (y,x)\). Our implementation is parameterized by a grammar defining an infinite set of interpretations of f, which involves permuting the arguments (as above), conditionals, and other basic operations (e.g., negation for Boolean variables). For checking validity of a given f, we use the Z3 smt solver [19] for loop-free programs. For loops, we use an existing constrained-Horn-clause solver based on the MathSAT smt solver [18].
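
A toy version of this guess-and-check loop, specialized to the fairCoin coupling from Sect. 2.2, might look as follows. Since the sampled domain is finite here, validity is checked by enumeration rather than by an smt solver, and the candidate grammar is hard-coded; both are simplifications of the actual implementation:

```python
# A toy guess-and-check loop: enumerate candidate interpretations of f and
# check the verification condition; the real solver checks validity with Z3/CHC.
def guess_and_check(candidates, build_vc):
    for name, f in candidates.items():
        if all(build_vc(f)):
            return name
    return None

BOOLS = (True, False)

def build_vc(f):
    # Conditions from Sect. 2.2: the coupled samples keep the loop guards equal,
    # and on exit the two copies of x take opposite values.
    for a in BOOLS:
        for b in BOOLS:
            a2, b2 = f(a, b)
            yield (a == b) == (a2 == b2)        # synchronized guards
            if a != b:
                yield a == (not a2)             # opposite exit values

candidates = {'id':    lambda a, b: (a, b),
              'swap':  lambda a, b: (b, a),
              'neg_x': lambda a, b: (not a, b),
              'neg_y': lambda a, b: (a, not b)}
print(guess_and_check(candidates, build_vc))    # 'swap'
```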

Benchmarks and Results. As a set of case studies for our approach, we use 5 different programs collected from the literature and presented in Fig. 2. For these programs, we prove uniformity, (conditional) independence properties, and other probabilistic equalities. For instance, we use our implementation to prove a main lemma for the Ballot theorem [20], encoded as the program ballot.

Figure 3 shows the time and number of loop iterations required by our implementation to discover a coupling proof. The small number of iterations and time needed demonstrates the simplicity of the discovered proofs. For instance, the ballot theorem was proved in 3 s and only 4 iterations, while the fairCoin example (illustrated in Sect. 2.2) required only two iterations and 1.4 s. In all cases, the size of the synthesized function f, in terms of the depth of its ast, is no more than 4. We describe these programs and properties in a bit more detail.

Case Studies: Uniformity ( fairCoin ,  fairDie). The first two programs produce uniformly random values. Our approach synthesizes a coupling proof certifying uniformity for both of these programs. The first program fairCoin, which we saw in Sect. 2.2, produces a fair coin flip given access to biased coin flips by repeatedly flipping two coins while they are equal, and returning the result of the first coin as soon as the flips differ. Note that the bias of the coin flip is a program parameter, and not fixed statically. The synthesized coupling swaps the result of the two samples, mapping the values of (x, y) to (y, x).

Fig. 3. Statistics

The second program fairDie gives a different construction for simulating a roll of a fair die given fair coin flips. Three fair coins are repeatedly flipped as long as they are all equal; the returned triple is the binary representation of a number in \(\{ 1, \dots , 6 \}\), the result of the simulated roll. The synthesized coupling is a bijection on triples of booleans \(\mathbb {B} \times \mathbb {B} \times \mathbb {B}\); fixing any two possible output triples \((b_1, b_2, b_3)\) and \((b_1', b_2', b_3')\) of distinct booleans, the coupling maps \((b_1, b_2, b_3) \mapsto (b_1', b_2', b_3')\) and vice versa, leaving all other triples unchanged.
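
The checks for such a transposition coupling are again straightforward; below is an illustrative sketch with two arbitrarily chosen output triples (the choice of b and b2 is ours):

```python
# Sketch: the transposition described above is injective and preserves the
# uniform pmf on boolean triples, so it induces an f-coupling; it also maps
# the event "output = b" to "output = b2".
from itertools import product

triples = list(product((True, False), repeat=3))
pmf = {t: 1 / len(triples) for t in triples}            # uniform product of fair coins
b, b2 = (True, False, False), (False, True, False)      # two (not-all-equal) outputs

def f(t):
    return b2 if t == b else b if t == b2 else t        # swap b and b2, fix the rest

assert len({f(t) for t in triples}) == len(triples)     # injective
assert all(pmf[t] <= pmf[f(t)] for t in triples)        # monotone (trivially: uniform)
assert f(b) == b2 and f(b2) == b
```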

Case Studies: Independence ( noisySum , bayes). In the next two programs, our approach synthesizes coupling proofs of independence and conditional independence of program variables in the output distribution. The first program, noisySum, is a stylized program inspired by privacy-preserving algorithms that sum a series of noisy samples; for giving accuracy guarantees, it is often important to show that the noisy draws are probabilistically independent. We show that any pair of samples are independent.

The second program, bayes, models a simple Bayesian network with three independent variables x, y, z and two dependent variables w and \(w'\), computed from (x, y) and (y, z) respectively. We want to show that w and \(w'\) are independent conditioned on any value of y; intuitively, w and \(w'\) only depend on each other through the value of y, and are independent otherwise. We use a constraint encoding similar to the encoding for showing independence to find a coupling proof of this fact. Note that the distributions \(\mu , \mu ', \mu ''\) of x, y, z are unknown parameters, and the functions f and g are also uninterpreted. This illustrates the advantage of using a constraint-based technique—we can encode unknown distributions and operations as uninterpreted functions.

Case Studies: Probabilistic Equalities ( ballot). As we mentioned in Sect. 1, our approach extends naturally to proving general probabilistic equalities beyond uniformity and independence. To illustrate, we consider a lemma used to prove Bertrand’s Ballot theorem [20]. Roughly speaking, this theorem considers counting ballots one-by-one in an election where there are \(n_A\) votes cast for candidate A and \(n_B\) votes cast for candidate B, where \(n_A, n_B\) are parameters. If \(n_A > n_B\) (so A is the winner) and votes are counted in a uniformly random order, the Ballot theorem states that the probability that A leads throughout the whole counting process—without any ties—is precisely \((n_A - n_B) / (n_A + n_B)\).

One way of proving this theorem, sometimes called André’s reflection principle, is to show that the probability of counting the first vote for A and reaching a tie is equal to the probability of counting the first vote for B and reaching a tie. We simulate the counting process slightly differently—instead of drawing a uniform order to count the votes, our program draws uniform samples for votes—but the original target property is equivalent to the equality

$$\begin{aligned} \Pr [ first _1 = 0 \wedge tie _1 \wedge \psi (x_{A1}, x_{B1}) ] = \Pr [ first _2 = 1 \wedge tie _2 \wedge \psi (x_{A2}, x_{B2}) ] \end{aligned}$$
(9)

where \(\psi (x_{Ai}, x_{Bi})\) denotes \(x_{Ai} = n_A \wedge x_{Bi} = n_B\). Our approach synthesizes a coupling and loop invariant showing that the coupled postcondition is contained in

$$\begin{aligned} \{ (s_1, s_2) \mid s_1( first = 0 \wedge tie \wedge \psi (x_A, x_B)) \iff s_2( first = 1 \wedge tie \wedge \psi (x_A, x_B)) \}, \end{aligned}$$

giving Formula (9) by Proposition 1 (see Barthe et al. [6] for more details).

7 Related Work

Probabilistic programs have been a long-standing target of formal verification. We compare with two of the most well-developed lines of research: probabilistic model checking and deductive verification via program logics or expectations.

Probabilistic Model Checking. Model checking has proven to be a powerful tool for verifying probabilistic programs, capable of automated proofs for various probabilistic properties (typically encoded in probabilistic temporal logics); there are now numerous mature implementations (see, e.g., [21] or [3, Chap. 10] for more details). In comparison, our approach has the advantage of being fully constraint-based. This gives it a number of unique features: (i) it applies to programs with unknown inputs and variables over infinite domains; (ii) it applies to programs sampling from distributions with parameters, or even ones sampling from unknown distributions modeled as uninterpreted functions in first-order logic; (iii) it applies to distributions over infinite domains; and (iv) the generated coupling proofs are compact. At the same time, our approach is specialized to the coupling proof technique and is likely to be more incomplete.

Deductive Verification. Compared to general deductive verification systems for probabilistic programs, like program logics [5, 14, 22, 26] or techniques reasoning by pre-expectations [25], the main benefit of our technique is automation—deductive verification typically requires an interactive theorem prover to manipulate complex probabilistic invariants. In general, the coupling proof method limits reasoning about probabilities and distributions to just the random sampling commands; in the rest of the program, the proof can avoid quantitative reasoning entirely. As a result, our system can work with non-probabilistic invariants and achieve full automation. Our approach also smoothly handles properties involving the probabilities of multiple events, like probabilistic independence, unlike techniques that analyze probabilistic events one-by-one.