# Gradual Consistency Checking

## Abstract

We address the problem of checking that computations of a shared memory implementation (with write and read operations) adhere to some given consistency model. It is known that checking conformance to Sequential Consistency (SC) for a given computation is NP-hard, and the same holds for checking Total Store Order (TSO) conformance. This poses a serious issue for the design of scalable verification or testing techniques for these important memory models. In this paper, we tackle this issue by providing an approach that avoids systematically hitting the worst-case complexity. The idea is to consider, as an intermediary step, the problem of checking weaker criteria that are as strong as possible while still being checkable in polynomial time (in the size of the computation). The criteria we consider are new variations of causal consistency suitably defined for our purpose. The advantage of our approach is that in many cases (1) it can catch violations of SC/TSO early using these weaker, efficiently checkable criteria, and (2) when a computation is causally consistent (according to our newly defined criteria), the work done to establish this fact significantly simplifies the work required for checking SC/TSO conformance. We have implemented our algorithms and carried out several experiments on realistic cache-coherence protocols, showing the efficiency of our approach.

## 1 Introduction

This paper addresses the problem of checking whether a given implementation of a shared memory offers the expected consistency guarantees to its clients, which are concurrent programs composed of several threads running in parallel. Indeed, users of a memory need to see it as an abstract object that allows performing concurrent reads and writes over a set of variables and conforms to some *memory model* defining the valid visible sequences of such operations. Various memory models can be considered in this context. Sequential Consistency (SC) [24] is the model where operations can be seen as atomic, executing according to some interleaving of the operations issued by the different threads, while preserving the order in which these operations were issued by each of the threads. This fundamental model offers strong consistency in the sense that each write operation, when it is issued by a thread, is immediately visible to all the other threads. Other weaker memory models are adopted in order to meet performance and/or availability requirements in concurrent/distributed systems. One of the most widely used models in this context is Total Store Order (TSO) [29]. In this model, writes can be delayed, which means that after a write is issued, it is not immediately visible to all threads (except for the thread that issued it), and it is committed later after some arbitrary delay. However, writes issued by the same thread are committed in the same order as they were issued, and when a write is committed it becomes visible to all the other threads simultaneously. TSO is implemented in hardware but also in a distributed context over a network [22].

Implementing shared memories that are both highly performant and correct with respect to a given memory model is an extremely hard and error prone task. Therefore, checking that a given implementation is indeed correct from this point of view is of paramount importance. In this paper we address the issue of checking that a given execution of a shared memory implementation is consistent, and we consider as consistency criteria the cases of SC and TSO.

Checking SC or TSO conformance is known to be NP-complete [18, 21]. This is due to the fact that in order to justify that the execution is consistent, one has to find a total order between the writes which explains the read operations happening along the computation. It can be proved that, in the worst case, one cannot avoid enumerating all the possible total orders between writes. The situation is different for other weaker criteria such as Causal Consistency (CC) and its different variations, which have been shown to be checkable in polynomial time (in the size of the computation) [6]. In fact, CC imposes fewer constraints than SC/TSO on the order between writes, and the way it imposes these constraints is “deterministic”, in the sense that they can be derived from the history of the execution by applying a least fixpoint computation (which can be encoded, for instance, as a standard DATALOG program). All these complexity results hold under the assumption that each value is written at most once, which is without loss of generality for implementations which are data-independent [31], i.e., their behavior doesn’t depend on the concrete values read or written in the program. Indeed, any buggy behavior of such implementations can be exposed in executions satisfying this assumption ^{1}.

The intrinsic hardness of the problem of checking SC/TSO poses a crucial issue for the design of scalable verification or testing techniques for these important consistency models. Tackling this issue requires the development of practical approaches that can work well (with polynomial complexity) when the problem instance does not exhibit the worst-case (exponential) complexity.

The purpose of this paper is to propose such an approach. The idea is to reduce the amount of “nondeterminism” in searching for the write orders in order to establish SC/TSO conformance. For that, our approach for SC is to consider a weaker consistency model called CCM (for Convergent Causal Memory), that is “as strong as possible” while being polynomial time checkable. In fact CCM is stronger than both causal memory [2, 26] (CM) and causal convergence [7] (CCv), two other well-known variations of causal consistency. Then, if CCM is already violated by the given computation, we can conclude that the computation does not satisfy the stronger criterion SC. Here the hope is that in practice many computations violating SC can already be caught at this stage using a polynomial time check. Now, in the case that the computation does not violate CCM, we exploit the fact that establishing CCM already imposes a set of constraints on the order between writes. We show that these constraints form a partial order which *must* be a subset of any total write order that would witness SC conformance. Therefore, at this point, it is enough to find an extension of this partial write order, and the hope is that in many practical cases, this set of constraints is already large enough, leaving only a small number of pairs of writes to be ordered in order to check SC conformance. For the case of TSO, we proceed in the same way, but we consider a different intermediary polynomial time checkable criterion called *weak* CCM (wCCM). This is due to the fact that some causality constraints need to be relaxed in order to take into account the program order relaxations of TSO, which allow reads to overtake writes. The definitions of the new criteria CCM and wCCM we use in our approach are quite subtle. Ensuring that these criteria are “as strong as possible” by including all possible order constraints on pairs of writes that can be computed (in polynomial time) using a least fixpoint calculation, while still ensuring that they are weaker than SC/TSO, and proving this fact, is not trivial.

As a proof of concept, we implemented our approach for checking SC/TSO and applied it to executions extracted from realistic cache coherence protocols within the Gem5 simulator [5] in system emulation mode. This evaluation shows that our approach scales better than a direct encoding of the axioms defining SC and TSO [3] into boolean satisfiability. We also show that the partial order of writes imposed by the stronger criteria CCM and wCCM leaves only a small percentage of writes unordered (6.6% on average) when the executions are valid, and that most SC/TSO violations are also CCM/wCCM violations.

## 2 Sequential Consistency and TSO

The set of read, resp., write, operations in a set of operations *O* is denoted by \(\mathbb {R}({O})\), resp., \(\mathbb {W}({O})\). The variable accessed by an operation *o* is denoted by \(\mathsf {var}({o})\).

Consistency criteria like SC or TSO are formalized on an abstract view of an execution called *history*. A history includes a set of write or read operations ordered according to a (partial) *program order* \(\mathsf {po}\) which orders operations issued by the same thread. Most often, \(\mathsf {po}\) is a union of sequences, each sequence containing all the operations issued by some thread. We also assume that the history includes a *write-read* relation which identifies the write operation writing the value returned by each read in the execution. Such a relation can be extracted easily from executions where each value is written at most once. Since shared-memory implementations (or cache coherence protocols) are data-independent [31] in practice, i.e., their behavior doesn’t depend on the concrete values read or written in the program, any potential buggy behavior can be exposed in such executions.

### Definition 1

A *history* \({\left\langle {O, \mathsf {po}, \mathsf {wr}_{}}\right\rangle }\) is a set of operations *O* along with a strict partial *program order* \(\mathsf {po}\) and a *write-read* relation \(\mathsf {wr}_{}\subseteq \mathbb {W}({O})\times \mathbb {R}({O})\), such that the inverse of \(\mathsf {wr}_{}\) is a total function and if \((\textsf {write}_{}({{x}},{{v}}),\textsf {read}_{}({{x}'},{{v}'}))\in \mathsf {wr}_{}\), then \({x}={x}'\) and \({v}={v}'\).

We assume that every history includes a write operation writing the initial value of variable \({x}\), for each variable \({x}\). These write operations precede all other operations in \(\mathsf {po}\). We use \({h}\), \({h}_1\), \({h}_2\), \(\ldots \) to range over histories.
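Under the data-independence assumption, the write-read relation can be extracted mechanically from any execution in which every value is written at most once. A minimal sketch (the tuple encoding of operations is our own, not the paper's):

```python
# A hypothetical encoding of histories (illustrative, not from the paper):
# each operation is a tuple (kind, variable, value, thread, index), with
# kind in {"w", "r"}. Because every value is written at most once, the
# write-read relation is recovered by matching each read of (x, v) to the
# unique write of (x, v).

def extract_wr(operations):
    """Build the write-read relation of a differentiated history."""
    source = {}  # (variable, value) -> the unique write of that value
    for op in operations:
        if op[0] == "w":
            key = (op[1], op[2])
            assert key not in source, "each value must be written at most once"
            source[key] = op
    return {(source[(op[1], op[2])], op) for op in operations if op[0] == "r"}

# Thread 0 writes x=1; thread 1 reads x and sees 1.
ops = [("w", "x", 1, 0, 0), ("r", "x", 1, 1, 0)]
assert extract_wr(ops) == {(("w", "x", 1, 0, 0), ("r", "x", 1, 1, 0))}
```

The inverse of the resulting relation is a total function, as Definition 1 requires, precisely because each value has a unique writing operation.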

We now define the SC and TSO memory models (we use the same definitions as in the formal framework developed by Alglave et al. [3]). Given a *history* \({h}={\left\langle {O, \mathsf {po}, \mathsf {wr}_{}}\right\rangle }\) and a variable \({x}\), a *store order on* \({x}\) is a strict total order \(\mathsf {ww}_{{x}}\) on the write operations \(\textsf {write}_{\_}({{x}},{\_})\) in *O*. A *store order* is a union of store orders \(\mathsf {ww}_{{x}}\), one for each variable \({x}\) used in \({h}\). A *history* \({\left\langle {O, \mathsf {po}, \mathsf {wr}_{}}\right\rangle }\) is *sequentially consistent* (SC, for short) if there exists a *store order* \(\mathsf {ww}_{}\) such that \(\mathsf {po}\cup \mathsf {wr}_{}\cup \mathsf {ww}_{}\cup \mathsf {rw}\) is acyclic. The *read-write* relation \(\mathsf {rw}\) is defined by \(\mathsf {rw}=\mathsf {wr}_{}^{-1}\circ \mathsf {ww}_{}\) (where \(\circ \) denotes the standard relation composition).
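For a fixed store order, the SC condition above is a plain acyclicity test over a union of relations. The following sketch (with an illustrative tuple encoding of operations, not the paper's notation) checks one candidate store order for the classic store-buffering history, where both threads read the initial values:

```python
# Operations encoded as (kind, variable, value); every value is written at
# most once. Initial writes of 0 precede all other operations in po.
wi_x, wi_y = ("w", "x", 0), ("w", "y", 0)
w_x1, w_y1 = ("w", "x", 1), ("w", "y", 1)
r_y0, r_x0 = ("r", "y", 0), ("r", "x", 0)

po = {(wi_x, w_x1), (wi_y, w_y1),          # per-thread order
      (w_x1, r_y0), (w_y1, r_x0),
      (wi_x, w_y1), (wi_x, r_y0), (wi_x, r_x0),
      (wi_y, w_x1), (wi_y, r_y0), (wi_y, r_x0)}
wr = {(wi_y, r_y0), (wi_x, r_x0)}          # both reads return initial values
ww = {(wi_x, w_x1), (wi_y, w_y1)}          # one candidate store order

# rw = wr^{-1} composed with ww
inv_wr = {(r, w) for (w, r) in wr}
rw = {(r, w2) for (r, w1) in inv_wr for (w1b, w2) in ww if w1 == w1b}

def acyclic(edges):
    """Acyclicity test via least-fixpoint transitive closure."""
    closure = set(edges)
    while True:
        new = {(a, c) for (a, b) in closure for (b2, c) in closure if b == b2}
        if new <= closure:
            return all(a != b for (a, b) in closure)
        closure |= new

# Cycle: w_x1 -> r_y0 (po) -> w_y1 (rw) -> r_x0 (po) -> w_x1 (rw).
assert not acyclic(po | wr | ww | rw)
```

The other three store orders fail analogously, so no witness exists and this history is not SC; a full checker would have to enumerate all of them, which is the source of the NP-hardness.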

The definition of TSO relies on three additional relations: (1) the \(\mathsf {ppo}\) relation which excludes from the program order pairs formed of a write and respectively, a read operation, i.e., \(\mathsf {ppo}=\mathsf {po}\setminus (\mathbb {W}({O})\times \mathbb {R}({O}))\), (2) the \(\mathsf {po}\text {-}\mathsf {loc}\) relation which is a restriction of \(\mathsf {po}\) to operations accessing the same variable, i.e., \(\mathsf {po}\text {-}\mathsf {loc}=\mathsf {po}\cap \{(o,o')\mid \mathsf {var}({o})=\mathsf {var}({o'})\}\), and (3) the write-read external relation \(\mathsf {wr}_e\) which is a restriction of the write-read relation to pairs of operations in different threads (not related by program order), i.e., \(\mathsf {wr}_e=\mathsf {wr}_{}\cap \{(o,o')\mid (o,o')\not \in \mathsf {po} \text{ and } (o',o)\not \in \mathsf {po}\}\). Then, we say that a history satisfies TSO if there exists a *store order* \(\mathsf {ww}_{}\) such that \(\mathsf {po}\text {-}\mathsf {loc}\cup \mathsf {wr}_e\cup \mathsf {ww}_{}\cup \mathsf {rw}\) and \(\mathsf {ppo}\cup \mathsf {wr}_e\cup \mathsf {ww}_{}\cup \mathsf {rw}\) are both acyclic.

Notice that the formal definition of TSO given above is equivalent to the operational model of TSO in which each thread has a store buffer: each write issued by a thread is first sent to its store buffer and committed to the memory later, nondeterministically. To read a value of some variable *x*, a thread first checks whether there is still a write on *x* pending in its own buffer, and in that case it takes the value of the last such write; otherwise it fetches the value of *x* from the memory.
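This operational model can be sketched as a toy interpreter (our own illustration, not the paper's formalization): per-thread FIFO buffers, explicit flush steps standing in for the nondeterministic commits, and buffer-first reads:

```python
from collections import deque

class TSOMachine:
    """Toy TSO semantics: writes are buffered per thread, flushes commit the
    oldest buffered write to memory, reads consult the reader's own buffer
    (most recent write first) before falling back to memory."""

    def __init__(self, nthreads, variables):
        self.mem = {x: 0 for x in variables}
        self.buf = [deque() for _ in range(nthreads)]

    def write(self, t, x, v):
        self.buf[t].append((x, v))        # buffered, invisible to other threads

    def flush(self, t):
        x, v = self.buf[t].popleft()      # commit the oldest pending write
        self.mem[x] = v

    def read(self, t, x):
        for y, v in reversed(self.buf[t]):  # own buffer, most recent first
            if y == x:
                return v
        return self.mem[x]

# Store-buffering litmus test: both reads returning 0 is impossible under SC
# but reachable under TSO when both writes stay buffered past the reads.
m = TSOMachine(2, ["x", "y"])
m.write(0, "x", 1)
m.write(1, "y", 1)
r0, r1 = m.read(0, "y"), m.read(1, "x")
m.flush(0); m.flush(1)
assert (r0, r1) == (0, 0)
```

Interleaving the flushes at different points yields exactly the executions that the axiomatic definition admits.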

## 3 Checking Sequential Consistency

We define an algorithm for checking whether a history satisfies SC which enforces a polynomial-time checkable criterion weaker than SC, a variation of causal consistency, in order to construct a *partial* store order, i.e., one in which not all the writes on the same variable are ordered. This partial store order is then completed until it orders every two writes on the same variable using a standard backtracking enumeration. This approach is efficient when the number of writes that remain to be ordered using the backtracking enumeration is relatively small, a hypothesis confirmed by our experimental evaluation (see Sect. 5).

The variation of causal consistency mentioned above, called *convergent causal memory* (CCM, for short), is stronger than existing variations [6] while still being polynomial-time checkable (and weaker than SC). Its definition uses several relations between read and write operations which are analogous to, or even exactly, the relations used to define those variations. Section 3.1 recalls the existing notions of causal consistency as they are defined in [6] (using the so-called “bad-pattern” characterization introduced in that paper), Sect. 3.2 introduces CCM, while Sect. 3.3 presents our algorithm for checking SC.

### 3.1 Causal Consistency

The weakest of these variations, *weak causal consistency* (CC, for short), requires that any two causally-dependent values are observed in the same order by all threads, where causally-dependent means that either those values were written by the same thread (i.e., the corresponding writes are ordered by \(\mathsf {po}\)), or that one value was written by a thread after reading the other value, or any transitive composition of such dependencies. Values written concurrently by two threads can be observed in any order, and moreover, this order may change over time. A *history* \({\left\langle {O, \mathsf {po}, \mathsf {wr}_{}}\right\rangle }\) satisfies CC if \(\mathsf {po}\cup \mathsf {wr}_{}\cup \mathsf {rw}[{\mathsf {co}}]\) is acyclic, where \(\mathsf {co}=(\mathsf {po}\cup \mathsf {wr}_{})^+\) is called the *causal relation*. The *read-write* relation \(\mathsf {rw}[{\mathsf {co}}]\) induced by the causal relation is defined as follows: for a relation *R* on operations, \(R_{\mathbb {W}\mathbb {W}}\) denotes the projection of *R* on pairs of writes on the same variable. Then,

### Definition 2

The read-write relation \(\mathsf {rw}[{R}]\) induced by a relation *R* is defined by \(\mathsf {rw}[{R}]=\mathsf {wr}_{}^{-1}\circ R_{\mathbb {W}\mathbb {W}}\).

*Causal convergence* (CCv, for short) is a strengthening of CC where concurrent values are required to be observed in the same order by all threads.

A *history* \({\left\langle {O, \mathsf {po}, \mathsf {wr}_{}}\right\rangle }\) satisfies CCv if it satisfies CC and \(\mathsf {po}\cup \mathsf {wr}_{}\cup \mathsf {cf}[{\mathsf {co}}]\) is acyclic, where the *conflict relation* \(\mathsf {cf}\) is defined as follows: for a relation *R* on operations, \(R_{\mathbb {W}\mathbb {R}}\) denotes the projection of *R* on pairs of writes and reads, respectively, on the same variable. Then,

### Definition 3

The conflict relation \(\mathsf {cf}[R]\) induced by a relation *R* is defined by \(\mathsf {cf}[R]=R_{\mathbb {W}\mathbb {R}}\circ \mathsf {wr}_{}^{-1}\).
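Both induced relations (Definitions 2 and 3) are plain relational compositions, so they can be sketched directly as set comprehensions (the tuple encoding of operations is illustrative, not the paper's):

```python
# Operations encoded as (kind, variable, value) tuples; R is any set of
# pairs of operations, wr is the write-read relation.

def rw_induced(R, wr):
    """rw[R] = wr^{-1} o R_WW: a read precedes every write that R-orders
    after the write the read took its value from."""
    R_ww = {(a, b) for (a, b) in R
            if a[0] == "w" and b[0] == "w" and a[1] == b[1]}
    return {(r, w2) for (w1, r) in wr for (w1b, w2) in R_ww if w1 == w1b}

def cf_induced(R, wr):
    """cf[R] = R_WR o wr^{-1}: write w1 is ordered before write w2 when w1
    is R-before a read that takes its value from w2."""
    R_wr = {(a, b) for (a, b) in R
            if a[0] == "w" and b[0] == "r" and a[1] == b[1]}
    return {(w1, w2) for (w1, r) in R_wr for (w2, r2) in wr if r == r2}

# Two writes on x and a read of each value.
w0, w1 = ("w", "x", 0), ("w", "x", 1)
r0, r1 = ("r", "x", 0), ("r", "x", 1)
wr = {(w0, r0), (w1, r1)}
assert rw_induced({(w0, w1)}, wr) == {(r0, w1)}   # r0's source is overwritten by w1
assert cf_induced({(w0, r1)}, wr) == {(w0, w1)}   # w0 is observed before w1
```

Note that both constructions only read off pairs from existing relations; no search is involved, which is why the causal-consistency checks remain polynomial.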

Finally, *causal memory* (CM, for short) is a strengthening of CC where, roughly, concurrent values are required to be observed in the same order by a thread during its entire execution. Differently from CCv, this order can differ from one thread to another. Although this intuitive description seems to imply that CM is weaker than CCv, the two models are actually incomparable. For instance, the history in Fig. 1a is allowed by CM, but not by CCv: reading 1 from *x* in the first thread implies that it observed \(\textsf {write}_{}({{x}},{1})\) after \(\textsf {write}_{}({{x}},{2})\), while reading 2 from *x* in the second thread implies that it observed \(\textsf {write}_{}({{x}},{2})\) after \(\textsf {write}_{}({{x}},{1})\). This is allowed by CM, where different threads can observe concurrent writes in different orders, but not by CCv. Conversely, the history in Fig. 1b is CCv but not CM. It is not allowed by CM because reading the initial value 0 from *z* implies that \(\textsf {write}_{}({{x}},{1})\) is observed after \(\textsf {write}_{}({{x}},{2})\), while reading 2 from \({x}\) implies that \(\textsf {write}_{}({{x}},{2})\) is observed after \(\textsf {write}_{}({{x}},{1})\) (\(\textsf {write}_{}({{x}},{1})\) must have been observed because the same thread reads 1 from \({y}\) and the writes on *x* and *y* are causally related). Under CCv, however, a thread simply reads the most recent value of each variable, and the order in which these values are arranged (using timestamps, for instance) is independent of the order in which variables are read in a thread; e.g., reading 0 from *z* doesn't imply that the timestamp of \(\textsf {write}_{}({{x}},{2})\) is smaller than the timestamp of \(\textsf {write}_{}({{x}},{1})\). The history is thus admitted by CCv, assuming that \(\textsf {write}_{}({{x}},{1})\) is observed before \(\textsf {write}_{}({{x}},{2})\).

The formalization of CM uses a family of happens-before relations, one per operation: for each operation *o* in *h*, let \(\mathsf {hb}_{o}\) be the smallest transitive relation such that:

- 1.
if two operations are causally related, and each one is causally related to *o*, then they are related by \(\mathsf {hb}_{o}\), i.e., \((o_1,o_2)\in \mathsf {hb}_{o}\) if \((o_1,o_2)\in \mathsf {co}\), \((o_1,o)\in \mathsf {co}\), and \((o_2,o)\in \mathsf {co}^*\) (where \(\mathsf {co}^*\) is the reflexive closure of \(\mathsf {co}\)), and

- 2.
two writes \(w_1\) and \(w_2\) are related by \(\mathsf {hb}_{o}\) if \(w_1\) is \(\mathsf {hb}_{o}\)-related to a read taking its value from \(w_2\), and that read is performed by the thread executing *o*, before *o* (this scenario is similar to the definition of the conflict relation above), i.e., \((\textsf {write}_{}({{x}},{{v}}),\textsf {write}_{}({{x}},{{v}'}))\in \mathsf {hb}_{o}\) if \((\textsf {write}_{}({{x}},{{v}}),\textsf {read}_{}({{x}},{{v}'}))\in \mathsf {hb}_{o}\), \((\textsf {write}_{}({{x}},{{v}'}),\textsf {read}_{}({{x}},{{v}'}))\in \mathsf {wr}_{}\), and \((\textsf {read}_{}({{x}},{{v}'}),o)\in \mathsf {po}^*\), for some \(\textsf {read}_{}({{x}},{{v}'})\).

A history \({\left\langle {O, \mathsf {po}, \mathsf {wr}_{}}\right\rangle }\) satisfies CM if it satisfies CC and for each operation *o* in the history, the relation \(\mathsf {hb}_{o}\) is acyclic.

Bouajjani et al. [6] show that the problem of checking whether a history satisfies CC, CCv, or CM is polynomial time. This result is a straightforward consequence of the above definitions, since the union of relations required to be acyclic can be computed in polynomial time from the relations \(\mathsf {po}\) and \(\mathsf {wr}_{}\) which are fixed in a given history. In particular, the union of these relations can be computed by a DATALOG program.
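To make the fixpoint flavor concrete, here is a sketch of the CC check: the causal relation is computed as a least fixpoint (the DATALOG-style transitivity rule), and acyclicity is then decided on the resulting union. The tuple encoding of operations is our own:

```python
def transitive_closure(rel):
    """Least fixpoint of the Datalog-style rule R(a,c) :- R(a,b), R(b,c);
    converges in polynomially many iterations in the size of the history."""
    closure = set(rel)
    while True:
        new = {(a, c) for (a, b) in closure for (b2, c) in closure if b == b2}
        if new <= closure:
            return closure
        closure |= new

def satisfies_cc(po, wr):
    """CC check: po | wr | rw[co] must be acyclic, where co = (po | wr)^+.
    Operations are (kind, variable, value) tuples (our convention)."""
    co = transitive_closure(po | wr)
    co_ww = {(a, b) for (a, b) in co
             if a[0] == "w" and b[0] == "w" and a[1] == b[1]}
    inv_wr = {(r, w) for (w, r) in wr}
    rw_co = {(r, w2) for (r, w1) in inv_wr for (w1b, w2) in co_ww if w1 == w1b}
    whole = transitive_closure(po | wr | rw_co)
    return all(a != b for (a, b) in whole)

# A CC violation: one thread writes x=1 then x=2; another reads 2 then 1,
# i.e., it observes two causally-ordered values in the wrong order.
w1, w2 = ("w", "x", 1), ("w", "x", 2)
r2, r1 = ("r", "x", 2), ("r", "x", 1)
assert not satisfies_cc(po={(w1, w2), (r2, r1)}, wr={(w2, r2), (w1, r1)})
```

Every step is a deterministic computation over the fixed relations \(\mathsf {po}\) and \(\mathsf {wr}\), which is the essence of the polynomial-time argument.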

### 3.2 Convergent Causal Memory

A history \({\left\langle {O, \mathsf {po}, \mathsf {wr}_{}}\right\rangle }\) satisfies *convergent causal memory* (CCM, for short) if \(\mathsf {po}\cup \mathsf {wr}_{}\cup \mathsf {pww}_{}\cup \mathsf {rw}[{\mathsf {pww}_{}}]\) is acyclic, where the *partial store order* \(\mathsf {pww}_{}\) is defined by \(\mathsf {pww}_{}=\mathsf {hb}_{\mathbb {W}\mathbb {W}}\cup \mathsf {cf}[{\mathsf {hb}_{}}]\), with \(\mathsf {hb}_{}\) the union of the happens-before relations \(\mathsf {hb}_{o}\) defined in Sect. 3.1.

As a first result, we show that all the variations of causal consistency in Sect. 3.1, i.e., CC, CCv and CM, are strictly weaker than CCM.

### Lemma 1

If a history satisfies CCM, then it satisfies CC, CCv and CM.

### Proof

Let \({h}={\left\langle {O, \mathsf {po}, \mathsf {wr}_{}}\right\rangle }\) be a history satisfying CCM. By the definition of \(\mathsf {hb}_{}\), we have that \(\mathsf {co}_{\mathbb {W}\mathbb {W}}\subseteq \mathsf {hb}_{\mathbb {W}\mathbb {W}}\). Indeed, any two writes \(o_1\) and \(o_2\) related by \(\mathsf {co}\) are also related by \(\mathsf {hb}_{o_2}\), which by the definition of \(\mathsf {hb}_{}\), implies that they are related by \(\mathsf {hb}_{\mathbb {W}\mathbb {W}}\). Then, by the definition of \(\mathsf {pww}_{}\), we have that \(\mathsf {hb}_{\mathbb {W}\mathbb {W}}\subseteq \mathsf {pww}_{}\). This implies that \(\mathsf {rw}[\mathsf {co}]\subseteq \mathsf {rw}[{\mathsf {pww}_{}}]\) (by definition, \(\mathsf {rw}[\mathsf {co}]=\mathsf {rw}[\mathsf {co}_{\mathbb {W}\mathbb {W}}]\)). Therefore, the acyclicity of \(\mathsf {po}\cup \mathsf {wr}_{}\cup \mathsf {pww}_{}\cup \mathsf {rw}[{\mathsf {pww}_{}}]\) implies that its subset \(\mathsf {po}\cup \mathsf {wr}_{}\cup \mathsf {rw}[\mathsf {co}]\) is also acyclic, which means that \({h}\) satisfies CC. Also, it implies that \(\mathsf {po}\cup \mathsf {wr}_{}\cup \mathsf {cf}[{\mathsf {hb}_{}}]\) is acyclic (the last term of the union is included in \(\mathsf {pww}_{}\)), which by \(\mathsf {co}\subseteq \mathsf {hb}_{}\), implies that \(\mathsf {po}\cup \mathsf {wr}_{}\cup \mathsf {cf}[{\mathsf {co}}]\) is acyclic, and thus, *h* satisfies CCv. The fact that \({h}\) satisfies CM follows from the fact that \({h}\) satisfies CC (since \(\mathsf {po}\cup \mathsf {wr}_{}\) is acyclic) and \(\mathsf {hb}_{}\) is acyclic (\(\mathsf {hb}_{\mathbb {W}\mathbb {W}}\) is included in \(\mathsf {pww}_{}\) and the rest of the dependencies in \(\mathsf {hb}_{}\) are included in \(\mathsf {po}\cup \mathsf {wr}_{}\)).\(\Box \)

The reverse of the above lemma doesn’t hold. Figure 1c shows a history which satisfies CM and CCv, but it is not CCM. To show that this history does not satisfy CCM we use the fact that \(\mathsf {pww}_{}\) relates any two writes which are ordered by program order. Then, we get that \(\textsf {read}_{}({{x}},{1})\) and \(\textsf {write}_{}({{x}},{2})\) are related by \(\mathsf {rw}[{\mathsf {pww}_{}}]\) (because \(\textsf {write}_{}({{x}},{1})\) is related by write-read with \(\textsf {read}_{}({{x}},{1})\)), which further implies that \((\textsf {read}_{}({{x}},{1}),\textsf {read}_{}({{y}},{1}))\in \mathsf {rw}[{\mathsf {pww}_{}}]\circ \mathsf {po}\). Similarly, we have that \((\textsf {read}_{}({{y}},{1}),\textsf {read}_{}({{x}},{1}))\in \mathsf {rw}[{\mathsf {pww}_{}}]\circ \mathsf {po}\), which implies that \(\mathsf {po}\cup \mathsf {wr}_{}\cup \mathsf {pww}_{}\cup \mathsf {rw}[{\mathsf {pww}_{}}]\) is *not* acyclic, and therefore, the history does not satisfy CCM. The fact that this history satisfies CM and CCv follows easily from definitions.

Next, we show that CCM is weaker than SC, which will be important in our algorithm for checking whether a history satisfies SC.

### Lemma 2

If a history satisfies SC, then it satisfies CCM.

### Proof

Let \({h}={\left\langle {O, \mathsf {po}, \mathsf {wr}_{}}\right\rangle }\) be a history satisfying SC, i.e., there exists a *store order* \(\mathsf {ww}_{}\) such that \(\mathsf {po}\cup \mathsf {wr}_{}\cup \mathsf {ww}_{}\cup \mathsf {rw}[{\mathsf {ww}_{}}]\) is acyclic. We show that the two relations \(\mathsf {hb}_{\mathbb {W}\mathbb {W}}\) and \(\mathsf {cf}[{\mathsf {hb}_{}}]\), whose union constitutes \(\mathsf {pww}_{}\), are both included in \(\mathsf {ww}_{}\). We first prove that \(\mathsf {hb}_{}\subseteq (\mathsf {po}\cup \mathsf {wr}_{}\cup \mathsf {ww}_{}\cup \mathsf {rw}[{\mathsf {ww}_{}}])^+\) by structural induction on the definition of \(\mathsf {hb}_{o}\):

- 1.
if \((o_1,o_2)\in \mathsf {co}=(\mathsf {po}\cup \mathsf {wr}_{})^+\), then clearly, \((o_1,o_2)\in (\mathsf {po}\cup \mathsf {wr}_{}\cup \mathsf {ww}_{}\cup \mathsf {rw}[{\mathsf {ww}_{}}])^+\),

- 2.
if \((\textsf {write}_{}({{x}},{{v}}),\textsf {read}_{}({{x}},{{v}'}))\in (\mathsf {po}\cup \mathsf {wr}_{}\cup \mathsf {ww}_{}\cup \mathsf {rw}[{\mathsf {ww}_{}}])^+\) and \((\textsf {write}_{}({{x}},{{v}'}),\textsf {read}_{}({{x}},{{v}'}))\in \mathsf {wr}_{}\) for some \(\textsf {read}_{}({{x}},{{v}'})\), then \((\textsf {write}_{}({{x}},{{v}}),\textsf {write}_{}({{x}},{{v}'}))\in \mathsf {ww}_{}\). Indeed, assuming by contradiction that \((\textsf {write}_{}({{x}},{{v}'}),\textsf {write}_{}({{x}},{{v}}))\in \mathsf {ww}_{}\), we get that \((\textsf {read}_{}({{x}},{{v}'}),\textsf {write}_{}({{x}},{{v}}))\in \mathsf {rw}[{\mathsf {ww}_{}}]\) (by the definition of \(\mathsf {rw}[{\mathsf {ww}_{}}]\), using the hypothesis \((\textsf {write}_{}({{x}},{{v}'}),\textsf {read}_{}({{x}},{{v}'}))\in \mathsf {wr}_{}\)), which closes a cycle in \(\mathsf {po}\cup \mathsf {wr}_{}\cup \mathsf {ww}_{}\cup \mathsf {rw}[{\mathsf {ww}_{}}]\), a contradiction.

Since \(\mathsf {hb}_{}\subseteq (\mathsf {po}\cup \mathsf {wr}_{}\cup \mathsf {ww}_{}\cup \mathsf {rw}[{\mathsf {ww}_{}}])^+\), we get that \(\mathsf {hb}_{\mathbb {W}\mathbb {W}}\subseteq \mathsf {ww}_{}\). Also, since \(\mathsf {cf}[(\mathsf {po}\cup \mathsf {wr}_{}\cup \mathsf {ww}_{}\cup \mathsf {rw}[{\mathsf {ww}_{}}])^+]\subseteq (\mathsf {po}\cup \mathsf {wr}_{}\cup \mathsf {ww}_{}\cup \mathsf {rw}[{\mathsf {ww}_{}}])^+\) (using a similar argument as in point (2) above), we get that \(\mathsf {cf}[\mathsf {hb}_{}]\subseteq (\mathsf {po}\cup \mathsf {wr}_{}\cup \mathsf {ww}_{}\cup \mathsf {rw}[{\mathsf {ww}_{}}])^+\).

Finally, since \(\mathsf {pww}_{}\subseteq \mathsf {ww}_{}\), we get that \((\mathsf {po}\cup \mathsf {wr}_{}\cup \mathsf {pww}_{}\cup \mathsf {rw}[{\mathsf {pww}_{}}])^+\subseteq (\mathsf {po}\cup \mathsf {wr}_{}\cup \mathsf {ww}_{}\cup \mathsf {rw}[{\mathsf {ww}_{}}])^+\), which implies that the acyclicity of the latter implies the acyclicity of the former. Therefore, \({h}\) satisfies CCM.\(\Box \)

The left side of Fig. 2 (ignoring wCCM and TSO) summarizes the relationships between the consistency models presented in this section.

The partial store order \(\mathsf {pww}_{}\) can be computed in polynomial time (in the size of the input history). Indeed, the \(\mathsf {hb}_{o}\) relations can be computed using a least fixpoint calculation that converges in at most a quadratic number of iterations and acyclicity can be decided in polynomial time. Therefore,

### Theorem 1

Checking whether a history satisfies CCM is polynomial time in the size of the history.

### 3.3 An Algorithm for Checking Sequential Consistency

Algorithm 1 checks whether a given history satisfies sequential consistency. As a first step, it checks whether the given history satisfies CCM. If this is not the case then, by Lemma 2, the history does not satisfy SC either, and the algorithm returns \( false \). Otherwise, it enumerates store orders which extend the partial store order \(\mathsf {pww}_{}\), until finding one that witnesses satisfaction of SC. The history violates SC iff no such store order is found. The soundness of this last step is implied by the proof of Lemma 2, which shows that \(\mathsf {pww}_{}\) is included in any store order \(\mathsf {ww}_{}\) witnessing SC satisfaction.
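The second phase can be sketched as follows, assuming the CCM check has already succeeded and produced the partial store order (passed in as a set of write pairs); the tuple encoding of operations and the helper names are ours:

```python
from itertools import permutations

def acyclic(edges):
    """Acyclicity test via least-fixpoint transitive closure."""
    closure = set(edges)
    while True:
        new = {(a, c) for (a, b) in closure for (b2, c) in closure if b == b2}
        if new <= closure:
            return all(a != b for (a, b) in closure)
        closure |= new

def check_sc_extending(ops, po, wr, pww):
    """Enumerate total store orders per variable, pruning every permutation
    that contradicts the partial store order pww (the CCM check computing
    pww is assumed to have already passed)."""
    writes = {}
    for op in ops:
        if op[0] == "w":
            writes.setdefault(op[1], []).append(op)
    inv_wr = {(r, w) for (w, r) in wr}
    def search(vars_left, ww):
        if not vars_left:
            rw = {(r, w2) for (r, w1) in inv_wr for (w1b, w2) in ww if w1 == w1b}
            return acyclic(po | wr | ww | rw)
        x, *rest = vars_left
        for perm in permutations(writes[x]):
            ww_x = {(perm[i], perm[j])
                    for i in range(len(perm)) for j in range(i + 1, len(perm))}
            if any((b, a) in pww for (a, b) in ww_x):
                continue  # contradicts pww: prune this branch
            if search(rest, ww | ww_x):
                return True
        return False
    return search(list(writes), set())

# Tiny example: two writes on x, one read; pww already orders the writes.
w0, w1, r1 = ("w", "x", 0), ("w", "x", 1), ("r", "x", 1)
assert check_sc_extending([w0, w1, r1], {(w0, w1), (w1, r1), (w0, r1)},
                          {(w1, r1)}, pww={(w0, w1)})
```

The larger `pww` is, the more permutations are pruned, which is exactly why a stronger (yet still polynomial) intermediary criterion pays off.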

### Theorem 2

Algorithm 1 returns \( true \) iff the input history \({h}\) satisfies SC.

## 4 Checking Conformance to the TSO Model

We consider now the problem of checking whether a history satisfies TSO. Following the approach developed above for SC, we define a polynomial time checkable criterion, based on a (different) variation of causal consistency that is suitable for the case of TSO. This allows us to reduce the number of pairs of writes for which an order must be guessed in order to establish conformance to TSO.

The case of TSO requires the definition of a new intermediary consistency model because CCM is based on a causality order that includes the program order \(\mathsf {po}\) which is relaxed in the context of TSO, compared to the SC model. Indeed, CCM is *not* weaker than TSO as shown by the history in Fig. 1b (note that this does not imply that other variations of causal consistency, CC and CCv, are also not weaker than TSO). This history satisfies TSO because, based on its operational model, the operation \(\textsf {write}_{}({{x}},{2})\) of thread \(t_1\) can be delayed (pending in the store buffer of \(t_1\)) until the end of the execution. Therefore, after executing \(\textsf {read}_{}({{z}},{0})\), all the writes of thread \(t_0\) are committed to the main memory so that thread \(t_1\) can read 1 from \({y}\) and 2 from \({x}\) (it is obliged to read the value of \({x}\) from its own store buffer). This history is not admitted by CCM because it is not admitted by the weaker causal consistency variation CM. Figure 3 shows a history admitted by CCM but not by TSO. Indeed, under TSO, both \(t_2\) and \(t_3\) should see the writes on *x* and *y* performed by \(t_0\) and \(t_1\), respectively, in the same order. This is not the case, because \(t_2\) “observes” the write on *x* before the write on *y* (since it reads 0 from *y*) and \(t_3\) “observes” the write on *y* before the write on *x* (since it reads 0 from *x*). This history is admitted by CCM because the two writes are causally independent and they concern different variables. We mention that TSO and CM are also incomparable. For instance, the history in Fig. 1a is allowed by CM, but not by TSO. The history in Fig. 1b is admitted by TSO, but not by CM.

We therefore introduce a new consistency model, called *weak convergent causal memory* (wCCM), which is weaker than TSO. The model wCCM is based on causality relations induced by the relaxed program orders \(\mathsf {ppo}\) and \(\mathsf {po}\text {-}\mathsf {loc}\) instead of \(\mathsf {po}\), and the external write-read relation instead of the full write-read relation.

### 4.1 Weak Convergent Causal Memory

First, we define two causality relations relative to the partial program orders in the definition of TSO and the external write-read relation: For \(\pi \in \{ \mathsf {ppo}, \mathsf {po}\text {-}\mathsf {loc}\}\), let \(\mathsf {co}^{\pi } = (\pi \cup \mathsf {wr}_e)^+\). We also consider a notion of conflict that is defined in terms of the external write-read relation as follows: For a given relation *R*, let \(\mathsf {cf}_{e}[R]=R_{\mathbb {W}\mathbb {R}}\circ \mathsf {wr}_e^{-1}\).

We then associate with each operation *o* two happens-before relations \(\mathsf {hb}^{\mathsf {ppo}}_{o}\) and \(\mathsf {hb}^{\mathsf {po}\text {-}\mathsf {loc}}_{o}\). The definition of these relations is similar to that of \(\mathsf {hb}_{o}\) (from causal memory), the differences being that \(\mathsf {po}\) is replaced by \(\mathsf {ppo}\) and \(\mathsf {po}\text {-}\mathsf {loc}\), respectively, \(\mathsf {co}\) is replaced by \(\mathsf {co}^{\mathsf {ppo}}\) and \(\mathsf {co}^{\mathsf {po}\text {-}\mathsf {loc}}\), respectively, and \(\mathsf {wr}_{}\) is replaced by \(\mathsf {wr}_e\). Therefore, for \(\pi \in \{ \mathsf {ppo}, \mathsf {po}\text {-}\mathsf {loc}\}\), \(\mathsf {hb}^{\pi }_{o}\) is the smallest transitive relation such that:

- 1.
\((o_1,o_2)\in \mathsf {hb}^{\pi }_{o}\) if \((o_1,o_2)\in \mathsf {co}^\pi \), \((o_1,o)\in \mathsf {co}^\pi \), and \((o_2,o)\in (\mathsf {co}^\pi )^*\), and

- 2.
\((\textsf {write}_{}({{x}},{{v}}),\textsf {write}_{}({{x}},{{v}'}))\in \mathsf {hb}^{\pi }_{o}\) if \((\textsf {write}_{}({{x}},{{v}}),\textsf {read}_{}({{x}},{{v}'}))\in \mathsf {hb}^{\pi }_{o}\), and \((\textsf {write}_{}({{x}},{{v}'}),\textsf {read}_{}({{x}},{{v}'}))\in \mathsf {wr}_{}\) and \((\textsf {read}_{}({{x}},{{v}'}),o)\in \pi ^*\), for some \(\textsf {read}_{}({{x}},{{v}'})\).

The *weak partial store order* \(\mathsf {wpww}_{}\) is then defined analogously to \(\mathsf {pww}_{}\), using the happens-before relations \(\mathsf {hb}^{\mathsf {ppo}}_{}\) and \(\mathsf {hb}^{\mathsf {po}\text {-}\mathsf {loc}}_{}\) and the conflict relation \(\mathsf {cf}_{e}\) in place of \(\mathsf {hb}_{}\) and \(\mathsf {cf}\). A history satisfies *weak convergent causal memory* (wCCM) if both relations:

\(\mathsf {ppo}\cup \mathsf {wr}_e\cup \mathsf {wpww}_{}\cup \mathsf {rw}[{\mathsf {wpww}_{}}]\) and \(\mathsf {po}\text {-}\mathsf {loc}\cup \mathsf {wr}_e\cup \mathsf {wpww}_{}\cup \mathsf {rw}[{\mathsf {wpww}_{}}]\)

are acyclic.

### Lemma 3

If a history satisfies TSO, then it satisfies wCCM.

### Proof

Finally, since \(\mathsf {wpww}_{}\subseteq \mathsf {ww}_{}\), we get that \((\pi \cup \mathsf {wr}_{}\cup \mathsf {wpww}_{}\cup \mathsf {rw}[{\mathsf {wpww}_{}}])^+\subseteq (\pi \cup \mathsf {wr}_{}\cup \mathsf {ww}_{}\cup \mathsf {rw}[{\mathsf {ww}_{}}])^+\), for each \(\pi \in \{\mathsf {ppo},\mathsf {po}\text {-}\mathsf {loc}\}\). In each case, the acyclicity of the latter implies the acyclicity of the former. Therefore, \({h}\) satisfies wCCM.

The reverse of the above lemma does not hold. Indeed, it can be easily seen that wCCM is weaker than CCM (since \(\mathsf {wpww}_{}\) is included in \(\mathsf {pww}_{}\)), and the history in Fig. 3, which satisfies CCM but not TSO (as explained at the beginning of the section), is also an example of a history that satisfies wCCM but not TSO. Moreover, wCCM is incomparable to CM. For instance, the history in Fig. 1b is allowed by wCCM (since it is allowed by TSO, as explained at the beginning of the section) but not by CM. Also, since CCM is stronger than CM, the history in Fig. 3 satisfies CM but not wCCM (since it does not satisfy TSO). These relationships are summarized in Fig. 2. Establishing the precise relation between CC/CCv and TSO is difficult because they are defined using one, respectively two, acyclicity conditions. We believe that CC and CCv are weaker than TSO, but we do not have a formal proof.

Finally, it can be seen that, similarly to \(\mathsf {pww}_{}\), the weak partial store order \(\mathsf {wpww}_{}\) can be computed in polynomial time, and therefore:

### Theorem 3

Checking whether a history satisfies wCCM is polynomial time in the size of the history.

### 4.2 An Algorithm for Checking TSO Conformance

The algorithm for checking TSO conformance of a given history is given in Algorithm 2. It starts by checking whether the history violates the weaker consistency model wCCM; if so, it returns false. Otherwise, it enumerates the orders between the writes that are not related by the weak partial store order \(\mathsf {wpww}_{}\) until it finds one that establishes TSO conformance, in which case it returns true. If no such order exists, it returns false.
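The overall skeleton of this gradual check can be sketched as follows. This is only an illustration of the control flow, not the paper's implementation: the three helper functions are hypothetical and passed in as parameters, and each unordered write pair is simply oriented in both possible ways.

```python
from itertools import product

def check_tso(history, wpww, wccm_holds, unordered_write_pairs, is_valid_tso_order):
    """Gradual check: refute early via wCCM, then enumerate orientations of
    the write pairs left unordered by wpww (helpers are hypothetical)."""
    if not wccm_holds(history):
        return False                       # early, polynomial-time refutation
    pairs = unordered_write_pairs(history, wpww)
    for choice in product([True, False], repeat=len(pairs)):
        ww = set(wpww)
        for (w1, w2), keep in zip(pairs, choice):
            ww.add((w1, w2) if keep else (w2, w1))
        if is_valid_tso_order(history, ww):
            return True                    # found a witnessing store order
    return False
```

Because \(\mathsf{wpww}_{}\) already orders most write pairs in practice, the enumeration above explores far fewer candidates than a naive enumeration of all total store orders.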

### Theorem 4

Algorithm 2 returns \( true \) iff the input history \({h}\) satisfies TSO.

## 5 Experimental Evaluation

To demonstrate the practical value of the theory developed in the previous sections, we show that our algorithms are efficient and scalable. We experiment with both the SC and TSO algorithms, comparing their running time against a standard encoding of these models into boolean satisfiability, on a benchmark obtained by running realistic cache coherence protocols within the Gem5 simulator [5] in system emulation mode.

Histories are generated with random clients of the following cache coherence protocols included in the Gem5 distribution: MI, MEOSI Hammer, MESI Two Level, and MEOSI AMD Base. The randomization process is parametrized by the number of cpus (threads) and the total number of read/write operations. We ensure that every value is written at most once.
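A randomized client generator in this spirit can be sketched as follows (the structure and parameters are assumptions of this sketch, not the paper's harness); giving each write a fresh value enforces that every value is written at most once:

```python
import random

def random_history(num_cpus, num_ops, num_vars=3, seed=None):
    """Generate a per-cpu list of read/write operations with unique write values."""
    rng = random.Random(seed)
    next_value = 0
    history = {cpu: [] for cpu in range(num_cpus)}
    for _ in range(num_ops):
        cpu = rng.randrange(num_cpus)
        var = f"x{rng.randrange(num_vars)}"
        if rng.random() < 0.5:
            next_value += 1                      # fresh value per write
            history[cpu].append(("write", var, next_value))
        else:
            history[cpu].append(("read", var))
    return history
```

The unique-value discipline matters because it makes the write-read relation of a history unambiguous: each read value identifies the write it observed.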

We have compared two variations of our algorithms for checking SC/TSO with a standard encoding of SC/TSO into boolean satisfiability (named X-SAT where X is SC or TSO). The two variations differ in the way in which the partial store order \(\mathsf {pww}_{}\) dictated by CCM is completed to a total store order \(\mathsf {ww}_{}\) as required by SC/TSO: either using standard enumeration (named X-CCM+Enum where X is SC or TSO) or using a SAT solver (named X-CCM+SAT where X is SC or TSO).

### 5.1 Checking SC

Figure 4 reports on the running time of the three algorithms while increasing the number of operations or cpus. All the histories considered in this experiment satisfy SC. This is intended because valid histories force our algorithms to enumerate extensions of the partial store order (SC violations may be detected while checking CCM). The graph on the left shows the evolution of the running time when increasing the number of operations from 100 to 500, in increments of 100 (while using a constant number of 4 cpus). For each number of operations, we have considered 200 histories and computed the average running time. The graph on the right shows the running time when increasing the number of cpus from 2 to 6, in increments of 1. For *x* cpus, we have limited the number of operations to 50*x*. As before, for each number of cpus, we have considered 200 histories and computed the average running time. As can be observed, our algorithms scale much better than the SAT encoding and, interestingly enough, the difference between an explicit enumeration of \(\mathsf {pww}_{}\) extensions and one using a SAT solver is not significant. Note that even small improvements on the average running time provide large speedups when taking into account the whole testing process, i.e., checking consistency for a possibly large number of (randomly-generated) executions. For instance, the work on McVerSi [13], which focuses on the complementary problem of finding clients that increase the probability of uncovering bugs, shows that exposing bugs in some realistic cache coherence implementations can require as much as 24 h of continuous testing.

We have also measured the percentage of pairs of writes that are *not* ordered by \(\mathsf {pww}_{}\). To this end, we have considered a random sample of 200 histories (with 200 operations per history) and evaluated this percentage to be just 6.6\(\%\), which is surprisingly low. This explains the net gain in comparison to a SAT encoding of SC, since the number of \(\mathsf {pww}_{}\) extensions that need to be enumerated is quite low. As a side remark, using CCv instead of CCM in the algorithms above leads to a drastic increase in the number of unordered writes: for the same random sample of 200 histories, using CCv instead of CCM leaves 57.75% of the writes unordered on average, which is considerably larger than the percentage of unordered writes when using CCM.
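The statistic above can be computed directly from the partial store order, for instance as in the following sketch (representation and names are our assumptions; whether pairs are restricted per variable depends on how the writes are selected):

```python
from itertools import combinations

def unordered_percentage(writes, pww):
    """Percentage of write pairs ordered in neither direction by pww."""
    pairs = list(combinations(sorted(writes), 2))
    if not pairs:
        return 0.0
    unordered = sum(1 for (a, b) in pairs
                    if (a, b) not in pww and (b, a) not in pww)
    return 100.0 * unordered / len(pairs)
```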

We have also evaluated our algorithms on SC violations. These violations were generated by reordering statements from the MI implementation, e.g., swapping the order of the actions s_store_hit and p_profileHit in the transition transition(M, Store). As an optimization, our implementation gradually checks the weaker variations of causal consistency CC and CCv before checking CCM. This increases the chances of returning early in the case of a violation (a violation of CC/CCv is also a violation of CCM and SC). We have considered 1000 histories with 100 to 400 operations and 2 to 8 cpus, equally distributed over the different numbers of cpus. Figure 5 reports on the evolution of the average running time. Since these histories all happen to be CCM violations, SC-CCM+Enum and SC-CCM+SAT have the same running time. As an evaluation of our optimization, we found that 50% of the histories invalidate the weaker variations of causal consistency, CC or CCv.

### 5.2 Checking TSO

We have evaluated our TSO algorithms on the same set of histories used for SC in Fig. 4. Since these histories satisfy SC, they satisfy TSO as well. As in the case of SC, our algorithms scale better than the SAT encoding. However, differently from SC, the enumeration of \(\mathsf {wpww}_{}\) extensions using a SAT solver outperforms the explicit enumeration. Given that the difference between the two was negligible in the case of SC, the SAT variation appears to be the better choice overall.

## 6 Related Work

While several static techniques have been developed to prove that a shared-memory implementation (or cache coherence protocol) satisfies SC [1, 4, 9, 10, 11, 12, 17, 20, 23, 27, 28], few works have addressed dynamic techniques such as testing and runtime verification (which scale to more realistic implementations). From the complexity standpoint, Gibbons and Korach [21] showed that checking whether a history is SC is NP-hard, while Alur et al. [4] showed that checking SC for finite-state shared-memory implementations (over a bounded number of threads, variables, and values) is undecidable. The fact that checking whether a history satisfies TSO is also NP-hard has been proved by Furbach et al. [18].

There are several works that have addressed the testing problem for related criteria, e.g., linearizability. While SC requires that the operations in a history be explained by a linearization that is consistent with the program order, linearizability requires that such a linearization also be consistent with the real-time order between operations (linearizability is stronger than SC). The works in [25, 30] describe monitors for checking linearizability that construct linearizations of a given history incrementally, in an online fashion. This incremental construction cannot be adapted to SC since it strongly relies on the specificities of linearizability. Line-Up [8] performs systematic concurrency testing via schedule enumeration, and offline linearizability checking via linearization enumeration. The works in [15, 16] show that checking linearizability for some particular classes of ADTs can be done in polynomial time. Emmi and Enea [14] consider the problem of checking weak consistency criteria, but their approach focuses on specific relaxations in those criteria, falling back to an explicit enumeration of linearizations in the context of a criterion like SC or TSO. Bouajjani et al. [6] consider the problem of checking causal consistency. They formalize the different variations of causal consistency we consider in this work and show that checking whether a history satisfies one of these variations can be done in polynomial time.

The complementary issue of test generation, i.e., finding clients that increase the probability of uncovering bugs in shared memory implementations, has been approached in the McVerSi framework [13]. Their methodology for checking a criterion like SC lies within the context of white-box testing, i.e., the user is required to annotate the shared memory implementation with events that define the store order in an execution. Our algorithms have the advantage that the implementation is treated as a black-box requiring less user intervention.

## 7 Conclusion

We have introduced an approach for checking the conformance of a computation to SC or to TSO, a problem known to be NP-hard. The idea is to avoid an explicit enumeration of the exponential number of possible total orders between writes. Our approach is to define weaker criteria that are as strong as possible while still being checkable in polynomial time. This is useful for (1) early detection of violations, and (2) reducing the number of pairs of writes for which an order must be found in order to check SC/TSO conformance. In essence, the approach consists in capturing an "as large as possible" partial order on writes that can be computed in polynomial time (using a least fixpoint calculation), and which is a subset of any total order witnessing SC/TSO conformance. Our experimental results show that this approach is indeed useful and performant: it allows catching most violations early using an efficient check, and it computes a large kernel of write constraints that significantly reduces the number of pairs of writes left to be ordered in an enumerative way. Future work includes exploring the application of this approach to other correctness criteria that are hard to check, such as serializability in the context of transactional programs.

## References

- 1. Abdulla, P.A., Haziza, F., Holík, L.: Parameterized verification through view abstraction. STTT **18**(5), 495–516 (2016). https://doi.org/10.1007/s10009-015-0406-x
- 2. Ahamad, M., Neiger, G., Burns, J.E., Kohli, P., Hutto, P.W.: Causal memory: definitions, implementation, and programming. Distrib. Comput. **9**(1), 37–49 (1995)
- 3. Alglave, J., Maranget, L., Tautschnig, M.: Herding cats: modelling, simulation, testing, and data mining for weak memory. ACM Trans. Program. Lang. Syst. **36**(2), 7:1–7:74 (2014). https://doi.org/10.1145/2627752
- 4. Alur, R., McMillan, K.L., Peled, D.A.: Model-checking of correctness conditions for concurrent objects. Inf. Comput. **160**(1–2), 167–188 (2000). https://doi.org/10.1006/inco.1999.2847
- 5. Binkert, N., et al.: The gem5 simulator. SIGARCH Comput. Archit. News **39**(2), 1–7 (2011). https://doi.org/10.1145/2024716.2024718
- 6. Bouajjani, A., Enea, C., Guerraoui, R., Hamza, J.: On verifying causal consistency. In: Castagna, G., Gordon, A.D. (eds.) POPL 2017, pp. 626–638. ACM (2017). http://dl.acm.org/citation.cfm?id=3009888
- 7. Burckhardt, S.: Principles of Eventual Consistency. Now Publishers, Boston (2014)
- 8. Burckhardt, S., Dern, C., Musuvathi, M., Tan, R.: Line-up: a complete and automatic linearizability checker. In: Zorn, B.G., Aiken, A. (eds.) PLDI 2010, pp. 330–340. ACM (2010). https://doi.org/10.1145/1806596.1806634
- 9. Clarke, E.M., et al.: Verification of the futurebus+ cache coherence protocol. In: Agnew, D., Claesen, L.J.M., Camposano, R. (eds.) CHDL 1993. IFIP Transactions, vol. A-32, pp. 15–30. North-Holland (1993)
- 10. Delzanno, G.: Automatic verification of parameterized cache coherence protocols. In: Emerson, E.A., Sistla, A.P. (eds.) CAV 2000. LNCS, vol. 1855, pp. 53–68. Springer, Heidelberg (2000). https://doi.org/10.1007/10722167_8
- 11. Delzanno, G.: Constraint-based verification of parameterized cache coherence protocols. Formal Methods Syst. Des. **23**(3), 257–301 (2003)
- 12. Eiríksson, Á.T., McMillan, K.L.: Using formal verification/analysis methods on the critical path in system design: a case study. In: Wolper, P. (ed.) CAV 1995. LNCS, vol. 939, pp. 367–380. Springer, Heidelberg (1995). https://doi.org/10.1007/3-540-60045-0_63
- 13. Elver, M., Nagarajan, V.: McVerSi: a test generation framework for fast memory consistency verification in simulation. In: HPCA 2016, pp. 618–630. IEEE Computer Society (2016). https://doi.org/10.1109/HPCA.2016.7446099
- 14. Emmi, M., Enea, C.: Monitoring weak consistency. In: Chockler, H., Weissenbacher, G. (eds.) CAV 2018. LNCS, vol. 10981, pp. 487–506. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-96145-3_26
- 15. Emmi, M., Enea, C.: Sound, complete, and tractable linearizability monitoring for concurrent collections. PACMPL **2**(POPL), 25:1–25:27 (2018). https://doi.org/10.1145/3158113
- 16. Emmi, M., Enea, C., Hamza, J.: Monitoring refinement via symbolic reasoning. In: Grove, D., Blackburn, S. (eds.) PLDI 2015, pp. 260–269. ACM (2015). https://doi.org/10.1145/2737924.2737983
- 17. Esparza, J., Finkel, A., Mayr, R.: On the verification of broadcast protocols. In: LICS 1999, pp. 352–359. IEEE Computer Society (1999). https://doi.org/10.1109/LICS.1999.782630
- 18. Furbach, F., Meyer, R., Schneider, K., Senftleben, M.: Memory-model-aware testing: a unified complexity analysis. ACM Trans. Embed. Comput. Syst. **14**(4), 63:1–63:25 (2015). https://doi.org/10.1145/2753761
- 19. Gebser, M., Kaminski, R., Kaufmann, B., Schaub, T.: Clingo = ASP + control: preliminary report. CoRR abs/1405.3694 (2014). http://arxiv.org/abs/1405.3694
- 20. German, S.M., Sistla, A.P.: Reasoning about systems with many processes. J. ACM **39**(3), 675–735 (1992). https://doi.org/10.1145/146637.146681
- 21. Gibbons, P.B., Korach, E.: Testing shared memories. SIAM J. Comput. **26**(4), 1208–1244 (1997). https://doi.org/10.1137/S0097539794279614
- 22. Gotsman, A., Burckhardt, S.: Consistency models with global operation sequencing and their composition. In: Richa, A.W. (ed.) DISC 2017. LIPIcs, vol. 91, pp. 23:1–23:16. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik (2017). https://doi.org/10.4230/LIPIcs.DISC.2017.23
- 23. Ip, C.N., Dill, D.L.: Better verification through symmetry. Formal Methods Syst. Des. **9**(1/2), 41–75 (1996). https://doi.org/10.1007/BF00625968
- 24. Lamport, L.: How to make a multiprocessor computer that correctly executes multiprocess programs. IEEE Trans. Comput. **28**(9), 690–691 (1979). https://doi.org/10.1109/TC.1979.1675439
- 25. Lowe, G.: Testing for linearizability. Concurrency Comput. Pract. Experience **29**(4) (2017). https://doi.org/10.1002/cpe.3928
- 26. Perrin, M., Mostefaoui, A., Jard, C.: Causal consistency: beyond memory. In: PPoPP 2016, pp. 26:1–26:12. ACM, New York (2016)
- 27. Pong, F., Dubois, M.: A new approach for the verification of cache coherence protocols. IEEE Trans. Parallel Distrib. Syst. **6**(8), 773–787 (1995). https://doi.org/10.1109/71.406955
- 28. Qadeer, S.: Verifying sequential consistency on shared-memory multiprocessors by model checking. IEEE Trans. Parallel Distrib. Syst. **14**(8), 730–741 (2003). https://doi.org/10.1109/TPDS.2003.1225053
- 29. Sewell, P., Sarkar, S., Owens, S., Nardelli, F.Z., Myreen, M.O.: x86-TSO: a rigorous and usable programmer's model for x86 multiprocessors. Commun. ACM **53**(7), 89–97 (2010). https://doi.org/10.1145/1785414.1785443
- 30. Wing, J.M., Gong, C.: Testing and verifying concurrent objects. J. Parallel Distrib. Comput. **17**(1–2), 164–182 (1993). https://doi.org/10.1006/jpdc.1993.1015
- 31. Wolper, P.: Expressing interesting properties of programs in propositional temporal logic. In: POPL 1986, pp. 184–193. ACM Press (1986). https://doi.org/10.1145/512644.512661

## Copyright information

**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.