An Algorithm Inspired by Constraint Solvers to Infer Inductive Invariants in Numeric Programs
Abstract
This paper addresses the problem of proving a given invariance property \(\varphi \) of a loop in a numeric program, by inferring automatically a stronger inductive invariant \(\psi \). The algorithm we present is based on both abstract interpretation and constraint solving. As in abstract interpretation, it computes the effect of a loop using a numeric abstract domain. As in constraint satisfaction, it works from “above”—iteratively splitting and tightening a collection of abstract elements until an inductive invariant is found. Our experiments show that the algorithm can find nonlinear inductive invariants that cannot normally be obtained using intervals (or octagons), even when classic techniques for increasing abstract-interpretation precision are employed—such as increasing and decreasing iterations with extrapolation, partitioning, and disjunctive completion. The advantage of our work is that because the algorithm uses standard abstract domains, it sidesteps the need to develop complex, nonstandard domains specialized for solving a particular problem.
Keywords
Constraint Programming · Loop Iteration · Abstract Interpretation · Constraint Solver · Abstract Domain
1 Introduction
A key concept in formal verification of programs is that of invariants, i.e., properties true of all executions of a program. For instance, safety properties, such as the fact that program variables stay within their expected bounds, are invariance properties, while more complex properties, such as termination, often depend crucially on invariants. Hence, a large part of program-verification research concerns methods for checking or inferring suitable invariants. Invariants are particularly important for loops: an invariant for a loop provides a single property that holds on every loop iteration, and relieves one from having to prove the safety of each loop iteration individually (which might not be possible for infinite-state systems when the number of loop iterations is unbounded, and not practical when it is finite but large). The main principle for proving that a property is an invariant for a loop is to find a stronger property that is an inductive invariant, i.e., it holds when entering the loop for the first time, and is stable across a single loop iteration.
Consider the program in Fig. 1. It has two variables, x and y, and its loop body performs a 45-degree rotation of the point (x, y) about the origin, with a slight inward scaling. Additionally, we are given a precondition telling us that, upon entry to the loop, \(x \in [-1,1]\) and \(y \in [-1,1]\). Intuitively, no matter how many times we go around this loop, the point (x, y) will not move far away from the origin. To make this statement precise, we choose a bounding box that is aligned with the axes, such as Open image in new window, and observe that the program state, considered as a point (x, y), is guaranteed to lie inside I every time that execution reaches the head of the loop. Consequently, I is an invariant at the head of the loop.
Even though I is an invariant at the head of the loop, it is not an inductive invariant, because there are some points in I, such as its corners, that do not remain inside I after a 45-degree rotation. Actually, the corner points are not reachable states; however, there is no way to express that fact using an axis-aligned box. In fact, no box is an inductive invariant for this program.
To express symbolically the distinction between invariants and inductive invariants, we write \(E=[-1,1]\times [-1,1]\) for the set of states that satisfy the precondition, and write \(F\) for the function that represents the transformation performed by the loop body. The set of program states reachable at the loop head after any number of iterations is then \(\bigcup _{n \in \mathbb {N}} F^{n}(E)\). The box I is an invariant at the loop head because I includes all states reachable at the loop head; that is, \(\bigcup _{n \in \mathbb {N}} F^{n}(E) \subseteq I\). However, I is not an inductive invariant because \(F(I) \nsubseteq I\).
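To make the example concrete, the following Python sketch simulates such a loop body. The scaling factor S is a hypothetical stand-in, since the exact constant of Fig. 1 is not reproduced here; any S strictly between \(1/\sqrt{2}\) and 1 exhibits the same behavior.

```python
import math

S = 0.9  # hypothetical inward-scaling factor; Fig. 1's exact constant is not given here

def f(x, y):
    # One loop iteration: rotate (x, y) by 45 degrees about the origin,
    # then scale towards the origin by the factor S.
    c = S / math.sqrt(2.0)
    return c * (x - y), c * (x + y)

# Every point of E = [-1,1] x [-1,1] stays inside I = [-2,2] x [-2,2]:
# the Euclidean norm shrinks by S at each step, so no orbit leaves I.
p = (1.0, 1.0)  # a corner of E, farthest from the origin
for _ in range(100):
    p = f(*p)
    assert abs(p[0]) <= 2.0 and abs(p[1]) <= 2.0

# Yet I itself is not inductive: its corner (2, 2) leaves I in a single step.
x1, y1 = f(2.0, 2.0)
print(y1 > 2.0)  # True
```

This mirrors the situation in the text: I contains every reachable state, yet \(F(I) \nsubseteq I\) because unreachable corner points escape.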
Suppose that we wish to prove that I is an invariant. It is not practical to do so by directly computing the set of reachable states and checking whether that is a subset of I, because that would require executing the loop an extremely large (and possibly infinite) number of times. Instead, we will look for a set of boxes like J, shown in Fig. 1(c). One iteration of the loop maps this set of boxes into a subset of itself: the inward scaling, along with the rotation, maps the entire shape, including the jagged edges, into its own interior. Thus, even though no single box is an inductive invariant, a set of boxes, such as J, can be an inductive invariant. Note that this property can be verified by analyzing the effect of just one iteration of the loop applied to J. However, as this example illustrates, an inductive invariant like J may be significantly more complicated, and therefore more difficult to find, than an invariant such as I.
This paper is concerned with automatically strengthening a given invariant into an inductive invariant for numeric programs that manipulate reals or floats. In contrast to classical fixpoint-finding methods that start from the bottom of a lattice of abstract values and iteratively work upwards (towards an overapproximation of the infinite union mentioned above), the algorithm described in Sect. 4 starts with the invariant that the user wishes to prove and incrementally strengthens it until an inductive invariant is found.

We demonstrate that ideas and techniques from constraint programming, such as an abstract domain consisting of disjunctions of boxes, and operators for splitting and tightening boxes, can be used to find inductive invariants.

We present an algorithm based on those techniques and demonstrate that it is able to find inductive invariants for numerical, single-loop programs that manipulate reals or floats.

We demonstrate the generality of our approach by extending the algorithm to optionally use an abstract domain based on octagons instead of boxes.

We further extend our algorithm to optionally run in a “relatively-complete mode”, which guarantees that an inductive invariant will be found if one exists in the appropriate abstract domain and can be proven inductive using boxes.
2 Overview
We wish to prove that, on each iteration, the property \(s_0,s_1\in [-4,4]\) holds. This property is indeed invariant; however, it is not inductive—and no box is inductive—for reasons similar to those explained in Sect. 1. More generally, interval arithmetic [23] is ineffective at proving that box invariants hold. Control theory teaches us that there exist quadratic properties—ellipsoids—that can be inductive. Verification techniques that exploit this knowledge a priori can be developed, such as [12]. However, that approach constitutes a program-specific method. Figure 2(b) shows an inductive invariant found by our method in 76 ms. While the shape of an ellipsoid is roughly recognizable, the inductive invariant found is actually the set union of 181 non-overlapping boxes. Our method employs only sets of boxes and does not use any knowledge from control theory.
Abstract Interpretation. Abstract interpretation [8] provides tools to infer inductive invariants. It starts from the observation that the most-precise loop invariant is inductive and can be expressed mathematically as a least fixpoint or, constructively, as the limit of an iteration sequence. In general, the most-precise loop invariant is not computable because the iterates live in infinite state spaces, and the iteration sequences can require a transfinite number of iterations to reach the least fixpoint. Abstract interpretation solves the first issue by reasoning in an abstract domain of simpler, computable properties (such as intervals [8] or octagons [20]), and the second issue by using extrapolation operators (widenings and narrowings). Its effectiveness relies on the choice of an appropriate abstract domain, which must be able to represent inductive invariants, and also to support sufficiently precise operators—both to model program instructions and to perform extrapolation.
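For readers less familiar with extrapolation, the following minimal interval-domain sketch (an illustration, not the paper's implementation) computes a loop-head invariant for a simple counting loop using increasing iterations with widening, followed by one decreasing (narrowing-style) iteration.

```python
import math

INF = math.inf

def join(a, b):
    # Least upper bound of two intervals.
    return (min(a[0], b[0]), max(a[1], b[1]))

def widen(a, b):
    # Classic interval widening: extrapolate unstable bounds to infinity.
    lo = a[0] if b[0] >= a[0] else -INF
    hi = a[1] if b[1] <= a[1] else INF
    return (lo, hi)

def f(s):
    # Abstract effect of one iteration of:  while x < 100: x = x + 1
    lo, hi = s
    lo, hi = lo, min(hi, 99)      # guard x <= 99
    if lo > hi:
        return None               # empty: the loop body is unreachable
    return (lo + 1, hi + 1)       # x = x + 1

E = (0, 0)                        # entry: x = 0
X = E
while True:                       # increasing iterations with widening
    body = f(X)
    nxt = E if body is None else join(E, body)
    W = widen(X, nxt)
    if W == X:
        break
    X = W                         # X jumps to (0, inf) after one widening
one = f(X)                        # one decreasing iteration recovers precision
X = E if one is None else join(E, one)
print(X)  # (0, 100): the loop-head invariant 0 <= x <= 100
```

The widening step loses the upper bound, and the decreasing iteration recovers it; this is the precision trade-off the text refers to.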
The Astrée analyzer [4] makes use of a large number of abstract domains, some of which are specific to particular application domains. In particular, the use of digital filters similar to Fig. 2(a) in aerospace software, and the inability of the interval domain to find any inductive invariant for them, motivated the design of the digital-filter domain [12] and its inclusion in Astrée. Note that Fig. 2(b) shows that an inductive invariant can, in fact, be expressed using a disjunction of boxes. However, applying classical disjunctive-domain constructions, such as disjunctive completion, or state- or trace-partitioning over the interval domain does not help in finding an inductive invariant. Indeed, these methods rely on program features, such as tests, control-flow joins, or user-provided hints, none of which occur in Fig. 2(a).
These rules, along with a few others, are applied until either the set of boxes is inductive or failure is detected.
Even when an inductive invariant can be represented in a given abstract domain, approximation due to widening can prevent an abstract interpreter from finding any inductive invariant—let alone the best one. The version of our algorithm presented in Sect. 4 shares this deficiency (although not because widening is involved); however, we modify our method to overcome this problem in Sect. 5.3.
3 Terminology and Notation
3.1 Language and Semantics
We can now formalize the notion of invariant and inductive invariant.
 1.
An invariant is any set I such that Open image in new window
 2.
An inductive invariant is any set I such that Open image in new window
By Tarski’s Theorem [29], Open image in new window implies Open image in new window . Moreover, Open image in new window is the smallest invariant, and it is also an inductive invariant. Because computing or approximating Open image in new window is generally much easier than Open image in new window , it is much easier to check that a property is an inductive invariant than to check that it is an invariant.
3.2 Abstract Interpretation
The key idea underlying abstract interpretation is to replace each operation in \(\mathcal {P}(\mathcal {E})\) with an operation that works in an abstract domain \(\mathcal {D}^\sharp \) of properties. Each abstract element \(S^\sharp \in \mathcal {D}^\sharp \) represents a set of points \(\gamma (S^\sharp )\) through a concretization function \(\gamma :\mathcal {D}^\sharp \rightarrow \mathcal {P}(\mathcal {E})\). Additionally, we assume that there exist abstract primitives \(\mathfrak {assign}^\sharp (V, expr ,S^\sharp )\), \(\mathfrak {guard}^\sharp ( bexpr ,S^\sharp )\), \(\cup ^\sharp \), \(\cap ^\sharp \), \(\bot ^\sharp \), \(\top ^\sharp \in \mathcal {D}^\sharp \) that model, respectively, the effect of Open image in new window , \(\cup \), \(\cap \), \(\emptyset \), and \(\mathcal {E}\). Then, the semantics of Fig. 5 can be abstracted as shown in Fig. 6.
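As an illustration, a minimal one-variable interval domain with primitives in the spirit of those above might look as follows; the function names mirror the text but are otherwise hypothetical, and only two instances of \(\mathfrak {assign}^\sharp \) and \(\mathfrak {guard}^\sharp \) are sketched.

```python
TOP = (float("-inf"), float("inf"))  # abstracts the whole state space
BOT = None                           # bottom element: the empty set

def join_sharp(a, b):        # abstracts set union
    if a is BOT: return b
    if b is BOT: return a
    return (min(a[0], b[0]), max(a[1], b[1]))

def meet_sharp(a, b):        # abstracts set intersection
    if a is BOT or b is BOT: return BOT
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return BOT if lo > hi else (lo, hi)

def guard_sharp_le(c, s):    # abstracts the test  x <= c
    return meet_sharp(s, (float("-inf"), c))

def assign_sharp_add(c, s):  # abstracts the assignment  x := x + c
    return BOT if s is BOT else (s[0] + c, s[1] + c)

# Abstractly execute "if x <= 9 then x := x + 1" starting from x in [0, 20].
s = assign_sharp_add(1, guard_sharp_le(9, (0.0, 20.0)))
print(s)  # (1.0, 10.0)
```

Each abstract operation soundly over-approximates its concrete counterpart through the concretization \(\gamma \).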
3.3 Continuous Constraint Solving
 1.
Propagation steps. These exploit constraints to reduce the domains of variables by removing values that cannot participate in a solution. Ultimately, the goal is to achieve consistency, when no more values can be removed. Several notions of consistency exist. We use here so-called hull consistency, where domains are intervals and they are consistent when their bounds cannot be tightened anymore without possibly losing a solution.
 2.
Splitting steps. When domains cannot be reduced anymore, the solver performs an assumption: it splits a domain and continues searching in that reduced search space. The search proceeds, alternating propagation and splits, until the search space contains no solution, only solutions, or shrinks below a user-specified size. Backtracking is then used to explore other assumptions.
This algorithm is sketched in Fig. 7. It maintains a collection of search spaces yet to explore and a set of solution boxes. The algorithm always terminates, at which point every box either contains only solutions or meets the user-specified size limit, and every solution is accounted for (i.e., belongs to a box).
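A minimal propagate-and-split solver in this style, covering the solutions of \(x^2+y^2\le 1\) inside \([-2,2]^2\), can be sketched as follows. The coarse propagator and the size limit are illustrative choices, not those of Fig. 7.

```python
EPS = 0.25  # user-specified size limit

def tighten(box):
    # A coarse hull-consistency-style propagator for x^2 + y^2 <= 1:
    # each variable must lie within [-1, 1].
    (xl, xh), (yl, yh) = box
    xl, xh = max(xl, -1.0), min(xh, 1.0)
    yl, yh = max(yl, -1.0), min(yh, 1.0)
    if xl > xh or yl > yh:
        return None
    return ((xl, xh), (yl, yh))

def classify(box):
    (xl, xh), (yl, yh) = box
    far = max(xl * xl, xh * xh) + max(yl * yl, yh * yh)   # farthest corner
    near = (0.0 if xl <= 0.0 <= xh else min(xl * xl, xh * xh)) + \
           (0.0 if yl <= 0.0 <= yh else min(yl * yl, yh * yh))  # closest point
    if far <= 1.0:
        return "inside"    # the whole box satisfies the constraint
    if near > 1.0:
        return "outside"   # the box contains no solution
    return "boundary"

def solve(box):
    inside, boundary, todo = [], [], [box]
    while todo:
        b = tighten(todo.pop())           # propagation step
        if b is None:
            continue
        c = classify(b)
        if c == "inside":
            inside.append(b)
        elif c == "boundary":
            (xl, xh), (yl, yh) = b
            if max(xh - xl, yh - yl) <= EPS:
                boundary.append(b)        # too small to split: keep as-is
            elif xh - xl >= yh - yl:      # splitting step: cut the widest side
                m = (xl + xh) / 2
                todo += [((xl, m), (yl, yh)), ((m, xh), (yl, yh))]
            else:
                m = (yl + yh) / 2
                todo += [((xl, xh), (yl, m)), ((xl, xh), (m, yh))]
    return inside, boundary

inside, boundary = solve(((-2.0, 2.0), (-2.0, 2.0)))
print(len(inside) > 0)  # True: some boxes contain only solutions
```

On termination, every kept box either contains only solutions or has reached the size limit, matching the invariant stated for Fig. 7.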
There are strong connections with abstract interpretation. We observed in [24] that the classic propagator for hull consistency for a constraint \(\phi _i\), so-called HC4 [3], is very similar to the classic algorithm for \(\mathfrak {guard}^\sharp (\phi _i,S)\) in the interval abstract domain. When there are several constraints, the propagators for each constraint are applied in turn, until a fixpoint is reached; this approach is similar to Granger’s method of local iterations [15] used in abstract interpretation. We went further in [24] and showed that the algorithm of Fig. 7 is an instance of a general algorithm that is parameterized by an arbitrary numeric abstract domain. While standard continuous constraint programming corresponds to the interval domain, we established that there are benefits from using relational domains instead, such as octagons [20]. In abstract-interpretation terminology, this algorithm computes decreasing iterations in the disjunctive completion of the abstract domain. These iterations can be interpreted as an iterative overapproximation of a fixpoint: Open image in new window where Open image in new window , and we can check easily that this concrete fixpoint Open image in new window indeed equals Open image in new window .
In the following, we will adapt this algorithm to overapproximate a program’s semantic fixpoint, Open image in new window , instead (Eq. (1)). There are two main differences between these fixpoints. First, constraint programming approximates a greatest fixpoint instead of a least fixpoint. Second, and less obvious, is that constraint programming is limited to approximating Open image in new window , where \(\mathbb {C}\) is reductive, i.e., \(\forall S:\,\mathbb {C}(S)\subseteq S\). It relies on the fact that, at every solving step, the join of all the boxes (whether solutions or yet to be explored) already covers all of the solutions. In a sense, it starts with a trivial covering of the solution, the entire search space, and refines it to provide a better (tighter) covering of the solution. In contrast, when we approximate Open image in new window , the function being iterated is a join morphism, but it is often neither extensive nor reductive. Moreover, the search will start with an invariant that may not be an inductive invariant.
4 InductiveInvariant Inference

\(E^\sharp \in \mathcal {D}^\sharp \): the abstraction of the entry states

\(F^\sharp \): the abstraction of the loop body

\(I^\sharp \in \mathcal {D}^\sharp \): an abstract underapproximation of the invariant I we seek, i.e., \(\gamma (I^\sharp )\subseteq I\) (in general the equality holds). We assume that \(\gamma (E^\sharp )\subseteq \gamma (I^\sharp )\); otherwise, no subset of \(\gamma (I^\sharp )\) can contain \(\gamma (E^\sharp )\) and be an inductive invariant.
Property 1
 1.
\(\gamma (E^\sharp )\subseteq \gamma (\mathfrak {S}^\sharp )\) (\(\mathfrak {S}^\sharp \) contains the entry)
 2.
\(\gamma (\mathfrak {S}^\sharp )\subseteq \gamma (I^\sharp )\) (\(\mathfrak {S}^\sharp \) entails the invariant)
 3.
\(\gamma F^\sharp (\mathfrak {S}^\sharp )\subseteq \gamma (\mathfrak {S}^\sharp )\) (\(\mathfrak {S}^\sharp \) is inductive)
Property 1 is indeed sufficient to ensure we have found our inductive invariant.
Theorem 1
If \(\mathfrak {S}^\sharp \) satisfies Properties 1.1–1.3, then \(\gamma (\mathfrak {S}^\sharp )\) is an inductive invariant that implies I.
Proof
See the appendix of the technical report version of this paper [21]. \(\square \)
In addition to Property 1, we will ensure that the boxes in \(\mathfrak {S}^\sharp \) do not overlap. Because we use the interval abstract domain, which allows non-strict inequality constraints only, we do not enforce that \(\mathfrak {S}^\sharp \) forms a partition. That is, we may have \(\gamma (S_1^\sharp )\cap \gamma (S_2^\sharp )\ne \emptyset \) for \(S_1^\sharp \ne S_2^\sharp \in \mathfrak {S}^\sharp \); however, intersecting boxes have an intersection of null volume, Open image in new window , where Open image in new window is the volume of a box \(S^\sharp \).
In a manner similar to a constraint-solving algorithm, our algorithm starts with a single box, \(\mathfrak {S}^\sharp =\{I^\sharp \}\), containing the invariant to prove, and iteratively selects a box to shrink, split, or discard, until Properties 1.1–1.3 are satisfied. Property 1.1 (entry containment) holds when the algorithm starts, and we take care never to remove box parts that intersect \(E^\sharp \) from \(\mathfrak {S}^\sharp \). Property 1.2 (invariant entailment) also holds when the algorithm starts, and naturally holds at every step of the algorithm because we never add any point to \(\gamma (\mathfrak {S}^\sharp )\). Property 1.3 (inductiveness) does not hold when the algorithm begins, and it is the main property to establish. We will start by presenting a few useful operations on individual boxes, before presenting our algorithm in Sect. 4.2.
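For illustration, the following sketch checks Property 1.3 for the rotation example of Sect. 1 on a fixed grid of boxes, where coverage checks are exact. The grid step and the scaling constants are hypothetical choices; when every grid cell of \(I^\sharp \) is present, Properties 1.1 and 1.2 hold trivially, so only benignness is tested.

```python
import math

H = 0.5              # grid step (hypothetical)
LO, HI = -2.0, 2.0   # the candidate invariant I = [-2,2] x [-2,2]
N = int((HI - LO) / H)

def f_sharp(box, s):
    # Interval abstraction of one loop iteration: a 45-degree rotation
    # with inward scaling s (s is a stand-in for the paper's constant).
    (xl, xh), (yl, yh) = box
    c = s / math.sqrt(2.0)
    return ((c * (xl - yh), c * (xh - yl)), (c * (xl + yl), c * (xh + yh)))

def cell_box(i, j):
    return ((LO + i * H, LO + (i + 1) * H), (LO + j * H, LO + (j + 1) * H))

def cells_overlapping(box):
    # Grid cells whose interior meets the box; None if the box leaves I.
    (xl, xh), (yl, yh) = box
    if xl < LO or yl < LO or xh > HI or yh > HI:
        return None
    r = lambda lo, hi: range(max(0, math.floor((lo - LO) / H)),
                             min(N, math.ceil((hi - LO) / H)))
    return {(i, j) for i in r(xl, xh) for j in r(yl, yh)}

def is_inductive(cells, s):
    # Property 1.3 on a fixed grid: every cell's abstract image must be
    # covered by the cells that are present.
    for (i, j) in cells:
        img = cells_overlapping(f_sharp(cell_box(i, j), s))
        if img is None or not img <= cells:
            return False
    return True

full = {(i, j) for i in range(N) for j in range(N)}
print(is_inductive(full, 0.9), is_inductive(full, 0.5))  # False True
```

With strong enough scaling (s = 0.5 here) the full grid over I is already inductive; with weaker scaling (s = 0.9) corner cells have images escaping I, so the grid must be refined or pruned, which is what the full algorithm does by splitting, shrinking, and discarding.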
4.1 Box Operations

necessary if \(\gamma (S^\sharp )\cap \gamma (E^\sharp )\ne \emptyset \);

benign if \(\gamma (F^\sharp (S^\sharp ))\subseteq \gamma (\mathfrak {S}^\sharp )\);

doomed^{1} if \(\gamma (F^\sharp (S^\sharp ))\cap \gamma (\mathfrak {S}^\sharp )=\emptyset \);

useful if \(\gamma (S^\sharp )\cap \gamma F^\sharp (\mathfrak {S}^\sharp )\ne \emptyset \).
A necessary box contains some point from the entry \(E^\sharp \), and thus cannot be completely discarded. A benign box has its image under \(F^\sharp \) completely covered by \(\gamma (\mathfrak {S}^\sharp )\), and so does not impede inductiveness. Our goal is to make all boxes benign. In contrast, a doomed box prevents inductiveness, and no shrinking or splitting operation on this or another box will make it benign. A useful box \(S^\sharp \) intersects the image of a box \(S^\sharp _0\) in \(\mathfrak {S}^\sharp \), i.e., \(S^\sharp \) helps make \(S^\sharp _0\) benign and therefore \(S^\sharp \) is worth keeping.
Splitting a box also carries precision benefits. In general, abstract functions \(F^\sharp \) are sub-join morphisms: \(\gamma (F^\sharp (L^\sharp ))\cup \gamma (F^\sharp (U^\sharp )) \subseteq \gamma (F^\sharp (L^\sharp \cup ^\sharp U^\sharp ))=\gamma (F^\sharp (S^\sharp ))\). Splitting \(S^\sharp \) along a variable \(V_i\) at a value c corresponds to a case analysis, where the effect of the loop body is analyzed separately for \(V_i\le c\) and \(V_i\ge c\), which is at the core of partitioning techniques used in abstract interpretation. While replacing \(S^\sharp \) with \(\{L^\sharp ,U^\sharp \}\) in \(\mathfrak {S}^\sharp \) does not change \(\gamma (\mathfrak {S}^\sharp )\), it is likely to shrink \(\gamma F^\sharp (\mathfrak {S}^\sharp )\), helping Property 1.3 to hold. Further improvements can be achieved by tightening \(L^\sharp \) and \(U^\sharp \) after the split.
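The precision gain from splitting can be observed numerically on the running rotation example: after a split, the two image boxes together cover strictly less area than the image of the unsplit box. The coefficient below is a hypothetical stand-in for the loop-body constant.

```python
import math

C = 0.9 / math.sqrt(2.0)  # hypothetical loop-body coefficient (running example)

def f_sharp(box):
    # Interval image of the 45-degree rotation with inward scaling.
    (xl, xh), (yl, yh) = box
    return ((C * (xl - yh), C * (xh - yl)), (C * (xl + yl), C * (xh + yh)))

def area(box):
    (xl, xh), (yl, yh) = box
    return (xh - xl) * (yh - yl)

def inter_area(a, b):
    (axl, axh), (ayl, ayh) = a
    (bxl, bxh), (byl, byh) = b
    w = min(axh, bxh) - max(axl, bxl)
    h = min(ayh, byh) - max(ayl, byl)
    return max(w, 0.0) * max(h, 0.0)

S = ((0.0, 1.0), (0.0, 1.0))
L = ((0.0, 0.5), (0.0, 1.0))  # S split along x at 0.5
U = ((0.5, 1.0), (0.0, 1.0))

a_single = area(f_sharp(S))
a_pair = area(f_sharp(L)) + area(f_sharp(U)) - inter_area(f_sharp(L), f_sharp(U))
print(a_pair < a_single)  # True: the split strictly shrinks the covered image
```

Even though the two image boxes have the same join as before, as a *set of boxes* they cover a strictly smaller region, which is exactly what helps Property 1.3.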
Discarding. Constraint solvers only discard boxes when they do not contain any solution. Likewise, we can discard boxes that are neither necessary (no intersection with \(E^\sharp \)) nor useful (no intersection with \(\gamma F^\sharp (\mathfrak {S}^\sharp )\)). However, we can go further, and remove useful boxes as well. On the one hand, this may shrink \(\gamma F^\sharp (\mathfrak {S}^\sharp )\), and help Property 1.3 to hold. On the other hand, given \(T^\sharp \) for which \(S^\sharp \) is useful, i.e., \(\gamma (F^\sharp (T^\sharp ))\cap \gamma (S^\sharp )\ne \emptyset \), it will strictly reduce \(T^\sharp \)’s coverage, and possibly make \(T^\sharp \) not benign anymore. Nevertheless, in this latter case, it is often possible to recover from this situation later, by splitting, shrinking, or discarding \(T^\sharp \) itself. The possibility to remove boxes that are unlikely to be in any inductive invariant is a particularly interesting heuristic to apply after the algorithm reaches the cutoff threshold for size or coverage. Such a removal will trigger the removal of parts of boxes whose image intersects the removed box, until—one hopes—after a cascade of similar effects, the algorithm identifies a core of boxes that forms an inductive invariant.
4.2 Algorithm
Our algorithm is presented in Fig. 8. It maintains, in \(\mathfrak {S}^\sharp \), a set of boxes to explore, initialized with the candidate invariant \(I^\sharp \) of interest. Then, while there are boxes to explore, the box \(S^\sharp \) with smallest coverage (Eq. (4)) is removed from \(\mathfrak {S}^\sharp \). If \(S^\sharp \) has a coverage of 1, then so do the remaining boxes in \(\mathfrak {S}^\sharp \); hence all of the boxes are benign and the algorithm has found an inductive invariant: we add \(S^\sharp \) back to \(\mathfrak {S}^\sharp \) and return that set of boxes. Otherwise, some work is performed to help make \(S^\sharp \) benign. In the general case, where \(S^\sharp \) is not necessary (does not intersect \(E^\sharp \)), we are free to discard \(S^\sharp \), which we do if it is useless, too small to be split, or its coverage makes it unlikely to become benign; otherwise, we split \(S^\sharp \). When \(S^\sharp \) is necessary, however, we refrain from removing it. Hence we split it, unless it is too small and we cannot split it anymore. At this point there is no way to satisfy all of Properties 1.1 and 1.3 and the user’s specified minimum box size; consequently, the algorithm fails.
Normally, the search function always returns through either the success or failure return statements in the loop body. Normal loop exit corresponds to the corner case where all boxes have been discarded, which means that no box was ever necessary, i.e., \(\gamma (E^\sharp )=\emptyset \); we return \(\emptyset \) in that case, because it is indeed an inductive invariant for loops with an empty entry state set.
The splitting procedure takes care to tighten the two subboxes \(S^\sharp _1\) and \(S^\sharp _2\) generated from the split. split is always called with a box of nonzero size, and it generates boxes smaller than \(S^\sharp \).
Note that, given a box \(S^\sharp \), it is possible that \(\gamma (F^\sharp (S^\sharp ))\cap \gamma (S^\sharp )\ne \emptyset \), i.e., a box is useful to itself. For this reason, after removing \(S^\sharp \) from \(\mathfrak {S}^\sharp \), we take care to compute the coverage as Open image in new window and not as Open image in new window , and similarly after a split.
The following theorem states the correctness of the algorithm:
Theorem 2
search always terminates, either with a failure, or returns a set \(\mathfrak {S}^\sharp \) of boxes satisfying Properties 1.1–1.3.
Proof
See the appendix of the technical report version of this paper [21]. \(\square \)
4.3 Implementation Details
The algorithm of Fig. 8 requires maintaining some information about boxes in \(\mathfrak {S}^\sharp \), such as their coverage and whether they are benign or useful. It is important to note that modifying a single box in \(\mathfrak {S}^\sharp \) may not only modify the coverage or class of the modified box, but also of other boxes in \(\mathfrak {S}^\sharp \). We discuss here data structures to perform such updates efficiently, without scanning \(\mathfrak {S}^\sharp \) entirely after each operation.
Partitioning. We enrich the algorithm state with a set \(\mathfrak {P}^\sharp \in \mathcal {P}(\mathcal {D}^\sharp )\) of boxes that, similarly to \(\mathfrak {S}^\sharp \), do not overlap, but—unlike \(\mathfrak {S}^\sharp \)—always covers the whole invariant space \(I^\sharp \). Moreover, we ensure that every box in \(\mathfrak {P}^\sharp \) contains at most one box from \(\mathfrak {S}^\sharp \); hence, we maintain a contents-of function Open image in new window indicating which box, if any, of \(\mathfrak {S}^\sharp \) is contained in each part of \(\mathfrak {P}^\sharp \). Because the algorithm can discard boxes from \(\mathfrak {S}^\sharp \), some parts in \(P^\sharp \in \mathfrak {P}^\sharp \) may have no box at all, in which case Open image in new window ; otherwise Open image in new window . This property can be easily ensured by splitting boxes in \(\mathfrak {P}^\sharp \) whenever a box in \(\mathfrak {S}^\sharp \) is split. When a box from \(\mathfrak {S}^\sharp \) is tightened or discarded, no change in \(\mathfrak {P}^\sharp \) is required.
We rely on the fact that, apart from the initial box \(I^\sharp \), every new box considered by the algorithm comes from a parent box by splitting or tightening, and is thus smaller than a box already in \(\mathfrak {S}^\sharp \). Assuming for now that, when \(S^\sharp \) is shrunk into \(T^\sharp \), \(F^\sharp (T^\sharp )\) is also smaller than \(F^\sharp (S^\sharp )\) (see the note on monotonicity below), we see that Open image in new window and Open image in new window are subsets, respectively, of Open image in new window and Open image in new window , so that it is sufficient to iterate over these sets and filter out elements that no longer belong to them. Finally, to quickly decide for which \(S^\sharp \) we must update Open image in new window and Open image in new window whenever some element in \(\mathfrak {P}^\sharp \) or \(\mathfrak {S}^\sharp \) is modified or discarded, we maintain Open image in new window and Open image in new window as well.
Float Implementation. While it is possible to use exact numbers—such as rationals with arbitrary precision—to represent interval bounds, more efficiency can be achieved using floating-point numbers. Floats suffer, however, from rounding error. It is straightforward to perform sound abstract interpretation with floating-point arithmetic. In particular, to construct a suitable \(F^\sharp \) function from the program syntax, it is sufficient to round—in every computation—upper bounds towards \(+\infty \) and lower bounds towards \(-\infty \). In addition to the requirement that we have a sound \(F^\sharp \), our algorithm requires that we can (i) compute the coverage of a box, and (ii) determine whether it is benign, necessary, or useful. Because intersection, inclusion, and emptiness checking are exact operations on float boxes, we can exactly determine whether a box is benign, necessary, or useful; it is also possible to compute a box size exactly.
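In Python, which does not expose rounding-mode control, outward rounding can be approximated by stepping one ulp outward with math.nextafter. This is slightly coarser than true directed rounding but preserves the soundness of every bound; it is a sketch of the idea, not the implementation used in the paper.

```python
import math

def add_down(a, b):
    # Lower bound of a + b, stepped one ulp towards -infinity for soundness.
    return math.nextafter(a + b, -math.inf)

def add_up(a, b):
    # Upper bound of a + b, stepped one ulp towards +infinity for soundness.
    return math.nextafter(a + b, math.inf)

def iadd(x, y):
    # Sound interval addition: [x] + [y] with outward rounding.
    return (add_down(x[0], y[0]), add_up(x[1], y[1]))

lo, hi = iadd((0.1, 0.1), (0.2, 0.2))
print(lo < 0.1 + 0.2 < hi)  # True: the float sum lies strictly inside
```

Languages with access to IEEE 754 rounding modes (e.g., via C's fesetround) can round exactly instead of stepping a full ulp, yielding tighter sound bounds.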
In contrast, it is difficult to compute exactly the volume of a box using floating-point arithmetic, which is a necessary operation to compute coverage (Eqs. (4) and (8)). Fortunately, our algorithm does not require exact coverage information to be correct; thus, we settle for an approximate value computed naively using Eq. (8) with rounding errors. However, it is critical that we use the exact formula Eq. (9), which uses exact inclusion checking, to determine whether a box is benign; this is almost equivalent to checking whether the coverage of a box equals 1, except that the coverage calculation is subject to floating-point roundoff error, so it is not safe to use here. This approach also requires changing the test in Fig. 8 that compares the coverage to 1 into a test that all the boxes in \(\mathfrak {S}^\sharp \) are benign, using Eq. (9).
5 Extensions
5.1 Octagonal Invariants
Following the custom from constraint solvers for continuous constraints, the algorithm presented in Sect. 4 is based on boxes: the loop body \(F^\sharp \) is evaluated in the interval abstract domain. However, we showed in [24] that, in constraint solvers, boxes can be replaced with arbitrary numeric abstract domains, for instance relational domains such as polyhedra or octagons, provided that a few additional operators are available. Similarly, the algorithm from the present paper can be generalized to use an arbitrary abstract domain \(\mathcal {D}^\sharp \). Existing domains readily provide abstractions \(F^\sharp \) of the loop body, together with the abstractions of \(\cup \), \(\cap \), and \(\subseteq \) required for tightening as well as testing whether an abstract element is necessary, benign, or useful. We only need three extra operations not found in classical abstract domains for our algorithm: the volume Open image in new window (used to compute the coverage), the size operation, and a split operation. As discussed in Sect. 4.3, it is not necessary for Open image in new window to compute the exact volume, as long as it is possible to determine exactly whether an abstract element is benign, which is the case using Eq. (9). Likewise, we have a great deal of freedom in defining the size and split operations. The only requirement, needed to ensure termination of the algorithm, is that a sufficient number of splits eventually reduces any abstract element below any given size.
Example. Consider the octagon domain [20], which is more expressive than intervals because it can represent relational information of the form \(\pm X\pm Y\le c\), with two variables and unit coefficients. We choose to approximate the volume of an octagon as the volume of its bounding box, which is far easier to compute than the actual volume of the octagon. Likewise, to create a simple and efficient implementation, we use the same split operation as for boxes, splitting only along an axis, and use the “size” of the bounding box as the “size” of an octagon. As a consequence, the elements of the partition \(\mathfrak {P}^\sharp \) remain boxes. Nevertheless, the elements of \(\mathfrak {S}^\sharp \) can be more complex octagons, due to tightening. We refer the reader to [24] for more advanced size and split operators for octagons.
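The bounding-box volume approximation can be sketched as follows for a two-variable octagon given by its eight upper bounds. The dictionary encoding is hypothetical and far simpler than the DBM representation used in actual octagon implementations; only the standard tightening of unary bounds by relational ones is shown.

```python
# An octagon over x, y given by eight upper bounds:
#   x <= ub["x"],   -x <= ub["-x"],   y <= ub["y"],   -y <= ub["-y"],
#   x+y <= ub["x+y"], x-y <= ub["x-y"], -x+y <= ub["-x+y"], -x-y <= ub["-x-y"]
def bounding_box(ub):
    # Tighten each unary bound with the relational ones (standard closure):
    # from x+y <= e and x-y <= f we get 2x <= e+f, hence x <= (e+f)/2, etc.
    x_hi = min(ub["x"], (ub["x+y"] + ub["x-y"]) / 2)
    x_lo = -min(ub["-x"], (ub["-x+y"] + ub["-x-y"]) / 2)
    y_hi = min(ub["y"], (ub["x+y"] + ub["-x+y"]) / 2)
    y_lo = -min(ub["-y"], (ub["x-y"] + ub["-x-y"]) / 2)
    return ((x_lo, x_hi), (y_lo, y_hi))

def box_volume(box):
    (xl, xh), (yl, yh) = box
    return max(xh - xl, 0.0) * max(yh - yl, 0.0)

# A diamond |x| + |y| <= 2 inside the square [-2,2]^2: true area 8,
# bounding-box volume 16 (a sound over-approximation).
oct8 = {"x": 2, "-x": 2, "y": 2, "-y": 2,
        "x+y": 2, "x-y": 2, "-x+y": 2, "-x-y": 2}
print(box_volume(bounding_box(oct8)))  # 16.0
```

The over-approximation is harmless for the algorithm because, as noted in Sect. 4.3, coverage need not be exact; only benignness checks must be.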
Figure 10 compares the result of our algorithm on the program of Fig. 2 using intervals and using octagons. Inference with intervals takes 965 iterations and 76 ms and produces an inductive invariant composed of 181 parts, while, using octagons, it takes 224 iterations and 89 ms, and the inductive invariant is composed of only 42 parts. As can be seen in Fig. 10, the slant of octagons allows a much tighter fit to an ellipsoid, with far fewer elements. Octagons are also slightly slower: they require far fewer iterations and elements, but the manipulation of a single octagon is more costly than that of a single box.
5.2 Invariant Refinement
The algorithm assumes that a candidate invariant I is provided, and by removing points only looks for an inductive invariant sufficient to prove that I holds. It overapproximates a least fixpoint from above and, as a consequence, the inductive invariant it finds may not be the smallest one. In fact, because it stops as soon as an inductive invariant is found, it is more likely to output one of the greatest inductive invariants contained in I. In contrast, the iteration-with-widening technique used in abstract interpretation overapproximates the least fixpoint, but approaches it from below. Although such iterations may also fail to find the least fixpoint, even when decreasing iterations with narrowing are used,^{3} they are more likely to find a small inductive invariant, because they start small and increase gradually.
We propose here several methods to improve the result of our algorithm towards a better inductive invariant. They also mitigate the need for the user to provide a candidate invariant I, which was a drawback of our method compared to the iteration-with-widening method approaching from below. In fact, we can start the algorithm with a large box I (such as the whole range of a data type) and rely on the methods given below to discover an inductive invariant whose bounding box is much smaller than I, effectively inferring an interval invariant, alongside a more complex inductive invariant implying that invariant.
Global Tightening. The base algorithm of Fig. 8 only applies tightening (Eq. (5)) to the two boxes created after each split. This approach is not sufficient to ensure that all the boxes in \(\mathfrak {S}^\sharp \) are as tight as possible. Indeed, shrinking or removing a box \(S^\sharp \) also causes \(\gamma F^\sharp (\mathfrak {S}^\sharp )\) to shrink. Hence, boxes in \(\mathfrak {S}^\sharp \) that contained parts of \(\gamma F^\sharp (\mathfrak {S}^\sharp )\) that have been removed may be tightened some more, enabling further tightening in a cascading effect. Thus, to improve our result, we may iteratively perform tightening of all the boxes in \(\mathfrak {S}^\sharp \) until a fixpoint is reached. Because this operation may be costly, it is better to apply it rarely, such as after finding a first inductive invariant.
Reachability Checking. The concrete semantics has the form \(\mathrm{lfp}\,F\), where \(F(X) = E \cup f(X)\), \(E\) is the entry state, and \(f\) is the loop body. This fixpoint can also be seen as the limit \(\bigcup _{i \in \mathbb {N}} f^i(E)\), i.e., the points reachable from the entry state after a sequence of loop iterations. To improve our result, we can thus remove parts of \(\gamma (\mathfrak {S}^\sharp )\) that are definitely not reachable from the entry state. This test can be performed in the abstract domain by computing the transitive closure of the box-level successor relation induced by \(F^\sharp \) (Eq. (7)), starting from all the necessary boxes (i.e., those intersecting \(E^\sharp \)). This operation may also be costly and is better performed after an inductive invariant is found.
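This check can be sketched as a graph search; the following Python fragment (again with 1-D boxes as (lo, hi) pairs and our own helper names, not the paper's implementation) keeps only the boxes transitively reachable, under the abstract iteration, from those intersecting the entry state.

```python
# Sketch: drop boxes not reachable from the entry state via F#
# (hypothetical 1-D boxes; f_sharp abstracts one loop iteration).
def intersects(a, b):
    return max(a[0], b[0]) <= min(a[1], b[1])

def reachable_boxes(boxes, entry, f_sharp):
    """BFS over the box-level successor relation: T# succeeds S# when
    F#(S#) intersects T#.  Keep only boxes reachable from the necessary
    boxes, i.e., those intersecting E#."""
    frontier = [b for b in boxes if intersects(b, entry)]
    seen = set(frontier)
    while frontier:
        s = frontier.pop()
        image = f_sharp(s)
        for t in boxes:
            if t not in seen and intersects(image, t):
                seen.add(t)
                frontier.append(t)
    return [b for b in boxes if b in seen]

# Demo: with F#(S#) = S#/2, the box [2, 3] can never be reached from
# E# = [0, 0.1], so it is discarded.
kept = reachable_boxes([(0.0, 1.0), (2.0, 3.0)], (0.0, 0.1),
                       lambda b: (b[0] / 2, b[1] / 2))
```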
Shell Peeling. Like iteration with narrowing, the methods given above are only effective at improving our abstraction of a given fixpoint, but are unable to reach an abstraction of a smaller fixpoint. The shell-peeling method builds on the reachability principle to discard boxes more aggressively, based on their likelihood of belonging to the most-precise inductive invariant. Intuitively, as imprecision accumulates by repeated applications of \(F^\sharp \) from necessary boxes, boxes reachable in many steps are likely to contain the most spurious points. For each box \(S^\sharp \in \mathfrak {S}^\sharp \), we compute its depth, i.e., the minimal value \(i\) such that \(S^\sharp \) intersects \((F^\sharp )^i(T^\sharp )\) for some \(T^\sharp \in \mathfrak {S}^\sharp \) that intersects \(E^\sharp \). Given the maximal depth \(\varDelta = \max \{\mathrm {depth}(S^\sharp ) \mid S^\sharp \in \mathfrak {S}^\sharp \}\), we merely remove all the boxes with depth greater than \(\varDelta - \delta \), for some user-specified \(\delta \), i.e., the boxes farthest away from the entry state in terms of applications of \(F^\sharp \). Because the resulting \(\mathfrak {S}^\sharp \) is not likely to be inductive, we again run the algorithm of Fig. 8 to try to recover an inductive invariant.
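The depth computation is a breadth-first search over the same box-level successor relation. As an illustrative sketch (our own 1-D box encoding and helper names, not the paper's code), the following removes the deepest shells:

```python
# Sketch of shell peeling: compute each box's depth (BFS distance from
# the boxes intersecting E#) and drop boxes deeper than max_depth - delta.
from collections import deque

def intersects(a, b):
    return max(a[0], b[0]) <= min(a[1], b[1])

def peel(boxes, entry, f_sharp, delta):
    depth = {b: 0 for b in boxes if intersects(b, entry)}
    queue = deque(depth)
    while queue:
        s = queue.popleft()
        image = f_sharp(s)
        for t in boxes:
            if t not in depth and intersects(image, t):
                depth[t] = depth[s] + 1
                queue.append(t)
    max_depth = max(depth.values())
    # Keep only boxes whose depth is at most (max depth) - delta.
    return [b for b in boxes
            if b in depth and depth[b] <= max_depth - delta]

# Demo: a chain of boxes, each mapping into the next; with delta = 1 the
# farthest box (depth 3) is peeled off.
chain = [(0.0, 1.0), (2.0, 3.0), (4.0, 5.0), (6.0, 7.0)]
kept = peel(chain, (0.0, 0.5), lambda b: (b[0] + 2, b[1] + 2), delta=1)
```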
Resplitting. As explained in Sect. 4.1, splitting boxes in \(\mathfrak {S}^\sharp \) may, by itself, improve the precision because \(F^\sharp \) is generally only a sub-join morphism, and so, even though splitting leaves \(\gamma (\mathfrak {S}^\sharp )\) unchanged, \(\gamma F^\sharp (\mathfrak {S}^\sharp )\) will decrease. As a consequence, some parts of \(\mathfrak {S}^\sharp \) that were reachable from \(E^\sharp \) through applications of \(F^\sharp \) may become unreachable. Hence, we suggest splitting boxes in \(\mathfrak {S}^\sharp \) proactively, and prioritize the boxes \(S^\sharp \) that are likely to reduce their fan-out (i.e., the number of boxes intersecting \(F^\sharp (S^\sharp )\)), before using other techniques such as global tightening or reachability checking. More precisely, we split all the boxes whose fan-out exceeds a user-defined threshold \(R\).
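A hypothetical sketch of this heuristic, with n-dimensional boxes as tuples of (lo, hi) pairs (names and encoding are ours): boxes whose image intersects more than R other boxes are split in half along their widest dimension.

```python
# Sketch: proactively split boxes whose image spreads over many boxes.
def intersects(a, b):
    return all(max(lo1, lo2) <= min(hi1, hi2)
               for (lo1, hi1), (lo2, hi2) in zip(a, b))

def split_widest(box):
    """Split a box in half along its widest dimension."""
    i = max(range(len(box)), key=lambda d: box[d][1] - box[d][0])
    lo, hi = box[i]
    mid = (lo + hi) / 2
    left = box[:i] + ((lo, mid),) + box[i + 1:]
    right = box[:i] + ((mid, hi),) + box[i + 1:]
    return left, right

def resplit(boxes, f_sharp, R):
    out = []
    for s in boxes:
        image = f_sharp(s)
        fanout = sum(1 for t in boxes if intersects(image, t))
        out.extend(split_widest(s) if fanout > R else (s,))
    return out

# Demo: a 2-D box is split along its widest (second) dimension.
left, right = split_widest(((0.0, 1.0), (0.0, 4.0)))
```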
Application. We apply all four methods on the example of Fig. 2 as follows. First, we apply the algorithm of Fig. 8, followed by global tightening and reachability checking. Then, we perform several rounds of the following steps: resplitting, shell peeling, applying the algorithm of Fig. 8, global tightening, followed by reachability checking. The result of this process is shown in Fig. 11. We set \(\delta =1\) for shell peeling, and \(R=12\) as the resplitting threshold. Moreover, each time the algorithm from Fig. 8 is run again, the \(\epsilon \) values are halved, to account for the general reduction of box sizes due to resplitting and to allow finer-granularity box covering. Each refinement round results in a much tighter inductive invariant. We see a large gain in precision at the cost of higher computation time and a finer invariant decomposition (i.e., more and smaller boxes). Almost all the work goes into the four refinement methods, with just a few box-level iterations to reestablish inductiveness.
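The round structure just described can be summarized as a driver loop. In the sketch below, every analysis step is a stand-in stub (the real steps are the algorithms of this section, and the \(\epsilon\) parameters are actually a family of values, not one scalar); only the scheduling and the halving of \(\epsilon\) per round are concrete.

```python
# Schematic driver for the refinement rounds described above; all analysis
# steps are caller-supplied and stubbed here for illustration.
def run_rounds(state, eps, rounds,
               find_inductive, tighten, check_reach, resplit, peel):
    # Initial phase: Fig. 8 algorithm, then global tightening, then
    # reachability checking.
    state = check_reach(tighten(find_inductive(state, eps)))
    for _ in range(rounds):
        state = peel(resplit(state))
        eps = eps / 2  # halve epsilon to allow finer-granularity covering
        state = check_reach(tighten(find_inductive(state, eps)))
    return state, eps

# With identity stubs, three rounds take epsilon from 0.05 down to 0.00625.
identity = lambda s, *rest: s
_, eps_final = run_rounds([], 0.05, 3, identity, identity, identity,
                          identity, identity)
```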
5.3 Relative Completeness
Our algorithm searches for loop invariants in a particular abstract domain, namely, sets of boxes with size between \(\epsilon _s/2\) and \(\epsilon _s\). As noted in Sect. 4.2, however, the algorithm is not guaranteed to find an inductive invariant even if one exists in this abstract domain. In this section, we define a different abstract domain and present a modified version of the algorithm that has the following relative-completeness property [17]: if there is an inductive invariant in the abstract domain, and its inductiveness can be proven using boxes, then the modified algorithm will find an inductive invariant. Thus, when the modified algorithm fails to find an inductive invariant, it provides a stronger guarantee than the original algorithm, because a failure of the modified algorithm can only occur when there exists no inductive invariant in the abstract domain. The modified algorithm is slower than the original, however, so we have implemented it as a variant that can be used when completeness is more important than speed.
During a run of the algorithm, if no boxes are ever discarded or tightened, the algorithm’s process of box-splitting, beginning with some (abstract) candidate invariant \(I^\sharp \), will eventually produce a regular grid of boxes of minimal size—i.e., having size between \(\epsilon _s/2\) and \(\epsilon _s\). Denote by Q the set of all boxes that can be produced in such a run. In the two-dimensional case, the boxes of Q may be conceptualized as a binary space partition; it is almost like a quadtree, except that boxes are actually split along one dimension at a time, so each node in the tree has two children, not four. The width of the minimal-size boxes of Q along dimension i is \(\max \{2^{-k}w_i \mid k \in \mathbb {N},\ 2^{-k}w_i < \epsilon _s\}\), where \(w_i\) is the width of the given invariant \(I^\sharp \) along dimension i. However, Q also contains the larger boxes (unions of the minimal-size ones) that were produced at intermediate stages of the splitting process. We define the abstract domain \(C_Q\) for the modified version of the algorithm to be the set of all nonoverlapping sets of boxes from Q.
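The width formula amounts to halving the initial width \(w_i\) until it first drops below \(\epsilon_s\). A small Python illustration (function name is ours):

```python
# The width along dimension i of the minimal-size boxes of Q: the first
# repeated halving of the initial width w_i that falls below eps_s.
def minimal_width(w_i, eps_s):
    k = 0
    while w_i / 2 ** k >= eps_s:
        k += 1
    return w_i / 2 ** k

# E.g. with w_i = 1.0 and eps_s = 0.05: 1/16 = 0.0625 is still too wide,
# so halving stops at 1/32 = 0.03125.
width = minimal_width(1.0, 0.05)
```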
There are two reasons why the original algorithm, presented in Fig. 8, lacks the relativecompleteness property. First, the algorithm discards a box \(S^\sharp \) if its coverage is less than \(\epsilon _c\); however, it is possible that \(S^\sharp \) intersects every inductive invariant, in which case discarding \(S^\sharp \) guarantees that a failure will occur. Second, the tightening operation can replace a box \(S^\sharp \) that is part of Q with a box \(T^\sharp \) that is not part of Q. In many cases an inductive invariant can still be found; the invariant can contain boxes that are not part of Q, and the vertices of boxes in such an invariant might not correspond to the vertices of any box from Q. However, it is also possible that \(T^\sharp \), along with the other boxes of \(\mathfrak {S}^\sharp \), cannot be refined into an inductive invariant.
Two simple modifications, which address the above two problems, allow the algorithm to have the relative-completeness property. First, we set \(\epsilon _c=0\), and second, we never perform tightening. For each of these two modifications, we have constructed an input program for which the modified algorithm finds an invariant, but the original algorithm does not. Note that there also exist programs for which the original algorithm can find an inductive invariant, but the modified algorithm cannot, because there is no inductive invariant in the modified abstract domain \(C_Q\). As an optimization to the modified algorithm, which performs no tightening, we could reintroduce the tightening operation in a modified form: instead of tightening down to an arbitrary smaller box \(T^\sharp \), we could tighten down to the smallest box in Q that contains \(T^\sharp \). We present a proof of the relative-completeness property of the modified algorithm in [21].
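Finding the smallest box of Q containing a given \(T^\sharp\) amounts to descending the binary-splitting tree rooted at \(I^\sharp\) while some half still contains \(T^\sharp\). A hypothetical 1-D sketch (our own encoding, assuming midpoint splits as in the grid of Q described above):

```python
# Sketch of the modified tightening: instead of an arbitrary smaller box,
# snap to the smallest box of Q containing T# (1-D for brevity; Q is the
# binary-splitting tree rooted at the initial interval).
def smallest_enclosing_in_q(root, t, eps_s):
    lo, hi = root
    while hi - lo >= eps_s:          # minimal-size boxes are never split
        mid = (lo + hi) / 2
        if t[1] <= mid:              # T# fits in the left half
            hi = mid
        elif t[0] >= mid:            # T# fits in the right half
            lo = mid
        else:                        # T# straddles the split point: stop
            break
    return (lo, hi)

# Demo: T# = [0.1, 0.2] inside the root [0, 1] snaps to the grid
# box [0, 0.25], the smallest tree node that still contains it.
box = smallest_enclosing_in_q((0.0, 1.0), (0.1, 0.2), eps_s=0.05)
```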
6 Experiments
We have implemented a prototype analyzer, written in OCaml. It supports a simple toy language similar to Fig. 4, with only real-valued variables, no functions, and no nested loops. It implements our main algorithm (Fig. 8) with arbitrary numeric domains (Sect. 5.1), as well as the invariant-refinement and failure-recovery methods (Sect. 5.2). It is parameterized by a numeric abstract domain, and currently supports boxes with rational bounds, boxes with float bounds, as well as octagons with float bounds (through the Apron library [18]). It exploits the data structures described in Sect. 4.3; although the implementation uses floating-point arithmetic, it remains sound despite round-off errors.
The results of our experiments are shown in Fig. 13. We show, for the floating-point interval and octagon domains, the number of iterations and time until a first inductive invariant is found, and the number of elements in each result.^{4} We do not give details about the memory consumption for each example, because it always remains low (a few hundred megabytes). For most examples, we used an \(\epsilon _s\) value of 0.05 or 0.01. We have analyzed a few simple loops, listed above the double line in Fig. 13:
– Filter is the second-order filter from Fig. 2. As discussed earlier, it performs well with intervals and octagons.
– Linear iterates \(t=t+1; \tau =\tau +[0.25,0.5]\) from \(t=\tau =0\) while \(t<30\). To prove that \(\tau <30\), it is necessary to find a relation \(\tau \le 0.5\times t\), which cannot be represented by intervals or octagons (without disjunctive completion). Nonlinear is similar, but iterates \(\tau =[1,1.1]\times \tau \) starting from \(\tau \in [0,1]\), and also requires a nonlinear inductive invariant: \(\tau \le 1.1^t\). These examples require very few iterations of our main algorithm, but rely heavily on resplitting, and hence output a large set of elements and have a relatively high runtime, on the order of a few seconds. When using octagons, the method times out after 300 s.
– Logistic map iterates the logistic function \(x_{n+1}=r\times x_n\times (1-x_n)\), starting from \(x\in [0.1,0.9]\), for \(r\in [1.5,3.568]\). Our goal is to prove that x remains within [0.1, 0.9] at all times. The choice of \(r\in [1.5,3.568]\) corresponds to various cases where the series oscillates between one or several nonzero values; it is beyond the zone where it converges towards 0 (\(r\le 1\)), but before it becomes chaotic (\(r\ge 3.57\)). The inductive invariant covers the whole space \([0.1,0.9]\times [1.5,3.568]\) (hence, the number of elements equals the number of iterations), but needs to be partitioned so that the desired invariant can be proved in our linear abstract domains. Our algorithm performs this partitioning automatically.
– Newton iterates Newton’s method on a polynomial taken from [11].
– The next two examples come from the conflict-driven learning method proposed in [11]: Sine and Square root compute mathematical functions through Taylor expansions. Unlike our previous examples, they do not perform an iterative computation, only a straight-line computation; nevertheless, they require partitioning of the input space to prove that the target invariant holds after the straight-line computation. These examples demonstrate that the algorithm is able to compute a strongest postcondition of a straight-line computation, which helps to show how the algorithm could be generalized to handle programs with a mix of straight-line and looping code. In both cases, the inductive invariant matches closely the graph of the function (up to method error and rounding error), and hence provides a proof of functional correctness. As in the Linear and Nonlinear examples, these examples rely heavily on resplitting and tightening.
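The precision gain from input partitioning on a straight-line computation can be illustrated with a toy example (ours, not the paper's Sine benchmark): interval evaluation of the cubic Taylor prefix of sine, \(f(x) = x - x^3/6\), is crude on [0, 1] but much tighter when the input is split and the images are joined.

```python
# Illustration: partitioning the input tightens the interval
# postcondition of a straight-line computation, here f(x) = x - x^3/6.
def f_interval(lo, hi):
    # Interval evaluation of x - x^3/6 for 0 <= lo <= hi.
    return (lo - hi ** 3 / 6, hi - lo ** 3 / 6)

def f_partitioned(lo, hi, n):
    # Evaluate on n equal sub-intervals and take the hull of the images.
    cuts = [lo + (hi - lo) * i / n for i in range(n + 1)]
    images = [f_interval(a, b) for a, b in zip(cuts, cuts[1:])]
    return (min(i[0] for i in images), max(i[1] for i in images))

whole = f_interval(0.0, 1.0)        # crude: roughly [-1/6, 1]
split = f_partitioned(0.0, 1.0, 8)  # noticeably tighter
```

Resplitting plays the same role in the analysis: finer boxes make the nonlinear abstract transformer lose less precision.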
1. Can our algorithm find invariants for programs studied in earlier work?
2. How fast is our algorithm when analyzing such programs?
3. Are we able to verify some invariants found by other algorithms?
To answer these questions, we selected three programs, in three variations each, from [26], and two programs from [1]. For each of the programs from [26], the parameter \(I^\sharp \) for our algorithm (i.e., the invariant that we attempted to verify) was the bounding box of an invariant described in either the text of the publication or its additional online materials. For the programs from [1], we chose small boxes for \(I^\sharp \). We analyzed the Arrow–Hurwicz loop as a two-variable program by discarding the loop condition and the two variables u and v that are not needed in the body of the loop; the algorithm was able to verify that the variables x and y remain within the box \([-2,2]\times [-2,2]\). For Filter2, we showed that x and y remain within \([0.2,1]\times [0.2,1]\).
Our results show that we were able to find inductive invariants for all of these programs, often in less than half a second. In the case of the dampened oscillator, we were not able to verify the bounds described in the text of [26], but we were able to verify a different bounding box described in the additional online materials.
7 Related Work
A recent line of work has identified conceptual connections between SAT-solving and abstract interpretation [10, 31], and has exploited these ideas to improve SAT-solving algorithms. That idea is similar in spirit to work described in [24], except that the latter focuses on other classes of algorithms—those used in continuous constraint programming—that work on geometric entities, such as boxes, instead of formulas.
As mentioned in Sect. 2, even when an inductive invariant can be represented in the chosen domain, approximation due to widening can prevent an abstract interpreter from finding any inductive invariant—let alone the best one. Some recent work has been devoted to improved algorithms for computing fixpoints in abstract domains. In particular, methods based on policy iteration can guarantee, in certain cases, that the exact least fixpoint is computed [1]. The version of our algorithm presented in Sect. 5.3 uses a modified abstract domain and has the following relative-completeness property [17]: if there is an inductive invariant in the modified abstract domain, and its inductiveness can be proven using boxes, then the modified algorithm will find an inductive invariant.
Garg et al. [13] developed a framework for loop-invariant synthesis called ICE. ICE has a property that they call robustness, which is similar to relative completeness. They observed that many earlier algorithms consist of a teacher and a learner that communicate using a restricted protocol, and that this protocol sometimes prevents the algorithms from finding an invariant. In the restricted protocol, the teacher only communicates positive and negative examples of states, and the learner tries to find an invariant consistent with these. The authors propose an alternative protocol that also allows the teacher to communicate implication pairs \((p,p^\prime )\), to convey that state p can transition to state \(p^\prime \); this generalization allows the algorithm to postpone the decision of whether to include or exclude p and \(p^\prime \) from the invariant, instead of arbitrarily—and perhaps incorrectly—choosing to classify p and \(p^\prime \) as examples or counterexamples.
In our algorithm, the analogue of ICE’s implication pairs, counterexamples, and examples is the set of boxes \(\mathfrak {S}\) and their images. Where ICE would store a concrete implication pair to represent the fact that state p leads to state \(p^\prime \), our algorithm would store a box containing state p along with the image of that box. As with ICE, this stored information can later lead to the box (and its image) being included in the invariant, or it can lead to the box being excluded if the image is excluded (or if part of the image is excluded and the box is too small to split). It can also happen that a box is split—an operation that has no analogue in ICE. The relativecompleteness of our algorithm depends on using this information to delete a box only when necessary. Our algorithm also differs in the sense that it considers a set of boxes \(\mathfrak {S}\) whose concretization only grows smaller over time; it works by iteratively strengthening an invariant until an inductive invariant is found. In contrast, an ICElearner might successively consider larger or smaller candidate invariants.
Bradley presents an algorithm called IC3 [5], which synthesizes loop invariants in a property-directed manner by incrementally refining a sequence of formulas that overapproximate the program states reachable after different numbers of iterations of the loop. An IC3-like approach could be applied to the synthesis of loop invariants for the kind of programs that we investigated. However, the performance of this approach would depend on the precision and efficiency of the underlying SMT solver for (sometimes nonlinear) floating-point arithmetic queries. Our algorithm has a performance advantage because it can test whether one area of the program’s state space maps into another by using abstract interpretation, instead of calling an SMT solver.
To exclude unreachable states efficiently, an IC3 implementation needs to generalize from concrete counterexamples to some larger set of states; otherwise, it might only exclude one concrete state at a time. The generalization method determines the abstract domain of invariants that is being searched. Our “quadtree” abstract domain \(C_Q\) (see Sect. 5.3) could be used to generalize each counterexample p to the largest box from Q that (i) contains p and (ii) can be soundly excluded from the invariant.
Two recent works [14, 30] search for inductive invariants using abstract interpretation, and use SMT solvers to help ease the burden of designing sound abstract transformers. Both works need to represent the program’s transition relation in a logic supported by the underlying SMT solver. Both also construct their inductive invariants by iteratively weakening a formula until it is inductive. In contrast, the algorithm described in this paper constructs its inductive invariants by iterative strengthening.
The algorithm presented in this paper can be compared to the large literature on abstraction refinement [9, 27]. It is similar to that work, in that splitting a box can be likened to the introduction of a new predicate that distinguishes the cases where a program variable is above or below some value. Note, however, that the algorithm described in this paper works in one continuous process; the analysis does not have to be restarted when a new split is introduced. Note also that the splitting of boxes has a useful locality property: splitting one box at some value does not automatically split other boxes at the same value; thus, the new predicates are introduced locally, and on demand. The algorithm in this paper differs from counterexample-guided abstraction refinement [6] in that it makes no direct use of concrete counterexamples.
Constraint programming is a popular paradigm, with numerous applications (scheduling, packing, layout design, etc.). It has also been applied to program analysis, and in particular invariant inference, where off-the-shelf linear and nonlinear solvers [7, 28], SAT solvers [32], and SMT solvers [16] have been employed. In those works, a preprocessing, or abstraction, step is necessary to transform the problem of inferring an inductive invariant, which is inherently second-order, into a first-order formula.
Our method combines abstract interpretation and constraint solving, a connection that has also been made by others. Apt observed that applying propagators can be seen as an iterative fixpoint computation [2]. Pelleau et al. expanded on this connection to describe a constraint-solving algorithm in terms of abstract interpretation; they exploited the connection to design a new class of solvers parameterized by abstract domains [24], thereby importing abstract-interpretation results and know-how into constraint programming. This paper goes the other way, and imports technology from constraint programming into abstract interpretation to help solve program-analysis problems. Several previous works, e.g., [25], combine abstract interpretation and constraint-based techniques, but generally delegate the problem of inductive-invariant inference to classic abstract-interpretation methods. Our work differs in that we devised a new algorithm for inferring inductive invariants, not a new use of an existing algorithm.
8 Conclusion
In this paper, we were inspired by one class of constraint-solving algorithms, namely, continuous constraint solving, and adapted the methods employed for such problems to devise a new algorithm for inferring inductive invariants. Instead of classical increasing iterations, as ordinarily used in abstract interpretation, our algorithm employs refinement (decreasing) iterations; it tightens and splits a collection of abstract elements until an inductive invariant is found. The algorithm is parameterized on an arbitrary numeric abstract domain, in which the semantics of the loop body is evaluated. The method is sound and can be implemented efficiently using floating-point arithmetic. We have shown the effectiveness of our method on small but intricate loop analyses. In particular, loops that do not have any linear inductive invariant, and thus would traditionally require the design of a novel, specialized abstract domain, have been successfully analyzed by our method, employing the interval domain only.
Footnotes
1. If \(F^\sharp (S^\sharp )\) is empty, then \(S^\sharp \) satisfies our definition of benign and our definition of doomed; in this case we consider \(S^\sharp \) benign, not doomed.
2. See the note about monotonicity on p. 16.
3. This situation happens when the widening overshoots and settles above a fixpoint that is not the least one, because decreasing iterations can only refine up to the immediately smaller fixpoint, and not skip below a fixpoint.
4. We include failure-recovery rounds from Sect. 5.2 when needed to find the first invariant, but do not perform any refinement rounds after an invariant is found.
References
1. Adjé, A., Gaubert, S., Goubault, E.: Coupling policy iteration with semidefinite relaxation to compute accurate numerical invariants in static analysis. In: Gordon, A.D. (ed.) ESOP 2010. LNCS, vol. 6012, pp. 23–42. Springer, Heidelberg (2010)
2. Apt, K.R.: The essence of constraint propagation. Theor. Comput. Sci. 221(1–2), 179–210 (1999)
3. Benhamou, F., Goualard, F., Granvilliers, L., Puget, J.-F.: Revisiting hull and box consistency. In: ICLP 1999, pp. 230–244 (1999)
4. Bertrane, J., Cousot, P., Cousot, R., Feret, J., Mauborgne, L., Miné, A., Rival, X.: Static analysis and verification of aerospace software by abstract interpretation. In: AIAA Infotech@Aerospace, number –3385 in AIAA, pp. 1–38. AIAA (2010)
5. Bradley, A.R.: SAT-based model checking without unrolling. In: Jhala, R., Schmidt, D. (eds.) VMCAI 2011. LNCS, vol. 6538, pp. 70–87. Springer, Heidelberg (2011)
6. Clarke, E.M., Grumberg, O., Jha, S., Lu, Y., Veith, H.: Counterexample-guided abstraction refinement. In: Emerson, E.A., Sistla, A.P. (eds.) CAV 2000. LNCS, vol. 1855, pp. 154–169. Springer, Heidelberg (2000)
7. Colón, M.A., Sankaranarayanan, S., Sipma, H.B.: Linear invariant generation using non-linear constraint solving. In: Hunt Jr., W.A., Somenzi, F. (eds.) CAV 2003. LNCS, vol. 2725, pp. 420–432. Springer, Heidelberg (2003)
8. Cousot, P., Cousot, R.: Abstract interpretation: a unified lattice model for static analysis of programs by construction or approximation of fixpoints. In: POPL 1977, pp. 238–252. ACM, January 1977
9. Das, S., Dill, D.L., Park, S.: Experience with predicate abstraction. In: Halbwachs, N., Peled, D.A. (eds.) CAV 1999. LNCS, vol. 1633, pp. 160–171. Springer, Heidelberg (1999)
10. D’Silva, V., Haller, L., Kroening, D.: Satisfiability solvers are static analysers. In: Miné, A., Schmidt, D. (eds.) SAS 2012. LNCS, vol. 7460, pp. 317–333. Springer, Heidelberg (2012)
11. D’Silva, V., Haller, L., Kroening, D., Tautschnig, M.: Numeric bounds analysis with conflict-driven learning. In: Flanagan, C., König, B. (eds.) TACAS 2012. LNCS, vol. 7214, pp. 48–63. Springer, Heidelberg (2012)
12. Feret, J.: Static analysis of digital filters. In: Schmidt, D. (ed.) ESOP 2004. LNCS, vol. 2986, pp. 33–48. Springer, Heidelberg (2004)
13. Garg, P., Löding, C., Madhusudan, P., Neider, D.: ICE: a robust framework for learning invariants. In: Biere, A., Bloem, R. (eds.) CAV 2014. LNCS, vol. 8559, pp. 69–87. Springer, Heidelberg (2014)
14. Garoche, P.-L., Kahsai, T., Tinelli, C.: Incremental invariant generation using logic-based automatic abstract transformers. In: Brat, G., Rungta, N., Venet, A. (eds.) NFM 2013. LNCS, vol. 7871, pp. 139–154. Springer, Heidelberg (2013)
15. Granger, P.: Improving the results of static analyses of programs by local decreasing iterations. In: Shyamasundar, R.K. (ed.) FSTTCS 1992. LNCS, vol. 652, pp. 68–79. Springer, Heidelberg (1992)
16. Gulwani, S., Srivastava, S., Venkatesan, R.: Program analysis as constraint solving. In: PLDI 2008, pp. 281–292. ACM (2008)
17. Itzhaky, S., Bjørner, N., Reps, T., Sagiv, M., Thakur, A.: Property-directed shape analysis. In: Biere, A., Bloem, R. (eds.) CAV 2014. LNCS, vol. 8559, pp. 35–51. Springer, Heidelberg (2014)
18. Jeannet, B., Miné, A.: Apron: a library of numerical abstract domains for static analysis. In: Bouajjani, A., Maler, O. (eds.) CAV 2009. LNCS, vol. 5643, pp. 661–667. Springer, Heidelberg (2009)
19. Miné, A.: Relational abstract domains for the detection of floating-point run-time errors. In: Schmidt, D. (ed.) ESOP 2004. LNCS, vol. 2986, pp. 3–17. Springer, Heidelberg (2004)
20. Miné, A.: The octagon abstract domain. Higher-Order Symb. Comput. 19(1), 31–100 (2006)
21. Miné, A., Breck, J., Reps, T.: An algorithm inspired by constraint solvers to infer inductive invariants in numeric programs. TR 1829, CS Dept., Univ. of Wisconsin, Madison, WI, January 2016
22. Montanari, U.: Networks of constraints: fundamental properties and applications to picture processing. Inf. Sci. 7(2), 95–132 (1974)
23. Moore, R.E.: Interval Analysis. Prentice Hall, Englewood Cliffs (1966)
24. Pelleau, M., Miné, A., Truchet, C., Benhamou, F.: A constraint solver based on abstract domains. In: Giacobazzi, R., Berdine, J., Mastroeni, I. (eds.) VMCAI 2013. LNCS, vol. 7737, pp. 434–454. Springer, Heidelberg (2013)
25. Ponsini, O., Michel, C., Rueher, M.: Combining constraint programming and abstract interpretation for value analysis of floating-point programs. In: CSTVA 2012, pp. 775–776 (2012)
26. Roux, P., Garoche, P.-L.: Practical policy iterations – a practical use of policy iterations for static analysis: the quadratic case. FMSD 46(2), 163–196 (2015)
27. Saïdi, H., Shankar, N.: Abstract and model check while you prove. In: Halbwachs, N., Peled, D.A. (eds.) CAV 1999. LNCS, vol. 1633, pp. 443–454. Springer, Heidelberg (1999)
28. Sankaranarayanan, S., Sipma, H.B., Manna, Z.: Constraint-based linear-relations analysis. In: Giacobazzi, R. (ed.) SAS 2004. LNCS, vol. 3148, pp. 53–68. Springer, Heidelberg (2004)
29. Tarski, A.: A lattice-theoretical fixpoint theorem and its applications. Pac. J. Math. 5, 285–310 (1955)
30. Thakur, A., Lal, A., Lim, J., Reps, T.: PostHat and all that: automating abstract interpretation. ENTCS 311, 15–32 (2015)
31. Thakur, A., Reps, T.: A generalization of Stålmarck’s method. In: Miné, A., Schmidt, D. (eds.) SAS 2012. LNCS, vol. 7460, pp. 334–351. Springer, Heidelberg (2012)
32. Xie, Y., Aiken, A.: Saturn: a SAT-based tool for bug detection. In: Etessami, K., Rajamani, S.K. (eds.) CAV 2005. LNCS, vol. 3576, pp. 139–143. Springer, Heidelberg (2005)