Invertibility Conditions for Floating-Point Formulas
Abstract
Automated reasoning procedures are essential for a number of applications that involve bit-exact floating-point computations. This paper presents conditions that characterize when a variable in a floating-point constraint has a solution, which we call invertibility conditions. We describe a novel workflow that combines human interaction and a syntax-guided synthesis (SyGuS) solver that was used for discovering these conditions. We verify our conditions for several floating-point formats. One implication of this result is that a fragment of floating-point arithmetic admits compact quantifier elimination. We implement our invertibility conditions in a prototype extension of our solver CVC4, showing their usefulness for solving quantified constraints over floating-points.
1 Introduction
Satisfiability Modulo Theories (SMT) formulas including either the theory of floating-point numbers [12] or universal quantifiers [24, 32] are widely regarded as some of the hardest to solve. Problems that combine universal quantification with floating-point arithmetic are rare in practice; experience to date has suggested that they are hard for solvers, and that would-be users should either give up or develop their own incomplete techniques. However, progress in theory solvers for floating-point arithmetic [11] and the use of expression synthesis for handling universal quantifiers [27, 29] suggest that these problems may not be entirely out of reach after all, which could potentially impact a number of interesting applications.
This paper makes substantial progress towards a scalable approach for solving quantified floating-point constraints directly in an SMT solver. Developing procedures for quantified floating-point formulas requires considerable effort, both foundationally and in practice. We focus primarily on establishing a foundation for lifting to quantified floating-point formulas a procedure for solving quantified bit-vector formulas by Niemetz et al. [26]. That procedure relies on so-called invertibility conditions, intuitively, formulas that state under which conditions an argument of a given operator and predicate in an equation has a solution. Building on this concept and a state-of-the-art expression synthesis engine [29], we generate invertibility conditions for a majority of operators and predicates in the theory of floating-point numbers. In the context of quantifier-free floating-point formulas, floating-point invertibility conditions may enable us to lift the propagation-based local search approach for bit-vectors in [25] to the theory of floating-point numbers.

In Sect. 3, we present invertibility conditions for the majority of operators and predicates in the SMT-LIB standard theory of floating-point numbers.

In Sect. 4, we present a custom methodology based on syntax-guided synthesis and decision tree learning that we developed for the purpose of synthesizing the invertibility conditions presented here.

In Sect. 5, we present a quantifier elimination procedure for a fragment of the theory that is based on invertibility conditions, and give experimental evidence of its potential, based on quantified floating-point problems coming from a verification application.
Approaches for universal quantification have also been developed in modern SMT solvers targeting other background theories, including linear arithmetic [8, 17, 29] and bit-vectors [26, 27, 32]. At a high level, these approaches use model-based refinement loops that lazily add instances of universal quantifiers until they reach a conflict at the quantifier-free level, or otherwise saturate with a model.
2 Preliminaries
We assume the usual notions and terminology of many-sorted first-order logic with equality (denoted by \(\approx \)). Let \(\varSigma \) be a signature consisting of a set \(\varSigma ^s \) of sort symbols and a set \(\varSigma ^f \) of interpreted (and sorted) function symbols. Each function symbol f has a sort \(\tau _1 \times ... \times \tau _n \rightarrow \tau \), with arity \(n \ge 0\) and \(\tau _1, ..., \tau _n, \tau \in \varSigma ^s \). We assume that \(\varSigma \) includes a Boolean sort \(\mathsf {Bool}\) and the Boolean constants \(\top \) (true) and \(\bot \) (false). We further assume the usual definition of well-sorted terms, literals, and (quantified) formulas with variables and symbols from \(\varSigma \), and refer to them as \(\varSigma \)-terms, \(\varSigma \)-atoms, and so on. For a \(\varSigma \)-term or \(\varSigma \)-formula e, we denote the free variables of e (defined as usual) as \({FV}(e)\) and use e[x] to denote that the variable x occurs free in e. We write e[t] for the term or formula obtained from e by replacing each occurrence of x in e by t.
A theory T is a pair \((\varSigma , {I})\), where \(\varSigma \) is a signature and \({I}\) is a non-empty class of \(\varSigma \)-interpretations (the models of T) that is closed under variable reassignment, i.e., every \(\varSigma \)-interpretation that only differs from an \({{\mathcal {I}} \in {I}}\) in how it interprets variables is also in \({I}\). A \(\varSigma \)-formula \(\varphi \) is T-satisfiable (resp. T-unsatisfiable) if it is satisfied by some (resp. no) interpretation in \({I}\); it is T-valid if it is satisfied by all interpretations in \({I}\). We will sometimes omit T when the theory is understood from context.
We briefly recap the terminology and notation of Brain et al. [12], which defines an SMT-LIB theory \(T_{ FP }\) of floating-point numbers based on the IEEE 754-2008 standard [3]. The signature of \(T_{ FP }\) includes a parametric family of sorts \(\mathbb {F}_{\varepsilon ,\sigma } \), where \(\varepsilon \) and \(\sigma \) are integers greater than or equal to 2 giving the number of bits used to store the exponent e and significand s, respectively. Each of these sorts contains five kinds of constants: normal numbers of the form \(1.s*2^e\), subnormal numbers of the form \(0.s*2^{-2^{\sigma - 1} - 1}\), two zeros (\(+0\) and \(-0\)), two infinities (\(+\infty \) and \(-\infty \)) and a single not-a-number (\(\mathsf {NaN} \)). We assume a map \(\mathrm {v} _{\varepsilon , \sigma } \) for each sort, which maps these constants to their value in the set \(\mathbb {R} ^{*} = \mathbb {R} \cup \{+\infty , -\infty , \mathrm {NaN} \} \). The theory also provides a rounding-mode sort \(\mathsf {RM} \), which contains five elements \(\{\mathsf {RNE}, \mathsf {RNA}, \mathsf {RTP}, \mathsf {RTN}, \mathsf {RTZ} \} \).
Table 1 lists all considered operators and predicate symbols of theory \(T_{ FP }\). The theory contains a full set of arithmetic operations \(\{|\ldots|, +, -, \cdot , \div , \sqrt{\;}, \mathsf {max},\mathsf {min} \} \) as well as \(\mathsf {rem}\) (remainder), \(\mathsf {rti}\) (round to integral) and \(\mathsf {fma}\) (combined multiply and add with just one rounding). The precise semantics of these operators is given in [12] and follows the same general pattern: \(\mathrm {v} _{\varepsilon , \sigma } \) is used to project the arguments to \(\mathbb {R} ^{*} \), the normal arithmetic is performed in \(\mathbb {R} ^{*} \), and then the rounding mode and the result are used to select one of the adjoints of \(\mathrm {v} _{\varepsilon , \sigma } \) to convert the result back to \(\mathbb {F}_{\varepsilon ,\sigma } \). Note that the full theory in [12] includes several additional operators which we omit from discussion here, such as floating-point minimum/maximum, equality with floating-point semantics (fp.eq), and conversions between sorts.
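As a concrete illustration of the five kinds of constants, the sketch below classifies values of Python's built-in float, which is the IEEE 754 binary64 format (\(\mathbb{F}_{11,53}\) in the notation above). The function name and structure are ours, for illustration only:

```python
import math
import sys

def classify(x: float) -> str:
    """Classify a binary64 value into the five kinds of constants:
    zero, subnormal, normal, infinity, NaN."""
    if math.isnan(x):
        return "nan"
    if math.isinf(x):
        return "inf"
    if x == 0.0:               # matches both +0 and -0
        return "zero"
    # sys.float_info.min is the smallest positive *normal* binary64 number;
    # anything nonzero below it in magnitude is subnormal.
    if abs(x) < sys.float_info.min:
        return "subnormal"
    return "normal"

print(classify(1.5))            # normal
print(classify(5e-324))         # subnormal (smallest positive binary64)
print(classify(-0.0))           # zero
print(classify(float("inf")))   # inf
print(classify(float("nan")))   # nan
```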
Theory \(T_{ FP }\) further defines a set of ordering predicates \(\{<, >, \le , \ge \} \) and a set of classification predicates \(\{\mathsf {isNorm}, \mathsf {isSub}, \mathsf {isInf}, \mathsf {isZero}, \mathsf {isNaN}, \mathsf {isNeg}, \mathsf {isPos} \} \). In the following, we denote the rounding mode of an operation above the operator symbol, e.g., \(a \overset{\scriptscriptstyle \mathsf {RTZ} }{+} b\) adds a and b and rounds the result towards zero. We use the infix operator style for \(\mathsf {isInf}\) (\(\ldots \approx \pm \infty \)), \(\mathsf {isZero}\) (\(\ldots \approx \pm 0 \)), and \(\mathsf {isNaN}\) (\(\ldots \approx \mathsf {NaN} \)) for conciseness. We further use \(\text {min}_{\text {n}}\)/\(\text {max}_{\text {n}}\) and \(\text {min}_{\text {s}}\)/\(\text {max}_{\text {s}}\) for floatingpoint constants representing the minimum/maximum normal and subnormal numbers, respectively. We will omit rounding mode and floatingpoint sorts if they are clear from the context.
3 Invertibility Conditions for Floating-Point Formulas
In this section, we adapt the concept of invertibility conditions introduced by Niemetz et al. in [26] to our theory \(T_{ FP }\). Intuitively, an invertibility condition \(\phi _\mathrm {c} \) for a literal l[x] is the exact condition under which l[x] has a solution for x, i.e., \(\phi _\mathrm {c} \) is equivalent to \(\exists x.\,l[x] \) in \(T_{ FP }\).
Definition 1
(Floating-Point Invertibility Condition). Let l[x] be a \(\varSigma _{FP}\)-literal. A quantifier-free \(\varSigma _{FP}\)-formula \(\phi _\mathrm {c} \) is an invertibility condition for x in l[x] if \(x \not \in {FV}(\phi _\mathrm {c}) \) and \(\phi _\mathrm {c} \Leftrightarrow \exists x.\,l[x] \) is \(T_{ FP }\)-valid.
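For intuition, Definition 1 can be checked by exhaustive enumeration over any finite domain. The sketch below is a toy stand-in using integers modulo 8 rather than floating-point values: for the literal \(x \cdot s \approx t\) over \(\mathbb{Z}_8\), the classic (bit-vector-style) invertibility condition is that \(\gcd(s, 8)\) divides t.

```python
from math import gcd
from itertools import product

def is_invertibility_condition(domain, literal, cond):
    """Exhaustively check Definition 1 over a finite domain:
    cond(s, t) must hold iff some x in the domain satisfies literal(x, s, t)."""
    return all(
        cond(s, t) == any(literal(x, s, t) for x in domain)
        for s, t in product(domain, repeat=2)
    )

# Toy stand-in for a tiny finite sort: integers mod 8, literal x*s ≈ t.
D = range(8)
literal = lambda x, s, t: (x * s) % 8 == t
cond    = lambda s, t: t % gcd(s, 8) == 0

print(is_invertibility_condition(D, literal, cond))  # True
```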
Table 1. Considered floating-point predicates/operators, with SMT-LIB 2 syntax.
Symbol  SMTLIB syntax  Sort 

\(\mathsf {isNorm}\), \(\mathsf {isSub}\)  fp.isNormal, fp.isSubnormal  \(\mathbb {F}_{\varepsilon ,\sigma } \rightarrow \mathsf {Bool} \) 
\(\mathsf {isPos}\), \(\mathsf {isNeg}\)  fp.isPositive, fp.isNegative  \(\mathbb {F}_{\varepsilon ,\sigma } \rightarrow \mathsf {Bool} \) 
\(\mathsf {isInf}\), \(\mathsf {isNaN}\), \(\mathsf {isZero}\)  fp.isInfinite, fp.isNaN, fp.isZero  \(\mathbb {F}_{\varepsilon ,\sigma } \rightarrow \mathsf {Bool} \) 
\(\approx \), <, >, \(\le \), \(\ge \)  =, fp.lt, fp.gt, fp.leq, fp.geq  \(\mathbb {F}_{\varepsilon ,\sigma } \times \mathbb {F}_{\varepsilon ,\sigma } \rightarrow \mathsf {Bool} \) 
\(|\ldots|\), \(-\)  fp.abs, fp.neg  \(\mathbb {F}_{\varepsilon ,\sigma } \rightarrow \mathbb {F}_{\varepsilon ,\sigma } \) 
\(\mathsf {rem}\)  fp.rem  \(\mathbb {F}_{\varepsilon ,\sigma } \times \mathbb {F}_{\varepsilon ,\sigma } \rightarrow \mathbb {F}_{\varepsilon ,\sigma } \) 
\(\sqrt{\;}\), \(\mathsf {rti}\)  fp.sqrt, fp.roundToIntegral  \(\mathsf {RM} \times \mathbb {F}_{\varepsilon ,\sigma } \rightarrow \mathbb {F}_{\varepsilon ,\sigma } \) 
\(+\), \(\), \(\cdot \), \(\div \)  fp.add, fp.sub, fp.mul, fp.div  \(\mathsf {RM} \times \mathbb {F}_{\varepsilon ,\sigma } \times \mathbb {F}_{\varepsilon ,\sigma } \rightarrow \mathbb {F}_{\varepsilon ,\sigma } \) 
\(\mathsf {fma}\)  fp.fma  \(\mathsf {RM} \times \mathbb {F}_{\varepsilon ,\sigma } \times \mathbb {F}_{\varepsilon ,\sigma } \times \mathbb {F}_{\varepsilon ,\sigma } \rightarrow \mathbb {F}_{\varepsilon ,\sigma } \) 
Table 2. Invertibility conditions for floating-point operators (excl. \(\mathsf {fma}\)) with \(\approx \).
Literal  Invertibility condition 

\({x \overset{\scriptscriptstyle \mathsf {R} }{+} s \approx {t}} \)  \({t \approx (t \overset{\scriptscriptstyle \mathsf {RTP} }{-} s) \overset{\scriptscriptstyle R }{+} s} \vee {t \approx (t \overset{\scriptscriptstyle \mathsf {RTN} }{-} s) \overset{\scriptscriptstyle R }{+} s} \vee {s \approx t} \) 
\({x \overset{\scriptscriptstyle \mathsf {R} }{-} s \approx {t}} \)  \({t \approx (s \overset{\scriptscriptstyle \mathsf {RTP} }{+} t) \overset{\scriptscriptstyle R }{-} s} \vee {t \approx (s \overset{\scriptscriptstyle \mathsf {RTN} }{+} t) \overset{\scriptscriptstyle R }{-} s} \vee ({s \not \approx t} \wedge {s \approx \pm \infty } \wedge {t \approx \pm \infty })\) 
\({s \overset{\scriptscriptstyle \mathsf {R} }{-} x \approx {t}} \)  \({t \approx s \overset{\scriptscriptstyle R }{+} (t \overset{\scriptscriptstyle \mathsf {RTP} }{-} s)} \vee {t \approx s \overset{\scriptscriptstyle R }{+} (t \overset{\scriptscriptstyle \mathsf {RTN} }{-} s)} \vee {s \approx t} \) 
\({x \overset{\scriptscriptstyle \mathsf {R} }{\cdot } s \approx {t}} \)  \({t \approx (t \overset{\scriptscriptstyle \mathsf {RTP} }{\div } s) \overset{\scriptscriptstyle R }{\cdot } s} \vee {t \approx (t \overset{\scriptscriptstyle \mathsf {RTN} }{\div } s) \overset{\scriptscriptstyle R }{\cdot } s} \vee ({s \approx \pm \infty } \wedge {t \approx \pm \infty })\vee ({s \approx \pm 0} \wedge {t \approx \pm 0})\) 
\({x \overset{\scriptscriptstyle \mathsf {R} }{\div } s \approx {t}} \)  \({t \approx (s \overset{\scriptscriptstyle \mathsf {RTP} }{\cdot } t) \overset{\scriptscriptstyle R }{\div } s} \vee {t \approx (s \overset{\scriptscriptstyle \mathsf {RTN} }{\cdot } t) \overset{\scriptscriptstyle R }{\div } s} \vee ({s \approx \pm \infty } \wedge {t \approx \pm 0})\vee ({t \approx \pm \infty } \wedge {s \approx \pm 0})\) 
\({s \overset{\scriptscriptstyle \mathsf {R} }{\div } x \approx {t}} \)  \({t \approx s \overset{\scriptscriptstyle R }{\div } (s \overset{\scriptscriptstyle \mathsf {RTP} }{\div } t)} \vee {t \approx s \overset{\scriptscriptstyle R }{\div } (s \overset{\scriptscriptstyle \mathsf {RTN} }{\div } t)} \vee ({s \approx \pm \infty } \wedge {t \approx \pm \infty })\vee ({s \approx \pm 0} \wedge {t \approx \pm 0})\) 
\({x \,\mathsf {rem} \, s \approx {t}} \)  \({t \approx t \,\mathsf {rem} \, s} \) 
\({s \,\mathsf {rem} \, x \approx {t}} \)  \(?\) 
\({\root \scriptscriptstyle \mathsf {R} \of {x} \approx {t}} \)  \({t \approx \root \scriptscriptstyle R \of {(t \overset{\scriptscriptstyle \mathsf {RTP} }{\cdot } t)}} \vee {t \approx \root \scriptscriptstyle R \of {(t \overset{\scriptscriptstyle \mathsf {RTN} }{\cdot } t)}} \vee {t \approx \pm 0} \) 
\({|x| \approx {t}} \)  \(\lnot \mathsf {isNeg} (t) \) 
\({-x \approx {t}} \)  \(\top \) 
\({\overset{\scriptscriptstyle \mathsf {R} }{\mathsf {rti}}(x) \approx {t}} \)  \({t \approx \overset{\scriptscriptstyle R }{\mathsf {rti}}(t)} \) 
Table 2 lists the invertibility conditions for equality with the operators \(\{+, -, \cdot ,\div , \mathsf {rem}, \sqrt{\;}, |\ldots|, -, \mathsf {rti} \} \), parameterized over a rounding mode \(\mathsf {R}\) (one of \(\mathsf {RNE} \), \(\mathsf {RNA} \), \(\mathsf {RTP} \), \(\mathsf {RTN} \), or \(\mathsf {RTZ} \)). Note that the operators \(\{+,\cdot \} \) and the multiplicative step of \(\mathsf {fma}\) are commutative, and thus the invertibility conditions for both argument variants are identical.
Each of the first six invertibility conditions in this table follows a pattern. The first two disjuncts are instances of the literal to solve for, where a term involving rounding modes \(\mathsf {RTP} \) and \(\mathsf {RTN} \) is substituted for x. These disjuncts are then followed by disjuncts handling special cases for infinity and zero. From the structure of these conditions, e.g., for \(+ \), we can derive the insight that if there is a solution for x in the equation \({x \overset{\scriptscriptstyle \mathsf {R} }{+} s \approx {t}} \) and we are not in a corner case where \(s=t\), then either \(t \overset{\scriptscriptstyle \mathsf {RTP} }{-} s \) or \(t \overset{\scriptscriptstyle \mathsf {RTN} }{-} s \) must be a solution. Based on extensive runs of our syntax-guided synthesis procedure, we believe this condition is close to minimal in term size. From this, we conclude that an efficient yet complete method for solving \({x \overset{\scriptscriptstyle \mathsf {R} }{+} s \approx {t}} \) checks whether \(t - s\), rounded towards positive or negative infinity, is a solution in the non-trivial case where s and t are disequal, and otherwise concludes that no solution exists. A similar insight can be derived for the other invertibility conditions of this form.
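This insight can be sketched with Python's binary64 floats. Python supports only rounding mode RNE, so the sketch below checks only the RNE residual candidate and the \(s \approx t\) corner case, not the full RTP/RTN disjuncts from Table 2; NaN handling follows Python's ==, which differs from SMT \(\approx\):

```python
def solve_add(s: float, t: float):
    """Try to solve x + s == t in binary64 (rounding mode RNE only).
    If a solution exists, the residual t - s is the natural candidate."""
    x = t - s
    if x + s == t:
        return x
    if s == t:          # corner case from the invertibility condition
        return 0.0      # x = +0 leaves finite s unchanged; also works for s = ±inf
    return None         # no solution among the candidates we can build

print(solve_add(1.0, 3.0))    # 2.0
print(solve_add(1e16, 1.0))   # None: no x sums with 1e16 to exactly 1.0
```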
Table 3. Invertibility conditions for floating-point operators (excl. \(\mathsf {fma}\)) with \(\ge \).
Literal  Invertibility condition 

\(x \overset{\scriptscriptstyle \mathsf {R} }{+} s \ge {t} \)  \((\mathsf {isPos} (s) \vee \mathsf {ite} ({s \approx \pm \infty },({t \approx \pm \infty } \wedge \mathsf {isNeg} (t)),\mathsf {isNeg} (s)))\wedge {t \not \approx \mathsf {NaN}} \) 
\(x \overset{\scriptscriptstyle \mathsf {R} }{-} s \ge {t} \)  \(\mathsf {ite} (\mathsf {isNeg} (s),{t \not \approx \mathsf {NaN}},\mathsf {ite} ({s \approx \pm \infty },({t \approx \pm \infty } \wedge \mathsf {isNeg} (t)),(\mathsf {isPos} (s) \wedge {t \not \approx \mathsf {NaN}}))) \) 
\(s \overset{\scriptscriptstyle \mathsf {R} }{-} x \ge {t} \)  \((\mathsf {isPos} (s) \vee \mathsf {ite} ({s \approx \pm \infty },({t \approx \pm \infty } \wedge \mathsf {isNeg} (t)),\mathsf {isNeg} (s)))\wedge {t \not \approx \mathsf {NaN}} \) 
\(x \overset{\scriptscriptstyle \mathsf {R} }{\cdot } s \ge {t} \)  \((\mathsf {isNeg} (t) \vee {t \approx \pm 0} \vee {s \not \approx \pm 0})\wedge {s \not \approx \mathsf {NaN}} \wedge {t \not \approx \mathsf {NaN}} \) 
\(x \overset{\scriptscriptstyle \mathsf {R} }{\div } s \ge {t} \)  \((\mathsf {isNeg} (t) \vee {t \approx \pm 0} \vee {s \not \approx \pm \infty })\wedge {s \not \approx \mathsf {NaN}} \wedge {t \not \approx \mathsf {NaN}} \) 
\(s \overset{\scriptscriptstyle \mathsf {R} }{\div } x \ge {t} \)  \((\mathsf {isNeg} (t) \vee {t \approx \pm 0} \vee {s \not \approx \pm 0})\wedge {s \not \approx \mathsf {NaN}} \wedge {t \not \approx \mathsf {NaN}} \) 
\(x \,\mathsf {rem} \, s \ge {t} \)  \(\mathsf {ite} (\mathsf {isNeg} (t),{s \not \approx \mathsf {NaN}},(t \overset{\scriptscriptstyle \mathsf {RNE} }{+} t \le |s| \wedge {t \not \approx \pm \infty })) \wedge {s \not \approx \pm 0} \) 
\(s \,\mathsf {rem} \, x \ge {t} \)  \(?\) 
\(\root \scriptscriptstyle \mathsf {R} \of {x} \ge {t} \)  \({t \not \approx \mathsf {NaN}} \) 
\(|x| \ge {t} \)  \({t \not \approx \mathsf {NaN}} \) 
\(-x \ge {t} \)  \({t \not \approx \mathsf {NaN}} \) 
\(\overset{\scriptscriptstyle \mathsf {R} }{\mathsf {rti}}(x) \ge {t} \)  \({t \not \approx \mathsf {NaN}} \) 
Table 3 gives the invertibility conditions for \(\ge \). Since these constraints admit more solutions, they typically have simpler invertibility conditions. In particular, with the exception of \(\mathsf {rem} \), all conditions involve only floating-point classifiers.
When considering literals with predicates, the invertibility conditions for the cases \(x + s \) and \(s - x\) are identical for every predicate and rounding mode. This is because \(s - x \) is equivalent to \(s + (-x) \), independent of the rounding mode; thus, the negation of an inverse value of x for an equation involving \(x + s\) is an inverse value of x for the corresponding equation involving \(s - x\). Similarly, the invertibility conditions for \(x \cdot s\) and \(s \div x\) over predicates \(\{<, \le , >, \ge , \mathsf {isInf}, \mathsf {isNaN}, \mathsf {isNeg}, \mathsf {isZero} \} \) are identical for all rounding modes.
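The equivalence of \(s - x\) and \(s + (-x)\) is easy to spot-check in any IEEE 754 implementation. The following sketch uses Python's binary64 floats (rounding mode RNE only) and checks the identity including signed zeros and infinities:

```python
import itertools
import math

# Spot-check the identity s - x == s + (-x) on binary64 values,
# including signed zeros, subnormals, and infinities.
samples = [0.0, -0.0, 1.0, -1.5, 1e-320, 1e300, math.inf, -math.inf]
for s, x in itertools.product(samples, repeat=2):
    lhs, rhs = s - x, s + (-x)
    # either both are NaN, or both are equal with the same sign bit
    assert (math.isnan(lhs) and math.isnan(rhs)) or \
           (lhs == rhs and math.copysign(1, lhs) == math.copysign(1, rhs))
print("identity holds on all sampled pairs")
```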
For all predicates except \(\{\approx , \mathsf {isNorm}, \mathsf {isSub} \} \), the invertibility conditions for operators \(\{+, -,\div , \cdot \} \) contain only floating-point classifiers. All of these conditions are also independent of the rounding mode. Similarly, for operator \(\mathsf {fma}\) over predicates \(\{\mathsf {isInf},\mathsf {isNaN},\mathsf {isNeg},\mathsf {isPos} \} \), the invertibility conditions contain only floating-point classifiers. All of these conditions except for \(\mathsf {isNeg} (\mathsf {fma} (x, s, t))\) and \(\mathsf {isPos} (\mathsf {fma} (x, s, t))\) are also independent of the rounding mode.
For all floating-point operators with predicate \(\mathsf {isNaN}\), the invertibility condition is \(\top \), i.e., an inverse value for x always exists. This is due to the fact that every floating-point operator returns \(\mathsf {NaN}\) if one of its operands is \(\mathsf {NaN}\); hence \(\mathsf {NaN}\) can be picked as an inverse value for x. Conversely, we identified four cases for which the invertibility condition is \(\bot \), i.e., an inverse value for x never exists. These four cases are \(\mathsf {isNeg} (|x|)\), \(\mathsf {isInf} (x \,\mathsf {rem} \, s)\), \(\mathsf {isInf} (s \,\mathsf {rem} \, x)\), and \(\mathsf {isSub} (\mathsf {rti} (x))\). For the first three cases, it is obvious why no inverse value exists. The intuition for \(\mathsf {isSub} (\mathsf {rti} (x))\) is that integers are not subnormal; as a result, if x is rounded to an integer, the result can never be a subnormal number. All of these cases can easily be implemented as rewrite rules in an SMT solver.
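Both observations are easy to confirm on binary64 values in Python; in this sketch, math.remainder plays the role of rem, and the absolute-value case is checked on a few samples:

```python
import math

nan = float("nan")
s = 2.5

# NaN propagates through every arithmetic operation, so x = NaN always
# witnesses isNaN(x op s): the invertibility condition is ⊤.
assert all(math.isnan(r) for r in
           (nan + s, nan - s, nan * s, nan / s, math.remainder(nan, s)))

# Conversely, no x makes isNeg(|x|) true: |x| never has a negative sign bit
# (copysign(1, y) returns -1.0 exactly when y's sign bit is set).
assert not any(math.copysign(1, abs(x)) < 0
               for x in (0.0, -0.0, -1.5, math.inf, -math.inf))
print("ok")
```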
Figure 2 gives the invertibility condition visualizations for remainder over \(\approx \) with sort \(\mathbb {F}_{3,5}\) and rounding mode \(\mathsf {RNE}\). The visualization on the left-hand side shows that solving for x as the first argument is relatively easy. It suggests that an invertibility condition for this case involves a linear inequality relating the absolute values of s and t, which we were able to derive in Eq. (1). Solving for x as the second argument, on the other hand, is much more difficult, as indicated by the right picture, which has a significantly more complex structure. We conjecture that no simple solution exists for the latter problem. The visualization of the invertibility condition gives some of the intuition for this: the diagonal divide is caused by the fact that output t always has a smaller absolute value than the input s. The top-left corner represents subnormal/subnormal computation, which acts as a fixed point and behaves differently from the rest of the function. The stepped blocks along the diagonal occur when s and t have the same exponent, and thus the pattern is similar to the invertibility condition for \(+ \) shown in Fig. 1. Portions right of the main diagonal appear to exhibit random behavior. We believe this is the result of repeated cancellations in the computation of the remainder for those values, which suggests behavior similar to the Blum-Blum-Shub random number generator [9].
Figure 4 shows visualizations of invertibility conditions involving \(\mathsf {fma} \). The left two images visualize the invertibility conditions for \(\mathsf {isZero} \); the corresponding invertibility conditions are given in Eqs. (2) and (3) above. We were not able to determine invertibility conditions for operator \(\mathsf {fma}\) over predicate \(\mathsf {isSub}\), which are visualized in the rightmost two pictures in Fig. 4. Finally, we did not succeed in finding invertibility conditions for \(\mathsf {fma} \) with binary predicates, which are particularly challenging since they are three-dimensional. Finding solutions for these cases is ongoing work (see Sect. 4 for a more in-depth discussion).
4 Synthesis of Floating-Point Invertibility Conditions
Deriving invertibility conditions in \(T_{ FP }\) is a highly challenging task. We were unable to derive these conditions manually, despite our substantial background knowledge of floating-point numbers. As a consequence, we developed a custom extension of the syntax-guided synthesis (SyGuS) paradigm [1] with the goal of finding invertibility conditions automatically, which resulted in the conditions from Sect. 3. While the extension was optimized for this task, we stress that our techniques are theory-agnostic and can be used for synthesis problems over any finite domain. Our approach builds upon the SyGuS capabilities of the SMT solver CVC4 [5, 29], which has recently been extended to support reasoning about the theory of floating-point numbers [11]. We use the invertibility condition for floating-point addition with equality as a running example.
When choosing the format \((\varepsilon ,\sigma )\) for the synthesis problems, two competing criteria apply:
 1.
\(\varepsilon \), \(\sigma \) should be large enough to exercise many (or all) of the behaviors of the operators and relations in our synthesis conjecture,
 2.
\(\varepsilon \), \(\sigma \) should be small enough for the synthesis problem to be tractable.
In our experience, the best choice of \((\varepsilon ,\sigma )\) depended on the particular invertibility condition we were solving. The most common choices for \((\varepsilon ,\sigma )\) were (3, 5), (4, 5) and (4, 6). For most two-dimensional invertibility conditions (those that involve two variables s and t), we used (3, 5), since the required synthesis procedures mentioned below were roughly eight times faster than for (4, 5). For one-dimensional invertibility conditions, we often used higher-precision formats. Since floating-point operators like addition take a rounding mode \(\mathsf {R}\) as an additional argument, we assumed a fixed rounding mode when solving, and then cross-checked our solution for multiple rounding modes.
Our synthesis framework combines two subprocedures:
 1.
a SyGuS solver with support for decision tree learning, and
 2.
a solution verifier storing the full I/O specification of the invertibility condition.
We use a counterexample-guided loop, where the SyGuS solver provides the solution verifier with candidate solutions, and the solution verifier provides the SyGuS solver with an evolving subset of sample points taken from the full I/O specification. These points correspond to counterexamples to failed candidate solutions, and are sampled uniformly at random over the domain of our specification. To accelerate the speed at which our framework converges on a solution, we configure the solution verifier to generate multiple counterexample points (typically 10) for each iteration of the loop. The process terminates when the SyGuS solver generates a candidate solution that is correct for all points of the full I/O specification.
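A minimal sketch of this loop follows, with a fixed candidate list standing in for the SyGuS solver's enumeration and an exhaustive table over a toy domain (integers mod 8, not floating-point) standing in for the solution verifier; all names are illustrative:

```python
import random
from math import gcd
from itertools import product

def cegis(candidates, spec, batch=10, seed=0):
    """Counterexample-guided loop: propose the first candidate consistent
    with the sample points; answer with up to `batch` random counterexamples
    drawn from the full I/O specification `spec` (a dict (s, t) -> bool)."""
    rng, points = random.Random(seed), []
    while True:
        # "SyGuS solver": first candidate consistent with all samples so far.
        cand = next(c for c in candidates
                    if all(c(s, t) == spec[s, t] for s, t in points))
        # "Solution verifier": find counterexamples on the full specification.
        wrong = [p for p in spec if cand(*p) != spec[p]]
        if not wrong:
            return cand                  # correct on the full I/O spec
        points += rng.sample(wrong, min(batch, len(wrong)))

# Toy target: invertibility of x*s ≈ t over integers mod 8, standing in
# for the I/O table of a tiny floating-point sort.
D = range(8)
spec = {(s, t): any(x * s % 8 == t for s_ in [s] for x in D)
        for s, t in product(D, D)}
candidates = [lambda s, t: True,         # too weak, will be refuted
              lambda s, t: t == 0,       # too strong, will be refuted
              lambda s, t: t % gcd(s, 8) == 0]
print(cegis(candidates, spec) is candidates[-1])  # True
```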
We give the user control over both the solutions and counterexample points generated in this loop. First, as is commonly done in syntax-guided synthesis applications, the user in our workflow provides an input grammar to the SyGuS solver. This is a context-free grammar in a standard format [28], which contains a guess of the operators and patterns that may be involved in the invertibility condition we are synthesizing. Second, note that the domain of floating-point numbers can be subdivided into a number of subdomains and special cases (e.g. normal, subnormal, not-a-number, infinity), as well as split into different classifications (e.g. positive and negative). Our workflow allows the user to provide a side condition, whose purpose is to focus on finding an invertibility condition that is correct for one of these subdomains. The side condition acts as a filtering mechanism on the counterexample points generated by the solution verifier. For example, given the side condition \(\mathsf {isNorm} (s) \wedge \mathsf {isNorm} (t) \), the solution verifier checks candidate solutions generated by the SyGuS solver only against points (s, t) where both arguments are normal, and consequently only communicates counterexamples of this form to the SyGuS solver. The solution verifier may also be configured to establish that the current candidate solution generated by the SyGuS solver is conditionally correct, that is, true on all points in the domain that satisfy the side condition.
There are several advantages to the form of the synthesis conjecture in (6) that we exploit in our workflow. First, its structure makes it easy to divide the problem into subcases: our synthesis workflow at all times sends only a subset of the conjuncts of (6) for some (i, j) pairs. As a result, we do not burden the underlying SyGuS solver with the entire conjecture at once, which would not scale in practice. A second advantage is that it is in programming-by-examples (PBE) form, since it consists of a conjunction of concrete input-output pairs. As a consequence, specialized algorithms can be used by the SyGuS solver to generate solutions for (approximations of) our conjecture in a way that is highly scalable in practice. These techniques are broadly referred to as decision tree learning or unification algorithms. As a brief review (see Alur et al. [2] for a recent SyGuS-based approach), a decision tree learning algorithm is given as input a set of good examples \(c_1 \mapsto \top , \ldots , c_n \mapsto \top \) and a set of bad examples \(d_1 \mapsto \bot , \ldots , d_m \mapsto \bot \). The goal of a decision tree algorithm is to find a predicate, or classifier, that evaluates to true on all the good examples, and false on all the bad examples. In our context, a classifier is expressed as an if-then-else tree of Boolean sort. Sampling the space of conjecture (6) provides the decision tree algorithm with good and bad examples, and the returned classifier is a candidate solution that we give to the solution verifier. The SyGuS solver of CVC4 uses a decision-tree learning algorithm, which we rely on in our workflow. Due to the scalability of this algorithm and the fact that only a small subset of our conjecture is considered at any given time, candidate solutions are typically generated by the SyGuS solver in our framework in a matter of seconds.
Another important aspect of the SyGuS solver in Fig. 5 is that it is configured to generate multiple solutions for the current set of sample points. Due to the way the SyGuS-based decision-tree learning algorithm works, these solutions tend to become more general over the runtime of the solver. As a simple example (assuming exact integer arithmetic), say the solver is given input points \((1,1) \mapsto \top \), \((2,0) \mapsto \top \), \((1, 0) \mapsto \bot \) and \((0,1) \mapsto \bot \) for (s, t). It enumerates predicates over s and t, starting with the simplest predicates first, say \(s \approx 0\), \(t \approx 0\), \(s \approx 1\), \(t \approx 1\), \(s+t > 1\), and so on. After generating the first four predicates, it constructs the solution \(\mathsf {ite} (s \approx 1, t \approx 1, t \approx 0 ) \), which is a correct classifier for the given set of points. However, after generating the fifth predicate in this list, it returns \(s+t>1\) itself as a solution; this can be seen as a generalization of the previous solution since it requires no case splitting.
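The enumeration example above can be replayed directly, with the four sample points and five predicates exactly as in the text:

```python
# The four sample points from the example above (inputs (s, t) -> label).
good = [(1, 1), (2, 0)]                      # points mapped to ⊤
bad  = [(1, 0), (0, 1)]                      # points mapped to ⊥

def classifies(p):
    """A predicate is a correct classifier if it is true on all good
    examples and false on all bad examples."""
    return all(p(s, t) for s, t in good) and not any(p(s, t) for s, t in bad)

# With only the first four enumerated predicates, the learner must
# case-split: ite(s ≈ 1, t ≈ 1, t ≈ 0).
tree = lambda s, t: (t == 1) if s == 1 else (t == 0)
assert classifies(tree)

# Once s + t > 1 is enumerated, it classifies the points on its own,
# generalizing the case split away.
flat = lambda s, t: s + t > 1
assert classifies(flat)
print("both classifiers fit the sample points")
```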
Since more general candidate solutions have a higher likelihood of being actual solutions in our experience, our workflow critically relies on the ability of users to manually terminate the synthesis procedure when they are satisfied with the last generated candidate. Our synthesis procedure logs a list of candidate solutions that satisfy the conjecture on the current set of sample points. When the user terminates the synthesis process, the solution verifier will check the last solution generated in this list. Users have the option to rearrange the elements of this list by hand, if they have an intuition that a specific candidate is more likely to be correct—and so should be tested first.
Experience. The first challenging invertibility condition we solved with our framework was addition with equality for rounding mode \(\mathsf {RNE} \). Initially, we used a generic grammar that contained the entire floating-point signature. As a first key step towards solving this problem, the synthesis procedure suggested the single literal \({t \approx s \overset{\scriptscriptstyle \mathsf {RNE} }{+} (t \overset{\scriptscriptstyle \mathsf {RNE} }{-} s)} \) as a candidate solution. Although counterexamples were found for this candidate, we noticed that it satisfied over 98% of the specification, and a visualization of its I/O behavior showed patterns similar to the invertibility condition we were solving for. Based on these observations, we focused our grammar towards literals of this form. In particular, we used a function that takes two floating-point values x, y and two rounding modes \(R_1, R_2\) as arguments and returns \(x \overset{\scriptscriptstyle R_1 }{+} (y \overset{\scriptscriptstyle R_2 }{-} x) \) as a built-in symbol of our grammar. We refer to such a function as a residual computation of y, noting that its value is often approximately y. By including various functions for residual computations, we focused the effort of the synthesizer on more interesting predicates. The end solution involved multiple residual computations, as shown in Table 2. Our initial solution was specific to the rounding mode \(\mathsf {RNE} \). After solving for several other rounding modes, we were able to construct a parametric solution that was correct for all rounding modes. In total, it took roughly three days of developer time to discover the generalized invertibility condition for addition with equality. Many of the subsequent invertibility conditions took a matter of hours, since by then we had a good intuition for the residual computations relevant to each case.
Invertibility conditions involving \(\mathsf {rem} \), \(\mathsf {fma} \), \(\mathsf {isNorm} \), and \(\mathsf {isSub} \) were challenging and required further customizations to the grammar, for instance to include constants that correspond to the minimum and maximum normal and subnormal values. Three-dimensional invertibility conditions (which in this work are limited to cases of \(\mathsf {fma} \) with binary predicates) were especially challenging, since for \(\mathbb {F}_{3,5} \) the domain of their conjecture is a factor of 227 larger than that of the others. Following our strategy for solving the invertibility conditions for specific formats and rounding modes, in ongoing work we are investigating solving these cases by first solving the invertibility condition for a fixed value c for one of its free variables u. Solving a two-dimensional problem of this form with a solution \(\varphi \) may suggest a generalization that works for all values of u, in which all occurrences of c in \(\varphi \) are replaced by u.
We found the side condition feature of our workflow important for narrowing down which subdomain was the most challenging for the conjecture in question. For instance, for some cases it was very easy to find invertibility conditions that held when both s and t were normal (resp., subnormal), but very difficult when s was normal and t was subnormal or vice versa.
We also implemented a fully automated mode for the synthesis loop in Fig. 5. However, in practice, it was more effective to tweak the generated solutions manually. The amount of user interaction was not prohibitively high in our experience.
4.1 Verifying Conditions for Multiple Formats and Rounding Modes
We verified the correctness of all 167 invertibility conditions by checking them against their corresponding full I/O specification for floating-point formats \(\mathbb {F}_{3,5}\), \(\mathbb {F}_{4,5}\), and \(\mathbb {F}_{4,6}\) and all rounding modes, which required 1.6 days of CPU time. This is relatively cheap compared to computing the specifications, since checking amounts to constant evaluation of the invertibility conditions for all possible input values. However, this quickly becomes infeasible with increasing precision, since the time required for computing the I/O specification increases roughly by a factor of 8 for each additional bit.
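The structure of this check can be sketched abstractly. The code below uses a toy stand-in domain (4-bit modular addition) rather than an actual floating-point format, since emulating IEEE formats is beyond the scope of a sketch; it only illustrates why building the I/O specification costs a factor of |domain| more than checking a candidate condition against it.

```python
from itertools import product

def build_spec(domain, op, pred):
    """I/O specification: for each (s, t), does some x satisfy pred(op(x, s), t)?
    Cubic in |domain|: one scan over x per (s, t) pair -- this is the part
    that grows roughly 8x with each additional bit."""
    return {(s, t): any(pred(op(x, s), t) for x in domain)
            for s, t in product(domain, repeat=2)}

def check_condition(domain, spec, cond):
    """Verify a candidate invertibility condition by constant evaluation:
    quadratic in |domain|, i.e. cheap relative to build_spec."""
    return all(cond(s, t) == spec[(s, t)] for s, t in product(domain, repeat=2))

# Toy instance: addition modulo 16 with equality. Every x + s = t (mod 16)
# has a solution, so the correct invertibility condition is simply "true".
dom = range(16)
spec = build_spec(dom, lambda x, s: (x + s) % 16, lambda a, t: a == t)
assert check_condition(dom, spec, lambda s, t: True)
```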
As a consequence, we generated quantified floating-point problems to verify the 167 invertibility conditions for formats \(\mathbb {F}_{3,5}\), \(\mathbb {F}_{4,5}\), \(\mathbb {F}_{4,6}\), \(\mathbb {F}_{5,11}\) (Float16), \(\mathbb {F}_{8,24}\) (Float32), and \(\mathbb {F}_{11,53}\) (Float64) and all rounding modes. Each problem checks the \(T_{ FP }\)-unsatisfiability of the formula \(\lnot (\phi _\mathrm {c} \Leftrightarrow \exists x.\,l[x]) \), where l[x] corresponds to the floating-point literal and \(\phi _\mathrm {c} \) to its invertibility condition. In total, we generated 3786 problems (\(116 * 5 + 51\) per floating-point format; see Footnote 3) and checked them using CVC4 [5] (master 546bf686) and Z3 [16] (version 4.8.4).
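Concretely, each generated problem has the following shape. The snippet below is a hypothetical generator we include for illustration only; the invertibility condition shown is a placeholder rather than an actual entry from Sect. 3, though fp.add, fp.sub, RNE, Float16, and the FP logic are standard SMT-LIB symbols.

```python
# Hypothetical generator for the verification problems: each asserts the
# negation of "condition <=> exists x. l[x]" and should be unsatisfiable.

def verification_problem(ic: str, literal: str, sort: str = "Float16") -> str:
    return "\n".join([
        "(set-logic FP)",
        f"(declare-const s {sort})",
        f"(declare-const t {sort})",
        f"(assert (not (= {ic} (exists ((x {sort})) {literal}))))",
        "(check-sat)",  # a correct invertibility condition yields unsat
    ])

problem = verification_problem(
    ic="(= t (fp.add RNE s (fp.sub RNE t s)))",   # placeholder condition
    literal="(= (fp.add RNE x s) t)")
print(problem)
```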
We consider an invertibility condition to be verified for a floating-point format and rounding mode if at least one solver reports unsatisfiable. With a CPU time limit of one hour and a memory limit of 8 GB for each solver/benchmark pair, we were able to verify 3577 (94.5%) invertibility conditions overall: \(99.2\%\) for \(\mathbb {F}_{3,5}\), \(99.7\%\) for \(\mathbb {F}_{4,5}\), \(100\%\) for \(\mathbb {F}_{4,6}\), \(93.8\%\) for \(\mathbb {F}_{5,11}\), \(90.2\%\) for \(\mathbb {F}_{8,24}\), and \(84\%\) for \(\mathbb {F}_{11,53}\). This verification with CVC4 and Z3 required a total of 32 days of CPU time. All verification jobs were run on cluster nodes with Intel Xeon E5-2637 3.5 GHz CPUs and 32 GB of memory.
5 Quantifier Elimination for Unit Linear Floating-Point Formulas
Based on the invertibility conditions presented in Sect. 3, we can define a quantifier elimination procedure for a restricted fragment of floating-point formulas. The procedure applies to unit linear formulas, that is, formulas of the form \(\exists x.\, P[x]\) where P is a \(\varSigma _{FP} \)-literal containing exactly one occurrence of x.
Figure 6 gives a quantifier elimination procedure \(\mathsf {QE_{FP}} \) for unit linear floating-point formulas \(\exists x.\, P[x]\). We write \(\mathsf {getIC} (y, Q[y]) \) to denote the invertibility condition for y in Q[y], which amounts to a table lookup for the appropriate condition as given in Sect. 3. Note that our procedure is currently a partial function, because we do not yet have invertibility conditions for some unit linear formulas. The recursive procedure returns a conjunction of conditions based on the path along which x occurs in P. If x occurs beneath multiple nested function applications, a fresh variable y is introduced to refer to the intermediate result of the subterm we are currently solving for. We demonstrate this in the following example.
Example 2
Theorem 1
Let \(\exists x.\, P\) be a unit linear formula and let \({\mathcal {I}} \) be a model of \(T_{ FP }\). Then, \({\mathcal {I}} \) satisfies \(\lnot \exists x.\, P\) if and only if there exists a model \({\mathcal {J}} \) of \(T_{ FP } \) (constructible from \({\mathcal {I}} \)) that satisfies \(\lnot \mathsf {QE_{FP}} ( \exists x.\, P )\).
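The recursive structure of \(\mathsf {QE_{FP}} \) can be sketched as follows. This is a simplification under stated assumptions: getIC is stubbed as a symbolic table lookup, terms are binary applications, and the distinction between argument positions (which matters for non-commutative operators) is glossed over.

```python
from itertools import count

_fresh = count()  # fresh-variable counter for intermediate results

def get_ic(var, op, other, pred, target):
    """Stub for the Sect. 3 table lookup: the invertibility condition for
    var in pred(op(var, other), target), returned symbolically."""
    app = f"{op}({var}, {other})" if op else var
    return f"IC[{var} in {pred}({app}, {target})]"

def contains(term, x):
    return term == x or (isinstance(term, tuple) and
                         any(contains(a, x) for a in term[1:]))

def qe_fp(x, term, pred, target):
    """Conditions for 'exists x. pred(term[x], target)', where term contains
    exactly one occurrence of x; returns the conjuncts as a list."""
    if term == x:
        # x occurs directly; for pred '=' this condition is trivially true
        return [get_ic(x, None, None, pred, target)]
    op, lhs, rhs = term
    sub, other = (lhs, rhs) if contains(lhs, x) else (rhs, lhs)
    y = f"y{next(_fresh)}"          # fresh variable naming the value of sub[x]
    return [get_ic(y, op, other, pred, target)] + qe_fp(x, sub, "=", y)

# exists x. ((x + s) * r) ~ t: one condition per operator on the path to x,
# plus the trivial one for x itself.
conds = qe_fp("x", ("*", ("+", "x", "s"), "r"), "~", "t")
assert len(conds) == 3
```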
Niemetz et al. [26] present a similar algorithm for solving unit linear bit-vector literals. In that work, a counterexample-guided loop was devised that made use of Hilbert-choice expressions for representing quantifier instantiations. In contrast to that work, we provide here only a quantifier elimination procedure. Extending our techniques to a general quantifier instantiation strategy is the subject of ongoing work. We discuss our preliminary work in this direction in the next section.
6 Solving Quantified Floating-Point Formulas
We implemented a prototype extension of the SMT solver CVC4 that leverages the results of the previous section to determine the satisfiability of quantified floating-point formulas. To handle quantified formulas, CVC4 uses a basic model-based instantiation loop (see, e.g., [30, 32] for instantiation approaches for other theories). This technique maintains a quantifier-free set of constraints F corresponding to instantiations of universally quantified formulas. It terminates with the response “unsatisfiable” if F is unsatisfiable, and with “satisfiable” if it can show that the given quantified formulas are satisfied by a model of \(T_{ FP }\) that satisfies F. For \(T_{ FP }\), the instantiations substitute concrete floating-point values for universally quantified variables, e.g. \(\forall x.\, P( x ) \Rightarrow P( 0 )\), which can be highly inefficient for higher precisions in the worst case.
We extend this basic loop with a preprocessing pass that generates theory lemmas based on the invertibility conditions corresponding to literals of quantified formulas \(\forall x. P\) with exactly one occurrence of x, as explained in the example below.
Example 3
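As a rough sketch of what this pass emits (our reading of the approach, with hypothetical names and a placeholder condition): for a literal l[x] with a single occurrence of x, the pass introduces a fresh witness constant k and asserts that the invertibility condition implies l[k], so that refuting the quantified formula reduces to refuting the condition itself rather than enumerating concrete values.

```python
# Hypothetical rendering of the generated theory lemma in SMT-LIB form.
# "IC_mul_eq" stands in for the actual invertibility condition of
# multiplication with equality from Sect. 3.

def ic_lemma(ic: str, literal_at_k: str, sort: str = "Float32") -> str:
    return "\n".join([
        f"(declare-const k {sort})",           # fresh witness for 'exists x'
        f"(assert (=> {ic} {literal_at_k}))",  # IC implies l[k]
    ])

lemma = ic_lemma("IC_mul_eq", "(= (fp.mul RNE k s) t)")
print(lemma)
```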
Evaluation. We considered all 61 benchmarks from SMT-LIB [6] that contain quantified formulas over floating-point numbers (logic FP); they correspond to verification conditions, using a floating-point encoding, from the software verification competition [19]. The invertibility conditions required for solving their literals involve floating-point addition, multiplication, and division (both arguments) with equality and inequality; we implemented all invertibility conditions required for these cases. We extended our SMT solver CVC4 (GitHub master 5d248c36) with the above preprocessing pass (GitHub cav19fp 9b5acd74) and compared its performance with the pass enabled (configuration CVC4-ext) and disabled (configuration CVC4-base) against the SMT solver Z3 (version 4.8.4). All experiments were run on the same cluster mentioned earlier, with a memory limit of 8 GB and a time limit of 1800 s. Overall, CVC4-base solved 35 benchmarks within the time limit (none uniquely solved compared to CVC4-ext), CVC4-ext solved 42 benchmarks (7 of them uniquely), and Z3 solved 56 benchmarks. While CVC4-ext solves significantly fewer benchmarks than Z3, we believe the improvement over CVC4-base indicates that our approach for invertibility conditions has potential for solving quantified floating-point constraints in SMT solvers. A more comprehensive implementation and evaluation is left as future work.
7 Conclusion
We have presented invertibility conditions for a large subset of the combinations of floating-point operators and predicates supported by SMT solvers. These conditions were found by a framework that utilizes syntax-guided synthesis, customized for our problem and developed over the course of this work. We have shown that invertibility conditions imply that a simple fragment of quantified floating-point arithmetic admits compact quantifier elimination, and have given preliminary evidence that an SMT solver that partially leverages this technique can have a higher success rate on floating-point problems coming from a software verification application.
For future work, we plan to extend techniques for quantified and quantifier-free floating-point formulas to incorporate our findings, in particular to lift previous quantifier instantiation approaches (e.g., [26]) and local search procedures (e.g., [25]) for bit-vectors to floating-point arithmetic. We also plan to extend and use our synthesis framework for related challenging synthesis tasks, such as finding conditions under which more complex constraints have solutions, including those with multiple occurrences of the variable to solve for. Our synthesis framework is theory-agnostic and can be used for any sort with a small finite domain; it can thus also be leveraged for solutions to quantified bit-vector constraints. Finally, we would like to establish formal proofs of correctness of our invertibility conditions that are independent of floating-point formats.
Footnotes
 1. Available at https://cvc4.cs.stanford.edu/papers/CAV2019FP.
 2. Notice that we consider all possible \((2^{\sigma - 1} - 1) * 2\) \(\mathsf {NaN}\) values of \(T_{ FP }\) as one single \(\mathsf {NaN}\) value. Thus, for sort \(\mathbb {F}_{3,5} \) we have 227 floating-point values (instead of \(2^8 = 256\)).
 3. 116 invertibility conditions from rounding-mode-dependent operators and 51 invertibility conditions where the operator is rounding-mode independent (e.g., \(\mathsf {rem}\)).
References
 1. Alur, R., et al.: Syntax-guided synthesis. In: Formal Methods in Computer-Aided Design, FMCAD 2013, Portland, 20–23 October 2013, pp. 1–8. IEEE (2013). http://ieeexplore.ieee.org/document/6679385/
 2. Alur, R., Radhakrishna, A., Udupa, A.: Scaling enumerative program synthesis via divide and conquer. In: Legay, A., Margaria, T. (eds.) TACAS 2017. LNCS, vol. 10205, pp. 319–336. Springer, Heidelberg (2017). https://doi.org/10.1007/978-3-662-54577-5_18
 3. IEEE Standards Association: 754-2008 – IEEE standard for floating-point arithmetic (2008). https://ieeexplore.ieee.org/servlet/opac?punumber=4610933
 4. Barr, E.T., Vo, T., Le, V., Su, Z.: Automatic detection of floating-point exceptions. SIGPLAN Not. 48(1), 549–560 (2013)
 5. Barrett, C., et al.: CVC4. In: Gopalakrishnan, G., Qadeer, S. (eds.) CAV 2011. LNCS, vol. 6806, pp. 171–177. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-22110-1_14
 6. Barrett, C., Stump, A., Tinelli, C.: The satisfiability modulo theories library (SMT-LIB) (2010). www.SMT-LIB.org
 7. Ben Khadra, M.A., Stoffel, D., Kunz, W.: goSAT: floating-point satisfiability as global optimization. In: FMCAD, pp. 11–14. IEEE (2017)
 8. Bjørner, N., Janota, M.: Playing with quantified satisfaction. In: 20th International Conferences on Logic for Programming, Artificial Intelligence and Reasoning – Short Presentations, LPAR 2015, Suva, 24–28 November 2015, pp. 15–27 (2015)
 9. Blum, L., Blum, M., Shub, M.: A simple unpredictable pseudorandom number generator. SIAM J. Comput. 15(2), 364–383 (1986)
 10. Brain, M., D'Silva, V., Griggio, A., Haller, L., Kroening, D.: Deciding floating-point logic with abstract conflict driven clause learning. Formal Methods Syst. Des. 45(2), 213–245 (2014)
 11. Brain, M., Schanda, F., Sun, Y.: Building better bit-blasting for floating-point problems. In: Vojnar, T., Zhang, L. (eds.) TACAS 2019, Part I. LNCS, vol. 11427, pp. 79–98. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-17462-0_5
 12. Brain, M., Tinelli, C., Rümmer, P., Wahl, T.: An automatable formal semantics for IEEE-754 floating-point arithmetic. In: 22nd IEEE Symposium on Computer Arithmetic, ARITH 2015, Lyon, 22–24 June 2015, pp. 160–167. IEEE (2015)
 13. Brillout, A., Kroening, D., Wahl, T.: Mixed abstractions for floating-point arithmetic. In: FMCAD, pp. 69–76. IEEE (2009)
 14. Conchon, S., Iguernlala, M., Ji, K., Melquiond, G., Fumex, C.: A three-tier strategy for reasoning about floating-point numbers in SMT. In: Majumdar, R., Kunčak, V. (eds.) CAV 2017. LNCS, vol. 10427, pp. 419–435. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-63390-9_22
 15. Daumas, M., Melquiond, G.: Certification of bounds on expressions involving rounded operators. ACM Trans. Math. Softw. 37(1), 1–20 (2010)
 16. De Moura, L., Bjørner, N.: Z3: an efficient SMT solver. In: Ramakrishnan, C.R., Rehof, J. (eds.) TACAS 2008. LNCS, vol. 4963, pp. 337–340. Springer, Heidelberg (2008). https://doi.org/10.1007/978-3-540-78800-3_24
 17. Dutertre, B.: Solving exists/forall problems in Yices. In: Workshop on Satisfiability Modulo Theories (2015)
 18. Fu, Z., Su, Z.: XSat: a fast floating-point satisfiability solver. In: Chaudhuri, S., Farzan, A. (eds.) CAV 2016. LNCS, vol. 9780, pp. 187–209. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-41540-6_11
 19. Heizmann, M., et al.: Ultimate Automizer with an on-demand construction of Floyd-Hoare automata. In: Legay, A., Margaria, T. (eds.) TACAS 2017, Part II. LNCS, vol. 10206, pp. 394–398. Springer, Heidelberg (2017). https://doi.org/10.1007/978-3-662-54580-5_30
 20. Lapschies, F.: SONOLAR, the solver for non-linear arithmetic (2014). http://www.informatik.uni-bremen.de/agbs/florian/sonolar
 21. Liew, D.: JFS: JIT fuzzing solver. https://github.com/delcypher/jfs
 22. Marre, B., Bobot, F., Chihani, Z.: Real behavior of floating point numbers. In: SMT Workshop (2017)
 23. Michel, C., Rueher, M., Lebbah, Y.: Solving constraints over floating-point numbers. In: Walsh, T. (ed.) CP 2001. LNCS, vol. 2239, pp. 524–538. Springer, Heidelberg (2001). https://doi.org/10.1007/3-540-45578-7_36
 24. de Moura, L., Bjørner, N.: Efficient E-matching for SMT solvers. In: Pfenning, F. (ed.) CADE 2007. LNCS, vol. 4603, pp. 183–198. Springer, Heidelberg (2007). https://doi.org/10.1007/978-3-540-73595-3_13
 25. Niemetz, A., Preiner, M., Biere, A.: Precise and complete propagation based local search for satisfiability modulo theories. In: Chaudhuri, S., Farzan, A. (eds.) CAV 2016, Part I. LNCS, vol. 9779, pp. 199–217. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-41528-4_11
 26. Niemetz, A., Preiner, M., Reynolds, A., Barrett, C., Tinelli, C.: Solving quantified bit-vectors using invertibility conditions. In: Chockler, H., Weissenbacher, G. (eds.) CAV 2018, Part II. LNCS, vol. 10982, pp. 236–255. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-96142-2_16
 27. Preiner, M., Niemetz, A., Biere, A.: Counterexample-guided model synthesis. In: Legay, A., Margaria, T. (eds.) TACAS 2017, Part I. LNCS, vol. 10205, pp. 264–280. Springer, Heidelberg (2017). https://doi.org/10.1007/978-3-662-54577-5_15
 28. Raghothaman, M., Udupa, A.: Language to specify syntax-guided synthesis problems, May 2014
 29. Reynolds, A., Deters, M., Kuncak, V., Tinelli, C., Barrett, C.: Counterexample-guided quantifier instantiation for synthesis in SMT. In: Kroening, D., Păsăreanu, C.S. (eds.) CAV 2015, Part II. LNCS, vol. 9207, pp. 198–216. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-21668-3_12
 30. Reynolds, A., King, T., Kuncak, V.: Solving quantified linear arithmetic by counterexample-guided instantiation. Formal Methods Syst. Des. 51(3), 500–532 (2017)
 31. Scheibler, K., Kupferschmid, S., Becker, B.: Recent improvements in the SMT solver iSAT. MBMV 13, 231–241 (2013)
 32. Wintersteiger, C.M., Hamadi, Y., de Moura, L.M.: Efficiently solving quantified bit-vector formulas. Formal Methods Syst. Des. 42(1), 3–23 (2013)
 33. Zeljić, A., Wintersteiger, C.M., Rümmer, P.: Approximations for model construction. In: Demri, S., Kapur, D., Weidenbach, C. (eds.) IJCAR 2014. LNCS (LNAI), vol. 8562, pp. 344–359. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-08587-6_26
Copyright information
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.