Inferring functional properties of matrix manipulating programs by abstract interpretation

Abstract

We present a new static analysis by abstract interpretation to prove automatically the functional correctness of algorithms implementing matrix operations, such as matrix addition, multiplication, general matrix multiplication, inversion, or more generally Basic Linear Algebra Subprograms. In order to do so, we introduce a family of abstract domains parameterized by a set of matrix predicates as well as a numerical domain. We show that our analysis is robust enough to prove the functional correctness of several versions of the same matrix operations, resulting from loop reordering, loop tiling, inverting the iteration order, line swapping, and expression decomposition. We extend our method to enable modular analysis on code fragments manipulating matrices by reference, and show that it results in a significant analysis speedup.
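
The programs targeted by the analysis are written in a small imperative language; the sketch below is an illustration of ours, not a listing from the paper, rendering in OCaml two of the analyzed shapes of matrix addition: the usual row-major loop nest and a loop-interchanged variant, both of which the analysis is expected to handle.

  (* Illustration (not a listing from the paper): row-major matrix addition.
     After the loops, a.(i).(j) = b.(i).(j) +. c.(i).(j) holds for every cell,
     which is the functional property the analysis infers. *)
  let add_row_major (a : float array array) b c n m =
    for i = 0 to n - 1 do
      for j = 0 to m - 1 do
        a.(i).(j) <- b.(i).(j) +. c.(i).(j)
      done
    done

  (* Loop-interchanged variant: same post-condition, different iteration order. *)
  let add_col_major (a : float array array) b c n m =
    for j = 0 to m - 1 do
      for i = 0 to n - 1 do
        a.(i).(j) <- b.(i).(j) +. c.(i).(j)
      done
    done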

Notes

  1. \(\triangledown \) cannot be used when the two abstract states do not have the same shape; in that case the analyzer performs a join instead. However, the shape ultimately stabilizes (see the proof in “Appendix B”), thus allowing the analyzer to perform a widening; a sketch of this shape-guarded widening is given after these notes. This widening technique is similar to the one proposed by [11] on cofibered domains.

  2. For clarity, we use here predicates with numerical values as variable instantiations instead of the conjunction of a predicate and a numerical domain.

  3. With some conditions indicated in Program 12.

  4. Those conditions are independent of the analysis; our analyzer does not verify them and does not need them to infer the iterator of the program.
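
As announced in Note 1, the following OCaml sketch illustrates the shape-guarded widening. The types and operations (Num, predicates as plain strings, the ordering of monomials) are placeholders of ours and not the analyzer's actual interface; only the control structure (widen when the shapes agree, join otherwise) reflects the note.

  (* Hedged sketch: an abstract state is a finite set of monomials, each pairing a
     set of matrix predicates (kept abstract here as strings) with a numerical
     abstract value. *)
  module type NUM = sig
    type t
    val join  : t -> t -> t
    val widen : t -> t -> t
  end

  module ShapeWiden (Num : NUM) = struct
    type monomial = { preds : string list; num : Num.t }
    type astate = monomial list

    let shape (s : astate) = List.map (fun m -> List.sort compare m.preds) s

    (* Widen componentwise when the two states have the same shape (monomials are
       assumed to be listed in matching order); otherwise fall back to a join. *)
    let widen_or_join (s1 : astate) (s2 : astate) : astate =
      if shape s1 = shape s2 then
        List.map2 (fun m1 m2 -> { m1 with num = Num.widen m1.num m2.num }) s1 s2
      else
        (* different shapes: plain join of the monomials; a separate well-forming
           pass (wf) would then merge monomials sharing the same predicate shape *)
        s1 @ s2
  end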

References

  1. Cousot P, Cousot R (1977) Abstract interpretation: a unified lattice model for static analysis of programs by construction or approximation of fixpoints. In: Proceedings of the POPL 1977. ACM, pp 238–252

  2. Blanchet B, Cousot P, Cousot R, Feret J, Mauborgne L, Miné A, Monniaux D, Rival X (2003) A static analyzer for large safety-critical software. In: Proceedings of the PLDI’03. ACM, pp 196–207

  3. Allamigeon X (2008) Non-disjunctive numerical domain for array predicate abstraction. In: ESOP 2008, volume 4960 of LNCS. Springer, pp 163–177

  4. Cousot P (2003) Verification by abstract interpretation. In: Verification: theory and practice, essays dedicated to Zohar Manna on the occasion of his 64th birthday, volume 2772 of LNCS. Springer, pp 243–268

  5. Cousot P, Cousot R, Logozzo F (2011) A parametric segmentation functor for fully automatic and scalable array content analysis. In: Proceedings of POPL 2011. ACM, pp 105–118

  6. Cousot P, Halbwachs N (1978) Automatic discovery of linear restraints among variables of a program. In: Proceedings of POPL 1978. ACM, pp 84–96

  7. Bondhugula U, Baskaran M, Krishnamoorthy S, Ramanujam J, Rountev A, Sadayappan P (2008) Automatic transformations for communication-minimized parallelization and locality optimization in the polyhedral model. In: CC 2008, volume 4959 of LNCS. Springer, pp 132–146

  8. Bondhugula U, Hartono A, Ramanujam J, Sadayappan P (2008) A practical automatic polyhedral parallelizer and locality optimizer. In: Proceedings of the PLDI 2008. ACM, pp 101–113

  9. Leroy X (2006) Formal certification of a compiler back-end, or: programming a compiler with a proof assistant. In: Proceedings of the POPL 2006. ACM, pp 42–54

  10. Journault M, Miné A (2016) Static analysis by abstract interpretation of the functional correctness of matrix manipulating programs. In: Rival X (ed) Static analysis—23rd international symposium, SAS 2016, Edinburgh, UK, September 8–10, 2016, proceedings, volume 9837 of LNCS. Springer, pp 257–277

  11. Venet A (1996) Abstract cofibered domains: application to the alias analysis of untyped programs. In: SAS’96, volume 1145 of LNCS. Springer, pp 366–382

  12. Rival X, Mauborgne L (2007) The trace partitioning abstract domain. ACM Trans. Program. Lang. Syst. 29(5):26

  13. Cousot P, Cousot R (2002) Modular static program analysis. In: Horspool RN (ed) Compiler construction, 11th international conference, CC 2002, held as part of the joint European conferences on theory and practice of software, ETAPS 2002, Grenoble, France, April 8–12, 2002, proceedings, volume 2304 of LNCS. Springer, pp 159–178

  14. Miné A (2006) Symbolic methods to enhance the precision of numerical abstract domains. In: VMCAI 2006, volume 3855 of LNCS. Springer, pp 348–363

  15. Miné A (2006) The octagon abstract domain. Higher Order Symb Comput (HOSC) 19(1):31–100 http://www-apr.lip6.fr/~mine/publi/article-mine-HOSC06.pdf

  16. Gopan D, DiMaio F, Dor N, Reps T, Sagiv M (2004) Numeric domains with summarized dimensions. In: TACAS 2004. Springer, pp 512–529

  17. Gopan D, Reps TW, Sagiv S (2005) A framework for numeric analysis of array operations. In: Proceedings of POPL 2005. ACM, pp 338–350

  18. Halbwachs N, Péron M (2008) Discovering properties about arrays in simple programs. SIGPLAN Not 43(6):339–348

  19. Allamigeon X, Godard W, Hymans C (2006) Static analysis of string manipulations in critical embedded C programs. In: SAS 2006. Springer, pp 35–51

  20. Dillig I, Dillig T, Aiken A (2010) Fluid updates: beyond strong vs. weak updates. In: ESOP 2010. Springer, pp 246–266

  21. Monniaux D, Alberti F (2015) A simple abstraction of arrays and maps by program translation. In: SAS 2015, volume 9291 of LNCS. Springer, pp 217–234

  22. Peng Y. Automate convergence rate proof for gradient descent on quadratic functions

  23. Gunnels JA, Gustavson FG, Henry G, van de Geijn RA (2001) FLAME: formal linear algebra methods environment. ACM Trans. Math. Softw. 27(4):422–455

  24. Henzinger TA, Hottelier T, Kovács L, Voronkov A (2010) Invariant and type inference for matrices. In: Barthe G, Hermenegildo MV (eds) Verification, model checking, and abstract interpretation, 11th international conference, VMCAI 2010, Madrid, Spain, January 17–19, 2010, proceedings, volume 5944 of LNCS. Springer, pp 163–179

Author information

Corresponding author

Correspondence to Matthieu Journault.

Additional information

This work is partially supported by the European Research Council under Consolidator Grant Agreement 681393—MOPSA.

Appendices

Appendix A: Complementary Algorithms

A.1 Set operations

figure r

A.2 Product operations

figure s
figure t
figure u
figure v

Remark 5

The \(\mathbf {sanitize}\) operator ensures soundness by punching holes in predicates that held on a cell where a write has been performed. For clarity, we did not make it appear in the \(\mathbf {Assign}^{\sharp }_{+,\mathfrak M}\) algorithm, as its description was given under stronger hypotheses.
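
As an illustration of Remark 5, here is a hedged OCaml sketch of the hole-punching performed by sanitize, written with concrete integer bounds; the actual operator works on symbolic bounds constrained by the numerical domain. The rect type and the splitting strategy (at most four sub-rectangles around the written cell) are our own rendering.

  (* Hedged sketch with concrete bounds (the analyzer manipulates symbolic bounds in
     a numerical domain): a rectangular predicate P_+(A,B,C,x,y,z,t) claims
     A(u,v) = B(u,v) + C(u,v) for x <= u < z and y <= v < t. *)
  type rect = { x : int; y : int; z : int; t : int }

  (* Punch a hole at the written cell (i, j): keep only the sub-rectangles that
     avoid (i, j); up to four pieces are produced. *)
  let sanitize (p : rect) (i : int) (j : int) : rect list =
    if i < p.x || i >= p.z || j < p.y || j >= p.t then [ p ]  (* cell outside: keep p *)
    else
      List.filter (fun r -> r.x < r.z && r.y < r.t)           (* drop empty pieces *)
        [ { p with z = i }                                    (* rows above i *)
        ; { p with x = i + 1 }                                (* rows below i *)
        ; { x = i; z = i + 1; y = p.y; t = j }                (* row i, columns left of j *)
        ; { x = i; z = i + 1; y = j + 1; t = p.t } ]          (* row i, columns right of j *)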

Appendix B: Soundness proof

In the following, \(m[r]\) denotes the memory state \(m'\) defined below (r is called an extension):

$$\begin{aligned} m': x\mapsto \left\{ \begin{array}{ll} r(x) &{}\quad \text {when } x \in \mathbf {def}(r) \\ m(x) &{} \quad \text {when } x \in \mathbf {def}(m) \wedge x \notin \mathbf {def}(r) \\ \text {undefined} &{}\quad \text {otherwise} \end{array} \right. \end{aligned}$$

Proof of Proposition 1

Let us prove that \(\forall S_0^{\sharp },\ \gamma (S_0^{\sharp }) \subseteq \gamma (\mathbf {wf}(S_0^{\sharp }))\).

We prove the following invariant along the algorithm \(\mathbf {wf}\): \(\gamma (S_0^{\sharp }) \subseteq \gamma (S^{\sharp })\), where \(S_0^{\sharp }\) is the value of the abstract state at the beginning of the algorithm and \(S^{\sharp }\) its current value.

  • The invariant holds trivially initially

  • Let us consider \(\underline{S}^{\sharp }\) such that \(\gamma (S_0^{\sharp }) \subseteq \gamma (\underline{S}^{\sharp })\), and \((\mathfrak P_1,\mathfrak a_1) \in \underline{S}^{\sharp }\), \((\mathfrak P_2,\mathfrak a_2)\in \underline{S}^{\sharp }\) such that \(\mathfrak P_1 =_s \mathfrak P_2\). As \(\underline{S}^{\sharp }\) is well-named (by the induction hypothesis and the hypothesis of the proposition), there exists a bijection \(\sigma \in \mathbf {var}(\mathfrak P_1) \rightarrow \mathbf {var}(\mathfrak P_2)\) such that \(\mathfrak P_1\sigma = \mathfrak P_2\); let \(\overline{S}^{\sharp } = \underline{S}^{\sharp } \setminus \{(\mathfrak P_1,\mathfrak a_1),(\mathfrak P_2,\mathfrak a_2)\} \cup (\mathfrak P_2,\mathfrak a_1\sigma \cup _{\mathcal V} \mathfrak a_2)\). Let \(m \in \gamma (S_0^{\sharp })\); by the induction hypothesis \(m \in \gamma (\underline{S}^{\sharp })\), therefore \(m = (m_{\mathcal V}, m_{\mathcal A})\) is such that \(\exists r,\mathfrak P, \mathfrak a,\ \mathbf {def}(r) \subseteq \mathbb Y \wedge (\mathfrak P,\mathfrak a ) \in \underline{S}^{\sharp } \wedge m_{\mathcal V}[r] \in \gamma _{\mathcal V}(\mathfrak a) \wedge \forall P \in \mathfrak P, (m_{\mathcal V}[r],m_{\mathcal A}) \in \mathcal I_{\mathcal P}(P)\).

    • If \(\mathfrak P \notin \{\mathfrak P_1, \mathfrak P_2\}\) then trivially \(m \in \gamma (\underline{S}^{\sharp } \setminus \{(\mathfrak P_1,\mathfrak a_1),(\mathfrak P_2,\mathfrak a_2)\} \cup (\mathfrak P_2,\mathfrak a_1\sigma \cup _{\mathcal V} \mathfrak a_2))\) and therefore \(m \in \gamma (\overline{S}^{\sharp })\)

    • If \(\mathfrak P = \mathfrak P_1\) then \(m_{\mathcal V}[r] \in \gamma _{\mathcal V} (\mathfrak a_1)\), thus \(m_{\mathcal V}[r]\circ \sigma ^{-1} \in \gamma _{\mathcal V} (\mathfrak a_1\sigma )\) and, by soundness of the \(\cup _{\mathcal V}\) operator, \(m_{\mathcal V}[r]\circ \sigma ^{-1} \in \gamma _{\mathcal V} (\mathfrak a_1 \sigma \cup _{\mathcal V} \mathfrak a_2 )\). Moreover, we show by induction on terms, atomic predicates and predicates that \((m_{\mathcal V}[r],m_{\mathcal A}) \in \mathcal I_{\mathcal P}(P) \Rightarrow (m_{\mathcal V}[r]\circ \sigma ^{-1},m_{\mathcal A}) \in \mathcal I_{\mathcal P}(P\sigma )\). Therefore \(\forall P \in \mathfrak P_1 \sigma ,\ (m_{\mathcal V}[r]\circ \sigma ^{-1},m_{\mathcal A})\in \mathcal I_{\mathcal P}(P)\) and, as \(\mathbf {def}(\sigma ^{-1}) \subseteq \mathbb Y\), we have \(m_{\mathcal V} = (m_{\mathcal V}[r]\circ \sigma ^{-1}) _{|\mathbb X \cup \mathbb S}\). Therefore \(m \in \gamma (\overline{S}^{\sharp })\).

    • The case \(\mathfrak P = \mathfrak P_2\) is proved in the same manner as the previous case.

Therefore the invariant holds at the end of the loop, and \(\forall S_0^{\sharp },\ \gamma (S_0^{\sharp }) \subseteq \gamma (\mathbf {wf}(S_0^{\sharp }))\). \(\square \)
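
For illustration, the loop just followed in the proof of Proposition 1 can be rendered as the OCaml sketch below; shape_eq, unifier, rename and join are placeholder parameters of ours standing for the \(=_s\) test, the bijection \(\sigma \), variable renaming in the numerical domain and \(\cup _{\mathcal V}\).

  (* Hedged sketch of the well-forming pass wf: repeatedly pick two monomials whose
     predicate sets have the same shape, rename the first onto the second with the
     bijection sigma, and join their numerical components. *)
  let rec wf ~shape_eq ~unifier ~rename ~join (state : ('p * 'a) list) : ('p * 'a) list =
    let rec find_pair = function
      | [] -> None
      | (p1, a1) :: rest ->
          (match List.find_opt (fun (p2, _) -> shape_eq p1 p2) rest with
           | Some (p2, a2) ->
               let sigma = unifier p1 p2 in                       (* p1 sigma = p2 *)
               let rest' = List.filter (fun (p, _) -> p != p2) rest in
               Some ((p2, join (rename sigma a1) a2) :: rest')
           | None ->
               (match find_pair rest with
                | Some merged -> Some ((p1, a1) :: merged)
                | None -> None))
    in
    match find_pair state with
    | Some state' -> wf ~shape_eq ~unifier ~rename ~join state'   (* loop until no pair is left *)
    | None -> state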

Proof of Theorem 1

An analyzer is said to be sound if it over-approximates the reachable states of the program. In order to prove such a result, we prove by induction on the commands that:

$$\begin{aligned} \forall C \in \mathcal C,\,\forall S,\,\forall S^{\sharp },\, S \subseteq \gamma (S^{\sharp }) \Rightarrow \mathbf {Post} \llbracket C \rrbracket (S) \subseteq \gamma (\mathbf {Post}^{\sharp }\llbracket C\rrbracket (S^{\sharp })) \end{aligned}$$

In the following, r will always denote an extension such that \(\mathbf {var}(r) \subseteq \mathbb Y\); therefore \(\forall m,\, m[r]_{|\mathbb X \cup \mathbb S} = m\). An illustrative sketch of the abstract postcondition for the scalar-assignment case is given after the case analysis below.

  • \(C= X := E\): Let \(S,S^{\sharp }\) be such that \(S \subseteq \gamma (S^{\sharp })\), let \(m_a = (m_{a,\mathcal V},m_{a,\mathcal A}) \in \mathbf {Post} \llbracket C \rrbracket (S)\), there is \(m_b = (m_{b,\mathcal V},m_{b,\mathcal A}) \in S\) such that \(m_a \in \mathbf {Post}\llbracket X := E \rrbracket (\{m_b\})\). Hence from the definition of \(\mathbf {Post}\), we have \(m_{a,\mathcal A}=m_{b,\mathcal A}\) and \(m_{a,\mathcal V}=m_{b,\mathcal V}[X := v]\) where \(v=\mathcal E\llbracket E\rrbracket (m_b)\). From \(S \subseteq \gamma (S^{\sharp })\) and the definition of \(\gamma \) we have: \(\exists r,\, \exists \mathfrak a, \mathfrak P,\,(\mathfrak a,\mathfrak P)\in S^{\sharp } \wedge m_{b,\mathcal V}[r] \in \gamma _{\mathcal V}(\mathfrak a) \wedge \forall P \in \mathfrak P, (m_{b,\mathcal V}[r],m_{b,\mathcal A}) \in \mathcal I_{\mathcal P}(P)\), from the definition of \(\mathbf {Post}^{\sharp }\), \((\mathfrak P,\mathbf {Post}^{\sharp }_{\mathcal V} \llbracket X := E \rrbracket (\mathfrak a)) \in \mathbf {Post}^{\sharp }\llbracket C\rrbracket (S^{\sharp })\), from the properties of the underlying abstract domain we have: \(m_{a,\mathcal V}=m_{b,\mathcal V}[X := v] \in \gamma _{\mathcal V}(\mathbf {Post}^{\sharp }_{\mathcal V} \llbracket X := E \rrbracket (\mathfrak a))\) as \(v=\mathcal E\llbracket E\rrbracket (m_b)\), moreover \(X \in \mathbb X\) and \(\mathbb X \cap \mathbb Y = \emptyset \) therefore \(\forall P \in \mathfrak P,\,X \notin \mathbf {var}(P)\), hence \(\forall P \in \mathfrak P, (m_{b,\mathcal V}[r],m_{b,\mathcal A}) \in \mathcal I_{\mathcal P}(P) \Rightarrow (m_{b,\mathcal V}[r,X := v],m_{b,\mathcal A}) \in \mathcal I_{\mathcal P}(P)\). Therefore we have \(m_{a,\mathcal V}[r] \in \gamma _{\mathcal V}(\mathbf {Post}^{\sharp }_{\mathcal V} \llbracket X := E \rrbracket (\mathfrak a)) \wedge (m_{a,\mathcal V}[r],m_{a,\mathcal A}) \in \mathcal I_{\mathcal P}(P)\), hence: \(m_a \in \gamma (\mathbf {Post}^{\sharp }\llbracket X := E\rrbracket (S^{\sharp }))\) and \(\mathbf {Post} \llbracket C \rrbracket (S) \subseteq \gamma (\mathbf {Post}^{\sharp }\llbracket X := E\rrbracket (S^{\sharp }))\).

  • \(C = A[X_1][X_2] \leftarrow B[X_1][X_2] + C[X_1][X_2]\): Let \(S,S^{\sharp }\) be such that \(S \subseteq \gamma (S^{\sharp })\), let \(m_a = (m_{a,\mathcal V},m_{a,\mathcal A}) \in \mathbf {Post} \llbracket C \rrbracket (S)\), there is \(m_b = (m_{b,\mathcal V},m_{b,\mathcal A}) \in S\) such that \(m_a \in \mathbf {Post}\llbracket A[X_1][X_2] \leftarrow B[X_1][X_2] + C[X_1][X_2] \rrbracket (\{m_b\})\). Hence from the definition of \(\mathbf {Post}\), we have \(m_{a,\mathcal V}=m_{b,\mathcal V}\) and \(m_{a,\mathcal A}= m_{b,\mathcal A}[A[m_{b,\mathcal V}(X_1)][m_{b,\mathcal V}(X_2)] := m_{b,\mathcal A}(B)(m_{b,\mathcal V}(X_1),m_{b,\mathcal V}(X_2)) + m_{b,\mathcal A}(C)(m_{b,\mathcal V}(X_1),m_{b,\mathcal V}(X_2))]\). \(m_b\in \gamma (S^{\sharp })\) hence \(\exists r,\, \exists \mathfrak a, \mathfrak P \in S^{\sharp },\,m_{b,\mathcal V}[r] \in \gamma _{\mathcal V}(\mathfrak a) \wedge \forall P \in \mathfrak P, (m_{b,\mathcal V}[r],m_{b,\mathcal A}) \in \mathcal I_{\mathcal P}(P)\). We now consider the following different cases: (where we will reuse the notation introduced by Algorithms 6 and  1)

    • \(\exists P_0 \in \mathfrak P,\,\textsc {find\_extension}((P_0,\mathfrak a),A,B,C,X_1,X_2) = \texttt {Some(Right)}\), then from the definition of \(\textsc {find\_extension}\) we have \(P_0 = P_+(A,B,C,x,y,z,t)\) and \(\mathfrak a \sqsubseteq _{\mathcal B,\mathcal V} ((z = X_1) \wedge (y = X_2) \wedge (t = X_2 + 1))\). As \((m_{b,\mathcal V}[r],m_{b,\mathcal A}) \in \mathcal I_{\mathcal P}(P_0)\) we have \(\forall u,v \in [ m_{b,\mathcal V}[r](x), m_{b,\mathcal V}[r](z) -1] \times [ m_{b,\mathcal V}[r](y) , m_{b,\mathcal V}[r](t) -1],\,m_{b,\mathcal A}(A)(u, v) = m_{b,\mathcal A}(B)(u,v) + m_{b,\mathcal A}(C)(u,v)\) and as \(m_{b,\mathcal V}[r] \in \gamma (\mathfrak a)\) we have \(m_{b,\mathcal V}[r](X_1) = m_{b,\mathcal V}[r](z) \wedge m_{b,\mathcal V}[r](X_2) = m_{b,\mathcal V}[r](y) \wedge m_{b,\mathcal V}[r](t) = m_{b,\mathcal V}[r](X_2) + 1\) . From the definition of \(\mathbf {Post}\), we have \(m_{b,\mathcal A}(A)(m_{a,\mathcal V}(X_1), m_{a,\mathcal V}(X_2)) = m_{b,\mathcal A}(B)(m_{b,\mathcal V}(X_1), m_{b,\mathcal V}(X_2)) + m_{b,\mathcal A}(C)(m_{b,\mathcal V}(X_1), m_{b,\mathcal V}(X_2))\) therefore \(m_{a,\mathcal A}(A)(m_{a,\mathcal V}[r](z), m_{a,\mathcal V}[r](y)) = m_{a,\mathcal A}(B)(m_{a,\mathcal V}[r](z), m_{a,\mathcal V}[r](y)) + m_{a,\mathcal A}(C)(m_{a,\mathcal V}[r](z), m_{a,\mathcal V}[r](y))\), and as \(m_{a,\mathcal V}[r](t) = m_{a,\mathcal V}[r](y) + 1\) we have \(\forall u,v \in [ m_{a,\mathcal V}[r](x) , m_{a,\mathcal V}[r](z)] \times [ m_{a,\mathcal V}[r](y) , m_{a,\mathcal V}[r](t) -1],\, m_{a,\mathcal A}(A)(u,v) = m_{a,\mathcal A}(B)(u,v) + m_{a,\mathcal A}(C)(u,v)\). As \(x',y',z',t'\) are fresh variables and as the only available predicate states that the addition of B and C holds in some cells of A (thanks to the hypothesis of Sect. 4) then \(m_{a,\mathcal V}[r, x' \rightarrow x , y' \rightarrow y,z' \rightarrow z+1,t' \rightarrow t],m_{a,\mathcal A}) \in \mathcal I_{\mathcal P}(\mathfrak P \setminus P_0)\). Finally \(m_{a,\mathcal V}[r, x' \rightarrow x , y' \rightarrow y,z' \rightarrow z+1,t' \rightarrow t],m_{a,\mathcal A}) \in \mathcal I_{\mathcal P}((\mathfrak P \setminus P_0) \cup \{P_+(A,B,C,x',y',z',t')\})\) and as \(m_{a,\mathcal V}[r, x' \rightarrow x , y' \rightarrow y,z' \rightarrow z+1,t' \rightarrow t] \in \gamma _{\mathcal V}(\mathfrak a \sqcap _{\mathcal B, \mathcal V} I)\) we have \(m_a = (m_{a,\mathcal V}[r, x' \rightarrow x , y' \rightarrow y,z' \rightarrow z+1,t' \rightarrow t]_{\mid \mathbb X\cup \mathbb S},m_{a,\mathcal A}) \in \gamma (\mathbf {Post}^{\sharp }\llbracket C\rrbracket (S^{\sharp }))\)

    • The cases Down, Left, Up are very similar.

    • \(\forall P_0 \in \mathfrak P,\,\textsc {find\_extension}((P_0,\mathfrak a),A,B,C,X_1,X_2) = \texttt {None}\)

    • Case \(P_+(A',B',C',\_,\_,\_,\_) \in \mathfrak P \text { with } A'=A \wedge B'=B \wedge C'=C\).

      • Case \(\sharp \mathfrak P < M_{pred}\): \(m_{a,\mathcal V}[r,x' \leftarrow X_1,y' \leftarrow X_2,z' \leftarrow X_1 + 1, t' \leftarrow X_2 + 1] \in \gamma _{\mathcal V}(\mathfrak a \sqcap _{\mathcal B,\mathcal V} x' = X_1 \wedge y' = X_2 \wedge z' = X_1 +1 \wedge t' = X_2+1)\). As \(m_{a,\mathcal A}(A)(m_{b,\mathcal V}(X_1),m_{b,\mathcal V}(X_2)) = m_{b,\mathcal A}(B)(m_{b,\mathcal V}(X_1),m_{b,\mathcal V}(X_2)) + m_{b,\mathcal A}(C)(m_{b,\mathcal V}(X_1),m_{b,\mathcal V}(X_2))\), that is \(\forall u,v \in [ m_{b,\mathcal V}(x') , m_{b,\mathcal V}(z')-1 ] \times [ m_{b,\mathcal V}(y') , m_{b,\mathcal V}(t')-1],\, m_{a,\mathcal A}(A)(u,v) = m_{a,\mathcal A}(B)(u,v) + m_{a,\mathcal A}(C)(u,v)\), thus proving \((m_{a,\mathcal V}[r,x' \leftarrow X_1,y' \leftarrow X_2,z' \leftarrow X_1 + 1, t' \leftarrow X_2 + 1]_{\mid \mathbb X \cup \mathbb S},m_{a,\mathcal A})\in \mathcal I_{\mathcal P}(P_+(A,B,C,x',y',z',t'))\), and as variables \(x',y',z',t'\) are fresh we still have : \(\forall P \in \mathfrak P,\,(m_{a,\mathcal V}[r,x' \leftarrow X_1,y' \leftarrow X_2,z' \leftarrow X_1 + 1, t' \leftarrow X_2 + 1],m_{a,\mathcal A})\in \mathcal I_{\mathcal P}(P)\) therefore proving that \(m_a \in \gamma (\mathbf {Post}^{\sharp }\llbracket C\rrbracket (S^{\sharp }))\).

      • Case \(\sharp \mathfrak P \ge M_{pred}\): Trivial from \(A'=A \wedge B'=B \wedge C'=C\).

    • Otherwise: \(m_{a,\mathcal V}[r] = m_{b,\mathcal V}[r] \in \gamma _{\mathcal V}(\mathfrak a)\), moreover \((m_{a,\mathcal V},m_{a,\mathcal A}) \in \mathcal I_{\mathcal P}(\emptyset )\). Thus \(m_{a} \in \gamma (\mathbf {Post}^{\sharp }\llbracket C \rrbracket (S^{\sharp }))\).

  • \(C = C_1\ ;\ C_2\): Let \(S,S^{\sharp }\) be such that \(S \subseteq \gamma (S^{\sharp })\). By the induction hypothesis applied to \(C_1\) we have \(\mathbf {Post} \llbracket C_1 \rrbracket (S) \subseteq \gamma (\mathbf {Post}^{\sharp }\llbracket C_1\rrbracket (S^{\sharp }))\); applying the induction hypothesis to \(C_2\) with this pair of states yields \(\mathbf {Post} \llbracket C_2 \rrbracket (\mathbf {Post} \llbracket C_1 \rrbracket (S)) \subseteq \gamma (\mathbf {Post}^{\sharp }\llbracket C_2\rrbracket (\mathbf {Post}^{\sharp }\llbracket C_1\rrbracket (S^{\sharp })))\). Hence, from the definitions, \(\mathbf {Post} \llbracket C_1 ; C_2 \rrbracket (S) \subseteq \gamma (\mathbf {Post}^{\sharp }\llbracket C_1 ; C_2 \rrbracket (S^{\sharp }))\).

  • \(C = \texttt {If}\ B\ \texttt {then}\ C_1\ \texttt {else}\ C_2\): The proof for this case is not detailed here, as the analysis we propose for this command is classical and its soundness follows from the soundness of the underlying abstraction.

  • \(\texttt {While}\ B\ \texttt {do}\ C\ \texttt {done}\): The proof for this case is likewise not detailed here, as the analysis we propose for this command is classical and its soundness follows from the soundness of the underlying abstraction. However, for the usual proof to work, we need the two following lemmas. \(\square \)
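
Before turning to the lemmas, and as announced before the case analysis, here is a minimal OCaml sketch of the abstract postcondition for the simplest case, the scalar assignment \(X := E\): the predicate sets are kept unchanged (predicate variables live in \(\mathbb Y\), disjoint from \(\mathbb X\)) and only the numerical components are updated. The assign parameter is a placeholder of ours for the underlying domain's transfer function.

  (* Hedged sketch: abstract postcondition of a scalar assignment X := E. Predicates
     only mention auxiliary variables, so they are preserved unchanged; only the
     numerical component of each monomial goes through the underlying assignment. *)
  let post_assign ~assign (x : string) (e : string) (state : ('p * 'a) list) : ('p * 'a) list =
    List.map (fun (preds, num) -> (preds, assign x e num)) state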

Lemma 1

\(\forall S ^{\sharp },\, \gamma (S^{\sharp }) \subseteq \gamma (\mathbf {Merge}_+(S^{\sharp }))\).

Proof

Let \(m_b \in \gamma (S^{\sharp })\); by definition: \(\exists r,\, \exists (\mathfrak P, \mathfrak a) \in S^{\sharp },\, m_{b,\mathcal V}[r] \in \gamma _{\mathcal V}(\mathfrak a) \wedge \forall P \in \mathfrak P,\, (m_{b,\mathcal V}[r], m_{b,\mathcal A}) \in \mathcal I_{\mathcal P}(P)\). We now show, by induction on the loop in \(\mathbf {Merge}_{+,\mathfrak M}\) (defined in Algorithm 2, whose notations we reuse), the following invariant: \(m_b \in \gamma (\{(\mathfrak P_o, \mathfrak a_o)\})\). This is initially true thanks to the previous remark. Now let \((P_0,P_1)\) be a pair of predicates of \(\mathfrak P_o\) such that \(\textsc {find\_merge}(P_0,P_1,\mathfrak a_o) = \texttt {Some(Right)}\), with \(P_0 = P_+(A,B,C,x_0,y_0,z_0,t_0)\) and \(P_1 = P_+(A,B,C,x_1,y_1,z_1,t_1)\); thanks to \(\textsc {find\_merge}\) we have \(\mathfrak a_o \sqsubseteq _{\mathcal B,\mathcal V} ((x_0 = x_1) \wedge (z_0 = z_1) \wedge (t_0 = y_1))\). By induction \(\exists r,\, m_{b,\mathcal V}[r] \in \gamma _{\mathcal V}(\mathfrak a_o) \wedge \forall P \in \mathfrak P_o,\ (m_{b,\mathcal V}[r], m_{b,\mathcal A})\in \mathcal I_{\mathcal P}(P)\), hence \(m_{b,\mathcal V}[r,x' \rightarrow x_0,y' \rightarrow y_0,z' \rightarrow z_1,t' \rightarrow t_1] \in \gamma _{\mathcal V}(\mathfrak a_o \sqcap _{\mathcal B,\mathcal V} I)\). Moreover \(\forall u,v \in [ m_{b,\mathcal V}[r](x_0) , m_{b,\mathcal V}[r](z_0) -1] \times [ m_{b,\mathcal V}[r](y_0) , m_{b,\mathcal V}[r](t_0) -1],\, m_{b,\mathcal A}(A)(u,v) = m_{b,\mathcal A}(B)(u,v) + m_{b,\mathcal A}(C)(u,v)\) and \(\forall u,v\in [ m_{b,\mathcal V}[r](x_1),m_{b,\mathcal V}[r](z_1) -1] \times [ m_{b,\mathcal V}[r](y_1) , m_{b,\mathcal V}[r](t_1) -1],\, m_{b,\mathcal A}(A)(u,v) = m_{b,\mathcal A}(B)(u,v) + m_{b,\mathcal A}(C)(u,v)\), and as \(m_{b,\mathcal V}[r](x_0) = m_{b,\mathcal V}[r](x_1)\), \(m_{b,\mathcal V}[r](z_0) = m_{b,\mathcal V}[r](z_1)\) and \(m_{b,\mathcal V}[r](t_0) = m_{b,\mathcal V}[r](y_1)\) we get \(\forall u,v \in [ m_{b,\mathcal V}[r](x_0) ,m_{b,\mathcal V}[r](z_1) -1] \times [ m_{b,\mathcal V}[r](y_0) , m_{b,\mathcal V}[r](t_1) -1],\, m_{b,\mathcal A}(A)(u,v) = m_{b,\mathcal A}(B)(u,v) + m_{b,\mathcal A}(C)(u,v)\), hence \((m_{b,\mathcal V}[r,x' \rightarrow x_0,y' \rightarrow y_0,z' \rightarrow z_1,t' \rightarrow t_1],m_{b,\mathcal A}) \in \mathcal I_{\mathcal P}(P_+(A,B,C,x',y',z',t'))\). This ensures the invariant \(m_b \in \gamma (\{(\mathfrak P_o, \mathfrak a_o)\})\). The case \(\textsc {find\_merge}(P_0,P_1,\mathfrak a_o) = \texttt {Some(Down)}\) is identical. We note that \(\mathbf {Merge}_{+,\mathfrak M}\) terminates because the number of predicates appearing in \(\mathfrak P_o\) is strictly decreasing. The previous invariant gives us that \(m_b \in \gamma (\mathbf {Merge}_{+,\mathfrak M}(\mathfrak P, \mathfrak a))\), hence \(\gamma (S^{\sharp }) \subseteq \gamma (\mathbf {Merge}_{+}(S^{\sharp }))\). \(\square \)
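
For illustration, the Right case of the merge argued above corresponds, with concrete bounds, to the OCaml sketch below; the analyzer instead checks the three equalities symbolically with the \(\sqsubseteq _{\mathcal B,\mathcal V}\) test, and the rect encoding is our own.

  (* Hedged sketch with concrete bounds: two rectangular predicates over the same
     matrices that share their row span and are horizontally adjacent (t0 = y1)
     merge into a single rectangle, as in the Right case of find_merge. *)
  type rect = { x : int; y : int; z : int; t : int }

  let merge_right (p0 : rect) (p1 : rect) : rect option =
    if p0.x = p1.x && p0.z = p1.z && p0.t = p1.y
    then Some { x = p0.x; y = p0.y; z = p1.z; t = p1.t }  (* covers [x0,z0) x [y0,t1) *)
    else None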

Lemma 2

(Widening ensures convergence) For every sequence \((V_n^{\sharp })_{n\in \mathbb N}\) of well-formed, well-named abstract states such that \(V_{n+1}^{\sharp } = V_n^{\sharp } \triangledown f(V_n^{\sharp })\), the sequence \((V_n^{\sharp })_{n \in \mathbb N}\) converges.

Proof

Due to the definition of the widening, \(V_n^{\sharp }\) and \(f(V_n^{\sharp })\) must have the same shape, so the sequence \((V_n^{\sharp })\) has constant shape. This allows us (modulo a renaming of variables) to write \(\forall n \in \mathbb N,\,V_n^{\sharp } = \bigcup _{i \in I}(\mathfrak P_i,\mathfrak a_{n,i,0})\) and \(\forall n \in \mathbb N,\,f(V_n^{\sharp }) = \bigcup _{i \in I}(\mathfrak P_i,\mathfrak a_{n,i,1})\), with I a finite set. Moreover, \(\forall n \in \mathbb N,\, V^{\sharp }_n\) is well-formed, therefore for all \(i \ne j\) in \(I\), \(\mathfrak P_i\) and \(\mathfrak P_j\) have different shapes, hence \(\forall n \in \mathbb N,\,V_{n+1}^{\sharp } = \bigcup _{i \in I}(\mathfrak P_i,\mathfrak a_{n,i,0} \triangledown _{\mathcal V} \mathfrak a_{n,i,1})= \bigcup _{i \in I}(\mathfrak P_i,\mathfrak a_{n+1,i,0})\). As \(\forall i \in I,\,\mathfrak a_{n+1,i,0} = \mathfrak a_{n,i,0} \triangledown _{\mathcal V} \mathfrak a_{n,i,1}\), the properties of \(\triangledown _{\mathcal V}\) ensure that \(\forall i \in I,\, (\mathfrak a_{n,i,0})_{n\in \mathbb N}\) converges. Therefore (I being finite) the sequence \((V_n^{\sharp })\) converges. \(\square \)

Lemma 3

The analysis terminates.

Proof

Recall that the analyzer is only allowed a finite number \(M_{pred}\) of simultaneous uses of the addition predicate in monomials, hence ensuring that well-formed abstract states have a finite number of possible shapes, more precisely \(2^{M_{pred}+1}\) different shapes. We consider the following shape function: \(\mathbf {sh}(S^{\sharp })=\bigcup _{(\mathfrak P,\mathfrak a)\in S^{\sharp }} \{\sharp (\mathfrak P)\}\). As we use only an addition predicate, it is easy to show that two abstract states \(S_1^{\sharp }\) and \(S_2^{\sharp }\) have the same shape if and only if \(\mathbf {sh}(S_1^{\sharp })=\mathbf {sh}(S_2^{\sharp })\). Let us now consider a sequence with \(U_0^{\sharp }\) well-formed and \(U_{n+1}^{\sharp } = \texttt { let } S_P^{\sharp } = U_n^{\sharp } \sqcup \mathbf {Post}^{\sharp }\llbracket C \rrbracket (U_n^{\sharp } \sqcap _{\mathcal B} B) \texttt { in if } S_P^{\sharp } =_{F} U_n^{\sharp } \texttt { then } U_n^{\sharp } \triangledown S_P^{\sharp } \texttt { else } S_P^{\sharp }\). The definition of \(\sqcup \) ensures that \(\forall n, U_n^{\sharp } \text { is well-formed}\). By definition of \(\sqcup \) and the \(\mathbf {wf}\) function we have \(\forall A^{\sharp },\,\mathbf {sh}(U_n^{\sharp }) \subseteq \mathbf {sh}(U_n^{\sharp } \sqcup A^{\sharp })\), and by definition of the widening \(\forall A^{\sharp },B^{\sharp },\, A^{\sharp }=_F B^{\sharp } \Rightarrow A^{\sharp } \triangledown B^{\sharp } =_F A^{\sharp }\), hence \(\mathbf {sh}(U_{n}^{\sharp }) \subseteq \mathbf {sh}(U_{n+1}^{\sharp })\). Moreover \(\forall n,\, \mathbf {sh}(U_n) \subseteq \{0,\dots ,M_{pred}\}\), hence the sequence \(\mathbf {sh}(U_n)\) converges. Let us consider the sub-sequence \(V_n^{\sharp } = U_{n+m}^{\sharp }\) such that \(\forall n,\,\mathbf {sh}(V_n^{\sharp }) = \mathbf {sh}(V_0^{\sharp })\); therefore \(V_{n+1}^{\sharp } = V_n^{\sharp } \triangledown (V_n^{\sharp } \sqcup \mathbf {Post}^{\sharp }\llbracket C \rrbracket (V_n^{\sharp } \sqcap _{\mathcal B} B))\). Finally, the properties of \(\triangledown \) (Lemma 2) ensure the convergence of \((V_n^{\sharp })_{n\in \mathbb N}\) and thus of \((U_n^{\sharp })_{n\in \mathbb N}\). The termination proof in this case could be carried out by simpler considerations; however, the idea of defining a shape function and showing its convergence can be reused to prove the convergence of the other abstractions we have set up. \(\square \)
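
As an illustration of this argument, the sketch below implements a shape function of the same form as \(\mathbf {sh}\) and the iteration step of the sequence \((U_n^{\sharp })\); join, widen and post are placeholders of ours for \(\sqcup \), \(\triangledown \) and the guarded abstract postcondition.

  (* Hedged sketch of the iteration strategy used in the termination argument: the
     shape function collects the predicate counts of the monomials, and widening is
     only applied once two consecutive iterates have the same shape. *)
  module IntSet = Set.Make (Int)

  let sh (state : ('p list * 'a) list) : IntSet.t =
    List.fold_left (fun acc (preds, _) -> IntSet.add (List.length preds) acc)
      IntSet.empty state

  let step ~join ~widen ~post (u : ('p list * 'a) list) : ('p list * 'a) list =
    let s_p = join u (post u) in       (* U_n join Post#[[C]](U_n meet B); the guard is folded into post *)
    if IntSet.equal (sh s_p) (sh u)    (* same shape: widen to force convergence *)
    then widen u s_p
    else s_p                           (* shape still growing: keep joining *)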

Appendix C: Programs

C.1 Addition tiled

figure w
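
The original listing (figure w) is not reproduced here; as an illustration only, here is a tiled matrix addition in OCaml of the kind this program implements, with a block size bs and an array encoding of our own choosing.

  (* Illustration only (the paper's listing is in figure w): tiled matrix addition
     with block size bs. The functional property to be inferred is the same as for
     the untiled version: a.(i).(j) = b.(i).(j) +. c.(i).(j) for every cell. *)
  let add_tiled (a : float array array) b c n m bs =
    let i = ref 0 in
    while !i < n do
      let j = ref 0 in
      while !j < m do
        for ii = !i to min (!i + bs) n - 1 do
          for jj = !j to min (!j + bs) m - 1 do
            a.(ii).(jj) <- b.(ii).(jj) +. c.(ii).(jj)
          done
        done;
        j := !j + bs
      done;
      i := !i + bs
    done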

C.2 Gemm

figure x
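
The original listing (figure x) is likewise not reproduced here; as an illustration only, here is a general matrix multiplication \(C := \alpha AB + \beta C\) in OCaml in the style of the analyzed programs, with dimensions n, k, m of our own choosing.

  (* Illustration only (the paper's listing is in figure x): general matrix
     multiplication, C := alpha * A * B + beta * C, with A of size n x k,
     B of size k x m and C of size n x m. *)
  let gemm alpha (a : float array array) b beta (c : float array array) n k m =
    for i = 0 to n - 1 do
      for j = 0 to m - 1 do
        c.(i).(j) <- beta *. c.(i).(j);
        for l = 0 to k - 1 do
          c.(i).(j) <- c.(i).(j) +. alpha *. a.(i).(l) *. b.(l).(j)
        done
      done
    done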

Cite this article

Journault, M., Miné, A. Inferring functional properties of matrix manipulating programs by abstract interpretation. Form Methods Syst Des 53, 221–258 (2018). https://doi.org/10.1007/s10703-017-0311-x
