Component-Graph Construction

Journal of Mathematical Imaging and Vision

Abstract

Component-trees are classical tree structures for grey-level image modelling. Component-graphs are defined as a generalization of component-trees to images taking their values in any (totally or partially) ordered set. Like component-trees, component-graphs are a lossless image model; they can thus allow for the development of various image processing approaches. However, component-graphs are not trees but directed acyclic graphs, which makes their construction non-trivial, leading to nonlinear time costs and nonlinear-space data structures. In this theoretical article, we discuss the notion(s) of component-graph, and we propose a strategy for their efficient building and representation, which are necessary conditions for further involving them in image processing approaches.


Notes

  1. This means that for all \(v,w \in V\), the two elements \(\bigvee ^\leqslant \{v,w\}\) and \(\bigwedge ^\leqslant \{v,w\}\) exist and belong to V.

References

  1. Najman, L., Talbot, H. (eds.): Mathematical Morphology: From Theory to Applications. ISTE/J. Wiley & Sons (2010)

  2. Salembier, P., Oliveras, A., Garrido, L.: Anti-extensive connected operators for image and sequence processing. IEEE Trans. Image Process. 7, 555–570 (1998)

  3. Monasse, P., Guichard, F.: Scale-space from a level lines tree. J. Vis. Commun. Image Represent. 11, 224–236 (2000)

  4. Salembier, P., Garrido, L.: Binary partition tree as an efficient representation for image processing, segmentation and information retrieval. IEEE Trans. Image Process. 9, 561–576 (2000)

  5. Passat, N., Naegel, B.: Component-hypertrees for image segmentation. In: ISMM, International Symposium on Mathematical Morphology, Proceedings, Lecture Notes in Computer Science, vol. 6671, pp. 284–295. Springer (2011)

  6. Perret, B., Cousty, J., Tankyevych, O., Talbot, H., Passat, N.: Directed connected operators: asymmetric hierarchies for image filtering and segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 37(6), 1162–1176 (2015)

  7. Naegel, B., Passat, N.: Component-trees and multi-value images: a comparative study. In: International Symposium on Mathematical Morphology (ISMM), Lecture Notes in Computer Science, vol. 5720, pp. 261–271. Springer (2009)

  8. Passat, N., Naegel, B.: An extension of component-trees to partial orders. In: International Conference on Image Processing (ICIP), pp. 3981–3984 (2009)

  9. Passat, N., Naegel, B.: Component-trees and multivalued images: structural properties. J. Math. Imaging Vis. 49, 37–50 (2014)

  10. Kurtz, C., Naegel, B., Passat, N.: Connected filtering based on multivalued component-trees. IEEE Trans. Image Process. 23, 5152–5164 (2014)

  11. Naegel, B., Passat, N.: Toward connected filtering based on component-graphs. In: International Symposium on Mathematical Morphology (ISMM), Lecture Notes in Computer Science, vol. 7883, pp. 350–361. Springer (2013)

  12. Naegel, B., Passat, N.: Colour image filtering with component-graphs. In: International Conference on Pattern Recognition (ICPR), pp. 1621–1626 (2014)

  13. Carlinet, E., Géraud, T.: MToS: a tree of shapes for multivariate images. IEEE Trans. Image Process. 24, 5330–5342 (2015)

  14. Xu, Y., Géraud, T., Najman, L.: Connected filtering on tree-based shape-spaces. IEEE Trans. Pattern Anal. Mach. Intell. 38, 1126–1140 (2016)

  15. Grossiord, É., Naegel, B., Talbot, H., Passat, N., Najman, L.: Shape-based analysis on component-graphs for multivalued image processing. In: International Symposium on Mathematical Morphology (ISMM), Lecture Notes in Computer Science, vol. 9082, pp. 446–457. Springer (2015)

  16. Grossiord, É., Naegel, B., Talbot, H., Passat, N., Najman L.: Shape-based analysis on component-graphs for multivalued image processing. In: Benediktsson, J., Chanussot, J., Najman, L., Talbot, H. (eds.) Mathematical Morphology and Its Applications to Signal and Image Processing, ISMM 2015. Lecture Notes in Computer Science, vol. 9082. Springer, Cham (2015)

  17. Passat, N., Naegel, B., Kurtz, C.: Implicit component-graph: a discussion. In: International Symposium on Mathematical Morphology (ISMM), Lecture Notes in Computer Science, vol. 10225, pp. 235–248. Springer (2017)

  18. Salembier, P., Serra, J.: Flat zones filtering, connected operators, and filters by reconstruction. IEEE Trans. Image Process. 4, 1153–1160 (1995)

  19. Kong, T.Y., Rosenfeld, A.: Digital topology: introduction and survey. Comput. Vis. Graph. Image Process. 48, 357–393 (1989)

  20. Adams, R., Bischof, L.: Seeded region growing. IEEE Trans. Pattern Anal. Mach. Intell. 16, 641–647 (1994)

  21. Jones, R.: Connected filtering and segmentation using component trees. Comput. Vis. Image Underst. 75, 215–228 (1999)

  22. Aho, A.V., Garey, M.R., Ullman, J.D.: The transitive reduction of a directed graph. SIAM J. Comput. 1, 131–137 (1972)

Acknowledgements

The research leading to these results was funded by the French Agence Nationale de la Recherche (Grant Agreements ANR-15-CE23-0009 and ANR-18-CE45-0018).

Author information

Correspondence to Nicolas Passat.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

A Proofs of Propositions

Proposition 1 We assume that \((V,\leqslant )\) is totally ordered. The component-tree is the Hasse diagram of \((\varPsi ,\subseteq )\), whereas the (strong and weak) component-graphs \({{\dot{{\mathfrak {G}}}}}\) and \({\ddot{{\mathfrak {G}}}}\) are the Hasse diagrams of \(({\dot{\varTheta }},\trianglelefteq )\) and \((\ddot{\varTheta },\trianglelefteq )\), respectively. Proving the isomorphism between the Hasse diagrams is the same as proving the isomorphism between the ordered sets. Let \(X \in \varPsi \). There exists a value \(v \in V\) such that \(X \in {\mathcal {C}}[\lambda _v(I)]\). We choose v as the maximal value of V satisfying this property (such a maximum exists, as \(\leqslant \) is total). It is plain that \((X,v) \in {\dot{\varTheta }}\). As v is the maximal value such that \(X \in {\mathcal {C}}[\lambda _v(I)]\), it follows that for any \(w > v\), we have \(X \not \subseteq \lambda _w(I)\), and more generally, \(X \not \subseteq \bigcup _{w > v} \lambda _w(I)\). Then, there exists \(x \in X \setminus \bigcup _{w > v} \lambda _w(I)\), and this point satisfies \(I(x) = v\). It follows from Eq. (18) that \((X,v) \in \ddot{\varTheta }\). Let us now assume that \((X,v) \in \ddot{\varTheta }\). From Eq. (19), we have \((X,v) \in {\dot{\varTheta }}\). Finally, let us assume that \((X,v) \in {\dot{\varTheta }}\). From Eq. (17), we directly have \(X \in \varPsi \). In other words, the projective mapping \(\pi : (X,v) \mapsto X\) between \({\dot{\varTheta }}\), \(\ddot{\varTheta }\) and \(\varPsi \) is a bijection, and moreover \({\dot{\varTheta }} = \ddot{\varTheta }\). At this stage, since the orders \(\trianglelefteq _s\) and \(\trianglelefteq _w\) are the same for \({\dot{\varTheta }}\) and \(\ddot{\varTheta }\), it is plain that \(({\dot{\varTheta }},\trianglelefteq _s)\) and \((\ddot{\varTheta },\trianglelefteq _s)\) (resp. \(({\dot{\varTheta }},\trianglelefteq _w)\) and \((\ddot{\varTheta },\trianglelefteq _w)\)) are isomorphic. More strongly, we have \(\trianglelefteq _s \ = \ \trianglelefteq _w\). Indeed, for two nodes \(K = (X,v), K' = (Y,w) \in {\dot{\varTheta }}\), we have \(X \subset Y \Rightarrow w \leqslant v\), due to the totality of \(\leqslant \). Now, let us consider that \(K = (X,v), K' = (Y,w) \in {\dot{\varTheta }}\) and \(K \trianglelefteq K'\). From Eq. (8), we have \(X \subseteq Y\). Conversely, let us consider that \(X, Y \in \varPsi \) and \(X \subseteq Y\). By considering the inverse function \(\pi ^{-1}\) and by setting \(K = (X,v) = \pi ^{-1}(X)\) and \(K' = (Y,w) = \pi ^{-1}(Y)\), we have, by construction, \(w \leqslant v\), and it follows that \(K \trianglelefteq K'\). Finally, the three ordered sets are isomorphic, and so are the associated Hasse diagrams, namely the component-tree \({\mathfrak {T}}\) and the two component-graphs \({{\dot{{\mathfrak {G}}}}}\) and \({\ddot{{\mathfrak {G}}}}\). \(\square \)

Proposition 2 Let \(x \in \varOmega \), and set \(v = I(x)\). Let \(X \in {\mathcal {C}}[\lambda _{v}(I)]\) be such that \(x \in X\). We have \(K = (X,v) \in \ddot{\varTheta }\subseteq \mathring{\varTheta }\), and we also have \(C_K(x) = v\). For any \(K' = (Y,w) \in \mathring{\varTheta }\), if \(w < v\), we have \(C_{K'}(x) \leqslant w < v\); if \(w > v\), we have \(C_{K'}(x) = \bot < v\); and if \(w = v\), we have \(C_{K'}(x) \leqslant v\). It follows that \(I = \bigvee ^\leqslant _{K \in {\mathring{\varTheta }}} C_K\). The term \(\bigvee ^\leqslant _{v \in V}\bigvee ^\leqslant _{X \in {\mathcal {C}}[\lambda _v(I)]} C_{(X,v)}\) is simply a rewriting of \(\bigvee ^\leqslant _{K \in {\mathring{\varTheta }}} C_K\) for \({\mathring{\varTheta }} = \varTheta \). \(\square \)

Proposition 3 Let \((X,v), (Y,w) \in \varTheta \). From the definition of \(\phi \), we have \(X \subseteq Y \Leftrightarrow \phi (X) \subseteq \phi (Y)\). Then, it follows from the definitions of \(\trianglelefteq _w\) and \(\trianglelefteq _s\) (simply noted \(\trianglelefteq \) hereinafter) that \(((X,v) \trianglelefteq (Y,w)) \Leftrightarrow ((\phi (X),v) \trianglelefteq _\varPhi (\phi (Y),w))\). This is a fortiori true for the transitive reductions \(\blacktriangleleft \) and \(\blacktriangleleft _\varPhi \) of \(\trianglelefteq \) and \(\trianglelefteq _\varPhi \), respectively. Still from the definition of \(\phi \), we have \(\phi (\ddot{\varTheta }) = \ddot{\varTheta }_\varPhi \), and finally, the isomorphism between \((\varTheta ,\trianglelefteq )\) and \((\varTheta _\varPhi ,\trianglelefteq _\varPhi )\) also implies that \(\phi ({\dot{\varTheta }}) = {\dot{\varTheta }}_\varPhi \). As a consequence, Eq. (34) holds in all cases. \(\square \)

Proposition 4 Equation (51) is a rewriting of the definition of \(\theta \) [Eq. (49)]. Equation (52) is a rewriting of the definition of \(\varepsilon \) [Eq. (50)]. \(\square \)

B Proofs of Properties

Property 1 Equation (16) is simply a rewriting of a part of Eq. (12) with \(\trianglelefteq _w \ = \ \trianglelefteq _2\) and \(\trianglelefteq _s \ = \ \trianglelefteq _3\). What we aim to prove is then the part \(K \trianglelefteq _3 K' \Rightarrow K \trianglelefteq _2 K'\) of Eq. (12). We set \(K = (X,v)\) and \(K' = (Y,w)\). From Eqs. (8) and (9), we have \(K \trianglelefteq _3 K' \Leftrightarrow (X,v) \trianglelefteq _3 (Y,w) \Leftrightarrow X \subseteq Y \wedge w \leqslant v \Leftrightarrow (X \subset Y \vee X = Y) \wedge w \leqslant v \Leftrightarrow (X \subset Y \wedge w \leqslant v) \vee (X = Y \wedge w \leqslant v) \Rightarrow (X \subset Y) \vee (X = Y \wedge w \leqslant v) \Leftrightarrow (X,v) \trianglelefteq _2 (Y,w) \Leftrightarrow K \trianglelefteq _2 K'\). \(\square \)

Property 2 Let \(w \in v^\downarrow \). We have \(w \leqslant v\) and then \(X \subseteq \lambda _w(I)\). As X is connected in \(\varOmega \), it belongs to a unique connected component \(X' \in {\mathcal {C}}[\lambda _w(I)]\). In particular, we have \(X \subseteq X'\) and then \((X',w) \in K^\uparrow \). Thus, \(\sigma \) is surjective. Let \((Y_1,w_1), (Y_2,w_2) \in K^\uparrow \) be such that \(\sigma ((Y_1,w_1)) = \sigma ((Y_2,w_2))\). Then, we have \(w_1 = w_2\). It follows that \(Y_1, Y_2 \in {\mathcal {C}}[\lambda _{w_1}(I)]\). But we have \(X \subseteq Y_1\) and \(X \subseteq Y_2\) with X non-empty. Then, we have \(Y_1 \cap Y_2 \ne \emptyset \), which implies \(Y_1 = Y_2\). Thus, \(\sigma \) is injective. The result follows. \(\square \)

Property 3 We set \(K_1 = (X_1,v_1)\) and \(K_2 = (X_2,v_2)\). We have \(K \trianglelefteq _w K_1\) (resp. \(K \trianglelefteq _w K_2\)) and then \(X \subseteq X_1\) (resp. \(X \subseteq X_2\)). Then, we have \(X_1 \cap X_2 \ne \emptyset \). Since \(X_1 \in {\mathcal {C}}[\lambda _{v_1}(I)]\) and \(X_2 \in {\mathcal {C}}[\lambda _{v_2}(I)]\), the connectedness of \(X_1\) and \(X_2\), together with the fact that \(v_1 \geqslant v_2\), leads to \(X_1 \subseteq X_2\). Thus, we have \((X_1 \subseteq X_2) \wedge (v_2 \leqslant v_1)\), i.e. \(K_1 \trianglelefteq _s K_2\), and a fortiori \(K_1 \trianglelefteq _w K_2\). \(\square \)

Property 4 We set \(K_1 = (X_1,v_1)\) and \(K_2 = (X_2,v_2)\). The “\(\Rightarrow \)” part of Eq. (25) follows the same scheme as Property 3. Then, \(v_2 \leqslant v_1\) implies \(X_1 \subseteq X_2\), and it follows that \( K_1 \trianglelefteq _s K_2\) (and a fortiori, \( K_1 \trianglelefteq _w K_2\)). The “\(\Leftarrow \)” part of Eq. (25) derives from the very definition of \(\trianglelefteq _s\). \(\square \)

Property 5 Let \(K = (X,v), K' = (Y,w) \in \ddot{\varTheta }\). Let us suppose that \(K \trianglelefteq _w K'\). Then, we have either (1) \(X \subset Y\) or (2) \(X = Y\) and \(w \leqslant v\). In case (2), we directly have \(K \trianglelefteq _s K'\). Now, let us consider that case (1) holds. Since \(K \in \ddot{\varTheta }\), there exists \(x \in X\) such that \(I(x) = v\). Since \(X \subset Y\), we have \(x \in Y\). But \(Y \subseteq \lambda _w(I)\) and then \(w \leqslant v\). It follows that \(K \trianglelefteq _s K'\). We then have \(K \trianglelefteq _w K' \Rightarrow K \trianglelefteq _s K'\), and from Property 1, it follows that \(K \trianglelefteq _w K' \Leftrightarrow K \trianglelefteq _s K'\). The result follows by transitive reduction of \(\trianglelefteq _w\) and \(\trianglelefteq _s\). \(\square \)

Property 6 Let \(K = (X,v), K' = (Y,w) \in {\dot{\varTheta }}\). Let us suppose that \(K \trianglelefteq _w K'\). Then, we have either (1) \(X \subset Y\) or (2) \(X = Y\) and \(w \leqslant v\). In case (2), we directly have \(K \trianglelefteq _s K'\). Now, let us consider that case (1) holds. Let \(u = \bigvee ^\leqslant \{v,w\} \in V\). Let us consider \(K'' = (Z,u)\) such that \(K'' \trianglelefteq _w K\) and \(K'' \trianglelefteq _w K'\). Such a node necessarily exists. Let \(x \in X\). Then, we have \(x \in \lambda _v(I)\) and thus \(v \leqslant I(x)\). Since \(x \in X \subset Y\), we have \(x \in \lambda _w(I)\) and thus \(w \leqslant I(x)\). But then, we have \(u = \bigvee ^\leqslant \{v,w\} \leqslant I(x)\). It follows that \(X \subseteq Z\), and since \(K'' \trianglelefteq _w K\), we also have \(Z \subseteq X\), and then \(X = Z\). It follows from the definition of \({\dot{\varTheta }}\) [Eq. (17)] that \(K'' = K\) and thus \(u = v\). It follows that \(w \leqslant \bigvee ^\leqslant \{v,w\} = u = v\). Then, we have \(K \trianglelefteq _s K'\). Consequently, we have \(K \trianglelefteq _w K' \Rightarrow K \trianglelefteq _s K'\), and from Property 1, it follows that \(K \trianglelefteq _w K' \Leftrightarrow K \trianglelefteq _s K'\). The result follows by transitive reduction of \(\trianglelefteq _w\) and \(\trianglelefteq _s\). \(\square \)

Property 7 Let \(v \in V\). Let \(X \in {\mathcal {C}}[\lambda _v(I)]\). Let \(x \in X\). We have \(v \leqslant I(x)\), and then \([x]_{\leftrightarrow _V} \subseteq X\). It follows that \(X = \bigcup _{x \in X} [x]_{\leftrightarrow _V}\). The definition and bijectivity of the two inverse functions \(\phi \) and \(\phi ^{-1}\) defined in Eqs. (32) and (33) directly follow from this equality for each value \(v \in V\). \(\square \)

Property 8 Let \(\rho (x) \in P\). By definition, we have \(x \in \rho (x)\) and then \(\rho (x) \ne \emptyset \). Let \(y \in \varOmega \). We can build sequences of adjacent points of \(\varOmega \), namely \(x_0 \smallfrown \ldots \smallfrown x_i \smallfrown \ldots \smallfrown x_t = y\) (\(t \ge 0\)), such that for any \(i \in [\![0, t-1]\!]\), we have \(I(x_i) > I(x_{i+1})\). Since \(\varOmega \) is finite, such sequences are also finite. By choosing a sequence of maximal length, we have \(x_0 \in \varLambda \) and thus \(y \in \rho (x_0)\). It follows that \(\varOmega = \bigcup P\). Then, P is a cover of \(\varOmega \). \(\square \)

Property 9 Let us assume that \(v \in \nu ((x,y))\). Then, there exists \(x' \smallfrown y'\) such that \(x' \in \sigma (x)\) and \(y' \in \sigma (y)\), and \(v \in I(x')^\downarrow \cap I(y')^\downarrow \) (from Eqs. (39) and (40)). Since \(x'\) (resp. \(y'\)) is in the influence zone of x (resp. y), there exists a sequence \(x = x_0 \smallfrown \ldots \smallfrown x_s = x'\) (resp. \(y = y_0 \smallfrown \ldots \smallfrown y_u = y'\)) of points within \(\sigma (x)\) (resp. \(\sigma (y)\)) such that for all \(i \in [\![0,s]\!]\) (resp. \(i \in [\![0,u]\!]\)), we have \(I(x_i) \geqslant v\) (resp. \(I(y_i) \geqslant v\)). By concatenating the first sequence and the reversed second sequence, we build a sequence \(x = x_0 \smallfrown \ldots \smallfrown x_t = y\) (\(t \ge 1\)) such that for all \(i \in [\![0,t]\!]\), \(x_i \in \sigma (x) \cup \sigma (y)\) and \(I(x_i) \geqslant v\). Now, let us assume that there exists a sequence \(x = x_0 \smallfrown \ldots \smallfrown x_t = y\) (\(t \ge 1\)) such that for all \(i \in [\![0,t]\!]\), \(x_i \in \sigma (x) \cup \sigma (y)\) and \(I(x_i) \geqslant v\). Let \(j \in [\![0,t]\!]\) be the maximal index such that \(x_j \in \sigma (x)\). Then, we must have \(j < t\) and \(x_{j+1} \in \sigma (y)\). In particular, we have \((x_{j},x_{j+1}) \in E(x,y)\). In addition, we have \(v \leqslant I(x_{j})\) and \(v \leqslant I(x_{j+1})\), i.e. \(v \in \nu ((x,y))\), from Eq. (39). \(\square \)

Property 10 Let \(K_1 = (X_1,v_1), K_2 = (X_2,v_2) \in \varTheta \). Let us suppose that \(K_1 \trianglelefteq K_2\). Let \(x \in \varLambda \) be the leaf-point such that \(\ell _{K_1} = \ell (x)\). In particular, we have \(x \in X_1\). For any other leaf-point \(y \in X_1\), we have \(\ell (x) \leqslant \ell (y)\). Let \(z \in \varLambda \) be the leaf-point such that \(\ell _{K_2} = \ell (z)\). For any other leaf-point \(y \in X_2\), we have \(\ell (z) \leqslant \ell (y)\). In particular, we have \(\ell (z) \leqslant \ell (x)\), i.e. \(\ell _{K_2} \leqslant \ell _{K_1}\). \(\square \)

Property 11 Let \(K_1 = (X_1,v_1), K_2 = (X_2,v_2) \in \varTheta \). Let us suppose that \(K_1 \blacktriangleleft _s K_2\). We have \(K_1 \trianglelefteq _s K_2\), and it follows that \(v_2 \leqslant v_1\). We also have \(X_1 \subseteq X_2\). Let \(v_3 \in V\) with \(v_3 \ne v_1\), and let us assume that \(v_2 \leqslant v_3 \leqslant v_1\). Let \(K_3 = (X_3,v_3) \in \varTheta \) be the node such that \(X_1 \subseteq X_3\); such a node exists and is unique. Then, we must have \(X_3 \subseteq X_2\). It follows that \(K_1 \trianglelefteq _s K_3 \trianglelefteq _s K_2\), with \(K_1 \ne K_3\). Then, \(K_1 \blacktriangleleft _s K_2\) implies that \(K_3 = K_2\), and then \(v_3 = v_2\). It follows that \(v_2 \prec v_1\). \(\square \)

Property 12 Let \(v,w \in V\). Let us assume that \(v \leqslant w\). Let \(x \in \varLambda ^w\). Then, from Eq. (54) and since \(v \leqslant w\), we have \(x \in \varLambda ^v\). Let \((x,y) \in \ \smallfrown _\varLambda ^w\). We have \(w \leqslant I(x)\) and \(w \leqslant I(y)\), and then \(v \leqslant w \leqslant I(x)\) and \(v \leqslant w \leqslant I(y)\). Thus, from Eq. (55), we have \((x,y) \in \ \smallfrown _\varLambda ^v\). It follows that any connected component Y of \({\mathfrak {Z}}^w\) is a subset of a connected component X of \({\mathfrak {Z}}^v\). \(\square \)

C Computation of the Connected Components

For the three maximal values \(v = l\), m and n of V, the three matrices \(B_v\) are defined only by terms \(b^v_{i,j} = (i = j \wedge I(x_i) = v)\) [Eq. (59)]. In other words, they only carry information corresponding to leaves. We have

$$\begin{aligned} C_n= & {} B^1_n = \left( \begin{array}{ccccccc} 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad \mathbf{1} &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ \end{array} \right) \end{aligned}$$
(66)
$$\begin{aligned} C_m= & {} B^1_m = \left( \begin{array}{ccccccc} 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad \mathbf{1} &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ \end{array} \right) \end{aligned}$$
(67)
$$\begin{aligned} C_l= & {} B^1_l = \left( \begin{array}{ccccccc} \mathbf{1} &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ \end{array} \right) \end{aligned}$$
(68)

We then have one valued connected component for each of the three values, namely (5, n), (2, m) and (1, l). In other words, we have \(\theta (n) = \{5\}\), \(\theta (m) = \{2\}\) and \(\theta (l) = \{1\}\).

For \(v = k\), the matrix \(B_k\) is defined as the disjunction of the three matrices \(C_l\), \(C_m\) and \(C_n\). This leads to \(b^k_{1,1} = b^k_{2,2} = b^k_{5,5} = 1\). Moreover, we have \(k \in \nu ^\bigtriangledown ((x_1,x_2))\), which implies \(b^k_{1,2} = b^k_{2,1} = 1\). In addition, the leaf-point \(x_3\) satisfies \(I(x_3) = k\); then, we set \(b^k_{3,3} = 1\). We have

$$\begin{aligned} C_k = B^1_k = \left( \begin{array}{ccccccc} \mathbf{1} &{}\quad \mathbf{1} &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ \mathbf{1} &{}\quad \mathbf{1} &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad \mathbf{1} &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad \mathbf{1} &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ \end{array} \right) \end{aligned}$$
(69)

Then, we have three valued connected components, namely (1, k), (3, k) and (5, k). In other words, we have \(\theta (k) = \{1,3,5\}\). From the analysis of \(C_l\), \(C_m\) and \(C_n\), we also obtain \(\varepsilon ((k,l)) = \{(1,1)\}\), \(\varepsilon ((k,m)) = \{(1,2)\}\) and \(\varepsilon ((k,n)) = \{(5,5)\}\).

For \(v = j\), the matrix \(B_j\) is defined as the disjunction of the two matrices \(C_l\) and \(C_m\). Moreover, we have \(j \in \nu ^\bigtriangledown ((x_1,x_2))\), which implies \(b^j_{1,2} = b^j_{2,1} = 1\). We have

$$\begin{aligned} C_j = B^1_j = \left( \begin{array}{ccccccc} \mathbf{1} &{}\quad \mathbf{1} &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ \mathbf{1} &{}\quad \mathbf{1} &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ \end{array} \right) \end{aligned}$$
(70)

Then, we have one valued connected component, namely (1, j). In other words, we have \(\theta (j) = \{1\}\). From the analysis of \(C_l\) and \(C_m\), we also obtain \(\varepsilon ((j,l)) = \{(1,1)\}\) and \(\varepsilon ((j,m)) = \{(1,2)\}\).

For \(v = i\), the matrix \(B_i\) is defined from the matrix \(C_n\). Moreover, the leaf-point \(x_6\) satisfies \(I(x_6) = i\); then, we set \(b^i_{6,6} = 1\). We have

$$\begin{aligned} C_i = B^1_i = \left( \begin{array}{ccccccc} 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad \mathbf{1} &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad \mathbf{1} &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ \end{array} \right) \end{aligned}$$
(71)

Then, we have two valued connected components, namely (5, i) and (6, i). In other words, we have \(\theta (i) = \{5,6\}\). From the analysis of \(C_n\), we also obtain \(\varepsilon ((i,n)) = \{(5,5)\}\).

For \(v = h\), the matrix \(B_h\) is defined from the matrix \(C_k\). We have

$$\begin{aligned} C_h = B^1_h = \left( \begin{array}{ccccccc} \mathbf{1} &{}\quad \mathbf{1} &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ \mathbf{1} &{}\quad \mathbf{1} &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad \mathbf{1} &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad \mathbf{1} &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ \end{array} \right) \end{aligned}$$
(72)

Then, we have three valued connected components, namely (1, h), (3, h) and (5, h). In other words, we have \(\theta (h) = \{1,3,5\}\). From the analysis of \(C_k\), we also obtain \(\varepsilon ((h,k)) = \{(1,1),(3,3),(5,5)\}\).

For \(v = g\), the matrix \(B_g\) is defined as the disjunction of the two matrices \(C_j\) and \(C_k\). Moreover, the leaf-point \(x_4\) satisfies \(I(x_4) = g\); then, we set \(b^g_{4,4} = 1\). We have

$$\begin{aligned} C_g = B^1_g = \left( \begin{array}{ccccccc} \mathbf{1} &{}\quad \mathbf{1} &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ \mathbf{1} &{}\quad \mathbf{1} &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad \mathbf{1} &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad \mathbf{1} &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad \mathbf{1} &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ \end{array} \right) \end{aligned}$$
(73)

Then, we have four valued connected components, namely (1, g), (3, g), (4, g) and (5, g). In other words, we have \(\theta (g) = \{1,3,4,5\}\). From the analysis of \(C_j\) and \(C_k\), we also obtain \(\varepsilon ((g,j)) = \{(1,1)\}\) and \(\varepsilon ((g,k)) = \{(1,1),(3,3),(5,5)\}\).

For \(v = f\), the matrix \(B_f\) is defined as the disjunction of the two matrices \(C_h\) and \(C_i\). Moreover, we have \(f \in \nu ^\bigtriangledown ((x_1,x_6))\), which implies \(b^f_{1,6} = b^f_{6,1} = 1\). The same holds for \(\nu ^\bigtriangledown ((x_2,x_3))\) and \(\nu ^\bigtriangledown ((x_3,x_5))\). We have

$$\begin{aligned} B_f= & {} \left( \begin{array}{ccccccc} \mathbf{1} &{}\quad \mathbf{1} &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad \mathbf{1} &{}\quad 0 \\ \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad 0 &{}\quad \mathbf{1} &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad \mathbf{1} &{}\quad 0 &{}\quad \mathbf{1} &{}\quad 0 &{}\quad 0 \\ \mathbf{1} &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad \mathbf{1} &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ \end{array} \right) \end{aligned}$$
(74)
$$\begin{aligned} B_f^2= & {} \left( \begin{array}{ccccccc} \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad 0 &{}\quad 0 &{}\quad \mathbf{1} &{}\quad 0 \\ \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad 0 &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad 0 \\ \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad 0 &{}\quad \mathbf{1} &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad 0 &{}\quad \mathbf{1} &{}\quad 0 &{}\quad 0 \\ \mathbf{1} &{}\quad \mathbf{1} &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad \mathbf{1} &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ \end{array} \right) \end{aligned}$$
(75)
$$\begin{aligned} C_f= & {} B_f^4 = \left( \begin{array}{ccccccc} \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad 0 &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad 0 \\ \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad 0 &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad 0 \\ \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad 0 &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad 0 &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad 0 \\ \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad 0 &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ \end{array} \right) \end{aligned}$$
(76)

Then, we have one valued connected component, namely (1, f). In other words, we have \(\theta (f) = \{1\}\). From the analysis of \(C_h\) and \(C_i\), we also obtain \(\varepsilon ((f,h)) = \{(1,1),(1,3),(1,5)\}\) and \(\varepsilon ((f,i)) = \{(1,5),(1,6)\}\).
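
The passage from \(B_f\) to \(C_f = B_f^4\) can be checked mechanically. The following minimal sketch (ours, not part of the original method) recomputes the closure by repeated Boolean squaring, as described in Appendix D.7; integer matrices stand for Boolean ones, and indices are 0-based.

```python
# Check of Eqs. (74)-(76): the reflexive-transitive closure C_f is reached
# from B_f by repeated Boolean squaring (here C_f = B_f^4).
import numpy as np

B_f = np.array([[1, 1, 0, 0, 0, 1, 0],
                [1, 1, 1, 0, 0, 0, 0],
                [0, 1, 1, 0, 1, 0, 0],
                [0, 0, 0, 0, 0, 0, 0],
                [0, 0, 1, 0, 1, 0, 0],
                [1, 0, 0, 0, 0, 1, 0],
                [0, 0, 0, 0, 0, 0, 0]])

C = B_f
while True:
    C2 = ((C @ C) > 0).astype(int)   # Boolean matrix self-product
    if np.array_equal(C2, C):        # convergence: closure reached
        break
    C = C2

print(C)   # rows/columns 4 and 7 (1-based) stay zero; all other entries are 1
```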

For \(v = e\), the matrix \(B_e\) is defined from the matrix \(C_h\). Moreover, the leaf-point \(x_7\) satisfies \(I(x_7) = e\); then, we set \(b^e_{7,7} = 1\). In addition, we have \(e \in \nu ^\bigtriangledown ((x_2,x_3))\), which implies \(b^e_{2,3} = b^e_{3,2} = 1\). We have

$$\begin{aligned} B_e= & {} \left( \begin{array}{ccccccc} \mathbf{1} &{}\quad \mathbf{1} &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad \mathbf{1} &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad \mathbf{1} \\ \end{array} \right) \end{aligned}$$
(77)
$$\begin{aligned} C_e= & {} B_e^2 = \left( \begin{array}{ccccccc} \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad \mathbf{1} &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad \mathbf{1} \\ \end{array} \right) \end{aligned}$$
(78)

Then, we have three valued connected components, namely (1, e), (5, e) and (7, e). In other words, we have \(\theta (e) = \{1,5,7\}\). From the analysis of \(C_h\), we also obtain \(\varepsilon ((e,h)) = \{(1,1),(1,3),(5,5)\}\).

For \(v = d\), the matrix \(B_d\) is defined as the disjunction of the two matrices \(C_h\) and \(C_i\). Moreover, we have \(d \in \nu ^\bigtriangledown ((x_1,x_6))\), which implies \(b^d_{1,6} = b^d_{6,1} = 1\), and this is also the case for \(\nu ^\bigtriangledown ((x_3,x_5))\). We have

$$\begin{aligned} B_d= & {} \left( \begin{array}{ccccccc} \mathbf{1} &{}\quad \mathbf{1} &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad \mathbf{1} &{}\quad 0 \\ \mathbf{1} &{}\quad \mathbf{1} &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad \mathbf{1} &{}\quad 0 &{}\quad \mathbf{1} &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad \mathbf{1} &{}\quad 0 &{}\quad \mathbf{1} &{}\quad 0 &{}\quad 0 \\ \mathbf{1} &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad \mathbf{1} &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ \end{array} \right) \end{aligned}$$
(79)
$$\begin{aligned} C_d= & {} B^2_d = \left( \begin{array}{ccccccc} \mathbf{1} &{}\quad \mathbf{1} &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad \mathbf{1} &{}\quad 0 \\ \mathbf{1} &{}\quad \mathbf{1} &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad \mathbf{1} &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad \mathbf{1} &{}\quad 0 &{}\quad \mathbf{1} &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad \mathbf{1} &{}\quad 0 &{}\quad \mathbf{1} &{}\quad 0 &{}\quad 0 \\ \mathbf{1} &{}\quad \mathbf{1} &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad \mathbf{1} &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ \end{array} \right) \end{aligned}$$
(80)

Then, we have two valued connected components, namely (1, d) and (3, d). In other words, we have \(\theta (d) = \{1,3\}\). From the analysis of \(C_h\) and \(C_i\), we also obtain \(\varepsilon ((d,h)) = \{(1,1),(3,3),(3,5)\}\) and \(\varepsilon ((d,i)) = \{(1,6),(3,5)\}\).

For \(v = c\), the matrix \(B_c\) is defined as the disjunction of the two matrices \(C_e\) and \(C_f\). Moreover, we have \(c \in \nu ^\bigtriangledown ((x_2,x_7))\), \(\nu ^\bigtriangledown ((x_5,x_6))\) and \(\nu ^\bigtriangledown ((x_5,x_7))\). This implies \(b^c_{5,6} = b^c_{6,5} = 1\) (which was already the case), but also \(b^c_{2,7} = b^c_{7,2} = 1\) and \(b^c_{5,7} = b^c_{7,5} = 1\). We have

$$\begin{aligned} B_c= & {} \left( \begin{array}{ccccccc} \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad 0 &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad 0 \\ \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad 0 &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} \\ \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad 0 &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad 0 &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} \\ \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad 0 &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad 0 \\ 0 &{}\quad \mathbf{1} &{}\quad 0 &{}\quad 0 &{}\quad \mathbf{1} &{}\quad 0 &{}\quad \mathbf{1} \end{array} \right) \end{aligned}$$
(81)
$$\begin{aligned} C_c= & {} B^2_c = \left( \begin{array}{ccccccc} \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad 0 &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} \\ \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad 0 &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} \\ \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad 0 &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad 0 &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} \\ \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad 0 &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} \\ \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad 0 &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} \end{array} \right) \end{aligned}$$
(82)

Then, we have one valued connected component, namely (1, c). In other words, we have \(\theta (c) = \{1\}\). From the analysis of \(C_e\) and \(C_f\), we also obtain \(\varepsilon ((c,e)) = \{(1,1),(1,5),(1,7)\}\) and \(\varepsilon ((c,f)) = \{(1,1)\}\).

For \(v = b\), the matrix \(B_b\) is defined as the disjunction of the two matrices \(C_e\) and \(C_g\). Moreover, we have \(b \in \nu ^\bigtriangledown ((x_1,x_4))\), which implies \(b^b_{1,4} = b^b_{4,1} = 1\), and this is also the case for \(\nu ^\bigtriangledown ((x_3,x_4))\), \(\nu ^\bigtriangledown ((x_3,x_5))\) and \(\nu ^\bigtriangledown ((x_2,x_7))\). We have

$$\begin{aligned} B_b= & {} \left( \begin{array}{ccccccc} \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad \mathbf{1} \\ \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad 0 &{}\quad 0 \\ \mathbf{1} &{}\quad 0 &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad \mathbf{1} &{}\quad 0 &{}\quad \mathbf{1} &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad \mathbf{1} &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad \mathbf{1} \end{array} \right) \end{aligned}$$
(83)
$$\begin{aligned} B^2_b= & {} \left( \begin{array}{ccccccc} \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad 0 &{}\quad \mathbf{1} \\ \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad 0 &{}\quad \mathbf{1} \\ \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad 0 &{}\quad \mathbf{1} \\ \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad 0 &{}\quad 0 \\ \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad \mathbf{1} \end{array} \right) \end{aligned}$$
(84)
$$\begin{aligned} C_b= & {} B^4_b = \left( \begin{array}{ccccccc} \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad 0 &{}\quad \mathbf{1} \\ \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad 0 &{}\quad \mathbf{1} \\ \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad 0 &{}\quad \mathbf{1} \\ \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad 0 &{}\quad \mathbf{1} \\ \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad 0 &{}\quad \mathbf{1} \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad 0 &{}\quad \mathbf{1} \end{array} \right) \end{aligned}$$
(85)

Then, we have one valued connected component, namely (1, b). In other words, we have \(\theta (b) = \{1\}\). From the analysis of \(C_e\) and \(C_g\), we also obtain \(\varepsilon ((b,e)) = \{(1,1),(1,5),(1,7)\}\) and \(\varepsilon ((b,g)) = \{(1,1),(1,3),(1,4),(1,5)\}\).

For \(v = a\), we necessarily have

$$\begin{aligned} C_a = \left( \begin{array}{ccccccc} \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} \\ \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} \\ \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} \\ \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} \\ \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} \\ \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} \\ \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} &{}\quad \mathbf{1} \end{array} \right) \end{aligned}$$
(86)

Then, we have one valued connected component, namely (1, a). In other words, we have \(\theta (a) = \{1\}\). From the analysis of \(C_b\), \(C_c\) and \(C_d\), we also obtain \(\varepsilon ((a,b)) = \{(1,1)\}\), \(\varepsilon ((a,c)) = \{(1,1)\}\) and \(\varepsilon ((a,d)) = \{(1,1),(1,3)\}\).

D Pseudo-code and Complexity Discussion

D.1 Input

The algorithm takes as input:

  • a graph \((\varOmega ,\smallfrown )\);

  • an ordered value set \((V,\leqslant )\) or its Hasse diagram \((V,\prec )\); and

  • a valuation \(I : \varOmega \rightarrow V\).

The set \(\varOmega \) contains vertices \(x_i\) for \(i \in [\![1,|\varOmega |]\!]\). Each vertex \(x_i\) can be modelled by its index i. As a consequence, \(\varOmega \) can be represented as an integer vector \({\mathcal {V}}_\varOmega \) of length \(|\varOmega |\) such that \({\mathcal {V}}_\varOmega [i] = x_i\) for any \(i \in [\![1,|\varOmega |]\!]\). The space cost of \({\mathcal {V}}_\varOmega \) is \(|\varOmega |\).

The set V contains values \(v_i\) for \(i \in [\![1,|V|]\!]\). Each value \(v_i\) can be modelled by its index i. As a consequence, V can be represented as an integer vector \({\mathcal {V}}_V\) of length |V| such that \({\mathcal {V}}_V[i] = v_i\) for any \(i \in [\![1,|V|]\!]\). The space cost of \({\mathcal {V}}_V\) is |V|.

The valuation I is a mapping between \(\varOmega \) and V. It can be modelled as an integer vector \({\mathcal {V}}_I\) of length \(|\varOmega |\) such that \(I({\mathcal {V}}_\varOmega [i]) = {\mathcal {V}}_V[{\mathcal {V}}_I[i]]\) for any \(i \in [\![1,|\varOmega |]\!]\). The space cost of \({\mathcal {V}}_I\) is \(|\varOmega |\).

The adjacency relation \(\smallfrown \) is a part of \(\varOmega \times \varOmega \). It may be modelled as a \(|\varOmega | \times |\varOmega |\) Boolean matrix, but this would require a space cost \(|\varOmega |^2\). In general, each vertex \(x_i\) has its number of adjacent vertices bounded by a low constant value \(m_\varOmega \ll |\varOmega |\), and the induced Boolean matrix is then sparse. Under this hypothesis, it is relevant to handle \(\smallfrown \) as a mapping from \(\varOmega \) to \(2^\varOmega \) that associates to each vertex \(x_i\) the set of its adjacent vertices. Practically, \(\smallfrown \) is then modelled as a vector of integer vectors \({\mathcal {V}}_{\smallfrown }\) of length \(|\varOmega |\) such that for each \(i \in [\![1,|\varOmega |]\!]\), the integer vector \({\mathcal {V}}_{\smallfrown }[i]\) of size \(m_{\varOmega ,i} \le m_\varOmega \) is such that for all \(j \in [\![1,m_{\varOmega ,i}]\!]\), the vertices \({\mathcal {V}}_\varOmega [i]\) and \({\mathcal {V}}_\varOmega [{\mathcal {V}}_{\smallfrown }[i][j]]\) are adjacent. Note that \({\mathcal {V}}_{\smallfrown }[i][j]\) is an element of \({\mathcal {V}}_{\smallfrown }[i]\) if and only if i is an element of \({\mathcal {V}}_{\smallfrown }[{\mathcal {V}}_{\smallfrown }[i][j]]\). In other words, we store twice each adjacency link. The space cost of \({\mathcal {V}}_{\smallfrown }\) is \({\mathcal {O}}(|\varOmega |)\).
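
As an illustration, here is a minimal sketch of this vector-of-vectors encoding, assuming (for the example only) a \(2 \times 3\) pixel grid with 4-adjacency; all names are ours.

```python
# Sketch: the adjacency relation as a vector of integer vectors (V_adj),
# assuming a 2x3 pixel grid with 4-adjacency (0-based indices).
H, W = 2, 3
def idx(r, c):
    return r * W + c

V_adj = [[] for _ in range(H * W)]
for r in range(H):
    for c in range(W):
        for dr, dc in ((0, 1), (1, 0)):      # right and down neighbours
            rr, cc = r + dr, c + dc
            if rr < H and cc < W:
                V_adj[idx(r, c)].append(idx(rr, cc))   # each link is
                V_adj[idx(rr, cc)].append(idx(r, c))   # stored twice

print(V_adj)   # e.g. V_adj[0] == [1, 3]: vertex 0 is adjacent to 1 and 3
```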

The order relation \(\prec \) is a part of \(V \times V\). It may be modelled as a \(|V| \times |V|\) Boolean matrix, but this would require a space cost \(|V|^2\). In general, each value of V has its number of successors bounded by a low constant value \(m_\prec \ll |V|\), and the induced Boolean matrix is then sparse. Under this hypothesis, it is relevant to handle \(\prec \) as a mapping from V to \(2^V\) that associates to each value v the set of its successors. Practically, \(\prec \) is then modelled as a vector of integer vectors \({\mathcal {V}}_{\prec }\) of length |V|. For each \(i \in [\![1,|V|]\!]\), the vector \({\mathcal {V}}_{\prec }[i]\) contains all the indices j such that \({\mathcal {V}}_V[i] \prec {\mathcal {V}}_V[{\mathcal {V}}_{\prec }[i][j]]\). The space cost of \({\mathcal {V}}_{\prec }\) is \({\mathcal {O}}(|V|)\).

If the provided input is the ordered value set \((V,\leqslant )\) instead of \((V,\prec )\), then we have to explicitly compute \((V,\prec )\) from \((V,\leqslant )\). The time cost for this process depends on the nature of \((V,\leqslant )\). It may vary from \({\mathcal {O}}(|V|)\) in the simplest cases (e.g. numerical lattices) up to \({\mathcal {O}}(|V|^{\alpha })\) with \(2 \le \alpha \le 3\) when we have to carry out a transitive reduction on \((V,\leqslant )\) in order to obtain \((V,\prec )\), which has the same time cost as a transitive closure [22].
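
For the latter case, a naive sketch of the transitive reduction, computing the successor lists of \((V,\prec )\) from a comparison predicate for \(\leqslant \) in \({\mathcal {O}}(|V|^3)\) time (the divisibility order serves only as an example):

```python
# Sketch: successor lists V_prec (the Hasse diagram of (V, <=)) obtained by
# naive transitive reduction in O(|V|^3), from a comparison predicate leq.
# Example order: divisibility on {1, 2, 3, 4, 6}.
V = [1, 2, 3, 4, 6]
leq = lambda v, w: w % v == 0            # v <= w iff v divides w

def successors(v):
    above = [w for w in V if w != v and leq(v, w)]
    # w is a successor (cover) of v iff no u lies strictly between them
    return [w for w in above
            if not any(u != w and leq(u, w) for u in above)]

V_prec = {v: successors(v) for v in V}
print(V_prec)   # {1: [2, 3], 2: [4, 6], 3: [6], 4: [], 6: []}
```

Specialized constructions (e.g. for numerical lattices) avoid this cubic cost, as noted above.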

D.2 Remarks on the Partially Ordered Value Sets

We assume that we are able to compare two values \(v, w \in V\), with respect to \(\leqslant \), in constant time \({\mathcal {O}}(1)\). In general, this hypothesis is relevant since most order relations are derived from computable relations (e.g. order relations on Boolean, integer or floating values in standard programming languages). However, in a few cases, it may happen that we need the explicit computation and storage of the order relation as a data structure, which generally implies a \({\mathcal {O}}(|V|^2)\) space cost and possibly a polynomial time cost \({\mathcal {O}}(|V|^\alpha )\) (with, generally, \(2 \le \alpha \le 3\)) for its computation if the only input is the Hasse diagram \((V,\prec )\) of the ordered set \((V,\leqslant )\).

The overall space and time cost of the construction of a component-graph is also conditioned by the nature of \((V,\leqslant )\). Indeed, the time cost of the algorithm depends, in part, on the ability to rapidly determine \(\bigtriangledown ^\leqslant (v^\downarrow \cap w^\downarrow )\) for two given values \(v, w \in V\). This is, in particular, the algorithmic foundation of the function \(\nu ^\bigtriangledown \) [Eq. (42)].

In the case where v and w are comparable, i.e. \(v \leqslant w\) or \(w \leqslant v\), the set \(\bigtriangledown ^\leqslant (v^\downarrow \cap w^\downarrow )\) is simply \(\{v\}\) or \(\{w\}\), and the issue is then related to the time cost for assessing which of v or w is greater (see the paragraph above).

In the case where v and w are non-comparable, i.e. \(v \not \leqslant w\) and \(w \not \leqslant v\), the access to \(\bigtriangledown ^\leqslant (v^\downarrow \cap w^\downarrow )\) may be trickier. In the simplest cases (e.g. for numerical lattices), \(\bigtriangledown ^\leqslant (v^\downarrow \cap w^\downarrow )\) remains a singleton set, and it can be determined in constant time \({\mathcal {O}}(1)\) if the Hasse diagram \((V,\prec )\) is available (which is mandatory for building the component-graph). In less favourable cases, the cost for having access to this information for a pair of values (v, w) may be up to \(|v^\downarrow \cup w^\downarrow |\), i.e. \({\mathcal {O}}(|V|)\) in the worst cases. In the sequel, and in particular in Sect. 15, we will denote this cost by C. We will keep in mind that \({\mathcal {O}}(1) \le C \le {\mathcal {O}}(|V|)\), but we encourage the interested reader to consider the construction of component-graphs in the case of ordered sets \((V,\leqslant )\) such that \(C = {\mathcal {O}}(1)\).
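
For illustration, a sketch of the computation of \(\bigtriangledown ^\leqslant (v^\downarrow \cap w^\downarrow )\) in the unfavourable case, assuming the down-sets are available as explicit sets; all names are ours.

```python
# Sketch: maximal elements of v_down ∩ w_down, i.e. the set written
# ∇≤(v↓ ∩ w↓) in the text, assuming the down-sets are given as Python sets.
def maximal_common_lower_bounds(v_down, w_down, leq):
    common = v_down & w_down
    return {u for u in common
            if not any(u != t and leq(u, t) for t in common)}

# Toy order: divisibility. v = 4 and w = 6 are non-comparable; their
# down-sets intersect in {1, 2}, whose unique maximal element is 2.
leq = lambda a, b: b % a == 0
print(maximal_common_lower_bounds({1, 2, 4}, {1, 2, 3, 6}, leq))   # {2}
```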

D.3 Flat Zone Image Computation (See Sect. 6)

Building a flat zone image consists of partitioning \(\varOmega \) into a new set of regions \(\varPhi \), which are the vertices of a more compact graph. Each new vertex \(p_i \in \varPhi \), indexed by an integer value \(i \in [\![1,|\varPhi |]\!]\), gathers one or many vertices of \(\varOmega \). It is relevant to handle the mapping from \(\varPhi \) to \(2^\varOmega \) (or equivalently, the inverse non-injective mapping from \(\varOmega \) to \(\varPhi \)) that associates to each vertex \(p_i \in \varPhi \) the set of the corresponding vertices in \(\varOmega \). Practically, this mapping is modelled as a vector of integers \({\mathcal {V}}_{\varPhi }\) of length \(|\varOmega |\) such that for each \(i \in [\![1,|\varOmega |]\!]\), the vertex \({\mathcal {V}}_\varOmega [i] \in \varOmega \) is one of the vertices forming the flat zone corresponding to the vertex \(p_{{\mathcal {V}}_\varPhi [i]} \in \varPhi \) of label \({\mathcal {V}}_\varPhi [i]\). The space cost of \({\mathcal {V}}_\varPhi \) is \(|\varOmega |\). The time cost for its construction (Algorithm 1) is \({\mathcal {O}}(|\varOmega |)\).

A new valuation \(I_\varPhi \) on \(\varPhi \) is induced by the valuation I on \(\varOmega \) [Eq. (31)]. It is modelled as an integer vector \({\mathcal {V}}_{I_\varPhi }\) of length \(|\varPhi |\) such that for any \(i \in [\![1,|\varOmega |]\!]\) we have \({\mathcal {V}}_{I_\varPhi }[{\mathcal {V}}_{\varPhi }[i]] = {\mathcal {V}}_{I}[i]\). The space cost of \({\mathcal {V}}_{I_\varPhi }\) is \(|\varPhi |\), and it can be built in parallel to \({\mathcal {V}}_{\varPhi }\) with no extra cost (Algorithm 1).

[Algorithm 1 (pseudo-code figure)]
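
A possible realization of this step, in the spirit of Algorithm 1 but not the authors' exact pseudo-code: a flood fill that labels maximal connected sets of same-value vertices and records the induced valuation on the fly.

```python
# Sketch: flat zone labelling (V_Phi) and induced valuation (V_I_Phi) by
# flood fill over same-value neighbours, in O(|Omega|) time.
from collections import deque

def flat_zones(V_adj, V_I):
    V_Phi = [-1] * len(V_I)            # flat zone label of each vertex
    V_I_Phi = []                       # value of each flat zone
    for s in range(len(V_I)):
        if V_Phi[s] != -1:
            continue
        label = len(V_I_Phi)           # new flat zone, seeded at s
        V_I_Phi.append(V_I[s])
        V_Phi[s] = label
        queue = deque([s])
        while queue:
            x = queue.popleft()
            for y in V_adj[x]:
                if V_Phi[y] == -1 and V_I[y] == V_I[s]:
                    V_Phi[y] = label
                    queue.append(y)
    return V_Phi, V_I_Phi

# A 1D image 0-1-1-0 on a path graph yields three flat zones.
print(flat_zones([[1], [0, 2], [1, 3], [2]], [0, 1, 1, 0]))
# ([0, 1, 1, 2], [0, 1, 0])
```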

We then have to define the adjacency relation \(\smallfrown _\varPhi \) between the vertices of \(\varPhi \), induced by the adjacency \(\smallfrown \) between those of \(\varOmega \) [Eq. (30)]. Practically, \(\smallfrown _\varPhi \) is modelled as a vector of integer vectors \({\mathcal {V}}_{\smallfrown _\varPhi }\) of length \(|\varPhi |\), similar to \({\mathcal {V}}_{\smallfrown }\). For each \(i \in [\![1,|\varPhi |]\!]\), the integer vector \({\mathcal {V}}_{\smallfrown _\varPhi }[i]\) of size \(m_{\varPhi ,i} \le m_\varPhi \) (\(m_\varPhi \) may differ from \(m_\varOmega \), but can still be assumed to be a low constant value \(m_\varPhi \ll |\varPhi |\)), is such that for all \(j \in [\![1,m_{\varPhi ,i}]\!]\), the vertices of index i and \({\mathcal {V}}_{\smallfrown _\varPhi }[i][j]\) are adjacent. Note that \({\mathcal {V}}_{\smallfrown _\varPhi }[i][j]\) is an element of \({\mathcal {V}}_{\smallfrown _\varPhi }[i]\) if and only if i is an element of \({\mathcal {V}}_{\smallfrown _\varPhi }[{\mathcal {V}}_{\smallfrown _\varPhi }[i][j]]\). In other words, we store twice each adjacency link. The space cost of \({\mathcal {V}}_{\smallfrown _\varPhi }\) is \({\mathcal {O}}(|\varPhi |)\). The time cost for its construction (Algorithm 2) is \({\mathcal {O}}(|\varOmega |)\).

[Algorithm 2 (pseudo-code figure)]
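
A possible realization in the spirit of Algorithm 2 (a sketch, not the authors' pseudo-code); the set of links ensures that each pair is recorded once per orientation.

```python
# Sketch: adjacency V_adj_Phi between flat zones, induced by V_adj on Omega;
# as in the text, each adjacency link ends up stored twice.
def flat_zone_adjacency(V_adj, V_Phi, n_zones):
    links = set()
    for x, neighbours in enumerate(V_adj):
        for y in neighbours:
            if V_Phi[x] != V_Phi[y]:
                links.add((V_Phi[x], V_Phi[y]))   # both orientations occur
    V_adj_Phi = [[] for _ in range(n_zones)]
    for p, q in links:
        V_adj_Phi[p].append(q)
    return V_adj_Phi

print(flat_zone_adjacency([[1], [0, 2], [1, 3], [2]], [0, 1, 1, 2], 3))
# [[1], [0, 2], [1]]
```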

D.4 Leaves Computation (See Sect. 7)

The leaf-points constitute a subset of the vertices of \(\varPhi \). Each vertex \(\lambda _\ell \) of this subset \(\varLambda \subseteq \varPhi \) can be modelled by its index \(\ell \in [\![1,|\varLambda |]\!]\) [Eq. (44)]. In particular, each index \(\ell \) in \([\![1,|\varLambda |]\!]\) is simply a renaming of an index of \([\![1,|\varPhi |]\!]\) with respect to \(\varPhi \). We model this injective mapping from \([\![1,|\varLambda |]\!]\) to \([\![1,|\varPhi |]\!]\) by defining a vector of integers \({\mathcal {V}}_\varLambda \) of length \(|\varLambda |\) such that for any \(\ell \in [\![1,|\varLambda |]\!]\), the leaf-point \(\lambda _\ell \) in \(\varLambda \) is equal to the vertex \(p_{{\mathcal {V}}_\varLambda [\ell ]}\) of index \({\mathcal {V}}_\varLambda [\ell ] \in [\![1,|\varPhi |]\!]\) in \(\varPhi \). The space cost of \({\mathcal {V}}_\varLambda \) is \(|\varLambda |\). The time cost for its construction (Algorithm 3) is \({\mathcal {O}}(|\varPhi |)\).

[Algorithm 3 (pseudo-code figure)]
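
A sketch in the spirit of Algorithm 3. We assume here, as suggested by Property 8, that a leaf-point is a flat zone none of whose neighbours carries a strictly greater value; the predicate `gt` is ours.

```python
# Sketch: leaf-points V_Lambda, assumed here to be the flat zones with no
# neighbour of strictly greater value; gt(v, w) tests w < v in (V, <=).
def leaves(V_adj_Phi, V_I_Phi, gt):
    V_Lambda = []
    for p, value in enumerate(V_I_Phi):
        if not any(gt(V_I_Phi[q], value) for q in V_adj_Phi[p]):
            V_Lambda.append(p)         # store the index of p in Phi
    return V_Lambda

gt = lambda v, w: v > w                # total order on integers, for the demo
print(leaves([[1], [0, 2], [1]], [0, 1, 0], gt))   # [1]
```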

D.5 Influence Zones Computation (See Sect. 8)

Building the influence zones of the flat zone image consists of assigning to each vertex of \(\varPhi \) the label of the leaf-point of \(\varLambda \) that defines the influence zone where it lies. We model this mapping from \([\![1,|\varPhi |]\!]\) to \([\![1,|\varLambda |]\!]\) by defining a vector of integers \({\mathcal {V}}_{\rho }\) of length \(|\varPhi |\) such that the vertex \(p_i\) in \(\varPhi \) of index \(i \in [\![1,|\varPhi |]\!]\) lies in the influence zone \(\rho (p_{{\mathcal {V}}_{\varLambda }[{\mathcal {V}}_{\rho }[i]]})\) of the leaf-point \(\lambda _{{\mathcal {V}}_{\rho }[i]}\) of \(\varLambda \) of index \({\mathcal {V}}_{\rho }[i]\) in \([\![1,|\varLambda |]\!]\). The space cost of \({\mathcal {V}}_{\rho }\) is \(|\varPhi |\). The time cost for its construction (Algorithm 4) is \({\mathcal {O}}(|\varPhi |)\).

[Algorithm 4 (pseudo-code figure)]
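
A sketch in the spirit of Algorithm 4, assuming (cf. Property 8) that labels are propagated from the leaf-points along strictly decreasing values, each flat zone receiving the label of one leaf-point whose influence zone contains it.

```python
# Sketch: influence zone labels V_rho, propagated from the leaf-points
# along strictly decreasing values (cf. Property 8).
from collections import deque

def influence_zones(V_adj_Phi, V_I_Phi, V_Lambda, gt):
    V_rho = [-1] * len(V_I_Phi)
    queue = deque()
    for ell, p in enumerate(V_Lambda): # each leaf-point labels itself
        V_rho[p] = ell
        queue.append(p)
    while queue:
        p = queue.popleft()
        for q in V_adj_Phi[p]:
            if V_rho[q] == -1 and gt(V_I_Phi[p], V_I_Phi[q]):
                V_rho[q] = V_rho[p]    # q inherits the leaf label of p
                queue.append(q)
    return V_rho

gt = lambda v, w: v > w
print(influence_zones([[1], [0, 2], [1]], [0, 1, 0], [1], gt))   # [0, 0, 0]
```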

D.6 Influence Zones Graph Construction (See Sect. 9)

The vertices of the influence zone graph are the influence zones of the leaf-points or, equivalently, the leaf-points themselves. The only task for building this graph is then to define the adjacency relation \(\smallfrown _\varLambda \) between the vertices of \(\varLambda \), induced by the adjacency \(\smallfrown _\varPhi \) between those of \(\varPhi \) [Eq. (38)]. Practically, \(\smallfrown _\varLambda \) is modelled as a vector of integer vectors \({\mathcal {V}}_{\smallfrown _\varLambda }\) of length \(|\varLambda |\), similarly to \({\mathcal {V}}_{\smallfrown _\varPhi }\). For each \(\ell \in [\![1,|\varLambda |]\!]\), the integer vector \({\mathcal {V}}_{\smallfrown _\varLambda }[\ell ]\) of size \(m_{\varLambda ,\ell } \le m_\varLambda \) (\(m_\varLambda \) may differ from \(m_\varPhi \), but can still be assumed to be a low constant value \(m_\varLambda \ll |\varLambda |\)), is such that for all \(k \in [\![1,m_{\varLambda ,\ell }]\!]\), the vertices of index \(\ell \) and \({\mathcal {V}}_{\smallfrown _\varLambda }[\ell ][k]\) are adjacent. Note that \({\mathcal {V}}_{\smallfrown _\varLambda }[\ell ][k]\) is an element of \({\mathcal {V}}_{\smallfrown _\varLambda }[\ell ]\) if and only if \(\ell \) is an element of \({\mathcal {V}}_{\smallfrown _\varLambda }[{\mathcal {V}}_{\smallfrown _\varLambda }[\ell ][k]]\). In other words, we store twice each adjacency link. The space cost of \({\mathcal {V}}_{\smallfrown _\varLambda }\) is \({\mathcal {O}}(|\varLambda |)\). The time cost for its construction (Algorithm 5) is \({\mathcal {O}}(|\varPhi |)\).

[Algorithm 5 (pseudo-code figure)]
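
A sketch in the spirit of Algorithm 5; structurally, it mirrors the flat zone adjacency computation of Appendix D.3, with influence zone labels in place of flat zone labels.

```python
# Sketch: adjacency V_adj_Lambda between influence zones, induced by the
# flat zone adjacency V_adj_Phi and the influence zone labels V_rho.
def izg_adjacency(V_adj_Phi, V_rho, n_leaves):
    links = set()
    for p, neighbours in enumerate(V_adj_Phi):
        for q in neighbours:
            if V_rho[p] != V_rho[q]:
                links.add((V_rho[p], V_rho[q]))   # each link stored twice
    V_adj_Lambda = [[] for _ in range(n_leaves)]
    for a, b in links:
        V_adj_Lambda[a].append(b)
    return V_adj_Lambda

print(izg_adjacency([[1], [0, 2], [1]], [0, 0, 1], 2))   # [[1], [0]]
```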

It is then necessary to define the valuation \(\nu \) of the edges of the graph \((\varLambda ,\smallfrown _\varLambda )\) [Eq. (39)]. As discussed in Sect. 9, it is not necessary to store the whole function \(\nu \). In particular, a less costly function, namely \(\nu ^\bigtriangledown \), can be considered [Eq. (42)]. We observe that \(\nu ^\bigtriangledown \) will then be involved in the construction of the connected components of the thresholded graph of \((\varLambda ,\smallfrown _\varLambda )\) at each value \(v \in V\) [Eq. (59)]. In this context, it is indeed relevant to associate with each value \(v \in V\) the set of all edges e of \(\smallfrown _\varLambda \) such that \(v \in \nu ^\bigtriangledown (e)\), instead of associating with each edge e the set of values \(\nu ^\bigtriangledown (e)\). In other words, we model the mapping \({\nu ^\bigtriangledown }^{-1}\) instead of \(\nu ^\bigtriangledown \). This is done by defining a vector of integer vectors \({\mathcal {V}}_{\nu }\) of length |V|. For each \(i \in [\![1,|V|]\!]\), the integer vector \({\mathcal {V}}_{\nu }[i]\) provides all the couples of indices \((\ell ,k)\) such that \(({\mathcal {V}}_\varLambda [\ell ],{\mathcal {V}}_\varLambda [k])\) is an edge of \(\smallfrown _\varLambda \) that satisfies \({\mathcal {V}}_V[i] \in \nu ^\bigtriangledown (({\mathcal {V}}_\varLambda [\ell ],{\mathcal {V}}_\varLambda [k]))\). The space cost of \({\mathcal {V}}_{\nu }\) is \({\mathcal {O}}(|V| + |\varPhi |)\), since the vector \({\mathcal {V}}_{\nu }\) is of length |V|, whereas the total amount of edges stored is at most the same as for \(\smallfrown _\varPhi \), multiplied by a low value (assumed constant) that bounds \(|\bigtriangledown ^\leqslant (u^\downarrow \cap v^\downarrow )|\) for any \(u, v \in V\). The time cost for its construction (Algorithm 6) is \({\mathcal {O}}(|V|+|\varPhi | \cdot C)\) (see Appendix D.2 for the definition of C). Note that for each \(i \in [\![1,|V|]\!]\), the size of \({\mathcal {V}}_{\nu }[i]\) is on average \({\mathcal {O}}(|\varPhi |/|V|)\), which can be considered as a constant value \(\beta \). In particular, ensuring that \({\mathcal {V}}_{\nu }[i]\) has no extra occurrence of each element (which is equivalent to keeping the set sorted) has a time cost \({\mathcal {O}}(\beta \log \beta )\), whereas ensuring that each of its elements is a maximal one has a time cost \({\mathcal {O}}(\beta ^2)\). These costs are then assumed to remain constant and not to impact the overall cost \({\mathcal {O}}(|V|+|\varPhi | \cdot C)\).

[Algorithm 6]
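
The inverse mapping can be sketched as follows (again a hedged illustration: nu_nabla(l, k) is a hypothetical callback returning the indices, in \({\mathcal {V}}_V\), of the values of \(\nu ^\bigtriangledown \) on the edge \((\ell ,k)\); the deduplication and maximality maintenance discussed above are omitted):

```python
def build_inverse_valuation(adj_lambda, nu_nabla, n_values):
    """Store (nu^nabla)^-1 rather than nu^nabla: attach to each value
    index i the list of edges (l, k) whose valuation contains the
    value V_V[i], instead of attaching a value set to each edge."""
    v_nu = [[] for _ in range(n_values)]
    for l, neighbours in enumerate(adj_lambda):
        for k in neighbours:
            if l < k:                     # visit each undirected edge once
                for i in nu_nabla(l, k):  # value indices on this edge
                    v_nu[i].append((l, k))
    return v_nu
```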

D.7 Connected Component Computation (Sect. 11.1)

At this stage, we have to build the connected components of the thresholded influence zone graphs for each value v of V. As stated in Sect. 11.1, this consists of computing, for each value v, the matrix \(C_v\), which is a Boolean matrix of size \(|\varLambda | \times |\varLambda |\) corresponding to the reflexive–transitive closure of the adjacency matrix \(A_v\) of the thresholded influence zone graph at value v. More efficiently, this adjacency matrix \(A_v\) can be replaced by a matrix \(B_v\) [Eq. (59)] defined from the matrices \(C_w\) for the values \(w \succ v\), the set of edges belonging to \({\nu ^\bigtriangledown }^{-1}(\{v\})\) (Sect. 15) and the set of leaves of value v. In particular, the \(B_v\) and \(C_v\) matrices are built in a recursive, top-down fashion from the maximal values of \((V,\leqslant )\) to its minimum \(\bot \).
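
This recursion can be organized as a simple driver over a linear extension of \((V,\leqslant )\); the following sketch is only illustrative, with build_b and transitive_closure standing in for the two steps detailed below:

```python
def compute_closures(values, successors, build_b, transitive_closure):
    """Process the values from the maximal elements of (V, <=) down to
    the minimum, so that every C_w with w > v is already available when
    B_v is assembled. `values` is assumed sorted in a linear extension
    of <= (minimum first); successors[v] lists the values covering v
    in the Hasse diagram."""
    closure = {}
    for v in reversed(values):  # maximal values first, bottom value last
        b_v = build_b(v, [closure[w] for w in successors[v]])
        closure[v] = transitive_closure(b_v)
    return closure
```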

For each value \(v\), the matrix \(C_v\) is most often sparse. Indeed, it is a \(|\varLambda | \times |\varLambda |\) Boolean matrix, but the number of rows/columns containing at least one nonzero value is the number \(N_v \le |\varLambda |\) of leaf-points of value \(u \geqslant v\). Then, \(C_v\) can be handled and stored as an \(N_v \times N_v\) matrix. In addition, each remaining row/column of \(C_v\) may contain far fewer than \(N_v\) nonzero values; storing the indices of these nonzero elements is then sufficient for preserving the information of the whole matrix. Consequently, \(C_v\) is stored as a vector of integer vectors \({\mathcal {V}}_{C_v}\) of length \(N_v\). For each \(i \in [\![1,N_v]\!]\), \({\mathcal {V}}_{C_v}[i]\) is a vector whose first element \({\mathcal {V}}_{C_v}[i][0]\) is the index (in \({\mathcal {V}}_\varLambda \)) of a leaf-point of value \(u \geqslant v\). The \({\mathcal {V}}_{C_v}[i][0]\) indices are sorted from the lowest (for \(i = 1\)) to the greatest (for \(i = N_v\)). For a given vector \({\mathcal {V}}_{C_v}[i]\), the elements \({\mathcal {V}}_{C_v}[i][j]\) for \(j > 0\) are the indices of the leaf-points \({\mathcal {V}}_\varLambda [{\mathcal {V}}_{C_v}[i][j]]\) which are connected to the leaf-point of index \({\mathcal {V}}_{C_v}[i][0]\) in the thresholded influence zone graph at value \(v\); these indices are also sorted from the lowest (for \(j = 1\)) to the greatest. The space cost of \({\mathcal {V}}_{C_v}\) is equal to the number of nonzero values of \(C_v\), and is in particular in \({\mathcal {O}}(N_v^2)\). The time cost for its computation is \({\mathcal {O}}(N_v^\alpha )\) with \(2 \le \alpha \le 3\), and \(\alpha \) close to 2 in many cases. Indeed, it proceeds in two steps.
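
As a toy illustration of this layout (the values are invented for the example): with three leaf-points of value \(u \geqslant v\), of indices 2, 5 and 9 in \({\mathcal {V}}_\varLambda \), where 2 and 5 lie in the same connected component and 9 is isolated, \({\mathcal {V}}_{C_v}\) would read:

```python
# Each row starts with the leaf-point's own index in V_Lambda, then
# lists (sorted) the indices of the leaf-points connected to it,
# itself included (reflexive closure).
v_c = [
    [2, 2, 5],  # canonical: own index equals the smallest connected one
    [5, 2, 5],  # not canonical: 2 < 5 represents this component
    [9, 9],     # isolated, hence canonical
]
```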

First, we have to build the vector of integer vectors \({\mathcal {V}}_{B_v}\) of the matrix \(B_v\) (which is structured in the same way as \({\mathcal {V}}_{C_v}\)). It is simply the elementwise disjunction of the matrices \(C_w\) for \(w \succ v\) and of two other matrices, corresponding respectively to the vector \({\mathcal {V}}_{\nu }[a]\) with \(v = {\mathcal {V}}_V[a]\), and to the subset of indices \(\ell \) of \({\mathcal {V}}_\varLambda \) such that \({\mathcal {V}}_{I_\varPhi }[{\mathcal {V}}_{I_\varLambda }[\ell ]] = v\). These last two matrices are structured in the same way as \({\mathcal {V}}_{C_v}\). By assuming that the number of successors for the relation \(\prec \) is bounded by a low constant value \(m^\prec \), the cost for building this disjunction is \({\mathcal {O}}(N_v^2)\), since it corresponds to merging several sorted lists while keeping the values ordered.
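
The disjunction of rows stored as sorted index lists is a plain multi-way merge; a possible Python sketch, using the standard heapq.merge:

```python
import heapq

def merge_sorted_rows(rows):
    """Elementwise disjunction of Boolean rows stored as sorted index
    lists: merge them while keeping the result sorted and free of
    duplicates, as when assembling a row of B_v from the C_w (w > v),
    the edges of (nu^nabla)^-1({v}) and the leaves of value v."""
    merged = []
    for idx in heapq.merge(*rows):
        if not merged or merged[-1] != idx:  # drop duplicates on the fly
            merged.append(idx)
    return merged

# Example: merge_sorted_rows([[2, 5], [5, 9], [2]]) == [2, 5, 9]
```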

Second, we have to compute the reflexive–transitive closure of \({\mathcal {V}}_{B_v}\). This is a simple matrix product procedure that consists of computing \(B_v^{2^k}\) for increasing \(k > 0\) until convergence. In the worst case, the number of iterations is \(\log _2{N_v}\); on average, it is generally a low constant value. Each iteration consists of a self-product whose time cost is, in theory, \({\mathcal {O}}(N_v^3)\). However, since \(B_v\) is initially reflexive, the nonzero values in \(B_v\) remain nonzero in the successive product matrices. Concerning the zero elements, only those whose row or column was modified at the previous step need to be recomputed (otherwise, the value necessarily remains zero). Finally, only the zero elements corresponding to rows and/or columns that were modified at the previous steps need an update, whose cost is in the worst case linear with respect to the number of nonzero values in the corresponding row and column. In practice, this computation can be stopped as soon as a position where both the row and the column carry a nonzero value is found. The overall cost for this computation is then \({\mathcal {O}}(N_v^\alpha )\) with \(2 \le \alpha \le 3\), and \(\alpha \) close to 2 in many cases.
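
A simplified, set-based version of this closure step can be sketched as follows; the early-exit refinements described above are deliberately left out:

```python
def reflexive_transitive_closure(rows):
    """Closure of a Boolean matrix given as a list of sets: rows[i]
    holds the column indices of the nonzero entries of row i. Each
    outer pass replaces row i by the union of the rows it points to,
    which amounts to (at least) squaring the matrix in place; since
    the matrix is reflexive, rows only grow, and the loop stops when
    a full pass changes nothing."""
    closed = [set(r) | {i} for i, r in enumerate(rows)]  # force reflexivity
    changed = True
    while changed:
        changed = False
        for i in range(len(closed)):
            new_row = set().union(*(closed[j] for j in closed[i]))
            if new_row != closed[i]:
                closed[i] = new_row
                changed = True
    return closed
```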

D.8 Hasse Diagram Enrichment (Sects. 11.2 and 11.3)

The final step for building the component-graph consists of defining the function \(\theta \) (resp. \(\varepsilon \)) that provides, for each value \(v\) of \(V\) (resp. each edge \((v,w)\) of \(\prec \)), the set of the labels of the canonical leaf-points corresponding to the valued connected components at value \(v\) (resp. the couples of labels of canonical leaf-points that are linked by the \(\blacktriangleleft \) relation between values \(v\) and \(w\) in the component-graph).

On the one hand, the mapping \(\theta \) is modelled by a vector of integer vectors \({\mathcal {V}}_\theta \) of length \(|V|\). For each \(i \in [\![1,|V|]\!]\), the vector \({\mathcal {V}}_\theta [i]\) contains all the indices \(j\) such that \({\mathcal {V}}_\varLambda [j] \in \theta ({\mathcal {V}}_V[i])\). The space cost of \({\mathcal {V}}_\theta \) is \(|\varTheta |\). The time cost for the computation of each of its \(|V|\) vectors \({\mathcal {V}}_\theta [i]\) (from \({\mathcal {V}}_{C_v}\)) is \({\mathcal {O}}(N_v)\) with \(v = {\mathcal {V}}_V[i]\): it is sufficient to scan the vector \({\mathcal {V}}_{C_v}\) and to add to \({\mathcal {V}}_\theta [i]\) all the indices \({\mathcal {V}}_{C_v}[j][0]\), for \(1 \le j \le N_v\), such that \({\mathcal {V}}_{C_v}[j][0] = {\mathcal {V}}_{C_v}[j][1]\).
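
With the sparse layout above, this scan is a one-liner; a sketch reusing the hypothetical v_c structure from the toy example:

```python
def build_theta_row(v_c):
    """Canonical leaf-points at value v: row j is canonical when its
    own index equals the smallest index of its connected component,
    i.e. when v_c[j][0] == v_c[j][1]."""
    return [row[0] for row in v_c if row[0] == row[1]]

# With the toy v_c above: build_theta_row(v_c) == [2, 9]
```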

On the other hand, the mapping \(\varepsilon \) is modelled by a vector of vectors of vectors of couples of integers \({\mathcal {V}}_\varepsilon \) of length \(|V|\). For each valid couple of indices \((i,j)\), the vector \({\mathcal {V}}_\varepsilon [i][j]\) contains all the couples \((\ell ,k)\) of indices such that \({\mathcal {V}}_\varPhi [{\mathcal {V}}_\varLambda [\ell ]]\) and \({\mathcal {V}}_\varPhi [{\mathcal {V}}_\varLambda [k]]\) are leaf-points satisfying \({\mathcal {V}}_{I_\varPhi }[{\mathcal {V}}_\varPhi [{\mathcal {V}}_\varLambda [\ell ]]] = {\mathcal {V}}_V[i]\), \({\mathcal {V}}_{I_\varPhi }[{\mathcal {V}}_\varPhi [{\mathcal {V}}_\varLambda [k]]] = {\mathcal {V}}_V[j]\) and \((k,{\mathcal {V}}_V[j]) \blacktriangleleft (\ell ,{\mathcal {V}}_V[i])\). The space cost of \({\mathcal {V}}_\varepsilon \) is \(|\mathord {\blacktriangleleft }|\). The time cost for the computation of each of its \(|\mathord {\prec }|\) vectors \({\mathcal {V}}_\varepsilon [i][j]\) (from \({\mathcal {V}}_\theta [j]\) and \({\mathcal {V}}_{C_{{\mathcal {V}}_V[i]}}\)) is \({\mathcal {O}}(N_v)\) with \(v = {\mathcal {V}}_V[i]\): for each \(a\) in \({\mathcal {V}}_\theta [j]\), one has to add the couple \(({\mathcal {V}}_{C_{{\mathcal {V}}_V[i]}}[b][1],a)\), where \(b\) is the index such that \({\mathcal {V}}_{C_{{\mathcal {V}}_V[i]}}[b][0] = a\).
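
A sketch of this last step, under the same hypothetical conventions (theta_j stands for the vector \({\mathcal {V}}_\theta [j]\), v_c_i for the sparse closure at value \({\mathcal {V}}_V[i]\)):

```python
def build_epsilon_cell(theta_j, v_c_i):
    """For an edge (V_V[i], V_V[j]) of the Hasse diagram, link each
    canonical leaf-point a at value V_V[j] to the canonical
    representative (smallest connected index) of the component that
    contains a in the closure at value V_V[i]."""
    canonical_at_i = {row[0]: row[1] for row in v_c_i}
    return [(canonical_at_i[a], a) for a in theta_j if a in canonical_at_i]
```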
