**SOME ISSUES CONCERNING THE DESCRIPTION OF MINIMAL FACES OF CONVEX SETS**

In this appendix, we present results on properties of convex sets in finite-dimensional spaces. Most of these properties are well known or follow from more general results. However, since no concise presentation suitable for the purposes of the present paper is available, these results are expounded here in a self-contained fashion. The basic results are presented in the form of theorems, corollaries to them, and remarks; the division into theorems, corollaries, and remarks is motivated mainly by topical and structural considerations. Some proofs are omitted because of their simplicity or wide availability in the literature.

**Minimal Faces of Convex Sets**

**Theorem A1.** *Each closed convex set \(V = \operatorname{co} \operatorname{cl} V \subset {{\mathbb{R}}^{n}}\) of finite dimension that includes the linear manifold \(y + L\) is the direct sum \(L \oplus V'\) of the corresponding linear subspace \(L\) and a closed convex set \(V'\) of the dimension complementary* (*in* \({{\mathbb{R}}^{n}}\)) *to \(L\). Furthermore, every minimal face of such a set has the same structure \({{\Gamma }_{{min}}}({{y}_{0}}) = L \oplus \Gamma'\) with \(\Gamma' = \operatorname{co} \operatorname{cl} \Gamma' = {{\Gamma }_{{min}}}(y_{0}^{'})\) and \(y_{0}^{'} \in V'\) such that \({{y}_{0}} - y_{0}^{'} \in L\).*

**Theorem A2.** *For all convex sets \(W \subset V \subset {{\mathbb{R}}^{n}}\) and for \({{y}_{0}} \in {{\operatorname{int} }_{{min}}}W\), \(y \in {{\partial }_{{min}}}W\), any supporting hyperplane \(\Pi ({{y}_{0}},V)\) coincides with one of the hyperplanes \(\Pi (y,V)\).*

**Proof.** Only the case \(W \subset \partial V\) is not entirely trivial. In this case, since \({{y}_{0}} \in {{\operatorname{int} }_{{min}}}W\), the straight line connecting \(y\) with \({{y}_{0}}\) contains an interval that contains \({{y}_{0}}\) and lies entirely in the supporting hyperplane \(\Pi ({{y}_{0}},V)\) (by its definition); hence \(y \in \Pi ({{y}_{0}},V)\) and, therefore, \(\Pi ({{y}_{0}},V) \in \left\{ {\Pi (y,V)} \right\}\).

**Corollary 1.** Under the conditions of the theorem, \({{\Gamma }_{{min}}}\left( y \right) \subset {{\Gamma }_{{min}}}({{y}_{0}})\).

**Corollary 2.** For all convex sets \(W \subset V \subset {{\mathbb{R}}^{n}}\) and for \({{y}_{{0,1}}} \in {{\operatorname{int} }_{{min}}}W\), the collections of supporting hyperplanes \(\left\{ {\Pi ({{y}_{0}},V)} \right\}\) and \(\left\{ {\Pi ({{y}_{1}},V)} \right\}\) are identical. In particular, \({{\Gamma }_{{min}}}({{y}_{1}}) = {{\Gamma }_{{min}}}({{y}_{0}})\).

**Proof.** It is sufficient to construct an open (in the internal topology of \(W\)) convex neighborhood \({{W}_{1}}\) such that \({{y}_{0}} \in {{W}_{1}}\) and \({{y}_{1}} \in {{\partial }_{{min}}}{{W}_{1}}\). Next, we replace \(W\) by \({{W}_{1}}\) in Theorem A2.

Applying Theorem A2 (or its Corollary 2 when there is no boundary) with \(K = W = V\) yields the following result for the wedge \(K\).

**Corollary 3.** The assertions of Theorem A2 and of Corollary 1 are valid for all \({{y}_{0}} \in K\) and \(y \in E(K)\).

For the wedge \(K \subset {{\mathbb{R}}^{n}}\), define \(L = E(K)\) and denote by \({{L}^{ \bot }}\) the orthogonal complement of \(L\) in \({{\mathbb{R}}^{n}}\); then we can write \(K = L \oplus K'\), where \(K' = K \cap {{L}^{ \bot }}\) is a cone in \({{L}^{ \bot }}\). For the cone \(K' \subset {{L}^{ \bot }}\), we have \({{\Gamma }_{{min}}}\left( 0 \right) = \left\{ 0 \right\}\): in the space \({{L}^{ \bot }}\), we can construct a basis consisting of supporting vectors; more precisely, this basis consists of \(\left\{ {{{e}_{l}}} \right\} \subset {{\operatorname{int} }_{{min}}}K'\) supplemented, if needed, by an orthogonal set that complements the linear span \({\text{Lin}}(K') = \left\{ {\sum {{{a}_{j}}{{x}_{j}}} ,\;{{a}_{j}} \in K',\;{{x}_{j}} \in \mathbb{R}} \right\}\) to \({{L}^{ \bot }} = {\text{Lin}}(K' \cup \{ {{e}_{l}}\} )\). Applying Theorem A1 to the wedge \(\operatorname{cl} K\), we find that \({{\Gamma }_{{min}}}\left( 0 \right) = E\left( K \right)\) for \(K\); together with Corollary 1, this equality implies the following result.

**Corollary 4.** For any \(y \in E(K)\), it holds that \({{\Gamma }_{{min}}}\left( y \right) = E\left( K \right)\).
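The decomposition \(K = L \oplus K'\) with \(L = E(K)\) can be probed numerically: since every generator \(a_j\) lies in \(K(a)\), we have \(a_j \in E(K)\) exactly when \(-a_j\) also belongs to the wedge. The following sketch (our illustration, not part of the original argument; it assumes NumPy and SciPy, and the function names are ours) tests this membership by linear-programming feasibility:

```python
import numpy as np
from scipy.optimize import linprog

def in_wedge(a, y):
    """Feasibility test for y in K(a) = {a @ x : x >= 0} (LP with zero objective)."""
    m = a.shape[1]
    res = linprog(np.zeros(m), A_eq=a, b_eq=y,
                  bounds=[(0, None)] * m, method="highs")
    return res.status == 0

def lineality_generators(a):
    """Indices j with a_j in E(K): a_j is always in K(a), so test -a_j in K(a)."""
    return [j for j in range(a.shape[1]) if in_wedge(a, -a[:, j])]

# Toy wedge: the closed upper half-plane, generated by (1,0), (-1,0), (0,1).
a = np.array([[1.0, -1.0, 0.0],
              [0.0,  0.0, 1.0]])
print(lineality_generators(a))   # the x-axis generators span L = E(K)
```

Here the first two generators span the lineality space (the \(x\)-axis), while the third generates the complementary cone \(K' = K \cap {{L}^{ \bot }}\).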

**Minimal Subsets of Wedges**

**Theorem A3.** *For every cone \({{K}_{S}}\), there exists a unique* (*up to a positive collinear equivalence*) *minimal subset \({{S}_{{min}}} = {{S}_{{min}}}(S) \subset S\) such that \({{K}_{S}} = {{K}_{{{{S}_{{min}}}}}}\).*

**Proof.** We define the following operation on the set \({{2}^{S}}\). To an arbitrary set \({{S}_{1}} \subset S\), we assign the rank \(\operatorname{rank} {{a}_{{{{S}_{1}}}}}\) of the matrix \({{a}_{{{{S}_{1}}}}} = \mathop {({{a}_{j}})}\nolimits_{j \in {{S}_{1}}} \) and find the maximal \({{S}_{2}} \subset S\) such that \(\operatorname{rank} {{a}_{{{{S}_{2}}}}} = \operatorname{rank} {{a}_{{{{S}_{1}}}}}\). Then, we select the minimal \({{S}_{3}} \subset {{S}_{2}}\) such that \({{K}_{{{{S}_{3}}}}} = {{K}_{{{{S}_{2}}}}}\). In this case, \(\operatorname{rank} {{a}_{{{{S}_{3}}}}} = \operatorname{rank} {{a}_{{{{S}_{2}}}}}\).

It turns out that, up to a positive collinear equivalence, the superposition of operations \({{S}_{1}} \to {{S}_{2}} \to {{S}_{3}}\) is one-valued. Indeed, the one-valuedness of the first operation is obvious. As a result of its application, the collection \({{\{ {{a}_{j}}\} }_{{j \in {{S}_{2}}}}}\) consists of those vectors of \({{\{ {{a}_{j}}\} }_{{j \in S}}}\) that lie in the linear span

$${{L}_{{{{S}_{1}}}}} = {\text{lin}}\left\{ {{{a}_{j}};j \in {{S}_{1}}} \right\} = \left\{ {\sum\limits_{j \in {{S}_{1}}} \,{{a}_{j}}{{x}_{j}},\;{{x}_{j}} \in \mathbb{R}} \right\}$$

of the collection \({{\{ {{a}_{j}}\} }_{{j \in {{S}_{1}}}}}\).

To verify the one-valuedness of the second operation, we restrict ourselves to the case when \({{S}_{2}}\) contains no positively collinear vectors (the extension to the general case is trivial). Suppose that two different sets \(S'\) and \(S''\) with nonempty differences can serve as \({{S}_{3}}\). Due to their minimality, there are \(j' \in S'\backslash S''\) and \(j'' \in S''\backslash S'\) such that \({{x}_{{j''}}} > 0\) in the expansion \({{a}_{{j'}}} = \sum\nolimits_{j \in S''} {{{a}_{j}}{{x}_{j}}} ,\;{{x}_{j}} \geqslant 0\). Since \({{a}_{{j''}}} = \sum\nolimits_{j \in S'} {{{a}_{j}}{{x}_{j}}} ,\;{{x}_{j}} \geqslant 0\), the substitution yields \({{a}_{{j'}}} = \sum\nolimits_{j \in S'} {{{a}_{j}}x_{j}^{'}} ,\;x_{j}^{'} \geqslant 0\) with \(x_{j}^{'} > 0\) for a certain \(j \ne j'\) due to the absence of positive collinearity. In the case \({{x}_{{j'}}} < 1\), we obtain the representation \({{a}_{{j'}}} = \sum\nolimits_{j \in S'\backslash \{ j'\} } {{{a}_{j}}x_{j}^{{''}}} ,\;x_{j}^{{''}} \geqslant 0\); i.e., the set \(S'\) can be reduced. If \({{x}_{{j'}}} \geqslant 1\), then we find \(0 = \sum\nolimits_{j \in S'} {{{a}_{j}}x_{j}^{{'''}}} ,\;x_{j}^{{'''}} \geqslant 0\), where not all \(x_{j}^{{'''}}\) are zero; therefore, the wedge \({{K}_{{S'}}}\) contains a nontrivial subspace, which contradicts the assumption that it is a cone.

For the constructed operation with \({{S}_{1}} = S\), the first step yields \(S' \to {{S}_{2}}\) for any \(S'\) such that \({{K}_{S}} = {{K}_{{S'}}}\). Therefore, the results of applying this operation to all such \(S'\) are identical, including any appropriate \({{S}_{{min}}}\). Since any \({{S}_{{min}}}\) can serve as \({{S}_{3}}\) in the case under examination, the uniqueness proved above implies \({{S}_{{min}}} = {{S}_{3}}\).
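The uniqueness statement of Theorem A3 suggests a simple computational procedure: starting from the full generator list, repeatedly discard any generator that is a nonnegative combination of the remaining ones. The sketch below is our illustration under stated assumptions (NumPy/SciPy; for a cone with no positively collinear generators the surviving index set is the \({{S}_{{min}}}\) of the theorem):

```python
import numpy as np
from scipy.optimize import linprog

def is_nonneg_combo(cols, y):
    """Does y lie in the wedge generated by the given columns?"""
    m = cols.shape[1]
    if m == 0:
        return bool(np.allclose(y, 0.0))
    res = linprog(np.zeros(m), A_eq=cols, b_eq=y,
                  bounds=[(0, None)] * m, method="highs")
    return res.status == 0

def minimal_generating_set(a):
    """Greedily drop every generator that is a nonnegative combination of
    the rest; for a cone this recovers S_min (unique by Theorem A3)."""
    keep = list(range(a.shape[1]))
    for j in range(a.shape[1]):
        if j not in keep:
            continue
        rest = [k for k in keep if k != j]
        if is_nonneg_combo(a[:, rest], a[:, j]):
            keep = rest
    return keep

# First quadrant of R^2 with a redundant interior generator (1, 1).
a = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
print(minimal_generating_set(a))
```

For this toy cone the interior generator \((1,1)\) is discarded and the two extreme rays survive.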

**The Linear Structure of Wedges**

**Theorem A4.** *The wedge \({{K}_{S}}\) is the direct sum \({{K}_{S}} = L \oplus {{K}^{ + }}\) of the linear space \(L = E({{K}_{S}})\) and a cone \({{K}^{ + }}\); moreover, they satisfy the representations \(L = {\text{lin}}\left\{ {{{a}_{j}};j \in {{S}^{0}}} \right\}\) and \({{K}^{ + }} = \operatorname{colin} \left\{ {{{a}_{j}};j \in {{S}^{ + }}} \right\}\) for the unique decomposition of \(S\) into \({{S}^{0}} = {{S}^{0}}(S)\) and \({{S}^{ + }} = {{S}^{ + }}(S)\).*

**Proof.** Assign to \({{S}^{0}}\) all \(j' \in S\) for which there is at least one representation \(y = \sum\nolimits_{j \in S} {{{a}_{j}}{{x}_{j}}} ,\;{{x}_{j}} \geqslant 0\) with \({{x}_{{j'}}} > 0\) for at least one \(y \in L\). By choosing, if required, the mean of appropriate representations, we can assume that the strict inequalities are satisfied simultaneously for all \(j' \in {{S}^{0}}\). It is clear that, if \({{a}_{{j'}}} \in L\), then \(j' \in {{S}^{0}}\). The converse is equivalent to the proposition that \({{a}_{{j''}}} \notin L\) implies \(j'' \in {{S}^{ + }} = S\backslash {{S}^{0}}\). To check this proposition, specify a subspace \({{L}^{ \bot }}\) that is orthogonal to \(L\) in \({{\mathbb{R}}^{n}}\) and satisfies \(L \oplus {{L}^{ \bot }} = {{\mathbb{R}}^{n}}\); i.e., for every \(b \in {{\mathbb{R}}^{n}}\), there exists a unique decomposition \(b = {{b}^{L}} + {{b}^{{{{L}^{ \bot }}}}}\) with the projections \({{b}^{L}} \in L\) and \({{b}^{{{{L}^{ \bot }}}}} \in {{L}^{ \bot }}\). Let us compose the matrix \({{a}^{{{{L}^{ \bot }}}}} = \left( {a_{{j''}}^{{{{L}^{ \bot }}}}} \right)\) from the projections on \({{L}^{ \bot }}\) of the vectors \({{a}_{{j''}}} \notin L\) (\(j'' \in S\)). Since \(K\left( {{{a}^{{{{L}^{ \bot }}}}}} \right) \subset {{L}^{ \bot }}\), the wedge \(K\left( {{{a}^{{{{L}^{ \bot }}}}}} \right)\) is a cone by the construction of \(L\), and \(K\left( {{{a}^{{{{L}^{ \bot }}}}}} \right) \cap L = \left\{ 0 \right\}\). Hence, we obtain both properties for \({{K}_{{\left\{ {j'':{{a}_{{j''}}} \notin L} \right\}}}}\) (an elementary verification by contradiction); this implies \(j'' \in {{S}^{ + }}\) for such \(j''\): if the expansion \(y = \sum\nolimits_{j \in S} {{{a}_{j}}{{x}_{j}}} ,\;{{x}_{j}} \geqslant 0\) includes a nonzero \({{x}_{{j''}}}\) with \(j'' \in S\) such that \({{a}_{{j''}}} \notin L\), then \(y\) does not belong to \(L\).

**Remark.** In the case of the wedge \(K(a) = {{K}_{M}}\), by \({{S}_{{min}}}(S)\) for \(S \subset {{S}^{ + }} = {{S}^{ + }}\left( M \right)\) we mean \({{S}_{{min}}}({{S}^{ + }})\) for the cone \({{K}_{{{{S}^{ + }}}}}\).

Taking into account Theorem A3 in this proof and choosing an appropriate set \(S_{{min}}^{0}\) with the minimal number of elements, we obtain the following result.

**Corollary 1.** Under the conditions of Theorem A4, \({{S}^{ + }}\) and \({{S}^{0}}\) can be replaced, respectively, by the unique (up to a positive collinear equivalence) set \(S_{{min}}^{ + } = {{S}_{{min}}}({{S}^{ + }}) \subset {{S}^{ + }}\) and a certain \(S_{{min}}^{0} \subset {{S}^{0}}\).

**Remark.** In the robust case, \(S_{{min}}^{0}\) can be chosen such that \(\left| {S_{{min}}^{0}} \right| = \dim L + 1\) (in a partition into \(\dim L\)-dimensional simplexes with vertices at the points \({{\left\{ {{{a}_{j}}} \right\}}_{{j \in {{S}_{{min}}}}}}\) of a bounded domain in \({{K}_{{{{S}^{0}}}}}\) containing the origin in its interior, a unique simplex containing the origin is chosen). In general, \(\left| {S_{{min}}^{0}} \right| \leqslant 2\dim L\) (if the origin is on the boundary of a simplex, which reduces the dimension of the problem by one, then in the case \({\mathbf{0}} \in {{K}_{{{{S}^{0}}}}}\) there are two elements in \({{\left\{ {{{a}_{j}}} \right\}}_{{j \in {{S}_{{min}}}}}}\) that lie on different sides of this boundary).

**Corollary 2.** Under the conditions of Corollary 1, the irremovable vectors \({{a}_{{j'}}}\) satisfy \(j' \in S_{{min}}^{ + }\).

**Proof.** Since \({\mathbf{0}} \in E(K)\), Theorem A4 implies that, for every \(j \in {{S}^{0}}\), it holds that \({{a}_{j}} \in \operatorname{lin} {{\left\{ {{{a}_{l}}} \right\}}_{{l \in {{S}^{0}}\backslash \{ j\} }}}\); therefore, the dimension of the linear span of the collection \(\left\{ {{{a}_{1}}, \ldots ,{{a}_{m}}} \right\}\) is preserved after the vector \({{a}_{j}}\) is removed from it. Hence, \(j' \notin {{S}^{0}}\). In the case \(j' \in {{S}^{ + }}\backslash S_{{min}}^{ + }\), ignoring the vector \({{a}_{{j'}}}\) in the expansions of the elements of the wedge \(K\) (i.e., nullifying the corresponding coefficients) preserves this wedge due to Theorems A3 and A4, which contradicts the irremovability of \({{a}_{{j'}}}\).

For the wedge \(K(a)\), we construct by induction a finite collection of sets \(\Sigma = \bigcup\nolimits_{i = 0}^{n'} {\bigcup\nolimits_l^{} {S_{i}^{l}} } \), where \(n{\kern 1pt} ' = \operatorname{rank} {{a}_{{{{S}^{ + }}}}}\). Define \({{S}_{0}} = {{S}_{{min}}}({{S}^{ + }}) \cup S_{{min}}^{0}\) and \(S_{i}^{l} = {{S}_{{min}}}(S) \cup S_{{min}}^{0}\) for \(i \geqslant 1\) and \(l \in \mathbb{N}\), where \(S \subset {{S}^{ + }}\) is such that \(\operatorname{rank} {{a}_{S}} = \operatorname{rank} {{a}_{{{{S}^{ + }}}}} - i\) and \(K({{a}_{S}}) \subset {{\partial }_{{min}}}K({{a}_{{S_{{i - 1}}^{{l'}}}}})\) for a certain \(l{\kern 1pt} '\). Due to Theorem A3, if \(K(a)\) is a cone, then the collection thus defined is unique up to a positive collinear equivalence.

**Corollary 3.** For the collection of subsets \(\Sigma \), it holds that \(K(a) = \bigcup\nolimits_{S \in \Sigma } {{{{\operatorname{int} }}_{{min}}}{{K}_{S}}} \) and \({{\operatorname{int} }_{{min}}}{{K}_{{{{S}_{1}}}}} \cap {{\operatorname{int} }_{{min}}}{{K}_{{{{S}_{2}}}}} = \emptyset \) for different \({{S}_{{1,2}}} \in \Sigma \).

**Proof.** By Corollary 1, for every \(y \in K(a)\), there exists a representation \(y = \sum\nolimits_{j \in S_{{min}}^{ + } \cup S_{{min}}^{0}}^{} {{{a}_{j}}{{x}_{j}}} ,\;{{x}_{j}} \geqslant 0\) with the maximal set \(\sigma (y) \subset S_{{min}}^{ + } \cup S_{{min}}^{0}\) of positive terms (here *maximal* means that the set cannot be expanded due to small variations of nonzero coefficients). Since the terms with the indices \(j \in S_{{min}}^{0}\) can always be made positive, \(S_{{min}}^{0} \subset \sigma (y)\) for every \(y \in K(a)\). It is clear that \(y \in {{\operatorname{int} }_{{min}}}{{K}_{S}}\) if and only if \(S = \sigma (y)\). For every \(y \in K(a)\), at least one appropriate option will be realized. This yields the first equality, and the logical inconsistency of different options entails the second equality.

**Wedges as Finite Intersections**

**Theorem A5.** *For the \(n \times m\) matrix \(a\) of \(\operatorname{rank} a = n\), the wedge \(K(a) \subset {{\mathbb{R}}^{n}}\) is the intersection of the uniquely defined collection of half-spaces \(\left\{ {\Pi _{{{{\nu }_{k}}}}^{ + }(0)} \right\}\), where \(k \in \left\{ {1, \ldots ,m'} \right\}\) and \(m' \leqslant m\).*

**Proof.** First, consider the case of a cone. Applying Theorem A3 to \(S = \left\{ {1, \ldots ,m} \right\}\), we set without loss of generality \({{S}_{{min}}} = \left\{ {1, \ldots ,m{\kern 1pt} ''} \right\}\), where \(n \leqslant m{\kern 1pt} '' \leqslant m\). It is clear that \(\operatorname{rank} {{a}_{{{{S}_{{min}}}}}} = n\), where \({{a}_{{{{S}_{{min}}}}}}\) is the matrix consisting of the columns \({{a}_{j}}\), \(j \in {{S}_{{min}}}\).

The case \(m'' = n\). Here \({{a}_{{{{S}_{{min}}}}}}\) is a square matrix with \(\det {{a}_{{{{S}_{{min}}}}}} \ne 0\). Define \(\eta = {{\left( {a_{{{{S}_{{min}}}}}^{{\text{T}}}} \right)}^{{ - 1}}}\). It is easy to verify that \(K(\eta ) = K{\text{*}}(a)\). Since \(K(a) = K{\text{*}}{\text{*}}(a)\), we have \(K(a) = K{\text{*}}(\eta )\). On the right, we have the cone of vectors that form nonobtuse angles with each of the \(n\) columns \({{\eta }_{j}}\) of the matrix \(\eta \); i.e., this is the intersection of the positive half-spaces corresponding to these vectors: \(K{\text{*}}(\eta ) = \bigcap\nolimits_{j = 1}^n {\Pi _{{{{\eta }_{j}}}}^{ + }(0)} \). It remains to choose \(m' = m''\) and \({{\nu }_{j}} = {{\eta }_{j}}\).
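In the square case the duality \(K(a) = K{\text{*}}(\eta )\) with \(\eta = {{(a^{\text{T}})}^{ - 1}}\) is easy to check numerically. The sketch below is our illustration (NumPy assumed; the matrix is a toy choice, not data from the text) and verifies that the generator description and the half-space description agree:

```python
import numpy as np

# Columns of `a` generate a simplicial cone in R^2 (det a != 0).
a = np.array([[2.0, 0.0],
              [1.0, 1.0]])
eta = np.linalg.inv(a.T)          # eta = (a^T)^{-1}, so eta.T @ a = I

def in_cone_primal(y, tol=1e-12):
    """Generator description: y = a @ x with x >= 0."""
    return bool(np.all(np.linalg.solve(a, y) >= -tol))

def in_cone_dual(y, tol=1e-12):
    """Outer description K*(eta): y in every half-space with normal eta_j."""
    return bool(np.all(eta.T @ y >= -tol))

for y in ([3.0, 2.0], [-1.0, 1.0], [2.0, 1.0]):
    y = np.array(y)
    assert in_cone_primal(y) == in_cone_dual(y)   # the two descriptions agree
```

The biorthogonality \(\eta^{\text{T}} a = I\) is exactly why each column \(\eta_j\) is orthogonal to all generators but one, so its half-space supports the cone along a facet.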

The case \(m'' > n\). In this case, we triangulate the cone \(K({{a}_{{{{S}_{{min}}}}}})\), i.e., represent it as \(K({{a}_{{{{S}_{{min}}}}}}) = \bigcup\nolimits_{l = 1}^\Lambda {{{K}_{{{{S}_{l}}}}}} \) for a certain \(\Lambda \in \mathbb{N}\) with \({{S}_{{min}}} = \bigcup\nolimits_{l = 1}^\Lambda {{{S}_{l}}} \), where \(\left| {{{S}_{l}}} \right| = n\) and \(\operatorname{rank} {{a}_{{{{S}_{l}}}}} = n\) for all \(l \in \left\{ {1, \ldots ,\Lambda } \right\}\), so that \({{\operatorname{int} }_{{min}}}{{K}_{{{{S}_{l}}}}} \cap {{\operatorname{int} }_{{min}}}{{K}_{{{{S}_{{l'}}}}}} = \emptyset \) for \(l' \ne l\). Put \({{\eta }^{{{{S}_{l}}}}} = {{\left( {a_{{{{S}_{l}}}}^{{\text{T}}}} \right)}^{{ - 1}}}\), and construct the adjoint cones \(K\left( {{{\eta }^{{{{S}_{l}}}}}} \right) = K_{{{{S}_{l}}}}^{ * }\) spanned by the columns \(\eta _{j}^{{{{S}_{l}}}}\) (\(j \in \left\{ {1, \ldots ,n} \right\}\)) of the matrices \({{\eta }^{{{{S}_{l}}}}}\). Their intersection is not empty because \(\bigcap\nolimits_{l = 1}^\Lambda {K_{{{{S}_{l}}}}^{ * }} = K{\text{*}}({{a}_{{{{S}_{{min}}}}}})\), and \(K{\text{*}}({{a}_{{{{S}_{{min}}}}}}) \ne \emptyset \) because the convex cone \(K(a)\) has a supporting vector at zero. Each element of the convex cone \(K{\text{*}}({{a}_{{{{S}_{{min}}}}}}) = \bigcap\nolimits_{l = 1}^\Lambda {K\left( {{{\eta }^{{{{S}_{l}}}}}} \right)} \) is a positive linear combination of the vectors \(\left\{ {{{\eta }_{k}},k = 1, \ldots ,m'} \right\}\) that form its edges; each of these edges is the intersection of a collection of \(\left( {n - 1} \right)\) hyperplanes \({{\Pi }_{{{{a}_{j}}}}}({\mathbf{0}})\), each determined by the orthogonality condition to one of the vectors \({{a}_{j}}\) forming the columns of the matrix \({{a}_{{{{S}_{{min}}}}}}\); in each such collection, all \(\left( {n - 1} \right)\) vectors \({{a}_{j}}\) are linearly independent.
In the cone \(K({{a}_{{{{S}_{{min}}}}}})\), which is the adjoint of \(K{\text{*}}({{a}_{{{{S}_{{min}}}}}})\), such intersections are associated with the hyperplanes \({{\Pi }_{{{{\eta }_{k}}}}}({\mathbf{0}})\) spanned by the abovementioned collections of \(\left( {n - 1} \right)\) vectors \({{a}_{j}}\). By choosing the direction of each vector \({{\eta }_{k}}\) so as to ensure the inclusion \(K({{a}_{{{{S}_{{min}}}}}}) \subset \Pi _{{{{\eta }_{k}}}}^{ + }({\mathbf{0}})\), we obtain \(K({{a}_{{{{S}_{{min}}}}}}) = \bigcap\nolimits_{k = 1}^{m'} {\Pi _{{{{\eta }_{k}}}}^{ + }(0)} \). It remains to choose \({{\nu }_{k}} = {{\eta }_{k}}\).

If the wedge is not a cone, we use Theorem A4 and repeat the above construction for the lower-dimensional cone involved in it; the desired half-spaces are then the direct sums of the half-spaces of this lower dimension with the linear space from Theorem A4.

In both cases, the one-valuedness follows from the fact that, if two finite-dimensional sets with nonempty interiors that are finite intersections of minimal collections of half-spaces are identical, then these collections are identical as well.

This is a consequence of the fact that the boundary of each half-space in the appropriate minimal collection contains a (*planar*) point of the set, i.e., a point that is interior with respect to the intersection of the set with the boundary of this half-space. The existence of such points obviously implies that the half-space under examination must be included in the collection (strictly speaking, the closure of the complement of this half-space would also cover the planar points, but only if the set were localized on their common boundary; this is impossible if the set has a nonempty interior). The planar points are the points that belong to the boundary of only one half-space. If there are no such points for the half-space under examination, then this half-space is redundant in the collection and, therefore, the collection is not minimal. The fact that such a half-space can be removed from the collection follows from the fact that its boundary can be excluded from the boundary of the set and from the fact that the presence of an interior point establishes a one-to-one correspondence between the half-spaces containing this point and their boundaries. This implies that the segment connecting the interior point with a boundary point of the half-space being checked already lies in the remaining half-spaces of the collection; therefore, the half-space being checked may be removed from the collection after its boundary is removed.

**Corollary 1.** Under the conditions of the theorem, for each \({{y}_{0}} \in K(a)\), every (generalized) face \(\Gamma ({{y}_{0}})\) of the cone \(K(a)\) is the intersection of a uniquely determined collection of hyperplanes \(\left\{ {{{\Pi }_{{{{\nu }_{{k'}}}}}}({\mathbf{0}})} \right\}\), obtained by replacing some of the half-spaces \(\left\{ {\Pi _{{{{\nu }_{{k'}}}}}^{ + }({\mathbf{0}})} \right\}\) from the collection \(\left\{ {\Pi _{{{{\nu }_{k}}}}^{ + }({\mathbf{0}})} \right\}\) by their boundaries, taken together with the other half-spaces of the collection.

**Proof.** From the collection of half-spaces \(\left\{ {\Pi _{{{{\nu }_{k}}}}^{ + }(0)} \right\}\) (\(k \in \left\{ {1, \ldots ,m{\kern 1pt} '} \right\}\)) mentioned in the condition of the theorem, we select the half-spaces with the indices \(k \in Z({{y}_{0}})\) for which the point \({{y}_{0}}\) is on their boundary \({{\Pi }_{{{{\nu }_{k}}}}}({\mathbf{0}})\). The collection of hyperplanes \(\mathop {\left\{ {{{\Pi }_{{{{\nu }_{k}}}}}({\mathbf{0}})} \right\}}\nolimits_{k \in Z({{y}_{0}})} \) that determine the boundaries of these half-spaces forms the desired collection \(\left\{ {{{\Pi }_{{{{\nu }_{{k'}}}}}}({\mathbf{0}})} \right\}\).
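In coordinates, the index set \(Z({{y}_{0}})\) is simply the set of active constraints of the outer description. A minimal sketch (our NumPy illustration; the half-space normals \({{\nu }_{k}}\) are a toy choice, not data from the text):

```python
import numpy as np

# Outer description of the first quadrant: two half-spaces with normals nu_k.
nu = np.array([[1.0, 0.0],    # nu_1: half-space x >= 0
               [0.0, 1.0]])   # nu_2: half-space y >= 0

def active_set(y0, tol=1e-12):
    """Z(y0): indices k with y0 on the boundary hyperplane of the k-th half-space."""
    return [k for k in range(nu.shape[0]) if abs(nu[k] @ y0) <= tol]

print(active_set(np.array([0.0, 2.0])))   # a point on the ray x = 0
print(active_set(np.array([0.0, 0.0])))   # the apex activates both hyperplanes
```

Intersecting the hyperplanes \({{\Pi }_{{{{\nu }_{k}}}}}({\mathbf{0}})\), \(k \in Z({{y}_{0}})\), with the cone then carves out the face through \({{y}_{0}}\), as in Corollary 2.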

**Corollary 2.** Under the conditions of the theorem, for each \({{y}_{0}} \in K(a)\), every (generalized) face \(\Gamma ({{y}_{0}})\) of the wedge \(K(a)\) is the intersection of this wedge with the intersection of a finite collection of supporting hyperplanes \(\mathop {\left\{ {{{\Pi }_{{{{\nu }_{k}}}}}({\mathbf{0}})} \right\}}\nolimits_{k \in Z({{y}_{0}})} \). The same is true for the minimal face \({{\Gamma }_{{min}}}({{y}_{0}})\).

**Proof.** When the half-spaces that do not degenerate at the point \({{y}_{0}}\) into one of the hyperplanes \(\mathop {\left\{ {{{\Pi }_{{{{\nu }_{k}}}}}({\mathbf{0}})} \right\}}\nolimits_{k \in Z({{y}_{0}})} \) are removed from the construction in the proof of Corollary 1, the intersection with those half-spaces must be replaced by the intersection with \(K(a)\). Since \({{y}_{0}} \in {{\Pi }_{{{{\nu }_{k}}}}}({\mathbf{0}})\) for each \(k \in Z({{y}_{0}})\) in this case, each such \({{\Pi }_{{{{\nu }_{k}}}}}({\mathbf{0}})\) is a supporting hyperplane at the point \({{y}_{0}}\).

**Minimal Faces as Convex Linear Spans**

**Theorem A6.** *Each minimal face \({{\Gamma }_{{min}}}({{y}_{0}}) \subset K(a)\) of the wedge is uniquely determined by a subset \(S = S({{y}_{0}}) \in {{2}^{M}}\) as the convex linear span \({{K}_{S}}\) of the vectors \({{a}_{j}},\;j \in S\).*

**Remark.** Taking into account Corollary 2 to Theorem A2, we conclude that the minimal face \({{\Gamma }_{{min}}}({{y}_{0}})\) of the point \({{y}_{0}} \in K(a)\) is the minimal face for each point \(y \in {{\operatorname{int} }_{{min}}}{{\Gamma }_{{min}}}({{y}_{0}})\) and only for these points. In particular, \({{y}_{0}} \in {{\operatorname{int} }_{{min}}}{{\Gamma }_{{min}}}({{y}_{0}})\).

**Proof.** To verify the assertion of the theorem, it suffices to show that \({{\Gamma }_{{min}}}({{y}_{0}}) = {{K}_{S}}\) for \({{y}_{0}} \in {{\operatorname{int} }_{{min}}}{{K}_{S}}\). Indeed, due to Corollary 3 to Theorem A4, we can take \(\Sigma \subset {{2}^{M}}\) as an appropriate collection of sets, so that for \({{y}_{0}} \in K(a)\) there exists an \(S \in \Sigma \) such that \({{y}_{0}} \in {{\operatorname{int} }_{{min}}}{{K}_{S}}\).

To prove the inclusion \({{K}_{S}} \subset {{\Gamma }_{{min}}}({{y}_{0}})\), we note that \({{y}_{0}}\) belongs to \({{\operatorname{int} }_{{min}}}{{K}_{S}}\) together with its neighborhood \({{y}_{0}} + {{O}_{\varepsilon }}({\mathbf{0}})\), where \({{O}_{\varepsilon }}({\mathbf{0}})\) is the neighborhood of zero of radius \(\varepsilon > 0\) in \({{L}_{S}}\). In particular, this implies that, for every \(l \in {{L}_{S}}\), \({{\operatorname{int} }_{{min}}}{{K}_{S}}\) contains the segment \({{I}_{{\varepsilon '}}} = ({{y}_{0}} - \varepsilon {\kern 1pt} {\text{'}}l,{{y}_{0}} + \varepsilon {\kern 1pt} {\text{'}}l)\) of the straight line passing through \({{y}_{0}}\) for a certain \(\varepsilon {\kern 1pt} ' > 0\). Since \({{I}_{{\varepsilon '}}} \subset {{K}_{S}} \subset K(a)\), we have \({{(\nu ,l)}_{n}} = 0\) for every supporting vector \(\nu = \nu ({{y}_{0}},V)\); i.e., \(\nu \in L_{S}^{ \bot }\). This implies the inclusion \({{L}_{S}} \subset P({{y}_{0}})\); i.e., taking into account the intersections with \(K(a)\) (\({{L}_{S}} \cap K(a) = {{K}_{S}}\) and \(P({{y}_{0}}) \cap K(a) = {{\Gamma }_{{min}}}({{y}_{0}})\)), this also implies the inclusion to be proved.

To prove the inverse inclusion \({{\Gamma }_{{min}}}({{y}_{0}}) \subset {{K}_{S}}\), it suffices to show (taking into account the intersections with \(K(a)\), see above) that \({{L}_{S}}\) is the intersection of a finite (possibly empty, and in this case \({{L}_{S}} = {{\mathbb{R}}^{n}}\)) collection of supporting hyperplanes at the point \({{y}_{0}} \in {{\operatorname{int} }_{{min}}}{{K}_{S}}\). If \(\operatorname{rank} a = n{\kern 1pt} ' < n\), i.e., if \(K(a)\) is contained within the subspace \({{L}_{{n'}}} \subset {{\mathbb{R}}^{n}}\) of dimension \(n{\kern 1pt} '\), then we can represent \({{L}_{{n'}}}\) as the intersection of \(\left( {n - n{\kern 1pt} '} \right)\) hyperplanes; then, by solving the problem in the space \({{\mathbb{R}}^{{n'}}}\), which is isomorphic to \({{L}_{{n'}}}\), we can construct the solution in the original space \({{\mathbb{R}}^{n}}\) by extending the found hyperplanes from \({{L}_{{n'}}}\) to \({{\mathbb{R}}^{n}}\) orthogonally to \({{L}_{{n'}}}\). For this reason, we may assume below without loss of generality that \(\operatorname{rank} a = n\). In this case, we use Theorem A5 and select from the collection of half-spaces \(\left\{ {\Pi _{{{{\nu }_{k}}}}^{ + }({\mathbf{0}})} \right\}\) mentioned in this theorem the half-spaces with the indices \(k \in Z({{y}_{0}})\) for which the point \({{y}_{0}}\) is on their boundary \({{\Pi }_{{{{\nu }_{k}}}}}({\mathbf{0}})\). The collection of hyperplanes \(\mathop {\left\{ {{{\Pi }_{{{{\nu }_{k}}}}}({\mathbf{0}})} \right\}}\nolimits_{k \in Z({{y}_{0}})} \) that specify the boundaries of those half-spaces is the desired collection of supporting hyperplanes. The fact that this collection is sufficient for determining the minimal manifold \(P({{y}_{0}})\) follows from Corollary 2 to Theorem A5.

Taking into account Theorem A4, this result implies the following corollary.

**Corollary 1.** In the notation of the theorem, the set \(S\) has the form \(S = {{S}_{{min}}}(S') \cup S_{{min}}^{0}\) for a certain \(S' \subset {{S}^{ + }}\left( M \right)\); moreover, in the case of a cone, \(S_{{min}}^{0} = \emptyset \).

**Corollary 2.** For \({{y}_{0}} \in K(a)\), the minimal face is given by \({{\Gamma }_{{min}}}({{y}_{0}}) = \left\{ {y = \sum\nolimits_{j = 1}^m {{{a}_{j}}{{\xi }_{j}}x_{j}^{0}} ,\;{{\xi }_{j}} \geqslant 0} \right\}\) with \(x_{j}^{0} \geqslant 0\) and a maximal number of nonzero terms in the expansion \({{y}_{0}} = \sum\nolimits_{j = 1}^m {{{a}_{j}}x_{j}^{0}} \) (here *maximal* means that the number of terms cannot be increased by small variations of \(x_{j}^{0} \geqslant 0\)). In this case, \({{\operatorname{int} }_{{min}}}{{\Gamma }_{{min}}}({{y}_{0}}) = \left\{ {y = \sum\nolimits_{j = 1}^m {{{a}_{j}}{{\xi }_{j}}x_{j}^{0}} ,\;{{\xi }_{j}} > 0} \right\}\).

**Proof.** The membership \({{y}_{0}} \in {{\operatorname{int} }_{{min}}}{{K}_{S}}\) implies that there exists an expansion \({{y}_{0}} = \sum\nolimits_{j \in S} {{{a}_{j}}x_{j}^{0}} \) with \(x_{j}^{0} > 0\), and vice versa. Multiplying the coefficients \(x_{j}^{0} > 0\) by factors \({{\xi }_{j}} > 0\) does not lead out of the set \({{\operatorname{int} }_{{min}}}{{K}_{S}}\), while nullifying some of the \({{\xi }_{j}}\) can lead to the boundary of \({{K}_{S}}\). As a result, we obtain the wedge \({{K}_{S}}\), which, by Theorem A6 and the remark to it, coincides with \({{\Gamma }_{{min}}}({{y}_{0}})\).
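Corollary 2 reduces the computation of \({{\Gamma }_{{min}}}({{y}_{0}})\) to finding the maximal support \(\sigma ({{y}_{0}})\) of a representation of \({{y}_{0}}\). Since supports of representations are closed under averaging, \(\sigma ({{y}_{0}})\) is the set of indices \(j\) attainable with \({{x}_{j}} > 0\) in some representation, which can be found by one small linear program per index. The sketch below is our illustration under stated assumptions (Python/SciPy; the function name and the bound `cap` are ours, the latter chosen merely to keep each LP bounded on the toy example):

```python
import numpy as np
from scipy.optimize import linprog

def maximal_support(a, y0, cap=10.0, tol=1e-9):
    """sigma(y0): indices j attainable with x_j > 0 in some representation
    y0 = a @ x, x >= 0.  One bounded LP per index (maximize x_j); `cap`
    must exceed the coefficients actually needed for y0."""
    m = a.shape[1]
    sigma = []
    for j in range(m):
        c = np.zeros(m)
        c[j] = -1.0                       # linprog minimizes, so maximize x_j
        res = linprog(c, A_eq=a, b_eq=y0,
                      bounds=[(0, cap)] * m, method="highs")
        if res.status == 0 and -res.fun > tol:
            sigma.append(j)
    return sigma

# First quadrant of R^2 plus the diagonal generator (1, 1).
a = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
print(maximal_support(a, np.array([1.0, 0.0])))    # boundary ray: a proper face
print(maximal_support(a, np.array([0.5, 0.25])))   # interior point: whole cone
```

For the boundary point the support is a single extreme ray, so \({{\Gamma }_{{min}}}({{y}_{0}}) = {{K}_{{\sigma ({{y}_{0}})}}}\) is that ray; for the interior point all three generators enter and the minimal face is the whole cone, in line with the remark to Theorem A6.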