# On the Strong Convergence of Subgradients of Convex Functions


## Abstract

In this paper, results on the strong convergence of subgradients of convex functions along a given direction are presented; that is, the relative compactness (with respect to the norm) of the union of subdifferentials of a convex function along a given direction is investigated.

## Keywords

Convexity · Subdifferentials · Strong convergence of subgradients · Gâteaux derivative

## Mathematics Subject Classification

Primary 49J52 · Secondary 52A41 · 41A65

## 1 Introduction

There are several results on the strong convergence of subgradients of a sequence of convex functions defined on a Banach space. The most celebrated is the Attouch theorem (see, for example, [1]), which establishes, on reflexive Banach spaces, the equivalence of the Mosco convergence of lower semicontinuous convex functions and the Painlevé–Kuratowski graph convergence of their subdifferentials. There are also results extending the Attouch theorem to general Banach spaces; see, for example, [2, 3, 4, 5, 6, 7] and the references therein. To the best of our knowledge, all known results are of the form: there exist sequences of points and subgradients such that the strong limits of the sequences of subgradients exist (limits with respect to the norm of the space); see, for example, Theorem 3.1 in [7]. This is inconvenient: we simply want subgradients with a desired property, and the postulate of existence (“there are”) does not allow one to guarantee that the subgradients are as good as needed. This disadvantage can also be observed when the directional derivative is calculated. Namely, whenever the limit is taken over a discrete subset, the difference quotients form a sequence of functions with respect to directions, and it is natural to ask about the convergence of subgradients of these functions; see, for example, Giannessi’s questions, recalled in (5). The question about the existence of a convergent subsequence (at least) for this sequence of functions is the question of the existence of a convergent sequence of subgradients along a direction. In the finite-dimensional case, the existence of convergent subsequences is guaranteed by the continuity of the convex function under investigation. However, there can be subsequences with different limits; see [8], and also [9, 10, 11]. It turns out that the set of “wrong directions” (those along which there is no unique limit) has Lebesgue measure zero; see Lemma 3.1.
In the infinite-dimensional setting, it is hard to expect convergence. Thus, the basic question in this case, concerning the directional convergence of subgradients, is: *when does the union of subdifferentials along a given direction form a relatively compact set (with respect to the norm topology)*? We should also ask about the uniqueness of the limit, which is the essence of Giannessi’s questions in the finite-dimensional setting. In the infinite-dimensional case, results of this type are rather unknown, but it would be convenient to have them at hand. For instance, when the limit exists, the limiting subgradients inherit the properties of a convergent sequence, such as: the size of the norm, membership in a specified closed set, good behavior with respect to the weak convergence of arguments, and so on. In Sect. 3, we present a result which guarantees the relative compactness for some special classes of convex functions; see Theorem 3.1. Examples of functions from this class are provided in Lemmas 2.2 (in the Hilbert space setting) and 3.2 (in the reflexive Banach space setting).
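The directional relative-compactness question can be sketched in symbols; the notation below (the segment parameter \(\mu\) together with the point \(x\) and the direction \(w_0\)) is assumed for illustration, not taken verbatim from the statements that follow.

```latex
% Sketch (notation assumed): for a convex f, a point x, a direction w_0
% and some mu > 0, is the union of subdifferentials along the direction,
\[
  \bigcup_{t \in \left]0,\mu\right]} \partial f(x + t\, w_0)
  \subset X^{*},
\]
% relatively compact with respect to the norm topology of X^* ?
```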

## 2 Preliminaries

In this section, some basic notions and their properties are gathered.

In the sequel, \((X, \Vert \cdot \Vert )\) stands for a real normed space, \(X^*\) for its dual space and \({\mathbb {H}}\) for a real Hilbert space (with a real inner product). The weak convergence is denoted by \({\buildrel weak \over \longrightarrow }\), and the limit from the right is denoted by \(t\downarrow a\), which means that \(t>a\) and \(t\longrightarrow a\).

The closed ball centered at *x* and of radius *r* is denoted by \(\mathbb {B}_X[x,r]:=\{ y\in X \, : \, \, \Vert y-x\Vert \le r \} \), the sphere is denoted by \({\mathbb {S}}_X[x,r]:=\{ y\in X \, : \, \, \Vert y-x\Vert =r \} \) and \({\mathbb {S}}_X:={\mathbb {S}}_X[0,1]\), \(`` \mathrm {cl}\,\)” stands for the topological closure. A point

*x* is an interior point of *D* if there exists an open ball centered at *x* which is completely contained in *D*; the set of all interior points of *D* is denoted by \(\mathrm{int}\,D\). For given \(x,y\in X\) we put \([x,y]:=\{ (1-t)x+ty \, : \, t\in [0,1]\}\). The linear span of a set *M* is denoted by \(\mathrm{span}\,M\), and it is the smallest (in the sense of inclusion) linear subspace of *X* containing *M*. We call the set \(D[x,x+\mu _0\mathbb {B}_X[w_0,\beta _0]]:=\mathrm{conv}\left( \{x\}\cup (x+\mu _0\mathbb {B}_X[w_0,\beta _0])\right) \) a *drop*, where \(\mu _0>0, \beta _0>0\), \(x\in X\) and \(w_0 \in X\setminus \{0\}\) are given; we refer to [13] and references therein for information on drops. For a given function \(f:X\longrightarrow \mathbb {R}\cup \{+\infty \}\) the domain of *f* is defined by \(\mathrm{dom}\,f:=\{ x\in X \, : \, f(x)<+\infty \}\). The *distance function* from the set *S* is denoted by \(d_S(\cdot )\), that is, \(d_S(x):=\inf _{s\in S}\Vert x-s\Vert \).

## Lemma 2.1

Let *W* be a closed subspace of \({\mathbb {H}}\) (thus *W* is a Hilbert space too), let \(S\subset {\mathbb {H}}\) be a nonempty subset and let \(x \not \in \mathrm {cl}\,S\). Suppose that \(y \not \in \mathrm {cl}\,S\), \(\Vert y-x\Vert =d_S(x)\), \(\langle w,x-y\rangle =0\) for all \(w\in W\) and that for some \(t\in ]0,1]\) and all \(u\in {\mathbb {H}}\) such that

## Proof

## Lemma 2.2

Let *W* be a closed subspace of \({\mathbb {H}}\), let \(S\subset {\mathbb {H}}\) be a nonempty subset and let \(x\not \in \mathrm {cl}\,S\), \(h\in {\mathbb {S}}_{{\mathbb {H}}}\), \(S_h\subset S\), \(\mu >0\) be such that \(d_S(x+t_i h)=d_{S_h}(x+t_i h)\) for all \(t_i\in [0,\mu ]\). Suppose that \(y\not \in \mathrm {cl}\,S\), \(\Vert y-x\Vert =d_S(x)\), \(\langle w,x-y\rangle =0\) for all \(w\in W\) and that for some \(t\in ]0,1]\) and all \(u\in {\mathbb {H}}\) such that

## Proof

The subdifferential of *f* at \(x\in X\) is defined by \(\partial f(x):=\{ x^*\in X^* \, : \, \langle x^*, y-x\rangle \le f(y)-f(x) \ \text{for all } y\in X \}\).
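As a toy illustration (not from the paper), the subgradient inequality can be checked numerically for \(f(x)=|x|\) on the real line, whose subdifferential at the origin is the interval \([-1,1]\):

```python
# Illustration (our own example): for f(x) = |x| the subdifferential at 0
# is [-1, 1].  We test the subgradient inequality g*(y - x) <= f(y) - f(x)
# for candidate slopes g over a grid of points y.

def is_subgradient(f, x, g, test_points):
    """Check the subgradient inequality g*(y - x) <= f(y) - f(x)."""
    return all(g * (y - x) <= f(y) - f(x) + 1e-12 for y in test_points)

ys = [i / 10.0 for i in range(-50, 51)]  # grid on [-5, 5]

# Every slope in [-1, 1] is a subgradient of |.| at 0 ...
assert all(is_subgradient(abs, 0.0, g, ys) for g in (-1.0, -0.5, 0.0, 0.5, 1.0))
# ... while slopes outside [-1, 1] violate the inequality at some y.
assert not is_subgradient(abs, 0.0, 1.5, ys)
assert not is_subgradient(abs, 0.0, -1.5, ys)
```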

## 3 Relative Compactness of Sets of Subgradients

Let us recall Giannessi’s questions; see [8], and see also [9, 10, 11] for examples of convex functions in two-dimensional spaces for which the limit in (5) does not exist:

*Let* \(f:\mathbb {R}^n \longrightarrow \mathbb {R}\), *with* \(n\ge 2\), *be a convex function, and set* \(x(t):=t\,w_0\), *where* \(w_0\in \mathbb {R}^n\setminus \{0\}\) *is fixed, with* \(t \in \mathbb {R}\). *Assume that* \( \nabla f(x(t))\) *exists for every* \(t > 0\), *and consider the following limit:*

$$\begin{aligned} \lim _{t \downarrow 0} \nabla f(x(t)). \end{aligned}$$
(5)

*We conjecture that the above limit may not exist. However, the question is still open. The above question can be generalized in several ways. For instance,* *x*(*t*) *may represent a curve having the origin as endpoint instead of a ray;* \(\mathbb {R}^n\) *may be replaced with an infinite-dimensional space.*
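For a convex function that is differentiable at the origin itself, the limit in question exists along every ray by continuity of the gradient; a small numerical sketch (the function \(f(x_1,x_2)=e^{x_1}+x_2^2\) and the ray are our own choices, not Giannessi’s example, which concerns functions nonsmooth at the origin):

```python
import math

# Hypothetical smooth example: f(x1, x2) = exp(x1) + x2**2 is convex on R^2
# and differentiable everywhere, with grad f(x1, x2) = (exp(x1), 2*x2).
# Along the ray x(t) = t*w0 the gradients converge to grad f(0) = (1, 0)
# as t decreases to 0, so the limit of type (5) exists in this benign case.

def grad_f(x1, x2):
    return (math.exp(x1), 2.0 * x2)

w0 = (1.0, 1.0)
gradients = [grad_f(t * w0[0], t * w0[1]) for t in (1e-1, 1e-3, 1e-6, 1e-9)]

g = gradients[-1]  # gradient at t = 1e-9, close to grad f(0) = (1, 0)
assert abs(g[0] - 1.0) < 1e-8 and abs(g[1]) < 1e-8
```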

Below, the directions along which the weak\(^*\) convergence of subgradients holds are indicated.

## Lemma 3.1

*x* along \(w_0\) is finite) such that

## Proof

*p* defined in Lemma 2.2; see (4). Observe that the continuity of the Asplund function ensures that (11) is fulfilled for all \(\rho >0\) and all sequences \(\{t_i\}_{i\in \mathbb {N}}\) such that \(t_i >0\) for all \(i\in \mathbb {N}\) and \(t_i \downarrow 0\), whenever we put \(p(0,\cdot )=0\). Below it is shown that weakly continuous convex functions can also be used to construct functions from this class.

## Lemma 3.2

Assume that *f* is \(M_0\)-Lipschitz continuous on the set \(D[x,x+\mu _0\mathbb {B}_X[h,\beta _0]]\), where \(x\in \mathrm{dom}\,f\), \(h\in {\mathbb {S}}_X\), \(M_0\ge 0\), \(\mu _0>0\) and \(\beta _0>0\) are given. If \(Y\subset X\) is a subspace such that

*f* is weakly continuous on the set \(D[x,x+\mu _0\mathbb {B}_Y[h,\beta _0]]\), and the function *p* defined on \([0,\infty [\times \mathbb {B}_Y[0,\rho ]\), with \(\rho \in ]0,\beta _0]\), as follows

## Proof

*f* we have

Suppose that *Y* is a closed subspace of *X*. If there exists a closed subspace \(Z\subset X\) such that \(X=Y+Z\) and \(Y\cap Z=\{0\}\), then *Y* is said to be complemented in *X*. In this case, *X* is said to be the direct sum of *Y* and *Z*, and we write \(X=Y\oplus Z\). Whenever *Y* has a finite codimension, *Y* is complemented; see, for example, Lemma 4.21 in [17] or Definition 4.1 and Theorem 5.5 in [12].
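For comparison, in the Hilbert space setting of Sect. 2 the complementation assumption is automatic; we recall the standard fact for convenience.

```latex
% Every closed subspace Y of a Hilbert space H is complemented
% by its orthogonal complement:
\[
  \mathbb{H} = Y \oplus Y^{\perp},
  \qquad
  Y^{\perp} := \{\, z \in \mathbb{H} \,:\, \langle z, y \rangle = 0
               \ \text{for all } y \in Y \,\}.
\]
```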

## Theorem 3.1

Assume that *f* is \(M_0\)-Lipschitz continuous on the set \(D[x,x+\mu _0\mathbb {B}_X[w_0,\beta _0]]\). Suppose that for a given sequence \(\{t_i\}_{i\in \mathbb {N}}\) such that \(t_i \in ]0,\mu _0[\) for all \(i\in \mathbb {N}\) and \(t_i \downarrow 0\), there exist positive numbers \(\alpha ^n_i \in ]0,1]\), where \(i,n\in \mathbb {N}\), and a closed subspace \(Y\subset X\) with a finite codimension, that is, \(X=Y\oplus Z\), where *Z* is a vector space with a finite dimension, and a \(G_{\delta }\) (a countable intersection of open sets) dense subset \(B\subset \mathbb {B}_Y[0, \beta _0]\) for which there are functions \(p_n\in {\mathbb {F}}(\{t_i\}_{i\in \mathbb {N}}, \beta _0, Y)\), for all \(n\in \mathbb {N}\), such that

*X* and to 0 on *Y*.

## Proof

There are sequences \(\{t_i\}_{i\in \mathbb {N}}\) such that \(\mu _0>t_i>0\) for all \(i\in \mathbb {N}\), \(t_i\downarrow 0\), and \(\{x_i^*\}_{i\in \mathbb {N}} \) such that \(x_i^*\in \partial f(x+t_iw_0)\) for all \(i\in \mathbb {N}\), and (14) is fulfilled for a sequence of functions \(\{p_n\}_{n\in \mathbb {N}}\) from \({\mathbb {F}}(\{t_i\}_{i\in \mathbb {N}}, \beta _0, Y)\) and positive numbers \(\{\alpha ^n_i\}_{i,n\in \mathbb {N}}\) from ]0, 1].

*f* and the upper semicontinuity of \(p_n\))

*Y*, say that \(\{x^*_{i_k}\}_{k\in \mathbb {N}}\) is the subsequence. We recall that \(X=Y\oplus Z\), *Y* is a closed subspace of *X* and *Z* is a finite-dimensional subspace. In order to get the strong convergence of some subsequence of \(\{x^*_{i_k}\}_{k\in \mathbb {N}}\) on *X*, it is enough to observe that \(\{x^*_{i_k}\}_{k\in \mathbb {N}}\) has a convergent subsequence on *Z*, since the dimension of *Z* is finite. Having the strong convergence on *Y* and *Z*, we have the convergence on *X*, since \(X=Y\oplus Z\). \(\square \)

## 4 Conclusions

- 1.
It is shown that, for some convex functions and some directions, it is possible to find a convergent sequence of subgradients along a direction; namely, the subgradients belong to subdifferentials at points of some segment (a convergent sequence of points) and form a convergent sequence of functionals; see Theorem 3.1.

- 2.
Examples of convex functions and directions to which Theorem 3.1 can be applied are delivered; see Lemmas 2.2 and 3.2.

- 3.
In Lemma 3.1, an answer to the question *we ask for conditions under which the limit* (5) *exists* is provided. In fact, under the assumptions of Lemma 3.1 we obtain not only the existence of the limit, even for subgradients; due to the Rademacher theorem, the set of directions along which the limit in (5) exists is a set of full measure in the finite-dimensional setting, and it is a dense \(G_{\delta }\) subset in a weak Asplund space, whenever the weak convergence is postulated instead of the strong one. Thus, Theorem 3.1, together with Lemma 3.1, gives an answer to Giannessi’s question in the infinite-dimensional setting.

## References

- 1. Attouch, H.: Variational Convergence for Functions and Operators. Pitman Advanced Publishing Program, Boston (1984)
- 2. Beer, G., Théra, M.: Attouch–Wets convergence and a differential operator for convex functions. Proc. Am. Math. Soc. **122**(3), 851–860 (1994)
- 3. Combari, C., Thibault, L.: On the graph convergence of subdifferentials of convex functions. Proc. Am. Math. Soc. **126**(8), 2231–2240 (1998)
- 4. Zagrodny, D.: On the weak\(^*\) convergence of subdifferentials of convex functions. J. Convex Anal. **12**(1), 213–219 (2005)
- 5. Zagrodny, D.: Minimizers of the limit of Mosco converging functions. Arch. Math. **85**, 440–445 (2005)
- 6. Zagrodny, D.: A weak\(^*\) approximation of subgradient of convex function. Control Cybernet. **36**(3), 793–802 (2007)
- 7. Zagrodny, D.: Convergences of subgradients of sequences of convex functions. Nonlinear Anal. **84**, 84–90 (2013)
- 8. Giannessi, F.: A problem on convex functions. J. Optim. Theory Appl. **59**, 525 (1988)
- 9. Pontini, C.: Solving in the affirmative a conjecture about a limit of gradients. J. Optim. Theory Appl. **70**, 623–629 (1991)
- 10. Rockafellar, R.T.: On a special class of convex functions. J. Optim. Theory Appl. **70**, 619–621 (1991)
- 11. Zagrodny, D.: An example of bad convex function. J. Optim. Theory Appl. **70**, 631–638 (1991)
- 12. Fabian, M., Habala, P., Hájek, P., Montesinos, V., Zizler, V.: Banach Space Theory. The Basis for Linear and Nonlinear Analysis. Springer, New York (2011)
- 13. Correa, R., Gajardo, P., Thibault, L., Zagrodny, D.: Existence of minimizers on drops. SIAM J. Optim. **23**, 1154–1166 (2013)
- 14. Zagrodny, D.: On closures of preimages of metric projection mappings in Hilbert spaces. Set Valued Var. Anal. **23**, 581–612 (2015)
- 15. Asplund, E.: Čebyšev sets in Hilbert spaces. Trans. Am. Math. Soc. **144**, 235–240 (1969)
- 16. Yosida, K.: Functional Analysis, 6th edn. Springer, Berlin (1980)
- 17. Rudin, W.: Functional Analysis. McGraw-Hill, New York (1973)

## Copyright information

**Open Access**This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.