1 Introduction

The concepts of approximate convexity for extended real-valued functions include, among others, γ-paraconvexity [3, 4, 7, 8], γ-semiconcavity [1], α-paraconvexity, strong α-paraconvexity [9], semiconcavity [1], and approximate convexity [6]. Relations between these concepts were investigated by Rolewicz [7,8,9], Daniilidis and Georgiev [2], and Tabor and Tabor [11]. These concepts were used, e.g., in [1], to investigate Hamilton–Jacobi equations. In a series of papers [7,8,9], Rolewicz investigated Gâteaux and Fréchet differentiability of strongly α-paraconvex functions, generalizing in this way the Mazur theorem (1933).

Generalizations of the above concepts to vector-valued mappings with values in a general vector space Y were given by Veselý and Zajíček [13,14,15,16], Valadier [12], and Rolewicz [10]. In the paper [10], Rolewicz defined vector-valued strongly α-k paraconvex mappings and investigated their Gâteaux and Fréchet differentiability, where k ∈ K and K is a closed convex cone in a normed vector space Y.

Let \(\alpha :\mathbb {R}_{+}\rightarrow \mathbb {R}_{+}\) be a nondecreasing function satisfying the condition

$$\lim\limits_{t\rightarrow 0^{+}}\frac{\alpha(t)}{t}= 0. $$
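
For instance, every power function α(t) = tγ with γ > 1 (in particular α(t) = t2, which appears in Example 1 below) satisfies this condition, since

$$\lim\limits_{t\rightarrow 0^{+}}\frac{t^{\gamma}}{t}=\lim\limits_{t\rightarrow 0^{+}}t^{\gamma-1}= 0. $$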

Let X be a normed space and let k ∈ K. The mapping F : X → Y is strongly α-k paraconvex on a convex subset A of X if there exists a constant C > 0 such that for every x1, x2 ∈ A and every λ ∈ [0,1]

$$ F(\lambda x_{1}+(1-\lambda)x_{2})\le_{K} \lambda F(x_{1})+(1-\lambda) F(x_{2})+C\min\{\lambda, 1-\lambda\}\alpha(\|x_{1}-x_{2}\|)k, $$
(1)

where x ≤K y ⇔ y − x ∈ K. In the sequel, we use the notation ≤ if the cone K is clear from the context.
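
In particular, for \(Y=\mathbb {R}\), \(K=\mathbb {R}_{+}\), and k = 1, condition (1) applied to a function f : X → ℝ becomes

$$f(\lambda x_{1}+(1-\lambda)x_{2})\le \lambda f(x_{1})+(1-\lambda) f(x_{2})+C\min\{\lambda, 1-\lambda\}\alpha(\|x_{1}-x_{2}\|), $$

which is the strong α(⋅)-paraconvexity of real-valued functions mentioned in the Introduction.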

The mapping F : X → Y is strongly α-K paraconvex on a convex subset A of X if for every k ∈ K there exists a constant C > 0 such that for every x1, x2 ∈ A and every λ ∈ [0,1]

$$ F(\lambda x_{1}+(1-\lambda)x_{2})\le_{K} \lambda F(x_{1})+(1-\lambda) F(x_{2})+C\min\{\lambda, 1-\lambda\}\alpha(\|x_{1}-x_{2}\|)k. $$

A strongly α (⋅)-K paraconvex mapping F is called strongly cone-paraconvex if the cone K and the function α are clear from the context. Since for every λ ∈ [0,1]

$$\lambda(1-\lambda)\le\min\{\lambda, 1-\lambda\}\le 2 \lambda(1-\lambda) $$

condition (1) can be equivalently rewritten as

$$ F(\lambda x_{1}+(1-\lambda)x_{2})\le_{K} \lambda F(x_{1})+(1-\lambda) F(x_{2})+ 2C\lambda(1-\lambda)\alpha(\|x_{1}-x_{2}\|)k. $$
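
A quick way to verify the bounding inequality between λ(1 − λ) and 2λ(1 − λ) used above: by symmetry, it suffices to consider λ ≤ 1/2, in which case

$$\lambda(1-\lambda)\le \lambda=\min\{\lambda, 1-\lambda\}\le 2\lambda(1-\lambda), $$

where the first inequality holds since 1 − λ ≤ 1 and the last one since 1 − λ ≥ 1/2.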

Strong cone-paraconvexity generalizes cone convexity. The mapping F : X → Y is K-convex on a convex subset A of X if for every x1, x2 ∈ A and every λ ∈ [0,1]

$$ F(\lambda x_{1}+(1-\lambda)x_{2})\le_{K} \lambda F(x_{1})+(1-\lambda) F(x_{2}). $$

In the present paper, we investigate the existence of directional derivatives for strongly cone-paraconvex mappings. Our main result (Theorem 2) is a generalization of the theorem of Valadier [12] concerning directional differentiability of cone convex mappings.

2 Preliminary Facts

Let Y∗ be the dual space of Y and let K∗ ⊂ Y∗ be the positive dual cone to K,

$$K^{\ast}:=\{y^{\ast}\in Y^{\ast}~ |~ y^{\ast}(y)\ge 0~ \forall~ y\in K\}. $$

Clearly, if F is a strongly α (⋅)-k paraconvex mapping with constant C > 0, then for every y∗ ∈ K∗, the function y∗ ∘ F is a strongly α (⋅)-paraconvex function with the constant C y∗(k).
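
To see this, apply y∗ to both sides of (1): since y∗ is linear and nonnegative on K, the inclusion in (1) yields

$$y^{\ast}(F(\lambda x_{1}+(1-\lambda)x_{2}))\le \lambda\, y^{\ast}(F(x_{1}))+(1-\lambda)\, y^{\ast}(F(x_{2}))+C\min\{\lambda, 1-\lambda\}\alpha(\|x_{1}-x_{2}\|)\, y^{\ast}(k). $$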

In a normed space Y, a cone K is normal (see [12]) if there is a number C > 0 such that

$$0\le_{K} x\le_{K} y \Rightarrow \|x\|\le C \|y\|\quad \text{ for all } x, y \in Y. $$

Every normal cone is pointed, i.e., K ∩ (−K) = {0}.
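
A standard example: in \(Y=\mathbb {R}^{n}\) with the Euclidean norm, the cone \(K=\mathbb {R}^{n}_{+}\) is normal with constant C = 1, since

$$0\le_{K} x\le_{K} y \Rightarrow 0\le x_{i}\le y_{i}\ (i = 1,\dots,n) \Rightarrow \|x\|\le\|y\|. $$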

In [13], Veselý and Zajíček introduced the concept of d.c. (delta-convex) mappings acting between Banach spaces X and Y. A mapping F : X → Y is d.c. if there exists a continuous convex function \(g:X\rightarrow \mathbb {R}\) such that for every y∗ ∈ Y∗ the function y∗ ∘ F + g is a d.c. function, i.e., it is representable as a difference of two convex functions.

According to [15], F is order d.c. if F is representable as a difference of two cone convex mappings on A. Consequently, if the cone K is normal, then F is also weakly order d.c.

Moreover, if the range space Y of an order d.c. mapping F is ordered by a well-based cone K (and this is true for L1(μ)), it is easy to show (see Proposition 4.1 of [15]) that the mapping is then d.c.

In the example below, we show that any strongly ∥⋅∥2-k0-paraconvex mapping is order d.c.

Example 1

Let X be a Hilbert space. A mapping F : X → Y is strongly ∥⋅∥2-k0-paraconvex with constant C ≥ 0 on a convex set A if and only if the mapping F + C∥⋅∥2k0 is K-convex on A. Indeed, let x1, x2 ∈ X. Since

$$ \lambda \|x_{1}\|^{2} + (1-\lambda) \|x_{2}\|^{2} - \|\lambda x_{1} + (1-\lambda) x_{2}\|^{2}= \lambda(1-\lambda) \|x_{1}-x_{2}\|^{2} $$

and

$$F(\lambda x_{1}+(1-\lambda)x_{2})\le_{K} \lambda F(x_{1})+(1-\lambda) F(x_{2})+{C}\lambda(1-\lambda)\|x_{1}-x_{2}\|^{2}k_{0} $$

we have

$$\begin{array}{@{}rcl@{}} &&F(\lambda x_{1}+(1-\lambda)x_{2}) +C\|\lambda x_{1} + (1-\lambda) x_{2}\|^{2}k_{0}\\ &&\qquad \le_{K} \lambda F(x_{1})+(1-\lambda) F(x_{2})+C(\lambda \|x_{1}\|^{2} + (1-\lambda) \|x_{2}\|^{2})k_{0}. \end{array} $$

The mapping F, being the difference of the K-convex mappings F (⋅) + C∥⋅∥2k0 and C∥⋅∥2k0, is clearly order d.c. Furthermore, if K is well based (there exists y∗ ∈ Y∗ such that y∗(k) ≥ ∥k∥ for all k ∈ K), then F is d.c.
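
For completeness, the norm identity used at the beginning of this example is the standard Hilbert-space computation, where ⟨⋅,⋅⟩ denotes the inner product:

$$\lambda \|x_{1}\|^{2} + (1-\lambda) \|x_{2}\|^{2} - \|\lambda x_{1} + (1-\lambda) x_{2}\|^{2}= \lambda(1-\lambda)\left(\|x_{1}\|^{2}-2\langle x_{1},x_{2}\rangle+\|x_{2}\|^{2}\right)=\lambda(1-\lambda) \|x_{1}-x_{2}\|^{2}. $$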

For d.c. mappings, we have the following result on the existence of directional derivative.

Theorem 1 (Proposition 3.1 of [13])

Let X be a normed linear space and let Y be a Banach space. Let G ⊂ X be an open convex set and let F : G → Y be a d.c. mapping. Then, the directional derivative F′(x0; h) exists whenever x0 ∈ G and h ∈ X.

Let us observe that if the function α (⋅) is not convex, then we cannot expect a strongly α (⋅)-k0 paraconvex mapping F to be d.c.

3 Monotonicity of Difference Quotients

Let X be a normed space. Let Y be a topological vector space and let K ⊂ Y be a closed convex pointed cone.

For K-convex mappings, the difference quotient is nondecreasing in the sense that

$$\phi(t_{1})-\phi(t_{2}):=\frac{F(x_{0}+t_{1}h)-F(x_{0})}{t_{1}}-\frac{F(x_{0}+t_{2}h)-F(x_{0})}{t_{2}}\in K \quad \text{for } t_{1} \ge t_{2}. $$
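
This monotonicity is the standard consequence of K-convexity: for 0 < t2 ≤ t1, writing x0 + t2h = (t2/t1)(x0 + t1h) + (1 − t2/t1)x0 and applying the K-convexity inequality gives

$$F(x_{0}+t_{2}h)\le_{K}\frac{t_{2}}{t_{1}}F(x_{0}+t_{1}h)+\left(1-\frac{t_{2}}{t_{1}}\right)F(x_{0}), \quad\text{hence}\quad \frac{F(x_{0}+t_{2}h)-F(x_{0})}{t_{2}}\le_{K}\frac{F(x_{0}+t_{1}h)-F(x_{0})}{t_{1}}. $$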

For strongly α (⋅)-K paraconvex and strongly α (⋅)-k0 paraconvex mappings, the difference quotient may not be nondecreasing.

Example 2

Let \(Y=\mathbb {R}\), \(K=\mathbb {R}_{+}\), α (x) = x2 and let F (x) = −x2. The mapping F is strongly α (⋅)-K-paraconvex. Observe that for any \(x_{1}, x_{2}\in \mathbb {R}\), we have \(t({x^{2}_{1}}+{x_{2}^{2}})-2t(x_{1}x_{2})\le 0\) if and only if t ≤ 0. Hence, for t = −λ2 + λ − 1 ≤ 0, we have

$$\begin{array}{@{}rcl@{}} &(-\lambda^{2}+\lambda-1)({x_{1}^{2}}+{x_{2}^{2}})-2x_{1}x_{2}(-\lambda^{2}+\lambda-1)\le 0,&\\ &{x_{1}^{2}}(-\lambda^{2}+\lambda-1)+x_{1}x_{2}(-2\lambda(1-\lambda)+ 2)+{x_{2}^{2}}(-(1-\lambda)^{2}+ 1-\lambda-1)\le 0,&\\ &-(\lambda x_{1} +(1-\lambda)x_{2})^{2}\le -\lambda {x_{1}^{2}}-(1-\lambda){x_{2}^{2}}+(x_{1}-x_{2})^{2},&\\ &F(\lambda x_{1} + (1-\lambda)x_{2})\le \lambda F(x_{1}) + (1-\lambda) F(x_{2}) + (x_{1}-x_{2})^{2}.& \end{array} $$

The last inequality and Proposition 2.1 from [5] give us paraconvexity of the mapping F.

Let x0 = 0, h = 1. The difference quotient \(\phi (t)=\frac {F(x_{0}+th)-F(x_{0})}{t}\) is decreasing. Indeed, for t1 > t2 > 0, we have ϕ (t1) = −t1 < −t2 = ϕ (t2).
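
As a quick numerical check of the last inequality of the example, take for instance x1 = 0, x2 = 1 and λ = 1/2:

$$F\left(\tfrac{1}{2}\right)=-\tfrac{1}{4}\le \tfrac{1}{2}F(0)+\tfrac{1}{2}F(1)+(0-1)^{2}=-\tfrac{1}{2}+1=\tfrac{1}{2}. $$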

The following two propositions are basic tools for the proof of the main result in the next section. In the proposition below, we investigate the monotonicity properties of the α (⋅)-difference quotients for strongly α (⋅)-k paraconvex mappings.

Proposition 1

Let X be a normed space and let Y be a vector space ordered by a convex pointed cone K. Let F : X → Y be strongly α (⋅)-k0 paraconvex on a convex set A ⊂ X with constant C ≥ 0, k0 ∈ K ∖{0}. For any x0 ∈ A and any h ∈ X, ∥h∥ = 1, such that x0 + th ∈ A for all t sufficiently small, the α (⋅)-difference quotient mapping \(\phi :\mathbb {R}\rightarrow Y\) defined as

$$ \phi(t):=\frac{F(x_{0}+th)-F(x_{0}+t_{0}h)}{t-t_{0}} + C\frac{\alpha(t-t_{0})}{t-t_{0}}k_{0} \quad \text{ for } t_{0}<t, $$
(2)

where \(t_{0}\in \mathbb {R}\), is α (⋅)-nondecreasing in the sense that

$$ \phi(t)-\phi(t_{1})+C\frac{\alpha(t_{1}-t_{0})}{t_{1}-t_{0}}k_{0}\in K\quad \text{ for }t_{0}<t_{1}<t. $$
(3)

Proof

Take any t0 < t1 < t. We have \(0<\lambda :=\frac {t_{1}-t_{0}}{t-t_{0}}<1\) and

$$x_{0}+t_{1}h = \lambda(x_{0}+th)+(1-\lambda)(x_{0}+t_{0}h). $$

Let k0 ∈ K ∖{0}. Since F is strongly α (⋅)-k0 paraconvex with constant C ≥ 0, we have

$$\begin{array}{@{}rcl@{}} F(x_{0}+t_{1}h) &\le_{K}&\lambda F(x_{0}+th)+(1-\lambda)F(x_{0}+t_{0}h)\\ &&+C\min\{\lambda, 1-\lambda\}\alpha(t-t_{0})k_{0}. \end{array} $$

Hence,

$$\begin{array}{@{}rcl@{}} 0&\le_{K}&\lambda[F(x_{0}+th)-F(x_{0}+t_{0}h)]-[F(x_{0}+t_{1}h) - F(x_{0}+t_{0}h)]\\ &&+C\min\{\lambda, 1-\lambda\}\alpha(t-t_{0})k_{0}, \end{array} $$

i.e.,

$$\begin{array}{@{}rcl@{}} &&\left[\frac{F(x_{0}+th)-F(x_{0}+t_{0}h)}{t-t_{0}}\right] - \left[\frac{F(x_{0}+t_{1}h)-F(x_{0}+t_{0}h)}{t_{1}-t_{0}}\right]\\ &&+ C\min\{\lambda, 1-\lambda\}\frac{\alpha(t-t_{0})}{t_{1}-t_{0}}k_{0}\in K. \end{array} $$

We have

  1. (i)

    If λ ≤ 1 − λ, i.e., 2(t1 − t0) ≤ t − t0, then

    $$ \min\{\lambda, 1-\lambda\} \frac{\alpha(t-t_{0})}{t_{1}-t_{0}}= \frac{\alpha(t-t_{0})}{t-t_{0}}. $$
  2. (ii)

    If λ > 1 − λ, i.e., \(\frac {t_{1}-t_{0}}{t-t_{0}}> \frac {t-t_{1}}{t-t_{0}}\), then

    $$ \min\{\lambda, 1-\lambda\} \frac{\alpha(t-t_{0})}{t_{1}-t_{0}}=\frac{t-t_{1}}{t-t_{0}}\frac{\alpha(t-t_{0})}{t_{1}-t_{0}}<\frac{\alpha(t-t_{0})}{t-t_{0}}. $$

In both cases, since k0 ∈ K, the coefficient of k0 in the last inclusion may be increased to Cα(t − t0)/(t − t0) without leaving K, and hence

$$\begin{array}{@{}rcl@{}} &&\left[\frac{F(x_{0}+th)-F(x_{0}+t_{0}h)}{t-t_{0}}\right] - \left[\frac{F(x_{0}+t_{1}h)-F(x_{0}+t_{0}h)}{t_{1}-t_{0}}\right]\\ &&+ C\frac{\alpha(t-t_{0})}{t-t_{0}}k_{0}-C\frac{\alpha(t_{1}-t_{0})}{t_{1}-t_{0}}k_{0}+C\frac{\alpha(t_{1}-t_{0})}{t_{1}-t_{0}}k_{0}\in K. \end{array} $$

The element above equals ϕ(t) − ϕ(t1) + Cα(t1 − t0)/(t1 − t0)k0, which gives inclusion (3). □

If int K ≠ ∅, then any strongly α (⋅)-k0 paraconvex mapping F is strongly α (⋅)-K paraconvex and for any k ∈ K the α (⋅)-difference quotients satisfy formula (3) with different constants C, and in general, one cannot find a single constant C for all 0 ≠ k ∈ K.

In the proposition below, we investigate the boundedness of α (⋅)-difference quotient for strongly α (⋅)-k paraconvex mappings.

Proposition 2

Let X be a normed space. Let Y be a topological vector space and let Y be ordered by a closed convex pointed cone K. Let F : X → Y be strongly α (⋅)-k0 paraconvex on a convex set A ⊂ X with constant C ≥ 0, k0 ∈ K ∖{0}.

For any x0 ∈ A and any h ∈ X, ∥h∥ = 1, such that x0 + th ∈ A for all t sufficiently small, the α (⋅)-difference quotient mapping ϕ : (0, +∞) → Y,

$$ \phi(t):=\frac{F(x_{0}+th)-F(x_{0})}{t} + C\frac{\alpha(t)}{t}k_{0} $$

is bounded from below in the sense that there are an element a ∈ Y and δ > 0 such that

$$ \phi(t)-a\in K\quad \text{ for } 0<t<\delta. $$
(4)

Proof

Let us take t0 = −t, t1 = 0. From inclusion (3), we have

$$ \frac{F(x_{0}+th)-F(x_{0}-th)}{2t} + C\frac{\alpha(2t)}{2t}k_{0}- \frac{F(x_{0})-F(x_{0}-th)}{t} - C\frac{\alpha(t)}{t}k_{0}+C\frac{\alpha(t)}{t}k_{0} \in K. $$

Multiplying by 2t > 0, we get

$$F(x_{0}+th)-F(x_{0}-th)+C{\alpha(2t)}k_{0}-2F(x_{0})+ 2F(x_{0}-th)\in K. $$

By simple calculations, we get

$$\frac{F(x_{0}+th)-F(x_{0})}{t}+\frac{F(x_{0}-th)-F(x_{0})}{t}+ 2C\frac{\alpha(2t)}{2t}k_{0}\in K. $$

Since \(\lim\limits_{t \rightarrow 0^{+}}\frac {\alpha (t)}{t}= 0\), there exists δ > 0 such that \(2C\frac {\alpha (2t)}{2t} \le 1\) for t ∈ (0,δ). We have

$$ \frac{F(x_{0}+th)-F(x_{0})}{t}+k_{0}\ge_{K} -\frac{F(x_{0}-th)-F(x_{0})}{t}. $$
(5)

Now, let us take − 1 < −t < 0. We have

$$x_{0}-th=t\underbrace{(x_{0}-h)}_{x_{1}}+(1-t)\underbrace{x_{0}}_{x_{2}}. $$

From the α (⋅)-k0 paraconvexity (1) for λ := t, we get

$$F(x_{0}-th)\le_{K} tF(x_{0}-h)+(1-t)F(x_{0})+C\min\{t, 1-t\}\alpha(1)k_{0} $$

By simple calculation, we get

$$-\frac{F(x_{0}-th)-F(x_{0})}{t}- F(x_{0})+F(x_{0}-h)+C\frac{\min\{t, 1-t\}}{t}\alpha(1)k_{0}\in K. $$

Since \(\frac {\min \{t, 1-t\}}{t}=\frac {1-|2t-1|}{2t}\le 1\), we get

$$-\frac{F(x_{0}-th)-F(x_{0})}{t}- F(x_{0})+F(x_{0}-h)+ C\alpha(1)k_{0}\in K. $$


From (5), we get

$$\frac{F(x_{0}+th)-F(x_{0})}{t}-b\ge_{K} 0, $$

where b := F (x0) − F (x0h) − (Cα (1) + 1) k0. Finally,

$$\phi(t) - b \ge_{K} 0\quad \text{ for } 0<t<\delta, $$

i.e., (4) holds with a := b. □

4 Main Result

The proof of the main theorem is based on the following lemma.

Lemma 1

Let Y be a Banach space. Let K ⊂ Y be a closed convex normal cone. Let \({\Phi }: \mathbb {R}_{+} \rightarrow Y\) satisfy the following conditions:

  1. (i)

    Φ(t) ∈ K for any \(t \in \mathbb {R}_{+}\),

  2. (ii)

    for 0 < t1 < t we have \({\Phi }(t)-{\Phi }(t_{1}) + \frac {\alpha (t_{1})}{t_{1}}k_{0}\in K\) for some k0 ∈ K,

  3. (iii)

    Φ(t) is weakly convergent to 0 when t → 0+.

Then, ∥Φ(t)∥ → 0 when t → 0+.

Proof

Suppose, by contradiction, that (i) and (ii) hold but \(\|{\Phi }(t)\|\nrightarrow 0\) when t → 0+; we will obtain a contradiction with (iii). Then there is ε > 0 such that for all δ > 0 one can find 0 < t < δ with ∥Φ(t)∥ > ε. In particular, for \(\delta _{n}=\frac {1}{n}\), there exist \(t_{n}\in (0,\frac {1}{n})\), \(n\in \mathbb {N}\), such that

$$ \|{\Phi}(t_{n})\|>\varepsilon. $$
(6)

Let \(x\in A:=\text {co}({\Phi }(t_{n}), n\in \mathbb {N})\). There are positive numbers λ1, λ2, …, λm and t1, t2, …, tm (chosen among the tn) such that \(x={\sum }_{i = 1}^{m} \lambda _{i} {\Phi }(t_{i})\), where \({\sum }_{i = 1}^{m}\lambda _{i}= 1\). Since tn → 0, there exists \(N\in \mathbb {N}\) such that tn < min{t1, …, tm} for all n > N; hence, by (ii), we have

$$\begin{array}{@{}rcl@{}} {\Phi}(t_{1})-{\Phi}(t_{n}) + \frac{\alpha(t_{n})}{t_{n}}k_{0}&\in &K, \\ {\Phi}(t_{2})-{\Phi}(t_{n})+ \frac{\alpha(t_{n})}{t_{n}}k_{0}&\in &K,\\ &\vdots& \\ {\Phi}(t_{m})-{\Phi}(t_{n})+ \frac{\alpha(t_{n})}{t_{n}}k_{0}&\in &K. \end{array} $$

Multiplying the i-th inclusion by λi and summing over i = 1, …, m, we get

$$x-{\Phi}(t_{n})+ \frac{\alpha(t_{n})}{t_{n}} k_{0}\in K\quad \text{ for all } n>N. $$

From the fact that Φ(tn) ∈ K, the inclusion above, and the normality of K, there is some c > 0 such that \(\|{\Phi }(t_{n})\|\le c \|x+ \frac {\alpha (t_{n})}{t_{n}}k_{0}\|\). By (6), we obtain \(\|x+ \frac {\alpha (t_{n})}{t_{n}}k_{0}\| > \beta := \frac {\varepsilon }{c}\) for all x ∈ A and n > N.

We show that

$$\mathbb{B}_{\beta/2} \cap (A+k_{0}[0,s])=\emptyset $$

for s > 0 satisfying \(\frac {\alpha (t_{n})}{t_{n}}\le s\) for all n > N, where \(\mathbb {B}_{r}:=\{y\in Y: \|y\|\le r \}\). To see this, take any x ∈ A and ℓ ∈ (0,s]. Since \(\lim _{n\rightarrow +\infty }\frac {\alpha (t_{n})}{t_{n}}= 0\), there exists \(n\in \mathbb {N}\), n > N, such that

$$0\le_{K} x+ \frac{\alpha(t_{n})}{t_{n}}k_{0}\le_{K} x+\ell k_{0}. $$

By (6) and the normality of K,

$$\beta/2< \| x+ \frac{\alpha(t_{n})}{t_{n}}k_{0}\|\le\| x+\ell k_{0}\|. $$

By the Hahn–Banach separation theorem applied to \(\mathbb {B}_{\beta /2}\) and (A + k0[0,s]), there are a continuous linear functional y∗ ∈ Y∗ and r > 0 such that

$$y^{\ast}(x+ \ell k_{0}) > r \quad \text{ for all } x+\ell k_{0}\in A+k_{0}[0,s]. $$

In particular, \(y^{\ast }({\Phi }(t_{n})+ \frac {\alpha (t_{n})}{t_{n}}k_{0})>r>0\), which contradicts (iii). □

We are in a position to prove our main result.

Theorem 2

Let X be a normed space. Let Y be a weakly sequentially complete Banach space ordered by a closed convex normal cone K. Let F : X → Y be strongly α (⋅)-k0 paraconvex on a convex set A ⊂ X with constant C ≥ 0, k0 ∈ K ∖{0}. Then, the directional derivative

$$F^{\prime}(x_{0};h):=\lim\limits_{t\rightarrow 0^{+}}\frac{F(x_{0}+th)-F(x_{0})}{t} $$

of F at x0 exists for any x0 ∈ A and any direction 0 ≠ h ∈ X, ∥h∥ = 1, such that x0 + th ∈ A for all t sufficiently small.

Proof

Let x0 ∈ A and let 0 ≠ h ∈ X, ∥h∥ = 1, be such that x0 + th ∈ A for all t sufficiently small. Let tn ↓ 0. For t0 = 0, the α (⋅)-difference quotient (2) takes the form

$$\phi(t_{n})= \frac{F(x_{0}+t_{n}h)-F(x_{0})}{t_{n}} + C\frac{\alpha(t_{n})}{t_{n}}k_{0}. $$

Let y∗ ∈ K∗. By (4), the sequence an := y∗(ϕ (tn)), \(n\in \mathbb {N}\), is bounded from below, i.e.,

$$ a_{n}\ge a:=y^{\ast}(b) \quad \text{for all \textit{n} sufficiently large}, $$

where b ∈ Y is the element whose existence is guaranteed by (4).

Let us take ε > 0. There is N such that

$$ a_{N}< \underline{a} + \frac{\varepsilon}{2}, $$
(7)

where \(\underline {a}:=\inf \{a_{n}: n\in \mathbb {N}\}\), which is finite since {an} is bounded from below. Since {tn} is decreasing, from (3), we get

$$ a_{N}-a_{n} + C\frac{\alpha(t_{n})}{t_{n}}y^{\ast}(k_{0})\ge 0\quad \text{ for } n> N. $$
(8)

Let \(b_{n}:= C\frac {\alpha (t_{n})}{t_{n}}y^{\ast }(k_{0})\). Since bn → 0, there is N1 such that \(b_{n}\le \frac {\varepsilon }{2}\) for n > N1.

From (7) and (8), we get

$$\underline{a} - \varepsilon < \underline{a} \le a_{n} \le a_{N} +b_{n} \le \underline{a} +\frac{\varepsilon}{2} +b_{n}\le \underline{a}+\varepsilon\quad \text{ for } n>\max\{N,N_{1}\}. $$

Hence, the sequence {an} is convergent and, consequently, the sequence {y∗(ϕ (tn))} is Cauchy for every y∗ ∈ K∗.

Let us take any h∗ ∈ Y∗. We show that the sequence {h∗(ϕ (tn))} is Cauchy. From the fact that K is normal, we have Y∗ = K∗ − K∗ and h∗ = g∗ − q∗ with g∗, q∗ ∈ K∗. Since {g∗(ϕ (tn))} and {q∗(ϕ (tn))} are Cauchy sequences, there exist N1, N2 such that for \(n,m > \bar {N}:=\max (N_{1}, N_{2})\), we have

$$|g^{\ast}(\phi(t_{n}))- g^{\ast}(\phi(t_{m}))| \le \frac \varepsilon 2 \quad \text{ and }\quad |q^{\ast}(\phi(t_{n}))- q^{\ast}(\phi(t_{m}))| \le \frac \varepsilon 2. $$

For \(n, m> \bar {N}\), we have

$$\begin{array}{@{}rcl@{}} |h^{\ast}(\phi(t_{n}))-h^{\ast}(\phi(t_{m}))|&=&|g^{\ast}(\phi(t_{n}))-q^{*}(\phi(t_{n}))-g^{\ast}(\phi(t_{m}))+q^{\ast}(\phi(t_{m}))|\\ &\le& \frac \varepsilon 2 + \frac \varepsilon 2=\varepsilon. \end{array} $$

We show that ϕ (t) weakly converges when t → 0+, i.e., there is y0 ∈ Y such that for arbitrary tn ↓ 0, we have

$$\lim\limits_{n\rightarrow \infty} y^{\ast}(\phi(t_{n})) = y^{\ast}(y_{0}) \quad \text{ for any } y^{\ast} \in Y^{\ast} $$

which is equivalent to

$$ \phi(t)\rightharpoonup y_{0} \quad\text{ when } t\rightarrow 0^{+}. $$
(9)

Since Y is weakly sequentially complete, we need only to show that y0 is the same for all sequences {tn}, tn ↓ 0. On the contrary, suppose that there are two different weak limits \({y_{0}^{1}}\), \({y_{0}^{2}}\) corresponding to sequences \({t_{n}^{1}}\) and \({t_{n}^{2}}\), respectively.

We can extract subsequences \(\{\bar {t}_{n}^{2}\}\subset \{{t_{n}^{2}}\}\) and \(\{\bar {t}_{n}^{1}\}\subset \{{t_{n}^{1}}\}\) such that \(\bar {t}_{n}^{2} \le {t_{n}^{1}} \le \bar {t}_{n}^{1}\). Correspondingly,

$$y^{\ast}(\phi(\bar{t}_{n}^{2}))\le y^{\ast}(\phi({t_{n}^{1}}))\le y^{\ast}(\phi(\bar{t}_{n}^{1})) $$

which, letting n → ∞, proves that \({y_{0}^{1}}={y_{0}^{2}}\).

Now, we show that the mapping Φ(t) := ϕ (t) − y0 satisfies all the assumptions of Lemma 1. In view of (3) and (9), it is enough to show that Φ(t) ∈ K for all t > 0.

By contradiction, let us assume that there is some \(\bar {t}>0\) such that \({\Phi }(\bar {t})\notin K\). Since K is closed and convex, there exists y∗ ∈ K∗ such that

$$ y^{\ast}({\Phi}(\bar{t}))=y^{\ast}(\phi(\bar{t})-y_{0})<0. $$
(10)

From inclusion (3) in Proposition 1, we have

$$\phi(\bar{t})-y_{0}-\phi({t})+y_{0} + C\frac{\alpha(t)}{t}k_{0}\in K \quad \text{ for all } t\in (0,\bar{t}). $$

In particular

$$y^{\ast}(\phi(\bar{t})-y_{0}) \ge y^{\ast}\left( \phi({t})-y_{0}-C\frac{\alpha(t)}{t}k_{0}\right)\quad \text{ for all } t\in (0,\bar{t}). $$

Together with (10), this gives

$$0>y^{\ast}(\phi(\bar{t})-y_{0})\ge y^{\ast}\left( \phi({t})-y_{0}-C\frac{\alpha(t)}{t}k_{0}\right)\quad \text{ for all } t\in (0,\bar{t}). $$

Then, by letting t → 0+, we get a contradiction with (9). Hence Φ(t) ∈ K for all admissible t > 0 and, by Lemma 1, ∥Φ(t)∥ → 0 when t → 0+. Since \(\lim _{t\rightarrow 0^{+}}\frac {\alpha (t)}{t}= 0\), we get

$$\lim\limits_{t\rightarrow 0^{+}} \frac{F(x_{0}+th)-F(x_{0})}{t}=y_{0} $$

which completes the proof. □

Remark 1

For K-convex mappings F, i.e., strongly α (⋅)-K paraconvex mappings with constant C = 0, Theorem 2 can be found in [12].